“The science of implicit bias is rather complex—much more complex than suggested by the dominant polarized views in the public discourse. The current volume is unique for embracing this complexity in answering broad philosophical questions about implicit bias. A highly accessible must-read for everyone interested in a nuanced view on the science of implicit bias and its significance for society.” Bertram Gawronski, Professor of Psychology, University of Texas at Austin, USA

“This is an absolutely fantastic, much-needed book. People seeking a textbook will find wonderfully accessible writing on an important issue that is bound to really draw students in. But this isn’t just a textbook: the essays are written by top scholars, absolutely current with the latest scholarship in this fast-moving field. Even those who are themselves experts in the area will gain much from reading this volume, which tackles some of the most difficult issues in the area in a careful, fair-minded manner.” Jennifer Saul, University of Waterloo, Canada
An Introduction to Implicit Bias
Written by a diverse range of scholars, this accessible introductory volume asks: What is implicit bias? How does implicit bias compromise our knowledge of others and social reality? How does implicit bias affect us, as individuals and participants in larger social and political institutions, and what can we do to combat biases? An interdisciplinary enterprise, the volume brings together the philosophical perspective of the humanities with the perspective of the social sciences to develop rich lines of inquiry. Its twelve chapters are written in a non-technical style, using relatable examples that help readers understand what implicit bias is, its significance, and the controversies surrounding it. Each chapter includes discussion questions and additional annotated reading suggestions, and there are teaching resources available as an eResource. The volume is an invaluable resource for students—and researchers—seeking to understand criticisms surrounding implicit bias, as well as how one might answer them by adopting a more nuanced understanding of bias and its role in maintaining social injustice.

Erin Beeghly is Assistant Professor of Philosophy at the University of Utah. She has received fellowships from the National Humanities Center, the National Endowment for the Humanities, and the American Council of Learned Societies.

Alex Madva is Assistant Professor of Philosophy and Director of the California Center for Ethics and Policy at California State Polytechnic University, Pomona. He has run numerous workshops and training sessions on implicit bias, stereotype threat, and impostor syndrome for schools, courts, and wider audiences.
An Introduction to Implicit Bias
Knowledge, Justice, and the Social Mind
Edited by Erin Beeghly and Alex Madva
First published 2020 by Routledge 52 Vanderbilt Avenue, New York, NY 10017 and by Routledge 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN Routledge is an imprint of the Taylor & Francis Group, an informa business © 2020 Taylor & Francis The right of Erin Beeghly and Alex Madva to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers. Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Library of Congress Cataloging-in-Publication Data A catalog record for this title has been requested ISBN: 978-1-138-09222-8 (hbk) ISBN: 978-1-138-09223-5 (pbk) ISBN: 978-1-315-10761-5 (ebk) Typeset in Sabon by Taylor & Francis Books
Contents
List of illustrations
List of contributors
Acknowledgements

Introducing Implicit Bias: Why This Book Matters
ERIN BEEGHLY AND ALEX MADVA

1 The Psychology of Bias: From Data to Theory
GABBRIELLE M. JOHNSON

2 The Embodied Biased Mind
CÉLINE LEBOEUF

3 Skepticism About Bias
MICHAEL BROWNSTEIN

4 Bias and Knowledge: Two Metaphors
ERIN BEEGHLY

5 Bias and Perception
SUSANNA SIEGEL

6 Epistemic Injustice and Implicit Bias
JULES HOLROYD AND KATHERINE PUDDIFOOT

7 Stereotype Threat, Identity, and the Disruption of Habit
NATHIFA GREENE

8 Moral Responsibility for Implicit Biases: Examining Our Options
NOEL DOMINGUEZ

9 Epistemic Responsibility and Implicit Bias
NANCY ARDEN MCHUGH AND LACEY J. DAVIDSON

10 The Specter of Normative Conflict: Does Fairness Require Inaccuracy?
RIMA BASU

11 Explaining Injustice: Structural Analysis, Bias, and Individuals
SARAY AYALA-LÓPEZ AND ERIN BEEGHLY

12 Individual and Structural Interventions
ALEX MADVA

Glossary
Index
Illustrations
Figures

4.1 “Tokio Kid.” American World War II propaganda
9.1 Two paths to create the space for alternative epistemic viewpoints: 1. the development of epistemic virtues and 2. the experience of epistemic friction
10.1 Is this evidence?
10.2 Is this evidence? Take two.
10.3 The fairness and accuracy conflict. A (partial) map of analytical space
11.1 A social structure, depicted visually
Contributors
Saray Ayala-López is Assistant Professor of Philosophy at Cal State Sacramento. Ayala-López’s main interests are in the philosophy of science, philosophy of mind, and feminist philosophy, with special interest in explanations, conversational dynamics, cognitive externalism, social ontology, sexual orientation, and the sex/gender distinction in science.

Rima Basu is Assistant Professor of Philosophy at Claremont McKenna College. Basu’s research—at the intersection of ethics, epistemology, and philosophy of race—examines the question of how we ought to relate to one another, focusing on the epistemic dimensions of wrongs such as racism.

Erin Beeghly is Assistant Professor of Philosophy at the University of Utah. Beeghly’s research lies at the intersection of feminist theory, social epistemology, and ethics, with a special focus on stereotyping, discrimination, and group oppression.

Michael Brownstein is Associate Professor of Philosophy at John Jay College and the Graduate Center, CUNY. His monograph—The Implicit Mind: Cognitive Architecture, the Self, and Ethics—was published by Oxford University Press in 2018, and in 2016 he co-edited (with Jennifer Saul) a two-volume series, Implicit Bias and Philosophy, also published by Oxford.

Lacey J. Davidson has a PhD from the Department of Philosophy at Purdue University and, currently, is a Visiting Assistant Professor at California Lutheran University. Davidson’s areas of specialization are philosophy of race, philosophy of mind, and social epistemology.

Noel Dominguez is a PhD Candidate in the Department of Philosophy at Harvard University. Dominguez’s research focuses on relational normativity, moral responsibility, and joint action.

Nathifa Greene is Assistant Professor of Philosophy at Gettysburg College. Greene researches social and political philosophy, mainly in feminist theories, non-ideal ethics, and decolonial thought. Her current book project is an ethical analysis of habit in liberatory social practices.
Jules Holroyd is a Lecturer in the Philosophy Department at the University of Sheffield. Holroyd’s research interests are in moral psychology, political philosophy, and feminist philosophy. Her recent research examines the ways in which we are implicated in social injustices and the ways in which we might sustain them, including unwittingly.

Gabbrielle M. Johnson is a Bersoff Faculty Fellow at the Department of Philosophy at New York University and will be an Assistant Professor at Claremont McKenna College after that. Johnson’s research interests are in the philosophy of psychology (particularly perception and social cognition), philosophy of mind, philosophy of science, and philosophy of language.

Céline Leboeuf is Assistant Professor of Philosophy at Florida International University. Leboeuf’s research lies at the intersection of continental philosophy, feminist philosophy, and the critical philosophy of race.

Nancy Arden McHugh is Professor and Chair of the Philosophy Department at Wittenberg University. She is the author of The Limits of Knowledge: Generating Pragmatist Feminist Cases for Situated Knowing (SUNY Press 2015) and Feminist Philosophies A-Z (Edinburgh University Press 2007), as well as articles in feminist philosophy of science and epistemology.

Alex Madva is Assistant Professor of Philosophy and Director of the California Center for Ethics & Policy at Cal Poly Pomona. He has run numerous workshops and training sessions on implicit bias, stereotype threat, and impostor syndrome for schools, courts, and wider audiences.

Katherine Puddifoot is Assistant Professor in the Philosophy Department at Durham University. Puddifoot’s main research interest is in what should be said about the epistemic status of seemingly irrational thoughts, such as those involved in stereotyping, implicit bias, and distorted memory beliefs.

Susanna Siegel is Edgar Pierce Professor of Philosophy at Harvard University. Her books The Contents of Visual Experience (2010) and The Rationality of Perception (2017) were both published by Oxford University Press.
Acknowledgements
The editors thank our contributors for their hard work in helping to create this book. We are also grateful to the following persons for their labor and thoughtfulness along the way: Helen Beebee, Endre Begby, Nora Berenstain, Nick Byrd, Lindsay Crawford, Steven Downes, Melinda Fagan, Stacey Goguen, Grace Helton, Dan Kelly, Adam Nahari, Helen Ngo, Joshua Rivkin, Tony Smith, Natalia Washington, Jada Wiggleton-Little, and Robin Zheng. Erin Beeghly began work on this introduction while on fellowship at the National Humanities Center in Durham, North Carolina. Contributor Katherine Puddifoot acknowledges the support of the European Research Council under the Consolidator grant agreement number 616358 for a project called Pragmatic and Epistemic Role of Factually Erroneous Cognitions and Thoughts (PERFECT). A four-part conference series (also organized with Jules Holroyd) called “Bias in Context: Psychological and Structural Explanations of Injustice” helped us find our contributors. Funders for the conference series included the College of Letters, Arts, and Social Sciences at Cal Poly Pomona, the University of Sheffield, the College of Humanities at the University of Utah, the Mind Association, the Analysis Trust, and the Society for Applied Philosophy. Finally, we thank Andrew Beck for proposing this volume and everyone at Routledge for supporting its completion.
Introducing Implicit Bias
Why This Book Matters
Erin Beeghly and Alex Madva
Why do we do what we do? What makes humans tick? We often like to think that our actions are shaped by our choices, and our choices are mostly shaped by careful and well-thought-out deliberations. However, research from the social and cognitive sciences suggests that a wide range of automatic habits and unintentional biases shape all aspects of social life. Imagine yourself walking through a grocery store. The smaller the floor tiles, the slower people tend to walk. The slower people walk, the more they buy. Hidden biases like this are well known to marketers and consumer psychologists. Yet store shoppers do not notice the ways in which the flooring affects their walking patterns or spending decisions (e.g. Van Den Bergh et al. 2016; see also Brownstein 2018). Examples such as this are only the tip of the iceberg. Increasingly, social scientists cite “implicit” or “automatic” mental processes to explain persistent social inequities and injustices in a broad range of contexts, including educational, corporate, medical, and informal “everyday” settings. Implicit biases have been invoked to explain heightened police violence against black US citizens, as well as subtle forms of discrimination in the criminal justice system and the underrepresentation of women and people of color in the workplace. Across the globe, employees are now required to attend implicit bias trainings, where they learn about these biases and their detrimental effects. A popular buzzword, “implicit bias” was even discussed by Hillary Rodham Clinton and Donald Trump during the 2016 US Presidential debates (Johnson 2016). At the same time, critics of implicit bias research raise red flags. A key challenge—voiced by theorists of race, gender, disability, and economic inequality—maintains that the focus on individual psychology is at best irrelevant to, and at worst obscures, the more fundamental causes of injustice, which are institutional and structural (Ayala-López 2018; Haslanger 2015; Payne and Vuletich 2017). Consider racial segregation in housing. To this day, neighborhoods in the United States are often racially segregated (for visualization of census data, see Cable 2013). One could try to explain the phenomenon by citing individual preferences, choices, or beliefs, including racist beliefs, or a preference, perhaps common among people of color, to avoid discrimination from other racial groups (Feagin and Sikes 1995). But such explanations may
cover up the most important parts of the story. Throughout the twentieth century, the US government created and maintained racial segregation through federal (as well as local and state) policy and laws. US courts legally upheld policies that promoted residential segregation well into the 1970s, as did local police. The issue is this. If we center our explanation on individuals’ beliefs and preferences regarding where to live, we have offered an explanation that hides the deepest causes of racial segregation and, in doing so, obscures our government’s complicity in injustice (Anderson 2010; Shelby 2016; Rothstein 2018). This is not the only pressing criticism on the table. Some have argued that traditional psychology mistakenly characterizes biases as entirely “in the head” whereas biases may instead be actively embodied and socially constituted (Ngo 2019). Other theorists have expressed doubts about the quality of the scientific research on implicit bias and the power of implicit bias to predict real-world behavior (Hermanson 2017; Oswald et al. 2013; Singal 2017). Criticism comes from all sides. On one side of the political spectrum, implicit bias research is perceived as nothing more than a way for liberals to justify their political agenda by claiming that it’s backed by science—even though the underlying science is, these critics allege, flimsy (Mac Donald 2016; 2017). On the opposite side of the political spectrum, implicit bias research is perceived as an evasion of the fact that old-fashioned, explicit bigotry never went away (Blanton and Ikizer 2019; Haslanger 2015; Lauer 2019). Administrators only fund implicit bias trainings, this criticism goes, because people who run these trainings assuage participants’ guilt and make them feel good. (“You’re not racist. Even good people are implicitly biased!”) Moreover, by paying lip service to the value of equality, they protect institutions against lawsuits alleging discrimination and sidestep more impactful changes to their workplaces. In this historical context, the present volume was conceptualized. As editors, our ambitions are big. Teaching and outreach are part of our mission. In our classrooms, we found ourselves wanting accessible resources, in particular, readings that could be used in contexts where not everyone had an in-depth background in social psychology, political theory, or philosophy of mind. Much of the existing literature on implicit bias is incredibly technical. Theorists tend to explore the implications of implicit bias for sophisticated, longstanding debates regarding, for example, moral responsibility or the cognitive architecture of the mind. Others try to persuade readers that implicit biases are important to specific debates within, for instance, the nature of perception. We ourselves have written technical papers like this! However valuable, these writings often miss the mark when it comes to introducing students to bias and its implications. We also found that more accessible discussions of implicit bias ended up sacrificing accuracy and rigor. What we wanted for our students was the best of both worlds: conceptually rigorous, empirically informed work written in a non-technical and accessible style, using engaging examples. We also wanted our students to have a big-picture sense of why bias matters, a goal
that we thought would be best served by a collection that covered a broad swath of conceptual, ethical, and political terrain. The text we envisioned did not shy away from controversy. We needed our students to understand and reflect on the criticisms surrounding implicit bias, as well as how one might answer them by adopting a less simplistic understanding of implicit bias and its role in maintaining injustice. With these goals in mind, we sought out contributors. All the authors in this volume are philosophers by training. They are also deeply immersed in the psychological and social-scientific literature on bias, and their chapters have been carefully and anonymously peer-reviewed by other experts in their respective fields. Some specialize in the philosophy of mind and cognitive science. Some work in epistemology—the branch of philosophy that deals with knowledge. Some are ethicists. Others work in political and social theory. Each has their special area of expertise and methodological commitments, and together they represent a wide array of social backgrounds. Yet all of our authors share something in common. They write in a non-technical style, using relatable examples that help readers better understand what implicit bias is, its significance, and the controversies surrounding it. Think of this volume as a guide to the territory. It is interdisciplinary through and through: bringing the philosophical perspective of the humanities together with the perspective of the social sciences. Each chapter focuses on a key aspect of bias. A glossary at the volume’s end defines important terminology. Web resources are available, which provide food for thought and connections to wider cultural resources, including podcasts and movies. For those who want to dive deeper into the philosophical or empirical research, our authors have provided suggestions for additional reading at the end of each chapter. For teachers who want to use this volume in educational settings, we also offer Discussion Questions. We want this book to be useful to you, no matter who you are and what you want out of it. To that end, the book is organized into three parts. Part 1 explores what implicit bias is. It also examines a variety of critical arguments that suggest implicit bias does not really exist, or does not help to explain social injustice, or is based on shoddy scientific evidence. Part 2 focuses on questions surrounding knowledge and bias. One of the key aims is to document the myriad ways in which biases impede knowledge. However, it also broaches the question of whether biases track truth and are reliable in some cases. Part 3 examines key practical, ethical, and political questions surrounding implicit biases. In this part of the book, our authors build on rich traditions of scientific research to ask questions that go beyond psychology, or that don’t lend themselves to straightforward scientific investigation. For example, they ask whether—and how—we can be held responsible for our implicit biases, and how we ought to structure society to combat implicit biases.
Significantly, no one in this volume argues that implicit bias is somehow the most important target for social-justice efforts. However, what emerges throughout these chapters are the complex ways that implicit bias connects to other issues. Return to the example of racial segregation in the United States. While institutional tools—including federal law, the criminal justice system, and local police—were consistently deployed throughout the twentieth century to create and maintain residential segregation, one ought to ask why lawmakers, police, and many of the people whom they served supported white-only residential spaces in the first place. Who created these policies? How were they responsive to market pressures, if at all? Who pushed back against them? Individual psychology has to be invoked if these questions are to be answered fully (e.g. Alexander 2012; Enos 2017; Lopez 2017; Payne et al. 2010; Pettigrew 1998). One could never get the whole story about why segregation succeeded for so long—and persists even today—unless one talks about individuals’ beliefs, desires, aversions, preferences, and so on. Segregation fed and was fed by biases, many of which were explicit prejudices but some of which were subtler, a fact to which Martin Luther King Jr.’s criticisms of white moderates attest (King Jr. 1963). Implicit bias is thus one piece of a complex puzzle. Another way to highlight how implicit bias is one piece of a complex puzzle is to consider how it relates to explicit bias. One concern about implicit bias research is that it is out of touch with current social and political realities, which have witnessed a surge in intergroup political division and openly endorsed bigotry, including violent, white supremacist rallies and mass shootings in the United States as well as far-right, neo-Nazi parties gaining ground worldwide. Readers might understandably look at these trends and ask: why are we still talking about implicit bias? However, implicit bias research is surprisingly well-positioned to help us understand what’s going on (Madva 2019). First, consider findings from developmental psychology. Researchers find that infants and young children form social biases very early on, and even tend to openly report racial preferences through the age of six (Dunham et al. 2008). Ten-year-olds, however, become less likely to report explicit biases, while adults are much less likely still. Meanwhile, overall patterns of implicit bias tend to remain stable all the way from childhood to adulthood. These patterns suggest that children form explicit biases early on, but then gradually learn that these biases are wrong, and not OK to say out loud. Seen in this light, implicit biases are like a “residue” left over in people raised to endorse anti-prejudiced values even while they are immersed in a broadly prejudiced society (Payne et al. 2019). In adulthood, however, this implicit residue of prejudice continues to affect behavior—and comes to function like a “sleeper cell” of bias, waiting to be activated when social norms change. Studies find that when you plunk an implicitly biased person into a social context where authority figures and peers promote prejudiced norms and values, then their implicit biases become explicit once again (Crandall et al. 2002; Lee et al. 2017; White and Crandall 2017). It takes very little, it turns out, for the implicit to bubble up
into the explicit, and for suppressed prejudices to become openly endorsed and acted upon. In other words, explicit bias feeds and is fed by implicit bias (Gawronski and Bodenhausen 2006; De Houwer 2019). One wouldn’t appreciate complexities like this from the simplistic portrayals of implicit bias one finds in popular media, but the sophisticated studies being published at a rapid pace in top social-science journals have much to say about unfolding social and political events. When you dig deep into the nature of implicit and explicit biases, part of what emerges is their interconnections, and the fuzzy boundaries between them. In this vein, some of the chapters in this volume will consider bias in general, rather than focus solely on its more implicit forms. What can you expect when you read this volume, more specifically? In the rest of this introduction, we’ll provide a thumbnail sketch of each chapter. If you don’t like spoilers, you can skip this section. However, if you’d like a more concrete sense of what our authors will be discussing in each chapter and the scope of concerns covered in this volume, keep reading.
Part 1

Knowing what to do requires knowing what we are up against. So figuring out how to deal with implicit bias requires a better understanding of what it is. Accordingly, Part 1 of this volume introduces readers to theoretical accounts of implicit bias. These accounts serve as key reference points for the moral and political questions raised in subsequent chapters. The first chapter analyzes implicit bias from a traditional psychological perspective. How do implicit biases fit into our understanding of the mind? The second chapter broadens the discussion to ask how implicit bias may be understood in a more holistic way, namely, as residing not just “inside our minds” but in our physical bodies, habits, and social practices. The third chapter explores—and aims to answer—various skeptical arguments to the effect that implicit biases don’t exist and, if they do, are not particularly helpful in understanding injustice.

Chapter 1—The Psychology of Bias: From Data to Theory

In Chapter 1, Gabbrielle Johnson introduces readers to leading psychological theories of implicit bias. According to one model, implicit biases are automatic, relatively unconscious mental associations. For example, you hear “salt,” then think “pepper.” You think “woman,” then think “mother.” According to a second model, implicit biases are unconscious beliefs. To evaluate these models, Johnson asks what makes for a good psychological explanation, which would illuminate how the mind works. Next, Johnson explores the tools psychologists use to study the mind, and, in particular, the contrast between direct and indirect measures of people’s attitudes. Direct (or explicit) measures ask people to report their
beliefs and feelings openly (for example: “how much pain are you feeling, on a scale of 1 to 10?”). Indirect (or implicit) measures, such as the Implicit Association Test, instead aim to get at people’s attitudes without their reporting them. Indirect measures have been pivotal for advancing our knowledge of implicit bias. These measures reveal one of the most striking features of implicit biases, which is that they can come apart, or diverge, from our explicit beliefs and values. For example, a person might express a sincere commitment to treating members of all racial groups equally but nevertheless demonstrate subtle racial biases on an indirect measure. How should we understand the negative “gut reactions” or “snap judgments” that drive performance on these measures? What kinds of theories can explain their divergence from our explicit beliefs? Additionally, Johnson examines how emerging research upends common-sense thinking about implicit bias. Originally, implicit biases were thought to be deeply ingrained products of our upbringing that would be difficult or impossible to change. However, new evidence reveals that in some contexts, our implicit biases are surprisingly easy to change. What kinds of theories can explain why biases change when they do, and why they don’t change when they don’t? Answers to these abstract, theoretical questions promise serious practical payoffs, as subsequent authors in this volume explore.

Chapter 2—The Embodied Biased Mind

Implicit bias is often framed in individualistic, psychological terms. The framing is not surprising given that cognitive and social psychologists have been at the forefront in theorizing the phenomenon. Yet the dominant way of understanding bias has a problem: it creates the impression that bias exists exclusively “in the head” of individuals. In Chapter 2, Céline Leboeuf explores a more holistic way of understanding what bias is. Drawing on the work of Maurice Merleau-Ponty, she argues that implicit biases can be thought of as perceptual habits. Habits are learned behaviors, which are realized by—and necessarily depend on—our bodies. If so, implicit bias would consist in bodily habits, rather than mental activity per se. An embodied view of bias is helpful. For example, it would go a long way towards explaining why implicit biases are often experienced as automatic and beyond conscious awareness. Just as we don’t have to consciously think about how to position our hand to turn off a light switch in a familiar room—our hand instinctively and thoughtlessly goes there by habit—one needn’t consciously think about habitual ways of seeing others. For example, in well-known experiments, young black men are perceived as acting more aggressively than young white men, even though their behavior is identical. People who see young black men this way act from habit. They manifest a tendency to look at and pay attention to young black men in specific ways,
and hence to engage and interact with them in particular ways. The pervasiveness of such biases is no accident. As Leboeuf notes, we pick up the habits we do in social environments, where norms and expectations about different kinds of people are communicated to us in subtle and not so subtle ways. Biases thus reflect inequalities and norms in society at large. Her analysis also reveals—through discussion of sociologist Pierre Bourdieu’s work on “the habitus”—how widespread biases might impact how we navigate the social world in ways that reflect and entrench group hierarchy. One upshot of the embodied approach is that we need to literally retrain ourselves and develop better habits if we aim to create a more just world (see also McHugh and Davidson, Chapter 9, “Epistemic Responsibility and Implicit Bias,” and Alex Madva, Chapter 12, “Individual and Structural Interventions”).

Chapter 3—Skepticism About Bias

In Chapter 3, Michael Brownstein raises and responds to six of the most prominent criticisms of implicit bias. The first, big-picture objection he considers is that the inequalities supposedly related to implicit bias either don’t exist at all (for example, maybe police officers treat whites and blacks just the same) or aren’t truly unfair (for example, maybe police officers arrest blacks more often than whites because blacks commit more crimes). The second overarching objection begins by acknowledging that these inequalities are both real and really unfair, but then counters that bias has little or nothing to do with them. A third objection grants that inequalities exist, that they’re unfair, and that they’re partly explained by bias—but then objects that the operative biases are explicit rather than implicit. Coming from another direction, a fourth objection—which is taken up in much greater depth in Chapters 11 and 12—is that the primary drivers of inequality are related to institutions and structures (like segregation, see above) rather than bias. Next, Brownstein steps back from these controversies about the explanatory power of implicit bias to consider a fifth criticism, which argues that implicit bias research has been largely “overhyped.” Here the objection has to do with the ways scientists and journalists have communicated implicit bias findings to the general public. Finally, a sixth criticism considered by Brownstein is that the basic tools social scientists have developed to study implicit bias (like the Implicit Association Test) are foundationally flawed, and that we’ll need better methods to measure minds before we can fully appreciate the roles that bias and injustice play in social life. Brownstein helpfully focuses on notable representatives of each criticism, and responds to each, one by one. Along the way, he frequently grants that the critics are at least partly right, and that many outstanding challenges and unknowns remain. The upshot of these challenges, however, is not to give up on implicit bias research altogether, but to keep improving the research.
In other words, it’s a call to action. Researchers, scholars, and activists must redouble their efforts to understand what implicit bias is, how best to measure it, and ultimately how best to overcome it.
Part 2

What is the relationship between bias and knowledge? Part 2 explores this question. One of its key aims is to document ways in which biases can frustrate knowledge. However, it also asks whether biases could ever track truth and be reliable in some cases.

Chapter 4—Bias and Knowledge: Two Metaphors

In Chapter 4, Erin Beeghly investigates two metaphors used to characterize the relationship between bias and knowledge: bias as a kind of fog that surrounds us and bias as a kind of shortcut for forming beliefs about the world. She argues that these two metaphors point to a deep disagreement among theorists about whether biases can help us be more reliable knowers in some cases. They also help us to better understand the range of knowledge-related concerns about bias. Examining these metaphors, Beeghly observes that biased judgments are motivated by stereotypes. One objection to biased judgments and perceptions is, therefore, that stereotypes are false and based on inadequate evidence. She argues that this objection will not always apply in cases of implicit bias due to the fact that group stereotypes—which are associated with group generalizations—may be true or accurate in some cases, at least to some degree. “Doctors wear white coats” and “Women are empathetic” are potential examples. Even so, she argues, bias may compromise knowledge in other ways. Drawing on the psychological and behavioral-economic literature on heuristics and biases, she outlines a number of ways in which biased judgments—even if grounded in accurate views of groups—may be unreliable. Her analysis suggests that a unified theory of how biases impede knowledge is unlikely. Biased judgments may frustrate knowledge in a range of different ways and for different reasons. A second theme in her analysis is the positive epistemic function of biases. If bias is like a fog, it necessarily stops us from seeing other people and the world clearly. However, if biases are shortcuts, they will sometimes be an efficient, effective mode of forming beliefs or expectations about individuals. Some biases may even track truth. Beeghly supplements this observation with what she calls “the argument from symmetry.” The argument from symmetry says that biases—both implicit and explicit—reflect a more general feature of human cognition, namely, our tendency to carve up the world into categories and form expectations about individuals based on how we classify them. No one is tempted to say that category-based reasoning is always bad for knowledge. Differentiating things into categories—distinguishing viruses
from bacterial infections, distinguishing edible mushrooms from poisonous ones, and distinguishing cats from dogs—is essential for knowing and navigating the world. So why would it always be bad to reason about social categories—to distinguish one racial or religious group from another? Beeghly suggests that the argument from symmetry articulates a challenge to theorists, namely, to more carefully delineate the conditions under which implicitly biased judgments are—and are not—reliable. Her analysis also raises the question whether ethical ideals could “raise the epistemic bar” when we interact with and think about other humans, forcing us to gather more and better evidence than would otherwise be required (as Basu argues in Chapter 10, “The Specter of Normative Conflict: Does Fairness Require Inaccuracy?”).

Chapter 5—Bias and Perception

In Chapter 5, Susanna Siegel explores how biases—both explicit and implicit—impact how we perceive others. She examines biased racial perception through three frames: cultural analysis, cognitive science, and epistemology. Each frame reveals something new and important about biased perception. Cultural analysis reveals “what it is like” to see others in a racialized way or to have such perceptions foisted upon you. Siegel reports the testimony of George Yancy, a black philosopher who describes walking through a department store. Yancy writes, “I feel that in their eyes [that is, in the eyes of white employees and shoppers] I am this indistinguishable, amorphous, black seething mass, a token of danger, a threat, a rapist, a criminal, a burden.” Siegel also describes the hypothetical example of Whit, a white person who grows up in an all-white town and who experiences men of color as suspicious. Turning to cognitive science, Siegel examines how empirical research sheds light on biased cognition and perception. In one study, research participants tended to misclassify benign objects like pliers as guns when black men were holding them. How do such mistakes arise? Siegel canvasses seven options. One option is that participants literally see the pliers as a gun, hence are subject to a visual illusion due to their biases. Another option is that participants correctly see the pliers as pliers but mistakenly push the button that indicates that they have seen a gun due to bias. In that case, their perception is accurate, but they act in a biased way by compulsion. As she explores different kinds of mistakes that could manifest in biased perception and action, Siegel returns to an issue raised in Part 1: are biases “mere associations” or beliefs? She argues that modeling biases as akin to ordinary beliefs is more plausible, given the empirical research. Finally, Siegel deploys the lens of epistemology. If biases take the form of beliefs, we can evaluate them in terms of whether they are justified or unjustified, and whether they advance or undermine our knowledge of the social world. She then raises a worry about the nature of perception, namely, that it can be “hijacked” by ill-founded, inaccurate outlooks on
others. When a person’s perception is hijacked by racist stereotypes, she notes, their experiences will seem reasonable to them, as demonstrated by cultural analysis. For example, the shoppers who look at Yancy with suspicion will see their worries about him as justified. To them, he really does look like he is up to no good. However, such visual experiences are unreasonable. People should, in these cases, not believe what they see. Her conclusion cuts against the common view that visual perception is one of the most reliable forms of knowledge—which has profound implications for policing and law.

Chapter 6—Epistemic Injustice and Implicit Bias

Each one of us has the ability to produce knowledge and contribute to collective inquiry. These knowledge-producing abilities are part of what gives our life meaning and makes it valuable. In Chapter 6, Jules Holroyd and Katherine Puddifoot use this observation as a springboard. Here is what they say: because our knowledge-generating abilities are connected to our moral worth as individuals, we can wrong other people by treating them in ways that are disrespectful of their status as knowers. Such wrongs are called “epistemic injustices.” Using the film Hidden Figures, which charts the key contributions and unjust experiences of black women in NASA’s early space program, Holroyd and Puddifoot discuss five forms that bias-driven epistemic injustice can take. Testimonial injustice, for example, occurs when people are believed less than they otherwise would be due to pernicious group stereotypes (e.g., when the police don’t believe you because you’re black). Epistemic appropriation occurs when people do not get adequate credit for the ideas they produce due to unfair power dynamics, which often involve bias (e.g., “he-peating,” when a woman puts forward an idea at a meeting and no one responds, but then a man repeats the same idea later and gets credit for it). Epistemic exploitation occurs when members of marginalized groups bear the burden of educating members of dominant groups about the injustices they face (e.g., when white people expect their black acquaintances to explain what’s wrong with dressing up in blackface, rather than just looking up the answers themselves). According to Holroyd and Puddifoot, injustices such as these are ever-present in social life. This is to be expected, they argue: knowledge and power are inseparable. The project of fighting epistemic injustice must therefore be a collective enterprise.
Part 3

Building on Part 1’s discussion of the psychological nature of implicit bias, and Part 2’s discussion about implicit bias and knowledge, Part 3 turns to questions of morality and justice. What are the practical, ethical, and political implications of implicit bias?
Chapter 7—Stereotype Threat, Identity, and the Disruption of Habit

Implicit biases don’t just affect how we judge other people; they also affect how we see and judge ourselves. In this way, the phenomenon of implicit bias is closely related to stereotype threat. Roughly, stereotype threat occurs when being reminded of one’s social identity and the stereotypes associated with it (such as gender and racial stereotypes) leads to anxiety, alienation, and underperformance. In Chapter 7, Nathifa Greene investigates this phenomenon and introduces readers to different ways of understanding its importance. According to the standard view, stereotype threat occurs when a person fears that she will vindicate negative group stereotypes. For example, a woman might do worse on a math exam if her gender is made salient to her or if she is reminded of negative stereotypes like “women are bad at math.” In social psychology, researchers have attempted to document the effects of stereotype threat and hypothesize about what happens inside the mind of someone who experiences it. Greene points out numerous problems with the standard approach. First, empirical studies of stereotype threat have not always been replicated, and serious doubts exist about the robustness of empirical findings. Second, the standard view makes it seem as if people who experience stereotype threat are irrational: they shouldn’t doubt their abilities or themselves because they are fully capable, but they do. Third, the view implies that people are simply stereotyping themselves; if so, the standard account further harms victims of stereotype threat by suggesting that they are the root of the problem. Greene suggests an alternative view. In particular, she argues that stereotype threat primarily consists in a form of disruption, when an individual cannot just “be” in the world with their skills and habits, but gets knocked out of the “flow.” In this way, stereotype threat is similar to the experience of “choking” that athletic and artistic performers can experience. Drawing on the work of W.E.B. Du Bois and Frantz Fanon, Greene persuasively argues that knowledge of the phenomena related to stereotype threat existed long before social psychologists began to study it in the 1990s. Further, she suggests that the perspective of cognitive science sometimes hides rather than reveals the true nature of the phenomenon. From a first-person perspective, we see that stereotype threat is not irrational: it typically involves correctly perceiving that others—often, those in more socially privileged groups—are prone to think badly of you due to group stereotypes. Moreover, people suffer stereotype threat not because they stereotype themselves out of insecurity or anxiety but because negative stereotypes are foisted upon them in everyday social environments. From Greene’s analysis, a vision of a just society emerges in which people are able to seamlessly and safely navigate their world, and inhabit their bodies, without the imposition of others’ harmful implicit and explicit biases.
Chapter 8—Moral Responsibility for Implicit Biases: Examining Our Options

Are we responsible for our biases and how we act on them in everyday settings? Is it ever appropriate to blame or hold individuals accountable when their actions are subtly or not-so-subtly influenced by implicit biases? Answering these questions requires that we first step back to consider what makes people morally responsible in general. Why is anybody ever responsible for anything? In Chapter 8, Noel Dominguez surveys leading theories about the nature and necessary conditions for moral responsibility. He asks which verdicts each of these theories delivers about responsibility for implicit bias. Answering this question is difficult, he argues, because much about implicit bias remains unknown and our scientific “best guesses” keep evolving (as discussed in Part 1 of this volume). One leading theory of responsibility argues that we can only be held responsible for actions that we intentionally choose to do, that is, actions within our control. If I bump into you because somebody pushed me, then that’s out of my control, and it seems inappropriate to hold it against me. But if I bump into you because, well, that’s what I chose to do, then maybe I deserve your anger. Then the question for implicit bias becomes: are we enough in control of our biases to be responsible for them? Perhaps we can’t control them directly, in the moment, but maybe we can control them indirectly, by cultivating certain long-term habits or social policies (such as grading papers and reviewing job applications anonymously). Indirect strategies likely won’t work against all biases all the time, however. A second leading theory states that control is not necessary for responsibility, because what really matters is whether our actions reflect “deep” facts about our character traits and values. Consider one example introduced by Angela Smith (2005). If I forget my best friend’s birthday, then plausibly my failure to remember was not the result of any conscious, controlled choice—and yet it still seems appropriate for my friend to resent me for forgetting. Why might that be? Maybe my failure to remember was a result of not caring about my friend as much as I should, and so reflects a “deep” fact about who I am and what I value. Then the question for implicit bias becomes: do implicit biases reflect deep facts about who we are as individuals, or are they just superficial features of our minds, or generic biases that most folks in our culture absorb? Dominguez considers arguments on both sides. Ultimately, he thinks that it would be unfair to hold people responsible for every aspect of their deepest self. Dominguez concludes by arguing that research on implicit bias may force us to revise or revolutionize our understanding of moral responsibility altogether. To know whether such revisions are justified, he says, we must think harder about what we want a theory of moral responsibility to do.
Chapter 9—Epistemic Responsibility and Implicit Bias

A topic of special importance when it comes to responsibility and implicit bias is responsibility for knowledge. Are there strategies for becoming more responsible and respectful knowers? How might we work together to reduce the negative effects of bias on what we see and believe, as well as the wrongs associated with biases? In Chapter 9, Nancy Arden McHugh and Lacey J. Davidson explore these questions. Like Holroyd and Puddifoot in Chapter 6, “Epistemic Injustice and Implicit Bias,” they argue that adequately answering them requires thinking about responsibility as having both individual and collective dimensions. Their chapter begins with a discussion of moral responsibility for bias. They argue that typical accounts of responsibility—such as those discussed by Dominguez in Chapter 8, “Moral Responsibility for Implicit Biases: Examining Our Options”—tend to think of responsibility exclusively as an individual matter. They argue that individualistic approaches lead to puzzles that misleadingly suggest that we should not be held responsible for our biases and that implicit biases don’t belong to us. These puzzles disappear, they contend, if we recognize the collective dimensions of responsibility. They thus introduce the concept of epistemic responsibility, which they believe better tracks the social and collective aspects of responsibility. Epistemic responsibility, they explain, “is a set of habits or practices of the mind that people develop through the cultivation of some basic epistemic virtues, such as open-mindedness, epistemic humility, and diligence that help knowers engage in seeking information about themselves, others, and the world that they inhabit.” Achieving these virtues requires putting oneself in a larger social frame. Note the centrality of habits and practices (as emphasized in Leboeuf, Chapter 2, “The Embodied Biased Mind” and Greene, Chapter 7, “Stereotype Threat, Identity, and the Disruption of Habit”). Habits and practices are social in that they are acquired in the context of communities. Knowers always exist in a time and place, the view goes, and we acquire the habits we do partially in virtue of how we are raised. It follows that we need to work together to make our world one in which epistemic virtues like open-mindedness, creativity, and self-reflection can flourish. In a world that is highly unjust, this is a serious challenge. Nonetheless, McHugh and Davidson offer a way forward. Outlining four promising strategies for combatting bias-related epistemic vices, they argue that we can change our world—and ourselves—for the better. “World traveling,” they note, drawing on the work of María Lugones (1987), requires actively seeking out social situations that are outside your comfort zone and interacting with people who are different from you in order to challenge your own way of thinking. They also consider the strengths and weaknesses of practices like “calling in” and “calling out.” When you call someone in, you confront that person about their biased or otherwise insensitive behavior, but you do so in a way that aims to build a stronger relationship with them and
alerts them to the harm they have caused. The aim is restorative. They also consider whether punitive “call outs” are always counterproductive, or might sometimes have their place. Recommendations like this—and others—are explored in greater depth in Chapter 12 (Madva, “Individual and Structural Interventions”).

Chapter 10—The Specter of Normative Conflict: Does Fairness Require Inaccuracy?

A further question surrounding responsibility, knowledge, and bias is this: is it always morally wrong to rely on stereotypes when making judgments about other people? What if the stereotypes are sometimes accurate (as Beeghly suggests in Chapter 4, “Bias and Knowledge: Two Metaphors”)? Does judging people fairly mean that you must ignore what is most likely true about them given your evidence? While Chapters 8 and 9 (Dominguez, “Moral Responsibility for Implicit Biases: Examining Our Options”; McHugh and Davidson, “Epistemic Responsibility and Implicit Bias”) focus on responsibility and action, Chapter 10 turns to rationality and belief, exploring the relationship between knowledge and justice. Rima Basu asks whether research on social bias pits fairness against accuracy. One intuitive thought—which has been endorsed by Tamar Szabó Gendler (2011)—is that we are faced with a tragic dilemma in everyday situations because we live in an unjust world. Our evidence tells us that we should have certain beliefs about people, but ethical norms forbid it. Suppose, for instance, you are eating at a restaurant where 90 percent of employees are people of color and almost all restaurant patrons are white. If you see a person of color at the other end of the room, the statistical evidence apparently suggests that this person is an employee. On one hand, it seems like you should believe what the evidence tells you. On the other hand, it is wrong to stereotype someone based on race. Forming beliefs about someone based on their perceived race is unfair, after all, and it can be experienced as harmful. So you seem to face a hard choice. Ignore your evidence. Or, believe your evidence but judge someone unjustly. As Basu explains, there are good reasons to question whether cases like these truly present irresolvable dilemmas, and whether accuracy and fairness are diametrically opposed. Basu thus sketches and evaluates a range of alternative views. One theory, for instance, is that ethical norms are inherently superior to norms related to knowledge and, thus, take priority over them. After raising challenges for this theory and others, Basu suggests that ethical and epistemic norms ultimately work together rather than in opposition. In cases where the moral stakes are high, Basu argues, it is never appropriate to judge someone based merely on statistics or stereotypes. You always need more and better information, tailored to the individual. Her view suggests an exciting possibility, namely, that it is often—if not always—ethically and epistemically wrong to judge people based solely on probabilistic evidence, and even on reliable group generalizations.
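For readers who want to see the “statistical evidence” in the restaurant case laid out as explicit arithmetic, here is a minimal sketch. The chapter supplies only the 90 percent figure; the head counts and the Python rendering below are purely illustrative assumptions, not anything drawn from Basu’s text.

```python
# Toy illustration of the restaurant case (illustrative numbers only).
# Assume 10 employees, 9 of whom are people of color (the 90 percent
# figure), and 40 patrons, all but 1 of whom are white.

employees_of_color = 9
other_employees = 1
patrons_of_color = 1
white_patrons = 39

# Statistical evidence alone: among the people of color present in
# the room, what fraction are employees?
people_of_color_present = employees_of_color + patrons_of_color
p_employee_given_person_of_color = employees_of_color / people_of_color_present

print(p_employee_given_person_of_color)  # 0.9
```

On these made-up numbers, the purely statistical inference favors “employee” nine times out of ten, which is exactly what generates the apparent conflict: the evidence seems to license a belief that fairness seems to forbid.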
Chapter 11—Explaining Injustice: Structural Analysis, Bias, and Individuals

Several contributors to this volume stress that implicit bias and social injustice are neither best understood nor best overcome solely at the individual level. The critique is persuasive: the specific actions of implicitly and explicitly biased individuals are not the whole story about how the most egregious patterns of injustice and inequality arise and persist. Nor are they the only place to look when we think about potential ways forward. In Chapter 11, Saray Ayala-López and Erin Beeghly offer an in-depth analysis of these and related points. The chapter begins with two hypothetical examples of social injustice. Individualistic approaches understand injustices such as these as the result of individuals’ preferences, beliefs, and choices. For example, they explain racial injustice as the result of individuals acting on racial stereotypes and prejudices. Structural approaches, in contrast, appeal to the circumstances in which individuals make their decisions. For example, they explain social injustice in terms of beyond-the-individual features, including group dynamics, laws, institutions, city layouts, and social norms. Often these two approaches are seen as competitors. Framing them as competitors, Ayala-López and Beeghly argue, suggests that only one approach can win and that the loser offers worse explanations of injustice. Before they evaluate this claim, Ayala-López and Beeghly ask whether individualistic and structural approaches are as different as people often think. They argue that the answer is no. The best accounts of implicit and explicit bias, for example, see individual psychology as responsive to and as an expression of social structures. Hence explanations of injustice that cite individuals’ biases need not ignore the existence of social structures. Indeed, some bias-focused explanations suggest that structural factors provide the deepest explanations of why social injustices occur and persist. Likewise, structural accounts can—and often do—cite the actions of individuals when they explain injustice. If so, the two approaches are better seen not as diametrically opposed but as making claims about which explanatory factors should be prioritized. Working with this more nuanced picture, Ayala-López and Beeghly step back and explore criteria for comparing and evaluating the two approaches. Does one approach better identify what’s ethically troubling about injustice? Does one have a better story about why social injustices occur? And, does one approach provide superior strategies for changing our world for the better? Ayala-López and Beeghly argue that no one approach has the upper hand with respect to answering these questions in any and all contexts. If they are right, both approaches are needed to adequately understand and address injustice. They contend, further, that the two approaches can work together—synergistically—to produce deeper explanations of social injustice.
Chapter 12—Individual and Structural Interventions

Given all that we have learned about bias and injustice, what can we do—and what should we do—to fight back? In our volume’s final chapter, Alex Madva builds on the philosophical, psychological, and ethical insights of earlier chapters to make the case for a set of scientifically tested debiasing interventions—that is, interventions to combat implicit (and explicit) bias as well as promote a fairer world. Madva begins by noting a few of the leading obstacles to meaningful social change, stressing in particular the gaps in what we know, both about which concrete goals we should aim for and how best to pursue them. He illustrates our knowledge limitations by focusing on two case studies of persistent injustice that have proven difficult to overcome, namely, the gender pay gap and the challenges of reentering society for people getting out of prison. Madva shows how the seemingly obvious ways to fix these injustices sometimes do nothing, and sometimes make matters worse. He next draws several lessons from these case studies. One is that we must adopt an experimental mindset: we have to test out different strategies and see how they go, then go back to the drawing board, revise our strategies, and test them again. Another lesson is that, given how multifaceted these problems are, we have to adopt an equally multifaceted approach to solving them. This means, among other things, that we will need to make changes both as individuals (how can I do better?) and as communities (how can we do better?). With this ground cleared, Madva then turns to recommending concrete strategies. These range from small-scale, daily-life debiasing tricks, like perspective-taking, to large-scale societal transformations. Each step of the way, Madva teases out various moral, political, and strategic questions for these interventions, and points the reader to the outstanding unknowns about them. In doing so, he highlights the many gaps in our knowledge that future scientists, activists, artists, and frankly leaders of all kinds will have to fill. None of the large-scale transformations that Madva, and other authors in this volume, consider will be possible with the snap of a finger. They’ll only be brought about through a correspondingly large social movement. This volume is not a textbook for how to activate and sustain that movement. Yet some of the challenges caused by implicit and explicit bias in social life more generally will also arise in the context of social movements (Cortland et al. 2017; Jost et al. 2017; van Zomeren 2013). When we must build coalitions across boundaries of class, race, gender, sexuality, ability, and other dimensions of difference, we will be faced with biased people with whom we must work if we want our movement to succeed, and we will inevitably be forced to reckon with biases in ourselves. In such contexts, individuals have to find a way to build solidarity in the face of bias. There are no easy answers here. If nothing else, carefully thinking through the hard questions
discussed in this volume provides a step in the right direction. We hope it motivates readers to do their small part, in their own corner of the world, and to be ready to join that movement when it arises. Hint: the time is now!
REFERENCES
Alexander, M. (2012) The New Jim Crow: Mass Incarceration in the Age of Colorblindness. New York: The New Press.
Anderson, E. (2010) The Imperative of Integration (reprint edition). Princeton, NJ: Princeton University Press.
Ayala-López, S. (2018) A structural explanation of injustice in conversations: It's about norms. Pacific Philosophical Quarterly, 99: 726–748. https://doi.org/10.1111/papq.12244
Blanton, H. and Ikizer, E.G. (2019) Elegant science narratives and unintended influences: An agenda for the science of science communication. Social Issues and Policy Review, 13: 154–181. https://doi.org/10.1111/sipr.12055
Brownstein, M. (2018) The Implicit Mind: Cognitive Architecture, the Self, and Ethics. Oxford: Oxford University Press.
Cable, D. (2013) The Racial Dot Map. Weldon Cooper Center for Public Service. https://demographics.coopercenter.org/racial-dot-map [accessed 14 August 2019].
Cortland, C.I., Craig, M.A., Shapiro, J.R., Richeson, J.A., Neel, R., and Goldstein, N.J. (2017) Solidarity through shared disadvantage: Highlighting shared experiences of discrimination improves relations between stigmatized groups. Journal of Personality and Social Psychology, 113: 547–567. https://doi.org/10.1037/pspi0000100
Crandall, C.S., Eshleman, A., and O'Brien, L. (2002) Social norms and the expression and suppression of prejudice: The struggle for internalization. Journal of Personality and Social Psychology, 82: 359–378. https://doi.org/10.1037//0022-3514.82.3.359
De Houwer, J. (2019) Moving beyond System 1 and System 2: Conditioning, implicit evaluation, and habitual responding might be mediated by relational knowledge. Experimental Psychology, 66(4): 257–265. https://doi.org/10.1027/1618-3169/a000450
Dunham, Y., Baron, A.S., and Banaji, M.R. (2008) The development of implicit intergroup cognition. Trends in Cognitive Sciences, 12: 248–253. https://doi.org/10.1016/j.tics.2008.04.006
Enos, R.D. (2017) The Space Between Us: Social Geography and Politics. Cambridge: Cambridge University Press.
Feagin, J.R. and Sikes, M.P. (1995) Living with Racism: The Black Middle-Class Experience (reprint edition). Boston, MA: Beacon Press.
Gawronski, B. and Bodenhausen, G.V. (2006) Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132: 692–731. https://doi.org/10.1037/0033-2909.132.5.692
Gendler, T.S. (2011) On the epistemic costs of implicit bias. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 156: 33–63.
Haslanger, S. (2015) Social structure, narrative, and explanation. Canadian Journal of Philosophy, 45: 1–15.
Hermanson, S. (2017) Implicit bias, stereotype threat, and political correctness in philosophy. Philosophies, 2: 12. https://doi.org/10.3390/philosophies2020012
Johnson, J. (2016) Two days after the debate, Trump responds to Clinton's comment on implicit bias. Washington Post.
Jost, J.T., Becker, J., Osborne, D., and Badaan, V. (2017) Missing in (collective) action: Ideology, system justification, and the motivational antecedents of two types of protest behavior. Current Directions in Psychological Science, 26: 99–108. https://doi.org/10.1177/0963721417690633
King, M.L., Jr (1963) Letter from a Birmingham Jail. https://kinginstitute.stanford.edu/king-papers/documents/letter-birmingham-jail [accessed 6 January 2019].
Lauer, H. (2019) Implicitly racist epistemology: Recent philosophical appeals to the neurophysiology of tacit prejudice. Angelaki, 24: 34–47. https://doi.org/10.1080/0969725X.2019.1574076
Lee, K.M., Lindquist, K.A., and Payne, B.K. (2017) Constructing bias: Conceptualization breaks the link between implicit bias and fear of Black Americans. Emotion. https://doi.org/10.1037/emo0000347
Lopez, G. (2017) The past year of research has made it very clear: Trump won because of racial resentment. Vox. https://www.vox.com/identities/2017/12/15/16781222/trump-racism-economic-anxiety-study [accessed 17 December 2017].
Lugones, M. (1987) Playfulness, "world"-travelling, and loving perception. Hypatia, 2: 3–19.
Mac Donald, H. (2016) The War On Cops: How the New Attack on Law and Order Makes Everyone Less Safe. New York: Encounter Books.
Mac Donald, H. (2017) The false 'science' of implicit bias. Wall Street Journal.
Madva, A. (2019) Social psychology, phenomenology, and the indeterminate content of unreflective racial bias. In E.S. Lee (ed.), Race as Phenomena: Between Phenomenology and Philosophy of Race (pp. 87–106). Lanham, MD: Rowman & Littlefield International.
Ngo, H. (2019) The Habits of Racism: A Phenomenology of Racism and Racialized Embodiment. Lanham, MD: Lexington Books.
Oswald, F.L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P.E. (2013) Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies. Journal of Personality and Social Psychology, 105: 171–192. https://doi.org/10.1037/a0032734
Payne, B.K., Krosnick, J.A., Pasek, J., Lelkes, Y., Akhtar, O., and Tompson, T. (2010) Implicit and explicit prejudice in the 2008 American presidential election. Journal of Experimental Social Psychology, 46: 367–374. https://doi.org/10.1016/j.jesp.2009.11.001
Payne, B.K. and Vuletich, H.A. (2017) Policy insights from advances in implicit bias research. Policy Insights from the Behavioral and Brain Sciences. https://doi.org/10.1177/2372732217746190
Payne, B.K., Vuletich, H.A., and Brown-Iannuzzi, J.L. (2019) Historical roots of implicit bias in slavery. PNAS. https://doi.org/10.1073/pnas.1818816116
Pettigrew, T.F. (1998) Intergroup contact theory. Annual Review of Psychology, 49: 65–85. https://doi.org/10.1146/annurev.psych.49.1.65
Rothstein, R. (2018) The Color of Law: A Forgotten History of How Our Government Segregated America. New York and London: Liveright.
Shelby, T. (2016) Dark Ghettos: Injustice, Dissent, and Reform. Cambridge, MA: The Belknap Press of Harvard University Press.
Singal, J. (2017) Psychology's favorite tool for measuring racism isn't up to the job. Science of Us, New York Magazine.
Smith, A.M. (2005) Responsibility for attitudes: Activity and passivity in mental life. Ethics, 115: 236–271. https://doi.org/10.1086/426957
Van Den Bergh, B., Heuvinck, N., Schellekens, G.A.C., and Vermeir, I. (2016) Altering speed of locomotion. Journal of Consumer Research, 43: 407–428. https://doi.org/10.1093/jcr/ucw031
van Zomeren, M. (2013) Four core social-psychological motivations to undertake collective action. Social and Personality Psychology Compass, 7: 378–388.
White, M.H. and Crandall, C.S. (2017) Freedom of racist speech: Ego and expressive threats. Journal of Personality and Social Psychology, 113: 413–429. https://doi.org/10.1037/pspi0000095
1
The Psychology of Bias: From Data to Theory
Gabbrielle M. Johnson
What’s going on in the head of someone with an implicit bias? Attempts to answer this question have centered on two problems: first, how to explain why implicit biases diverge from explicit attitudes and second, how to explain why implicit biases change in response to experience and evidence in ways that are sometimes rational, sometimes irrational. Chapter 1 introduces data, methods, and theories to help us think about these questions. First, the chapter briefly outlines the features of good, explanatory psychological theories built on empirical data, and the pitfalls they must avoid. Next, it presents an overview of the empirical data relevant to two main questions: implicit-explicit divergence and rationality. Finally, it surveys the theories intended to provide psychological explanations for those empirical data, providing examples of each. The chapter ends with some summarizing reflections, and in particular it confronts the possibility that bias is in fact a mixed-bag of many different sorts of psychological phenomena, making one unified psychological explanation misplaced.
1 The Psychology of Bias: A First Pass

What's going on in the head of someone with an implicit bias? Often psychologists answer this question by saying such a person has an unconscious mental association. On this view, when we say someone has an implicit bias against the elderly, for example, we're saying they quickly, automatically, and unconsciously associate someone's being elderly with, say, that person's being frail, forgetful, or bad with computers. This view of implicit bias comes very naturally to us, as we're used to our minds making associations quickly, automatically, and without our conscious awareness. For example, when I say salt, you automatically think pepper; when I say hip, you think hop; when I say Tweedledee, you think … . You don't need to deliberate about what comes next; you just know (for more discussion, see Siegel, Chapter 5, "Bias and Perception").

These characteristics of the associationist picture may help us explain one of the most vexing aspects of implicit bias: divergence. Divergence occurs when our unconscious mental states differ, or diverge, from our consciously held mental states. Consider, for example, an individual
who, when asked, says women are just as capable of succeeding in leadership roles as men are but continues to act in ways that seem at odds with that sentiment. They might, for example, rate a male applicant for a leadership role more favorably than a female applicant with equally impressive credentials. In this case, we say that this individual's explicit (or consciously accessible) beliefs about gender diverge from their implicit attitudes. At the conscious level, the person believes men and women are equally capable; however, at the implicit level, the person has a bias against women.

The associationist view provides a natural explanation for divergence: this person's conscious beliefs diverge from their unconscious beliefs because there are two distinct and independent mental constructs involved at each level of consciousness. At the conscious level, this person has deliberately considered evidence and is convinced that men and women are equal. However, at the unconscious level, this person's automatic, reflex-like processes lead them to associate male with leader. Because distinct and independent constructs operate at each level, we get different results depending on which level the individual relies on at any given time. I call this approach, which distinguishes between different kinds of states or processes for explicit and implicit attitudes, a dual-construct model. Dual-construct models excel at explaining divergence, or the differences between explicit and implicit attitudes.

Dual-construct models—like the associationist picture above—have gained favor among psychological accounts of implicit biases. However, more recently, interesting studies exploring the malleability of implicit biases—that is, our ability to change implicit attitudes—suggest that the operation of mental processes at the two levels might not be so different after all. In particular, our implicit attitudes sometimes change when confronted with reasons to do so. I'll call this newly emerging data rationality of bias, or rationality for short. The term 'rationality' is intended as a term of art here, which I'll use to pick out particular features of mental states that I'll explain in more detail later. This notion of rationality is intentionally more robust than you might initially think when hearing the term. For example, you might think that my having an association between salt and pepper is rational—it makes sense that I would think of one right after the other since they often appear together. However, this sort of superficial rationality won't be enough to capture the unique features I want this technical notion to capture. I'll return to this point and explain exactly what unique features I have in mind in Section 3.3.

The possibility that implicit biases might ever be rational, albeit rarely, is surprising for the dual-construct model, which predicts that rational and deliberative processes are unique to the explicit level and, thus, entirely absent at the implicit level. Even stronger, the fact that rational processes might be in operation at both the explicit and implicit levels suggests that the dual-construct model, which attributes to each level distinct and independent kinds of states and processes, might be mistaken. Instead, one
might think the evidence for rationality suggests that implicit attitudes are just like, or at least similar to, ordinary explicit beliefs, save that one kind of belief is unconscious while the other is conscious. I'll call these sorts of approaches belief-based models, because they equate the kinds of constructs leading to explicit and implicit attitudes. Belief-based models excel at explaining rationality, or similarities between explicit and implicit attitudes.

In this chapter, I discuss these two fact patterns—divergence and rationality—in detail. I begin by reviewing standard assumptions about psychological theories more generally, such as what they aim to do and how we evaluate them. Following this preliminary discussion, I review the empirical data indicating patterns of divergence and rationality, and I examine how the two main approaches—dual-construct models and belief-based models—are each sufficient to deal with one of the fact patterns, but struggle to explain both. I'll then look at views that attempt to carve out a middle ground between dual-construct and belief-based models. These views argue that implicit biases constitute a unique kind of mental construct, which is not easily explained by either standard dual-construct or belief-based models.
2 What is a Psychological Explanation?

Roughly, psychology is a scientific discipline that aims to explain an intelligent creature's behavior in terms of that creature's mental states and processes. In other words, psychologists look to a creature's state of mind in order to understand why they acted the way that they did. One fundamental assumption among most psychologists today is that humans have mental states that represent the world as being a certain way and that those representations of the world affect how they think and act in it. For example, you might explain your roommate's going to Chipotle for lunch using her belief that Chipotle makes the best guacamole. Of course, this belief might turn out to be false, or your roommate might have gone to Chipotle for a different reason. But the kind of explanation you gave is what psychologists understand and expect.

This psychological methodology of building theories that explain by making reference to distinctively mental states—beliefs, desires, fears, intentions, etc.—is an example of what philosophers of science beginning with Thomas Kuhn (1962) call a paradigm. Within any paradigm, scientists take certain fundamental assumptions for granted as shared among members of a scientific community—in this case, the assumption that humans have mental states in the form of representations.

An alternative to this methodological paradigm was an approach made popular by B.F. Skinner called behaviorism. It claimed that psychology should only study objective, observable physical stimuli and behavioral responses, and not concern itself with subjective, private mental states. In its most radical form, behaviorism claimed that all behaviors of intelligent
creatures can ultimately be explained in this way, without ever needing to mention internal, distinctively mental states.

Although no longer popular, behaviorism made several important contributions to the methodology of psychology. One contribution is a general suspicion toward the ease of relying on mental state explanations (see Dennett 1981: 56 citing Skinner 1971: 195). The fear is that we can't explain an unknown fact—why your roommate went to Chipotle for lunch—by using an equally mysterious object—her internal belief about Chipotle. Because her belief is a mental state, it is observable only by her and no one else. So we weren't really explaining anything at all, merely replacing one mysterious fact with another.

This worry is sometimes called the 'homunculus fallacy'. The word 'homunculus' (plural 'homunculi') is Latin for 'little person'. A theory that commits this fallacy attempts to explain some intelligent behavior by way of positing some equally intelligent cause of that behavior. The idea is that this is tantamount to positing a little person inside the head of the first intelligent creature whose own behavior goes unexplained. This same basic idea is depicted humorously in the 2015 Pixar film Inside Out. In this film, the perspective switches between that of a young girl, Riley, and the five personifications of the basic emotions that live in her head: Joy, Sadness, Fear, Disgust, and Anger. These five little people (or homunculi) inside Riley's head operate a control center that influences all of Riley's actions. According to the film, the explanation for why Riley acted the way she did—for example, why, when her parents feed her broccoli, she frowns, gags, and swats the vegetable away—is that there is a little person in her head prompting those reactions. In this case, Disgust finds broccoli disgusting.

If you're like me, you might ask: if we looked into the head of these little people, would there be more, even smaller people inside their heads? Of course, the film never shows us what's inside any of their heads. You might then wonder if the film has really provided any explanation of Riley's actions, or if instead it has merely pushed the explanation of her behavior back a level. We can apply a point made by Skinner (1971: 19) and say the whole purpose of introducing the little people seems to be to help us understand why Riley acts how she does. But without providing an explanation of why the little people in Riley's head act the way they do, we've failed to explain anything.

Over time, behaviorism itself was criticized for purporting to provide explanations without actually doing so, and there was a return to theories that unabashedly allowed for reference to mental states (Fodor 1981: 6). On such views, the way to avoid the homunculus fallacy is to slowly replace complex mental phenomena with combinations of simpler, more intelligible mental phenomena (Fodor 1968: 629). The hope is that eventually we arrive at an analysis constituted entirely by simple, elementary states (for example, thoughts and concepts) and the processes that combine them (for example, logical rules). We'll call any collection of states and processes that
enters into such an analysis a mental construct. Crucially, the explanation of how these states operate can be given without any reference to intelligent behavior. And thus we return to the modern-day paradigm that explains behavior by reference to mental states. This paradigm has come to dominate theories of cognitive science and psychology, and is tacitly present in the theories of bias to follow.

However, we should not forget the lessons of behaviorism. You should continue to ask yourself as we move through the theories: has this explanation rendered important parts of the psychological picture more understandable, or has it merely posited a convenient, but equally mysterious, mental process? In other words, has it provided a genuine explanation or has it merely pushed the entire explanation back a level to equally intelligent homunculus-like states?
3 Empirical Data of Social Bias

At the outset of our investigation, we're faced with several questions. What are the data surrounding social bias? In what ways do methods of testing for social bias differ from one another? What patterns emerge from these data?

3.1 Direct and Indirect Measures

Before the early 1970s, tests for social bias took a direct route: if a psychologist wanted to know if someone had a bias against a particular social group, she would ask her subjects directly. Such tests are called direct measures. Let's focus on the case of racial attitudes in the United States. One of the earliest examples of a direct measure was a test created by Katz and Braly (1933) that asked 100 Princeton students to read through a list of 84 adjectives and write down those they thought best characterized a particular race or ethnicity. Characteristic of the time, the results indicated pervasive negative racial biases. The majority of participants in the study paired African Americans with traits like superstitious and lazy, while pairing Germans with traits like scientifically minded and industrious.

Over time, the social landscape of the United States changed dramatically. The Civil Rights Movement of the 1950s and 60s strove to establish racial equality across the country, and ushered in a new public standard that discriminatory opinions about African Americans were socially unacceptable. During this time, direct measures began to show a decline in negative racial bias. However, although overt expressions of racist ideology were curbed, the pervasive and destructive effects of racism were still painfully evident. It seemed that people still harbored racist opinions, opinions that influenced their beliefs about and actions toward people of color; it's just that either those individuals stopped wanting to admit those opinions to others or, more curiously, those opinions were not obvious even to them. This prompted the emergence of indirect measures (sometimes called "implicit" measures) that do not rely on asking subjects to report their attitudes. Today, the most famous and widely used indirect measure is the Implicit Association Test
(IAT), first developed by Greenwald et al. (1998). (The following discussion will attempt to describe the test in detail. However, the easiest way to understand how the test operates is to take it for yourself, which you can do online at https://implicit.harvard.edu/implicit/ in the span of approximately 10 minutes.)

The IAT asks participants to quickly sort stimuli appearing in the middle of a computer screen to either the left or the right depending on the categories on each side. There are always two categories on either side, forming compound categories for each; these compound categories are not static but change during the test. For example, in race IATs, compound categories combine racial groups, e.g., black and white, with valences, e.g., good and bad. The stimuli to be sorted into the compound categories are representations of members of one of those four groups: photos of black faces, photos of white faces, positively-valenced words like 'happy' or 'love,' and negatively-valenced words like 'sad' or 'hate.' Subjects are asked to complete several rounds or blocks of these sorting tasks, with the compound categories changing from round to round. The congruent blocks of sorting tasks combine racial groups with their stereotypical valences: with 'black and bad' on one side and 'white and good' on the other. In the incongruent blocks, the valences are swapped to make counter-stereotypical pairings: 'black and good' on one side and 'white and bad' on the other. The tests also switch the sides of the congruent and incongruent categories on different rounds, attempting to eliminate any dominant-hand advantage in certain blocks, as well as randomize the order in which the incongruent and congruent blocks are presented, attempting to eliminate any conditioning effects from getting used to the test.

The results of these tasks reveal that most Americans, including some African Americans, are quicker and make fewer mistakes (e.g., sorting to the wrong side) when sorting stimuli in the congruent blocks (see, for example, Banaji and Greenwald 2013: 47 and Axt et al. 2014: 1806). I'll describe results of this sort as demonstrating a positive preference toward white faces. (This label doesn't make any assumptions about individuals' psychologies. It is just about their behavioral responses: whether a given participant paired African American faces faster and more accurately with positive words, negative words, or—as is true for some participants—neither.)

Now the key question motivating psychological theories of bias is this: what's going on inside the head of someone who demonstrates a positive preference toward white faces on an IAT? The favored response among the creators of the test is that it measures specific mental constructs where the states involved are simple concepts and the relevant process is association. In the case of the Race IAT, the test measures whether particular race concepts are associated with valence concepts. Just like in the salt and pepper example mentioned in the introduction, what it means for the racial concept black to be associated with the valence concept bad is just for mental activations of the concept black to reliably cause mental activations of the concept bad. That is, when you think of one (the concept activates), you think of the other (the other concept activates). Crucially, this theory assumes that two concepts being associated makes it easier to sort examples of them together, and harder to sort examples of each separately. So, if black is associated with bad, then it will be easier to sort examples of black faces (which activate the concept black) and examples of negative words (which activate the concept bad) to the same side than to opposite sides, and the same will be true of white faces and positive words if their concepts are associated.

There are other examples of indirect measures, for example semantic priming (Banaji and Hardin 1996) and evaluative priming (Fazio et al. 1995), the Go/No-Go Associations Task (Nosek and Banaji 2001), the Sorting Paired Features Task (Bar-Anan et al. 2009), the Weapons Identification Task (Correll et al. 2002), the Extrinsic Affective Simon Task (De Houwer 2003), and the Affect Misattribution Procedure (Payne et al. 2005). But roughly, all of these tests rely on similar theoretical assumptions: that certain behavioral patterns (like quick sorting) occur as a result of the subjects' mental constructs being composed a certain way (like being made up of associated concepts). With the basics of direct and indirect measures out of the way, we're now in a position to explore various patterns that emerge in the data they provide.
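To make the congruent/incongruent comparison concrete, here is a minimal sketch, in Python, of how a simplified IAT-style preference score might be computed from response latencies. The data and the bare mean-difference score are invented for illustration; the scoring algorithms actually used in the literature add further steps, such as error penalties, latency filtering, and standardization by variability.

```python
import statistics

def iat_preference_score(congruent_ms, incongruent_ms):
    """Toy IAT-style score: difference between mean response latencies
    in the incongruent blocks (counter-stereotypical pairings) and the
    congruent blocks (stereotypical pairings). A positive score means
    slower sorting in the incongruent blocks, the behavioral pattern
    described above as a positive preference toward white faces."""
    return statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)

# Hypothetical latencies (milliseconds) for one participant.
congruent = [612, 588, 655, 601, 637]
incongruent = [734, 768, 701, 745, 722]

print(iat_preference_score(congruent, incongruent))  # approximately 115.4
```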
3.2 Divergence

Our first fact pattern is divergence: results of indirect measures are often at odds with, i.e., diverge from, results of direct measures for the same subject. In what follows, I'll explain two ways in which results of these measures diverge. The discussion that follows focuses only on empirical data, namely, observations of behavior; we will discuss psychological theories that attempt to explain these data in the following section. As we advance through the observational evidence, however, it will be a fruitful exercise for readers to hypothesize about plausible explanations, and then weigh their hypotheses against the psychological models presented in subsequent sections.

3.2.1 Reportability

The first aspect of divergence is the most striking. It involves an individual's ability to report on the results of indirect and direct tests. Early explorations of implicit bias suggested that subjects who demonstrate a positive preference toward white faces on indirect measures could not report harboring preferences or aversions that would explain such behavior. In fact, when people were confronted with the fact that their indirect measure results indicated bias, many avowed egalitarians expressed shock and disbelief. In their book Blindspot, Mahzarin Banaji and Tony Greenwald report several instances of the "disturbing" feeling one gets when confronted with IAT evidence indicating an implicit bias (Banaji and Greenwald 2013: 56–58). These cases include a gay activist who finds out he harbors negative associations toward the gay community and a writer whose mother is Jamaican finding out he harbors pro-white biases, stating the revelation was "creepy," "dispiriting," and "devastating." Even the authors report the first-personal shock of finding out they have biases. Banaji's experience is described as "one of [her] most significant self-revelations" (Banaji and Greenwald 2013: 57).

However, the claim that individuals are always unable to predict their results on indirect tests has received criticism. Some empirical studies indicate that when participants were pressed to offer a prediction about how they would perform on an indirect test, their predictions were mostly accurate (Hahn et al. 2014). In addition, other studies have shown that when interviewed after an IAT, most subjects could accurately report how they fared on the test and, moreover, many attributed their biased IAT performance to racial or stereotypical associations (Monteith et al. 2001: 407). In some studies, merely telling subjects to attend to their "gut feelings" was enough to increase their predictions of biased results (Hahn and Gawronski 2019).

3.2.2 Motivational and Social Sensitivity

The second way in which results on indirect and direct tests appear to diverge involves motivational and social sensitivity. Regarding motivation, studies demonstrate that the more motivated a subject describes themselves as being, the more their pro-white results on direct tests (e.g., self-reports) go down, while their results on indirect tests (e.g., IATs) remain unchanged. To demonstrate this, researchers conducted a trio of tests (Fazio et al. 1995). In addition to a timed indirect measure (called the "Evaluative Priming Task") and a direct measure, researchers provided a third set of questions gauging how motivated the subjects were to avoid being prejudiced or appearing prejudiced to others. The researchers found that the correlation between subjects' performances on the indirect measure and their scores on the direct measure varied depending on how high they indicated their motivation to appear non-prejudiced was (Fazio et al. 1995: 1024). In situations where they claimed to be not highly motivated, their results from indirect and direct measures matched up (either both exhibited a preference or neither did). In situations where the subjects claimed they were highly motivated, their results on the other two tasks were often mismatched, with the results on the direct measure often demonstrating no preference, and the results of the indirect measure demonstrating a positive preference toward white faces.

Similar results were found with respect to the social sensitivity of the subject matter. When researchers Greenwald et al. (2009) and Kurdi et al. (2019) produced meta-analyses—taking a large group of studies about the correlation between direct and indirect measures and analyzing their overall average effects—they found that direct measures correlate with results from indirect measures differently depending on how socially sensitive the topic is. For example, for topics that are not socially sensitive, like consumer preference, subjects' direct and indirect preferences align, whereas for topics that are socially sensitive, like gender and sexual orientation preferences, their direct and indirect preferences were much less correlated.

Perhaps the most interesting finding from these studies, and a crucial aspect of divergence, concerns predictive validity. Researchers use predictive validity to identify the degree to which they're able to predict some phenomenon on the basis of some other phenomenon. In the case of implicit bias, many researchers are interested in the predictive validity between direct and indirect measures on the one hand and real-world discriminatory behavior on the other. Meta-analyses differ about the predictive validity of implicit and explicit measures. Greenwald and colleagues found that, on topics where indirect and direct measure results diverge, self-reports of racial bias showed smaller predictive validity for biased behavior compared to the predictive validity of IAT scores for biased behavior. All meta-analyses of the predictive validity of direct and indirect measures score both measures in the "small" to "small-to-medium" range. Even critics of the IAT grant that it has some predictive power, but the question of amount is a matter of ongoing debate and research.

Importantly, many studies appear to support the predictive validity of the IAT and other indirect measures. Real-world examples of this predictive validity include results from a Swedish study presented by Rooth (2007), which found that discriminatory hiring practices for applicants with Arab/Muslim-sounding names were well predicted by IAT measures. Additionally, studies presented by Jacoby-Senghor and colleagues (2016) and Fazio and colleagues (1995) indicated that subjects' results from indirect measures predicted their perceived friendliness toward African Americans. Moreover, similar results have been seen with respect to racial biases in the treatment of patients by emergency room and resident physicians (Green et al. 2007) and racial biases in the accuracy of simulated shooting tasks (Payne 2001). (However, again, some critics argue that these results are exaggerated. See Brownstein, Chapter 3, "Skepticism About Bias," and Brownstein et al. 2020.)

Before moving on to the theories that attempt to resolve the puzzle of divergence, I first want to examine the other major fact pattern in the empirical data: the puzzle of the rationality of bias.
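The moderation pattern Fazio and colleagues report can be made concrete with a small sketch: compute the correlation between direct and indirect scores separately for low- and high-motivation participants. Every number below is fabricated purely to reproduce the qualitative pattern described above; none of it comes from an actual study.

```python
import numpy as np

# Fabricated scores for ten participants; higher = stronger pro-white
# preference on each measure (arbitrary units).
indirect = np.array([0.8, 0.6, 0.9, 0.7, 0.5, 0.8, 0.7, 0.9, 0.6, 0.8])
direct = np.array([0.7, 0.5, 0.9, 0.6, 0.4, 0.1, 0.0, 0.1, 0.1, 0.0])
# The first five report low motivation to appear unprejudiced; the last
# five report high motivation (their direct reports sit near zero).
motivated = np.array([False] * 5 + [True] * 5)

for mask, label in [(~motivated, "low motivation"), (motivated, "high motivation")]:
    r = np.corrcoef(indirect[mask], direct[mask])[0, 1]
    print(f"{label}: direct-indirect correlation r = {r:.2f}")

# With these invented numbers: r = 0.99 (low motivation) versus
# r = 0.08 (high motivation). The two measures align only when
# subjects are not motivated to appear unprejudiced.
```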
3.3 Rationality of Bias

As with the puzzle of divergence, the puzzle of rationality is multifaceted. This puzzle arises from data suggesting that results from indirect tests can actually be manipulated by what I'll call rational interventions. A rational intervention is an attempt to intervene on a person's implicit attitudes that relies on the informational content of the intervention (the reasons it presents) rather than mere repeated exposure to the intervention (known in psychology as "classical conditioning"). The two indications of rationality for bias that we'll focus on are responsiveness to the rational interventions of simple instructions and strength of evidence.

3.3.1 Sensitivity to Simple Instructions

Initially it seemed like intensive training was necessary to change how people performed on indirect measures, indicating that change was a result of conditioning rather than rational intervention (see, for example, Kawakami et al. 2000). However, recent studies have shown that some indirect measure results can be changed by one-off, simple instructions, indicating that changes might be the result of more rational interventions after all.

The first relevant study involves an experiment presented by Gregg et al. (2006: 9; additionally discussed in Mandelbaum 2016). In this experiment, participants were given hypothetical narratives about two fictional tribes. The narrative about the first tribe was positive while the narrative about the second was negative. Participants were then given an IAT, the results of which indicated that participants demonstrated a positive preference toward the first tribe. Experimenters then did something strange: they told the participants that due to an unforeseen error (like a computer malfunction), participants had been given incorrect information about the two tribes. Participants were then instructed to swap the previous narrative assignments. Participants were then asked to retake the IAT. Surprisingly, the results exhibited a shift: subjects demonstrated a positive preference toward the first tribe to a far lesser degree than before (Mandelbaum 2016; see also De Houwer 2006a; 2014).

3.3.2 Sensitivity to Strength of Argument

A second common data pattern cited by theorists interested in the puzzle of rationality is the relationship between results on indirect measures and the strength of evidence being presented to subjects. The most relevant study for this point is presented by Brinol et al. (2009; additionally discussed in Mandelbaum 2016). Here, researchers present two experiments aimed at testing this relationship, one involving vegetable preferences and one involving race. In the experiments, participants were split into groups and presented with one of two arguments—a strong argument or a weak argument—in favor of some conclusion regarding the benefit of the target stimulus. For example, in the experiment involving attitudes toward vegetables, one group of participants was given a persuasive argument (regarding the beneficial health effects of diets that include vegetables) while the other group was given an unpersuasive argument (regarding the popularity of vegetables at weddings). The participants were also given IATs before and after being exposed to the arguments. Interestingly, only those participants who were exposed to the strong arguments
demonstrated any change in IAT performance (demonstrations of positive preference toward vegetables increased). The experiment with arguments involving race showed similar results.
4 Psychological Theories of Social Bias

Now that we've seen the data for social bias, new questions emerge. How can we explain these results? What is the best way to explain why indirect measures diverge from direct measures? If implicit biases are just associations, then why does performance on indirect measures sometimes shift in apparently rational ways?

4.1 Dual-Construct Theories

According to dual-construct theories, we can explain the differences between direct and indirect measure responses by positing separate mental constructs that independently give rise to results on each kind of test. Let's call the mental constructs that give rise to results of indirect measures like the IAT implicit constructs, and the mental constructs that give rise to results of direct measures like self-reports explicit constructs.

Consider again the first quality of divergence, reportability. One hypothesis for why subjects can report on their results of direct tests but not indirect tests is that they have conscious access to their explicit constructs, but not their implicit constructs. Psychologists and philosophers disagree about whether implicit constructs are really unconscious or just not always noticed, the latter of which would explain the data that subjects are not always incapable of predicting and reporting IAT results (De Houwer 2006b: 14–16; Fazio and Olson 2003: 302–303; and Holroyd and Sweetman 2016: 80–81). If implicit constructs are indeed less easy to access consciously and to report on than explicit constructs, then this would account for the divergence in reportability.

Adopting a dual-construct theory similarly helps explain the other quality of divergence: motivational and social sensitivity. First, it seems reasonable to assume these are two versions of the same phenomenon—socially sensitive contexts are a kind of motivational context because subjects are motivated to be and to appear egalitarian (i.e., to value equal treatment for members of different social groups). We can then explain the relevant differences by postulating dual constructs: deliberative explicit constructs and automatic implicit constructs. In situations where a subject is highly motivated to express egalitarian attitudes, their motivation can influence the operation of deliberative constructs, but they have no control over automatic constructs. Notice also how this explanation and the previous one about unconsciousness relate to one another: the control someone has over some mental construct might be affected by the degree to which they're consciously aware of it.

Let's walk through an example of a dual-construct model. The Associative-Propositional Evaluation (APE) Model presented in Gawronski and
Bodenhausen (2006; 2014a; 2014b) is one of the most developed dual-construct models. (For another example of the dual-construct approach, see Fazio and Olson's MODE model, presented in Fazio 1990 and Fazio and Olson 2003.) The theory suggests that implicit constructs and explicit constructs involve two distinct processes: associative processes and propositional processes, respectively.

Imagine Sounak sees his elderly neighbor Carol for the first time. When his mother asks him what he thinks of his new neighbor, he responds that he likes her and is happy to have an elderly individual in the neighborhood. However, some of his behaviors indicate he's less warm to the idea; for example, he tends to cross the street whenever he sees Carol outside. According to the APE model, although his explicitly held beliefs indicate his warm feeling toward Carol, his mental associations tell a different story.

According to APE, when Sounak sees Carol, this activates the concept elderly in his mind. This associative activation then spreads by way of associative processes to other mental concepts, e.g., wise, frail, and forgetful might all activate. Some of these concepts might have a positive valence (like being wise), but some of them might have a negative valence (like being frail). Some of Sounak's responses, like his crossing the street, are a direct result of these valences. Since the overall valence of the activating concepts is negative (wise is positive, while frail and forgetful are negative), Sounak has what Gawronski and Bodenhausen (2014b: 449) call a "negative spontaneous evaluative response." This response is what causes him to cross the street; it's also what is measured by indirect measures like the IAT.

But what about his report to his mother that he's glad Carol moved to the neighborhood? APE is able to account for these responses too. According to the model, the overall valence of the activating concepts goes through another process called propositionalization, which produces in Sounak a sentence-like thought of the form "I don't like Carol." This thought is treated more deliberatively than the spontaneous evaluative response. Crucially, this thought is evaluated against all of Sounak's background beliefs, which include things like his belief that he likes elderly individuals and that Carol is an elderly individual. Of course, if he likes elderly individuals and Carol is an elderly individual, then it stands to reason that he likes Carol, making this new thought "I don't like Carol" inconsistent with his other beliefs. So, according to APE, he rejects this new sentence-like thought while leaving intact his background beliefs that indicate he's glad an elderly individual moved to the neighborhood.

This is what leads to divergence: his mental concepts and associative processes that lead to the spontaneous response (i.e., the implicit construct) indicate a negative response toward Carol, while his sentence-like thoughts and propositional processes that lead to his rejection of the thought that he dislikes Carol (i.e., the explicit construct) indicate a positive response toward Carol.
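As a way of fixing ideas, here is a toy rendering of the two APE processes applied to the Sounak example. The APE model is a verbal psychological theory, not an algorithm, so everything below, from the valence numbers to the consistency check, is an invented simplification rather than the authors' own formalism.

```python
# Associative process: seeing Carol activates 'elderly', which spreads
# to associated concepts, each carrying a valence (numbers invented).
associations = {"elderly": ["wise", "frail", "forgetful"]}
valence = {"wise": +1, "frail": -1, "forgetful": -1}

def spontaneous_evaluation(cue):
    """Sum the valences of the concepts the cue activates."""
    return sum(valence[concept] for concept in associations[cue])

def propositional_check(score, background_beliefs):
    """Propositional process: the summed valence is propositionalized into
    a sentence-like thought, then tested for consistency with other beliefs."""
    thought = "I don't like Carol" if score < 0 else "I like Carol"
    # The background beliefs jointly entail that Sounak likes Carol, so
    # the negative thought is inconsistent with them and gets rejected.
    rejected = (thought == "I don't like Carol"
                and "I like elderly individuals" in background_beliefs
                and "Carol is elderly" in background_beliefs)
    return thought, rejected

score = spontaneous_evaluation("elderly")  # -1: negative spontaneous response
thought, rejected = propositional_check(
    score, {"I like elderly individuals", "Carol is elderly"})

print(score)  # drives street-crossing; picked up by indirect measures
print(thought, "(rejected)" if rejected else "(endorsed)")  # divergence
```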
APE is tailor-made to reflect the ways that motivational and social sensitivity lead to divergence. It claims that the more socially sensitive the domain, the more likely people are to engage in deliberative processes, whereas socially insensitive domains rely on the spontaneous, non-deliberative processes.

4.2 Belief-Based Theories

Belief-based theories take as their primary data the apparent rationality of results on both indirect and direct tests and claim these similarities occur because the underlying constructs for each are of the same belief-like type, namely, both are sentence-like structures that involve rational processing. One of the most developed and popular versions of this view in social psychology is named the propositional model, where a proposition is a representation with a sentence-like structure (De Houwer 2014). This model has two core assumptions supporting the conclusion that implicit constructs involve rational processes and sentence-like representations: first, changes by rational interventions are the result of rational processes, and second, only sentence-like representations can be changed by rational processes. Since indirect measure results can be changed by rational interventions (as demonstrated by the rationality data), it follows first that the constructs measured by them—implicit constructs—must involve rational processes (by assumption one), and second that they must be composed of sentence-like representations (by assumption two). According to De Houwer (2014: 346), a sentence-like structure is necessary for representing relational information, and relational information is necessary for rational interventions. Since simple associations between concepts are not able to capture the relational information that sentence-like structures are able to capture, the processes performed on them cannot be rational.

This argument is complicated, and so running through an example will help. Consider a belief with the content Rahul loves Priya. This belief is propositional; its structure is sentence-like. It also captures how Rahul and Priya are related. An association, remember, exists between concepts. So, the complex Rahul loves Priya would be a combination of three singular concepts Rahul, loves, and Priya, that are all associatively linked. But then the associationist model can't distinguish between complexes that are built out of the same constituents but contain different relational information, for example Rahul loves Priya versus Priya loves Rahul. The relational information conveyed by these two complexes is very different. We form different rational conclusions depending on which we believe, and one might be true while the other is false (much to Rahul's chagrin). So it's important that we can distinguish between them. To do that, this theory argues, we need to combine the concepts in a particular order with a sentence-like structure. Mere associative bundles just won't do.
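One way to see the structural point is to compare data structures. In this sketch (my own illustration, not De Houwer's), an association is modeled as an unordered bundle of concepts and a proposition as an ordered structure; only the latter can distinguish who loves whom.

```python
# Associations are unordered: just a bundle of co-activated concepts.
association_a = frozenset({"Rahul", "loves", "Priya"})
association_b = frozenset({"Priya", "loves", "Rahul"})
print(association_a == association_b)  # True: who loves whom is lost

# Propositions are ordered, sentence-like structures of the form
# (subject, relation, object): order encodes the relational information.
proposition_a = ("Rahul", "loves", "Priya")
proposition_b = ("Priya", "loves", "Rahul")
print(proposition_a == proposition_b)  # False: the difference is preserved
```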
Because the associationist model can't account for this difference, representations involved in rational processes can't be modeled by associations. Since, as we've seen from the data, some implicit constructs are affected by rational interventions, according to the assumptions of this theory they must be sentence-like mental states rather than mere associations between concepts (see also Mandelbaum 2016).

4.3 Problems

Both dual-construct and belief-based theories excel at answering their fact patterns of choice; however, each falters in resolving the other's preferred fact pattern. Dual-construct models do well in explaining divergence data, but run into difficulties in explaining rationality data due to assuming that implicit constructs involve only associations. Likewise, belief-based models explain rationality with implicit constructs being sentence-like representations that respond to rational processes. However, if implicit biases are so belief-like, then why don't they look like beliefs in many respects pertaining to divergence—that is, why would they be relatively unconscious or automatic in ways that our explicit beliefs are not?

This is not to say that either theory cannot explain the other's data. For example, proponents of the APE model claim it's possible that some propositional information can affect spontaneous evaluative responses by a sort of "spillover" from the rational processes (Gawronski and Bodenhausen 2014b: 456; see also Fazio and Olson 2003: 302). Likewise, proponents of the belief-based model can account for divergence by stipulating that the mind is made up of many conflicting, fragmented, and redundant sentence-like thoughts, some of which are unconscious and automatic, some of which are not, and some of which cause positive reactions toward individuals while others cause negative reactions toward those same individuals (Mandelbaum 2016).

The problem with these sorts of concessions, however, is that they appear ad hoc. 'Ad hoc' applies to parts of theories "made up on the fly" when problematic evidence comes in, rather than predicted in advance. Consider a famous example studied by psychologist Leon Festinger and colleagues involving a doomsday cult that had predicted the end of the world by way of a great flood on December 21st, 1954. The prophecy stated that "the chosen" among the believers would be saved from the destruction of the flood at precisely midnight the evening before by a spaceman piloting a flying saucer, who would then escort them to safety. Unsurprisingly to us now, midnight came and passed with no arrival of a spacecraft. The flood also failed to come to fruition. When faced with the apparent disconfirmation of the prophecy, rather than reasoning that the cult teachings were false, believers rationalized the failures as consistent with the cult teachings after all. They reasoned ad hoc that it was due to the true faithfulness of the cult members themselves that the flood was avoided (Festinger et al. 1956).

Perhaps the real trouble with concessions like these is that they appear to
lack explanatory efficacy. Returning to theories of implicit bias, although it's true that spillover effects, mixed processes, and fragmentation can result in the relevant fact patterns, none of these theories offers an account of why these effects occur when they do; they merely stipulate them by decree. Without filling in this story, these changes to the theories in order to account for both fact patterns are explanatorily inert, as in the homunculus fallacy discussed in Section 2. More and more research is investigating these questions, and it's important that this research draws on theories to make predictions rather than trying to explain the data with 20/20 hindsight.
5 Meeting in the Middle

So far, we've looked at views of implicit bias that fit into two basic camps—dual-construct theories and belief-based theories—as well as the problems with each. In what remains, we will briefly survey views that attempt to carve out space in the middle. These views will often attribute to implicit bias constructs some characteristics that are shared by the familiar constructs discussed above (e.g., associations and beliefs), but also characteristics unique to implicit bias.

5.1 Unique States and Processes

Some views attempt to carve out middle ground by treating implicit biases as being pretty similar to beliefs, but differing from them in important ways. I'll begin by surveying prominent views of this type, then present a prominent criticism.

One view that adopts the uniqueness approach is Schwitzgebel's in-between belief view (2002, 2010). Schwitzgebel's view is importantly different from the other views we've discussed in that it characterizes beliefs not in terms of their representational contents, but rather as tendencies to behave in various ways when faced with various physical stimuli (in this way, it's similar to behaviorism, discussed in Section 2). So, rather than viewing my belief that I should take the garbage out when it's full as a mental state with the propositionally-structured content I should take the garbage out when it's full, we instead view it as my tendency to take the garbage out when confronted with the overflowing can. But such a view can run into problems when faced with cases where an individual seems disposed to a mixed bag of behaviors. The data on divergence above are an example: often, individuals are disposed to act in ways that make it seem like they have one belief about members of a particular social group, but they're also disposed to act in ways that make it seem like they harbor the opposite belief. (Recall also the case of Sounak and his elderly neighbor.) For these reasons, Schwitzgebel introduces the notion of in-between cases of believing for cases where it seems an individual doesn't fully believe, but doesn't not believe either (Schwitzgebel 2002: 260). Implicit biases, according to Schwitzgebel, are cases of in-between believing. (See also Levy 2015 and Machery 2016; 2017.)
Another, very different uniqueness approach is Tamar Gendler's alief view (Gendler 2008; 2011). Gendler argues that we cannot capture implicit constructs with any of the familiar categories of psychological explanation, such as associations or beliefs. Instead, she argues, implicit constructs, as well as other "more deviant" arenas of human life, such as phobias and superstitions, should be explained by a new psychological kind called aliefs, which are a three-part mix of thoughts, feelings, and behavioral impulses. When a person's implicit bias construct is activated, then, that person not only has particular representational components activated (e.g., concepts like elderly and frail), but they're also prone to experience certain feelings (like being sad or scared) as well as exhibit certain behaviors (like avoidance). These aliefs are often at odds with a person's beliefs. (See her example of walking across the Grand Canyon on the transparent Skywalk: although you might believe that it's safe, you might simultaneously alieve that it's not.) Regarding implicit biases as aliefs, we might believe one thing regarding individuals of a particular social group, while also harboring arational aliefs that cause us to automatically exhibit behaviors diverging in various ways from the behaviors we would expect based on those beliefs. (See also Madva and Brownstein 2018 and Brownstein 2018.)

The criticism often directed at uniqueness approaches is that they, like the additions to the associative and belief-based models above, appear ad hoc. They deal with the problems above—namely, that the collection of properties harbored by implicit bias constructs makes it so that they don't neatly fit any models of standard, familiar mental constructs, like beliefs or associations—by merely postulating a new kind of state that has all and only the relevant properties, and thereby, they fix the problem by fiat. Worries of the homunculus fallacy loom large here. That is, it's not clear that postulating these unique states really explains the operation of implicit bias constructs rather than just pushing the explanation back a level.
6 Concluding Remarks

At this point, we've surveyed many views on the topic, all attempting to account for various aspects of implicit bias operation. In fact, more and more research is coming out on bias that continues to complicate the overall picture. As methods for studying bias become more sophisticated, so too does our understanding of how bias operates in the minds of individuals.

Given the variety, readers might be skeptical that there is even a unified phenomenon to be studied under the heading of implicit bias research. Holroyd and Sweetman (2016) raise this possibility. If this is right, it would explain why some data surrounding implicit bias operation just can't be explained using one, monolithic psychological explanation. Instead, we would need a variety of different theories. The purpose of psychological theorizing around implicit bias, then, would be to search for different explanations, describing in what instances they're
apt, investigating what, if anything, unifies them, and, importantly, doing all this while ensuring that such explanations are genuinely explanatory. Such a view paints a picture of the psychology of bias in which there's still a lot of work to be done, but leaves open that many of the views we've discussed here might eventually find a home together, constituting different and important aspects of the overall picture.
SUGGESTIONS FOR FUTURE READING
For a more technical overview of many of the psychological theories discussed in this chapter:
• Brownstein, M. (2015) Implicit bias. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/spr2015/entries/implicit-bias
For more on the notion of a paradigm within scientific theorizing:
• Kuhn, Thomas (1962) The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.
For more on the relationship between behaviorism, the homunculus fallacy, and methodologies in psychological explanation:
• Watson, John B. (1913) Psychology as the behaviorist views it. Psychological Review, 20(2).
• Fodor, Jerry A. (1968) The appeal to tacit knowledge in psychological explanation. The Journal of Philosophy, 65(20): 627.
For an introduction to mental heuristics, system one vs. system two, and more on the unconscious, automatic mind more generally:
• Kahneman, Daniel (2011) Thinking, Fast and Slow. New York: Macmillan.
DISCUSSION QUESTIONS

1 What is a psychological explanation? Imagine that my roommate comes home from school, stomps across the living room to her bedroom, and slams the door. How might we explain her actions using the mental representation paradigm? What objections would behaviorists make to this explanation?
2 What is the homunculi fallacy and why is it bad for psychological explanation?
3 What's the primary difference between indirect and direct measures of implicit bias? Why is it important to have indirect measures of bias? Do you think having indirect measures is less important in the study of preferences in other domains, like what kind of soda someone likes to drink or what genre of movie they like best?
4 In most of the psychological theories discussed, there was a focus on mental representations, i.e., mental states that represent the world as being a certain way. However, apart from Gendler's alief model, there was very little talk about other mental states, like affective or emotional states, that might affect how biases operate. How might these fit into the models we've discussed so far? Do you think they're an important aspect of how we think about and act toward others?
5 We've been talking about the processes subserving implicit bias as a self-contained mental construct. One of the great criticisms of behaviorism, however, is that it cannot account for mental interaction, that is, effects that are the joint result of many mental causes. So, try to think about implicit bias in the context of a complete psychology that has perceptions, inferences, actions, desires, problem-solving abilities, is embodied, etc. (see Leboeuf, Chapter 2, "The Embodied Biased Mind," and Greene, Chapter 7, "Stereotype Threat, Identity, and the Disruption of Habit"). How might our understanding of implicit bias and its construct change when we think about it in this domain?
6 Psychologists Amos Tversky and Daniel Kahneman are famous for work demonstrating that the human mind is prone to adopt a wide variety of shortcuts and heuristics (see Beeghly, Chapter 4, "Bias and Knowledge: Two Metaphors," for more on the view of biases as "shortcuts"). They argue that the mind is composed of two distinct "systems": system one involves the many mental shortcuts and is responsible for our fast, automatic behaviors, while system two involves more rational processes and is responsible for our slow, deliberate behaviors. How does this picture fit with the theories of implicit bias that we've been discussing? What implications might this have for the existence of other implicit constructs outside of the social domain? How might this change the way we theorize about the psychology of bias?
REFERENCES

Axt, J.R., Ebersole, C.R., and Nosek, B.A. (2014) The rules of implicit evaluation by race, religion, and age. Psychological Science, 25(9): 1804–1815.
Banaji, M.R. and Greenwald, A.G. (2013) Blindspot: Hidden Biases of Good People. New York: Delacorte Press.
Banaji, M.R. and Hardin, C.D. (1996) Automatic stereotyping. Psychological Science, 7(3): 136–141.
Bar-Anan, Y., Nosek, B.A., and Vianello, M. (2009) The Sorting Paired Features Task: A measure of association strengths. Experimental Psychology, 56(5): 329–343.
Brinol, P., Petty, R., and McCaslin, M. (2009) Changing attitudes on implicit versus explicit measures: What is the difference? In R. Petty, R.H. Fazio, and P. Brinol (eds), Attitudes: Insights from the New Implicit Measures. New York: Psychology Press.
Brownstein, M. (2018) The Implicit Mind: Cognitive Architecture, the Self, and Ethics. New York: Oxford University Press.
Brownstein, M., Madva, A., and Gawronski, B. (2020) Understanding implicit bias: Putting the criticism into perspective. Pacific Philosophical Quarterly. https://doi.org/10.1111/papq.12302
Correll, J., Park, B., Judd, C.M., and Wittenbrink, B. (2002) The police officer's dilemma: Using ethnicity to disambiguate potentially threatening individuals. Journal of Personality and Social Psychology, 83(6): 1314–1329.
De Houwer, J. (2003) The extrinsic affective Simon task. Experimental Psychology, 50(2): 77.
De Houwer, J. (2006a) Using the Implicit Association Test does not rule out an impact of conscious propositional knowledge on evaluative conditioning. Learning and Motivation, 37(2): 176–187.
De Houwer, J. (2006b) What are implicit measures and why are we using them? In R.W. Wiers and A.W. Stacy (eds), The Handbook of Implicit Cognition and Addiction (pp. 11–28). Thousand Oaks, CA: SAGE Publications.
De Houwer, J. (2014) A propositional model of implicit evaluation. Social and Personality Psychology Compass, 8(7): 342–353. https://doi.org/10.1111/spc3.12111
Dennett, D.C. (1981) Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge, MA: The MIT Press.
Fazio, R.H. (1990) Multiple processes by which attitudes guide behavior: The Mode Model as an integrative framework. In Advances in Experimental Social Psychology (Vol. 23, pp. 75–109). Elsevier. Retrieved from http://linkinghub.elsevier.com/retrieve/pii/S0065260108603184
Fazio, R.H., Jackson, J.R., Dunton, B.C., and Williams, C.J. (1995) Variability in automatic activation as an unobtrusive measure of racial attitudes: A bona fide pipeline? Journal of Personality and Social Psychology, 69(6): 1013–1027.
Fazio, R.H. and Olson, M.A. (2003) Implicit measures in social cognition research: Their meaning and use. Annual Review of Psychology, 54(1): 297–327. https://doi.org/10.1146/annurev.psych.54.101601.145225
Festinger, L., Riecker, H.W., and Schachter, S. (1956) When Prophecy Fails. Minneapolis, MN: University of Minnesota Press.
Fodor, J.A. (1968) The appeal to tacit knowledge in psychological explanation. The Journal of Philosophy, 65(20): 627–640.
Fodor, J.A. (1981) Introduction: Something of the state of the art. In Representations (pp. 1–31). Cambridge, MA: The MIT Press.
Gawronski, B. and Bodenhausen, G.V. (2006) Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132(5): 692–731. https://doi.org/10.1037/0033-2909.132.5.692
Gawronski, B. and Bodenhausen, G.V. (2014a) The Associative-Propositional Evaluation Model: Operating principles and operating conditions of evaluation. In J.W. Sherman, B. Gawronski, and Y. Trope (eds), Dual-Process Theories of the Social Mind (pp. 188–203). New York: The Guilford Press.
Gawronski, B. and Bodenhausen, G.V. (2014b) Implicit and explicit evaluation: A brief review of the Associative-Propositional Evaluation Model. Social and Personality Psychology Compass, 8(8): 448–462. https://doi.org/10.1111/spc3.12124
Gendler, T.S. (2008) Alief and belief. The Journal of Philosophy, 105(10): 634–663.
Gendler, T.S. (2011) On the epistemic costs of implicit bias. Philosophical Studies, 156(1): 33–63. https://doi.org/10.1007/s11098-011-9801-7
Green, A.R., Carney, D.R., Pallin, D.J., Ngo, L.H., Raymond, K.L., Iezzoni, L.I., and Banaji, M.R. (2007) Implicit bias among physicians and its prediction of thrombolysis decisions for black and white patients. Journal of General Internal Medicine, 22(9): 1231–1238.
Greenwald, A.G., McGhee, D.E., and Schwartz, J.L. (1998) Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74(6): 1464.
Greenwald, A.G., Poehlman, T.A., Uhlmann, E.L., and Banaji, M.R. (2009) Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97(1): 17–41.
Gregg, A.P., Seibt, B., and Banaji, M.R. (2006) Easier done than undone: Asymmetry in the malleability of implicit preferences. Journal of Personality and Social Psychology, 90(1): 1–20.
Hahn, A. and Gawronski, B. (2019) Facing one's implicit biases: From awareness to acknowledgement. Journal of Personality and Social Psychology, 116(5): 769–794.
Hahn, A., Judd, C.M., Hirsh, H.K., and Blair, I.V. (2014) Awareness of implicit attitudes. Journal of Experimental Psychology: General, 143(3): 1369–1392.
Holroyd, J. and Sweetman, J. (2016) The heterogeneity of implicit bias. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy Volume I: Metaphysics and Epistemology. New York: Oxford University Press.
Jacoby-Senghor, D.S., Sinclair, S., and Shelton, J.N. (2016) A lesson in bias: The relationship between implicit racial bias and performance in pedagogical contexts. Journal of Experimental Social Psychology, 63: 50–55. https://doi.org/10.1016/j.jesp.2015.10.010
Katz, D. and Braly, K. (1933) Racial stereotypes of one hundred college students. The Journal of Abnormal and Social Psychology, 28(3): 280–290.
Kawakami, K., Dovidio, J.F., Moll, J., Hermsen, S., and Russin, A. (2000) Just say no (to stereotyping): Effects of training in the negation of stereotypic associations on stereotype activation. Journal of Personality and Social Psychology, 78(5): 871–888.
Kuhn, T. (1962) The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press.
Kurdi, B., Seitchik, A.E., Axt, J.R., Carroll, T.J., Karapetyan, A., Kaushik, N., Tomezsko, D., Greenwald, A.G., and Banaji, M.R. (2019) Relationship between the Implicit Association Test and intergroup behavior: A meta-analysis. American Psychologist, 74(5): 569–586. https://doi.org/10.1037/amp0000364
Levy, N. (2015) Neither fish nor fowl: Implicit attitudes as patchy endorsements. Noûs, 49(4): 800–823. https://doi.org/10.1111/nous.12074
Machery, E. (2016) De-Freuding implicit attitudes. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy Volume I: Metaphysics and Epistemology. New York: Oxford University Press.
Machery, E. (2017) Do indirect measures of biases measure traits or situations? Psychological Inquiry, 28(4): 288–291.
Madva, A. and Brownstein, M. (2018) Stereotypes, prejudice, and the taxonomy of the implicit social mind. Noûs, 52: 611–644. https://doi.org/10.1111/nous.12182
Mandelbaum, E. (2016) Attitude, inference, association: On the propositional structure of implicit bias. Noûs. https://doi.org/10.1111/nous.12089
Monteith, M.J., Voils, C.I., and Ashburn-Nardo, L. (2001) Taking a look underground: Detecting, interpreting, and reacting to implicit racial biases. Social Cognition, 19(4): 395–417.
Nosek, B.A. and Banaji, M.R. (2001) The Go/No-Go Association Task. Social Cognition, 19(6): 625–666.
Payne, B.K. (2001) Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 81(2): 181.
Payne, B.K., Cheng, C.M., Govorun, O., and Stewart, B.D. (2005) An inkblot for attitudes: Affect misattribution as implicit measurement. Journal of Personality and Social Psychology, 89(3): 277–293.
Rooth, D.O. (2007) Implicit Discrimination in Hiring: Real World Evidence. Discussion Paper 2764. Bonn, Germany: Institute for the Study of Labor.
Schwitzgebel, E. (2002) A phenomenal, dispositional account of belief. Noûs, 36: 249–275. https://doi.org/10.1111/1468-0068.00370
Schwitzgebel, E. (2010) Acting contrary to our professed beliefs or the gulf between occurrent judgment and dispositional belief. Pacific Philosophical Quarterly, 91: 531–553. https://doi.org/10.1111/j.1468-0114.2010.01381.x
Skinner, B.F. (1971) Beyond Freedom and Dignity. New York: Knopf.
2 The Embodied Biased Mind

Céline Leboeuf
1 Introduction

We often think of our mental lives as "in our heads." This comes out in idioms such as "it's all in your head," "my head is in the clouds," or "it never entered my head." These idioms usually refer to our thoughts, emotions, or desires. Is this language also relevant to implicit biases? In fact, discussions of implicit bias typically refer to implicit biases as "in our minds." For example, in the FAQ on the Implicit Association Test, the researchers at Project Implicit respond to the question of whether implicit biases come from oneself or one's culture in these terms: "even if our attitudes and beliefs come from our culture, they are still in our own minds" (Project Implicit). Likewise, a New York Times article on implicit bias defines implicit bias as follows: "Implicit bias is the mind's way of making uncontrolled and automatic associations between two concepts very quickly" (Badger 2016). What does such talk mean? If we speak of implicit biases as "in the mind," does that signify that they are exclusively located in the head? This chapter aims to show that implicit biases should not be conceived of as "inside the head" of individuals, but rather as embodied and social. My argument unfolds in three stages. First, I make the case for conceiving of implicit biases as perceptual habits. Second, I argue that we should think of perceptual habits and, by extension, implicit biases, as located in the body. Third, I claim that individual habits are shaped by the social world in which we find ourselves and that the social world is itself shaped by our habitual ways of interacting with and perceiving others. This suggests that implicit biases are part of the fabric of the social world. Thus, I conclude that implicit biases should be understood as embodied and social.
2 Implicit Biases as Perceptual Habits

To defend the claim that implicit biases can be conceived of as habitual ways of perceiving the world, I want to proceed in two stages: first, I'll explain why we can interpret at least some implicit biases as forms of perception; second, I'll show why we can conceive of them as habitual forms of perception.
Let’s begin with the first claim, about perception. Consider the following studies of racial bias. First, take Wilson, Hugenberg, and Rule (2017), which suggests that “people have a bias to perceive young Black men as bigger (taller, heavier, more muscular) and more physically threatening (stronger, more capable of harm) than young White men” (2017, 1). Second, Price and Wolfers (2010) found that “more personal fouls are called against players when they are officiated by an opposite-race refereeing crew than when officiated by an own-race crew” (2010, 1859). Third, Payne (2006) discovered that participants were more likely to identify objects as guns when they were first flashed with a face of a black person than that of a white person. In each of these examples, participants were biased in such a way that they perceived the world (the appearance or behavior of others, or of inanimate objects) in ways affected by race. More generally, these examples indicate that implicit biases can take the shape of differential ways of perceiving our bodies, other persons, or the environment that are modulated by social category of race. To put this idea more generally, when we are biased, we perceive the environment and other persons in ways that are structured by social categories. Now, more needs to be said about what makes these examples cases of implicit bias specifically. We can interpret these experimental examples as implicit because the biased ways of perceiving the world are “relatively unconscious” or “relatively automatic” (Brownstein 2017). First, consider the study of the perception of young black vs. young white men. In this study, participants were called on to make hundreds of judgments of size or strength in timed blocks. This constraint meant that decisions were very swift. Second, take the research on NBA refereeing. Calling a foul typically takes (and should take) the shape of a split-second decision, rather than being based on a considered study of a player’s behavior. Third, let’s have a look again at the study of weapon bias: in one version of the trial, the participants in this study were allowed to respond to the objects at their own pace, whereas in another version, they were required to respond within half a second on each trial. The study found that “in the snap-judgment condition, race shaped people’s mistakes” (Payne 2006, 287). Thus, when I speak of implicit biases in this chapter, I mean ways of perceiving the world that are either relatively unconscious or relatively automatic. The studies of implicit racial bias mentioned above tell us that implicit biases can take the shape of perceptual biases. But you may wonder whether it also makes sense to think of the biases measured in the well-known Implicit Association Test (IAT) as forms of biased perception. For example, as Kelly and Roedder (2008) explain, one version of the IAT involves pairing words with positive connotations (e.g., “delicious” or “happy”) or with bad connotations (e.g., “death” or “unhappy”), with white names (e.g., “Greg”) or black names (e.g., “Lakisha”). Kelly and Roedder note that if you take this version of the IAT, “[m]ost likely, you found it easier to sort the words when the good adjectives were paired with the white names (delicious, Greg)
and the bad adjectives were paired with black names (sad, Lakisha)" (2008, 525). The finding that different verbal associations are easier than others speaks to the idea that we have biases for or against different social groups. Although these results concern cognitive biases, rather than perceptual ones, I think that there is reason to believe that these biases are also embodied. But I'll return to this point later on. What is now worth highlighting is that scores on the IAT have been found to be correlated with biases that are enacted during psychological studies. For instance, Payne (2005) claims that weapon bias is correlated with measures like the IAT. Interestingly, Lira et al. (2017) have found correlations between performance on a version of the IAT that involves pairing words with black or white faces and the influence of skin color on the experience of ownership in the Rubber Hand Illusion. In the Rubber Hand Illusion, a person experiences a foreign object, such as a rubber hand, as part of their own body. Studies of the Rubber Hand Illusion typically involve placing a test subject's hand out of sight and placing a lifelike rubber hand before them (so, within eyesight); when experimenters stroke both the rubber hand and the hand out of sight, subjects generally experience the rubber hand as part of their own body. Lira et al. found that the onset of the Rubber Hand Illusion was delayed when white subjects were presented with a dark rubber hand and, further, that the higher the degree of anti-black bias as measured on the IAT, the more delayed the onset of the Rubber Hand Illusion. In this light, we can think of the IAT as an indirect measure of implicit bias (Kelly and Roedder 2008). The IAT predicts the types of biases at stake in this chapter, namely, biased ways of perceiving others and the world. At this point, it is worth asking how implicit biases go unnoticed. So, let's now turn to the second stage in the argument of this section, which is the case for conceiving of implicit biases as perceptual habits. First, what is a habit? For the purposes of this exposition, I would like to begin with William James's classic presentation in The Principles of Psychology. Although James wrote this over 120 years ago, in 1890, many of his central thoughts about habit have withstood the test of time. According to James, habits have a basis in nerve pathways (James 1890/1950, 106–107). Commenting on James's work, Gail Weiss explains that for him habit is a "phenomenon that is manifested as a set of … nerve discharges that forms a 'reflex path'" such that "each of an individual's habits has its own nerve path, and, through repetition over time, the path becomes more and more fixed and less subject to change" (Weiss 2008, 78). James draws two results from this conception of habits. First, he asserts that habits make the performance of tasks easier. In his words: "habit simplifies the movements required to achieve a given result, makes them more accurate and diminishes fatigue" (James 1890/1950, 112). This result follows from his conception of habits since the more settled the nerve paths associated with a habit are, the more automatic the movement will be. Consequently, there will be fewer extraneous movements, less "fumbling," to perform a habitual task; hence the increase in accuracy and decrease in fatigue. Second, James claims that habits
require less conscious attention: "habit diminishes the conscious attention with which our acts are performed" (James 1890/1950, 114). James reasons that the attention we need to pay to habitual tasks decreases the more we perform that task because the nerve "grooves" associated with a task are set in place. Or, to put it another way, conscious attention is required in order to perform tasks that are new and cognitively demanding. And since habits, by definition, are not new behaviors, they will require less conscious attention. For instance, James would explain that once I have mastered the movements necessary to execute a piano piece, I am able to play the piece more easily (that is, with fewer extraneous movements) and with less conscious attention than when I first learned the piece. This ease and the relatively unconscious character of my performance would be the result of having repeated the movements enough times for the path of neurons related to each movement to be "well worn." In sum, according to James, habits are behaviors which become simpler and easier to perform, and which require less conscious attention, the greater an individual's familiarity with them. In this spirit, certain philosophers, such as the French thinker Maurice Merleau-Ponty (whose understanding of the acquisition of habits we'll discuss in the next section), have argued that we have perceptual habits. That is to say that our access to the world is patterned and depends on a process of learning. What does perception look like to Merleau-Ponty? As commentator Alia Al-Saji indicates, perception for him is not a "neutral recording" of the features of objects external to us (Al-Saji 2014, 138). For instance, seeing a banana on a wooden table and then, a foot away, an orange on the same table, does not amount to first recording that there is a long, elongated, and yellow object in front of my body, followed by a smooth, flat, and brown surface, and, then, one foot away, positioned on the same surface, a small, spherical, orange object. Rather, seeing this banana and orange means experiencing the banana and orange as accessible to me (let's say by grabbing them), and that the banana is more easily accessible to me than the orange. According to Merleau-Ponty, I don't truly even experience the space between the banana and orange as present, unless someone draws my attention to it. Rather, what is salient to me in my visual field are the banana and, a little further away, the orange. To borrow from James's description of habit, just as we have "grooves" that are relatively well settled and that simplify our manipulation of objects, so too we have perceptual "grooves" that simplify the task of accessing the objects in our environment. Returning to the banana and orange example, just as I know how to grab the banana and then the orange without thinking through the hand and arm positions and exact bodily movements required to grab the banana and then the orange, so too, I know how to "cut to" the banana and then the orange through my sense of sight. To see the banana and then the orange, I do not need to first fix my eyes on the banana and make a visual inventory of all of its features, then, to move my eyes along the table and record all the features of the space between the banana and orange, and, finally, turn my gaze to the orange and take note of all its features. My eyes "know how" to move from the banana to the orange.
On the whole, on Merleau-Ponty's conception of perception, perception is habitual, and we acquire perceptual habits through a process of learning. Thus, he explains that children "learn to see colors" as properties of objects, rather than as a confusion of shades unattached to objects, and that this learning amounts to "acquiring a certain style of seeing, a new use of one's body" (Merleau-Ponty 1945/2002, 177). (By "style," he simply means a "characteristic manner.") Notice how the body figures prominently in Merleau-Ponty's discussion of perceptual habits. We'll soon come to why he endorses an embodied conception of habits. Returning to the topic of implicit biases, here's why we can think of them as perceptual habits. First, we learned that we can think of implicit biases as biased forms of perception. Furthermore, to account for the automaticity and unnoticed character of implicit biases, we can think of them as sets of perceptual habits. For example, we don't neutrally record the size and muscularity of young black men and young white men; rather, under the influence of implicit bias, we see the former as bigger and more threatening than the latter, although we are not explicitly aware that we see them that way. Moreover, we might argue that these patterns of perception are instilled in us through a process of socialization. We don't come into the world biased against young black men. We learn to see members of different races in different ways. Likewise, we do not neutrally record the behavior of basketball players or of inanimate objects. Rather, we see them in characteristic manners that we have acquired from birth onwards: we tend to see the opposite-race basketball player as committing a foul, but we tend not to see the same-race basketball player as committing one. And we see the object as a gun after having seen a black man, but not after having seen a white man. In summary, having an implicit bias means that one has acquired a certain pattern of perceiving the world that is predicated on a process of learning. From the discussion in this section, we can take on board the idea that implicit biases are perceptual habits (for further investigation into related questions, see Siegel, Chapter 5, "Bias and Perception"). But this does not yet tell us whether we should think of implicit biases as "exclusively in the head" or rather as "embodied." To support the claim that implicit biases should be thought of as bodily in character, I propose that we consider phenomenological accounts of habit-formation.
3 An Embodied Account of Habit

This section defends the claim that habits, including perceptual habits, are embodied. I base my argument on considerations about the experience, or "phenomenology," of habit-formation, drawn from Merleau-Ponty's Phenomenology of Perception, first published in 1945. But let me first say a few words about phenomenology. Phenomenology, or the study of lived experience, is a movement in philosophy that emerged in the early 1900s and dominated French
and German philosophical life in the first half of the twentieth century. Phenomenologists address traditional topics in philosophy such as the nature of time or objects. But they do so in a very particular way: in answering these questions, they look to how we experience phenomena. For example, a phenomenologist working on the nature of objects might adopt the everyday experience of using tools as a guide in her inquiry. According to Alva Noë, for instance, offering a phenomenology of perception means "to study the way in which perceptual experience—mere experience, if you like—acquires a world-presenting content" (Noë 2004, 179). In other words, a phenomenology of perception does not aim to explain perception from an external perspective, that is, in terms of the physical processes that cause our perceptual experiences (e.g., how the impact of light on the retina causes impulses to be sent via the optic nerve to the brain and then how various parts of the brain become activated). Instead, phenomenologists inquire into how we experience the world from within. Merleau-Ponty's phenomenology of habit-formation has a lot to teach us about the perceptual habits at stake in this chapter. In Phenomenology of Perception, he pays close attention to what we might call "skillful activity." He has in mind activities that we have mastered to the point that we perform them without paying close attention to the movements that are part of performing the activity. Examples of skillful activity include the performances of masters in certain domains, such as the performances of pro athletes or of professional musicians. But the notion is not exclusive to these cases. For instance, someone who has passed her driver's license test is presumably someone who is skillful at the activity of driving a car. The phenomenology of habit-formation tells us something very interesting about skillful activity, namely, that the awareness we have of our bodies when we are dealing skillfully with our environment is practical as opposed to thematic. In practical awareness, bodily movements are in the periphery of our awareness, and go virtually unnoticed. By contrast, thematic awareness consists in explicitly focusing our attention on executing a movement. According to Merleau-Ponty, forming habits requires that we have a practical awareness, rather than a thematic awareness, of bodily movements (Merleau-Ponty 1945/2002, 176). When we form new habits, we typically focus our attention at first on the relevant movements. That is, we have a thematic awareness of these movements. Consider, for example, the attention that a student first pays to proper hand and arm positioning when she is learning to play the piano. Through practice, though, this focus on bodily movements recedes. In fact, Merleau-Ponty argues that to the extent that our awareness of our bodies is thematic, our ability to skillfully deal with the environment will be compromised. In order to grasp this point, think of the experience of actively listening to your voice as you speak: instead of hearing your voice in a dim, receding fashion, your voice becomes the focus of your attention; in such instances, you adopt the type of awareness that your listeners typically have of your voice. As you probably know from experience, your speech usually
loses its fluidity in such cases, and you begin to stumble as you speak. Merleau-Ponty would say that your thematic awareness of your voice causes a breakdown in skillful speaking. The phenomenology of habit-formation teaches us that we should think of habits, including perceptual habits, as embodied. Experience tells us that for a movement to become habitual, it needs to be incorporated into our practical awareness of our bodies. In other words, habits allow us to skillfully deal with the world because they vanish from our awareness and become part of our bodies. In fact, Merleau-Ponty describes habits as "knowledge in the hands" (1945/2002, 166). Habits require a medium "to stick" to in order for them to truly become habitual. This medium, the phenomenology of habit-formation reveals, is the body. Let's have a closer look, though, at why Merleau-Ponty's phenomenology of habit-formation is helpful for understanding implicit biases specifically. First, his framework helps us make sense of the fact that implicit biases typically go unnoticed. If we have any awareness of them, it would be a practical rather than a thematic awareness of them. I've argued that it's useful to think of implicit biases as perceptual habits. Combining this insight with Merleau-Ponty's view of habit yields a picture according to which implicit biases are ways of perceiving the world that are enacted in the body and that are outside of the focus of our attention. Second, Merleau-Ponty's phenomenology also tells us about how we acquire implicit biases. They are habits acquired through the repetition of bodily movements. Furthermore, since the perceptual habits I've focused on in this chapter concern interactions within a social world, it's reasonable to think that the acquisition of perceptual habits requires imitating the movements of others. After all, we are not born with biased ways of interacting with members of different social groups! To better grasp this second point, it's also useful to turn to Helen Ngo's The Habits of Racism (2017). According to Ngo, Merleau-Ponty's phenomenology makes better sense of the acquisition of implicit racial biases than accounts that appeal to the idea of completely unconscious attitudes or perceptions. She claims that "while this discourse is effective in illuminating the depth of racist attitudes and perceptions in our psychical being, as well as their near-imperceptibility […] its framing in terms of the unconscious makes it difficult to give an account of the uptake involved in such racist orientations" (2017, 23). In simpler terms, Ngo's point is that portraying biases as entirely unconscious does not explain how we acquire racist ways of interacting with others; it merely appeals to the idea of the unconscious to explain why these patterns of interaction typically go unnoticed by those enacting them. On her view, appealing to Merleau-Ponty's phenomenology tells us how we become implicitly biased: we become so by mirroring the body language of others and by repeating this body language over time. To borrow a geological metaphor often used by phenomenologists, habits become "sedimented" in our bodies: just as mountains emerge through the gradual layering of sediments in a particular location, so too habits emerge through the repeated enactment of bodily movements.
Ngo’s argument is not only applicable to racist body language. In fact, it is relevant to all body language—whether the relevant sphere concerns interactions between racial groups or between persons of different genders, sexual orienta tions, ages, or bodily abilities. On an embodied view of implicit bias, to harbor an implicit bias simply means to “use the body” in a biased way. This does not mean that we actively or consciously choose to use our bodies in biased ways. Remember that we are talking about implicit bias. What it means, then, to say that implicit biases are embodied is to say that we use our bodies in relatively automatic and nearly imperceptible ways. For example, having an implicit bias against members of one racial group might mean engaging with them in subtly different ways than with members of other social groups. But to make the embodied conception of implicit bias more concrete, let’s revisit the empirical studies of implicit bias with which we began. According to the first study, young black men were perceived as bigger and more physically threatening than young white men. The second study indicated that fouls were called more or less often depending whether the referee and players were of the same race or not. And, in the third study, participants were more likely to identify objects as guns when they were first flashed with a face of a black person than of a white person. In each of these examples, participants were biased in such a way that they per ceived the world (of inanimate objects, others’ appearance or behavior, etc.) in ways affected by race. On an embodied conception, in each of these examples, what it means to be implicitly biased is to interact with the world—whether directly with other persons or with the objects associated with them—according to patterns that are barely in the background of our awareness. So far, I’ve argued that perceptual habits require the medium of the body to be enacted. But this leaves open the question of whether we should think of verbal associations, such as those studied in the IAT, as bodily in char acter. When I pair the words with negative connotations with stereotypical “black names,” am I also embodying a habit? I would argue that we should think of these cognitive biases as bodily, for reasons drawn from theories of embodied cognition. According to such theories, “[m]any features of cogni tion are embodied in that they are deeply dependent upon characteristics of the physical body of an agent” (Wilson and Foglia 2017). There are many routes to this idea, and it is beyond the scope of this chapter to consider them in detail. Yet, it is clear that the IAT requires people to direct their gaze at a computer screen, to press buttons, and so on; it is an embodied behavior. As such, the act of pairing a word with another, or with an image, which we might otherwise interpret as the result of a process in the brain, might be interpreted as a problem solved by enacting bodily habits.
4 The Body and the Social World

In this section, I turn to the claim that implicit biases are social. To get a handle on this idea, I propose that we turn to the work of a philosopher who critically responds to Merleau-Ponty, the French thinker Pierre Bourdieu.
Merleau-Ponty recognizes that we acquire habits through a process of learning that is embedded in a social world; he describes how imitating others allows us to form new habits (Merleau-Ponty 1945/2002, 408–415). Yet, Bourdieu faults Merleau-Ponty for ignoring how large-scale "social structures," such as socio-economic class and gender, shape our habits. Bourdieu thinks that Merleau-Ponty focuses too much on the individual and not enough on the fact that we tend to embody the habits of the social groups to which we belong. The missing ingredient in Merleau-Ponty's phenomenology is an account of social structures, that is, a theory of how social norms and institutions (e.g., prohibitions against incest, churches or the laws of a country) generate patterns of social interaction (e.g., forms of marriage or gendered ways of behaving). According to Bourdieu, habits are both a property of an individual's body and a property of social groups. So, for him, to say that implicit biases are embodied would not only be a claim about individual bodies, but also a claim about the social world. Simply put, implicit biases are not just in our bodies but in the social world. I recognize that this might seem an odd idea, one even harder to get a handle on than the idea that implicit biases are not just "in our heads." That's why I propose that we walk through Bourdieu's thinking about bodily habits very slowly. Bourdieu's The Logic of Practice, first published in 1980, offers an account of our bodily habits, one of whose central concepts is the habitus, which is derived from the Latin word for "habit" or "comportment." In Bourdieu's words, the habitus refers to: "structured structures predisposed to function as structuring structures, that is, as principles which generate and organize practices and representations" (Bourdieu 1980/1990, 53). This is quite a mouthful! Put simply, a person's habitus refers to the embodiment of ways of perceiving and navigating the world as well as our tastes (or "appreciations"). What's interesting about Bourdieu's notion of a habitus is that it helps make sense of how social structures both shape individual habits and are shaped by these in turn. To begin, a structure for Bourdieu is something "that is systematically ordered rather than random or unpatterned" and that "comprises a system of dispositions which generate perceptions, appreciations, and practices" (Maton 2014, 49). For example, the "gender habitus" is a feature of the social world that orders our individual ways of perceiving the world, our tastes, and our bodily movements in relation to the category of gender. For instance, I inhabit ways of dressing or bodily postures that are coded in terms of gender categories, such as that of "being a woman." In virtue of my identity as a woman, I am likely to wear certain clothes or to adopt certain gestures, such as crossing my legs when I am in a formal setting. (Of course, not all women enact their gender habitus in the same ways! Factors such as class, race, and nationality modulate how women enact their "gender habitus.") Bourdieu thinks of the habitus in terms of the idea of a social structure because our perceptions, tastes, and bodily movements exhibit a certain regularity and because different aspects of the habitus, such
as my tastes and bodily habits, are typically coordinated. To return to the gender habitus example, my being a woman means that I am not only likely to regularly embody postures coded as "womanly," but I am also likely to regularly have tastes in clothing coded as "womanly." In short, the ways of perceiving the world, tastes, and bodily practices of a habitus exhibit a certain systematicity. As a "structuring structure," the habitus should be conceived as a feature of the social world that shapes individual habits in systematic ways: it affects our practical activity, that is, everything from the most unthinking perceptions and bodily movements to our tastes. In this regard, the habitus is a "top-down" phenomenon: belonging to a social group conditions me to behave in particular ways. However, as a "structured structure," the habitus is a phenomenon that is also structured, or shaped, by our individual habits (Bourdieu 1980/1990, 54). This implies that the habitus is "updated" by changes in society. When women began to wear pants in Western societies, for instance, that meant that the gender habitus evolved there. This suggests that, in another regard, the habitus is a "bottom-up" social structure: our individual perceptual and bodily habits, as well as our tastes, give order to the social world. In short, there is a continuous feedback loop between individual habits (so, how we use our bodies) and the social world. One important point to note in the above quotation is that, for Bourdieu, we are conditioned by the habitus in a way that does not typically result in deliberate choices. When he defines the habitus, he adds that our "practices and representations" are "adapted to their outcomes without presupposing a conscious aiming at ends" (1980/1990, 53). For instance, consider the case of gendered patterns of behavior, such as occupying more or less physical space with one's body. As Iris Marion Young has eloquently described, occupying more space with one's body is typical of masculine patterns of behavior in the West, whereas occupying less space with one's body is typical of feminine patterns of behavior there (2005, 35–45). In such cases, Bourdieu would argue that I do not reason: 1. I identify with a certain gender; 2. persons of that gender typically occupy space in such and such a way; and conclude 3. I will choose to occupy physical space in that way. Simply put, we often do not deliberately choose these types of behavior. Of course, this is not true of all cases; sometimes we do explicitly choose to comport ourselves in certain ways. But by and large, Bourdieu would say that I just comport myself in certain ways out of habit and that the habits I have are conditioned by the social group(s) to which I belong. In summary, we often embody habits that are continuous with those of other members of our social group. Another important point to take from Bourdieu's work is that our habits tend to conform most to the habits of members of our own social group. For example, Bourdieu explains that our political beliefs will gravitate towards those of other members of our socio-economic group and that we are even more disposed to discuss politics with those who share our political beliefs, thus reinforcing the homogeneity of political beliefs within a given
socio-economic group (Bourdieu 1980/1990, 61). For Bourdieu, there is a "homology," that is, a mirroring effect, between individual habits and the habits of a social group (Bourdieu 1980/1990, 60). You could say that individual habits are a "microcosm" of the habits of a social group, or vice versa, that the habits of a social group are a "macrocosm" of individual habits. At this point, you may be wondering what this all has to do with implicit bias! Although Bourdieu did not explicitly discuss implicit bias (he did discuss gender norms), I think that there are lessons that we should take from him for our topic. First, from the idea that the habitus structures our individual habits, we learn that our individual habits are conditioned by the social group to which each of us belongs. For instance, if I am part of a racial group X, which tends to harbor implicit biases against racial group Y, then, in all likelihood, I will be conditioned to harbor implicit biases against members of racial group Y. One type of bias that illustrates this idea is in-group-out-group bias. This consists in being biased in favor of members of one's own group and being biased against members of other groups, and it suggests that members of the same social group will be likely to hold similar in-group-out-group biases. For instance, studies indicate that third-party punishment, the phenomenon of punishing someone who has not directly harmed oneself but has harmed another person, is subject to in-group-out-group bias: we tend to punish members of other groups more severely than members of our own group (Yudkin et al. 2016). Second, from the idea that the habitus of a social group is largely homogeneous, we learn that an individual is more likely to harbor the implicit biases of the social group to which she or he belongs than those harbored by members of another social group. Returning to the previous example, Bourdieu would say that, as a member of racial group X, I am more likely to harbor implicit biases against racial group Y (because I am a member of racial group X) than members of racial group Z, who tend to be biased favorably towards members of racial group Y. In short, I am more likely to share the biases of members of my own social group than those of members of other social groups. Third, from the idea that the habitus is itself conditioned by individual behavior, Bourdieu might surmise that were it possible for individuals to change their implicit biases (see Dominguez, Chapter 8, "Moral Responsibility for Implicit Biases: Examining Our Options"; McHugh and Davidson, Chapter 9, "Epistemic Responsibility and Implicit Bias"; Madva, Chapter 12, "Individual and Structural Interventions"), then it would be possible for the implicit biases of that social group as a whole to be changed. This refers to the feedback-loop aspect of the habitus. In a nutshell, to say that implicit biases are social is to say that they are not just enacted in individual bodily behavior, but that they are enacted by social groups as a whole. Implicit biases "live in" the bodies of individuals and also in the social world. Accordingly, Bourdieu might say that biases are part of our environment. For further developments of this suggestion, see Holroyd and Puddifoot (Chapter 6, "Epistemic Injustice and Implicit Bias"),
Greene (Chapter 7, "Stereotype Threat, Identity, and the Disruption of Habit"), and Ayala-López and Beeghly (Chapter 11, "Explaining Injustice: Structural Analysis, Bias, and Individuals").
5 Concluding Thoughts on the Embodied and Social Character of Implicit Bias

This chapter has examined the idea that, rather than conceiving of implicit biases as "in the head," we should conceive of them as embodied and social. I drew on a phenomenological interpretation of habits as embodied and on an interpretation of bodily habits as social to defend this idea. More generally, this discussion should encourage us to rethink our understanding of the relations between mind, body, and world. Instead of locating the mind in the head or in the brain, we should think, to borrow the words of Alva Noë, of our mental lives as "out of our heads" (Noë 2009). And, instead of thinking of bodies as individual "atoms" located in the social world, we should think of the body as shaped by, and as shaping, the social world.
SUGGESTIONS FOR FUTURE READING

If you'd like to learn more about how our perception of other persons is biased by social categories, read:

• Price, J. and Wolfers, J. (2010) Racial discrimination among NBA referees. The Quarterly Journal of Economics, 125(4): 1859–1887.
• Wilson, J.P., Hugenberg, K., and Rule, N.O. (2017) Racial bias in judgments of physical size and formidability: From size to threat. Journal of Personality and Social Psychology, 113(1): 59–80.

If you'd like to learn more about how our perception of physical environments and objects is biased by social categories, read:

• Payne, B.K. (2005) Conceptualizing control in social cognition: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 89(4): 488–503.
• Cesario, J., Plaks, J.E., Hagiwara, N., Navarrete, C.D., and Higgins, E.T. (2010) The ecology of automaticity: How situational contingencies shape action semantics and social behavior. Psychological Science, 21(9): 1311–1317.
• Cheryan, S., Plaut, V.C., Davies, P.G., and Steele, C.M. (2009) Ambient belonging: How stereotypical cues impact gender participation in computer science. Journal of Personality and Social Psychology, 97(6): 1045–1060.

If you are curious about the psychology of habit-formation, read:

• Duhigg, C. (2014) The Power of Habit: Why We Do What We Do in Life and Business. New York: Random House. Duhigg's popular introduction to the notion of habit offers an accessible introduction to the science of habit-formation.
• James, W. (1890/1950) The Principles of Psychology, vol. 1. Mineola, NY: Dover Publications. William James's classic explores a very wide range of psychological phenomena, from consciousness and attention to memory. But if you want to learn more about his theory of habit-formation, focus on Chapter 4, "Habit."

For discussions of habit that focus specifically on gender and race, consider reading:

• Ngo, H. (2017) The Habits of Racism: A Phenomenology of Racism and Racialized Embodiment. Lanham, MD and London: Lexington Books. Ngo's recent work explores how race affects bodily experience as well as the ways in which this social category inflects both the bodily habits of white persons and persons of color.
• Young, I.M. (2005) On Female Body Experience: "Throwing Like a Girl" and Other Essays. Oxford: Oxford University Press. This book brings together essays written by Young on the bodily experiences of women. "Throwing Like a Girl" explicitly challenges traditional accounts of bodily habits, such as those formulated by Merleau-Ponty.

If you'd like to learn more about embodied conceptions of perception and cognition, read:

• Carman, T. (2008) Merleau-Ponty. New York: Routledge. Carman's clear and engaging introduction to Merleau-Ponty's thought focuses on his account of perception as embodied.
• Dreyfus, H. (2014) Skillful Coping: Essays on the Phenomenology of Everyday Perception and Action. Oxford: Oxford University Press. This work collects many of Dreyfus's writings on the phenomenology of perception, cognition, and action, and defends an embodied account of perception and cognition. This advanced text will be of interest to those who have some background in the philosophy of mind or phenomenology.
• Merleau-Ponty, M. (1945/2002) Phenomenology of Perception. New York: Routledge. This is Merleau-Ponty's most famous work. While the stated aim of the book is to explore the phenomenology of perception, the topics discussed range from the nature of embodiment to human freedom, covering along the way such diverse issues as language, sexuality, space, and time. It is a very dense and hefty book, so if you are new to Merleau-Ponty, use Carman's Merleau-Ponty as a guide.
• Noë, A. (2009) Out of Our Heads. New York: Farrar, Straus and Giroux. This book offers an accessible introduction to Noë's conception of our mental life as embodied. For Noë, although the brain matters to consciousness, consciousness does not unfold "in our heads"; rather, it is a skillful activity that is embodied as we navigate the environment.

If you'd like to learn more about Bourdieu's social account of habits:

• Bourdieu, P. (1980/1990) The Logic of Practice. Palo Alto, CA: Stanford University Press. This is one of Bourdieu's most in-depth statements of his theory of the body and social world. (The other is Outline of a Theory of Practice.) It is a very difficult text and engages with such philosophical schools as phenomenology and structuralism. For his theory of the habitus, start with the chapter entitled "Structures, habitus, practices." For those new to Bourdieu, consider reading some of the secondary sources below.
• Social Theory Re-Wired's Profile of Bourdieu: http://routledgesoc.com/profile/pierre-bourdieu. This is a very short and accessible entry into Bourdieu's thought. Check out the "Key Concepts" page for a summary of his concept of the habitus as well as his two signature concepts: "social capital" and the "field."
• Maton, K. (2012) Habitus. In M. Grenfell (ed.), Pierre Bourdieu: Key Concepts (second edition). New York: Routledge. Maton's article offers a more advanced introduction to Bourdieu's theory of the habitus and connects it with his concepts of the "field" and "social capital."
• Moi, T. (1999) What Is a Woman? And Other Essays. Oxford: Oxford University Press. Chapter 3 of Moi's book critically examines what Bourdieu's ideas, including his theory of the habitus, have to offer for thinking about the condition of women. This chapter is also a helpful companion for a first read through Bourdieu's work.

For a comparative account of Pierre Bourdieu, William James, and Maurice Merleau-Ponty on habits, consider reading:

• Weiss, G. (2008) Refiguring the Ordinary. Bloomington, IN: Indiana University Press. Chapter 5 of Weiss's work, "Can an Old Dog Learn New Tricks? Habitual Horizons in James, Bourdieu, and Merleau-Ponty," provides an insightful comparative analysis of James's, Bourdieu's, and Merleau-Ponty's accounts of habits. Two of the main topics covered by Weiss are the plasticity of habits and their social character.
DISCUSSION QUESTIONS

1 Leboeuf's essay asks us to reconsider the idea that implicit biases are "in our heads." She begins by suggesting that we typically think of our mental life as "in our heads." What are idioms in English or in another language that illustrate this idea? Where do you think mental life is located?
2 Leboeuf suggests that we should interpret implicit biases as perceptual habits. What are perceptual habits according to her? What aspects of implicit biases does this interpretation shed light on? Do you think that she is right to interpret implicit biases as perceptual habits? Why or why not?
3 Merleau-Ponty argues that habits are embodied. What does this mean? How does the distinction between practical and thematic awareness help us understand the bodily character of habits?
4 What does Merleau-Ponty's account of habit as embodied tell us about implicit biases? How should we interpret studies of weapon bias in light of an embodied account of implicit bias? Do you think that it is correct to think of implicit biases as bodily? Why or why not?
5 What does Bourdieu's account of habit tell us about implicit biases? Do you think that it is correct to think of implicit biases as social? Why or why not?
6 Compare and contrast Leboeuf's and Johnson's accounts of implicit biases. Are their accounts mutually exclusive in any ways, or can they be thought of as "different sides of the same coin"? Does Johnson's account require that biases are in the head, or can it accommodate Leboeuf's points about bias in the body and the social world? Does Leboeuf's account require that we ignore the inside of the mind and the brain?
7 Are all implicit biases a matter of perceptual habits? Can you think of any "purely intellectual" or "cognitive" biases?
REFERENCES

Al-Saji, A. (2014) A phenomenology of hesitation: Interrupting racializing habits of seeing. In E.S. Lee (ed.), Living Alterities: Phenomenology, Embodiment, and Race (pp. 133–172). Albany, NY: SUNY Press.
Amodio, D.M. and Devine, P.G. (2006) Stereotyping and evaluation in implicit race bias: Evidence for independent constructs and unique effects on behavior. Journal of Personality and Social Psychology, 91(4): 652–661.
Badger, E. (2016) We're all a little biased, even if we don't know it. New York Times, October 5, 2016. Available at: https://www.nytimes.com/2016/10/07/upshot/were-all-a-little-biased-even-if-we-dont-know-it.html [accessed 14 March 2018].
Bourdieu, P. (1980/1990) The Logic of Practice. Palo Alto, CA: Stanford University Press.
Brownstein, M. (2017) Implicit bias. In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2017 edition). Available at: https://plato.stanford.edu/archives/spr2017/entries/implicit-bias/ [accessed 14 March 2018].
Carman, T. (2008) Merleau-Ponty. New York: Routledge.
Cesario, J., Plaks, J.E., Hagiwara, N., Navarrete, C.D., and Higgins, E.T. (2010) The ecology of automaticity: How situational contingencies shape action semantics and social behavior. Psychological Science, 21(9): 1311–1317.
Cheryan, S., Plaut, V.C., Davies, P.G., and Steele, C.M. (2009) Ambient belonging: How stereotypical cues impact gender participation in computer science. Journal of Personality and Social Psychology, 97(6): 1045–1060.
Dreyfus, H. (2014) Skillful Coping: Essays on the Phenomenology of Everyday Perception and Action. Oxford: Oxford University Press.
Duhigg, C. (2014) The Power of Habit: Why We Do What We Do in Life and Business. New York: Random House.
James, W. (1890/1950) The Principles of Psychology, vol. 1. Mineola, NY: Dover Publications.
Kelly, D. and Roedder, E. (2008) Racial cognition and the ethics of implicit bias. Philosophy Compass, 3(3): 522–540.
Lira, M., Egito, J.H., Dall’Agnol, P.A., Amodio, D.M., Gonçalves, O.F., and Boggio, P.S. (2017) The influence of skin colour on the experience of ownership in the rubber hand illusion. Scientific Reports, 7(1): 15745. doi:10.1038/s41598-017-16137-3
Maton, K. (2014) Habitus. In M. Grenfell (ed.), Pierre Bourdieu: Key Concepts (second edition). New York: Routledge.
Merleau-Ponty, M. (1945/2002) Phenomenology of Perception. New York: Routledge.
Moi, T. (1999) What Is a Woman? And Other Essays. Oxford: Oxford University Press.
Ngo, H. (2017) The Habits of Racism: A Phenomenology of Racism and Racialized Embodiment. Lanham, MD and London: Lexington Books.
Noë, A. (2004) Action in Perception. Cambridge, MA: The MIT Press.
Noë, A. (2009) Out of Our Heads. New York: Farrar, Straus and Giroux.
Payne, B.K. (2005) Conceptualizing control in social cognition: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 81: 488–503.
Payne, B.K. (2006) Weapon bias: Split-second decisions and unintended stereotyping. Current Directions in Psychological Science, 15(6): 287–291.
Price, J. and Wolfers, J. (2010) Racial discrimination among NBA referees. The Quarterly Journal of Economics, 125(4): 1859–1887.
Project Implicit (2018) Frequently asked questions. Available at: https://implicit.harvard.edu/implicit/faqs.html [accessed 14 March 2018].
Social Theory Re-Wired (2016) Profile of Pierre Bourdieu. Available at: http://routledgesoc.com/profile/pierre-bourdieu [accessed 15 March 2018].
Weiss, G. (2008) Refiguring the Ordinary. Bloomington, IN: Indiana University Press.
Wilson, J.P., Hugenberg, K., and Rule, N.O. (2017) Racial bias in judgments of physical size and formidability: From size to threat. Journal of Personality and Social Psychology, 113(1): 59–80. http://dx.doi.org/10.1037/pspi0000092 [accessed 14 March 2018].
Wilson, R.A. and Foglia, L. (2017) Embodied cognition. In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2017 edition). Available at: https://plato.stanford.edu/archives/spr2017/entries/embodied-cognition/ [accessed 27 July 2018].
Young, I.M. (2005) On Female Body Experience: “Throwing Like a Girl” and Other Essays. Oxford: Oxford University Press.
Yudkin, D., Rothmund, T., Twardawski, M., Thalla, N., and Van Bavel, J.J. (2016) Reflexive intergroup bias in third-party punishment. Journal of Experimental Psychology, 145(11): 1448–1459.
3
Skepticism About Bias
Michael Brownstein
Criticism of research on implicit bias has grown in recent years. Psychologists, philosophers, and journalists have raised a variety of concerns. These range from very technical critiques of the way that implicit bias is measured to foundational worries about the social sciences in general. Particularly in the popular press, some have drawn strong conclusions, for example, that research on implicit bias is a “false science” (e.g., Bartlett 2017; Mac Donald 2017c; Singal 2017). In what follows, I distinguish six lines of critique and reply to each. Some are more widely held than others. For reasons of space, I home in on one or two representatives of each line of critique. A common thread is that there are many open questions and challenges facing research on implicit bias. However, it is a mistake to conclude from these open questions and challenges that the implicit bias research program ought to be abandoned. Instead, scholars ought to seek to improve our understanding of what implicit bias is, how it affects people’s lives, the ways in which it is measured, and the most effective strategies to combat it.
1 Inequalities Thought to Be Explained by Bias Don’t Exist or Aren’t Unjust

One way to criticize research on implicit bias is to claim that it is much ado about nothing. If the kinds of discrimination and inequalities with which social scientists are concerned—those stemming from people’s attitudes toward race, gender, age, sexual orientation, class, etc.—don’t really exist, then there is nothing important for implicit bias to explain. There are two related forms of this argument. One is to deny the existence of specific social disparities, for example, to deny that the police treat black people differently from white people. Another is to explain away disparities in ways that make them seem banal, or that exonerate members of dominant social groups, for example, to argue that police arrest black people more often because black people are more likely to commit crimes. To some extent, the difference between these marks the difference between inequalities and inequities. Inequalities are simply differences, which may or may not be just.
Some people are taller than others, which is an inequality, but there is nothing morally worrying about this. Inequities are unjust differences. If a manager only hires tall people, and there is no justifying reason for this (e.g., a justifying reason might be that she is hiring basketball players), then she is contributing to inequity.

For an example of the first kind of critique—denying the existence of specific social disparities—recall that implicit bias is often invoked as helping to explain racial disparities in policing in the United States. Heather Mac Donald (2017a), however, author of The War on Cops, denies the existence of racial disparities in policing. She argues that “the police have much more to fear from black males than black males have to fear from the police” (2017a). Mac Donald blames the putatively false narrative about racial disparities in the criminal justice system on the mainstream media, President Obama, and Black Lives Matter protestors, whom she calls “savages” (2017b, 15). Similarly, Mac Donald claims that race-based affirmative action is rampant and that “the most influential sectors of the economy today employ preferences in favor of blacks” (2017c).

Elsewhere, as an example of the second line of critique, Mac Donald acknowledges the existence of racial disparities in the United States, but blames these inequalities on disenfranchised minorities themselves. This is to say that while she acknowledges the existence of certain racial inequalities, she denies that these inequalities are inequities (i.e., that they are unjust). In her article, “The False ‘Science’ of Implicit Bias” (2017c) she writes:

It is taboo to acknowledge that socioeconomic disparities might be caused by intergroup differences in cultural values, family structure, interests or abilities. The large racial gap in academic skills renders preposterous any expectation that, absent bias, blacks and whites would be proportionally represented in the workplace. And vast differences in criminal offending are sufficient to explain racial disparities in incarceration rates.

Reply

Leaving aside the ideological charge and apparent racial animus of Mac Donald’s views, her first set of claims about race and policing, which I take to be representative of the view that specific social inequalities don’t exist, is unconvincing. The evidence of racial disparities in health, wealth, policing, education, and virtually every measurable index of group well-being is overwhelming (respectively, see for example, Penner et al. 2013; Chetty et al. 2018; Pfaff 2017; Magnuson and Waldfogel 2008). When Mac Donald asserts that the police have more to fear from black men than the reverse, or that most sectors of the economy have preferences for black employees, she does so on an extremely narrow reading of the data. For example, to justify her claim about police officers’ fear of black men, she asserts that, in 2015, a police officer was 18.5 times more likely to be killed
by a black man than an unarmed black man was to be killed by a police officer. All of these deaths are terrible, and minimizing the dangers associated with being a police officer is hugely important, of course. But the statistic Mac Donald cites, assuming it is true, in no way justifies her assertion. Black Americans have countless reasons to fear the police that Mac Donald does not consider, from the role of police in supporting past racial injustices—slavery, lynchings, Jim Crow—to the kinds of daily harassment and abuse documented in the Investigation of the Ferguson Police Department by the U.S. Department of Justice (2015), to the mass incarceration of black men today in the United States, not to mention the state’s monopoly on legal violence and what that means for those who are treated unfairly by it (for review, see Alexander 2010). Police officers choose their job, moreover, consciously accepting that it entails certain risks. Black Americans do not choose to be targets of suspicion and fear.

Likewise unconvincing are Mac Donald’s explanations of racial disparities in terms that make them seem innocuous, inevitable, or the fault of the victims. The idea that socioeconomic disparities between black and white Americans are the result of “black culture,” for example, has been thoroughly debunked (e.g., see Chetty et al. 2018, discussed below). This is not to say, of course, that people’s individual choices don’t matter, nor that individuals have no responsibility for the course of their lives. Rich and extensive literatures explore the interplay of individual responsibility under conditions of inequality and inequity, an interplay that is relevant, of course, not only to race but also to gender, age, disability, and so on (see Dominguez, Chapter 8, “Moral Responsibility for Implicit Biases: Examining Our Options,” and McHugh and Davidson, Chapter 9, “Epistemic Responsibility and Implicit Bias”). One can also be critical of aspects of the cultures of those who have suffered inequities without thereby brushing aside the problem of those inequities (e.g., Coates 2015).
2 Inequities Exist but Bias Doesn’t Explain Them

Unlike Mac Donald, most people writing about implicit bias acknowledge the existence of widespread group-based inequities in societies like the United States. Some have claimed, however, that there is little evidence that bias and prejudice—whether implicit or explicit—contribute to those social inequities. For example, Sean Hermanson highlights studies that appear to find “no evidence of racial bias in officer-involved shootings” (2017b). Hermanson also claims that “given base rates for criminality the relationship between race and police shootings is unsurprising” (2018). (A “base rate” is the probability of some event happening, such as having a heart attack, prior to intervention, such as taking medication to lower your cholesterol.) Hermanson implies that the reason police officers shoot more black people than white people is that black people commit more crimes. While
Hermanson (unlike Mac Donald) acknowledges that this fact may be due to race-based inequity, he claims that racial bias is not the cause of this inequity. Police officers’ attitudes toward race, in other words, don’t explain the rates at which they shoot black people. These claims contribute to Hermanson’s overall conclusion that “I want my money back” with respect to research on implicit bias.

His critique is not based on shooting data alone. Like myself, Hermanson is a philosopher, and he is concerned with whether women and members of historically marginalized groups are at a disadvantage within the field. His view is that men are at a comparative disadvantage on the philosophy job market. Based on his own analysis of hiring data, Hermanson argues that there is, for example, “weak evidence” that CVs and résumés are judged differently according to the perceived gender and race of their authors (2017a). “Since 2010,” he writes, “women seem to enjoy significant advantage on the [philosophy job] market. In order to understand demographic variance we need to think more about pre-university influences, not implicit biases” (2018). This point contributes to Hermanson’s broad conclusion that it is a myth that there is widespread evidence that implicit bias contributes to real-world discrimination.

Reply

It may indeed turn out that some of the inequities for which researchers have thought that implicit bias is a partial cause are better explained by other factors. And, it is true that some have “over-hyped” research on implicit bias, suggesting (implicitly or explicitly) that it is the primary driver of discrimination in the contemporary world. But it is, for example, demonstrably false that the relationship between race and police shootings is explained by base rates of race-based criminality alone. An analysis of data from 2015, for example, finds no correlation between rates of violent crimes in 50 major cities in the USA and rates at which police officers killed people in those cities (https://mappingpoliceviolence.org/2015/). A more recent study reaffirms the basic finding; police officers are no more likely to shoot and kill someone who is unarmed in a high-crime area compared with a low-crime area (Nix et al. 2017). Black Americans are, moreover, disproportionate targets of police force, compared with white Americans, even when controlling for whether the person targeted is a violent criminal (Atiba-Goff et al., Center for Policing Equity 2016). Criminologist Justin Nix summarizes his recent study of 990 fatal police shootings this way: “The only thing that was significant in predicting whether someone shot and killed by police was unarmed was whether or not they were black,” continuing, “this just bolsters our confidence that there is some sort of implicit bias going on” (quoted in Lowery 2016). A much more open question is how much bias (whether implicit or explicit) contributes to police shootings (see Saul (2018) for discussion).
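To see the logical structure of the base-rate arguments at issue here, a deliberately abstract sketch may help. The following Python snippet uses invented numbers (the population, stop rates, and per-encounter force rate below are assumptions for illustration only, not estimates drawn from any study cited in this chapter):

population = 1_000_000
stop_rate = {"group_a": 0.05, "group_b": 0.10}   # hypothetical encounter rates
force_rate_per_encounter = 0.01                  # identical across groups, by assumption

for group, rate in stop_rate.items():
    # aggregate incidents = population * encounter rate * per-encounter rate
    incidents = population * rate * force_rate_per_encounter
    print(f"{group}: {incidents:.0f} incidents")
# group_a: 500 incidents
# group_b: 1000 incidents

Even with per-encounter treatment stipulated to be identical, unequal encounter rates yield unequal aggregate outcomes. This is why the argument cannot stop at aggregate disparities or at base rates alone: everything turns on what explains the encounter rates and the per-encounter outcomes themselves, a question taken up by the studies cited above and by the op-ed discussed next.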
Hermanson cites a much-discussed New York Times op-ed by economist Sendhil Mullainathan which argues that, if bias were a significant factor in contributing to police shootings of black Americans, then there would be a significant difference between the rates at which black Americans are arrested and the rates at which black Americans are killed by police officers. First, however, it is important to note that, despite this point, Mullainathan concedes that “Police killings are a race problem” and that “African-Americans are being killed disproportionately and by a wide margin” (2015). Second, Mullainathan argues that the problem is driven by the fact that black Americans have disproportionate encounters with the police (i.e., base rates for police contact are higher for black Americans than for white Americans) and that the drivers of police encounters are likely not the biases of individual officers, but rather administrative decisions about where to patrol and descriptions of suspects given to the police by citizens. This is a reasonable point, but one which invites further reflection on, and hopefully further research into, the roles bias plays in shaping decision-making at different levels of society. The administrators who craft policies regarding where to patrol may be biased, as may be the people who describe what suspects look like. Indeed, there is a large literature on the latter, indicating that thoughts of crime trigger thoughts of black people (e.g., Kleider-Offutt et al. 2018). Furthermore, differences in the reasons black and white Americans have encounters with the police are crucial. Black drivers, for example, are three times more likely than white drivers to be pulled over for “investigatory” stops, where police officers see something suspicious but no unambiguous violation has taken place (Epp et al. 2014). The phenomenon of being stopped for “driving while black” means that police encounters cannot only be explained in terms of administrative decisions. FBI data relatedly suggest that when initial civilian–police interactions are the least serious, racial disparities in violent outcomes are the greatest (Lind 2015).

Similarly, in the case of academia, there is ample evidence of gender bias against women contributing to harmful outcomes, but there are also many open questions about how much bias contributes to outcomes and in what specific domains bias negatively affects women. In general, research shows that women faculty are asked for more favors by students than men are (El-Alayli et al. 2018), are asked to do more service (Guarino and Borden 2017), are invited less frequently than men to prestigious colloquia (Nittrouer et al. 2018), systematically receive worse teaching evaluations, in both audit studies (e.g., Mengel et al. 2018) and studies in which online instructors teach multiple courses under different gender identities (MacNell et al. 2014), and more. In the case of philosophy, Hermanson’s own analysis of the available data over roughly the last decade suggests that women are more likely than men to be hired for tenure-track jobs (Allen-Hermanson 2017). Moreover, Hermanson finds that women are hired for tenure-track jobs in philosophy with approximately half the publications of men who are hired for tenure-track jobs. These findings are consistent with some studies examining gender and hiring in STEM
fields too. Williams and Ceci (2015) find that STEM faculty prefer to hire women over men, at least when both the women and men candidates are exceptionally well-qualified. Hermanson takes these findings to speak “against the hypothesis that sexist attitudes (whether conscious or unconscious) held by philosophers are a major cause of disproportion according to gender” (Allen-Hermanson 2017).

These data are important. More research is needed to reconcile them with conflicting accounts of gender and academic hiring specifically (e.g., Jennings (2016) analyses hiring in philosophy and arrives at different conclusions from Hermanson (Allen-Hermanson 2017); the findings for gender and hiring in STEM fields are similarly mixed (e.g., see Moss-Racusin et al. 2012)). But Hermanson’s conclusion is too strong, even taking his hiring data for granted. His arguments do not rule out, or cast any doubt on, the possibility that philosophers’ sexist attitudes contribute to the decision many women undergraduates make to leave philosophy behind (Thompson et al. 2016), to the gender stereotypes about philosophy that students understand prior to starting college (Baron et al. 2015), to women philosophy professors’ decisions to submit their work to relatively lower-tier journals (Allen-Hermanson 2017), or to the countless experiences of harassment women philosophers have described in public (see https://beingawomaninphilosophy.wordpress.com/). Moreover, another possibility to take seriously, but which Hermanson dismisses, is that women have been faring better specifically in tenure-track hiring in philosophy over the past 10 years because more and more philosophers are taking gender bias seriously. Whether this is the case is, of course, an empirical question, one which scholars will hopefully investigate.
3 Inequities Exist but They’re Explained by Explicit Bias

Research into implicit bias has been driven partly by widespread declines in overt racism and sexism. However, particularly since the election of President Trump, the growth of far-right political movements in Europe, and the #MeToo movement, some critics have suggested that the historical turn to less overt forms of bias has been overstated. We don’t need implicit bias to explain contemporary discrimination and inequality, some critics say, as explicit bias can do the explaining. For example, Jesse Singal (2017) points to the Justice Department’s report that the Ferguson police and court officials engaged in widespread intentional race-based discrimination. He writes, “It might be advantageous to various people to say implicit bias rather than explicit bias is the most important thing to focus on, but that doesn’t make it true—a point driven home, perhaps, by the fact that the United States just elected one of the more explicitly racist presidential candidates in recent history” (2017).

Reply

Explicit bias undoubtedly plays a causal role in explaining outrageous discriminatory practices like those in Ferguson. It is also true that researchers
(including me) have sometimes been too quick to suggest that explicit racism and sexism are mostly old-fashioned and that implicit bias is the contemporary face of prejudice. And, as I said above, research on implicit bias has been overhyped by some, and to the extent that anyone has claimed that implicit bias is the most important cause of contemporary discrimination and inequality, Singal is right to disagree. However, the criticisms described above are overstated and misleading.

First, many researchers suggest that implicit bias is important to address within the large segment of the population that disavows prejudice and common social stereotypes. I suspect, for example, that the majority of readers of this volume oppose explicit discrimination. The fact that intentional discrimination is a pervasive and still-contemporary problem does not obviate the fact that there are many people who are explicitly opposed to discrimination, and who are aiming to be unbiased, and yet are susceptible to the kinds of biased behavior implicit bias researchers have been concerned about. The persistence of intentional discrimination, in other words, is no reason to abandon the research program focused on implicit bias.

Second, some critics elide the extensive and ongoing debate in the literature about the nature of, and relationship between, implicit and explicit mental states, processes, and biases. “Implicit bias” is a term of art that refers to a set of unendorsed or disavowed behaviors, such as one’s performance on an IAT. Implicit measures, such as the IAT, are tests that quantify implicit bias. These tests have psychological causes, which are a combination of implicit and explicit processes (or implicit and explicit attitudes, depending on your preferred theory), as well as other features of cognition (e.g., the ability to control one’s impulses). In arguing that implicit bias is a confused notion because explicit biases are more significant causes of discriminatory behavior, Hermanson and others appear to be confusing mental processes with behavioral outcomes. By definition, implicit bias is disavowed behavior. Moreover, there is a rich, long-standing, and ongoing literature exploring the interactions of implicit and explicit processes that give rise to implicit bias. For example, researchers have known for some time that the best way to predict a person’s scores on an implicit measure like the IAT is to ask them their opinions about the IAT’s targets. Recent data have also demonstrated that people are fairly good at predicting their own IAT scores (Hahn et al. 2014). This doesn’t suggest that implicit bias is a meaningless construct. Rather, it suggests that measures of implicit bias are not “process pure” (i.e., what they measure is a mix of various cognitive and affective processes). By analogy, you are likely to find that people who say that cilantro is disgusting are likely to have aversive reactions to it, but this doesn’t mean that their aversive reactions are an invalid construct. Indeed, one of the leading theories of the dynamics and processes of implicit social cognition since 2006—Gawronski and Bodenhausen’s “Associative-Propositional Evaluation” model (APE, updated, e.g., in 2014; see Johnson, Chapter 1, “The Psychology of Bias: From Data to Theory”)—is based on a set of predictions about this process impurity (i.e., about the interactions of implicit and explicit evaluative processes).
That said, there are crucial open questions in the literature, and some forms of confusion are the responsibility of problematic theories. Arguably the most advertised and influential account of implicit bias posits attitudes that are outside of agents’ awareness and control. This definition is thrice problematic, as research suggests that we are not that unaware of our implicit biases (op cit.; according to APE, people fail to report their implicit biases not because they are unaware of them but because they reject them, either because they think they are unjustified or they want to appear unbiased); that there are many techniques available for gaining some degree of control over them (see Lai et al. 2013 for review); and that people don’t have “dual” attitudes that exist in isolation from one another. But, of course, there are many alternatives to this definition, from APE (op cit.) to my own (Brownstein and Madva 2012; Madva and Brownstein 2018; Brownstein 2018). The lively debate about how to characterize the construct of interest is ongoing, as it should be.
4 Social Structures, Not Biased Minds, Are the Causes of Discrimination and Inequity

So long as social science has existed, there have been debates about whether to explain social phenomena in terms of institutional-structural features of societies or in terms of the psychologies of the individuals who comprise those societies. The (crudely put) anti-individualist, sociological approach running from Marx through Durkheim through “situationism” has surfaced as a form of critique of implicit bias too. The central idea running through this “structuralist” stream of thought is that what happens in the minds of individuals, including their biases, is the product of social inequalities rather than an explanation for them. Features of the organization of the social world, in other words—poverty, housing segregation, social norms, etc., which exist independently of any particular person’s thoughts about them—create biases, rather than the other way around. As such, structuralist critics argue, we should focus our attention on changing social structures themselves, rather than trying to change individuals’ biases directly.

For example, Saray Ayala-López (2018) argues that social structures, such as informal norms governing conversations, and not biased minds, are the cause of many forms of injustice, such as how certain individuals’ (e.g., women’s) speech is degraded, ignored, or discounted. Ayala-López builds on Sally Haslanger’s (2015) influential account of social structure, and while Haslanger grants that there is a “small space for attention to implicit bias in social critique” (2015, 11), Ayala-López argues that “agents’ mental states [are] … not necessary to understand and explain” when considering social injustice (2016, 887). In a related vein, Elizabeth Anderson (2010) argues that segregation continues to be a key cause of contemporary racial inequality in the United States. Along the way to making this point, she is critical of what she sees as a distracting focus on the psychology of bias. Likewise, Ron
Mallon (2018) explores the ways in which social privileges like property, wealth, and education accumulate along racial lines. Mallon proposes that the relevant mechanisms of accumulation “crowd out” implicit biases as important causes of inequity. He writes, “there exists little reason to think that contemporary implicit associations are an important part of the explanation of contemporary disparities caused by or grounded in accumulation mechanisms.”

Reply

The structuralist critique is discussed in much more depth in Chapters 11 and 12 (Ayala-López and Beeghly, “Explaining Injustice: Structural Analysis, Bias and Individuals”; Madva, “Individual and Structural Interventions”) of this volume. Here I make a few brief points. As is a theme in this chapter, I think the structuralist critics point to important open questions and challenges for implicit bias researchers. There is much work to do in understanding how psychological and structural phenomena interact to produce and entrench discrimination and inequity. But I think strong claims to the effect that there is little or no room for implicit bias to explain these outcomes are mistaken.

First, some data are notably hard for structuralists to explain. For example, Raj Chetty and colleagues’ (2018) striking intergenerational analysis of race and economic opportunity in the United States appears to find no way around the conclusion that prejudice against black people is a crucial driver of inequity. They studied racial disparities in wealth over the entire US population from 1989 to 2015. Black Americans, they find, have substantially lower rates of upward economic mobility, and substantially higher rates of downward economic mobility, compared with white Americans. This gap is almost entirely driven by differences in wages and employment rates between black and white men, not women. Crucially, they find that differences in parental marital status, education, and personal abilities explain very little of the gap. (These findings also speak against the claim I discussed in §1, that “black culture” is to blame for racial inequity.) Moreover, the gap persists amongst black and white boys who grow up in the same neighborhood. The only areas in which the gap is small are low-poverty neighborhoods with high rates of black fathers being present and low levels of racial bias amongst whites.

Findings like this put pressure on structuralists to explain why social practices (e.g., parental marital status) and situational factors (e.g., high-poverty neighborhoods) don’t all by themselves explain economic inequity. Rather, it appears that social practices and situational factors in interaction with psychological attitudes (e.g., racism) do. This “interactionism” represents an alternative to the structural critique. The aim is not to eschew psychological bias, or to ignore contextual factors, but to understand how bias operates differently in different contexts. By analogy, if you wanted to combat housing segregation, you would want to consider not only problematic institutional
practices, such as “redlining” certain neighborhoods within which banks will not give mortgage loans, and not only psychological factors, such as the propensity to perceive low-income people as untrustworthy, but the interaction of the two. A low-income person from a redlined neighborhood might not be perceived as untrustworthy when they are interviewing for a job as a nanny, but might be perceived as untrustworthy when they are interviewing for a loan.

Adopting the view that bias and structure interact to produce unequal outcomes does not mean that researchers must always account for both. Sometimes it makes sense to emphasize one kind of cause or the other. If my aim is to explain why people are happier in Scandinavian countries than in the United States, I will prioritize structural features of the relevant governments and communities, on the presumption that the nature of the Scandinavian psyche is not likely to be all that inherently different from the American one. If my aim is to explain why twins raised in the same home don’t always end up equally happy in life, however, I will prioritize differences between their individual minds. But even in these extreme cases, a more complete explanation requires focusing on the interaction of individual differences within situations. What psychological dispositions make it such that people are happier when they live in countries with less inequality (as in the Scandinavian countries)? And, given that not everyone in these countries is in a state of perpetual bliss, what individual differences explain why some people remain unhappy even in these apparently favorable structural conditions? Likewise, are there subtle differences in the norms and expectations placed on people that can lead to disparate outcomes for extremely similar individuals (e.g., the twin sibling who is more outgoing having more success in school)?

Another possibility that structuralists seem to discount is that psychological biases might be key contributors to social-structural phenomena. For example, oft-touted examples of structural racism are the drug laws and sentencing guidelines that contribute to the mass incarceration of black men in the USA. The recent surge of states legalizing marijuana presents a novel twist in this story. What has happened in these states? While marijuana-related arrests have declined for all racial groups in these states, black people continue to be arrested for marijuana-related offenses at a rate of about 10 times that of white people (Drug Policy Alliance 2018). This could be due to officers’ biases, or to the biases of those administrators who made decisions about where to patrol, or to the biases of the people who give officers descriptions of suspects. But it is hard to understand the phenomenon without adverting to someone’s biases.

Finally, it is important to note that there is nothing intrinsically individualistic about contemporary measures of implicit bias. Payne and colleagues (2017) argue that tests like the IAT can be understood as a way to measure general features of cultures, such as norms and unjust structures (see also Ayala-López and Beeghly, Chapter 11, “Explaining Injustice: Structural Analysis, Bias, and Individuals”). For example, average scores on implicit measures of prejudice and stereotypes, when aggregated at the level of cities within the United States, predict racial
disparities of shootings of citizens by police in those cities (Hehman et al. 2017). Thus, while it is certainly true that most of the relevant literature and discussion conceptualizes implicit bias as a way of differentiating between individuals, anti-individualists might utilize the data for differentiating regions, cultures, and so on. Only time will tell, but measures of implicit bias might represent powerful tools for detecting large-scale structural injustices.
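The logic of this “bias of crowds” idea can be shown with a toy simulation in Python (all numbers below are assumptions for illustration; this is synthetic data, not the Hehman et al. dataset):

import numpy as np

rng = np.random.default_rng(1)
n_cities, n_per_city = 100, 200
city_bias = rng.normal(size=n_cities)                 # regional norm (unobserved)
# each resident's score = regional norm + large individual noise
scores = city_bias[:, None] + 2.0 * rng.normal(size=(n_cities, n_per_city))
# a city-level outcome driven by the regional norm
outcome = city_bias + 0.5 * rng.normal(size=n_cities)

r_one_person = np.corrcoef(scores[:, 0], outcome)[0, 1]
r_city_mean = np.corrcoef(scores.mean(axis=1), outcome)[0, 1]
print(f"single resident's score vs city outcome: r = {r_one_person:.2f}")  # modest
print(f"city-average score vs city outcome:      r = {r_city_mean:.2f}")   # much higher

Because averaging within a region washes out individual noise, the regional average can strongly predict regional outcomes even where any single person’s score predicts weakly. That is one way to make sense of treating an individual-level instrument as a measure of culture.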
5 The Importance of Understanding Implicit Bias Has Been Overstated

Research on implicit bias has received a lot of attention recently, in politics, journalism, philosophy, jurisprudence, business, medicine, and more. Some critics have argued that this attention is unwarranted. Lee Jussim, for example, argues that the current backlash against implicit bias in general, and against the IAT in particular, is due to its “wild overselling” (2018). Singal writes, “the overhyping of IAT stacks the deck so much that sometimes it feels like implicit bias can explain everything” (2017, emphasis in original).

Reply

In general, the most grandiose claims about implicit bias have been found in the popular press, rather than in scholarly writing. For example, New York Times columnist Nicholas Kristof writes, “it’s sobering to discover that whatever you believe intellectually, you’re biased about race, gender, age or ability” (2015). This is an oversimplification, and it’s likely untrue. Researchers themselves are not entirely innocent, however. For example, Project Implicit – the online home of the IAT – phrases feedback about participants’ scores in such a way that people can easily (and falsely) think that taking the test one time reveals the true permanent nature of their own social attitudes. It is also true that some common formulations of the nature of implicit bias, promoted by researchers, have contributed to confusion. As I discussed above, implicit attitudes were originally (and, by some, still commonly) described as unconscious, although current evidence fairly clearly shows that this picture is too simple. Similarly, the aforementioned APE model originally posited that implicit attitudes were purely associative structures, which could only be indirectly influenced by non-associative interventions, such as persuasion, but the current picture is more complex (for discussion, see Johnson, Chapter 1, “The Psychology of Bias: From Data to Theory”). For reasons like these, it is too early to tell whether the IAT really constitutes a “revolution in social psychology,” as Kester (2001) claimed in the bulletin of the American Psychological Society. But the takeaways are not that the IAT is useless or that we should move on from talking about implicit bias. Instead, there are two core takeaways.

First, the relationship between scientific research and science communication must be improved. When communication about science fails, science
literacy, public policies, public trust in science, and public funding for scientific research may be impacted. These are serious problems, which should not be underestimated. The onus to improve science communication is on both journalists and researchers, in psychology and all other scientific fields. However, the harms of poor science communication should not be conflated with the status of the actual scientific research. Second, the ongoing refinement of theories of the nature of implicit bias, and the ongoing debate about the proper explanatory scope of tools like the IAT, is simply the regular process of (often slow) scientific progress at work. To claim otherwise—to conclude from overhype about implicit bias and from the need to revise early theories that the research program ought to be ditched—is akin to claiming that, when astronomers decided that Pluto wasn’t a planet, Pluto ceased to exist.
6 Something Like Implicit Bias Is Important to Understand, but the Tools We Have to Study It at Present Are Fatally Flawed

Many critics point to two psychometric concerns. (Psychometrics is about the measurement of the mind; thus the concerns are about the way that implicit bias is measured.) These have focused largely on the IAT in particular. The first concerns the extent to which implicit measures predict behavior. The second concerns the stability of individuals’ scores on implicit measures over time. I cannot address these concerns in depth here; instead I summarize what I have argued elsewhere (see Brownstein et al. 2020).

Correlations between two things can range from −1 to 1. A correlation of 1 means that every time you find X, you find Y. A correlation of 0 means that when you find X, you are just as likely to find Y as you are to not find Y. And a correlation of −1 means that when you find X, you never find Y. The average correlations found between individuals’ scores on implicit measures (e.g., an IAT score) and measures of behavior (e.g., ratings given to a résumé) have ranged from approximately .14 to .37 (Cameron et al. 2012; Greenwald et al. 2009; Oswald et al. 2013; Kurdi et al. 2019). According to standard conventions, these are considered small-to-medium correlations. That is, these meta-analyses suggest that the power of implicit bias to predict some specific behavior is, on average, small.

Critics of research on implicit bias have taken these findings to show that tests of implicit bias, like the IAT, are not very useful. Oswald and colleagues, for example, conclude that “the IAT provides little insight into who will discriminate against whom, and provides no more insight than explicit measures of bias” (2013, 188). Many have taken Oswald and colleagues’ conclusion to be definitive (especially many critics outside psychology; e.g., Bartlett 2017; Singal 2017; Yao and Reis-Dennis 2017). Edouard Machery, for example, calls the predictive validity of current implicit measures “ridiculous” (2017a).
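To get a concrete feel for what correlations in this range mean, here is a minimal simulation sketch in Python (the data are synthetic and the correlation of .20 is simply assumed; nothing below comes from the meta-analyses cited above):

import numpy as np

rng = np.random.default_rng(0)
r, n = 0.20, 100_000
# draw pairs of standardized scores with a true correlation of r
iat_score, behavior = rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n).T

observed = np.corrcoef(iat_score, behavior)[0, 1]
print(f"observed correlation: r = {observed:.2f}")                        # ~0.20
print(f"share of behavioral variance accounted for: {observed**2:.1%}")   # ~4%

On these assumptions, an implicit measure accounts for roughly 4 percent of the variance in any single behavioral outcome: small for predicting what one individual will do on one occasion, which is the backdrop for the reply below.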
The second commonly held concern is that individuals’ scores on implicit measures fluctuate considerably over time. Some psychological tests, such as the best validated tests of personality, show a high degree of stability over time (“test–retest reliability”). This means that if you take a (well-validated) personality test today and it says that you’re very shy, then if you take that test again in a few weeks or months, it will likely say you’re shy again. However, the most popular measures of implicit biases don’t work like this. If you take an IAT today that says you have a strong bias against black people, and then you take it again in a week, it might now say that you have a slight bias against white people. (See Gawronski et al. 2017 for review.) Thus, critics argue that implicit measures do not provide meaningful information about individuals’ minds or behavior. If implicit bias exists, this line of thought goes, our current tools for measuring it are fatally flawed.

Reply

What does it mean for a psychological test to predict behavior in small to medium ways and for people’s scores on that test to vary considerably over time? These are not trivial issues, but nor are they reasons to believe that implicit measures are fatally flawed. Rather, what ought to be emphasized is that (a) the status of implicit measures is not radically different from the status of many other widely accepted psychological tests, in particular measures of attitudes, and (b) there are many avenues for improving the current crop of implicit measures, rather than abandoning them.

Expectations about how well a given test performs should be calibrated to the difficulty of measuring what the test aims to capture. How predictively powerful should we expect implicit measures to be? Compare the average correlations between individuals’ scores on implicit measures and measures of behavior (.14 to .37) to correlations between other constructs and behavior: beliefs and stereotypes about outgroups and behavior (r = .12; Talaska et al. 2008); SAT scores and freshman grades in college (r = .24; Wolfe and Johnson 1995); parents’ and their children’s socioeconomic status (r = .2 to .3; Strenze 2007). The fact that tests of implicit bias fall within the same range as these other kinds of tests is telling. Predicting behavior is difficult. This isn’t cause for despair, but rather for having modest ambitions.

One core reason why predicting behavior is difficult is that people act differently in different contexts. Continuing the example from above: a shy person is unlikely to act shy all of the time. She might be outgoing in her neighborhood but not when traveling abroad; she might be vocal and loud when playing soccer but not in class; her leadership style might involve bold risk-taking, even if she is timid in large meetings. Analogously, one might act biased against women in some contexts—for example, when teaching a class—but not in other contexts—for example, meeting with one’s boss. This is to be expected according to at least one leading theory of implicit bias—APE (op. cit.)—because what tests like the IAT capture is more or less
what’s on a person’s mind at a given time, in a given situation, rather than what that person is like all of the time, in all situations. A central task for researchers is to identify the situations and other variables that affect whether biases come to mind and how they affect our behavior. Well before criticism of implicit bias caught on, some researchers were already developing theoretical accounts of key moderators of implicit bias/behavior relations. Friese, Hofmann, and Schmitt (2008), for example, offered systematic, detailed, theoretically derived, and empirically supported predictions about precisely when and why implicit measures should and should not predict behavior, such as whether individuals were or were not motivated to control their spontaneous impulses, whether individuals were high or low in working memory capacity (and so were differentially able to control their impulses), and so on. In research on implicit bias, just as in other psychological research, such as the science of personality, the relationship between one generic construct—“anti-black bias” or “shyness”—and behavior is expected to be small when key moderators are ignored. And this is exactly what has been found; not a single meta-analysis of implicit measures has reported nonsignificant correlations close to zero or negative correlations with behavior. When key moderators are taken into consideration, it is clear that implicit measures are scientifically worthwhile instruments. For example, Cameron and colleagues (2012) analyzed 167 studies that used sequential priming measures. They found a small average correlation between sequential priming tasks and behavior (.28). Yet, much as Friese and colleagues (2008) predicted, correlations were substantially higher under theoretically expected conditions and close to zero under conditions where no relation would be expected. Likewise, Kurdi and colleagues’ (2019) meta-analysis finds significantly higher correlations between IATs and behavior when researchers adhere to a set of best methodological practices. This finding points the way toward avenues for continued improvement of implicit measures.

The second core challenge to the measurement of implicit bias focuses on the stability of people’s scores over time. This too is a serious challenge for implicit bias research, but here too the lesson is not that these tools are fatally flawed, but rather that more research is needed to continue to improve them. In a recent longitudinal study (i.e., a study of change over time), Gawronski and colleagues (2017) find that implicit measures of self-concept, political attitudes, and racial attitudes were less stable across 1–2 months than corresponding explicit measures. It would, however, be premature to interpret such findings as evidence that implicit measures are unreliable, or generally less reliable or useful than explicit measures. The natural analogies here are to measures of heart rate and blood pressure, which fluctuate dramatically across contexts (because the measures are accurately tracking that heart rate and blood pressure themselves fluctuate dramatically), but are also used to measure more chronic, trait-like features of individuals.
Using these tools to measure chronic constructs requires, among other things, doing as much as possible to hold fixed the contexts of measurement. Hence the phrase “resting heart rate.” Strictly speaking, a one-time measurement of heart rate is merely capturing a fleeting event, but, with careful attention to context, it can be used to gather (partial, defeasible) evidence about more stable heart-rate dispositions. In light of these findings, several studies have explored ways to tweak implicit measures to make them more robust across contexts (e.g., Gschwendner et al. 2008; Cooley and Payne 2016).
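The state-versus-trait point lends itself to a small simulation sketch (a toy model with assumed noise levels, not a reanalysis of any study cited here). Suppose each administration of a test reflects a stable disposition plus large occasion-specific noise, much as a single heart-rate reading does:

import numpy as np

rng = np.random.default_rng(2)
n_people, n_occasions = 5_000, 10
trait = rng.normal(size=(n_people, 1))                      # stable disposition
# each occasion = trait + occasion-specific noise
occasions = trait + 1.5 * rng.normal(size=(n_people, n_occasions))

retest = np.corrcoef(occasions[:, 0], occasions[:, 1])[0, 1]
averaged = np.corrcoef(occasions.mean(axis=1), trait[:, 0])[0, 1]
print(f"one occasion vs another (test-retest): r = {retest:.2f}")    # ~0.3
print(f"average of 10 occasions vs trait:      r = {averaged:.2f}")  # ~0.9

On these assumed numbers, any two single occasions correlate only weakly, in the vicinity of what worries critics, while the average of ten occasions tracks the underlying disposition closely. Fluctuating scores, in other words, are compatible with a measure carrying genuine trait information once contexts are held fixed or occasions are aggregated.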
7 Conclusion

Undoubtedly, there are challenges, fundamental open questions, problematic assumptions, and conflicting data in research on implicit social cognition. Perhaps the central question arising from these issues is whether they demonstrate the research to be useless or, rather, whether they point toward areas in need of further theorizing, data collection, and analysis. In my view, critics and defenders of implicit bias research should join together in calling for continued research on these challenges. Doing so is not tantamount to asserting that everything is peachy and perfect as-is, nor does it entail a commitment to widespread current assumptions (e.g., that implicit biases are unconscious) or to specific methods (e.g., the IAT). Rather, the commitment should be to avoiding ungrounded assumptions, surveying the full sweep of data, modestly keeping in mind the complexities of the human mind and social life, and aiming for improvement.
SUGGESTIONS FOR FUTURE READING

On the measurement and nature of implicit bias:

• Banaji, M.R. and Greenwald, A.G. (2016) Blindspot: Hidden Biases of Good People. New York: Bantam.
• Brownstein, M. (2015) Implicit bias. In E. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2015). http://plato.stanford.edu/entries/implicit-bias/
• Brownstein, M., Madva, A., and Gawronski, B. (2020) Understanding implicit bias: Putting the criticism into perspective. Pacific Philosophical Quarterly. DOI: 10.1111/papq.12302
• Hahn, A. and Gawronski, B. (2018) Implicit social cognition. Stevens’ Handbook of Experimental Psychology and Cognitive Neuroscience, Developmental and Social Psychology, 4: 395.
• Payne, B.K., Vuletich, H.A., and Lundberg, K.B. (2017) Flipping the script on implicit bias research with the bias of crowds. Psychological Inquiry, 28(4): 306–311.
• Saul, J. (2018) (How) should we tell implicit bias stories? Disputatio, 10(50): 217–244. https://doi.org/10.2478/disp-2018-0014

Criticism of implicit bias:

• Blanton, H., Jaccard, J., Strauts, E., Mitchell, G., and Tetlock, P.E. (2015) Toward a meaningful metric of implicit prejudice. Journal of Applied Psychology, 100(5): 1468–1481.
• Hermanson, S. (2017a) Implicit bias, stereotype threat, and political correctness in philosophy. Philosophies, 2(2): 12. doi:10.3390/philosophies2020012
• Machery, E. (2016) De-Freuding implicit attitudes. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 1, Metaphysics and Epistemology. Oxford: Oxford University Press.
• Machery, E. (2017a) Do indirect measures of biases measure traits or situations? Psychological Inquiry, 28(4): 288–291.
• Singal, J. (2017) Psychology’s favorite tool for measuring racism isn’t up to the job. New York Magazine. http://nymag.com/scienceofus/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html
DISCUSSION QUESTIONS

1 Brownstein distinguishes six lines of critique of implicit bias. Which of these is the most convincing, in your view? What are the most significant similarities and differences between these?
2 Brownstein discusses the difference between inequalities and inequities. What are some examples of inequalities that might be caused by implicit bias that aren’t necessarily inequities?
3 Mac Donald argues that socioeconomic disparities between black and white Americans are due to differences in cultural values, family structure, interests, and abilities. What specific things do you think she has in mind here? How might one go about proving or disproving her claim?
4 What key facts speak for or against the claim that racial bias is a significant cause of police shootings of black Americans?
5 Brownstein suggests that one reason why women appear to be having increased success on the tenure-track job market in philosophy is that more and more philosophers are taking gender bias seriously. How might you go about trying to prove or disprove this claim?
6 Some critics, like Hermanson, argue that discriminatory behavior that other scholars have suggested might be caused by implicit bias is more likely caused by explicit bias. What’s the difference between implicit and explicit bias? Does the implicit/explicit distinction remind you of any other distinctions people have made when theorizing about the mind?
7 Brownstein argues for an “interactionist” view of the relationship between individual minds and social structures. How would you summarize this view? Use an example to show how an interactionist would analyze this example differently compared to an individualist or a structuralist.
8 What can researchers do to avoid over-selling their research in public?
9 What are some of the ways that Brownstein suggests that both proponents of implicit bias research and critics of it have cherry-picked studies that support their favored conclusions?
10 What are the two principal psychometric critiques of implicit bias research that Brownstein describes? How damning do you think these two issues are for implicit bias research?
REFERENCES

Alexander, M. (2010) The New Jim Crow: Mass Incarceration in the Age of Colorblindness. New York: New Press.
Allen-Hermanson, S. (2017) Leaky pipeline myths: In search of gender effects on the job market and early career publishing in philosophy. Frontiers in Psychology, 8: 953. doi:10.3389/fpsyg.2017.00953
Anderson, E. (2010) The Imperative of Integration. Princeton, NJ: Princeton University Press.
Atiba Goff, P., Lloyd, T., Geller, A., Raphael, S., and Glaser, J. (2016) The Science of Justice: Race, Arrests, and Police Use of Force. Center for Policing Equity (online report). https://policingequity.org/what-we-do/research/the-science-of-justice-race-arrests-and-police-use-of-force
Ayala, S. (2016) Speech affordances: A structural take on how much we can do with our words. European Journal of Philosophy, 24(4): 879–891.
Ayala‐López, S. (2018) A structural explanation of injustice in conversations: It’s about norms. Pacific Philosophical Quarterly, 99(4): 726–748. https://doi.org/10.1111/papq.12244
Baron, S., Dougherty, T., and Miller, K. (2015) Why is there female under-representation among philosophy majors? Evidence of a pre-university effect. Ergo, 2(14). http://dx.doi.org/10.3998/ergo.12405314.0002.014
Bartlett, T. (2017) Can we really measure implicit bias? Maybe not. The Chronicle of Higher Education. http://www.chronicle.com/article/Can-We-Really-Measure-Implicit/238807
Brownstein, M. (2018) The Implicit Mind: Cognitive Architecture, the Self, and Ethics. New York: Oxford University Press.
Brownstein, M. and Madva, A. (2012) The normativity of automaticity. Mind and Language, 27(4): 410–434.
Brownstein, M., Madva, A., and Gawronski, B. (2020) Understanding implicit bias: Putting the criticism into perspective. Pacific Philosophical Quarterly. Published online via early view: https://doi.org/10.1111/papq.12302
Cameron, C.D., Brown-Iannuzzi, J.L., and Payne, B.K. (2012) Sequential priming measures of implicit social cognition: A meta-analysis of associations with behavior and explicit attitudes. Personality and Social Psychology Review, 16: 330–350.
Chetty, R., Hendren, N., Jones, M.R., and Porter, S.R. (2018) Race and Economic Opportunity in the United States: An Intergenerational Perspective (No. w24441).
National Bureau of Economic Research. http://www.equality-of-opportunity.org/assets/documents/race_paper.pdf
Coates, T.N. (2015) Between the World and Me. New York: Random House.
Cooley, E. and Payne, B.K. (2016) Using groups to measure intergroup prejudice. Personality and Social Psychology Bulletin. November. doi:10.1177/0146167216675331
Drug Policy Alliance (2018) From Prohibition to Progress: A Status Report on Marijuana Legalization. http://www.drugpolicy.org/sites/default/files/dpa_marijuana_legalization_report_feb14_2018_0.pdf
El-Alayli, A., Hansen-Brown, A.A., and Ceynar, M. (2018) Dancing backwards in high heels: Female professors experience more work demands and special favor requests, particularly from academically entitled students. Sex Roles, 79(3–4): 136–150.
Epp, C.R., Maynard-Moody, S., and Haider-Markel, D.P. (2014) Pulled Over: How Police Stops Define Race and Citizenship. Chicago, IL: University of Chicago Press.
Friese, M., Hofmann, W., and Schmitt, M. (2008) When and why do implicit measures predict behaviour? Empirical evidence for the moderating role of opportunity, motivation, and process reliance. European Review of Social Psychology, 19(1): 285–338.
Gawronski, B. and Bodenhausen, G. (2006) Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132(5): 692–731.
Gawronski, B. and Bodenhausen, G. (2014) The associative-propositional evaluation model: Operating principles and operating conditions of evaluation. In J.W. Sherman, B. Gawronski, and Y. Trope (eds), Dual-process Theories of the Social Mind (pp. 188–203). New York: Guilford Press.
Gawronski, B., Morrison, M., Phills, C., and Galdi, S. (2017) Temporal stability of implicit and explicit measures: A longitudinal analysis. Personality and Social Psychology Bulletin, 43(3): 300–312.
Greenwald, A., Poehlman, T., Uhlmann, E., and Banaji, M. (2009) Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97(1): 17–41.
Gschwendner, T., Hofmann, W., and Schmitt, M. (2008) Differential stability: The effects of acute and chronic construct accessibility on the temporal stability of the Implicit Association Test. Journal of Individual Differences, 29: 70–79.
Guarino, C.M. and Borden, V.M. (2017) Faculty service loads and gender: Are women taking care of the academic family? Research in Higher Education, 58(6): 672–694.
Hahn, A., Judd, C.M., Hirsh, H.K., and Blair, I.V. (2014) Awareness of implicit attitudes. Journal of Experimental Psychology: General, 143(3): 1369.
Haslanger, S. (2015) Social structure, narrative, and explanation. Canadian Journal of Philosophy, 45(1): 1–15.
Hehman, E., Flake, J.K., and Calanchini, J. (2017) Disproportionate use of lethal force in policing is associated with regional racial biases of residents. Social Psychological and Personality Science. https://doi.org/10.1177/1948550617711229
Hermanson, S. (2017a) Implicit bias, stereotype threat, and political correctness in philosophy. Philosophies, 2(2): 12. doi:10.3390/philosophies2020012
Hermanson, S. (2017b) Review of Implicit Bias and Philosophy (vol. 1 & 2), edited by Michael Brownstein and Jennifer Saul, Oxford University Press, 2016. Philosophy, 92(2): 315–322.
Hermanson, S. (2018) Rethinking Implicit Bias: I want my money back. http://leiterreports.typepad.com/blog/2018/04/sean-hermanson-rethinking-implicit-bias-i-want-my-money-back.html
Jennings, C.D. (2016) Philosophy placement data and analysis: An update. http://dailynous.com/2016/04/15/philosophy-placement-data-and-analysis-an-update/
Jussim, L. (2018) Comment on Hermanson, S., 2018, Rethinking Implicit Bias: I want my money back. http://leiterreports.typepad.com/blog/2018/04/sean-hermanson-rethinking-implicit-bias-i-want-my-money-back.html
Kester, J. (2001) A revolution in social psychology. APS Observer Online, 14(6). https://www.psychologicalscience.org/observer/0701/family.html
Kleider-Offutt, H.M., Bond, A.D., Williams, S.E., and Bohil, C.J. (2018) When a face type is perceived as threatening: Using general recognition theory to understand biased categorization of Afrocentric faces. Memory & Cognition, 46(5): 716–728.
Kristof, N. (2015) Our biased brains. New York Times, May 7. https://www.nytimes.com/2015/05/07/opinion/nicholas-kristof-our-biased-brains.html
Kurdi, B., Seitchik, A., Axt, J., Carroll, T., Karapetyan, A., Kaushik, N., Tomezsko, D., Greenwald, A., and Banaji, M. (2018) Relationship between the Implicit Association Test and intergroup behavior: A meta-analysis. American Psychologist.
Lai, C.K., Hoffman, K.M., and Nosek, B.A. (2013) Reducing implicit prejudice. Social and Personality Psychology Compass, 7(5): 315–330.
Lind, D. (2015) The FBI is trying to get better data on police killings. Here’s what we know now. Vox, April 10. https://www.vox.com/2014/8/21/6051043/how-many-people-killed-police-statistics-homicide-official-black
Lowery, W. (2016) Aren’t more white people than black people killed by police? Yes, but no. The Washington Post, July 11. https://www.washingtonpost.com/news/post-nation/wp/2016/07/11/arent-more-white-people-than-black-people-killed-by-police-yes-but-no/?noredirect=on&utm_term=.9fae2c0f22f4
Mac Donald, H. (2017a) All that kneeling ignores the real cause of soaring black homicides. Manhattan Institute, September 27. https://www.manhattan-institute.org/html/all-kneeling-ignores-real-cause-soaring-black-homicides-10655.html
Mac Donald, H. (2017b) The War on Cops: How the New Attack on Law and Order Makes Everyone Less Safe. New York: Encounter Books.
Mac Donald, H. (2017c) The false “science” of implicit bias. Wall Street Journal, October 9. https://www.wsj.com/articles/the-false-science-of-implicit-bias-1507590908
Machery, E. (2017a) Do indirect measures of biases measure traits or situations? Psychological Inquiry, 28(4): 288–291.
Machery, E. (2017b) Should we throw the IAT on the scrap heap of indirect measures? Comment on The Brains Blog, January 17. http://philosophyofbrains.com/2017/01/17/how-can-we-measure-implicit-bias-a-brains-blog-roundtable.aspx
MacNell, L., Driscoll, A., and Hunt, A. (2014) What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4): 291–303.
Madva, A. and Brownstein, M. (2018) Stereotypes, prejudice, and the taxonomy of the implicit social mind. Noûs. doi:10.1111/nous.12182
Magnuson, K. and Waldfogel, J. (eds) (2008) Steady Gains and Stalled Progress: Inequality and the Black–White Test Score Gap. New York: Russell Sage Foundation.
Mallon, R. (2018) Constructing race: Racialization, causal effects, or both? Philosophical Studies, 175: 1039–1056. https://doi.org/10.1007/s11098-018-1069-8
Mengel, F., Sauermann, J., and Zölitz, U. (2018) Gender bias in teaching evaluations. Journal of the European Economic Association. https://doi.org/10.1093/jeea/jvx057
Moss-Racusin, C.A., Dovidio, J.F., Brescoll, V.L., Graham, M.J., and Handelsman, J. (2012) Science faculty’s subtle gender biases favor male students. Proceedings of the National Academy of Sciences, 109(41): 16474–16479.
Mullainathan, S. (2015) Police killings of blacks: Here is what the data say. New York Times, October 16. https://www.nytimes.com/2015/10/18/upshot/police-killings-of-blacks-what-the-data-says.html
Nittrouer, C.L., Hebl, M.R., Ashburn-Nardo, L., Trump-Steele, R.C., Lane, D.M., and Valian, V. (2018) Gender disparities in colloquium speakers at top universities. Proceedings of the National Academy of Sciences, 115(1): 104–108.
Nix, J., Campbell, B.A., Byers, E.H., and Alpert, G.P. (2017) A bird’s eye view of civilians killed by police in 2015: Further evidence of implicit bias. Criminology & Public Policy, 16(1): 309–340.
Oswald, F.L., Mitchell, G., Blanton, H., Jaccard, J., and Tetlock, P.E. (2013) Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies. Journal of Personality and Social Psychology, 105: 171–192.
Payne, B.K., Vuletich, H.A., and Lundberg, K.B. (2017) Flipping the script on implicit bias research with the bias of crowds. Psychological Inquiry, 28(4): 306–311.
Penner, L.A., Hagiwara, N., Eggly, S., Gaertner, S.L., Albrecht, T.L., and Dovidio, J.F. (2013) Racial healthcare disparities: A social psychological analysis. European Review of Social Psychology, 24(1): 70–122.
Pfaff, J. (2017) Locked In: The True Causes of Mass Incarceration—And How To Achieve Real Reform. New York: Basic Books.
Saul, J. (2018) (How) should we tell implicit bias stories? Disputatio, 10(50): 217–244. https://doi.org/10.2478/disp-2018-0014
Singal, J. (2017) Psychology’s favorite tool for measuring racism isn’t up to the job. New York Magazine. http://nymag.com/scienceofus/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html
Strenze, T. (2007) Intelligence and socioeconomic success: A meta-analytic review of longitudinal research. Intelligence, 35: 401–426.
Talaska, C., Fiske, S., and Chaiken, S. (2008) Legitimating racial discrimination: Emotions, not beliefs, best predict discrimination in a meta-analysis. Social Justice Research, 21(3): 263–296.
Thompson, M., Adleberg, T., and Nahmias, E. (2016) Why do women leave philosophy? Surveying students at the introductory level. Philosophers’ Imprint, 16(6): 1–36.
United States Department of Justice, Civil Rights Division (2015) Investigation of the Ferguson Police Department. https://www.justice.gov/sites/default/files/opa/press-releases/attachments/2015/03/04/ferguson_police_department_report.pdf
Williams, W.M. and Ceci, S.J. (2015) National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track. Proceedings of the National Academy of Sciences. https://doi.org/10.1073/pnas.1418878112
Wolfe, R. and Johnson, S. (1995) Personality as a predictor of college performance. Educational and Psychological Measurement, 55(2): 177–185.
Yao, V. and Reis-Dennis, S. (2017) “I love women:” The conceptual inadequacy of “Implicit Bias”. Blog post. http://peasoup.us/2017/09/love-women-conceptual-inadequacy-implicit-bias-yao-reis-dennis/
4 Bias and Knowledge: Two Metaphors
Erin Beeghly
If you care about securing knowledge, what is wrong with being biased? Often it is said that we are less accurate and reliable knowers due to implicit biases. Likewise, many people think that biases reflect inaccurate claims about groups, are based on limited experience, and are insensitive to evidence. Chapter 4 investigates objections such as these with the help of two popular metaphors: bias as fog and bias as shortcut. Guiding readers through these metaphors, I argue that they clarify the range of knowledge-related objections to implicit bias. They also suggest that there will be no unifying problem with bias from the perspective of knowledge. That is, they tell us that implicit biases can be wrong in different ways for different reasons. Finally, and perhaps most importantly, the metaphors reveal a deep—though perhaps not intractable—disagreement among theorists about whether implicit biases can be good in some cases when it comes to knowledge.
1 How We Talk About Bias
In the fall of 2016, The New York Times published a six-part series of videos—Who Me? Biased?—about implicit bias and race. Part of the challenge of these videos was to convey, as quickly and effectively as possible, what implicit bias is and why anyone should care about it. To meet this challenge, filmmaker Saleem Reshamwala used metaphor. In the first video, he explained to viewers that biases are “little mental shortcuts that hold judgments that you might not agree with” (Reshamwala 2016). One of his guests, psychologist Dolly Chugh, likened implicit bias to a “fog that you’ve been breathing in your whole life.” These two metaphors—bias as fog and bias as shortcut—are two of many metaphors that one finds in popular and academic writing about implicit bias. One TEDx presenter explains to her audience that implicitly biased people live “in the matrix”—a reference to a 1990s film in which characters believe that they are in touch with reality but their experiences are in fact generated by a computer (Funchess 2014). In Blindspot: Hidden Biases of Good People, psychologists Mahzarin Banaji and Anthony Greenwald write that implicit biases are “mind bugs,” deploying a computer programming
metaphor (Banaji and Greenwald 2013: 13). Nilanjana Dasgupta uses the image of a mirror, writing that biases are “mirror-like reflections” of the social world (Dasgupta 2013: 240). In the sections that follow, I examine two of the most striking metaphors mentioned above: bias as fog and bias as shortcut. As I show, each metaphor makes a distinctive claim about the relationship between bias, knowledge, and error. The metaphors also clarify the range of knowledge-related objections to implicit bias; specifically, they tell us that implicitly biased judgments can be wrong in different ways for different reasons.
2 What Is an Epistemic Objection?
The word “epistemic” comes from the Greek word “episteme,” meaning “knowledge.” Epistemic objections are objections concerning knowledge and belief. Imagine a teenager and a parent having an early morning conversation. The parent says to the teen, “You should get ready to go. The bus will be here at 8:30am.” The teen replies, “That’s not true. The bus is coming at 8:50.” The teen is making an epistemic objection. She is arguing that her father’s belief is false. Suppose the parent tries to defend his claim by saying, “I know the bus schedule. Get ready.” The teen replies: “You shouldn’t trust your memory. I just checked my phone for the latest bus times. The next bus is coming at 8:50.” These are epistemic objections too. The teen asserts that her father’s belief is unwarranted by the evidence; moreover, she points out that the way in which he formed the belief is unreliable or perhaps less reliable than the way in which she formed hers. As this example suggests, epistemic criticism is a constant feature of human life. Humans constantly evaluate each other in epistemic terms, and we are capable of reflecting on the ways in which our own judgments and processes of reasoning could be improved.
3 Metaphors and the Epistemic Significance of Bias
What are the best epistemic objections to implicit bias? Is there a single objection that always applies when people make biased judgments? Might there be cases in which implicitly biased judgments are permissible or even good from the perspective of knowledge? Or, are biased judgments necessarily bad from an epistemic point of view? When trying to understand how one might answer these questions, it is useful to start with metaphors. Metaphors often serve as what philosopher Elisabeth Camp calls interpretative frames. Camp explains:

a representation [which could be visual or linguistic in nature] functions as a frame when an agent uses it to organize their overall intuitive thinking … a frame functions as an overarching, open-ended interpretative principle: it
purports to determine for any property that might be ascribed to the subject, both whether and how it matters. (Camp 2020: 309, original emphasis)

Two features of frames are especially important. First, they make certain features of a person or thing salient in cognition or perception. Second, Camp says, metaphors attribute centrality to certain features of a person, group, or thing. For example, they identify some features of a thing as having special causal powers and as especially important to making the thing what it is (310).

To better understand these two effects of metaphor, consider an example. In the play Romeo and Juliet, Romeo says about his love interest, “Juliet is the sun.” Romeo’s use of metaphor renders a specific feature of Juliet salient: her stunning physical beauty, i.e., her “hotness.” As Juliet’s beauty becomes salient, other features of her recede into the background. The metaphor also attributes centrality to Juliet’s beauty. Her desire-inducing physical appearance is what makes Juliet worthy of Romeo’s love and devotion. It is a driver of drama in the play and is asserted to be crucially important to making Juliet the special person she is.

Here is what Camp’s view suggests. The metaphors used to talk about implicit bias are not mere rhetorical flourishes whose main purpose is to make discussions of implicit bias more exciting or accessible. On her view, metaphors are cognitively crucial. They reveal how speakers intuitively conceptualize a phenomenon like implicit bias. Camp puts the point like this: metaphors—and interpretative frames more generally—provide the “intuitive ‘mental setting’ (Woodfield 1991: 551) or background against which specific beliefs and questions are formulated” (307; Lakoff and Johnson 1980). If she is correct, investigating the metaphors associated with implicit bias will tell us something interesting about how theorists intuitively understand bias and its epistemic significance. These metaphors will also give us a vivid entry point into thinking about when and why implicitly biased judgments are problematic from an epistemic point of view.
4 Living in a Fog
Start with the metaphor of fog. Fog is “a state of the weather in which thick clouds of water vapor or ice crystals suspended in the atmosphere form at or near the earth’s surface, obscuring or restricting visibility to a greater extent than mist” (OED 2017a). At the website for Take the Lead—an organization that promotes women in business—writer Michele Weldon says:

Implicit gender bias has hung around women leaders in the workplace in nearly every imaginable sector and discipline for generations. The bias surrounds the workplace culture in a fog at times thick and impenetrable, and at other times, a mist that only feels instinctively palpable. (Weldon 2016)
If implicit bias is fog, the effect is obvious: people will have a hard time perceiving the world as it really is (see Siegel, Chapter 5, “Bias and Perception”). Sensory perceptions of other people and the world will become fuzzy, impaired. If you look at someone through a fog, for example, you might think, “I can’t really see you as you are. I see the fog, and I see a blurry version of you.”

In the Times video, the metaphor is taken a step further. Not only does fog obstruct visual and auditory perception, it becomes internalized. “We’ve all grown up in a culture,” says Chugh, “with media images, news images, conversations we’ve heard at home, and education … think of that as a fog that we’ve been breathing our whole lives, we never realized what we’ve been taking in.” That fog, Reshamwala adds, “causes associations that lead to biases.” For example, when you hear peanut butter, you think jelly. That association exists because peanut butter and jelly are typically paired together in our culture. Similarly, Chugh observes, “in many forms of media, there is an overrepresentation of black men and violent crime being paired together.” The result is, as educational scholar Shaun Harper puts it in the video, “deep down inside we have been taught [or perhaps have simply absorbed the view] that black men are violent and aggressive and not to be trusted, that they’re criminals, that they’re thugs.”

Remember that metaphors, like all interpretative frames, are supposed to do two important things. First, they render certain aspects of a phenomenon more salient in cognition; second, they assert claims about the causal centrality of certain properties. What becomes salient if we think of implicit bias as fog? Here is one thing: its epistemic badness. Bias, if it is a kind of fog, clouds vision and distorts hearing. Cognitive fog is no better. When people talk about the fog of war, what they mean is that war creates an environment where soldiers cannot think clearly, cannot accurately evaluate risks, and cannot make good decisions. Oppressive social conditions create something similar, according to this metaphor: the fog of oppression.

This way of thinking about bias resonates, in particular, with theorists of oppression, especially philosophers of race. In The Racial Contract, for example, Charles Mills describes conditions of white supremacy as requiring “a certain schedule of structured blindness and opacities in order to establish and maintain the white polity” (Mills 1999: 19). Applied to bias, the thought is something like this. Many folks today may not explicitly endorse racist, sexist, classist, or otherwise prejudiced views. Yet they—especially but not exclusively members of dominant groups—absorb these problematic views and, as a result, think and act in ways that reproduce conditions of injustice. Even so, they do not recognize themselves as being part of the problem; sometimes they don’t even realize that there is a problem. Mills prefers to use the metaphor of collective hallucination to describe this state of ignorance (18). But fog is
supposed to function similarly. For oppressive conditions to persist, the fog/hallucination must continue.

How does bias—understood as fog—frustrate accurate vision and cognition? Here is one possibility: through group stereotypes. Stereotypes fit the description of ‘fog’ at least in one way. They exist in the world and not just in individuals’ minds. During the eighteenth and nineteenth centuries, the word “stereotype” was a technical term in the printing industry. Stereotypes were the metal plates used in printing presses. The process of creating these plates was called ‘stereotyping.’ The first book stereotyped in the USA was the New Testament in 1814. By the early twentieth century, every newspaper office had a stereotyping room, where both full-page plates for regular pages and smaller plates for advertising were produced. Common images and phrases reproduced by this technology were also deemed “stereotypical.” In World War II propaganda posters, one finds a stereotypical Japanese person, “Tokio Kid,” who possesses fangs, insect-like ears, and a dagger dripping blood (see Figure 4.1). In 1987, Time Magazine put a group of stereotypically ‘nerdy’ Asian-American kids on its cover and themed the issue “Asian-American Whiz Kids” (Time 1987). Remember what Chugh says: media and news images partially constitute the fog. This claim dovetails with assertions by feminist scholars that stereotypes exist “in the social imaginary” (Fricker 2009; Rankine and Loffreda 2017) and in “the mind of the world” (Siegel 2017).

A very common claim about stereotypes—which would explain why they constitute a kind of fog—is that stereotypes are necessarily false or misleading. As philosopher Lawrence Blum notes,

By and large, the literature on stereotypes (both social psychological and cultural) agrees that the generalizations in question are false or misleading, and I think this view generally accords with popular usage … . The falseness of a stereotype is part of, and is a necessary condition of, what is objectionable about stereotypes in general. (Blum 2004: 256)

If stereotypes were always false or misleading, one could diagnose what is epistemically wrong with implicit bias in simple terms. Implicit biases would be constituted by group stereotypes. Once internalized, stereotypes would cause individuals to form inaccurate beliefs about social groups and the individuals that belong to them. Thinking about the metaphor of bias as fog thus leads us to think of the epistemic significance of implicit bias in a particular way. Biases are always epistemically bad, if we adopt the metaphor, and their badness is multidimensional. Biases are widely thought to articulate false or misleading claims about groups, which—once internalized—taint perceptual and cognitive judgments about individuals.
Figure 4.1 “Tokio Kid.” American World War II propaganda.
5 Taking Shortcuts
If one were looking for the most popular metaphor about bias, there would be no contest. Implicit bias is most often thought of as a shortcut (Ross 2014; Google 2014; Kang 2016). In The Times video, one finds this metaphor alongside bias as fog. Yet the convergence is puzzling. The two metaphors
have contradictory implications when it comes to the epistemic significance of bias. They also potentially diverge in their appraisals of when and why implicitly biased judgments undermine knowledge.

To see this, think about what a shortcut is. Here are two definitions: “a path or a course taken between two places which is shorter than the ordinary road” and “a compendious method of attaining some object” (OED 2017b). The metaphor of bias as a shortcut is largely due to psychologists Daniel Kahneman and Amos Tversky. Since the early 1970s, their work on heuristics and biases has been enormously influential in psychology, economics, legal theory, and philosophy (Kahneman and Tversky 1973; Tversky and Kahneman 1973; 1974). Humans, they argue, often engage in fast ways of thinking. Fast thinking saves time and mental energy. It also sometimes results in correct predictions and can be reasonable. However, fast thinking is unreliable in certain contexts, leading “to severe and systematic errors” (Kahneman and Tversky 1973: 237). Much of their life’s work consists in documenting the myriad ways in which biases cause unreliable judgments.

In the 1980s and 1990s, Kahneman and Tversky’s work was taken up by social psychologists who studied stereotyping. In an influential textbook on social cognition, Susan Fiske and Shelley Taylor wrote that humans are “cognitive misers” (Fiske and Taylor 1984). We have limited time, knowledge, and attention. Because of this, they argue, humans automatically opt for quick, efficient ways of thinking. Hence, we stereotype. Stereotyping is a substitute for more careful, slow ways of forming judgments about individuals.

To see how biases function as shortcuts, consider an example that I have used elsewhere, which I call I Need a Doctor. Imagine a panicked father in an emergency room, holding an unconscious child in his arms. “Where is the doctor?” he might yell, “I need a doctor.” The man might grab the first person he sees in a white coat, relying on the stereotype that doctors wear white coats, not caring that he is grabbing this or that particular doctor, not caring about the doctor at all in their individuality.

Using shortcuts sometimes works. However, it will sometimes also fail. For example, during a recent emergency room visit, I saw a sign on the wall. It read: “Doctors wear blue scrubs.” A sign like this was necessary because white coats are strongly associated with doctors. Stereotypically, doctors wear white coats. As one M.D. puts it, the white coat “has served as the preeminent symbol of physicians for over 100 years” (Hochberg 2007: 310). Likewise, white coats are associated with competence. As one recent study found: “patients perceived doctors as more trustworthy, responsible, authoritative, knowledgeable, and caring in white coats” (Tiang et al. 2017: 1). When the father reaches for the person in the white coat, he is thus doing something entirely typical. He is using a stereotypic association to identify someone as a doctor, and he is forming expectations about that person on that basis. Depending on how one describes the details of this case, his judgment may even count as a manifestation of implicit bias. However we describe the case, this much is clear: his judgment and behavior betray reliance on a cognitive shortcut.
What is rendered salient if we think of implicit biases as shortcuts? Their epistemic virtues! Shortcuts are, by definition, “compendious,” which means “economical,” “profitable,” “direct,” and “not circuitous” (OED 2017c). To call stereotypes shortcuts is thus to pay them a compliment. It is to underscore their pragmatic and cognitive utility. This metaphor also emphasizes the universality of bias. Philosopher Keith Frankish writes:

an implicitly biased person is one who is disposed to judge others according to a stereotyped conception of their social group (ethnic, gender, class, and so on), rather than by their individual talents. (Frankish 2016: 24)

Since all humans have the disposition to use stereotypic shortcuts, we are all biased. In The Times videos, Reshamwala emphasizes the normalcy and universality of bias repeatedly. “If you’re seeing this,” he says, “and are thinking that it doesn’t apply to you, well, you might be falling prey to the blindspot bias. That’s a scientific name for a mental bias that allows you to see biases in others but not yourself. We’re [all] biased!” The universality of bias is due to the fact that stereotypes—in the form of schemas associated with social groups—structure human cognition in foundational ways (Beeghly 2015).

One can already see how the metaphor of bias as shortcut differs from that of bias as fog. When someone says implicit bias is fog, they are committed to saying that it is always an obstruction, something that makes it harder to perceive and judge individuals clearly. When someone says that bias is a shortcut, they imply—whether intentionally or not—that biases could facilitate perception and judgment by providing an efficient means of judging and making predictions about individuals. In medical contexts where doctors wear white coats, for example, and hospital staff wears other attire, relying on the stereotype of doctors as wearing white coats will help you quickly and reliably predict who is and who is not a doctor. It is also possible that some stereotypes are based on a lifetime of experience, perception, even wisdom, including stereotypes based on gender, ethnicity, and religion.

5.1 The Diversity of Epistemic Objections to Bias
In addition to revealing the potential epistemic benefits of bias, the metaphor of bias as shortcut also invites us to think more carefully about the conditions under which implicitly biased judgments are epistemically problematic. Consider, first, the objection that implicit biases are constituted by false, unwarranted stereotypes. As I noted in Section 4, stereotypes are typically thought of as false or misleading group generalizations. Often they are also
thought to be unwarranted by evidence. This way of thinking about stereotyping fits perfectly with the metaphor of bias as fog. However, once one starts to think of stereotypes as shortcuts, one begins to wonder, “is it really true that stereotypes are always false and based on limited experience?”

Think about the following gender stereotype: women are empathetic. This stereotype is likely true, if considered as a claim about most women or as a claim about the relative frequency of empathic characteristics in women compared to men. We live in a patriarchal society. When women in a society like ours are raised to value empathy and actually tend to self-describe as empathic, when they tend to fill social roles where empathy is required or beneficial, women will, on average, have a greater disposition for empathy than men and one that is stable over time (Klein and Hodges 2001; Ickes 2003). Accordingly, the claim that women are empathetic could be true. Moreover, as feminist scholars have argued about similar stereotypes, we would be justified in implicitly or explicitly believing it was true (de Beauvoir 1953: xxiv; Haslanger 2012: 449; Haslanger 2017: 4).
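These two readings of the stereotype can be made more precise with a little probability notation; the formalization below is only one way of cashing them out, offered for illustration. Read as a comparative claim about relative frequency, the stereotype says

\[
P(\text{empathetic} \mid \text{woman}) > P(\text{empathetic} \mid \text{man}),
\]

whereas read as a claim about most women it says \(P(\text{empathetic} \mid \text{woman}) > 0.5\). The two readings are logically independent: the comparative claim can hold even if most women are not empathetic, and most women could be empathetic in a population where an even larger share of men are too. Evaluating a stereotype’s accuracy thus requires first specifying which generalization it expresses.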
Such observations complicate epistemic evaluations of bias. If biased judgments were always based on false, unwarranted beliefs about groups, we would have a decisive epistemic objection to people using them. Of course one shouldn’t deploy false, unwarranted beliefs to judge individuals. On the other hand, if the stereotypes that drive biased judgments might sometimes be true and warranted by the evidence, one cannot always invoke this objection to explain why people should never judge others in implicitly biased ways. After all, the objection will only sometimes apply.

To find an objection that always applies, one must get more creative. Thinking of biases as shortcuts helps here. In the literature on heuristics and biases—where the metaphor that we are considering originated—authors tend to articulate epistemic objections that apply to processes of reasoning that involve stereotyping. Consulting this literature, one finds ample reason to think that implicitly biased judgments are always or usually unreliable. The reason for their unreliability is not premised on the falsity or lack of justification of stereotypes. Biased judgments would be unreliable, according to these theorists, even if the stereotypes being deployed were true and warranted. Here are three examples.

(A) The Representativeness Heuristic. Suppose someone handed you the following character sketch:

Steve is shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has need for order and structure and a passion for detail. (Kahneman 2011: 7)

That person then asks you, “Is it more probable that Steve is a librarian or a farmer?” What would you say?
If you were like typical research participants, you would say that Steve is probably a librarian. In giving this answer, one relies on what Tversky and Kahneman call the representativeness heuristic. Here is the OED definition of a heuristic, as understood by psychologists: “designating or relating to decision making that is performed through intuition or common sense” (OED 2017d). The opposite of heuristic is “systematic.” Systematic ways of reasoning adhere to the norms of ideal rationality, as modeled by decision theorists. When people use the representativeness heuristic, they make judgments about the likelihood of people having this or that property—for example, the property of being a librarian—based on stereotypes. Thinking quickly, we automatically expect that Steve will be a librarian because he fits the stereotype of a librarian.

The problem with using the representativeness heuristic is that it involves ignoring a great deal of other information. “Did it occur to you,” writes Kahneman,

that there are 20 male farmers for each librarian in the United States? Because there are so many farmers, it is almost certain that more meek and tidy souls will be found at tractors than at library desks. (7)

If you stereotyped Steve, he says, you committed base rate neglect. A person neglects base rates if they ignore background statistics—such as the percentages of librarians and farmers in the population at large—when reasoning. Implicitly biased people, one might worry, always make judgments by ignoring base rates. Their predictions and expectations of individuals are thus unreliable.
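A quick calculation makes vivid how much work the base rate does. Kahneman supplies only the 20:1 ratio of farmers to librarians; the likelihoods below are invented purely for illustration. Suppose, very generously to the stereotype, that 90 percent of librarians fit Steve’s sketch while only 10 percent of farmers do. Writing L for librarian, F for farmer, and S for fitting the sketch, Bayes’ theorem gives

\[
P(L \mid S) = \frac{P(S \mid L)\,P(L)}{P(S \mid L)\,P(L) + P(S \mid F)\,P(F)} = \frac{0.9 \times \frac{1}{21}}{0.9 \times \frac{1}{21} + 0.1 \times \frac{20}{21}} = \frac{0.9}{2.9} \approx 0.31.
\]

Even on these stereotype-friendly assumptions, Steve is more than twice as likely to be a farmer. Someone who judges by resemblance alone in effect treats \(P(L)\) and \(P(F)\) as equal, and that equality is exactly what the base rate denies.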
(B) The Availability Heuristic. Implicitly biased people also make use of the availability heuristic. When people use this heuristic, Kahneman says, “their task is to estimate the size of a category or the frequency of an event but … [they instead] report an impression of ease with which instances come to mind” (130). Because one is not paying attention to actual probabilities, one ends up overestimating or underestimating the probability of an event or property occurring (Tversky and Kahneman 1973; Lichtenstein et al. 1978). This effect very often occurs when properties are dangerous or striking. But it may occur in other cases as well. The mere existence of a trait as part of a cultural stereotype may bring it more easily to mind than would otherwise be the case. For example, we may overestimate the percentage of mothers among women because, stereotypically, women bear children. If implicitly biased people use the availability heuristic, they would often have unreliable predictions, expectations, educated guesses, and beliefs about individuals.

(C) The Affect Heuristic. Implicit biases may also leave us open to non-cognitive biases. Stereotypes can bring to mind aversions and affinities and are often laden with evaluative and emotional significance (Madva and Brownstein 2018). Some of the most interesting work on affect and biases has been done by Paul Slovic and colleagues. Slovic introduced the idea of an affect heuristic. As before, the idea with heuristics is that people aim to find easy ways of answering questions when thinking quickly and intuitively. Emotions can be helpful for this purpose. A person may simply consult his feelings to determine what he should think and do. If one’s feelings are clear cut, one can ‘just go with it’ and suppose that affect provides the right answer to the question. “Using an overall, readily available affect impression can be easier and more efficient than weighing the pros and cons of various reasons or retrieving relevant examples from memory,” writes Slovic, “especially when the required judgment or decision is complex or mental resources are limited” (Slovic et al. 2004: 314).

Think, first, about the content of stereotypes. Stereotypes will often be affectively laden. Just as people vastly overestimate the likelihood of being attacked by a shark while swimming due to fear, they may vastly overestimate the likelihood that individuals from stigmatized groups will possess negative properties stereotypically attributed to them. Emotion—not facts—would guide estimation of probabilities. Of course, emotions—especially ones like fear—are not reliable sources of probabilistic information. So using this heuristic in conjunction with stereotypes would lead to unreliable judgments.

A second observation concerns the relationship between moods, quick thinking, and stereotyping. What psychologists have found is that people in happy or positive moods often think quickly; hence they tend to stereotype (Park and Banaji 2000; Chartrand, Van Baaren, and Bargh 2006; Forgas 2011; Holland et al. 2012). For example, Forgas (2011) asked research participants to read a one-page philosophy essay written by “Robin Taylor.” Attached to the essay was either a picture of a middle-aged white man with glasses—a stereotypical-looking philosopher—or a young white woman with “frizzy” hair—someone who poorly fits the stereotype of a philosopher. When the essay was attributed to the middle-aged white male, participants tended to rate it more positively. This bias was most pronounced when participants were in a good mood. In contrast, participants in bad moods were less influenced by stereotypes when evaluating the essay. They spent more time reading and thinking about the essay, and they evaluated the essay as just as good no matter who wrote it. As such experiments show, affect plays a complex role in our epistemic life and can undermine our ability to evaluate others in fair, unbiased ways (Madva 2018).

By paying attention to the literature on biases and heuristics, we seem to have found a promising epistemic objection to biases. The objection is that biased judgments are unreliable because they are the product of fast thinking. What we need to do, the argument goes, is to slow down, reason more carefully, and judge persons as individuals.

Have we now found a foolproof objection to bias? Perhaps not. Within the literature on heuristics and biases, theorists often do not make the above argument. They argue that fast thinking will sometimes but not always lead us astray (Kahneman and Tversky 1973: 237; Jussim 2012: 360–388). For instance, Kahneman offers a list of purportedly accurate stereotypes,
including “young men are more likely than elderly women to drive aggressively” (2011: 151). According to him, because stereotypes like this track the truth, they are reliable. So the representativeness heuristic—despite his emphasis on the ways in which it fails in certain contexts—will not always violate norms of epistemic rationality; nor will it always be untrustworthy in terms of the knowledge it provides. The point generalizes. In medical contexts where doctors exclusively wear white coats, we may be able to reliably pick out who is and is not a doctor based on attire. Similar claims can be made about gender stereotypes like ‘women are empathetic.’

This line of argument picks up further steam when one considers that stereotyping—a major cause of biased judgments and perception—shares a good deal in common with inductive reasoning about kinds of things in general. Notice that we have “pictures in our heads” of lightning storms and rivers, tables and skyscrapers, skunks and otters, just as we have stereotypes of social groups. We make generalizations about all kinds of things, and doing so is epistemically useful. By relying on kind-based generalizations, we save time and energy. We get around better in the world, having a better sense of what to expect from new things, situations, and people. We can avoid potentially dangerous situations and seek out advantageous ones.

Stereotyping can also fail in all the same ways as kind-based reasoning more generally. We may form stereotypes based on a limited sample size and then overgeneralize. Our past experience with social groups may not be a reliable guide to the future. Our expectations can lead us to pay attention only to what confirms them and ignore disconfirming evidence. We may systematically overestimate the likelihood of events based on heuristics. Despite these problems, no one is tempted to say that kind-based reasoning in general is always epistemically bad.

The above claims culminate in what I call the argument from symmetry. The argument goes like this. If we claim that it is always epistemically bad to use stereotypes (which is what happens when people make implicitly biased judgments), we will have to endorse this thought in other domains as well. For example, we will have to say that there is always something epistemically bad with forming expectations about objects like chairs or non-human animals or physical events like lightning storms on the basis of group generalizations. Yet, the argument continues, it is very hard to believe that kind-based reasoning about anything whatsoever is necessarily epistemically problematic. Parity of reasoning requires us to see stereotyping people as sometimes rational and, indeed, potentially good from the perspective of knowledge in certain contexts.

In response to this argument, one could insist that stereotypic judgments about humans are never permissible: we have an epistemic and ethical duty to judge persons as individuals. However, it is not clear that such a duty exists or, if it does, how to articulate it. Philosopher Benjamin Eidelson notes:
Taken literally, the principle [of treating persons as individuals] seems to express a broad hostility to forming judgments about individual people by appeal to generalizations about whole classes of people. (Eidelson 2013: 204)

It is absurd to think that epistemic norms require never using group generalizations (Levin 1992: 23; Schauer 2006: 19; Arneson 2007: 787). We would lack schemas for organizing our social world. We couldn’t learn about groups of people. We would be forbidden from categorizing unfamiliar individuals as members of types and forming expectations about them based on group membership. For example, in I Need a Doctor, the father would be forbidden on epistemic grounds from identifying the white-coated person as a doctor.

The injunction to always treat persons as individuals in the way specified above is not only epistemically odd; it is also ethically troubling. Imagine a woman who believes that people of color in her community are often subject to police harassment. When she sees an unfamiliar black man, she expects that he, too, has likely experienced police harassment at some point in his life. This person is “using race as a proxy for being subject to unjust race-based discrimination,” as Elizabeth Anderson puts it (Anderson 2010: 161). She is thus failing to treat someone as an individual. Yet, I would say, she has done nothing epistemically or ethically wrong as of yet. Indeed, stereotyping here may be the best ethical and epistemic response.

Perhaps the epistemic and moral duty to treat persons as individuals can be interpreted in a more plausible way. For example, Kasper Lippert-Rasmussen has suggested:

X treats Y as an individual if, and only if, X’s treatment of Y is informed by all relevant information, statistical or non-statistical, reasonably available to X. (Lippert-Rasmussen 2011: 54)

Call this the use-all-your-information conception of treating persons as individuals. Adopting this conception, one might argue that implicitly biased people always fail to treat persons as individuals because they fail to use all relevant, reasonably available information when judging others. Such a claim fails, however. Biased judgments will only sometimes involve failing to treat persons as individuals, as defined above. When agents face serious informational deficits—and thus have very little information reasonably available to them—they will count as treating persons as individuals, even if they stereotype others based on group generalizations (Beeghly 2018). Likewise, it is possible for someone to use all the relevant information reasonably available to her in forming a prediction about someone; yet implicit group stereotypes—which she might disavow—may corrupt how she interprets or weighs that information. In such a case, her judgment would be epistemically
problematic. But the reason why it is problematic is not that she has failed to use all of her information. What’s gone wrong is something different. Though she uses all her information, she fails to weigh different pieces of evidence appropriately.

What is the upshot? Perhaps it is that the range of epistemic objections to implicit bias is astoundingly wide. Or, maybe the lesson here is that no single epistemic problem will be present in all epistemically problematic cases of implicit bias. The latter possibility is very important to philosophers and social scientists. One thing that we like to do is create theories. A theory of what’s epistemically wrong with implicit bias could be unified or non-unified. Unified theories are so-called because they identify a single property or set of properties that all epistemically bad cases of bias allegedly have in common, in virtue of which the cases count as bad. Non-unified theories are so-called because they identify multiple properties that bad cases of bias might have in common. Though all wrongful cases are alleged to share at least one of the wrong-making properties on the list, no single property on the list will be found in every wrongful case.

If the analysis so far is on the right track, we should not expect a unified theory of what’s epistemically wrong with implicit bias to succeed. Certainly implicit biases hamper our knowledge in many cases, but there seems to be no single objection that fully explains why they do so in every case. Just as importantly, we have not yet been able to definitively rule out the possibility that implicitly biased judgments are sometimes epistemically rational. An important project for future research is to consider these issues more systematically, in the hopes of better understanding the conditions under which implicitly biased judgments are epistemically problematic.

5.2 Why Implicit Biases Are Not Just Shortcuts
I am not fully on board with the metaphor of bias as shortcut, despite its advantages. There are a few reasons why. First, thinking of bias as a shortcut encourages us to believe that biased judgments are primarily due to quick thinking. If that were true, we could rid ourselves of biases by attending more carefully to the facts. Yet, as philosopher Louise Antony notes,

it is a kind of fantasy to think that biases intrude only when our guard is down—a fantasy that permits us to think that if we were only more careful in our thinking, more responsible or more virtuous in our epistemic practice, things would be all right. That leaves intact the conviction that there is within each one of us some epistemic still place from which we can see clearly and judge soundly … (Antony 2016: 160)
The reality is that stereotypes are always with us. They structure how we—as humans—see the world and move in it. Even when thinking carefully, biases can shape our judgments. A recent meta-analysis of the role of gender in hiring decisions, for example, found that people who were motivated not to discriminate displayed less gender bias when evaluating women job candidates in male-dominated fields; however, they were still not able to get rid of their biases completely (Koch et al. 2015).

Second, by using the metaphor of a shortcut, one implies that biased people are holding and using stereotypes because they are lazy, rushed for time, or overwhelmed by the world’s complexity (Bargh 1999). However, biased judgments don’t just occur because people are pressed for time and overwhelmed with stimuli. They happen because we exist in a world where certain kinds of people stand in particular power relationships to one another. Stereotyping—whether implicit or explicit—is thus wrapped up in power, privilege, ideologies, and histories of oppression. Likewise, stereotypes serve an evaluative function and are often used to keep individuals in their ‘appropriate’ social place (McGeer 2015; Haslanger 2019).

It is no accident that the metaphor of bias as shortcut largely hides the ideological and social dimensions of bias. The metaphor identifies implicit biases with mental states; hence it renders the psychological elements of the phenomenon central and salient. Biases are typically described as cognitive shortcuts, after all. Nothing is said about their origin or social function, and their existence is often alleged to be a matter of innate cognitive architecture. The connection between bias, the social world, and group oppression—which was foregrounded in the metaphor of bias as fog—is thus lost.
6 Concluding Thoughts on the Epistemic Significance of Implicit Bias
In this chapter, I have explored two metaphors used to think about implicit bias: bias as fog and bias as shortcut. Like any metaphor, neither one is perfect. Both misrepresent the phenomenon of bias in some respects. On the other hand, both metaphors bring something important to the table. Perhaps that is why the two coexist in Reshamwala’s video for The New York Times. Thinking of bias as a shortcut encourages us to pay attention to the relationship between biased judgments and fast thinking. Thinking of bias as fog, in contrast, brings out its connection to group oppression and the ways in which false stereotypes frustrate knowledge.

Considering these two metaphors together is also productive for a different reason: it calls attention to a pressing question about the epistemic significance of bias. That is, what is the actual connection between bias, knowledge, and error? If implicit bias is fog, it is always epistemically bad; however, if biases are shortcuts, implicitly biased judgments are not always bad from the point of view of knowledge. Which claim is correct? Significantly, we are not yet able to definitively answer that question. The crux of the matter is whether implicit bias has any epistemically positive
role to play in our individual and collective attempts to gain knowledge of the world and of the people in it. Some readers will no doubt want to defend the epistemic claim behind the metaphor of bias as fog, namely, that biases are always problematic from the perspective of knowledge. If my arguments here are correct, they face a challenge: the argument from symmetry. One strategy for pushing back against the argument is to identify special epistemic problems that occur when we deploy social stereotypes in perception and cognition, which do not occur when we use other kinds of generalizations (Siegel, Chapter 5, “Bias and Perception”; Holroyd and Puddifoot, Chapter 6, “Epistemic Injustice and Implicit Bias”). A second strategy is to argue that we face higher epistemic standards when judging persons (and, perhaps, non-human animals) for ethical reasons, whereas lower epistemic standards apply when we reason about other kinds of things (Basu, Chapter 10, “The Specter of Normative Conflict: Does Fairness Require Inaccuracy?”). A third strategy is to link the use of stereotypes to collective epistemic practices and, in particular, to the existence of epistemic vices like laziness and lack of imagination that flourish in members of socially privileged groups (McHugh and Davidson, Chapter 9, “Epistemic Responsibility and Implicit Bias”; Ayala-López and Beeghly, Chapter 11, “Explaining Injustice: Structural Analysis, Bias, and Individuals”; Medina 2013).

I am not sure which metaphor will ultimately win out—or whether we have to choose. Both are tempting, albeit for different reasons. More than anything, what they reveal is how much there is to learn about the conditions under which biased judgments are epistemically problematic. This chapter has only been able to scratch the surface. Investigating further, one would have to consider a range of other epistemic objections to biased judgments, actions, and speech (Gendler 2011; Blum 2004; Haslanger 2012; Medina 2013; Munton 2019). Even scratching the surface, one can understand why these two metaphors are so prevalent in popular discussions of implicit bias. They simplify and provide an accessible frame from which to begin deeper philosophical reflections on bias, knowledge, and justice. The questions they raise are extremely pressing. Might it be possible that judging other humans according to reliable shortcuts gives us knowledge but is unjust? In what ways does oppression impact our evidence and what can be true of people in our world? With such questions, we find ourselves at the heart of a deep disagreement about knowledge and bias, a disagreement that has no end in sight.
SUGGESTIONS FOR FUTURE READING
If you’d like to read more about how metaphors structure thought, read:
• Lakoff, G. and Johnson, M. (1980) Metaphors We Live By. Chicago, IL: University of Chicago Press. Lakoff and Johnson argue that metaphors are fundamental to how we understand and reason about the world and others. A classic text that has inspired a great deal of current research.
If you want to learn more about the metaphor of bias as shortcut, read:
• Kahneman, D. (2011) Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. Kahneman presents his and Amos Tversky’s research from the last four decades for a popular audience. He argues that stereotyping is not always bad from an epistemic perspective, while documenting the myriad ways in which people who rely on stereotypes as heuristics can be criticized on epistemic grounds.
If you are looking for an argument that bias—like fog—exists in environments and not just in the heads of individuals, read:
• Mills, C. (1999) The Racial Contract. Ithaca, NY: Cornell University Press. Mills explores the ideological dimensions of white supremacy and argues that collective hallucination is required for white domination.
• Payne, K., Vuletich, H., and Lundberg, K. (2017) The bias of crowds: How implicit bias bridges personal and systemic prejudice. Psychological Inquiry, 28(4): 233–248. Payne and his collaborators argue that implicit biases exist in social environments and discuss the significance of geographic variations in bias.
If you’d like to read more about ways in which implicitly biased judgments might promote knowledge, see:
• Antony, L. (2016) Bias: Friend or foe? Reflections on Saulish skepticism. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 1 (pp. 157–190). Oxford: Oxford University Press. According to Antony, “bias is an essential element in epistemic success” and “biases make possible perception, language, and science.”
• Jussim, L. (2012) Social Perception and Social Reality. Oxford: Oxford University Press. Jussim argues that stereotypes can be accurate and evidentially warranted; he also argues that forming judgments about individuals based on stereotyping is sometimes epistemically rational.
If you are looking to explore epistemic objections to biased judgments further, check out:
• Blum, L. (2004) Stereotypes and stereotyping: A moral analysis. Philosophical Papers, 33: 251–289. Blum argues that stereotypes are necessarily false and resistant to evidence, and he claims that stereotyping will always be epistemically (and ethically) defective on multiple grounds, one of which is that stereotyping always involves failing to treat persons as individuals.
• Dotson, K. (2012) A cautionary tale: On limiting epistemic oppression. Frontiers, 33: 24–47. Dotson argues that biased judgments—implicitly and explicitly biased judgments—can cause epistemic oppression and epistemic exclusion, as well as undermine epistemic agency. A criticism of Fricker; see below.
• Fricker, M. (2009) Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press. Fricker argues that implicitly biased judgments hinder knowledge by giving people less credibility than they are due—thereby constituting testimonial injustice—and by limiting people’s interpretative horizons—thereby constituting hermeneutical injustice. She also explores epistemic virtues that one might cultivate to avoid bias-generated injustices.
• Gendler, T.S. (2011) On the epistemic costs of implicit bias. Philosophical Studies, 156: 33–63. Gendler discusses three epistemic costs of implicitly biased judgments, and she argues that some implicitly biased judgments might be epistemically rational but morally problematic.
• Haslanger, S. (2012) Ideology, generics, and common ground. In Resisting Reality: Social Construction and Social Critique. Oxford: Oxford University Press. Haslanger argues that pernicious stereotypes can be true and evidentially warranted, but using them in conversational contexts, e.g., saying things like “women are empathetic,” is epistemically problematic because the utterances have false, unwarranted essentialist implications.
• Medina, J. (2013) The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations. Oxford: Oxford University Press. Medina explores the epistemic vices associated with biases, including close-mindedness, lack of imagination, and hubris. He also documents epistemic virtues the members of oppressed groups are likely to possess.
DISCUSSION QUESTIONS
1 What is an epistemic criticism? Give at least two examples from your own life; there is no need for the examples to be related to implicit bias.
2 What are the main pros and cons of the two central metaphors for bias considered in this chapter? Which one of the metaphors is most apt, in your view? Can you think of other, better metaphors for talking about bias?
3 This chapter focuses on the question of what makes implicit biases bad from the perspective of knowledge. The author argues that there is likely no unified epistemic problem with biases. Why does she say this? For each potential epistemic problem with implicitly biased judgments, think of an example that shows that it doesn’t apply in every case. Can you think of possible ways of unifying the epistemic objections, so as to create a unified theory of when and why implicitly biased judgments are epistemically problematic?
4 Even if a certain claim is partly warranted from an epistemic perspective (e.g., if most women are more empathic than most men), it might still, at least sometimes, be morally problematic to use that information in making judgments about other people (Basu, Chapter 10, “The Specter of Normative Conflict: Does Fairness Require Inaccuracy?”). What are some potential ethical problems that might face the use of such biases—even if we are just focusing on the “biases” that seem to be relatively accurate?
5 The author of this article suggests that the use of stereotypes in the I Need a Doctor case is not obviously problematic from the perspective of knowledge. Do you agree? Can you think of other contexts where attitudes that look like problematic stereotypes might help support knowledge? When might they be valuable or necessary shortcuts, if at all?
6 Think of contexts where the Representativeness, Availability, and Affect Heuristics might promote knowledge and contexts where they might hinder it. Think about whether there will be shared features that we can generalize from to think more broadly about when to put these heuristics into place as useful shortcuts, and when to put up structural or mental “fences” to block the usage of these shortcuts (like the blue-scrubs sign).
7 Ethically and epistemically, there is something very intuitive about the claim that we ought to treat people as individuals. However, many philosophers have argued that it is absurd to think that we must always treat persons as individuals, if that means never using group-based generalizations to form judgments and expectations about others. In your opinion, how should we interpret the ethical and epistemic demand to treat persons as individuals? Do you think it is possible that failing to treat persons as individuals could advance knowledge in some cases? Are some forms of generalization different from others, such that we think some generalizations fail to treat people as individuals (e.g., based on race or gender) and others don’t (e.g., based on how people choose to dress or display themselves)?
REFERENCES

Anderson, E. (2010) The Imperative of Integration. Princeton, NJ: Princeton University Press.
Antony, L. (2016) Bias: Friend or foe? Reflections on Saulish skepticism. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 1 (pp. 157–190). Oxford: Oxford University Press.
Arneson, R. (2007) What is wrongful discrimination? San Diego Law Review, 43: 775–808.
Banaji, M. and Greenwald, A. (2013) Blindspot: The Hidden Biases of Good People. New York: Delacorte.
Bargh, J. (1999) The cognitive monster: The case against the control of automatic stereotype effects. In S. Chaiken and Y. Trope (eds), Dual-process Theories in Social Psychology (pp. 361–382). New York: Guilford Press.
Beeghly, E. (2015) What is a stereotype? What is stereotyping? Hypatia, 30: 675–691.
Beeghly, E. (2018) Failing to treat persons as individuals. Ergo, 25(5): 687–711.
Blum, L. (2004) Stereotypes and stereotyping: A moral analysis. Philosophical Papers, 33: 251–289.
Camp, E. (2020) Imaginative frames for scientific inquiry: Metaphors, telling facts, and just-so stories. In P. Godfrey-Smith and A. Levy (eds), The Scientific Imagination: Philosophical and Psychological Perspectives (pp. 304–336). Oxford: Oxford University Press.
Chartrand, T., van Baaren, R., and Bargh, J. (2006) Linking automatic evaluation to mood and processing style: Consequences for experienced affect, impression formation, and stereotyping. Journal of Experimental Psychology, 135: 70–77.
Dasgupta, N. (2013) Implicit attitudes and beliefs adapt to situations: A decade of research on the malleability of implicit prejudice, stereotypes, and the self-concept. Advances in Experimental Social Psychology, 47: 233–279.
de Beauvoir, S. (1953) The Second Sex. Trans. H.M. Parshley. New York: Knopf.
Eidelson, B. (2013) Treating people as individuals. In D. Hellman and S. Moreau (eds), Philosophical Foundations of Discrimination Law (pp. 203–227). Oxford: Oxford University Press.
Fiske, S. and Taylor, S. (1984/1991) Social Cognition. New York: McGraw-Hill Inc.
Forgas, J. (2011) She just doesn't look like a philosopher…? Affective influences on the halo effect in impression formation. European Journal of Social Psychology, 41: 812–817.
Frankish, K. (2016) Playing double: Implicit bias and self-control. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 1 (pp. 23–46). Oxford: Oxford University Press.
Fricker, M. (2009) Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.
Funchess, M. (2014) Implicit bias—How it affects us & how we push through. TEDx Talk, 16 October 2014. Available at https://www.youtube.com/watch?v=Fr8G7MtRNlk [accessed 10 June 2017].
Gendler, T.S. (2011) On the epistemic costs of implicit bias. Philosophical Studies, 156: 33–63.
Google (2014) Google video on implicit bias: Making the unconscious conscious, 25 September 2014. Available at https://www.youtube.com/watch?v=NW5s_-Nl3JE [accessed 10 June 2017].
Haslanger, S. (2012) Ideology, generics, and common ground. In Resisting Reality: Social Construction and Social Critique. Oxford: Oxford University Press.
Haslanger, S. (2017) Racism, ideology, and social movements. Res Philosophica, 94: 1–22.
Haslanger, S. (2019) Cognition as a social tool. Australasian Philosophical Review, forthcoming.
Hochberg, M. (2007) History of medicine: The doctor's white coat—a historical perspective. American Medical Association Journal of Ethics, 9: 310–314.
Holland, R., de Vries, M., Hermsen, B., and van Knippenberg, A. (2012) Mood and the attitude–behavior link: The happy act on impulse, the sad think twice. Social Psychology and Personality Science, 3: 356–364.
Ickes, W. (2003) Everyday Mind Reading: Understanding What Other People Think and Feel. Amherst, NY: Prometheus.
Jussim, L. (2012) Social Perception and Social Reality. Oxford: Oxford University Press.
Kahneman, D. and Tversky, A. (1973) On the psychology of prediction. Psychological Review, 80: 237–251.
Kahneman, D. (2011) Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kang, J. (2016) Implicit bias, preface: Biases and heuristics. Bruin X. UCLA Office of Equity, Diversity, and Inclusion, September 9, 2016. Available at: https://www.youtube.com/watch?v=R0hWjjfDyCo&feature=youtu.be [accessed 10 June 2017].
Klein, K. and Hodges, S. (2001) Gender difference, motivation, and empathic accuracy: When it pays to understand. Personality and Social Psychology Bulletin, 27: 720–730.
Koch, A., D'Mello, S., and Sackett, P. (2015) A meta-analysis of gender stereotypes and bias in experimental simulations of employment decision making. Journal of Applied Psychology, 100: 128–161.
Lakoff, G. and Johnson, M. (1980) Metaphors We Live By. Chicago, IL: University of Chicago Press.
Levin, M. (1992) Responses to race differences in crime. Journal of Social Philosophy, 23: 5–29.
Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M., and Combs, B. (1978) Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4(6): 551–578. https://doi.org/10.1037/0278-7393.4.6.551
Lippert-Rasmussen, K. (2011) "We are all different": Statistical discrimination and the right to be treated as an individual. Journal of Ethics, 15: 47–59.
Madva, A. (2018) Implicit bias, moods, and moral responsibility. Pacific Philosophical Quarterly, 99(S1): 53–78.
Madva, A. and Brownstein, M. (2018) Stereotypes, prejudice, and the taxonomy of the implicit social mind. Noûs, 52(3): 611–644.
McGeer, V. (2015) Mind-making practices: The social infrastructure of self-knowing agency and responsibility. Philosophical Explorations, 18: 259–281.
Medina, J. (2013) The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations. Oxford: Oxford University Press.
Mills, C. (1999) The Racial Contract. Ithaca, NY: Cornell University Press.
Munton, J. (2019) Perceptual skill and social structure. Philosophy and Phenomenological Research, 99(1): 131–161. https://doi.org/10.1111/phpr.12478
Oxford English Dictionary Online (2017a) "fog, n.2," Oxford University Press, March 2017 [accessed 31 May 2017].
Oxford English Dictionary Online (2017b) "shortcut \ short-cut, n.1," Oxford University Press, March 2017 [accessed 31 May 2017].
Oxford English Dictionary Online (2017c) "compendious, adj," Oxford University Press, March 2017 [accessed 31 May 2017].
Oxford English Dictionary Online (2017d) "heuristic, adj. B1b," Oxford University Press, March 2017 [accessed 12 June 2017].
Park, J. and Banaji, M. (2000) Mood and heuristics: The influence of happy and sad states on sensitivity and bias in stereotyping. Journal of Personality and Social Psychology, 78: 1005–1023.
Rankine, C. and Loffreda, B. (2017) The racial imaginary. In R. Hazelton and A.M. Parker (eds), The Manifesto Project (pp. 162–172). Akron, OH: University of Akron Press.
Reshamwala, S. (2016) Peanut butter, jelly, and racism. In Who Me Biased? New York Times Video. Available at: https://www.nytimes.com/video/us/100000004818663/peanut-butter-jelly-and-racism.html [accessed 10 June 2017].
Ross, H. (2014) Everyday Bias: Identifying and Navigating Unconscious Judgments in Everyday Life. New York: Rowman & Littlefield Publishers.
Schauer, F. (2006) Profiles, Probabilities, and Stereotypes. Cambridge, MA: Belknap Press.
Siegel, S. (2017) The Rationality of Perception. Oxford: Oxford University Press.
Slovic, P., Finucane, M.L., Peters, E., and MacGregor, D.G. (2004) Risk as analysis and risk as feelings: Some thoughts about affect, reason, risk, and rationality. Risk Analysis, 24: 311–322.
Tiang, K.W., Razack, A.H., and Ng, K.L. (2017) The 'auxiliary' white coat effect in hospitals: Perceptions of patients and doctors. Singapore Medical Journal, 4 April 2017. doi:10.11622/smedj.2017023
Time Magazine (1987) Asian-American whiz kids. http://content.time.com/time/covers/0,16641,19870831,00.html [accessed 21 August 2019].
Tversky, A. and Kahneman, D. (1973) Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5: 207–232.
Tversky, A. and Kahneman, D. (1974) Judgment under uncertainty: Heuristics and biases. Science, 185: 1124–1131.
Weldon, M. (2016) Implicit gender bias: Strategies to own the power to succeed as women leaders. The Movement Blog, 1 November 2016. Available at: https://www.taketheleadwomen.com/blog/implicit-gender-bias-strategies-to-own-the-power-to-succeed-as-women-leaders/ [accessed 10 June 2017].
5
Bias and Perception
Susanna Siegel
How do biases influence perception? If we select a culturally specific bias for the purposes of illustration, then we can address this question from three perspectives: from the receiving end of biased perception, using cultural analysis; from within the biased perceiver's mind, using cognitive science; and from the perspective of epistemology. This chapter will consider all three perspectives and discuss their relationship. The culturally specific example will be a type of racialized perception found in the USA. The epistemic consequences of racial bias in this context have deep implications for how reasonable it can be to act on what one 'sees' when those perceptions are influenced by bias, and this chapter will introduce those consequences at the end.
1 Cultural Analysis

In a narrative that is easy to recognize, George Yancy (2008, 844) describes a type of micro-interaction between strangers:

When followed by white security personnel as I walk through department stores, when a white salesperson avoids touching my hand, when a white woman looks with suspicion as I enter the elevator, I feel that in their eyes I am this indistinguishable, amorphous, black seething mass, a token of danger, a threat, a rapist, a criminal, a burden …

In the USA, narratives resembling this have long been found in many registers, such as memoir (Coates 2015; Cadogan 2018), fiction, film, music, poetry, ethnography, and social scientific studies (Glaser 2014), including psychological studies of stereotype threat (Steele 2011); and studies in political science of the effects on political attitudes of contact with the criminal justice system (Lerman and Weaver 2014). Some renditions of this narrative detail what it is like to navigate public space when the possibility of being responded to as a threat or likely criminal is salient, including the often elaborate efforts and adjustments made to prevent that response, or reverse it, or negotiate it in some other way. Other versions of the same narrative
highlight, encourage, and enforce the point of view of the reactor, such as the high-profile Willie Horton ad in the 1988 US election (Mendelberg 2001), the political scientist John DiIulio's introduction in the 1990s of the concept of a "superpredator" to describe black youth who were supposedly prone to crime (DiIulio 1996; Hinton 2016), and around the same time, analytic philosopher Michael Levin's defense of racialized fear (Levin 1992). The wealth of cultural production of narratives casting black men in this role makes it plausible that this racial attitude is widespread, and part of what people embedded in US society have to respond to both from within their minds and in the behavior of other people.

Here's a portrait of how the same attitudes might inhabit a differently positioned person, who I'll call Whit. Whit is eighteen years old. He has always lived in the same town, in the early-twenty-first-century United States. He inhabits a world of white people. All of the people that he and his parents take themselves to depend on are white. White people are his neighbors, his teachers, his schoolmates, the professionals that regularly interact with his family (accountants, teachers, doctors, lawyers, mechanics, local religious figures, and community leaders), his friends and his family's friends, his local politicians, police officers, restaurant owners, and people he sees when he goes to restaurants.

Whit knows that elsewhere, not everyone is white. He knows there are black professionals of all kinds. He knows that in other places, distant from where he lives, there are neighborhoods where people are mainly black, where they tend to be much poorer than his family is, and where many people his age have a lot of contact with the criminal justice system. He doesn't know personally anybody who lives there.

If Whit were asked to assess the productive capabilities or personal credibility of a man who is black, he would tend to be disproportionately doubtful. And if he expressed or acted on his doubt, he would not face any challenges from the people within his usual social horizons. In this way, Whit has little in his mind or life to pull against his absorption of the attitude that black men are dangerous. Whit's racial isolation is the kind that Allport (1954) predicted would make a person more likely to absorb the presumption depicted in the narratives, rather than contesting the presumption or discounting it. Of course, like any individual's outlook, Whit's cannot be entirely predicted by social context. And conversely, Whit's social situation is not the only route to the racial attitude he ends up with.

The fact that Whit's attitude is normal worsens his society. From the point of view of people on the receiving end of Whit's reactions, his attitude will be obviously unjustified. Imagine stepping into a line at an automatic teller machine where Whit and his friends are waiting, and seeing their palpable discomfort as they look uneasy and make sure their wallets are deep inside their pockets. Or imagine asking Whit for directions, and finding him ill at ease in talking to you, seemingly suspicious of whether what you want is really directions, as opposed to something else. In these situations,
you'd think Whit and his friends were in the grip of a fear that they were projecting onto you. There's nothing more you could do to manifest the ordinariness of your own behavior. Outside of Whit's world, many people would easily pick up on the ample cues that indicate innocuous everyday activity. Due to their racial attitudes, Whit and his friends either don't take in these cues, or they discount them. In these ways, their perception is compromised.

The examples of Whit's obtuseness and Yancy's experience in the elevator give us two common manifestations of racialized bias in social perception. Yancy describes what it is like to be perceived when you are on the receiving end of that kind of bias-influenced perception. And Whit's scenario shows us how someone unknowingly steeped in racialized bias could end up with perceptions that are congruent with the bias, without having a clue about either the bias or its effects on perception.
2 Cognitive Science

Alongside cultural analysis, psychological experiments provide evidence that racial attitudes can operate even in the minds of people who would explicitly disown the hypothesis that black men are dangerous. For instance, consider a set of experiments designed to test how racial attitudes impact perception:

—A mild human collision where one person pushes another is seen as aggressive or playful, depending on the perceived race of the pusher. (Duncan 1976; Sagar and Schofield 1980)

—A face in a picture is matched (for coloring) to a darker or a lighter patch, depending on the racial label written under the face. (Levin and Banaji 2006; see Firestone and Scholl 2015 for discussion)

—A boy in a photograph said to be accused of a felony is estimated to be older when the child is black than when he is Latino. (Goff et al. 2014)

—A man categorized as black is estimated to be both bigger and stronger than a man of the same size and strength who is categorized as white. (Wilson et al. 2017; Johnson and Wilson 2019)

—Emotions are detected at lower thresholds when they are congruent with stereotypes linking anger to Moroccan men and sadness to white Dutch men, as measured by Implicit Association Tests. (Bijlstra et al. 2014)
2.1 How Should These Experiments Be Interpreted?

An important distinction is the difference between visual appearances (a kind of perceptual experience), and the beliefs or judgments you form in response to them. Perceptual experiences are the conscious aspects of perception, in which the things you're perceiving are presented to you in a certain way. For instance, when you put a straight stick in water and suddenly it looks bent, your visual experience presents it as bent, but what you believe or judge is that it's straight. We don't always believe our eyes, and that situation shows that there's a difference between what you experience and what you believe in response to it.

We can also appreciate the difference between experience and judgment by considering cases where what you judge in response to perception goes beyond what you see. For example, if you're looking for your brother in the kitchen, and see that the kitchen is empty, your experience tells you about what is in the kitchen, and then you infer on the basis of the experience that your brother isn't there. Here, you do believe your eyes. But in addition, you also form other beliefs that go beyond what you experience.

It can be useful to discuss the ways a perceptual experience presents things to you, and that's the point of the notion of the contents of experience. If the stick looks bent even when you know it's straight, then the contents of your visual experiences include "it's bent." Contents characterize your perspective on the world in perceptual experience. Your experience is accurate if things are the way you experience them, and it's inaccurate if things aren't that way. The experience of the stick is inaccurate.

The experiments listed earlier raise the question of whether the background expectations are affecting perceptual experiences, or only beliefs formed on the basis of those experiences. For example, in the case of the collision: does the pusher look aggressive, or do some perceivers just believe he is, on the basis of how he looks to them? It can be difficult to test experimentally whether the effect shapes the content of the experience, or rather influences how people respond to their experiences. But we can understand a range of different ways in which bias can affect perception, even if we don't know from cognitive science where exactly the effects lie.

We can see the potential range of effects on perception by focusing on different possible interpretations of the result of an experiment done by Keith Payne. The experiment was designed to test the influence of racial attitudes on categorizing the things you see (Payne 2001).

Weapon categorization: Participants in an experiment are shown an object quickly and asked to press a button designated for "gun" if it is a gun, and a different button if it is a hand tool—pliers, wrench, or a drill. Before they see the object, they are quickly shown a man's face. The man is either black or white. Participants frequently indicate "gun"
when shown a tool, but more frequently make this error following a black prime, compared with a white prime. (Payne 2001)

When participants in Payne's experiment misclassify a pair of pliers as a gun, there are many possible ways in which they might in principle arrive at their misclassification.

• Disbelief: The pliers look to the subject exactly like pliers. But the subjects disbelieve their perceptual experience, and misclassify the object as a gun.
• Bypass: The pliers look to the subject exactly like pliers. Subjects do not respond in any way to the experience—not even by disbelieving it. The state activated by the black prime controls their classification error directly, bypassing their experience.
• Cognitive Penetration: The pliers look to the subject exactly like a gun due to the influence on perceptual experience of a cognitive state activated by the black prime.
• Attention: The pliers look somewhat like a gun because the state activated by the black prime directs the subject's attention to features of the pliers that are congruent with being a gun (metallic), and away from features incongruent with being a gun (shape).
• Introspective Error: The pliers look to the subject exactly like pliers. But the subject makes an introspective error in which they take themselves to experience a gun. The introspective error makes them misclassify the object as a gun.
• Hasty Judgment: The pliers look to the subject somewhat like pliers and somewhat like a gun. Before perceiving enough detail to decide the matter on the basis of what they see, the subject judges that the object is a gun due to the state activated by the black prime.
• Disowned Behavior: The pliers look to the subject exactly like pliers. But the state activated by the black prime guides the behavior of pushing the button that the subject uses to indicate their classification verdict. The subject immediately afterward will regard their answer as mistaken.
These options differ from one another along several dimensions. Some options impact the content of a judgment rather than the content of perceptual experience (Disbelief, Bypass, Introspective Error). Other options impact the content of experience, either by influencing it directly (Cognitive Penetration) or by selecting which features will be attended (Attention). A different dimension of influence constrains the role of the experience in making a judgment (Bypass, Haste), or in producing behavior (Disowned Behavior). These options illustrate the possibility that perception could have less impact on behavior than we might have supposed—even when we are engaged explicitly in a classification task that we would normally use
perception to accomplish. Perception's usual role in guiding behavior is neutralized.

These different ways that a prior state can influence perception can be systematized using two philosophical distinctions. The first distinction is between perceptual experiences and judgments (or beliefs) formed in response to them. The second distinction is between two aspects of perceptual experiences: their contents, as opposed to the role of the experience in the mind. Haste, Bypass, Disowned Behavior, and Disbelief all illustrate ways that racialized bias can influence the role of the experience in the mind. Cognitive Penetration and Attention illustrate ways that racialized bias can influence the contents of experience.

Payne's results are probably best explained by the Disowned Behavior option, given a follow-up experiment that allowed participants to correct their responses (Stokes and Payne 2010). But the distinctions between different potential analyses show us a range of ways that prior states including racialized bias can influence perception.

One way to influence perception is to influence patterns of attention, and these patterns in turn can affect the contents of experience. In the context of Payne's experiment, attention could be directed to different parts of the object, depending on the influence of racialized bias. Other experiments suggest that racialized bias can influence the distributions of attention across wider scenes, and the bias can function as what psychologist Jennifer Eberhardt calls a "visual tuning device." Here's an example.

Crime-suggestive acuity: After being shown a man's face in a subliminal prime, participants are shown a sequence of progressively less degraded images, beginning with visual noise and ending with a clear image of an object, and asked to indicate when they can recognize the object. They identify crime-relevant objects (guns or knives) at lower thresholds than crime-irrelevant objects, after being shown a black man's face, compared to crime-irrelevant objects, and compared to crime-relevant objects after being shown a white man's face. (Eberhardt et al. 2004)

This experiment belongs to a series from which Eberhardt concludes that there is a two-way association between the concepts 'black' and 'crime'. In the task described above, a racialized prime prompts attention to crime-related objects. In a different task, priming with crime-related objects prompts attention to black male faces as opposed to white male faces. Attention is measured using a dot-probe task. In the experiment, two faces of men appear, one black and one white, and then both disappear and a dot appears in the position of one of the faces. The task is to find the dot. When participants are primed with a crime-related object before they see the faces, they find the dot faster when it replaces the black face. The crime-prime seems to facilitate attention to black faces.
2.2 Associations

So far, I've discussed a range of ways for racialized bias to influence perception, using experiments from cognitive science to illustrate behavior that could be underwritten by a range of different psychological relationships between bias, perceptual experience, and perceptual judgment. The experiments we've discussed activate what the experimenters call a "stereotypical association" between the concepts 'black man' and 'danger' or 'crime' (Payne 2006).

It is unlikely that the content of racialized biases is independent of gender categories, and the experiments discussed so far focus exclusively on men (Johnson et al. 2012). A wide range of black feminist writings from the USA have long discussed the ways that racialized narratives, dynamics, and representations of race have different contours depending on whether they focus on men or women (Cooper 1892; Crenshaw 2015; Dotson 2017; Morris 2016; Murray 1970). Like the experiments I'm discussing, I focus on men here. When it comes to the general structure of racialized bias, though, there's reason to think the structure will be the same whether the racialized bias concerns men or women, even if the contents differ. For instance, according to Johnson et al. (2012), black women are perceived as more masculine and Asian men are perceived as more feminine.

An analysis of racialized bias in terms of associations between concepts could mislead us as to the underlying structure of the racialized bias. When the experimenters say that participants make a "stereotypical association," they are saying that the mind moves from one concept to another. We can better understand what kind of movement of the mind this could be by drawing a few more distinctions (see Johnson, Chapter 1, "The Psychology of Bias: From Data to Theory"). These distinctions will later help us analyze the epistemic impacts of racialized bias. First, here are two different ways to associate concepts X and Y, such as 'salt' and 'pepper.'

Minimal association between concepts: transition from isolated concepts expressed by words: e.g., "drip" to "drop," "salt" to "pepper," "tic" and "tac" to "toe."

This kind of movement between concepts is a mental analog of the verbal phenomenon in which a person hears "salt" and (perhaps upon being prompted to report the word that first comes to mind) says "pepper." Associative transitions can also be made between thoughts.

Minimal association between thoughts: transition from thoughts involving X (X-thoughts) to thoughts involving Y (Y-thoughts), with no constraints on which thoughts these are.

In a minimal association between thoughts, whenever one thinks a thought involving the concept 'salt'—such as that the chips are salty, or that
the soup needs more salt, or that salt on the roads prevents skidding—one is disposed to think a thought—any thought—involving the concept 'pepper.' A minimal association between thoughts is therefore a kind of association between concepts. When it is used in a salt-thought, the concept 'salt' triggers a pepper-thought. But which thoughts are triggered is not constrained by the semantic relationships between them.

Both kinds of minimal associations leave entirely open what standing attitudes the subject has toward the things denoted by the concepts, such as salt and pepper. A subject with a minimal association may have zero further opinions about salt and pepper, if for her, the concepts are no more related than the words "tic," "tac," and "toe." If she does have further opinions, she may think that salt goes well with pepper, that salt and pepper should never be seen or tasted together, that where there is salt there tends to be pepper, that salt and pepper are exclusive seasonings, or any of an enormous variety of other thoughts. No standing outlook about how the things denoted by the concepts are related belongs to a minimal association.

A minimal association between 'black' (or a more specific racial concept) and 'crime' could be an artifact of a presumption that black men are especially unlikely to be holding a crime-related object. If they're not an artifact of this kind of presumption, and they are merely minimal, they do not belong to the same phenomenon as the racialized perceptions and attitudes discussed in Section 1. Minimal associations are also unable to explain several other experimental results from cognitive science:

The shooter task: Participants in an experiment play a video game. They are supposed to press either a button designated for "shoot" or "don't shoot," depending on whether the person they see on the screen (the target) is holding a gun or an innocuous object—such as a cell phone or wallet. The targets are men. Sometimes the men are black, sometimes white. Participants more frequently press "shoot" when shown an unarmed black target than they do when shown an unarmed white target. (Correll et al. 2002; 2007; Plant and Peruche 2005; Glaser and Knowles 2008; James et al. 2013)
Age overestimation: Participants are shown a picture of a boy aged 10–17, paired with a description of a crime that the boy is said to have committed. They are asked to estimate the boy’s age. Across subjects, the pictures of boys and their names change, but the crime descriptions stay the same. Both police officers and college-age laypersons overestimate the age of black boys by at least four years when the crime is a felony, but overestimate ages of white and Latino boys by only two years, for the same crime. On a scale of culpability, black boys are rated more culpable than white or Latino boys for the same crime. (Goff et al. 2014)
Looking deathworthy: Defendants in capital crimes whose victims are white are more likely to be sentenced by juries to death, the more stereotypically black their faces appear. (Eberhardt et al. 2006)

Minimal associations do not predict the Looking Deathworthy result, or Age Overestimation. These results link 'black man' with negative concepts in a specific way, not just minimally. Minimal associations also do not predict one pattern of shooting error over any other. That's because a minimal association between 'black' (or a more specific racial concept) and 'crime' could be an artifact of many different presumptions. For instance, it could be an artifact of the presumption that black men are especially unlikely to be holding a crime-related object. But the results of the experiments would not be explained by that presumption. It would not explain why participants are so ready to press "shoot" when the target is black.

The fact that minimal associations can't explain these results strengthens the idea that culturally prevalent attitudes sometimes operate in the minds of individuals in ways that are typical of beliefs. They contribute to the interpretation of information, they lead to inferences, and they guide action.
3 Epistemology

If racialized bias operates in the mind in the same basic ways as beliefs, then nothing in the structure of such biases precludes them from being epistemically evaluable in the same way that beliefs can be. Beliefs are evaluable along two dimensions: first, they can be true or false, and second, they can be proper responses to a subject's evidence or not. The most general version of the second dimension is that beliefs can be formed and maintained epistemically well or epistemically badly. A belief or judgment is ill-founded if it is formed or maintained epistemically badly, and in contrast it is well-founded if it is formed and maintained epistemically well. Being ill-founded or well-founded is distinct from being true or false. True beliefs can be ill-founded, and well-founded beliefs can be false.

We've seen that racialized bias can influence perception in several different ways, by affecting the contents of perceptual experience, the role of experience in forming beliefs, or the contents of beliefs formed in response to perception. These functional differences involve different kinds of epistemic impact on the perceiver. For instance, in the Bypass scenario, you have very good grounds from your experience to think that the thing you're seeing is a tool, but you end up judging that it's a gun. Here, your judgment is ill-founded because it is formed in a way that does not take account of the evidence you have.

By contrast, if you look in the fridge for some mustard and see the jar on the shelf, normally you have very good reason to think that there's mustard in
the fridge. If you believe that there's mustard in the fridge on the basis of seeing it, then your belief is well-founded. If there is mustard in the fridge, and you believe that there is, but you believe this because you have a superstition that mustard appears in the fridge when the sun comes out and disappears when the sun goes behind the clouds, then your belief is true but ill-founded.

If we want to know what kind of epistemic impact racialized bias makes on perception, we can treat racialized bias as an ill-founded belief. And then we can ask: what epistemic impact would an ill-founded belief make on perception?

A first observation is that if perceptual judgment ends up congruent with ill-founded racialized bias, then perception is pressed into the service of an ill-founded outlook, either by making the outlook seem supported by experience via Cognitive Penetration or Attention, or by making experience irrelevant to judgment (Bypass, Introspective Error).

A second observation is that when ill-founded bias influences perceptual experience through Cognitive Penetration, a special philosophical problem arises. Here is a simple example. Jack and Jill have a complicated relationship. One day, Jill is worried that Jack is angry with her. She's anxious to see him so that she can figure out where things stand. When she sees him, her suspicion that he's angry affects the way he looks to her. In reality, Jack's expression is neutral. If you saw him, his face would look neutral to you. But he looks angry to Jill.

Does Jill's visual experience give her reason to believe that Jack is angry? On the one hand, if Jack really looks angry to Jill when she sees him, and she has no indication that her experience is due to her fear, then what else could Jill reasonably believe about Jack's emotional state, other than that he's angry? To her, that's just how he looks. From Jill's point of view, she's in an utterly ordinary circumstance. On the other hand, it looks like what's happening to Jill is that fear is leading to her belief, via an experience. If it's wrong to base your belief on an unfounded fear, why should it be okay to base your belief on an experience that comes from an unfounded fear?

The philosophical problem is that this simple Yes–No question has no simple answer. It is called the problem of hijacked experiences, to capture the idea that in these cases, perceptual experience is hijacked by fear, and in being overly influenced, it's overly influenced by a factor that in some intuitive sense shouldn't be steering it (Siegel 2016; 2017). Since the problem takes the form of a Yes–No question, one form its solution could take is to argue that one of these answers is correct. In the rest of this section, I explore the position that the correct answer is No, it's not reasonable for Jill to believe her eyes, because perceptual experiences can actually be formed irrationally or rationally, in response to expectations—even if you have no idea how your perceptions came about, and even if you're not aware of the fact that you in effect reasoned your way to your experience from
your expectations. Believing that someone's happy just because you want them to be happy is called wishful thinking. Believing that someone's angry, or dangerous, just because you fear that they are is called fearful thinking. This position says that Jill's situation is like fearful thinking—except it's fearful seeing, and fearful seeing redounds just as badly on a person as fearful thinking does (at least when the fear is unfounded). It is called the Rationality of Perception solution to the epistemological problem (Siegel 2017; Clark 2018).

This of course is not the only solution. Some say No, but give a different backstory about why No is the right answer (Lord 2020; Ghijsen 2018; McGrath 2013a; 2013b; Peacocke 2018). Others say Yes (Pautz 2020; Huemer 2013; Fumerton 2013). Or you can say both, by saying Yes in some ways and No in others. Like most philosophical problems, this problem has many possible solutions.

The Rationality of Perception solution is especially interesting for two kinds of reasons. First, it can make a difference in legal contexts. The social versions of perceptual hijacking are especially vivid and extreme when perception leads quickly to violence, and often death. In the first decades of the twenty-first century, cases like these have been brought to the front of political discussion by the Black Lives Matter movement. There was the shooting and killing of 12-year-old Tamir Rice in Cleveland, where the officers decided to shoot within a few moments of perceiving the boy, who they were told had a gun, describing the boy as "about 20 years old." There was the killing of 18-year-old Michael Brown by officer Darren Wilson, who testified to a grand jury about how Michael Brown's face looked to him when they were physically struggling, saying that Brown "had the most intense aggressive face. The only way I can describe it, it looks like a demon, that's how angry he looked."

We don't know exactly what perceptual experiences these particular officers had, just as we don't know whether to interpret psychological results as concerning experience or judgment. So the cases don't necessarily give us more instances of the problem of hijacked experience, since that problem is specific to influences on perceptual experience. But to think through possible solutions to that problem, we don't need actual cases of influences on experience. We can use hypothetical versions of the actual cases, where we just assume for the sake of argument that racialized expectations gave rise to perceptual experiences in which young people appear threatening and dangerous to the perceivers. And then we can ask: if these perceptual experiences were manifestations of a racialized attitude that black boys and men are dangerous, is it reasonable for people having those perceptual experiences to believe their eyes? It will seem reasonable to them. In fact it will seem as reasonable as it is to conclude that there's mustard in the fridge when you open the door and see the mustard. But these examples make clear the consequences of letting your solution to the problem be guided by how things seem to the perceiver.
The idea that perceptual experiences can come about irrationally helps us see why we don't have to be guided in that way.

Police officers are legally allowed to use force based on perception of threat, so long as their perception is defined as reasonable—and it's prosecutors, judges, juries or grand juries that are allowed to determine what's reasonable. It is hard to see how you could have a police force at all without leeway for using force, and hard to see how you could have a decent policy about such leeway without something like a reasonable person standard. The difficulty comes in applying the standard. (For discussion, see the reading suggestions under "Bias, implicit bias, reasonableness, and the law".) Neither the officer who shot and killed Tamir Rice (Timothy Loehmann) nor the one who shot and killed Michael Brown (Darren Wilson) was convicted, and their actions, and therefore their beliefs about threat, were found by the legal system to be reasonable. Those verdicts mobilized thousands of people who felt that what the officers did couldn't possibly have been reasonable responses to the situation, because their estimations of the threat or danger posed by these young people were so far off the mark (Lebron 2017).

When those juries, judges, or prosecutors determine whether a defendant's perception is reasonable, they're supposed to consider what a reasonable person in the defendant's circumstances would believe about whether they face an imminent severe threat, and if so, how imminent and severe that threat is. They are supposed to ask what would be reasonable to believe about those things, in those circumstances. On the prevailing view, both in philosophy and in law, what it's reasonable for people to believe depends in part on how it's reasonable to respond to the way things look to them (in this case, the way other people look to them). How one comes to have the perceptual experiences they're responding to isn't supposed to matter. If someone looked dangerous to you, it'd be reasonable for you to believe that they're dangerous. And if you looked dangerous to someone else, you should excuse them if they become agitated upon seeing you, because it's reasonable to be agitated in response to danger.

On the Rationality of Perception view, the reasonableness of a belief doesn't just depend on how you respond to the perceptions you have. It can also depend on which perceptual experiences you have in the first place. For instance, the danger-experience could be inferred from unreasonable expectations built into racial prejudice, and then it will be an experience that's not reasonable to have. This way, when we assess what a reasonable person under similar circumstances would believe, we need not hold constant their experience. A reasonable person in similar circumstances would not have an experience that they inferred from an unreasonable prejudice.

Finally, the Rationality of Perception view challenges the idea that perceptual experience occurs prior to reasoning in the mind. We reason from information we have already, whereas perception is a way of taking in new and current information about the environment. We're used to thinking
about perceptual experience as part of what we respond to, rather than already a response to what we believe, suspect, or feel. We think of perceptual experience that way because we feel passive with respect to it. It never feels like we’re reasoning our way to experience. Going with that, we’re used to locating perceptual experience off the grid of moral or epistemic evaluation. The Rationality of Perception picture is different. It puts perceptual experience on par with beliefs when it comes to justification and morality.
SUGGESTIONS FOR FUTURE READING

Association and associationism in psychology:
• Mandelbaum, E. (2017) Associationist theories of thought. In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/sum2017/entries/associationist-thought

Implicit bias in psychology:
• Lai, C.K. et al. (2016) Reducing implicit racial preferences: II. Intervention effectiveness across time. Journal of Experimental Psychology: General, 145: 1001–1016.
• Lai, C.K. et al. (2014) Reducing implicit racial preferences: I. A comparative investigation of 17 interventions. Journal of Experimental Psychology: General, 143: 1765–1785.

Bias, implicit bias, reasonableness, and the law:
• Bollinger, R. (2017) Reasonable mistakes and regulative norms. Journal of Political Philosophy, 25(1): 196–217.
• Brownstein, M. (2017) The Implicit Mind. New York: Oxford University Press.
• Harcourt, B. (2007) Against Prediction. Chicago, IL: University of Chicago Press.
• Lee, C. (2003) Murder and the Reasonable Man: Passion and Fear in the Criminal Courtroom. New York: NYU Press.
• Loury, G. (2002) The Anatomy of Racial Inequality. Cambridge, MA: Harvard University Press.
• Schauer, F. (2003) Profiles, Probabilities, and Stereotypes. Cambridge, MA: Harvard University Press.

Epistemology and belief:
• Basu, R. (2019) The wrongs of racist beliefs. Philosophical Studies, 176(9): 2497–2515. https://doi.org/10.1007/s11098-018-1137-0
• Dotson, K. (2018) Accumulating epistemic power. Philosophical Topics, 46(1): 129–154.
• Munton, J. (2017) The eye's mind: perceptual process and epistemic norms. Philosophical Perspectives, 31(1): 317–347.
• Siegel, S. (2011) Cognitive penetrability and perceptual justification. Noûs, 46(2): 201–222. https://doi.org/10.1111/j.1468-0068.2010.00786.x
• Siegel, S. (2013) The epistemic impact of the etiology of experience—symposium. Philosophical Studies, 162(3): 697–722. https://doi.org/10.1007/s11098-012-0059-5. This essay was published with commentaries by Richard Fumerton, Michael Huemer, and Matthew McGrath, as well as Siegel's replies.
Cultural analysis:
• Coates, T.-N. (2015) Between the World and Me. New York: Spiegel & Grau.
• Ward, J. (ed.) (2017) The Fire This Time: A New Generation Speaks About Race (reprint edition). New York: Scribner.
• Lebron, C.J. (2018) The Making of Black Lives Matter: A Brief History of an Idea (reprint edition). New York: Oxford University Press.
DISCUSSION QUESTIONS

1. Bias-induced illusions could differ with respect to how long they persist in light of further information. For example, suppose someone sees a black man as angry even though his facial expression is neutral. What kinds of factors would you expect to make an illusion like this end, once it begins? What kinds of experiments could measure the persistence of illusions?
2. Suppose you knew someone was susceptible to bias-induced illusions, and you had to brainstorm ways to make them less susceptible to them. What strategies come to mind? Are they focused around individuals, groups who have the same susceptibilities, groups that differ greatly in their susceptibilities? What advantages and drawbacks do the potential strategies have?
3. Can you think of examples besides racialized bias where someone's perceptions are inflected with their antecedent commitments? Can you find examples of this in fiction? In the cases you come up with, does the influence help the person epistemically, or make it worse for them, or both?
4. In discussing Levin and Banaji (2006), Firestone and Scholl (2015) argue that the effect isn't due to racial categories, on the basis of a second study in which the "black" face still looked darker even though the images were blurred in ways that masked the features that standardly elicit the racial categorizations "black" and "white." If the original effect is stronger than the subsequent one, does the critique by Firestone and Scholl settle whether racial categories affect the perception of lightness?
5. Some people think that our biases directly affect perception (how things look to us) whereas other people think that biases only affect the judgments and interpretations we form on the basis of what we see (what we think about how things look). What kinds of evidence might help us decide between these two possibilities?
REFERENCES

Allport, G. (1954) The Nature of Prejudice. Boston, MA: Addison-Wesley.
Bijlstra, G., Holland, R.W., Dotsch, R., Hugenberg, K., and Wigboldus, D.H. (2014) Stereotype associations and emotion recognition. Personality and Social Psychology Bulletin, 40(5): 567–577. https://doi.org/10.1177/0146167213520458
Cadogan, G. (2018) Black and blue. In J. Ward (ed.), The Fire This Time (pp. 129–144). New York: Scribner.
Clark, A. (2018) Priors and prejudices: Comments on Susanna Siegel's The Rationality of Perception. Res Philosophica, 95(4): 741–750.
Coates, T.N. (2015) Between the World and Me. New York: Random House.
Cooper, A.J. (1892) A Voice from the South. Xenia, OH: Aldine Printing House.
Correll, J., Park, B., Judd, C.M., and Wittenbrink, B. (2002) The police officer's dilemma: Using ethnicity to disambiguate potentially threatening individuals. Journal of Personality and Social Psychology, 83(6): 1314–1329.
Correll, J., Park, B., Judd, C.M., Wittenbrink, B., Sadler, M.S., and Keesee, T. (2007) Across the thin blue line: Police officers and racial bias in the decision to shoot. Journal of Personality and Social Psychology, 92(6): 1006–1023.
Crenshaw, K. (2015) Black girls matter. Ms. Magazine, 25(2): 26–29.
DiIulio, J. (1996) My black crime problem, and ours. City Journal, 6(2): 14–28.
Dotson, K. (2017) Theorizing Jane Crow, theorizing unknowability. Social Epistemology, 31(5): 417–430.
Duncan, B.L. (1976) Differential social perception and attribution of intergroup violence: Testing the lower limits of stereotyping of blacks. Journal of Personality and Social Psychology, 34(4): 590–598.
Eberhardt, J.L., Goff, P.A., Purdie, V.J., and Davies, P.G. (2004) Seeing black: Race, crime, and visual processing. Journal of Personality and Social Psychology, 87(6): 876–893.
Eberhardt, J.L., Davies, P.G., Purdie-Vaughns, V., and Johnson, S.L. (2006) Looking deathworthy: Perceived stereotypicality of black defendants predicts capital-sentencing outcomes. Psychological Science, 17(5): 383–388.
Firestone, C. and Scholl, B. (2015) Can you experience 'top-down' effects on perception? The case of race categories and perceived lightness. Psychonomic Bulletin & Review, 22(3): 694–700. doi:10.3758/s13423-014-0711-5
Fumerton, R. (2013) Siegel on the epistemic impact of 'checkered' experience. Philosophical Studies, 162(3): 733–739.
Ghijsen, H. (2018) How to explain the rationality of perception. Analysis, 78(3): 500–512.
Glaser, J. (2014) Suspect Race: Causes and Consequences of Racial Profiling. New York: Oxford University Press.
Glaser, J. and Knowles, E. (2008) Implicit motivation to control prejudice. Journal of Experimental Social Psychology, 44(1): 164–172.
Goff, P.A., Jackson, M.C., Di Leone, B.L., Culotta, C.M., and DiTomasso, N.A. (2014) The essence of innocence: Consequences of dehumanizing black children. Journal of Personality and Social Psychology, 106(4): 526–545.
Hinton, E. (2016) From the War on Poverty to the War on Crime. Cambridge, MA: Harvard University Press.
Huemer, M. (2013) Epistemological asymmetries between belief and experience. Philosophical Studies, 162(3): 741–748.
James, L., Vila, B., and Daratha, K. (2013) Results from experimental trials testing participant responses to White, Hispanic and Black suspects in high-fidelity deadly force judgment and decision-making simulations. Journal of Experimental Criminology, 9(2): 189–212.
Johnson, D.J. and Wilson, J.P. (2019) Racial bias in perceptions of size and strength: The impact of stereotypes and group differences. Psychological Science. https://doi.org/10.1177/0956797619827529
Johnson, K.L., Freeman, J.B., and Pauker, K. (2012) Race is gendered: How covarying phenotypes and stereotypes bias sex categorization. Journal of Personality and Social Psychology, 102(1): 116–131. https://doi.org/10.1037/a0025335
Lebron, C.J. (2017) The Making of Black Lives Matter: A Brief History of an Idea. New York: Oxford University Press.
Lerman, A. and Weaver, V. (2014) Arresting Citizenship: The Democratic Consequences of American Crime Control. Chicago, IL: University of Chicago Press.
Levin, M. (1992) Responses to race differences in crime. Journal of Social Philosophy, 23(1): 5–29.
Levin, D.T. and Banaji, M.R. (2006) Distortions in the perceived lightness of faces: The role of race categories. Journal of Experimental Psychology: General, 135(4): 501–512.
Lord, E. (2020) The vices of perception. Philosophy and Phenomenological Research.
McGrath, M. (2013a) Phenomenal conservatism and cognitive penetration: The 'bad basis' counterexamples. In C. Tucker (ed.), Seemings and Justification: New Essays on Dogmatism and Phenomenal Conservatism (pp. 225–247). New York: Oxford University Press.
McGrath, M. (2013b) Siegel and the impact for epistemological internalism. Philosophical Studies, 162(3): 723–732.
Mendelberg, T. (2001) The Race Card: Campaign Strategy, Implicit Messages, and the Norm of Equality. Princeton, NJ: Princeton University Press.
Morris, M. (2016) Pushout: The Criminalization of Black Girls in Schools. New York: The New Press.
Murray, P. (1970) The liberation of black women. In B. Guy-Sheftall (ed.), Words of Fire: An Anthology of Black Feminist Thought (pp. 186–198). New York: The New Press.
Pautz, A. (2020) The arationality of perception: Comments on Siegel. Philosophy and Phenomenological Research.
Payne, B.K. (2001) Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 81(2): 181–192.
Payne, B.K. (2006) Weapon bias: Split-second decisions and unintended stereotyping. Current Directions in Psychological Science, 15(6): 287–291.
Peacocke, C. (2018) Are perceptions reached by rational inference? Comments on Susanna Siegel, The Rationality of Perception. Res Philosophica, 95(4): 751–760.
Plant, E.A. and Peruche, B.M. (2005) The consequences of race for police officers' responses to criminal suspects. Psychological Science, 16(3): 180–183.
Sagar, H.A. and Schofield, J.W. (1980) Racial and behavioral cues in black and white children's perceptions of ambiguously aggressive acts. Journal of Personality and Social Psychology, 39(4): 590–598.
Siegel, S. (2016) Epistemic charge. Proceedings of the Aristotelian Society, 115(3): 277–306.
Siegel, S. (2017) The Rationality of Perception. New York: Oxford University Press.
Steele, C. (2011) Whistling Vivaldi: And Other Clues to How Stereotypes Affect Us. New York: W.W. Norton.
Stokes, M.B. and Payne, B.K. (2010) Mental control and visual illusions: Errors of action and construal in race-based weapon misidentification. In R.B. Adams Jr., N. Ambady, K. Nakayama, and S. Shimojo (eds), The Science of Social Vision (pp. 295–305). New York: Oxford University Press.
Wilson, J.P., Hugenberg, K., and Rule, N.O. (2017) Racial bias in judgments of physical size and formidability: From size to threat. Journal of Personality and Social Psychology, 113(1): 59–80. https://doi.org/10.1037/pspi0000092
Yancy, G. (2008) Elevators, social spaces, and racism: A philosophical analysis. Philosophy and Social Criticism, 34(8): 843–876.
6
Epistemic Injustice and Implicit Bias
Jules Holroyd and Katherine Puddifoot
How, if at all, do knowledge and social power relate to each other? A commonsense thought is that our practices of inquiry and knowledge-seeking have little to do with politics and social power: we simply find facts and build knowledge. But feminist philosophers have long argued that power and politics affect our knowledge-seeking practices. What is known and how inquiry proceeds are thoroughly inflected by social dynamics. Moreover, a strand of epistemology called social epistemology has emphasized that human practices of seeking knowledge often have a social dimension. We depend upon each other as sources of knowledge and understanding, in order to access resources for developing our knowledge, and in order that we can contribute to shared understandings of our world. This is to say that our knowledge-seeking practices – our epistemic practices – are social. This helps us to see how social power can affect knowledge-seeking: because our epistemic practices are social, the kinds of power dynamics that we find in social relations can impact on them. In particular, unequal power relations can impact our knowledge-seeking endeavors.

In this chapter, we set out some of the ways that social power can affect knowledge, identifying instances of what philosophers have called 'epistemic injustice'. This notion is explored by Patricia Hill Collins in Black Feminist Thought. Collins explores the ways that "power relations shape who is believed and why" (2000, 270). More recently, epistemic injustice has been characterized as "consisting, most fundamentally, in a wrong done to someone specifically in their capacity as a knower" (Fricker 2007, 1). The ability to produce knowledge and contribute to inquiry is a fundamental part of what it is to be human, Fricker argues. As we will see, there are various ways in which one might be harmed in this capacity. Here we focus on how racism and sexism can contribute to epistemic injustice.

The film Hidden Figures provides a focal point for our analysis of epistemic injustices. We illustrate some key forms of epistemic injustice with reference to the experiences of some of the black women—Katherine Goble (now Katherine Coleman Johnson Goble), Dorothy Vaughan and Mary Jackson—working at NASA (National Aeronautics and Space Administration) in the United States in the early 1960s. These women made enormous
These women made enormous contributions to the mathematical projects required to send the astronaut John Glenn into space, as described in the book by Margot Lee Shetterly and depicted in the 2016 film of the same title, Hidden Figures. These contributions were made notwithstanding the fact that, given the intersecting oppressions they faced, they each experienced various kinds of injustice, including epistemic injustice, as we describe here. While the film is an important retelling of their stories, we note the ways that it may itself be implicated in certain epistemic injustices.
As we describe some of the different ways in which epistemic injustice may be perpetrated, we present the case for supposing that implicit biases might contribute to these forms of epistemic injustice. Since implicit biases cannot be overcome simply by intending to avoid bias, this poses difficult questions about how to address epistemic injustices and, in particular, about what sorts of collective and structural strategies need to be employed to confront them (see McHugh and Davidson, Chapter 9, “Epistemic Responsibility and Implicit Bias”; Ayala-López and Beeghly, Chapter 11, “Explaining Injustice: Structural Analysis, Bias, and Individuals”; Madva, Chapter 12, “Individual and Structural Interventions”).
1 Kinds of Epistemic Injustice

There are various ways in which we participate in social practices as knowers and knowledge seekers. We get knowledge from others and share it with them; we try to access resources in order to develop our understanding of the world further; and we try to make, and be recognized for, contributions to our shared understanding.
There are some obvious ways in which societies that are structured unequally will lead to epistemic injustices and hinder knowledge production. One protagonist of Hidden Figures, Mary Jackson, showed great flair for engineering, but could not work as an engineer at NASA without formal qualifications. These qualifications were only achievable by attending classes at a whites-only school. Racial segregation posed a barrier to Jackson—and many others—insofar as her access to knowledge, and to the markers of knowledge (the qualifications), was restricted due to race. Jackson’s determination led her to court to petition for access to the necessary classes. Even when she gained access to those classes, she did so only after expending energy and confronting indignities that white students did not have to face. This is a clear case, then, in which racism hinders access to knowledge.
But philosophers have identified other, perhaps less obvious, ways in which epistemic injustices might manifest. We outline these below, showing how the dynamics of racism and sexism in 1960s America led to various kinds of epistemic injustice against the striving mathematicians and engineers Goble, Vaughan, and Jackson—and how implicit biases might implicate us all in epistemic injustices today.
1.1 Testimonial Injustice

Epistemic injustice can occur when speakers attempt to provide knowledge, insights, and understanding to other people. When a speaker attempts to impart knowledge, an assessment is made, by the audience, of whether the speaker is credible. Judgments of credibility involve judgments of the speaker’s reliability and trustworthiness as a knower. The listener will (implicitly or explicitly) consider questions such as: ‘Is she intellectually capable enough to have acquired or produced knowledge?’, ‘Is she trustworthy?’, and ‘Can I believe her?’
Sometimes the resultant assessments are not based on the personal characteristics of the individual speaker, such as her track record, or her qualifications, or whether, for instance, she shows proficiency with the technical terms of a domain of knowledge. Instead they are based on prejudices about the individual speaker’s social group. For example, a speaker might be deemed not to be intellectually capable because she is a woman and women are thought to be intellectually inferior to men. Or a speaker might be deemed not to be trustworthy because she is black and black people are taken to be dishonest. This is part of what Collins has in mind when she says that “power relations shape who is believed and why” (2000, 270). Sexist and racist stereotypes mean that some people are disempowered as knowers because of the stereotypes through which others see them. More recently, the phenomenon whereby speakers are given less credibility than they deserve due to prejudice has been called testimonial injustice by Miranda Fricker (2007). Testimonial injustice occurs when speakers are treated as less reliable and trustworthy than they really are, and not believed when they should be, due to prejudice.
We see both dimensions of this form of epistemic injustice in Hidden Figures. In the opening scene, Goble, Jackson, and Vaughan are at the side of the road, broken down. Vaughan is under the engine, trying to fix it. The three are immediately on their guard when a police officer pulls up to inquire about the troublesome spot they are in. ‘You being disrespectful?’ the officer challenges Jackson, when she points out they didn’t choose to break down there. His tone is overtly hostile—despite their deferential demeanor. The threatening tone of the exchange, however, rapidly changes when their ID reveals that they are employees of NASA. The default assumptions made by the police officer are revealed as he expresses surprise: ‘I had no idea they hired…’ ‘…there are quite a few women working in the space program’—Vaughan saves them all the indignity of his racist utterance. Impressed by their employment, the police officer has nonetheless revealed his racist and sexist default assumptions; never did he suppose they could be mathematicians. Only the veneer of esteem that comes with doing the calculations to get the rockets into space for NASA suffices to undo the credibility deficit the police officer brings to their exchange.
There are many examples in the film of people not giving the protagonists the credibility they deserve. Fricker refers to these as credibility deficits (Fricker 2007, 17). Testimonial injustices involving credibility deficits like this are harmful.
Judging another person not to be credible, and not believing what they say, simply because of the social groups to which they belong is dehumanizing (Fricker 2015). Moreover, failing to accord people the credibility they deserve can damage the person denied credibility, and also other members of their community, who do not appreciate and cannot make use of the knowledge that they provide. Others lose out on knowledge, too.
Thinking about epistemic injustice in terms of credibility also helps us to see the importance of markers of credibility—norms or indicators that help us to make accurate judgments of credibility. One such norm involves titles that indicate job roles and statuses—these help us to appropriately understand an individual’s level of expertise. Recall, the police officer swiftly changes his tone when the three women show their NASA passes—key markers of esteem and credibility. This helps us to see a further way in which one could suffer an epistemic injustice: by being denied the markers of credibility—as was Dorothy Vaughan, as depicted in Hidden Figures. Vaughan has been undertaking work in effect supervising the team of black women working as human computers within NASA, but is denied promotion to supervisor, and thus denied a key marker of credibility. This also makes it more likely that she will suffer testimonial injustice in future, and so paves the way for further harms.

1.2 Testimonial Injustice and Implicit Bias

Some people might optimistically think that many of these forms of epistemic injustice are rarer today, insofar as many of us reject the overt racism that characterized social relations in the 1960s. However, racist dynamics have persisted alongside the trend that has seen many people profess egalitarianism and a commitment to equal treatment. How can this be explained? Are people who say they care about equal treatment lying? Maybe. But recent research from psychology has provided a competing explanation: people who are sincere in professing a commitment to fair treatment may also have implicit biases. Implicit biases might be one of the mechanisms involved in perpetuating epistemic and other forms of injustice (other factors may include social structures, such as segregation, of the sort we mentioned above, and simply unjust and unequal access to the material resources needed to gain and contribute knowledge). Other chapters in this volume examine the nature and moral implications of our implicit biases (Dominguez, Chapter 8, “Moral Responsibility for Implicit Bias: Examining Our Options”; McHugh and Davidson, Chapter 9, “Epistemic Responsibility and Implicit Bias”), and how our automatic patterns of thought, feeling, and behavior can be informed by stereotypes (see Leboeuf, Chapter 2, “The Embodied Biased Mind”; Beeghly, Chapter 4, “Bias and Knowledge: Two Metaphors”; Siegel, Chapter 5, “Bias and Perception”). Some of these stereotypes and behaviors, as we will see below, concern people’s capacities as knowers and testifiers. Thinking about implicit bias and epistemic injustice can also help us to refine our understanding of the injustices at issue.
Credibility involves being judged reliable as a knower, and trustworthy when one imparts knowledge. But we might have implicit biases that impact these judgments; that is, we may have some automatic thoughts about the extent to which people are reliable and trustworthy. Some of the early studies on implicit bias suggested that people tend to be more ready to associate positive qualities with white people than with black people, especially when it comes to evaluating their competences. For example, in one study (Dovidio and Gaertner 2000), individuals were asked to evaluate the qualifications and credentials of job applicants, and report back on how likely they were to recommend that the individual be hired. Sometimes the same materials would be identified as belonging to a black candidate and other times to a white candidate. When the applicant’s materials indicated they were racialized as black, and where there was room for discretionary judgment, the evaluations were less positive, and weaker hiring recommendations were made, than when the applicant was indicated as white. One explanation for these differential evaluations is that there are implicit biases at work here—automatic associations with positive qualities—that lead people to see white applicants as more suitable, competent candidates. While the impact of implicit bias in any one case may be marginal, if these biases are pervasive—as they have been found to be—then they could be part of an explanation for widespread discrimination. This could be because implicit biases led the evaluators to regard the white applicants as more credible knowers.
But maybe this has nothing to do with the applicants’ knowledge-relevant capacities. Maybe the evaluators have implicit preferences for white applicants, whether or not they (implicitly) think of white people as more reliable knowers? That could be—but other studies suggest that sometimes implicit biases concern specific evaluations of credibility and trustworthiness. Some psychologists have looked specifically at whether implicit racial bias is related to the judgments of trustworthiness that people make. For example, Stanley et al. (2011) first asked participants to take a race IAT. Then they asked participants to rate a series of faces for how trustworthy they appeared. They found that to the extent that individuals harbored stronger positive associations with white people than with black people, they were also more likely to judge white faces as more trustworthy than black faces. This suggests that simply in virtue of being white, people get a credibility boost that is not afforded to black people. Insofar as credibility judgments interact (cf. Anderson 2012; Coady 2017), this means that white people will be more likely to be believed than black people.
Accordingly, the evidence from empirical psychology suggests that, even if people try to be fair-minded and treat people equally, implicit bias might mean they are implicated in testimonial injustice. And these implicit biases may be widespread precisely because they reflect a social context in which there is unequal access to the markers of credibility, as we described earlier.
1.3 Epistemic Appropriation

Testimonial injustice and credibility deficits can prevent the sharing of knowledge. However, epistemic injustice can also operate via credibility denial in a different way. In some cases, someone may produce knowledge, but the knowledge producer does not get credit for the idea. Their contribution is recognized, but that it is their contribution is not recognized. This has been called epistemic appropriation (Davis 2018). For example, suppose a black student makes a contribution in class. Shortly after, a white student takes up the idea and the rest of the class discuss it as if it had not already been introduced. The important idea is recognized, but the student who first introduced it does not get due credit. Their idea has been appropriated.
As Davis explains, there can be voluntary epistemic appropriation: for example, someone from a group that is marginalized or stigmatized might allow her ideas to be published under another person’s name because she does not believe that the ideas would be well received if known to be hers. However, epistemic appropriation may often be involuntary; ideas can be adopted without recognition of who produced them. When epistemic appropriation occurs, the knowledge producer does not gain the boost to their credibility that they are due. Meanwhile, those already in positions of power who are taken to be knowledge producers can gain the benefit of undeserved recognition. For example, in the film Hidden Figures, Katherine Goble is responsible for authoring reports that are crucial to NASA’s achievements in the Space Race. But she is denied the credit that she deserves for her contribution because her white male colleague, Paul Stafford, is named as sole author of the reports. Not only does Goble fail to receive the credit she is due; Stafford appropriates it instead. This harms her in various ways: if she had been named as author on the reports, then she could have gained a boost to her status as a mathematician. She could consequently have been given opportunities to engage in other projects, further contributing to the knowledge in her field. In contrast, while her name is excluded from the reports, her white male colleague gains an unwarranted boost to his credibility—not only is he credited as author, he is wrongly credited as the sole author of the work, and taken to have been solely responsible for the knowledge contained therein.
Epistemic appropriation can also occur over time. For example, one of the projects of the book and film Hidden Figures is to ensure that credit is appropriately accorded to the black women who were central to the Space Race. Notwithstanding their crucial role, prior to the publication of the book and the later release of the film, their important epistemic contributions were not widely known. Yet recording their roles, as Margot Lee Shetterly does in the book, is a way of ensuring, over time, that epistemic appropriation does not persist, that due credit is given, and that at least some of the wrongs of previous epistemic appropriation are rectified.
1.4 Epistemic Appropriation and Implicit Bias

Epistemic appropriation occurs when someone’s ideas are recognized, but not properly attributed. Might implicit bias be involved in this? Patterns of implicit bias—gender and racial bias, say—might be implicated, particularly if we think about the appropriation of ideas at the level of social identities and widely held stereotypes or biases in a social context. An example of this concerns the kind of implicit stereotypes to do with trustworthiness that we talked about in relation to testimonial injustice. Patricia Hill Collins has also articulated the kinds of ‘controlling images’ through which black women in particular, in the USA, may be seen: as nurturing and obedient ‘mammies’ (80); as unfeminine, emasculating ‘matriarchs’ (82); as materialistic, domineering, but dependent ‘welfare queens’ (88); as professional but unfeminine ‘black ladies’ who take white jobs through affirmative action programs (89); or as sexually aggressive ‘jezebels’ (90). These controlling images may be ones to which racist people knowingly subscribe; but many people reject these stereotypes. In the latter case, it may also be that the stereotypes are nonetheless held implicitly, influencing people’s judgments without them realizing or intending this to be the case. As Goff and Kahn (2013) note, few studies on implicit bias have examined black women in particular, tending to focus on stereotypes that attach to white women or black men. This means that there is a gap in the psychological literature regarding the associations that black women might face.
Why are black women seen through these kinds of stereotypes, when there are instead available to us inspirational stories such as those of the women who worked at NASA—Goble, Vaughan, and Jackson? Collins traces the political utility of these stereotypes in entrenching the oppression and exploitation of black women. In contrast, NASA, and aeronautical engineering, is widely stereotyped as the kind of endeavor characteristic of white men. Indeed, a study that examined the association between men versus women and the sciences versus the arts found that participants strongly associated men with the sciences (Nosek et al. 2002).
As it was characterized earlier, appropriation concerned individual knowledge producers not being recognized. But we argue that we can think of the stereotyping of certain fields or subjects as a kind of collective epistemic appropriation. Activities in which black women made key contributions have come to be stereotyped as activities that are typically done by white men. The importance of that field is recognized, as is the ingenuity and scientific rigor of those working in it; the ideas are seen as fundamental to the advancement of human knowledge. But that the field was shaped by black women, that their ideas have been fundamental to the advancement of human knowledge, has been overlooked and ignored. Aeronautical engineers are stereotyped as white men. Instead of aeronautical engineers, black women are stereotyped with the controlling images that Collins articulates.
This is a kind of collective epistemic appropriation. Insofar as implicit biases are implicated in these stereotypes and shape our patterns of what Charles Mills calls ‘collective remembering and amnesia’ (2007, 28–29)—remembering and forgetting whose epistemic contributions they were, in particular—we can see how people might, unintentionally, be implicated in this kind of epistemic injustice.

1.5 Epistemic Exploitation

In a memorable set of scenes from Hidden Figures, Katherine Goble has to leave her office building and run half a mile across the NASA campus to reach the nearest available toilet, since the washrooms are segregated. She has asked the only other woman in her office, a white administrator, where the toilet is, and was told that the administrator did not know where her toilet was. In an intensive research environment in which people do not take breaks, it is quickly noticed that she is frequently away from her desk, and eventually her white male boss demands an explanation. In front of a large office full of white men (and one white woman), Goble is required to describe the difficulties and indignities that she has encountered every time she has needed to go to the toilet. She articulates the frustration and humiliation she faces due to not having access to a toilet nearby, and due to being made to drink from a coffee pot that none of her colleagues wishes even to touch.
There are obvious obstacles to developing knowledge here—literally having less time to do so due to having to take longer bathroom breaks. But in addition, Goble is required to articulate the difficulties that she has faced within her work environment due to being a member of a stigmatized and marginalized group. She is placed in a situation in which she has to educate members of the dominant group about the inequities that she has faced. She suffers stress and embarrassment in doing so.
Goble’s experience has many of the features commonly found in epistemic exploitation. Epistemic exploitation occurs when members of a marginalized group are expected or required to educate members of privileged groups about injustices that are faced by those who share their social identity (Berenstain 2016; Davis 2016; Spivak 1999; Lorde 1995, 2007). Such educative work requires cognitive and emotional labor that is uncompensated (financially or otherwise), mentally draining, and time-consuming, taking their attention away from other activities that might have been rewarded. The efforts involved are not viewed as work, but people may face negative repercussions if they do not engage in them. For example, if a member of a marginalized group refuses to educate members of the privileged group, the people who made the request may be affronted. Moreover, stigmatized individuals may be viewed as confirming negative stereotypes about their groups, such as the stereotype that women are irrational, or that black people are uncooperative. These are all harms, and they are instances of injustice when social power explains why some people face repeated demands for burdensome cognitive work that goes unrecognized and unrewarded.
In the film Hidden Figures, Goble’s explanation of her experiences leads to action, as her white male boss racially desegregates the toilets and removes the segregating labels from the coffee pots. Note, though, that this scene is not historically accurate, and it is also problematic: it perpetuates the myth of the ‘white savior’ who comes along to end racism. While the film is of course entitled to fictional license, the problematic retelling of stories may be one of the reasons why some authors have argued that there is a responsibility for the oppressed to educate others (Medina 2011)—such education is likely more reliable, and less likely to pander to problematic narratives whose aim is to make white people feel good. Yet still, although members of marginalized groups are asked to provide information, the testimony provided may not be deemed credible, precisely due to stereotypes about the marginalized groups to which they belong. This combination of epistemic exploitation and testimonial injustice explains why members of marginalized groups are required to repeatedly explain the injustices that they face. It can also result in individuals engaging in what Kristie Dotson calls testimonial smothering (Dotson 2011): truncating or silencing testimony to avoid the risks associated with not being properly listened to.

1.6 Epistemic Exploitation and Implicit Bias

How might implicit bias be involved in epistemic exploitation? In the face of systematic implicit bias, the onus on members of marginalized groups to educate others might be particularly heavy. First, where discrimination and inequalities result from implicit bias, they may be rationalized away, and so be harder to recognize as inequalities or discrimination. It is therefore more likely that a demand will be placed on members of marginalized groups to educate others about how specific judgments and actions resulting from implicit bias are unjust. Second, negative implicit attitudes towards members of marginalized groups might increase the chance that the cognitive and emotional labor that they contribute will be systematically undervalued, an important component of epistemic exploitation. Third, perpetrators of discrimination may be resistant to the idea that they are complicit in treating people unfairly, in part because it is difficult for them to notice that they have, and are influenced by, implicit bias. People may have misleading evidence—they reflect on their beliefs, and come to believe that they are fair-minded and do not treat people unfairly. They don’t notice their implicit biases and the influence these have on their behaviors. In fact, there is some evidence that to the extent that people think they are being objective and are not influenced by bias, they are more likely to be biased (Uhlmann and Cohen 2007). In such cases, it may be harder for those trying to explain and educate about experiences of discrimination or injustice to have their testimony accepted.
This might be so even if someone is explicitly asked to educate others; the hearer may find the testimony harder to believe if it does not cohere with their own evidence (namely, that they are objective and unbiased). So the fact that bias is implicit, and thus hard to notice, may increase the likelihood of epistemic exploitation.

1.7 Hermeneutical Injustice

The concepts that we have shape the way that we understand and communicate our experiences. There can be injustices surrounding whether people have access to, and can utilize, concepts and other conceptual resources (e.g. narratives, scripts) that capture and can be used to understand their experiences. Members of dominant groups can shape and unduly influence what concepts are widely available. For example, white, middle-aged, able-bodied, cisgender, and middle- or upper-class males have traditionally had access to positions and resources that enabled them to shape the concepts—the interpretive, or ‘hermeneutical’, resources—that are widely available. They have been the ones typically occupying positions of ‘hermeneutical power’; they have been, for example, the politicians, journalists, and educators—the people who have most influence over the concepts that are widely in use. Concepts that capture their experiences would be dominant. Meanwhile, concepts that describe the experiences of people in positions of less power may not be widely understood—or may not even be available at all, until people undertake the cognitive and emotional work to try to articulate and describe aspects of their experience.
For example, the concept of sexual harassment came to prominence in the 1970s only after women worked collectively to understand and articulate the experiences that were making their participation in the workplace so difficult and so costly. (Note that this followed middle-class white women’s entry, postwar, into the workforce. Black women and working-class women—with fewer opportunities to shape the hermeneutical resources—had long been in the paid workforce, frequently with little choice about this, and had been subject to sexual harassment, assault, and rape with no opportunity for recourse or redress.) Even once it was more prominent, the concept was not widely understood and not part of the shared conceptual resources (arguably the need for movements such as #MeToo reflects the extent to which it is still not well understood). Indeed, it can be in the interests of the dominant group to exclude from the widely available conceptual resources those concepts that capture the experiences of marginalized groups (see Fricker 2007 for discussion of this example). This means that, for example, women who experienced sexual harassment had—and still have—a hard time getting others to understand their experiences and what is wrong with such treatment.
The notion of hermeneutical injustice was introduced by Miranda Fricker (2007) to capture this phenomenon. She characterizes it as obtaining when individuals lack the concepts they need, due to a gap in the shared conceptual resources, which is in turn due to some groups having undue influence over the formation of those resources, with others having insufficient influence (hermeneutical marginalization).
As an instance of hermeneutical injustice, consider the experiences faced by Katherine Goble when she has to run half a mile to use the toilet. She is likely able to conceptualize this experience alongside the numerous other difficulties she has faced: racist structures mean she faces obstacles to meeting even her most basic needs, and they are a feature of her experience that she confronts regularly. Consider, though, the response of her white colleague after Goble has taken a necessarily lengthy bathroom break: “My God, where have you been? Have you finished yet?” One interpretation of this inability to comprehend is that her colleague lacks the conceptual resources to understand well the barriers that Goble faces. She just sees a long absence from the desk and an incomplete work package—she lacks the interpretive resources to conceptualize what Goble is experiencing in terms of racist social structures. Note that Goble herself does not lack the resources to make sense of her experiences. She is, however, hermeneutically marginalized, and therefore unable to communicate adequately with her colleagues about the racism she experiences, since they lack the relevant interpretive resources. This indicates we should expand the notion of hermeneutical injustice to include not only cases in which individuals lack resources to understand their own experience, but also those cases in which they have the resources, but are unable to effectively communicate about them (e.g. to people in positions of power who could act to change those experiences) (Mason 2011; Medina 2011; Pohlhaus 2012; Dotson 2012).

1.8 Hermeneutical Injustice and Implicit Bias

Recall Goble’s difficulties in communicating with her white colleagues about the racist structures that hindered her work. Given what we have said about implicit biases being a contributor to testimonial injustice and epistemic exploitation in particular, one might think that the development of the concept of implicit bias is particularly helpful for dealing with this form of epistemic injustice. Having the concept of implicit bias helps people who think they are unbiased and committed to fair treatment realize that in fact they may not be. As such, it may help people to realize that despite their good intentions, and despite their values, they might nonetheless sometimes discriminate, and in particular perpetrate epistemic injustice. When they think of aeronautical engineers, they might think immediately of a white man, rather than a black woman. When hearing about experiences of discrimination, they might automatically think ‘that person is wrong, I don’t discriminate’. But if this person has the concept of implicit bias, they might be better able to carefully reflect and notice that their automatic patterns of thought are biased.
They might be more willing to listen when they are called out, because they acknowledge that their own perceptions of how they thought and acted may not be reliable (see Hahn et al. 2014 for studies suggesting that when others prompt us to reflect, we can better notice our own biases). Moreover, having the concept of implicit bias could help to articulate widespread patterns of discrimination and exclusion that are pervasively faced, even in contexts and interactions in which explicit prejudice does not prevail. Having the concept of implicit bias can also prompt us to reflect on the kinds of changes we need to make to society and institutions in order to make it less likely that implicit biases have a role (e.g. changes to hiring practices, reducing informal segregation, etc.). As such, having the concept of implicit bias in our shared conceptual resources can fill a gap in those resources. And once that gap is filled, it might help people recognize the ways that they, and the social structures they inhabit, are involved in discrimination, and motivate steps to address these injustices.
Once again, however, the optimism expressed here needs to be qualified. The concept of implicit bias has been around for some time, but there has been far from universal recognition of the phenomenon and of the need for institutions to address the problems that follow from it. Implicit bias remains a contested concept in psychology (see Brownstein, Chapter 3, “Skepticism About Bias”). This might be explained by the fact that recognition of implicit bias and its effects threatens the legitimacy of the power and status of privileged individuals. We should not expect the mere introduction of the concept of implicit bias to trigger actions to right epistemic and other wrongs. Instead, it is necessary to reflect carefully on how to use the concept effectively.

1.9 Contributory Injustice

We introduced above the idea that some concepts may not be available to all, so that it is harder for some people to express, or be understood in, the claims that they make about their experiences. As we described, some concepts may exist in some groups, but may not (yet) be part of the dominant conceptual resources. Contributory injustice, as characterized by Kristie Dotson (2012), occurs when someone is willfully ignorant in using concepts that thwart others’ abilities to contribute to the epistemic community. The injustice here lies specifically in people being unable to contribute to the dominant shared interpretive resources (rather than in the existence of a gap that hinders understanding). Because some individuals and groups are unable to contribute to shared interpretive resources, it becomes very likely that they will experience hermeneutical injustices of the sort we saw above.
As we saw, there may be competing sets of concepts and conceptual resources (narratives, scripts, counter-mythologies) that exist among different social groups to explain the experiences of group members. The dominant conceptual resources may be structurally prejudiced, for example, by lacking concepts that some people need to make sense of, or communicate about, aspects of their oppression. Consider the example of sexual harassment.
The dominant conceptual scheme may conceptualize certain behavior as banter, or just a bit of fun. So construed, it makes it harder for those who wish to use competing conceptual resources to capture the behavior—resources, such as sexual harassment, which capture the seriousness and the harm of the behavior they are experiencing. The widespread use of competing resources (banter) poses obstacles for those using marginalized resources (sexual harassment) to express their experiences. Those who try to articulate their experiences in these terms, in particular to those who use the dominant conceptual resources, are thwarted in contributing knowledge and understanding. As we noted, when it is in people’s interests to ignore other important concepts, they are willfully ignorant—their ignorance is motivated by their interest in maintaining the status quo. Willful ignorance is helpfully characterized by Gaile Pohlhaus as “a willful refusal to acknowledge and to acquire the necessary tools for knowing parts of the world” (Pohlhaus 2012, 729; see also Mills 1997; 1998; Sullivan and Tuana 2007; Tuana and Sullivan 2006).
For example, in the film Hidden Figures, an interaction between Vivian Mitchell, the white supervisor of the black human computer team, and Dorothy Vaughan can be read in terms of contributory injustice. In one scene Mitchell, seemingly genuinely, declares that she has nothing against Vaughan, who was at that time her subordinate—professionally, and socially, as a black woman in a racist social context. “I know. I know you probably believe that”, Vaughan replies. Vaughan both believes that Mitchell doesn’t have anything personally against her, and that, despite her protestations, Mitchell really does harbor prejudice against her, due to her status as a black woman.
One way to understand this is in terms of the competing concepts of prejudice that each of them has. Reading between the lines, it is likely that Mitchell has a conception of prejudice—one which figures in the dominant interpretive resources—that is characterized by race-based hatred or animosity. Since she doesn’t hold such attitudes towards Vaughan, she isn’t prejudiced against her (according to this conception of racial prejudice). Again reading between the lines, we might suggest that Vaughan holds a different conception of prejudice—one which does not depend on these kinds of mental states, but rather on the ideology that one accepts or assumes, as revealed by one’s behaviors. For example, that Mitchell accepts a workplace in which there are segregated job roles and assignments, bathrooms, and coffee pots constitutes prejudice, and demonstrates Mitchell’s subscription to a racist ideology, on this view, irrespective of whether she harbors animosity.
If the dominant conceptual resources suppose that racism must be characterized by hatred or animosity, then it will be harder for competing conceptions of racism—those which are a better fit for the commonplace experiences of racism that pervade societies structured by racist hierarchy—to be expressed and understood. The use of the more limited conception of racism could be a form of contributory injustice if those who use it do so in willful ignorance, namely, a willful refusal to gain the tools needed to understand properly the nature of racism in which whites were, and are, implicated.
There is reason to suppose this is an accurate characterization of Mitchell, insofar as she willfully ignores important aspects of Vaughan’s experiences, or the concepts needed to make good sense of them, and insofar as she uses competing concepts that thwart Vaughan’s (and others’) abilities to get their experiences of racism well understood. In this instance, there is not only the epistemic harm (to members of marginalized groups) of being unable to contribute knowledge to shared understandings, but also the harm (to those who use the dominant conceptual resources) involved in being poorly placed to understand and address all aspects of racism, which goes beyond the problematic mental states of racists.

1.10 Contributory Injustice and Implicit Bias

Contributory injustice occurs when people are marginalized and so unable to contribute to shared understandings, because the concepts they use to do so are not part of the shared resources. Others continue to use other shared resources that make it harder for marginalized people to make important contributions (recall the example of sexual harassment/banter). While we have suggested that having the concept of implicit bias is a helpful contribution to the shared resources, we here want also to raise a cautionary note. Using the concept of implicit bias might perpetrate contributory injustice if doing so makes it harder for marginalized people to contribute to shared understandings. But why might having the concept of implicit bias make this the case? Haven’t we just argued that having the concept might make people more likely to recognize injustice?
The concern we want to raise is simply this. The concept of implicit bias comes from empirical research programs conducted by academic researchers. And, while it might be helpful in many ways, there is something problematic if people only heed the possibility that they are implicated in discrimination when academic researchers suggest as much. There are other sources of this knowledge: notably, the testimony of people who have experienced discrimination, even when the discriminator claims not to be biased and professes values of fair treatment. Moreover, there is a pattern of evidence all pointing towards the same conclusion—that people discriminate without realizing it, or unwittingly stereotype, or express bias even though they try to be fair. These sources of evidence should also be given due weight. If we only heed these ideas when they come from academic researchers, this can make it harder to contribute for people in marginalized groups, who often do not have access to the platforms that academics have, and who are denied access to some markers of credibility. It entrenches the idea that certain ideas, expressed by certain people, in certain ways, are legitimate, while others are not. So, in this context, only heeding the problems when academic researchers talk about implicit bias might make it harder for people from marginalized groups, who may use different concepts to capture their experiences of discrimination, to contribute to shared understandings.
Whether this constitutes a form of contributory injustice might depend on whether the use of the concept of implicit bias is done in willful ignorance, or on whether willful ignorance is in fact an inessential component of the notion of contributory injustice. In any case, it is important to consider the research findings about implicit bias alongside other sources of evidence about the phenomena of, and mechanisms involved in, discrimination.
2 Implicit Bias, Epistemic Injustice, and Remedies

From the discussion above, we can see that addressing epistemic injustices requires a variety of different strategies. Sometimes it will require correcting stereotypes—including implicit stereotypes—that undermine credibility. If these are implicit stereotypes and biases, then creative strategies might be needed, since individual efforts alone may be insufficiently robust to secure change (McHugh and Davidson, Chapter 9, “Epistemic Responsibility and Implicit Bias”; Ayala-López and Beeghly, Chapter 11, “Explaining Injustice: Structural Analysis, Bias, and Individuals”; Madva, Chapter 12, “Individual and Structural Interventions”). Sometimes it will mean enabling people who are marginalized to access the markers of credibility. It might also involve revisiting what those markers of credibility are, and whether they are in fact good markers (should we rely on qualifications or the prestige of schools, if not everyone has equal access to those markers of credibility?). It might involve ensuring that people get the recognition they are due, and that ideas are properly credited to them; this can include challenging social stereotypes. It might involve recognizing unfair epistemic burdens, and taking on additional commitments to educate oneself. It might involve collective efforts to reflect on the shared concepts we have, and where they come from, and whether there are different or better concepts available to us.
As we hope is clear from the above, these kinds of strategies are best conceived as collective projects—shaping our shared understanding of what indicates credibility; challenging the social stereotypes that foster appropriation of ideas; shaping our shared interpretive or hermeneutical resources. While there are some things that we can do as individuals—we can educate ourselves, we can try to be better listeners, we can try to reflect on our automatic judgments—these will be of limited efficacy against a backdrop of systemic biases and a history of structures of exclusion. Many of the changes needed are social changes—changes to structures that prevent people from accessing knowledge or communicating it effectively. While formal segregation has ended, de facto segregation—of housing, jobs, and social groups—still persists, meaning that barriers persist to accessing knowledge, accessing markers of credibility, and accessing and shaping the relevant conceptual resources for making sense of injustices. Changing social structures is not just a matter of justice, but also a matter of removing obstacles to knowledge and to opportunities to contribute to knowledge (Anderson 2010).
Remedying epistemic injustice—including epistemic injustices due to implicit biases—will require changes to what, and who, is on the curricula from which we learn, and to how that affects whether we implicitly associate, for example, aeronautical engineering with white men or with black women; and changes to our shared concepts, which requires what Kristie Dotson calls ‘transconceptual communication’—the ability to interact with people across social boundaries and try to understand the different concepts people use. This suggests that in addressing these various dimensions of epistemic injustice, we need to think about what we can do as individuals, but also as individuals who, with others, can bring about broader social change.
Where does this leave us with regard to our answer to the opening question of how knowledge and social power relate to each other? We have seen that there are many important ways in which the two interrelate. Unjust social systems can prevent knowledge from being produced, acknowledged, and acquired, through phenomena like testimonial injustice, hermeneutical injustice, and epistemic appropriation. And where some individuals in society lack knowledge about the social experiences of members of marginalized groups, this can produce injustices like contributory injustice and epistemic exploitation. Knowledge and power are deeply intertwined.
SUGGESTIONS FOR FUTURE READING

If you’d like to learn more about testimonial and hermeneutical injustice, you should read:

• Fricker, M. (2007) Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.
• Dotson, K. (2012) A cautionary tale: On limiting epistemic oppression. Frontiers: A Journal of Women Studies, 33(1): 24–47.
• Medina, J. (2011) The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology, 25(1): 15–35.
If you’d like to learn about other forms of epistemic injustice (including epistemic appropriation and exploitation), take a look at:

• Anderson, E. (2012) Epistemic justice as a virtue of social institutions. Social Epistemology, 26(2): 163–173.
• Kidd, I.J., Medina, J., and Pohlhaus, G. (eds) (2019) The Routledge Handbook of Epistemic Injustice. New York: Routledge.
• Davis, E. (2018) On epistemic appropriation. Ethics, 128(4): 702–727.
• Berenstain, N. (2016) Epistemic exploitation. Ergo, 3. http://dx.doi.org/10.3998/ergo.12405314.0003.022

For exploration of how epistemic injustice might be overcome, see:
• Sherman, B.R. and Goguen, S. (eds) (2019) Overcoming Epistemic Injustice: Social and Psychological Perspectives. Washington, DC: Rowman & Littlefield.
DISCUSSION QUESTIONS

1 Could it ever be just to give someone more credibility than they deserve? Why or why not?
2 What sorts of markers do we use to indicate credibility? Are these likely to be good, or reliable, markers? Are there ways in which these markers might reflect, or entrench, injustices?
3 Consider cases where someone voluntarily allows her ideas to be appropriated, knowing that her ideas will be better received if reported by someone else. Does the fact that it is voluntary make it unproblematic? Why or why not?
4 Do individuals from marginalized groups have responsibilities to educate others about aspects of oppression? Or is doing so always epistemically exploitative?
5 Consider notions that have been recently developed: mansplaining, manspreading, he-peating. To what extent is it legitimate to think of these notions as filling a gap in the hermeneutical resources? Would the absence of such concepts have been a hermeneutical injustice?
6 Is implicit bias a useful concept for identifying instances of epistemic injustice? Or might its use sometimes—or always—perpetrate contributory injustice?
Could it ever be just to give someone more credibility than they deserve? Why or why not? What sorts of markers do we use to indicate credibility? Are these likely to be good, or reliable markers? Are there ways in which these markers might reflect, or entrench, injustices? Consider cases where someone voluntarily allows her ideas to be appro priated, knowing that her ideas will be better received if reported by someone else. Does the fact that it is voluntary make it unproblematic? Why or why not? Do individuals from marginalized groups have responsibilities to educate others about aspects of oppression? Or is doing so always epistemically exploitative? Consider notions that have been recently developed: mansplaining, manspreading, he-peating. To what extent is it legitimate to think of these notions as filling a gap in the hermeneutical resources? Would the absence of such concepts have been a hermeneutical injustice? Is implicit bias a useful concept for identifying instances of epistemic injus tice? Or might its use sometimes—or always—perpetrate contributory injustice?
REFERENCES

Anderson, E. (2010) The Imperative of Integration. Princeton, NJ: Princeton University Press.
Anderson, E. (2012) Epistemic justice as a virtue of social institutions. Social Epistemology, 26(2): 163–173.
Berenstain, N. (2016) Epistemic exploitation. Ergo, 3. http://dx.doi.org/10.3998/ergo.12405314.0003.022
Coady, D. (2017) Epistemic injustice as distributive injustice. In I.J. Kidd, J. Medina, and G. Pohlhaus (eds), The Routledge Handbook of Epistemic Injustice. New York: Routledge.
Collins, P.H. (2000) Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment (second edition). New York: Routledge.
Davis, E. (2016) Typecasts, tokens, and spokespersons: A case for credibility excess as testimonial injustice. Hypatia, 31(3): 485–501.
Davis, E. (2018) On epistemic appropriation. Ethics, 128(4): 702–727.
Dotson, K. (2011) Tracking epistemic violence, tracking practices of silencing. Hypatia, 26(2): 236–257.
Dotson, K. (2012) A cautionary tale: On limiting epistemic oppression. Frontiers: A Journal of Women Studies, 33(1): 24–47.
Dovidio, J.F. and Gaertner, S.L. (2000) Aversive racism and selection decisions: 1989 and 1999. Psychological Science, 11: 319–323.
Fricker, M. (2007) Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.
Fricker, M. (2015) Epistemic contribution as a central human capability. In G. Hull (ed.), The Equal Society: Essays on Equality in Theory and Practice. Lanham, MD: Lexington.
Goff, P.A. and Kahn, K.B. (2013) How psychological science impedes intersectional thinking. Du Bois Review: Social Science Research on Race, 10(2): 365–384.
Hahn, A., Judd, C.M., Hirsh, H.K., and Blair, I.V. (2014) Awareness of implicit attitudes. Journal of Experimental Psychology: General, 143(3): 1369–1392.
Hidden Figures, 20th Century Fox, https://www.foxmovies.com/movies/hidden-figures
Lorde, A. (1995) Age, race, class, and sex: Women redefining difference. In B. Guy-Sheftall (ed.), Words of Fire: An Anthology of African American Feminist Thought (pp. 284–291). New York: The New Press.
Lorde, A. (2007) The master’s tools will never dismantle the master’s house. In Sister Outsider: Essays and Speeches (pp. 110–114). Crossing Press.
Mason, R. (2011) Two kinds of unknowing. Hypatia, 26(2): 294–307.
Medina, J. (2011) The relevance of credibility excess in a proportional view of epistemic injustice: Differential epistemic authority and the social imaginary. Social Epistemology, 25(1): 15–35.
Melfi, T. (2016) Hidden Figures. United States: Fox 2000 Pictures, Chernin Entertainment and Levantine Films.
Mills, C. (1997) The Racial Contract. Ithaca, NY: Cornell University Press.
Mills, C. (1998) Blackness Visible: Essays on Philosophy and Race. Ithaca, NY: Cornell University Press.
Mills, C. (2007) White ignorance. In S. Sullivan and N. Tuana (eds), Race and Epistemologies of Ignorance (pp. 26–31). New York: State University of New York Press.
Nosek, B.A., Banaji, M.R., and Greenwald, A.G. (2002) Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics: Theory, Research, and Practice, 6(1): 101.
Pohlhaus, G., Jr. (2012) Relational knowing and epistemic injustice: Toward a theory of willful hermeneutical ignorance. Hypatia, 27(4): 715–735.
Shetterly, M.L. (2016) Hidden Figures: The Untold Story of the African American Women Who Helped Win the Space Race. New York: William Morrow and Company.
Spivak, G. (1999) A Critique of Postcolonial Reason: Toward a History of the Vanishing Present. Cambridge, MA: Harvard University Press.
Stanley, D.A., Sokol-Hessner, P., Banaji, M.R., and Phelps, E.A. (2011) Implicit race attitudes predict trustworthiness judgments and economic trust decisions. Proceedings of the National Academy of Sciences, 108(19): 7710–7715.
Sullivan, S. and Tuana, N. (2007) Race and Epistemologies of Ignorance. New York: State University of New York Press.
Tuana, N. and Sullivan, S. (2006) Introduction: Feminist epistemologies of ignorance. Hypatia, 21(3): vii–ix.
Uhlmann, E.L. and Cohen, G.L. (2007) “I think it, therefore it’s true”: Effects of self-perceived objectivity on hiring discrimination. Organizational Behavior and Human Decision Processes, 104(2): 207–223.
7
Stereotype Threat, Identity, and the Disruption of Habit
Nathifa Greene
What is stereotype threat? Why does it occur? And is it bad to talk about it? For example, might stereotype threat be a concept that blames victims of bias for the effects that bias can have, instead of blaming the surrounding environment and systemic social problems? Using phenomenological descriptions of experience, this chapter treats stereotype threat as a phenomenon that occurs when attention to oneself disrupts a habitual mode of action. Stereotype threat may be understood as the disruption of habit that diminishes the expertise that habit and skill would otherwise afford. If we understand stereotype threat as a disruption of the habitual aspects of expertise, then attention to oneself in stereotype threat is similar to the “choking” effect of self-directed attention in sports, dance, music, or other forms of expert performance.
1 Introduction to the Phenomenon

Stereotype threat, or identity threat, is a response to bias that can occur when someone is reminded of their membership in socially significant groups. For example, one group of researchers defines stereotype threat as:

a situation in which a member of a group fears that his or her performance will validate an existing negative performance stereotype, causing a decrease in performance. For example, reminding women of the stereotype ‘women are bad at math’ can correlate with poor performance on math questions from the SAT and GRE. (Rydell et al. 2010)

When someone belongs to a group that is viewed negatively through the lens of stereotypes, a reminder of politically salient social features of the self—such as race, gender, class, age, sexual orientation, or disability—can trigger a sense that one is subject to stereotyping. This awareness of negative stereotypes, and a concern to avoid confirming them, is the basis of the conscious and unconscious responses that are called stereotype threat.
Academic studies that seek to document the effects of stereotype threat often depend on experiments that involve test-taking. Such experiments are designed to measure a difference in performance after someone is primed with the awareness of a stereotype. Stereotype threat can affect thoughts, behavior, and motivation in numerous ways. Some of these responses are explicit and some are more implicit. Shapiro and Aronson (2013) suggest several further implications of stereotype threat, beyond the immediate performance difference on a task. They review evidence on how stereotype threat reduces self-perceptions of efficacy, lowers confidence, redirects career aspirations away from the stereotyped domain, and incurs numerous negative effects on wellbeing, such as heightened anxiety and feelings of dejection, and even on physiological indicators, such as blood pressure (97). This wide range of effects suggests more than actual underperformance on a test or task. These implied effects, which are more persistent, are consequences that can impact personality, motivation, and attitudes towards oneself or certain subjects or tasks. Such global effects are more difficult to measure than the results of controlled experiments, and may not be related to underperformance. Therefore, the single explanatory concept of stereotype threat should not be taken as the defining feature of more complex phenomena without careful argument. Thoman and colleagues (2013) move the discussion of stereotype threat to more persistent traits than underperformance on a test or task, by linking stereotype threat to a diminished sense of belonging, lower interest, and decreased motivation (see also Freeman 2017; Goguen 2016). But such wider traits may have more complex sources that are overlooked when stereotype threat is the sole focus.
What is stereotype threat exactly, and what causes it? At first glance, the simplest explanation could seem to be a kind of belief that negative stereotypes about oneself are true, which would make stereotype threat a lack of self-confidence, or low self-esteem. However, stereotype threat often seems to be the result of believing that the stereotype in question is false about oneself, or at the very least of an effort to prevent it from being true. Researchers have shown that high-achieving students are most likely to experience stereotype threat, and that it can occur when someone has a high degree of self-confidence. This suggests that stereotype threat is not a lack of self-confidence or a belief that the stereotype is true about oneself.
Reminders of one’s social identity or group position may trigger stereotype threat. Such reminders may be explicit, such as asking someone to identify their race, gender, or ethnicity as part of a test, or they may be subtle cues that take the form of exclusion, such as seeing no one who looks like you in a particular place or pursuit and feeling like you may not belong. Researchers have found, for example, that girls and women are up to three times less likely to express interest in computer science if they enter a stereotypically “nerdy” classroom than if they enter a classroom with no specific cues about who a typical computer science student might be.
The contrasting reactions to these classrooms clearly demonstrate how social cues can shape students' sense of belonging, and "their feeling of fit in the environment and similarity to the people imagined to occupy it" (Cheryan, Plaut, Davies, and Steele 2009, 1049; see also Master, Cheryan, and Meltzoff 2016).

Since Claude Steele started describing stereotype threat around 1995, it has become a popular concept in social psychology research, even moving beyond academic journals to become more widespread in media such as the New York Times (Paul 2012), and podcasts such as Radiolab (2017). However, it is not always clear what it is or, if it is real, what its consequences might be. The overuse of a popular concept can become unhelpful if its scope is too wide and it is used to explain too much. To clarify what it might be, it is helpful to distinguish the actual phenomenon of stereotype threat from the more global effects that bias can have upon performance and personality traits. Stereotype threat should be defined as underperformance, or a different degree of expertise than someone would have had otherwise, in the absence of the threat-inducing conditions. This definition holds apart issues such as lowered confidence, which may be related, but which are more complex and better accounted for by multiple causes, rather than by stereotype threat. Bias—whether implicit or explicit—does have negative effects, but not all negative effects are stereotype threat. Nor is attention to stereotype threat even the best way to investigate or remedy the multiple consequences of bias and oppression. Lower test scores may indicate that a student is suffering from stereotype threat, but as Blum (2016) argues, such scores may correlate more strongly with underfunded schools and systemic inequities.

In addition, stereotype threat is much broader than diminished performance on tests, and it is not restricted to academic settings. The stakes of stereotyping are often much higher and linked to basic survival, since stereotypes pose threats to bodily safety in contexts where stereotypes are markers of vulnerability to racialized political violence. Whether its causes are subtle cues or explicit threats of violence, the core feature of stereotype threat seems to be inhibition. In its most explicit cases, the inhibition caused by stereotyping means that stereotypes haunt everyday decisions like what to wear, which stores to enter, making sure one's hands are visible while shopping, where to walk, and whether it is safe to go outside.

In its broadest sense, stereotype threat is inhibition. It is a particular kind of inhibition that occurs because of self-consciousness, or attention to oneself. This attention occurs because we understand the stereotypes that apply to us. As Steele notes:
whenever we're in a situation where a bad stereotype about one of our own identities could be applied to us … we know it. We know what 'people could think.' We know that anything we do that fits the stereotype could be taken as confirming it. And we know that, for that reason, we could be judged and treated accordingly … it's a standard human predicament. In one form or another—be it through the threat of a stereotype about having lost memory capacity or being cold in relations with others—it happens to us all, perhaps several times a day. (Steele 2011, 5)

Because of this kind of concern, making efforts to avoid confirming stereotypes or figuring out how to navigate or ignore them, stereotype threat is an example of what Steele calls an identity contingency:

the things you have to deal with in a situation because you have a given social identity, because you are old, young, gay, a white male, a woman, black, Latino, politically conservative or liberal, diagnosed with bipolar disorder, a cancer patient, and so on. Generally speaking, contingencies are circumstances you have to deal with in order to get what you want or need in a situation. (Steele 2011, 3)

Stereotype threat is an identity contingency that must be understood as a different kind of problem than implicit bias. Stereotype threat, which appears across many domains of life, is not simply implicit bias turned inward. Instead, implicit bias looks outward, while stereotype threat turns inward, and increased self-consciousness is the effect of that inward turn. When someone becomes aware of their own social position, and the stereotypes that go along with it, this increased self-consciousness of stereotype threat is different from attributing a stereotype to others out of implicit bias. When seeing another through the lens of a stereotype, that way of seeing is seamless and effortless.

The increased self-consciousness of stereotype threat can occur in academic settings, workplaces, or when you show up to rent or buy a home. It can affect everyday pedestrian encounters, or moments when police, security guards, and vigilante citizens surveil and monitor activities like shopping. Incidents in which police were called on African Americans have received increased attention across social media, highlighting the deadly risk of police encounters when passersby stereotype someone who is sleeping, socializing, or playing while black. The identity contingencies that pose the greatest danger are those linked to the risk of violence, rather than, for example, stereotypes that diminish the perceived intelligence of white males who play sports, which is unpleasant but not an identity contingency linked to a devalued social group. The cases of stereotype threat that cause the most concern are those linked to identities where someone knows what others might think and they are likely to be stopped, questioned, or searched. This may affect optimal performance in classrooms, inhibiting peak performance in academic, athletic, or creative endeavors.

Steele develops a broad perspective on stereotype threat in Whistling Vivaldi, as he discusses its effects on academic performance, friendships, mindset, and motivation in students.
Steele develops a view of stereotypes as a form of social meaning, or something "in the air" that negatively affects the performance of students who are members of minority groups. When students experience this effect, the concern to avoid confirming a negative stereotype creates inhibition, an effort-filled self-consciousness that drains cognitive energy. In the opening sections of his book, Steele describes how he felt when he began a doctoral program in social psychology at the Ohio State University in 1967, when he was the only African-American student in his program:

It was hard to trust that behaving naturally, without careful self-presentation, wouldn't get me downgraded—seen in terms of bad stereotypes about my group, or as not fitting positive stereotypes of who excelled in the field. It was a broad pressure, not confined to difficult tests. I felt it in classes, in conversations, while sitting around watching football games. It could cause a paralysis of personality especially around the faculty, even in informal situations like program picnics. I never asked a question in class … I remember once noticing my hands in the middle of a seminar. What did their darkness mean? Nothing? Everything? (2011, 153–4)

Steele describes a broad effect on multiple aspects of himself as a student, identifying stereotype threat as a root cause of self-doubt and paralysis. There are two different ways of understanding this description: the standard view, and phenomenological descriptions. The standard view points to beliefs and attitudes (for more on beliefs and attitudes, see Johnson, Chapter 1, "The Psychology of Bias: From Data to Theory"), and phenomenological descriptions point to the qualities of subjective experience in its context (for more on phenomenology, see also Leboeuf, Chapter 2, "The Embodied Biased Mind"). While it is not clear that stereotype threat causes all of the effects on personality and motivation beyond test-taking scenarios that Steele and other researchers suggest, it is crucial to understand the effects of environments that are shaped by bias. This chapter argues that the standard view raises problems in its emphasis on beliefs and attitudes.
2 The Standard View of Stereotype Threat

Imagine a familiar walk home from your subway stop to your apartment, or a drive home. On these routes, you are moving in an ordinary, everyday mode. Without any identity contingencies, your familiar walk would be absorbed by whatever you are doing: talking to friends, listening to music, passively taking in street scenes or the sky, or turning over whatever is going on in your life. However, if police are stationed at each intersection on your walk home, or you pass several police cars on your drive, your movement towards home in the skin color that is racially profiled becomes an entirely different movement.
The awareness of yourself as a target of racial profiling moves your attention from your way home onto yourself and the potentially life-threatening dangers of an encounter with the police. If your body is gendered as feminine in a classroom where the stereotypical occupant of that classroom is coded as masculine, your attention may shift from the primary task of answering the teacher's question. Instead, your attention turns to the fact that, when your fellow students look at you, they see your gender and the stereotypes about your gender that exist in your social context (see also McKinnon (2014) on the complexities and injustices related to stereotype threat for trans women). Your thoughts may move from solving the problem or accomplishing the task onto concern about making sure you do not fulfill negative stereotypes about inferior aptitude.

The standard view of stereotype threat interprets the effects of stereotypes upon performance as anxiety about one's own aptitude, or other negative beliefs. These beliefs may be implicit and the effects may be subtle, but according to this view the central concern is that stereotypes have a measurable impact because they trigger anxiety. This anxiety is understood as the mechanism that affects performance. The context determines the effect that a stereotype will have upon performance in a particular setting. The contexts and the effects may vary. In well-known experiments on stereotype threat, individuals were subtly or explicitly reminded of their race or gender before tests that supposedly measured their natural or innate abilities, such as their IQ or mathematical aptitude (Steele and Aronson 1995). Some studies find that if individuals are told that they are about to take a test that identifies their innate mathematical gifts, then women, blacks, and Latinx students tend to perform worse than if the very same test is simply described as measuring how far along they are in the gradual process of mastering mathematical skills. This effect is not evident in white or Asian male students under similar conditions. The consequences of stereotype threat are not restricted to academic tests. Some studies find that when physical tests are described as indicators of "natural athletic ability," suddenly white participants underperform (because whites are stereotyped as less physically and athletically gifted than blacks) (Stone, Lynch, Sjomeling, and Darley 1999). Stereotype threat may even lead white participants to disengage from the task by practicing less before taking an athletic test (Stone 2002). Students may opt out of subjects that would have interested them otherwise.

Experiments in social psychology show that some effects are less conscious than others. Threat-related effects may sometimes arise even when participants appear to have no explicit awareness that they are at risk of being stereotyped. The standard view suggests that anxiety has a corrosive effect on self-esteem and beliefs about oneself. In this case, the harm of stereotype threat is a psychological problem to be addressed by inner change. This kind of explanation focuses on interiority, leading to recommendations that try to build positive self-concept and boost confidence.
The standard view raises significant questions. There is skepticism about the use of stereotype threat as a conceptual paradigm for understanding how stereotypes affect us, and about whether the right psychological mechanisms are being used to explain it. Many doubt whether this paradigm is useful at all. Some raise concerns over the uses of test-taking to measure stereotype threat. Skeptics question the validity of stereotype threat as an experimental variable (Flore and Wicherts 2014). This narrow configuration of the phenomenon raises doubts about whether anxiety and stress are the best explanation for its detrimental effects. While these experiments have articulated a measurable phenomenon, the use of tests to explain what stereotype threat is and why it occurs remains questionable.

Jeanine Weekes Schroer (2015) suggests, for example, that stereotype threat receives more research and media attention than other forms of black disadvantage because stereotype threat does not rely on the testimony of members of oppressed groups. Experimental results carry weight that testimony from oppressed groups does not. The uptake of experimental data and the relative popularity of stereotype threat research suggest not only the compelling nature of this phenomenon and its far-reaching effects, but the ease of discounting testimonial accounts of how stereotypes are experienced. Just as televised evidence of widespread police brutality in the USA has forced acknowledgement of its reality, making skepticism untenable, so have numerous studies on the effects of stereotype threat made skepticism about stereotype threat implausible. This research is useful because empirical evidence makes it more difficult to minimize and explain away the effects of stereotypes even after disregarding testimony. But its usefulness is limited, since testimony reveals dimensions of the effects of stereotype threat that controlled experiments with tests cannot reveal. Since stereotype threat is so pervasive and multifaceted, its effects must be described in a broader way and not merely as a local or internalized psychological phenomenon.

The effects of stereotypes upon experience are not new in philosophical writing. Such descriptions were written long before the label "stereotype threat" existed. In the next section, we turn to the groundbreaking and now canonical accounts of similar sorts of experiences described by W.E.B. Du Bois (1903/1994) and Frantz Fanon (1952/2008). We then draw on research in the philosophical tradition of phenomenology by researchers like Merleau-Ponty and Shaun Gallagher to theorize about these experiences. While stereotype threat may manifest as test-taking anxiety, it is not necessarily anxiety in the sense of doubting one's own ability, in the case of test taking, nor doubt about oneself as a citizen who deserves to be treated with respect, in the case of police following drivers who are racialized as black, store clerks following racialized shoppers, and so on. This kind of self-attention is not necessarily a matter of beliefs about oneself, or of believing that negative stereotypes are true. Rather, a more neutral description is needed.
It is more neutral to think of stereotype threat as habit disruption, in that it disrupts an experience that would otherwise have been fluid. Task-oriented attention enables fluidity, while self-oriented attention is inhibiting. The contrast between fluid and inhibited action is captured in phenomenological descriptions. We turn now to the classical phenomenological descriptions of stereotype threat, followed by a more specific focus on habit disruption.
3 Classical Phenomenological Descriptions of Stereotype Threat

W.E.B. Du Bois' concept "double consciousness" precedes contemporary research on stereotype threat but captures much of what it is like to be stereotyped. Du Bois' phenomenological description contributes essential insights for stereotype threat research today, which points to what I am calling disrupted habit. Du Bois coined the term "double consciousness" in The Souls of Black Folk in 1903, within a text that is a rich resource for political theory and wide-ranging philosophical analyses of race, oppression, and related concerns. Du Bois describes double consciousness in terms of striving, or belonging to national and social groups that strive for political status. In this description of double consciousness, Du Bois adopts what I consider a phenomenological approach by his use of the term "consciousness" at a time when the study of consciousness was nascent, making Du Bois an early pioneer of twentieth-century psychology, along with William James and Edmund Husserl. In an introductory passage of his chapter, "Of Our Spiritual Strivings," he writes evocatively of the ways that stereotypes affect the structure of experience: "It is a peculiar sensation, this double-consciousness, this sense of always looking at one's self through the eyes of others, of measuring one's soul by the tape of a world that looks on in amused contempt and pity" (2). This moment in The Souls of Black Folk is not often read as a phenomenological description of consciousness.

Simplifying the phenomenological structure of consciousness as a directed structure, so that consciousness is always consciousness of an object toward which it is directed, we may imagine that pronouns represent first-, second-, or third-person modes of experiencing that object. Thus, if I am in a first-person mode, I am simply perceiving, while in a second-person mode someone is perceiving me, or I am observing myself perceiving. Likewise, a third-person perspective on oneself, or about someone else, is another mode of experience that pronouns capture. Thinking of the phenomenological structure of consciousness as a directed structure, we may imagine double consciousness as a shift out of a first-person "I" to a third-person observation of myself. This shift to observe oneself is useful for understanding stereotype threat. When I am absorbed in my own experience, and my consciousness is directed outward, at the objects of my experience, then the subject of consciousness is in a first-person "I" mode.
Of course, self-observation is necessary, often benign, and a highly desirable capacity, because it is the core of all reflection and learning. So double consciousness is not a problem simply because it is third-person self-observation. Rather, being "torn asunder" in this particular way means being seen through the filter of racial stereotypes, or the "veil" that filters perception across the color line in a society where skin color, body shape, and other visible markers are the raw material of denigrating stereotypes. Consider Du Bois' description of double consciousness as applied to a black woman in the United States, someone whose sense of self may or may not fit the maternal stereotype that shaped the lives of black women. These stereotypes determined and reciprocally shaped the social worlds of black women, in large part because of what opportunities black girls could and could not pursue. Regardless of their sense of self, existing stereotypes and the only available opportunities for income in these cases would mean measuring oneself against racist and patriarchal stereotypes. The realities of stereotypes in employment and in social life meant that there were controlling images (Collins 2000) that shaped the lives of black women. Double consciousness brings the meaning of the stereotype into the structure of your own experience, shaping action out of an awareness that others would hold that stereotype about you.

Of course, someone can observe herself for any number of reasons: when she is trying to learn how to do something, when she is reflecting on past action, ruminating over happy memories, or trying on clothes. An imagined sense of what others may think can be positive, negative, or neutral. A shift from a first- to a third-person sense of self is not necessarily harmful in principle, except perhaps in sports, dance, and situations where a first-person flow state is better than third-person self-observation. Identity contingencies that call up stereotypes—again, understood as experiences that occur because of one's ascribed social group—are cases in which stereotypes trigger a particular kind of third-person self-observation. If you are trying on clothes in the understanding that your society is saturated with stereotypes that depict you as violent, or hypersexual, then "looking at one's self through the eyes of others" means not only observing and deciding out of your own preferences, but observing with the understanding that if the clothes are too casual, or too tight, then they may activate stereotypes that put you in harm's way. These judgments are, lamentably, necessary, so this description is not one of flawed perception. In fact, double consciousness and stereotype threat are attuned responses to social realities, aware of those realities and the potential effects of stereotypes.

Frantz Fanon (1952/2008) is another author whose description of what it is like to be stereotyped, in Black Skin, White Masks, has become a classic reference point in critical analyses of race and social life. Fanon describes this experience as one of being overdetermined by others' perception, being "woven by a thousand anecdotes and stories" that form the veil of stereotypes about black people that shape life in a colonial world. Fanon describes the moment when a child pointed at him on a train and cried Tiens! Un nègre! ("Look! A Negro!"), using metaphors of paralysis and freezing to describe the phenomenal qualities of the experience:
In the white world, the man of colour encounters difficulties in the structuring of his body schema. Consciousness of the body is solely a negating activity. It is a third-person consciousness. It is an image in the third person. All around the body reigns an atmosphere of certain uncertainty. I know that if I want to smoke, I shall have to reach out my right arm and take the pack of cigarettes lying at the other end of the table. As for the matches, they are in the drawer on the left, and I shall have to move back a little. And I make these moves, not out of habit, but out of implicit knowledge. A slow composition of my self as a body in the middle of the spatial and temporal world, such seems to be the schema. It is not imposed on me; it is rather a definitive structuring of my self and the world—definitive because it creates a genuine dialectic between my body and the world. (91)

Fanon describes a third-person sense of oneself as seen and measured by others. This overdetermination can occur without an explicit encounter with others, as a feature of a colonized world. Fanon refers to the corporeal schema as a major reason that the displacement out of first-person into third-person experience is a shift that involves habit. The effect of attention that is overdetermined by the way he is seen by others means that the corporeal schema becomes fragmented and displaced out of the background of his experience, where it ordinarily functions.

In The Souls of Black Folk, Du Bois uses the idea of a doubling consciousness as a split, being "torn asunder." Similarly, Fanon (1952/2008) describes the effect of being stereotyped as one that involves inhibition, freezing, displacement, and fragmentation: "As a result, the body schema, attacked in several places, collapsed, giving way to an epidermal racial schema" (92). Corporeal schema is a rich concept in phenomenological psychology, and Fanon refers to the replacement of the corporeal schema with a racial epidermal schema in a manner that shows how corporeality and the lived body are involved in the experiences of racism and colonial domination. His references to the corporeal schema offer a way of developing an explanation of stereotype threat that focuses on habit, attention, and skill. Because of this description, I propose that stereotype threat is the result of an inverse relationship between skill and a certain sort of attention to oneself: the more skillful the action, the less attention is directed toward oneself, and vice versa. Given this inverse relationship, where attention to oneself disrupts the flow of habitual action, self-attention has a negative effect on expertise through its effect on habit. Because self-attention disrupts the fluidity that would otherwise be available when attention is task-oriented rather than self-oriented, stereotype threat can be accounted for through a focus on habit.
Merleau-Ponty (1945/2013) defines the corporeal schema as the "habit body" or "habitual body": an orientation toward a task in which someone knows the position of their own body in relation to an object that they might reach out to grasp. Habit is the reason that a fluid motion to pick up an object can occur without attention to it. The activity of ordinary perception would be too overburdened and fragmented otherwise, with attention devoted to every move. Kinaesthetic awareness, or the sense of oneself as an agent of movement, is a feature of action that exists when the corporeal schema is habitual. Merleau-Ponty and Fanon draw on a similar example, as both authors describe the act of reaching out to grasp items. Smoking is typically habitual, pre-reflective, and inattentive. But for Fanon, the schematic patterns of the body are taken apart and made visible because of attention to himself in the midst of action. On the other hand, when attention is directed toward the end of action, as for Merleau-Ponty, the particular elements of the embodied self recede from attention. A person who inhabits the world safely, through a seamless personal identity, will inhabit the world in the manner that Merleau-Ponty describes, where identity is woven from one's own movement rather than from resistance against stories or fear of danger. In an optimal mode, the lived body recedes into a transparent medium for the fluid constitution of space and time. On the other hand, when someone has their subjective experience woven by others, the sense of oneself as seen constitutes a world through a slow composition, a dissonant grammar that positions the self in the middle of a spatial and temporal world, rather than a lived body that is a center of orientation.

Like Merleau-Ponty, Fanon also points to bodily movement, referring to the lived body as an element of the intentional structure that engages with and apprehends meaningfulness in the world. Unlike Merleau-Ponty, Fanon addresses the social realm, observing the fabric and the consistency of intentional experience in the lived body that inhabits social meaning. Fanon describes the lived body at some distance from its own fleshly "here" when it is positioned and rendered visible in the middle of a spatial and temporal world, while Merleau-Ponty describes fluid movement and "indubitable" self-awareness. These are opposing descriptions of the same point: that the lived body ought to operate in the background of everyday experience, in the seamlessness that habit affords.

Shaun Gallagher and Jonathan Cole (1998) distinguish the body schema from the body image in a way that also sheds helpful light on possible reasons that diminished skillfulness may occur when stereotypes trigger attention to oneself. According to Gallagher and Cole, the body schema is postural and more submerged in embodied experience than the body image, which is also pre-reflective but is more like a reflective picture of oneself. As they clarify the distinction between the body schema and the body image, Gallagher and Cole describe how the corporeal schema operates in a manner that is the opposite of Fanon's:
in walking I do not have to think about putting one foot in front of another; I do not have to think through the action of reaching for something. I may be marginally aware that I am moving in certain ways, but it is not usually the center of my attention. And marginal awareness may not capture the whole movement. If I am marginally aware that I am reaching for something, I may not be aware at all of the fact that for balance my left leg has stretched in a certain way, or that my toes have curled against the floor. Posture and the majority of bodily movements usually operate without the help of a body image. (133)

Authors who ignore political questions in phenomenological descriptions treat the lived body in first-person mode, as a seemingly invisible structure and a directed attitude, where the lived body is an element of action. By contrast, Fanon refers to his body as an object of awareness in Black Skin, White Masks as he performs the action in question. Fanon describes a third-person mode of corporeality that is displaced and disjointed:

Below the corporeal schema I had sketched a historico-racial schema. The elements I had created were not furnished by "residues of sensations and perceptions of a tactile, vestibular and kinesthetic and visual order" but by the other, the white man, who had woven me out of a thousand details, anecdotes, stories. I thought that what I had in hand was to constitute a physiological self, to balance space, to localize sensations, and here I was called on for more. (91)

Nonhabitual action becomes the central problem for Fanon because the structure of his experience becomes visible to him. A normative discussion of the body is possible, following Fanon's use of "body schema" in Black Skin, White Masks, since the corporeal schema ought to be fluid and transparent in order to operate as a tacit background. Fanon points out how stereotyping disrupts this fluidity when attention shifts onto oneself, as one is seen negatively by others, in ways that limit the potential creativity of fluid action.

This approach to stereotype threat, grounded in phenomenological attention to the structure of experience and, specifically, the roles of habit and the corporeal schema, has several advantages. First, it leaves room for, but does not depend on, the now-controversial test-taking effects of stereotype threat. Second, it does not falsely portray the problem as residing primarily in the minds of disadvantaged individuals; it locates the problem in the systems of oppression in which those individuals are embedded. Individual-centered explanations are not just false but morally pernicious, of a piece with the tendency to explain the disadvantages of members of stereotyped groups in terms of stereotypes about their traits (black men are lazy and don't want to work, women are nurturing and want to stay home with the kids and don't want to work, etc.). Third, this approach does not portray the problem of stereotype threat as residing in the problematic beliefs or self-esteem of members of oppressed and stigmatized groups.
Phenomenological descriptions of stereotype threat are particularly helpful because they offer a functional description. Phenomenology treats consciousness of phenomena descriptively, attending at once to the subjective qualities of experience and to the context in which it occurs. This approach is useful because it provides a way to distinguish different cases of stereotype threat without assuming that they all share the same features or social significance. All stereotyping is not equal. Anyone can experience stereotype threat for any reason, but its consequences vary depending on social context and social position. Phenomenology gives access to the various ways that being stereotyped is experienced, without presuming the same gravity or consequences for the person who experiences it.

Phenomenological descriptions (see Leboeuf, Chapter 2, "The Embodied Biased Mind"; Siegel, Chapter 5, "Bias and Perception"; Ayala-López and Beeghly, Chapter 11, "Explaining Injustice: Structural Analysis, Bias, and Individuals") of the inhibiting effects of stereotype threat are valuable. Wholesale treatment of implicit bias and stereotype threat together, without distinction, analyzed only through the lens of cognitive science, would cause further harm to those who are already harmed and negatively affected by stereotypes. Phenomenological descriptions of the inhibiting effects that stereotypes can cause do not necessarily attribute responsibility for deploying stereotypes. This room for phenomenological description without responsibility attribution is important in the case of stereotype threat.
4 Stereotype Threat as Disrupted Habit: A Phenomenological Account A useful way to understand stereotype threat through phenomenological description is to treat it as a form of habit disruption, focusing on the qualities of habitual action, highlighting the contrasts between the qualities of habitual action and non-habitual action. Phenomenological description provides insight into the qualities of stereotype threat, showing how disrupted habit may be a potential model for understanding what stereotype threat is, and the effects it can have. In this chapter, “habit” refers to the aspects of an activity that recede from experience, which do not require attention in the midst of the habitual act. These aspects free up cognitive and perceptual bandwidth, which can be directed elsewhere. The submerged aspects of a habitual act can be accessed afterward, upon reflection, but not in the midst of habitual action. If you are walking home from your subway stop to your apartment, or you are solving a math problem, there are aspects of the activity that do not require attention. Stereotype threat is about disrupting expert performance. I mean expertise in its most basic sense, such as memorized knowledge of your route home, without needing to consult a map, ask for directions, or expend effort inferring which streets would lead to your destination. In any expert performance, there are some aspects of action that must be submerged beyond attention, in order for the
skills that have already been developed to become engaged in the expert act. These embodied habits are the foundation for skillful performances, once the habit has been acquired. When we learn, the effort that it takes to develop a skill aims to eventually establish a sediment of acquired knowledge that becomes tacit, or habitual. As that foundation of past practice becomes incorporated as expertise, attention can be dedicated to another element, which needs to be monitored. A fluid mode of action can be disrupted for good reasons or bad. Thus, understanding implicit bias as a perceptual habit means that it involves a kind of fluidity. This fluidity leads phenomenological descriptions of bias as in Sara Ahmed (2006) and Helen Ngo (2017) to construe racism as a kind of bad habit. On the basis of these descriptions, attention to habit means an effort to disrupt habit, by bringing attention where it is not usually engaged. The fluid habits of bias are often described as habitual in order to disrupt these habits. On the other hand, when a stereotype is attributed to you, the experience of being stereotyped is an identity contingency that makes you stop short, interrupt, or pay closer attention to yourself. This shift in your attention, from the task at hand onto yourself, does not occur in order to observe yourself, reflect on your performance, and learn how to perform better. Instead, the attention that is directed onto oneself to avoid confirming a negative stereotype is self-monitoring, causing inhibition without a positive payoff of greater learning or skill acquisition. Descriptions of stereotype threat should distinguish between accounts of implicit bias without a clear distinction which maintains that stereotype threat is different in kind from the attribution of a stereotype to someone else. The reasons for describing stereotype threat are not the same reasons for describing implicit bias. On the receiving end of that harm stereotype threat is a consequence of bias, not a cause. These distinctions are important because implicit bias research involves moral claims that show how seemingly neutral experiences are complicit with wider social injustices, and someone can be complicit with injustice and cause harm without the intent to do so. The subset of stereotypes that are politically relevant refer to social groups that contend with oppression. Iris Young (1990) defines oppression as vulnerability to one or more of the following circumstances: violence, exploitation, marginalization, power lessness, and cultural imperialism. The stereotypes that are linked to these five characteristics of oppression are a distinct subset of identity contingencies, which lead to cases of stereotype threat that are different in kind from stereotypes that are not tied to social disadvantages. Since descriptions of implicit bias suggest attributions of responsibility and complicity, independently of the intent to cause harm, stereotype threat is not implicit bias, which refers to the attribution of stereotypes to others. The politically significant cases of stereotype threat are defined by injustice are socially significant disadvantages linked to oppression.
There are many scenarios where such attention to oneself is beneficial and necessary for learning how to do the activity in question, and for self-reflection, making self-attention an experience that is not inherently damaging. However, in cases of stereotype threat, this instance of disrupted habit is caused by oppression. As Steele points out, people generally have a sense of what people think about various social groups in any given context, so that the meanings of group identities disrupt habitual action for a politically salient reason. In the following section, I limit the phenomenological description of stereotype threat to those stereotypes linked to one or more of the five faces of oppression as described by Young (1990). This depiction of stereotype threat as habit disruption draws on classical first-personal phenomenological accounts of self-focused attention written by Frantz Fanon, W.E.B. Du Bois, and Iris Marion Young, and more broadly on the study of bodily experience developed by thinkers including Maurice Merleau-Ponty, Shaun Gallagher, and Jonathan Cole. An understanding of stereotype threat as the disruption of skillful habit has several empirical, experiential, and practical advantages over more familiar accounts of stereotype threat found in popular media and diversity training. If we are going to address the perceived impact that stereotype threat can have, a prescription for what should be done will take very different approaches, depending on which model informs proposed remedies.
5 What Should We Do?

Understanding stereotype threat as disrupted habit leads to strategies that could be more effective in reducing stereotype threat, because an explanation based on disrupted habit can create change without focusing only on changing individuals, and, in particular, on changing the individuals who are more likely to experience stereotype threat. Both individual change and attention to cues and the features of social environments can be addressed by noting how intrusive stereotypes can be, and by adopting strategies that would mitigate the friction and attention to oneself that they cause.

Researchers have identified a variety of valuable strategies that individuals, especially teachers and students, can employ to minimize the effects of stereotype threat. Many of these involve the "mindset" or "conceptual frame" through which individuals interpret their experiences. Consider, for example, when you are about to interact with a person from another social group (maybe you are a teacher about to interact with a student, a student about to interact with a teacher, a white person about to interact with a black person, or a black person about to interact with a white person). Solely focusing on individuals, for example by just recommending "more grit" and "acting as if," can be particularly toxic "remedies" that further intensify the effects of stereotype threat and misallocate responsibility onto the person who experiences it.
The expertise of someone whose skill is diminished by stereotype threat is a delicate thing, so faking it doesn't make it. Acting "as if" can lead to over-effort, and the cognitive load of actively ignoring the stereotype can end up having the same effect as stereotype threat itself (Steele 2011, 105). Focusing on individuals can also constitute epistemic injustice (see Holroyd and Puddifoot, Chapter 6, "Epistemic Injustice and Implicit Bias," and Ayala-López and Beeghly, Chapter 11, "Explaining Injustice: Structural Analysis, Bias, and Individuals"). The misallocation of responsibility onto the subject who experiences stereotype threat is itself a further bias, creating further epistemic injustice and compounding the bias that stereotypes already perpetuate. Relying on self-concept to explain why stereotype threat occurs could suggest that something is wrong with the person who experiences it, misdirecting critical investigation onto the psyches of affected individuals rather than the social environment. This shift onto the affected individual is ethically suspect, similar to the unjust uses of stereotypes that explain away social problems by locating faults in the individuals who are affected by social inequities, instead of focusing on the inequities themselves. Thus, concerns about intensifying the effects of stereotype threat would be addressed by focusing on the structure of habitual action, rather than the psyche of the affected individuals.

Even as we acknowledge that the roots of stereotype threat reside in social structures rather than the individual internalization of low self-esteem, it is still worthwhile to discuss strategies that individuals can employ to help them navigate the disruptions of stereotype threat and, if not "stay in the flow," at least return to it after the disruption. We do not want to abandon people who struggle with stereotype threat and just let them continue to struggle until after the social revolution and the transformation of unjust structures and hierarchies. The most promising individual-level strategies will focus on promoting skillful and fluid action at moments when expertise is called for, by noting how environments could trigger third-person attention to oneself that diminishes performance. Such individual strategies have the advantage of working directly upon the affected subject, with immediate interventions that help when the triggers are difficult to detect or change. If someone is underperforming, it could be useful to think about whether they are combatting the negative effects of stereotypes in the midst of the tasks that they are trying to complete. It would then follow that we need to look around classrooms to see what messages our posters are sending, for example, or consider whether someone is working in an environment where models of excellence do not include anyone who looks like them. Attention to macro-level changes will remain essential, nonetheless, as the features of social settings like classrooms, work environments, and everyday experiences of stereotyping in public spaces will include elements that act as cues for stereotype threat.
Even though not all the contingencies of environments can be controlled, thinking in terms of habit promotion or disruption is a promising way to proceed.
SUGGESTIONS FOR FUTURE READING

If you'd like to read more about stereotype threat as analyzed by psychologists, take a look at:

• Claude Steele (2011) Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do. New York: W.W. Norton & Co. An accessible, engaging book that introduces the phenomenon of stereotype threat and analyzes it from the perspective of social and cognitive psychology.

For a critique of psychological approaches, you should read:

• Jeanine Weekes Schroer (2015) Giving them something they can feel: On the strategy of scientizing the phenomenology of race and racism. Knowledge Cultures, 3(1): 91–110.
• Lawrence Blum (2016) The too minimal political, moral, and civil dimension of Claude Steele's 'stereotype threat' paradigm. In J. Saul and M. Brownstein (eds), Implicit Bias and Philosophy, Volume 2 (pp. 147–172). New York and London: Oxford University Press.
If you want to dig deeper into a phenomenological analysis of oppression, including its way of understanding stereotype threat, read:

• Frantz Fanon (2008) Black Skin, White Masks. Trans. Richard Philcox. New York: Grove Press. A classic, highly readable book about the psychological and social effects of French colonialism on colonial subjects. You also might want to take a look at Lewis Gordon (1995) Fanon and the Crisis of European Man: An Essay on Philosophy and the Human Sciences. New York: Routledge.
• Linda Martín Alcoff (2006) Visible Identities: Race, Gender, and the Self. Oxford: Oxford University Press. A phenomenological exploration of identities and how they overlap, intertwine, and intersect.
• Sara Ahmed (2006) Queer Phenomenology: Orientations, Objects, Others. Durham, NC: Duke University Press. Ahmed's book examines how sexuality is experienced and interpreted in binary, gender-normative contexts. Ahmed also develops the concept of "queer phenomenology" as a liberatory concept.
• George Yancy (2017) Black Bodies, White Gazes: The Continuing Significance of Race in America (second edition). Lanham, MD: Rowman & Littlefield. A compelling exploration of how racism impacts the experiences, bodies, and identities of black people in the United States.
DISCUSSION QUESTIONS

1 Have you ever been stereotyped? If not, do you know someone who has been? How did it make you/them feel? How do such experiences relate to stereotype threat, if at all?
2 What happens in a person's mind if they experience "stereotype threat," according to psychologists like Claude Steele? What kinds of contexts tend to create stereotype threat, according to them? When answering, make sure to give a specific example of stereotype threat happening to someone.
3 In this chapter, Greene argues that stereotype threat is "not just implicit bias turned inwards." Take a case of stereotype threat and analyze it as an instance of implicit bias turned inward. Why does Greene find this analysis of stereotype threat unacceptable? Cite at least one reason.
4 What is the phenomenological analysis of stereotype threat? Take a case of stereotype threat and analyze it from a phenomenological point of view. Make sure to explain how the concept of "habit" plays a role in this analysis.
5 What role does the body play in stereotype threat, according to phenomenologists? How does this differ from the role played by the body (if any) in a traditional psychological analysis?
6 Does Greene ultimately believe that it is harmful to talk about stereotype threat? Why or why not? Cite evidence from the text to support your answer.
REFERENCES

Ahmed, S. (2006) Queer Phenomenology: Orientations, Objects, Others. Durham, NC: Duke University Press.
Blum, L. (2016) The too minimal political, moral, and civil dimension of Claude Steele's 'stereotype threat' paradigm. In J. Saul and M. Brownstein (eds), Implicit Bias and Philosophy, Volume 2 (pp. 147–172). New York and London: Oxford University Press.
Cheryan, S., Plaut, V.C., Davies, P.G., and Steele, C.M. (2009) Ambient belonging: How stereotypical cues impact gender participation in computer science. Journal of Personality and Social Psychology, 97(6): 1045–1060. https://doi.org/10.1037/a0016239
Collins, P. (2000) Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. New York and London: Routledge.
Du Bois, W.E.B. (1903/1994) The Souls of Black Folk. New York: Dover Publications.
Fanon, F. (1952/2008) Black Skin, White Masks. Trans. R. Philcox. New York: Grove Press.
Flore, P.C. and Wicherts, J.M. (2014) Does stereotype threat influence performance of girls in stereotyped domains? A meta-analysis. Journal of School Psychology, 53: 25–44.
Freeman, L. (2017) Embodied harm: A phenomenological engagement with stereotype threat. Human Studies, 40(4): 637–662.
Gallagher, S. and Cole, J. (1998) Body image and body schema in a deafferented subject. In D. Welton (ed.), Body and Flesh: A Philosophical Reader (pp. 131–148). Oxford: Blackwell.
Goguen, S. (2016) Stereotype threat, epistemic injustice, and rationality. In J. Saul and M. Brownstein (eds), Implicit Bias and Philosophy, Volume 2 (pp. 216–237). New York and London: Oxford University Press.
Master, A., Cheryan, S., and Meltzoff, A.N. (2016) Computing whether she belongs: Stereotypes undermine girls' interest and sense of belonging in computer science. Journal of Educational Psychology, 108(3): 424–437. https://doi.org/10.1037/edu0000061
McKinnon, R. (2014) Stereotype threat and attributional ambiguity for trans women. Hypatia, 29(4): 857–872.
Merleau-Ponty, M. (1945/2013) Phenomenology of Perception. Trans. D. Landes. New York: Routledge.
Ngo, H. (2017) The Habits of Racism: A Phenomenology of Racism and Racialized Embodiment. New York: Lexington Books.
Paul, A.M. (2012, October 6) Intelligence and the stereotype threat. The New York Times. https://www.nytimes.com/2012/10/07/opinion/sunday/intelligence-and-the-stereotype-threat.html
Rydell, R.J., Shiffrin, R.M., Boucher, K.L., Van Loo, K., and Rydell, M.T. (2010) Stereotype threat prevents perceptual learning. Proceedings of the National Academy of Sciences of the United States of America, 107(32): 14042–14047. https://doi.org/10.1073/pnas.1002815107
Schroer, J.W. (2015) Giving them something they can feel: On the strategy of scientizing the phenomenology of race and racism. Knowledge Cultures, 3(1): 91–110.
Shapiro, J. and Aronson, J. (2013) Stereotype threat. In C. Stangor and C. Crandall (eds), Stereotyping and Prejudice (pp. 95–118). New York: Psychology Press.
Steele, C.M. (2011) Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do. New York: W.W. Norton & Co.
Steele, C.M. and Aronson, J. (1995) Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69(5): 797–811.
Stone, J. (2002) Battling doubt by avoiding practice: The effects of stereotype threat on self-handicapping in white athletes. Personality and Social Psychology Bulletin, 28(12): 1667–1678. https://doi.org/10.1177/014616702237648
Stone, J., Lynch, C.I., Sjomeling, M., and Darley, J.M. (1999) Stereotype threat effects on black and white athletic performance. Journal of Personality and Social Psychology, 77(6): 1213–1227. https://doi.org/10.1037/0022-3514.77.6.1213
Thoman, D., Smith, J., Brown, E., Chase, J., and Lee, J.-Y. (2013) Beyond performance: A motivational experiences model of stereotype threat. Educational Psychology Review, 25(2): 211–243.
Young, I. (1990) Justice and the Politics of Difference. Princeton, NJ: Princeton University Press.
8 Moral Responsibility for Implicit Biases
Examining Our Options

Noel Dominguez
In this chapter, I’ll consider the major kinds of arguments that have been advanced for why we may or may not be responsible for actions caused by implicit biases. What kind of control do we need to have over our actions to be responsible for them? Do we have this kind of control over actions influenced by our biases? Could we be responsible for our biases because they reflect aspects of our true selves? What might it look like to be responsible for actions that are entirely unintentional, and could that explain our responsibility for our biases? I’ll expand on and offer some criticisms of these views and conclude by proposing some questions for future work.
1 Moral Responsibility in Broad Strokes

While other areas of ethics tell you what makes your actions right or wrong, theories of moral responsibility tell you what makes your action yours. For this kind of question, how you came to do the action is more relevant than what the action itself is. For instance, whether the hiring manager who never properly evaluates (and so never hires) any minority candidates is responsible for her lapse depends on why she isn't doing her job properly. Consider the differing cases of Jane and Joan: If Jane doesn't properly evaluate applications from minority candidates because she doesn't find minorities very trustworthy and would prefer not to have to work with any, we'd be right to hold her responsible for her actions. But if Joan isn't evaluating these applications properly because her sneaky coworker Jane throws out all the minority applications before they get to her, then it's clear this isn't her fault. Generally speaking, agents are responsible for the actions caused by their own beliefs and attitudes and excused from responsibility for external effects they don't cause and can't avoid; this is why Jane seems responsible for her failure to properly evaluate the applications and Joan doesn't. But what happens when we're no longer sure how to distinguish between an agent's acting on her own reasons and an agent acting due to external influences she doesn't know about and can't control? This is the basic problem that actions caused by implicit biases pose for theories of moral responsibility.
As is evident from other chapters in this volume (see especially Johnson, Chapter 1, "The Psychology of Bias: From Data to Theory"; Leboeuf, Chapter 2, "The Embodied Biased Mind"; Brownstein, Chapter 3, "Skepticism About Bias"), many empirical questions remain about how implicit biases work. However, if implicit racial biases work the way current scientists and philosophers think they do, then they seem to both undergird and undermine agency. On the one hand, agents acting on their implicit biases are, like Jane, performing racist actions no one else is making them do, and these actions are motivated by racist attitudes that they have accepted or internalized in some kind of way. However, like Jane's nefarious influence on Joan, implicit biases change how we perceive situations in ways we wouldn't agree to and would strongly condemn if we were aware of them. So when a hiring manager means to fairly evaluate her applicants for some job and can't do this properly because of her implicit racial biases, is she responsible for this or not? Are her biases part of who she is and what she's responsible for, or are they a kind of external interloper getting in the way of what she means to do? This is a theoretical question, but it is not a hypothetical one—several studies have shown that seemingly non-racist hiring managers often prefer white applicants over black applicants regardless of the actual strength of their applications (Rooth 2010; Ziegert and Hanges 2005; Quillian, Pager, Hexel, and Midtbøen 2017), and we find similar effects in the way judges more harshly sentence black offenders than white offenders for the same crime (Mustard 2001), the way banks often offer higher interest rate mortgage loans to minorities over whites with similar credit histories (Berwick 2010), and even in the way elementary school teachers differently discipline black and white school children (Skiba et al. 2011). If our implicit racial biases can affect our actions, then they probably affect most of our actions, raising the question: Who is responsible for all this?

In this chapter I'll offer an analysis of the kinds of answers philosophers have given to this question. I'll begin by going over typical disagreements about what makes us responsible for our actions and the ways actions caused by implicit biases seem to upturn that debate (§3), and then I'll discuss arguments that hold us responsible for actions caused by our biases either because we have some control over the biases (§4) or because the biases are parts of our true selves (§5). Finally, I'll canvass some newer "revisionist" approaches to the problem of whether we're responsible for our biases (§6). I'll also offer some thoughts on the general strengths and drawbacks of these approaches, but ultimately argue that we need to answer some prior questions first. Knowing whether we're responsible for our implicit biases requires a better understanding of what implicit biases really are and how they operate, knowledge we're still in the course of developing. But it also requires understanding what generally makes us responsible for our actions.
2 Caveats

Here are some quick clarifications before I begin. This chapter is not about whether we’re responsible for simply having implicit racial biases, but about whether we’re responsible for the actions caused by our implicit racial biases. This is in part just to limit the scope of this chapter, and in part because most of the discussion concerning responsibility for implicit biases focuses on whether we’re responsible for the actions they cause. Also, it isn’t especially clear at the moment what exactly causes agents to have the implicit biases they do, and while it seems like all agents have some implicit bias or another, the extent to which any agents have any particular bias seems pretty variable (Holroyd 2012). So I’ll mainly focus on claims about our responsibility for actions caused by the bias, rather than considering our responsibility for the bias itself. For example, I won’t address the question whether someone would be blameworthy for an implicit bias against disabled people even if they literally never acted on this bias. I’m focused on cases where biases affect behavior. Importantly, even if an agent isn’t responsible for having a bias, they might nevertheless be responsible for the actions caused by the bias. An analogy might help here: we might hold someone responsible for an inappropriate angry outburst even while we do not hold them responsible for generally being quicker to anger than others are. How quickly you begin to feel angry in a situation might be a product of unchosen social and genetic factors. If so, it wouldn’t be fair to hold you responsible for that any more than it would be fair to hold you responsible for your height or your hair color. But this just means you’ll have to work harder than others to control your anger. It does not mean that, just because you are more prone to anger, you are less able to stop yourself from having angry outbursts. And so, in this chapter we’ll mostly be focusing on how and whether you can be responsible for the actions you perform that you wouldn’t have performed had you not had the bias, and not on responsibility for the bias itself. Also, I’ll try as much as I can in this chapter not to say anything about what an implicit bias actually is. That’s both because the question of what kind of mental entity an implicit bias is—an attitude, a belief, a mere associational cue, etc.—is already its own controversial topic (see Part 1), and because beginning by taking a stance on that question can unfairly make some views sound more plausible than others. If implicit biases are kinds of attitudes, for example, then a theory of responsibility that says what we’re fundamentally responsible for are attitudes might seem to “fit” the phenomena better than if we were agnostic about the kind of entity an implicit bias is. For the purposes of this chapter, what we’ll focus on are the supposed effects of having an implicit bias. How can you be responsible for an action caused by something with those effects? Well, part of the answer depends on what it is to be responsible for an action in the first place, the topic we turn to next.
3 The Conditions of Moral Responsibility and the Problem of Implicit Bias

An agent is morally responsible for an action when that agent is eligible for some sort of morally-laden response (such as praise or blame) on the basis of how they came to perform that action (Fischer 1999). In other words, if you can be praised or blamed for some action, then you could be morally responsible for it. This is not the same question as whether anyone should blame or praise you for that action—there can be actions agents are blameworthy for that we shouldn’t blame them for, either because it is none of our business or because our doing so would be hypocritical (Todd 2017), but that doesn’t mean agents can’t be responsible for these actions. The literature on moral responsibility aims to work out the relationship between an agent and her action that makes her morally responsible for that action. Like most philosophical discussions, it is much easier to find what theorists disagree about concerning the nature of moral responsibility than it is to find what they all might have in common. Philosophers have argued over whether we can be responsible for our actions if the world is predetermined and we cannot ever change what will happen (Van Inwagen 1983), over whether we can ever be responsible for our actions given that we cannot create ourselves and so are not responsible for what we choose to do (Strawson 1986), and even over whether the concept of moral responsibility is a coherent or rational one in the first place (Waller 2011). But one commonality often mentioned in these debates is that whatever it is to be responsible for an action, it has some deep connection to what it takes for that action to be an expression of the agent’s will (Sripada 2015). From George Sher’s claim that theories of moral responsibility are really theories of the self (Sher 2009) to John Dewey’s proclamation that responsible agency is “ourselves embodied in action” (Dewey 1957), there’s an agreement that being responsible for an action involves that action being yours, and that what makes an action yours is that you caused or produced the action in “the right kind of way.” But what does that mean? It’s easy to find particular examples of actions being yours or not on the basis of how they were caused—you “own” the action of raising your arm when you choose to do so, but not when it raises because you have alien hand syndrome, or because the wind pushes it. But it’s also easy to find cases between these poles—did “you” move your arm when you instinctually moved it off a hot surface, or when you unthinkingly pump your fist when your team scores a touchdown? Theories of moral responsibility abstract from cases where we seem responsible to find a general causal story that lets you determine responsibility in more ambiguous cases. There are many ways to break up the broad swath of existing theories of moral responsibility, but for our purposes a distinction made in Levy (2005) will be especially helpful: that between volitionist and attributionist theories of moral responsibility.
Volitionist theories claim you’re responsible for actions that are the product of your volitions, or intentions—what makes you responsible for an action is that you intentionally chose to do it. Attributionist theories instead claim you’re responsible for actions that can be attributed to you, typically because some aspect of your “deep self,” or your innermost character traits, is what caused them. On these theories, what matters for responsibility isn’t whether you chose to do the action, but whether the action reflects what you’re really like. At first glance it might not be obvious why these are supposed to be competing theories, and some recent commentators (Shoemaker 2015) have suggested these views aren’t as different as they’re commonly taken to be. But attributionists can take us to be responsible for all kinds of unintentional actions, like laughing upon hearing that an enemy has died, or forgetting to call a friend on their birthday (Smith 2005), that volitionist theories often have trouble accounting for (Vargas 2005). By contrast, while volitionist theories have tighter “restrictions” on what makes you responsible for an action, they thereby have an easier time ruling out things we aren’t responsible for (Sridharan 2016). Also, attributionists can have trouble explaining why you’re responsible for a fundamental aspect of yourself if that aspect wasn’t chosen. The debate between these two kinds of theories is a very broad one, and generates a lot of attention because most theories of responsibility seem to be of one kind or the other. But the distinction is useful to us here because of the following disconcerting fact: it seems like neither theory has much of an advantage when it comes to explaining responsibility for implicit biases.
4 Responsibility for Implicit Biases via Intentional Actions

To see why, start with volitionist theories. If implicit biases are as “implicit” as they are often taken to be, affecting actions in ways we don’t notice and might not be able to change, then the idea of being responsible for these biases because we have control over them can seem absurd. But then again, even if we can’t directly control our biases, it might be enough to show we can indirectly control them. Maybe we’re responsible for actions caused by our biases because we didn’t do enough to stop the biases from forming in the first place, or from activating in the moment. This is a popular strategy used by theorists looking to argue that we’re responsible for our biases because of the control we have over them, and we can get a good picture of what such an approach looks like by examining some arguments given in Holroyd (2012). In one of the first articles considering whether we might be responsible for our biases, Jules Holroyd gives a number of arguments to show that implicit biases are not as far outside of our control as we might think. One argument claims we must have some kind of control over the acquiring of our implicit biases because the extent to which individuals in a given society have them varies significantly from individual to individual (280).
However, if implicit biases “result solely from living in a racist and sexist culture” (279), then you might not expect to see so much variation between individuals who have all been exposed to the same society. Also, it seems there are some people who genuinely don’t have any implicit racial biases (even if the amount or percentage of these kinds of people isn’t clear yet), and clearly they’ve had similar exposure to our culture as well. Further, Holroyd cites studies (Devine et al. 2002) that seem to suggest that persons who agree that “behaving in a nonprejudiced way [is] important in itself” (281) tend to perform better on the Implicit Association Test than persons who don’t agree with that statement. Finally, some studies (Nosek 2005) seem to show that a person’s explicit and implicit biases are often highly correlated. Significantly, this is true even when subjects believed their views were different from “mainstream” views about race and gender, which makes it hard to see how it’s cultural baggage, and not their own attitudes and preferences, that ultimately determines which biases they have. Holroyd believes this background is important because if we see that agents play a role in the acquiring of their biases, then it looks like they might have a role to play in the activation and influence of their biases as well. She claims there are a number of things we’re responsible for that we nonetheless can only control indirectly, like our ability to play a piano, to learn a foreign language, or to have a good relationship with our mothers-in-law (284). You have indirect control over these things because while you can’t choose to be able to play the piano well, you can directly choose to do other things (taking lessons, making time to practice and actually practicing, etc.) that will result in your being able to play the piano well. And recent studies seem to show that there are steps you can take to control your biases—Holroyd cites studies showing reduced implicit bias after performing activities like looking at pictures of minorities in nonstereotypical situations (Blair 2002), or repeating “implementation intentions” (Webb, Sheeran, and Pepper 2010)—telling yourself to associate minorities with positive traits, like “I will associate Muslims with peace” (for more on these and other strategies, see Madva, Chapter 12, “Individual and Structural Interventions”). Subjects also demonstrate less implicit racial bias when they take the test in the presence of a black test administrator (Lowery, Hardin, and Sinclair 2001), suggesting that exposing yourself to counterstereotypical situations involving minorities is also helpful. Other indirect forms of control have been suggested by Saul (2013), who notes that we can always evaluate applications anonymously, and Holroyd and Kelly (2016), who suggest a number of ways we can alter our working environments to reduce the effects of implicit biases. So it looks like it shouldn’t be that hard to combat our implicit biases by indirect means. To get an idea of what indirect responsibility for our implicit biases amounts to, consider our responsibility for our moods (Madva 2018). We can’t directly choose to be happy or unhappy, but we can clearly do things that make us more likely to be happy or unhappy, and it seems like because of this we can be responsible for actions caused by our moods.
If we lash out at someone because we’re in a bad mood and we haven’t done anything lately to try to improve our mood, we seem more responsible than the person who tried and failed to improve their mood. Is this kind of indirect control enough to make us responsible for actions caused by implicit biases? It is certainly enough to explain our responsibility for some important cases. When what we’re concerned with are cases in which we could have indirectly stopped the activation of our biases, then it seems like our indirect control over our biases is enough to make us responsible for their effects. If we know, for example, that evaluating job applications without knowing the names of the applicants is enough to stop our implicit biases from being activated, then choosing to evaluate the applications without hiding the names puts us “on the hook” for any effects our implicit biases have if these stop us from hiring the right applicants for the job. But this only gets us so far, since in a large number of cases where we’re concerned about the effects of our biases, we cannot avoid activating the biases, such as choosing candidates after in-person job interviews, or any contexts requiring face-to-face interaction with members of the social outgroup we have a bias concerning. So even if we can appeal to indirect responsibility to explain why we’re responsible in some kinds of cases, this won’t cover a lot of ground unless we can also indirectly control the influence our biases have over us once they’re activated. And there’s some reason to worry appeals to indirect control won’t explain why we might be responsible in those kinds of instances. The first worry is simply that a lot of these accounts are based on recent scientific explanations, and these have a habit of being overturned in unexpected ways. Something like this has already started to happen in the psychological literature on debiasing exercises: in 2014, Lai and colleagues wrote a promising report claiming that of the 17 most commonly mentioned therapies to reduce implicit bias, 8 of them are at least somewhat effective. But two years later, Lai and colleagues (2016) noted that of those 8 effective therapies, none of them had any effects 8 hours later. Examples like this raise worries about how effective self-directed exercises to combat our implicit biases really could be. Other studies that claim we can reduce our implicit biases also find the effect of the interventions weak enough that it isn’t clear how much preventative work they really do (Joy-Gaba and Nosek 2010). Of course, none of the leading methods to eradicate polio worked until one did, so these findings don’t show we can’t find a way to reduce our biases. But they also suggest we shouldn’t be so sure that we can indirectly control the influence of our biases once they are activated. Another worry is that while it might be true that we can be responsible for the things we can indirectly control, merely being able to indirectly control something isn’t enough to make me responsible for it. This can occur for at least two reasons. First, I might not be responsible for failing to (indirectly) control my biases, for instance, if I don’t have the knowledge that I can do this.
One can worry that most people with implicit biases don’t know about these debiasing interventions, or how to find out about them, or even what implicit biases are and that they ought to find interventions to control them; couldn’t that suggest most of these people aren’t responsible for actions caused by their biases? Washington and Kelly (2016) argue against this view, claiming that we can be responsible for the effects of our biases even if we don’t know about them, or about the kinds of exercises that might reduce their influence—as long as we’re in an epistemic environment where such knowledge would be available to us if we tried to seek it out. This can explain, they claim, why we might think a well-meaning hiring manager with implicit biases they’re unaware of but would disavow if they knew of them would not be responsible for the effects of their biases if they were hiring someone in the 1980s (when fewer people really knew about implicit biases, making it generally harder to learn about them). Their view can also explain why a similar hiring manager in the 2010s would be responsible. The 2010s manager would have the opportunity to learn about and try to counter their biases in a way the 1980s manager never did. In other words, you might be responsible for something you don’t know about if your not knowing about it is due to your choosing not to inform yourself about it. The mere fact that we might not know about our biases isn’t enough to show we can’t be responsible for them. The second form of this worry is more troubling, however. Even if we could indirectly control our biases and we could come to know how to do that, it might not follow that we have the right kind of indirect control over them in these cases. This is because our indirect control might not be reliable enough to ground responsibility. For example, consider whether you can be responsible for your high blood pressure. While it is true that there are things you can do that are supposed to reduce your blood pressure (eating less salt, reducing stress, etc.), it isn’t true that doing those things guarantees your blood pressure will drop. So it might turn out that blood pressure can be controlled (by the people those remedies work for), but that you can’t control your blood pressure. We can say something similar about the examples Holroyd uses to motivate her case as well. Whether or not you’re indirectly responsible for your relationship with your mother-in-law depends on how reasonable your mother-in-law is and the extent to which she’ll let you build a good relationship with her, and not just on what you choose to do concerning her. Given the kinds of variability we find with the extent and influence of implicit biases already, it wouldn’t be surprising to find out that methods to reduce them also vary significantly in effect and influence from person to person. And then there may be things we can do to reduce our biases, but no clear way to ensure that our biases are actually reduced by doing them. Would we be responsible for our biases in these kinds of cases? The answer is not clear. Whether and how well we can reduce our biases isn’t fully known, so it is hard to know the extent to which our indirect control of our biases grounds our responsibility for them. However, there are approaches to explaining our responsibility for our biases that do not depend on assuming we have indirect control over them. We’ll turn to those now.
5 Responsibility for Implicit Bias via the Deep Self

Perhaps the best foil to Holroyd’s attempt to explain how we can control our biases is Michael Brownstein’s attempt to argue that the racialized attitudes held by agents with implicit racial biases are their attitudes, and so they’re responsible for the actions caused by them (Brownstein 2016). He does this by arguing that while it looks like attributionist theories of moral responsibility couldn’t hold us responsible for these kinds of actions, this appearance is due to a misunderstanding of what it takes for an attitude to belong to an agent. What matters, he claims, isn’t how strongly the attitude coheres with other attitudes the agent has, or how much control the agent has over the attitude, but just how deeply ingrained the attitude is. Brownstein isn’t the only author offering an attributionist defense of responsibility for our implicit biases (see also Faucher 2016; Smith 2018), but examining his view demonstrates some of the advantages and disadvantages such a view offers. Remember that these theories are also called “deep self” theories. What makes you responsible for an action on these theories is that your action was caused by a “core” or essential characteristic of who you are as a person. It isn’t enough, say, for you to be responsible for dropping a glass because you had a muscle spasm and it was “your” muscle that spasmed. For deep self theorists, what makes you responsible for an action is that your most basic and essential values, beliefs, or attitudes are either part of what caused the action or part of the attitudes reflected in your action (Sripada 2016). The worry for saying that we’re responsible for implicit biases on deep self views is that while implicit biases might be “part of us” in some sense, it doesn’t seem like they will be in a very deep sense. Why? Well, for at least two reasons. First, given that these biases are so pervasive, and that you acquire these biases in part by living in unjust societies, it can just seem like having these biases is like having a tan—it’s something you get from being in a certain place but not necessarily part of what you’re fundamentally like. Second, these biases seem surface-level not just because they’re so widespread, but also because they seem so automatic that it doesn’t look like very deep processing is being drawn upon when we use them (Johnson, Chapter 1, “The Psychology of Bias: From Data to Theory”). In that sense, they can seem like the muscle spasm I mentioned earlier—something that happens to you more than something that is really yours. You can worry that a “deep self” theory that would hold us responsible for our implicit biases can’t be that deep after all. At first glance, then, deep self theories would seem to have a hard time explaining why we are responsible for our implicit biases. How does Brownstein resolve this worry? Brownstein’s main move is to try to recast the aspects that make up our deep selves as being intimately connected to our cares rather than our values. What it means for an action to be expressive of your deep self, for Brownstein, is for it to reflect the kinds of things you care deeply about, and you care deeply about something when it rouses deep emotions within you and you feel strongly disposed or compelled to act on it.
What’s intuitive about such a picture is that caring deeply about something is a way for something to matter to you. And it seems like what makes a commitment a deep part of your identity is probably this sense of mattering, and not some rational connection between your attitudes and your judgments (the kind of connection other deep self theorists like Scanlon 2008 or Smith 2005 take to be constitutive of an action belonging to an agent). Brownstein’s view, for instance, would seem to get right what it is for a pre-linguistic child to care deeply for his parents, since the child probably doesn’t have rational commitments to anything yet, but would clearly still have cares and concerns of various kinds. Brownstein argues that by taking our cares (and not our rational values) to be the components making up the deep self, we won’t just have a more accurate picture of what such deep selves are made of, but also one that allows us to accommodate actions caused by implicit bias as stemming from the deep self. That’s because while our biases don’t seem to necessarily reflect or even cohere with our rationally-chosen values, they still have the kind of deep emotional connection to our actions that Brownstein takes to constitute our deep selves. He claims they have this emotional aspect because it seems like we can often “feel” our biases operating (a point also elaborated upon by Holroyd 2015), and because when we act on our biases we often act in situations in which we care about and have emotions concerning the outcome. Brownstein points out the fear that many people feel when they see a black man holding a non-gun object in an implicit bias test, or the low-level affective concerns that come with thinking of women as less intelligent (779). Since it looks like our implicit biases meet the test for being part of our deep selves, maybe we’re responsible for actions caused by them. Brownstein is undoubtedly right that there’s an interesting and worthwhile attributionist explanation of our responsibility for our implicit biases, and he articulates it well. But just as we can worry that volitionist accounts of moral responsibility seem to leave out too many kinds of actions we might be responsible for, attributionist accounts can seem to leave us on the hook for too many actions we aren’t truly responsible for. First, if what it takes to be responsible is only that my concern be deeply felt and that I’m disposed to act on it, then it looks like I’d be responsible for behaviors stemming from disorders like obsessive-compulsive disorder (OCD) or Tourette syndrome (Schroeder 2005). But presumably the fact that my emotional distress and my disposition to act on something like rearranging all the pencils in the room so they face me are coming from an illness should mean I’m not responsible for it (though for an alternate view, see Pickard 2015). Brownstein might be fine with this—perhaps implicit biases are a kind of disorder that we all have and are nonetheless responsible for—but the worry remains that his view would let in more than he bargained for. Consider the following case:
Toxic Environment—Dawn grows up near an illegal but well-hidden toxic gas dumping site, and as a result has a number of symptoms she wouldn’t have if she’d grown up somewhere else. One such symptom is that she can often be condescending and impatient with others without meaning to or having any reason to. In fact, she often doesn’t notice she’s doing this until she is told how inappropriately she has been acting, at which point she feels confused and ashamed. She genuinely feels bad about what she does and wishes she would not be so dismissive, but has trouble exercising any direct influence over this reaction.

If Dawn is given a stable emotional disposition by something like a poison gas, we probably wouldn’t find her responsible for this. It isn’t a value that she endorses or wants, or is even aware of, until other people let her know it causes her to act in a certain way. Brownstein’s view would say agents like Dawn are responsible for these kinds of actions. Some theorists think these cases, in which an agent’s attitudes or cares are caused by external nonrational factors they don’t endorse, undermine attributionist theories of responsibility (Levy 2005; Sridharan 2016). Others think it is acceptable for our theories to hold agents like Dawn responsible for actions so directly caused by their environments (Fischer 2004; Ebels-Duggan 2013). The worry that manipulated agents like Dawn are held responsible for things they shouldn’t be on attributionist accounts is a standard objection, but it is especially troublesome once we note how similar these cases are to the cases of implicit bias Brownstein is trying to account for. Consider the following case:

Toxic Social Environment—Dave grows up near an immoral but well-hidden toxic racist social environment, and as a result has a number of symptoms he wouldn’t have if he’d grown up somewhere else. One such symptom is that he can often be condescending and impatient with minorities without meaning to or having any reason to. In fact, he often doesn’t notice he’s been doing this until he is told how inappropriately he has been acting, at which point he feels confused and ashamed. He genuinely feels bad about what he does and wishes he would not be so dismissive, but has trouble exercising any direct influence over this reaction.

While Toxic Environment is a little fantastical, Toxic Social Environment seems like one of the main ways persons are affected by their implicit biases. But once you make it clear how biases are “imposed from outside,” it’s hard to feel like Dave is responsible for them even if his implicit bias manifests as a kind of deep “care.” If Dave really is responsible for his actions here, then we’d either need an explanation of how his case differs from Dawn’s, or an explanation of why taking Dawn to be responsible isn’t so far-fetched.
Actions caused by implicit biases have sharply different features from most actions we’re responsible for, and so trying to shape your theory to accommodate them risks accommodating too much. However, if the problem is that we can’t make sense of responsibility for implicit biases using our standard theories, then maybe the solution is to change the structure of our theories.
6 “Revisionist” Approaches to Responsibility for Implicit Biases

The problem of moral responsibility for implicit biases arises because, on the one hand, many of us have the intuition that people are responsible for actions caused by implicit biases (Cameron, Payne, and Knobe 2010), but, on the other hand, these kinds of actions lack the standard responsibility-making features (like being intentional or expressing our deepest values). The approaches we’ve considered so far aim to ease the tension between these claims by showing that actions caused by implicit biases really do, despite appearances, have the standard set of responsibility-making features. But if this is true despite appearances, then what was originally making us think we were responsible for these kinds of actions in the first place? Aren’t attempts to explain responsibility for new kinds of action by appealing to old theories missing something about the phenomena we’re trying to understand? Manuel Vargas has compared the way some philosophers investigate our responsibility for our biases to the way toast is made in a toaster:

the toaster is some characterization of moral responsibility … . We take some bread—the phenomenon of implicit bias—and then put it in. We wait a few minutes, and out pops the toast, delivering a verdict on the phenomenon. (Vargas 2016, 89)

While this isn’t a bad way to figure out whether we’re responsible for our implicit biases, it can seem to miss the point—implicit biases seemed to challenge our theories of responsibility by showing that responsible actions come in more forms than we might expect. If we then determine our responsibility for them by appealing to those same theories, have we really learned anything new? What would it look like to be guided by a new picture of responsible agency? Instead of trying to shoehorn implicit bias into our existing theories of responsibility, you can claim that the phenomenon of implicit bias reveals that our existing theories are inadequate. This is the strategy taken by a number of “revisionist” approaches to moral responsibility, and we’ll consider a couple of them in this section. Revisionists about moral responsibility claim that an adequate theory of moral responsibility will have to get rid of some elements of our commonsense thinking (Vargas 2009; 2013).
The idea is that rather than hoping that our intuitions about what makes us responsible and our intuitions about the actions we’re responsible for will just happen to line up properly, we should instead take the latter seriously and build out the former to accommodate them. You can see why such an approach might be especially helpful in making sense of responsibility for implicit biases. If what makes us responsible for our actions isn’t our intentions or our attitudes, what could it be? Zheng (2016) tries to answer this question by building on Watson’s (1996) distinction between two kinds of responsibility: responsibility as attributability and responsibility as accountability. On her gloss of the distinction, theories of attributability are theories concerning what makes an action belong to an agent, or the kinds of theories that specify what makes an agent responsible for their action. Every theory we’ve considered so far has been a theory of this kind. But we can also think about responsibility in terms of the responsibilities other agents have towards one another, and the conditions under which they can be held accountable for failing to fulfill these responsibilities. For instance, it’s clear that the United States isn’t responsible in an attributability sense for the natural disasters that befall its citizens, but it is nonetheless responsible in an accountability sense for doing what it can to remedy the harm these disasters cause (Anderson 2010, 138–9). Zheng tries to use a similar framework to explain how we might have some responsibility for the actions caused by our implicit biases. Zheng claims that we can be morally responsible for actions that aren’t “our” actions in the sense other theories of moral responsibility have been concerned with. We can be accountable for our biases because cases of implicit bias often occur in situations where something like strict liability governs the norms of upholding the position. If you are a hiring manager and you don’t hire the best applicant, you’ve failed in your duties as a hiring manager even if it was because of an implicit bias that you cannot control and don’t endorse. She claims we can say similar things about other cases between agents that involve implicit bias, and notes that holding someone morally responsible in the accountability sense need not include blaming them for their actions (or attributing bad intentions or values to them), but only making sure that proper redress occurs between agents. One advantage of this claim is that it gets at the conflicted feelings we hold about implicit bias. It seems we might be responsible for our biases but it isn’t clear how we could be. Finding out we’re responsible for our biases in one way but not in another could certainly explain that tension. But one can worry that the notion of responsibility as accountability Zheng refers to is too broad. This is certainly a kind of responsibility, but is it the kind we’re concerned about when we’re wondering whether or not an agent is morally responsible for her action? Consider again our beginning example of Jane and Joan.
There’s an important sense in which Jane is responsible for failing to properly evaluate the applications (she doesn’t do it because she is a racist) and Joan isn’t (she doesn’t do it because Jane hides them from her). Yet both Jane and Joan have a responsibility to do something about the applications they aren’t evaluating, and so on Zheng’s account both are accountable for their lapse. The problem here isn’t that they aren’t accountably responsible (that sounds right), but that focusing on responsibility as accountability seems to abstract away from the question we originally wanted to answer. In what sense are implicit biases yours despite your unawareness and disavowal of them, and what role does that fact play in explaining when we are and aren’t responsible for them? Zheng seems to explain how we’re responsible for our biases by bypassing the question we wanted to answer. Other attempts to solve the problem of responsibility for biases by appealing to normative considerations seem to face a similar issue. Mason (2018) introduces a similar notion when she claims we can be responsible for our biases, even if they aren’t in our control and don’t express bad attitudes, because we should take responsibility for them. We should take responsibility for actions we do that hurt others, she writes, and this taking of responsibility must be more than mere liability. Why? Because there seems to be something insincere about the person who claims they understand they acted wrongly because of their bias, intends to do better, but will not apologize because they don’t take themselves to have been blameworthy for their action. Mason is right, I think, that such a quasi-apology would be unsatisfying. And she successfully explains how and why the implicit bias being ours is important for responsibility, something we worried Zheng might be missing. But Mason seems to be describing our puzzle instead of solving it—why would the apology appear unsatisfying if it really is true that the agent didn’t act intentionally and the action didn’t express their attitudes? More importantly, why does our being unsatisfied mean the apologizing agent is incorrect, and not that our standards are unreasonable? The basic concern about these kinds of theories of moral responsibility (raised in another context by Todd 2016) is that they might not have the resources to answer the above question. If you answer by appealing to some set of normative standards (like what hiring managers are supposed to do), then you seem unable to distinguish between failing at some standard because it was your fault and merely failing at the standard. But if you try to say more about the conditions under which your failure is your fault (such as because you intended the action or it expressed your character), then it looks like there’s nothing revisionist about the theory anymore. It becomes a theory of responsibility as attributability, the kind of theory revisionist theories were supposed to replace. Revisionist theories of moral responsibility are very promising, but also very new, so a full understanding of their strengths and weaknesses isn’t readily available yet. Whether this is an exciting new direction or a detour on the way to the true answer isn’t something we’ll know for quite a while.
7 Conclusion

The question of whether or not we’re responsible for actions caused by implicit biases is really the question of whether actions caused by implicit biases meet the standards for responsible actions, and so answering the former requires investigating accounts of the latter. In this chapter, I’ve presented several options that have been proposed for this task, and the potential tradeoffs these options require. Taking us to be responsible for actions caused by our biases because of the kind of control we have over them lets us give a very standard answer to a very non-standard problem, but also one that might not generalize to all the kinds of cases we’re concerned about. Holding us responsible for actions caused by our biases because they express aspects of our deep selves allows us to attribute responsibility even when agents lack full control over their implicit biases, but it also might make us responsible for all kinds of other behaviors we typically take ourselves not to be responsible for. Finally, revising the nature of moral responsibility to make room for actions caused by implicit biases ensures our accounts grasp the relevant phenomena, but also makes it unclear whether we’re still talking about moral responsibility and not some other, weaker, notion. All philosophical positions require some tradeoffs, so none of the complications listed above should be seen as fatal for the views discussed. So where do we go from here? It’s clear that the answer to whether and how we’re responsible for our biases is going to involve some element from most of these kinds of answers, just because of the wide expanse of different kinds of morally-laden situations we can refer to as having been “caused by” our implicit biases. Appeals to indirect control are probably our best bet for explaining why someone is responsible for their biases in instances in which they could have stopped their biases from activating and chose not to. Revisionist approaches to responsibility are certainly right in pointing out that we can be responsible for repairing harms caused by our biases even if those harms are not a result of our intentions or values. But figuring out which of these considerations takes explanatory precedence will probably require answering some questions that haven’t received enough consideration. I’ll conclude by considering one of these questions, concerning what is blameworthy about our implicit biases. To what extent does our blameworthiness for our biases require or depend on our ownership of them? If we want to know what makes us responsible for the actions caused by our biases, it would help to have a bit more clarity on what the morally salient effects of these actions typically are. That’s because while we don’t need to know whether we’re blameworthy for our biases to know if we’re responsible for them, we do want our theories of responsibility to explain how we could be blameworthy for these actions if we are.
And one way to think of the difference between the more traditional volitionist and attributionist theories of moral responsibility proposed by Holroyd and Brownstein and the more revisionist theories of responsibility proposed by Zheng and Mason has to do with where they can situate our potential blameworthiness for the actions caused by our implicit biases. Brownstein’s and Holroyd’s theories, in taking us to be responsible for our biases on the basis of how they are caused within us, suggest that what’s blameworthy about our biases fundamentally has to do with what they show about us (for more discussion, see McHugh and Davidson, Chapter 9, “Epistemic Responsibility and Implicit Bias”). Maybe they show that there were things we could do to stop them that we didn’t do, or maybe they are a kind of residue from morally bad feelings or attitudes we currently have. Zheng and Mason, on the other hand, seem to imply that what makes us responsible for our biases are the ways others can object to the actions caused by them. On their approaches, it is because others can object to our biases (i.e., demand that we make amends for the wrong actions we’ve performed because of our biases, or that we do better in the future) that they are blameworthy. Where we want our account of responsibility to begin can be suggested by where we want our account of blameworthiness to end up. So we have two ways of thinking about what makes our actions worthy of blame. One is what our actions show about us, and another is whether others can object to what we’ve done. These two ways of thinking about blame and implicit biases might stem from a more foundational tension concerning the kind of explanation we think an account of our responsibility for our implicit biases is supposed to provide. When we’re concerned about explaining responsibility for implicit biases, we might be concerned about two distinct things. We might be worried about how we can be responsible for actions that are implicitly biased, given that such biases are often outside of our full awareness and perhaps not controllable. Or, we might be worried about the ways in which these actions are implicitly biased, in that they harm and wrong others, and whether we can be responsible for these harms. Which of these concerns is more fundamental? Do we want our theories of moral responsibility to focus on what our actions show about us, or on whether others can object to our conduct? The literature on responsibility for implicit biases is an embarrassment of riches, with a number of distinct and interesting approaches to the topic. Perhaps a better grip on what we think we’re responsible for when we’re responsible for our implicit biases can help us figure out what it is that makes us responsible for them.
SUGGESTIONS FOR FUTURE READING

If you’d like to read about arguments claiming we are not responsible for actions caused by our implicit biases, read:

• Levy, N. (2014) Consciousness, implicit attitudes and moral responsibility. Noûs, 48(1). Levy argues here that a proper understanding of the role that consciousness of our actions plays in controlling our actions will show why unconscious actions like those caused by implicit biases can’t be under our control and so can’t be ones we’re responsible for.
• Levy, N. (2017) Implicit bias and moral responsibility: Probing the data. Philosophy and Phenomenological Research, 94(1): 3–26. https://doi.org/10.1111/phpr.12352. Levy here offers a general argument for the view that we should first fix our theory of responsibility according to standard cases and then apply it to more unusual instances, and that doing so with the case of implicit bias shows that we are not responsible for them because we cannot properly control them and because they are not caused by mental states that can be attributed to us.
If you’d like to read more about other considerations that might make us responsible for our implicit biases not covered in this chapter, read:

• Stark, S.A. (2014) Implicit virtue. Journal of Theoretical and Philosophical Psychology, 34(2). Stark argues for an attributionist view of moral responsibility on the basis of considerations from virtue theory—our attitudes can be “ours” even if we didn’t choose them, she claims, as long as they reflect our judgments in the right way.
• Glasgow, J. (2016) Alienation and responsibility. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 2 (pp. 37–61). Oxford: Oxford University Press. Glasgow argues for a kind of consequentialist conception of responsibility for implicit bias, wherein what makes us responsible for an act of implicit bias is directly related to the amount of harm that it causes.
• Sie, M. and Voorst Vader-Bours, N. (2016) Stereotypes and prejudices: Whose responsibility? Indirect personal responsibility for implicit biases. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 2 (pp. 90–114). Oxford: Oxford University Press. Sie and Voorst Vader-Bours argue that we can be responsible for things like stereotypes and prejudices by being part of a collective that is aware of and fails to act so as to eliminate these prejudices, and that we can therefore be “indirectly personally” responsible for our implicit biases in virtue of being part of a social group that is collectively responsible for them.
If you’d like to read more about the different kinds of control we might be able to exercise over our implicit biases, read:

• Holroyd, J. and Kelly, D. (2016) Implicit bias, character, and control. In A. Masala and J. Webber (eds), From Personality to Virtue: Essays on the Philosophy of Character (pp. 106–133). Oxford: Oxford University Press. Holroyd and Kelly explain the kind of control we have over our biases in terms of the notion of “Ecological Control.” We can’t directly control our biases, but we can indirectly control them by controlling aspects of our environment in ways that give rise to or diminish our biases.
• Frankish, K. (2016) Playing double: Implicit bias, dual levels, and self-control. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 1 (pp. 23–46). Oxford: Oxford University Press. Frankish develops a model of self-control in general according to which we can influence our implicit biases via our “metacognitive motivations,” or our conscious desires to change aspects of our unconscious beliefs.
• Buckwalter, W. (2018) Implicit attitudes and the ability argument. Philosophical Studies [online first]. Buckwalter examines the kinds of control we might have over our biases on the way to arguing that the empirical literature on implicit biases doesn’t settle the question of whether we are able to control our biases or not, and also claims that perhaps we can be responsible for our biases even if we have no control over them whatsoever.
If you’d like to read more about revisionist approaches to moral responsibility, read:

• Vargas, M. (2009) Revisionism about free will: A statement & defense. Philosophical Studies, 144(1): 45–62. Vargas offers a general statement of the view he calls “moderate revisionism,” summarizes the motivations for this kind of revisionism about moral responsibility, and responds to criticisms of the notion.
• McCormick, K.A. (2013) Anchoring a revisionist account of moral responsibility. Journal of Ethics and Social Philosophy, 7(3). McCormick elaborates on two problems one can have with revisionist views of moral responsibility, and claims a focus on the “reference-anchoring” objection has obscured a more interesting and devastating objection based on the “normativity-anchoring” problem for the view.
• Morris, S.G. (2015) Vargas-style revisionism and the problem of retributivism. Acta Analytica, 30(3): 305–316. Morris claims that revisionism cannot properly motivate the retributivist elements of moral responsibility, and so a revisionist theory of moral responsibility isn’t really a theory of responsibility at all.
DISCUSSION QUESTIONS

1 This chapter focuses on whether we can be responsible for actions caused by our implicit biases, but do you think we can be responsible for the biases themselves, that is, for simply having biases even if we never act on them? Why or why not? Do you think any of the arguments discussed in this chapter (especially the ones focusing on the notion of “indirect control”) result in our being responsible for having the biases themselves and not just for the actions they cause?
2 If we’re responsible for actions caused by our implicit biases, does that mean that we’re blameworthy for these kinds of actions? Why or why not?
3 Based on the empirical information presented here, do you think we can indirectly control our implicit biases? Why or why not? What further empirical research could be helpful in settling this question, if any? (See also Madva, Chapter 12, “Individual and Structural Interventions”.)
4 Dominguez uses a thought experiment called Toxic Social Environment to raise an objection to “deep self” views of responsibility. Explain the objection. How might a defender of the “deep self” view like Brownstein respond to the objection? Whose position do you find more convincing and why?
5 Many believe that we cannot be responsible for any action that we did not intentionally choose to do. What arguments can you think of to support this view? How would you respond to the arguments in this chapter claiming we can be responsible for actions that are not under our control?
6 Are we responsible for all the things we can indirectly control, or just some of them? If just some of them, then why think actions caused by implicit bias will be part of this set? If all of them, then what makes us responsible for things we don’t directly control?
7 Do “revisionist” accounts of moral responsibility “beg the question” concerning whether we’re responsible for our implicit biases? That is, do they illicitly assume from the start what they are trying to prove (which is that we are responsible) instead of giving an independent argument for this conclusion? If so, does this seem to be a problem with revisionist accounts in general? If not, what do we learn when we learn that revisionist accounts of responsibility determine when we’re responsible for an action?
REFERENCES

Anderson, E. (2010) The Imperative of Integration. Princeton, NJ: Princeton University Press.
Berwick, C. (2010) Patterns of discrimination against Blacks and Hispanics in the US mortgage market. Journal of Housing and the Built Environment, 25(1): 117–124.
Blair, I.V. (2002) The malleability of automatic stereotypes and prejudice. Personality and Social Psychology Review, 3: 242–261.
Brownstein, M. (2016) Attributionism and moral responsibility for implicit bias. Review of Philosophy and Psychology, 7(4): 765–786. https://doi.org/10.1007/s13164-015-0287-7
Cameron, C.D., Payne, B.K., and Knobe, J. (2010) Do theories of implicit race bias change moral judgments? Social Justice Research, 23(4): 272–289.
Dewey, J. (1957) Outlines of a Critical Theory of Ethics. New York: Hillary House.
Ebels-Duggan, K. (2013) Dealing with the past: Responsibility and personal history. Philosophical Studies, 164(1): 141–161.
Faucher, L. (2016) Revisionism and moral responsibility for attitudes. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 2. Oxford: Oxford University Press.
Fischer, J. (1999) Recent work on moral responsibility. Ethics, 110(1): 93–139. https://doi.org/10.1086/233206
Fischer, J.M. (2004) Responsibility and manipulation. The Journal of Ethics, 8(2): 145–177.
Holroyd, J. (2012) Responsibility for implicit bias. Journal of Social Philosophy, 43(3): 274–306. https://doi.org/10.1111/j.1467-9833.2012.01565.x
Holroyd, J. (2015) Implicit bias, awareness, and imperfect cognitions. Consciousness and Cognition, 33: 511–523. https://doi.org/10.1016/j.concog.2014.08.024
Holroyd, J. and Kelly, D. (2016) Implicit bias, character, and control. In A. Masala and J. Webber (eds), From Personality to Virtue: Essays on the Philosophy of Character. Oxford: Oxford University Press.
Joy-Gaba, J.A. and Nosek, B.A. (2010) The surprisingly limited malleability of implicit racial evaluations. Social Psychology, 41(3): 137–146. https://doi.org/10.1027/1864-9335/a000020
Lai et al. (2014) Reducing implicit racial preferences: I. A comparative investigation of 17 interventions. Journal of Experimental Psychology, 143(3): 1765–1785. http://dx.doi.org/10.1037/a0036260
Lai et al. (2016) Reducing implicit racial preferences: II. Intervention effectiveness across time. Journal of Experimental Psychology, 145(8): 1001–1016. http://dx.doi.org/10.1037/xge0000179
Levy, N. (2005) The good, the bad, and the blameworthy. Journal of Ethics and Social Philosophy, 1(2): 2–16.
Lowery, B.S., Hardin, C.D., and Sinclair, S. (2001) Social influence effects on automatic racial prejudice. Journal of Personality and Social Psychology, 81: 842–855.
Madva, A. (2018) Implicit bias, moods, and moral responsibility. Pacific Philosophical Quarterly, 99(S1): 53–78. https://doi.org/10.1111/papq.12212
Mason, E. (2018) Respecting each other and taking responsibility for our biases. In K. Hutchinson, C. Mackenzie, and M. Oshana (eds), Social Dimensions of Moral Responsibility. Oxford: Oxford University Press.
Mustard, D.B. (2001) Racial, ethnic, and gender disparities in sentencing: Evidence from the U.S. Federal Courts. The Journal of Law & Economics, 44(1): 285–314. https://doi.org/10.1086/320276
Nosek, B.A. (2005) Moderators of the relationship between implicit and explicit evaluation. Journal of Experimental Psychology, 134: 565–584.
Pickard, H. (2015) Psychopathology and the ability to do otherwise. Philosophy and Phenomenological Research, 90(1): 135–163. https://doi.org/10.1111/phpr.12025
Quillian, L., Pager, D., Hexel, O., and Midtbøen, A.H. (2017) Meta-analysis of field experiments shows no change in racial discrimination in hiring over time. Proceedings of the National Academy of Sciences, 114(41): 10870–10875. https://doi.org/10.1073/pnas.1706255114
Rooth, D.-O. (2010) Automatic associations and discrimination in hiring: Real world evidence. Labour Economics, 17(3): 523–534. https://doi.org/10.1016/j.labeco.2009.04.005
Saul, J. (2013) Skepticism and implicit bias. Disputatio, 5(37): 243–263.
Scanlon, T.M. (2008) Moral Dimensions: Permissibility, Meaning, and Blame. Cambridge, MA: Belknap Press of Harvard University Press.
Schroeder, T. (2005) Moral responsibility and Tourette Syndrome. Philosophy and Phenomenological Research, 71(1): 106–123. https://doi.org/10.1111/j.1933-1592.2005.tb00432.x
Sher, G. (2009) Who Knew? Responsibility Without Awareness. Oxford: Oxford University Press.
Shoemaker, D. (2015) Ecumenical attributability. In R. Clarke, M. McKenna, and A. Smith (eds), The Nature of Moral Responsibility. Oxford: Oxford University Press.
Skiba, R.J., Horner, R.H., Chung, C., Rausch, K., May, S.L., and Tobin, T. (2011) Race is not neutral: A national investigation of African American and Latino disproportionality in school discipline. School Psychology Review, 40(1): 85–107.
Smith, A. (2005) Responsibility for attitudes: Activity and passivity in mental life. Ethics, 115(2): 236–271. https://doi.org/10.1086/426957
Smith, A. (2018) Implicit bias, moral agency, and moral responsibility. In G. Rosen, A. Byrne, J. Cohen, E. Harman, and S. Shiffrin (eds), The Norton Introduction to Philosophy. New York: W.W. Norton & Company.
Sridharan, V. (2016) When manipulation gets personal. Australasian Journal of Philosophy, 94(3): 464–478. https://doi.org/10.1080/00048402.2015.1104367
Sripada, C. (2015) Moral responsibility, reasons, and the self. In D. Shoemaker (ed.), Oxford Studies in Agency and Responsibility Volume 3. Oxford: Oxford University Press.
Sripada, C. (2016) Self-expression: A deep self theory of moral responsibility. Philosophical Studies, 173(5): 1203–1232. https://doi.org/10.1007/s11098-015-0527-9
Strawson, G. (1986) Freedom and Belief. Oxford: Oxford University Press.
Todd, P. (2016) Strawson, moral responsibility, and the “order of explanation”: An intervention. Ethics, 127(1): 208–240. https://doi.org/10.1086/687336
Todd, P. (2017) A unified account of the moral standing to blame. Noûs. Advance Online Publication. https://doi.org/10.1111/nous.12215
Van Inwagen, P. (1983) An Essay on Free Will. Oxford: Oxford University Press.
Vargas, M. (2005) The trouble with tracing. Midwest Studies in Philosophy, 29(1): 269–291. https://doi.org/10.1111/j.1475-4975.2005.00117.x
Vargas, M. (2009) Revisionism about free will: A statement & defense. Philosophical Studies, 144(1): 45–62.
Vargas, M. (2013) Building Better Beings: A Theory of Moral Responsibility. Oxford: Oxford University Press.
Vargas, M. (2016) Implicit bias, responsibility, and moral ecology. In D. Shoemaker (ed.), Oxford Studies in Agency and Responsibility Volume 4. Oxford: Oxford University Press.
Waller, B.N. (2011) Against Moral Responsibility. Cambridge, MA: The MIT Press.
Washington, N. and Kelly, D. (2016) Who’s responsible for this? Moral responsibility, externalism, and knowledge about implicit bias. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 2. Oxford: Oxford University Press.
Watson, G. (1996) Two faces of responsibility. Philosophical Topics, 24(2): 227–248.
Webb, T.L., Sheeran, P., and Pepper, J. (2010) Gaining control over responses to implicit attitude tests: Implementation intentions engender fast responses on attitude-incongruent trials. British Journal of Social Psychology, 51: 13–32. https://doi.org/10.1348/014466610X53219
Zheng, R. (2016) Attributability, accountability, and implicit bias. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 2, Moral Responsibility, Structural Injustice, and Ethics. Oxford: Oxford University Press.
Ziegert, J.C. and Hanges, P.J. (2005) Employment discrimination: The role of implicit attitudes, motivation, and a climate for racial bias. The Journal of Applied Psychology, 90(3): 553–562. https://doi.org/10.1037/0021-9010.90.3.553
9 Epistemic Responsibility and Implicit Bias
Nancy Arden McHugh and Lacey J. Davidson
Are we responsible for our knowledge and how we act on that knowledge? Can we be responsible for what we don’t know? Are we responsible when we act on biases of which we are unaware? Are there strategies for making us better and more responsible knowers? What are the consequences of knowledge or lack of knowledge? These are some of the questions that epistemologists, philosophers who are concerned with how we gain knowledge and what counts as knowledge, ask about the world. Epistemologists use the term epistemic agents to refer to people who are capable of making choices about the amount and kinds of knowledge they have and how they go about getting it. Some individuals are epistemic agents and others are not. For example, we might not say that a 2-year-old child is an epistemic agent, but we would say that most 33-year-old adults are epistemic agents. When epistemologists talk about the methods people use to gain knowledge, they use the term epistemic practices. Epistemic practices are habits or practices that help individuals and communities gain knowledge about themselves, their communities, and the worlds they inhabit. For example, some people have the epistemic practice of asking a lot of questions and critically assessing the responses they get. Other people have the epistemic practice of going by their gut reaction. These epistemic practices help us to act more or less responsibly with respect to the knowledge we have and seek. Epistemic agents have a responsibility to create, transmit, and receive knowledge in the most accurate and just ways possible, and sometimes this means that we need to improve our epistemic practices.

This chapter is about how to understand the relationship between individual and collective epistemic responsibility and implicit bias. In this chapter we’ll develop a model for understanding epistemic responsibility with respect to implicit bias. We will begin the chapter by discussing moral responsibility for implicit bias (see also Dominguez, Chapter 8, “Moral Responsibility for Implicit Biases: Examining Our Options”). We identify some of the shared assumptions underlying the existing debates about moral responsibility for implicit bias and argue that these approaches are insufficient for understanding our epistemic and moral obligations with respect to implicit bias. Specifically, we will argue that our moral responsibilities related to implicit
bias must include a central role for seeking and disseminating knowledge and improving our epistemic practices. The framework of epistemic responsibility that we will use highlights the role of knowledge in our responsibilities around implicit bias. Next, we argue that individuals and communities can behave in more epistemically responsible ways and, consequently, can lessen the effects and presence of implicit biases. We offer three concrete individual and collective epistemic practices that allow us to develop better habits for seeking, generating, conveying, and absorbing knowledge.
1 Implicit Bias, Responsibility, and Ignorance

Recall from Chapters 1 and 2 (Johnson, “The Psychology of Bias: From Data to Theory”; Leboeuf, “The Embodied Biased Mind”) that many people (particularly those occupying dominant social positions) have beliefs, feelings, and habits that

a. they cannot reliably identify or introspectively access,
b. they would not avow (e.g. report believing in, consciously assent to, agree with—this is what Johnson calls divergence),
c. they think they are unlikely to possess, and
d. influence thoughts and behavior in a way that they would not avow.
Taken together these features give us an idea of what implicit biases are like. We can also distinguish between the influence of implicit biases on behavior and the having of implicit biases themselves. It is plausible that one could have a biased attitude with the first three features (a–c) that fails to influence thought and behavior (d). Nonetheless, implicit biases often lead to cognitive gaps (for more specifics, see Beeghly, Chapter 4, “Bias and Knowledge: Two Metaphors”; Siegel, Chapter 5, “Bias and Perception”; and Basu, Chapter 10, “The Specter of Normative Conflict: Does Fairness Require Inaccuracy?”). For example, an implicit bias may cause us to have epistemically questionable habits for determining people’s credibility related to race and economic class, such as the unquestioned credibility we tend to confer on white, male professors. In this section we will give an overview of a common way of thinking about our responsibilities for implicit bias within philosophy and raise some problems for thinking about it in this way.

When philosophers discuss implicit bias, they often focus on responsibility for particular acts that occur at specific times. On these accounts, whether or not a person is morally responsible for an act influenced by implicit bias is determined by considering whether the agent met a set of conditions for moral responsibility. Thus, on these accounts, one can be deemed “morally responsible” or “not morally responsible” for a particular act. Here’s an example of how this might go:
Laura is an admissions counselor at a university, and it is her job to make the initial round of cuts in the application process. She decides who will make it to the next round of application review and who will not. Unbeknownst to Laura, she has implicit biases that lead her to dismiss applications with traditionally Latinx names, such as Carlos or Juanita. The influence of Laura’s bias on the process means that fewer Latinx individuals are admitted to the university.
When philosophers think about this case, they will often ask, “Is Laura morally responsible for her behavior?” In other words, they wonder whether we can justifiably morally evaluate Laura for her actions even though her actions were influenced by implicit biases that she did not know about and did not control. In many ways, this kind of question is helpful to us in sorting out exactly what our responsibilities are with respect to implicit bias. However, we’ll argue that asking these kinds of questions or starting with individual moral responsibility can obscure both the stakes with respect to implicit bias and our individual and collective responsibilities to take action. But, first, here’s a little more about determining moral responsibility.

As Dominguez (Chapter 8, “Moral Responsibility for Implicit Biases: Examining Our Options”) explains, accounts of direct responsibility for implicit bias typically focus on necessary conditions for responsibility, such as control, knowledge, or attributability. On these accounts, an agent is morally responsible for an action when they have met some set of conditions at the time of the action. If the conditions aren’t met, then the agent is said to be not morally responsible, which has consequences for whether we are justified in morally evaluating the person. Robin Zheng, for example, argues that in many cases implicitly biased behaviors fail to meet the conditions for responsibility (2016: 72; see also Kelly and Roedder 2008; Levy 2017). This is because in many cases the individual does not know that implicit bias is influencing their behavior, and they have done what they can be expected to do to avoid implicit bias. On this account, we might conclude that Laura is not morally responsible for her actions, even though the expectation, given her role, is that she act without bias. This method for thinking about implicit bias starts with the individual and what kind of state the individual was in when implicit bias influenced their behavior in order to determine whether or not the individual can be held responsible or be blamed for her actions.

The starting point for these accounts is often whether a particular individual is morally responsible for a specific action at a particular time, but in addition the accounts can be used to argue that we are indirectly responsible for implicit bias. In other words, we can talk about the ways we are responsible for failing to engage in practices that reduce implicit biases or keep them from influencing our behaviors at a time prior to the behavior we are evaluating. For example, Washington and Kelly (2016) argue that in many cases we are in positions where we should’ve known (about implicit bias), even when in fact we did not
know. In Laura’s case, we might think that she should’ve known about her implicit biases, and thus should’ve done something to prevent them from influencing her behaviors (by removing the names from the applications, for example). We may also be responsible for doing things in response to implicit bias influencing our behaviors; Zheng argues that we ought to be held accountable for our implicit biases even if we are not directly responsible for them. This means that we are still responsible for responding to and mitigating harms that come about as a result of our bias, just as we might pay for a vase that our child accidentally broke in a store. These accounts provide a straightforward way to think about our responsibilities for implicit bias.

Here we suggest, however, that the entire framing of the debate about the necessary conditions for moral responsibility is part of a cognitive framework that functions to obscure responsibility for matters relating to people who are oppressed in virtue of their group membership, such as race, gender, sexuality, and other salient social categories. We’ll develop this point in the next two sections of the chapter. We will consider whether our collective energies would be better focused by shifting from assigning individual blame for implicit bias (Who can we blame for this? How can I personally avoid blame?) to understanding implicit bias as one part of a much larger set of mechanisms that cause, support, and maintain conditions of oppression. We’ll argue that conditions of oppression can be better mitigated by epistemic agents acting with a greater level of epistemic responsibility, because epistemic agents who know more are able to act more ethically. Thus, shifting from questions of moral responsibility for implicit bias to the questions of epistemic responsibility we offer here will allow us to more clearly identify our responsibilities with respect to implicit bias.
2 Epistemic Responsibility

Epistemic responsibility is a set of habits or practices of the mind that people develop through the cultivation of some basic epistemic virtues, such as open-mindedness, epistemic humility, and diligence, that help knowers engage in seeking information about themselves, others, and the world that they inhabit (Code 1987; 2006; Medina 2013). Open-mindedness is the practice of seeking and considering viewpoints that contrast with one’s own. Epistemic humility is the practice of cultivating the right sort of self-reflection such that one attends to “one’s cognitive limitations and deficits” (Medina 2013: 43). Epistemic diligence is the habit of responding to “epistemic challenges,” such as when someone suggests that we are forming judgments in a hasty or unreliable way (Medina 2013: 51). Thus, when a knower acts in an epistemically responsible manner, they are more likely to engage in effective knowledge-seeking practices and make truth-tracking judgments. A judgment is truth-tracking when it matches the way the world really is as opposed to the way it is not.

Just as people can cultivate and practice epistemic virtues that lead to epistemically responsible behavior, they also can cultivate and practice epistemic
vices that lead to epistemically irresponsible behavior. Epistemic vices can be thought of as the inverse of the above virtues—close-mindedness, arrogance, and laziness, which are less likely to yield truth about the self, others, and the world. If an individual’s goal is to gain accurate knowledge, then they need to cultivate and engage in epistemically virtuous behaviors and mitigate epistemic vices. In addition, the development of epistemic responsibility will be essential to our moral lives—if we can know better, we can do better.

There are local (individual) and global (community or society) challenges for engaging in epistemically responsible behavior, and these can reinforce each other (see also Ayala-López and Beeghly, Chapter 11, “Explaining Injustice: Structural Analysis, Bias, and Individuals,” and Madva, Chapter 12, “Individual and Structural Interventions”). As the epistemic virtues and vices are framed above, they are primarily thought of as practices of individuals. One is epistemically responsible to the extent to which one works to cultivate and engage in the above epistemic virtues. On an individual level, one’s ability to do so is shaped partly by internal motivation and experience, as well as by structural constraints. For example, a person might display very little interest or inclination to be humble in any aspect of life. Perhaps they were raised in a household where arrogance was viewed as a sign of strength and authority. It is not hard to imagine arrogant subjects. We encounter them all the time. Thus, it wouldn’t be surprising if the generally arrogant person was also epistemically arrogant and never questioned the limits of their knowledge or the holes in their reasoning processes. Therefore, on an individual level, one might recommend that the arrogant knower be confronted about their behavior and given some tools to change it.

Yet arrogant knowers develop in communities. Thus, no knower is an arrogant or unvirtuous island in themselves. They are, at least to a degree, shaped by the situation in which they live. This is where the global challenges and opportunities for engaging in epistemic responsibility lie. These global challenges are frequently maintained and yet obscured by the culture in which they arise, constructing individuals and communities who are ignorant of their own epistemic irresponsibility and invested in maintaining their ignorance because it feels “psychologically and socially functional,” i.e., it helps them to maintain self-esteem and to get along with other ignorant and arrogant people, yet it is epistemically dysfunctional, i.e., not truth-tracking (Mills 1997: 18).

In The Racial Contract, Mills (1997) described the active construction of ignorance by whites on matters related to race in the USA. As Mills frames it, this epistemic state of ignorance depends on whites having a tacit, collective agreement to misinterpret the world. This ignorance, which results from “predictable epistemic gaps,” is pernicious in that it causes harm to individuals and communities (Dotson 2011: 238). Because this state of epistemic irresponsibility is “psychologically and socially functional,” as Mills argues, it is relatively unconscious and fits whites’ expectations of
social and individual practices; it results in “the ironic outcome that whites will in general be unable to understand the world they themselves have made” (18). Thus, whites are unable to see their own collective epistemic irresponsibility and do not understand or even recognize the resulting social, physical, and psychological fallout. As numerous feminist and critical race philosophers have pointed out, epistemic ignorance is not confined to race (Frye 1983; Medina 2013; Ortega 2006; Sullivan 2007; Tuana 2006). This type of collective epistemic irresponsibility also can be seen in matters related to, for example, gender, country of origin, sexuality, sexual identity, ability, and poverty. The unifying component is that marginalized communities are primarily those who experience the multiple effects of epistemic irresponsibility, whereas empowered communities are usually those who exist in an epistemic state of ignorance regarding the experiences of marginalized communities and how privilege contributes to oppression.

On the other hand, because knowers do exist deeply enmeshed in communities, many of which are overlapping, this also presents an opportunity for what has come to be known as “epistemic friction,” which can result in more epistemically responsible knowers (Medina 2013). Beneficial epistemic friction is the epistemic trial that is presented to knowers when they come in contact with alternative viewpoints, frequently from engaging with communities, individuals, ideas, and knowledge systems different from their own. This sets up the conditions that “enable subjects and communities to detect and sensitize themselves” to cognitive gaps regarding themselves and others (176), such as those caused and maintained by various implicit biases. Yet, in order for this to occur, individuals must have the epistemic virtues of open-mindedness, humility, and diligence to consider these alternative viewpoints. Thus, epistemic responsibility requires both individual cultivation of epistemic virtues as well as social structures that enable epistemic friction.
3 DIY-T (Do It Yourself-Together)

Being epistemically responsible requires engaging in a set of practices or habits that help us develop a better understanding of ourselves, others, and society. We should all want to participate in these practices because they are more likely to lead us to truth-tracking judgments. Furthermore, the ignorance that promotes and is promoted by implicit bias has frequently harmful ethical and political effects, including but not limited to the killing of marginalized people; the denial of employment, housing, and educational opportunities; and the general discounting of marginalized people’s testimony and experiences. Thus, the stakes regarding implicit bias are high. We share an individual and collective need to mitigate them. (Remember, implicit bias is one within a much larger set of mechanisms that support oppressive conditions. But the fact that implicit bias is just one of many things we ought to worry about does not change that we ought to worry about it.) But as José Medina (2013) argues, “[w]e cannot overburden the
individual with the responsibility of identifying and undoing every” bias that they and their culture holds, and that “this responsibility cannot be so diffused in the collective social body that particular individuals, organizations, groups, and institutions do not feel compelled or under any obligation to repair” their implicit biases, shared ignorance, and resulting harms (176). Thus, we must devise strategies for developing epistemic responsibility that balance individual responsibility for developing virtues with the power of collective practices.

Many of the strategies proposed for mitigating implicit biases are individual practices meant to rid oneself of bias. It’s a bit like an epistemic detox. Instead of eliminating harmful contaminants from one’s body, such as heavy metals, the goal of this epistemic detox is to rid the mind of harmful biases. Just as bodily detox programs have both uses and limitations, so, too, do efforts toward epistemic detox. Some of these individual strategies are: considering what it would be like to have another person’s experiences; affirming counterstereotypes; focusing on features of the individual instead of group-based stereotypes; and focusing on common interests, even trivial ones, instead of broader differences (Kawakami et al. 2007; Mallett et al. 2008; Devine et al. 2012; Madva 2017; Madva, Chapter 12, “Individual and Structural Interventions”). Some of these strategies successfully reduce implicit bias, so they are worth doing, but their scope is limited because, as we argued above, no one is an epistemic island. Humans experience social messages on a regular basis that serve to reinforce and solidify implicit biases.

Because these individual strategies have their limitations, we need broader social and collective strategies to help individuals mitigate their implicit biases and practice epistemic responsibility. Thus, collective epistemic responsibility and accompanying strategies are called for to mitigate the effects and the further development of harmful implicit biases. An essential component of this is the presence of alternative epistemic viewpoints to illuminate our cognitive gaps. Remember, these alternative epistemic viewpoints “enable subjects and communities to detect and sensitize themselves” to cognitive gaps regarding themselves and others (Medina 2013: 176). Many of these cognitive gaps are predictable because they track and enable patterns of oppression and marginality. Thus, the kind of epistemic strategies needed are of the sort that would disrupt those patterns.

We want to present two paths to create the space for alternative epistemic viewpoints (see Figure 9.1). These paths are not mutually exclusive. Individuals and groups can move on and through both trajectories, and these trajectories can support and reinforce each other. These paths are 1. the development of epistemic virtues and 2. the experience of epistemic friction. Each path has an individual and collective component. The development of epistemic virtue (1) can arise both from a personal commitment and through engagement within a community that values and supports such development. The experience of epistemic friction (2) arises from individuals interacting through specific community practices and
through engaging with people whose experiences and viewpoints are different from their own. In what follows, we’ll offer three community practices that both develop particular epistemic virtues and produce epistemic friction. These three practices will highlight the ways in which the paths to alternative epistemic viewpoints can mutually reinforce one another.
Figure 9.1 Two paths to create the space for alternative epistemic viewpoints: 1. the development of epistemic virtues and 2. the experience of epistemic friction
The first practice is “world”-traveling, developed by Maria Lugones (2003). It is one strategy that can be an individual and a communal road to detecting one’s cognitive gaps by cultivating the epistemic virtue of open-mindedness. The challenge is that many of us live in experientially and ideologically homogeneous environments and thus need to find ways to be presented with viewpoints that differ from our own. You can think of this as getting out of your “bubble.” “World”-traveling can be an inroad to cultivating open-mindedness. It is a strategy that entails a person willfully shifting their perspective and, potentially, their physical location to move out of their comfort zone into the “world” of another. The “world” one travels to is one “that has to be inhabited at present by some flesh and blood people” (9), because it is too easy to craft imaginative worlds with people and situations that mirror one’s own perceptions.

“World”-traveling is an inroad to open-mindedness because it involves the intentional practice of considering other points of view that over time can lead to the habit of open-mindedness. One way to think about this differentiation is that in order to develop habits, we have to repeat intentional practices (e.g., “world”-traveling) in order for them to develop into habits (e.g., open-mindedness) that we do automatically. “World”-traveling is the intentional, but challenging, push that when repeated can get us in the habit of being open-minded. However, unlike many other intentional practices with much lower stakes, such as having a reminder on your phone to ensure that you get to class on time, “world”-traveling forces us out of our comfort zone by presenting other ways of being in the world, including the ways in which others live in oppressive situations.

This process can start through engaging the narratives and testimony of people whose lives and experiences differ from one’s own. One can think of this as studying up in order to come from a place of openness and exposure rather than from a place of ignorance. This is particularly important because the creation of epistemic friction puts a particular and likely unjust burden upon marginalized groups: it is their lives that are the subject of empowered groups’ ignorance, yet it is their experiences and knowledge that provide an alternative viewpoint to allow for epistemic friction. The epistemic reciprocity that is needed for this type of endeavor is the very reciprocity that is endangered by epistemic ignorance and oppression. Thus, we must be very careful and intentional. Once one has sufficiently worked to develop an informed and self-critical perspective, one can be in a position to engage across differences and build reciprocal relationships with others and thus more fully “world”-travel.

An example of “world”-traveling that can result in open-mindedness that one of the authors of this piece, Nancy McHugh, engages in is teaching philosophy classes in a men’s prison. She takes fifteen students from her university (referred to as outside students) to have class with fifteen men who are incarcerated (inside students) for a full semester. The class experience is as symmetrical as possible given that it is in a prison. All students
have the same assignments, are graded the same, and receive the same college credit for the course. Even though the course subject is not about incarceration, one of the things that happens is that the outside students learn to see through the experiences of the inside students in the class, and this reframes the way they understand incarceration and people who are incarcerated. For example, one outside student stated:

I expected to walk into the classroom and see nothing but criminals. … but I saw men who loved their families and wanted to further their educations … they were in no way what I had expected. So now I know that when I see prisoners being portrayed as nothing but hardcore criminals, I can tell people that’s not the case, they are people just like you and me locked away. They have good hearts, they are kind, and they don’t deserve to be stereotyped.

This type of shift in understanding was common among the outside students. These outside students “world”-traveled to prison for 15 weeks and meaningfully engaged “flesh and blood” people who have some experiences and viewpoints that differ from their own. Because of the consistency of this experience, the outside students experience epistemic friction that jars their conception of who is incarcerated and why they are incarcerated, thus moving them in the direction of developing the epistemic virtue of open-mindedness. Although this particular type of opportunity may not be available at all universities, most have similar opportunities for “world”-traveling, and students should take responsibility for seeking these out.

The second practice is progressive stacking (developed by organizers and activists). Progressive stacking is a community practice that centers the voices of those most marginalized by our current social and political structures. In an unmediated discussion, those with the loudest voices, those most comfortable with interrupting others, and those trained to take up air time and space are the people who get heard and whose ideas, suggestions, and perspectives are integrated into the group’s decisions. The basic practice of stacking involves identifying one person to “take stack,” which merely requires the individual to record the names of those individuals who wish to speak (perhaps indicated by a quick wave or raise of the hand). The individual responsible for stacking then moves through the list of those wanting to speak. Progressive stacking adds another layer to this process: rather than moving through the list of those who wish to contribute “in order,” the individual responsible for taking stack first calls on those who occupy identities that are relatively more marginalized by current power dynamics. Of course, the individual practice of identifying marginalized individuals will be difficult, and as such, communities need to be diligent about developing this skill (and doing so may be a part of being an epistemically responsible agent) and careful about whom they select for this difficult role. Because not every oppression can be identified by sight, this is most easily done in a group of people who know and trust one
another at least to some extent. This practice makes space for voices that are often unheard or silenced by our individual and collective habits of listening and engaging, habits often bolstered by our implicit biases.

This practice has many benefits; in particular, it supports and develops the virtue of epistemic humility and results in epistemic friction. Cultivating epistemic humility means cultivating the right sort of self-reflection such that individuals attend to cognitive limitations and deficits. Progressive stacking supports the individual development of epistemic humility by highlighting that one’s experiences may both limit and obscure one’s knowledge of the world. Epistemic friction arises from hearing the testimonies of others and from the individual’s attempts at integrating the testimony into their own worldview. Friction occurs in particular when something within the testimony is inconsistent with an individual’s attitudes or beliefs about the world. Because the practice is supported by the community, the individual is motivated to develop epistemic humility. Notice here that individuals will be more open to friction when they already practice epistemic humility and that the friction will lead them to further develop the virtue. The paths that lead to epistemic counterpoints are distinct, yet they interact and support one another. Also notice that this practice will be difficult without a commitment to epistemic humility. For example, when individuals who embody dominant identities are moved to the bottom of the stack, they may, without a commitment to epistemic humility, experience this as oppression or an act of silencing. Recognizing one’s own gaps is an integral part of being comfortable with progressive stacking. When introducing the practice, switching between stacking and progressive stacking allows for a balance of friction and development of epistemic humility.
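Because progressive stacking is, at bottom, a simple reordering procedure, a toy sketch can make its mechanics concrete. The sketch below is ours, not the organizers’; in particular, the numeric ranks standing in for relative marginalization are hypothetical placeholders for the contextual, trust-dependent judgment a real stack-keeper must exercise.

```python
# A toy sketch of plain stacking vs. progressive stacking.
# The numeric rank is a hypothetical stand-in for the contextual
# judgment described above; no formula could actually capture it.

def take_stack(requests):
    """Plain stacking: hear speakers in the order hands went up."""
    return list(requests)

def progressive_stack(requests, rank):
    """Progressive stacking: relatively more marginalized voices first.

    rank maps each name to a number; higher means more marginalized
    by current power dynamics (an assumption of this sketch). Python's
    sort is stable, so ties keep their original hand-raising order.
    """
    return sorted(requests, key=lambda name: -rank[name])

requests = ["Ana", "Ben", "Cam", "Dee"]            # order hands went up
rank = {"Ana": 1, "Ben": 0, "Cam": 2, "Dee": 2}    # invented values

print(take_stack(requests))               # ['Ana', 'Ben', 'Cam', 'Dee']
print(progressive_stack(requests, rank))  # ['Cam', 'Dee', 'Ana', 'Ben']
```

The sketch also shows why the authors counsel care in selecting a stack-keeper: everything of moral and epistemic significance lives in the judgment that the code black-boxes as its rank values.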
The third set of practices is calling-in and calling-out. Both involve an individual or group confronting another individual or group about something said or done. Typically, the motivation behind identifying a particular utterance or action is that the utterance or action perpetuates systems of oppression. For example, one might confront a friend for using the n-word or for telling a racist joke, or one might write an open letter expressing one’s intolerance of a company’s labor practices. The distinction between calling-in and calling-out has to do with the goal of the interaction as well as the style. The practice of calling-in is focused on building and expanding relationships, where calling-out need not have this focus (but may). Calling-in often works toward restoring community between all involved, whereas calling-out demands that the utterance or action be stopped or corrected. For example, if someone in a group of friends calls another individual “mixed” in conversation, that individual may pull the person aside and say, “I’m multiracial and would prefer if you didn’t use the term mixed. Here’s why …” In this case, an individual has called-in another individual. Now, imagine that a group has committed to using progressive stacking, and a person who embodies dominant identities continually interrupts others who have been called before them in the stack. Someone might say something like, “You are violating our shared agreements and taking up space at the expense of other voices with experiences we need to hear about. Please remain silent for the rest of the meeting.” This person has been called-out. As one can see, both practices are useful depending on one’s context and goals.

On its face, it may seem that calling-in is better than calling-out, but two worries arise around calling-in: one, appeals to call-in (rather than call-out) are sometimes used to control the ways in which marginalized individuals confront those in dominant positions (e.g. to keep individuals from getting “angry” and to police their tone, rather than responding to the substance of their claims) and, two, calling-in is sometimes used to protect and maintain dominant identities (e.g. those in dominant positions don’t want to be embarrassed or have their feelings hurt). Thus, it is particularly important for those embodying dominant identities to be open to being both called-in and called-out. In addition, there is often a tendency for white people to label any instance of feedback from people of color as “being called out.” This is tied to white fragility (DiAngelo 2018) and to ideas about people of color being dangerous or violent. Receiving feedback and being called-in or -out are all opportunities to learn about oneself and question one’s own ways of knowing in ways that lead to improved individual and collective epistemic practices.

The epistemic virtue that can either support or be developed through calling-in and calling-out is epistemic diligence. Epistemic diligence is the habit of responding to “epistemic challenges,” like calling-in and calling-out, by calling into question our other epistemic habits and responding positively to requests for self-critique and for more information or evidence (Medina 2013: 51). The practices of calling-in and calling-out, and the ways in which we respond to such calls (for example, not being defensive or shutting down), serve as the types of epistemic challenges required for the development of epistemic diligence. In addition, the practices give rise to epistemic friction in that they make one aware of the limits of one’s epistemic habits and access to the world. The mutually supporting friction and development of virtue allow these practices to play an important role in epistemic responsibility.

These strategies have many positive epistemic and moral benefits. They reduce the effects of implicit bias, specifically, for several interacting reasons. 1. They give rise to alternative epistemic viewpoints that make us aware of the cognitive gaps that arise due to our implicit biases. 2. One develops the epistemic habit of being self-critical and potentially self-correcting. 3. One is now enmeshed in a community which has also developed the habit of collective monitoring and, therefore, can point out not only individual cognitive gaps but is also potentially situated to recognize, point out, and act upon communal cognitive gaps. Thus, one is positioned to effectively practice epistemic responsibility. By practicing epistemic responsibility, we not only improve our epistemic practices, but we also can become effective moral agents in our community. In traditional moral responsibility debates, ignorance is an excusing condition, but what if you’re responsible for your
ignorance? On our recommendations, the importance of the condition-based approaches (focusing on knowledge and control) discussed in Section 1 is minimized because, by contrast, the epistemic practices have robust moral outcomes, ones that cannot be achieved by acting from one’s own moral island.
4 Conclusion: Know Better, Do Better

In this chapter we have argued that the epistemic responsibility framework that we offer leads us closer to achieving our moral and epistemic goals than the individual and condition-based notions of responsibility often discussed with respect to implicit bias. The robust framework of epistemic responsibility allows us to see that implicit bias is just one of the many things that influence and maintain devastating epistemic practices that lead to oppressive conditions and widespread harm along social identity lines. Specifically, we have discussed the cultivation of three epistemic virtues to fulfill our epistemic responsibilities: open-mindedness, epistemic humility, and epistemic diligence. We’ve shown that communities can support individual commitments to the development of these virtues and develop these virtues more fully as a collective by engaging in practices that lead to epistemic friction. With improved epistemic practices, we are able to know more about ourselves, our communities, and our world. As we become more epistemically responsible through these practices, we are able to shift outcomes in morally significant ways. When we know better, we are able to do better. It is essential that we address our patterns of attention and habits of mind—some of which are caused by our implicit biases—in order to achieve different outcomes and work collectively toward a better world. What we hope that you take from this chapter is not only the importance of being epistemically responsible subjects and the importance of cultivating epistemically responsible communities, but also that there are concrete strategies that you can employ to increase your efficacy as epistemically responsible agents.
SUGGESTIONS FOR FUTURE READING

For a more in-depth argument about why focusing on individual responsibility and blameworthiness for implicit bias leads us in the wrong direction, read:
•
Coates, Ta-Nehisi (2015) Between the World and Me. New York: Spiegel & Grau. In this autobiography Coates describes the ways in which racism is woven into the fabric of American life and the collective black struggle against oppressive social and political forces. Hinton, Anthony Ray (2018) The Sun Always Whines. New York: St. Martin’s Press. In this memoir Anthony Ray Hinton discusses what
social and epistemic factors led to his wrongful conviction and sentencing to death row, where he spent 30 years before he was eventually freed.

If you’d like to learn more about why marginalized voices are a good starting point for improving our epistemic practices, read:
• Collins, Patricia Hill (1990) Black Feminist Thought. New York: Routledge. Collins makes the lives of black women an important starting point for epistemic practices because their knowledge and experiences are outside of the mainstream and provide a way to see experiences that are systematically ignored.
Collins, Patricia Hill (1990) Black Feminist Thought. New York: Rou tledge. Collins makes the lives of black women an important starting point for epistemic practices because their knowledge and experiences are outside of the mainstream and provide a way to see experiences that are systematically ignored. Anzaldua, Gloria (2007) Borderlands/La Frontera (third edition). San Francisco, CA: Aunt Lute Books. Anzaldua was a Chicana lesbian who wrote about how living on the border of Mexico and the USA resulted in a particular way of seeing the world. She argued that developing a perspective from the “borders” helps one to see the oppression in ways that were inaccessible previously.
If you’re looking to explore epistemic resistance, that is, how we can select communal epistemic practices that actively work to dismantle ignorance, read:
•
•
Lugones, M. (2003) Pilgrimages = Peregrinajes: Theorizing Coalition Against Multiple Oppressions. Lanham, MD: Rowman & Littlefield. Lugones describes how she inhabits and travels between multiple worlds and perspectives in order to explain the complexity of social identity, and ultimately with the aim of building coalitions between marginalized groups. Medina, José (2013) The Epistemology of Resistance. New York: Oxford University Press. Medina explores epistemic oppression and offers strategies for meeting our collective epistemic responsibilities to resist ignorance and other pernicious cultural narratives. Sandoval, Chela (2000) The Methodology of the Oppressed. Minneapolis, MN: University of Minnesota Press. Sandoval looks at specific practices and frameworks that can help marginalized and privileged people learn to see and act in more critical ways.
If you are interested in learning more about epistemologies of ignorance, read:
• Mills, Charles (1997) The Racial Contract. Ithaca, NY: Cornell University Press. Mills develops the concept of epistemology of ignorance in this short and highly readable book.
• Sullivan, Shannon and Tuana, Nancy (eds) (2007) Race and Epistemologies of Ignorance. Albany, NY: State University of New York Press. This anthology holds many articles on a wide variety of issues related to epistemology of ignorance.
For more specific examples of community groups that engage in activism that works to dismantle ignorance and mitigate the harms of ignorance, read:
• McHugh, Nancy (2015) The Limits of Knowledge. Albany, NY: SUNY Press. Through case studies that include grassroots organizing, McHugh argues that we must build better collective practices to fulfill our obligations to marginalized communities.
DISCUSSION QUESTIONS

1 What is/are your social identity(ies)? What are the ways in which you see yourself to be socially advantaged and/or socially disadvantaged? How has this shaped your worldview?
2 What things do you know more about because of your social identities? What do you know less about? To start, think about the different knowledge someone who grew up in a city will have compared to someone who grew up in a rural area (and vice versa). What can we learn from asking similar kinds of questions about our knowledge with respect to other social identities such as race, gender, or ability? Do some social identities shape us more than others? If so, why?
3 Do you participate in any groups that engage in collective epistemic practices that lead to epistemic friction and support the development of epistemic virtues? What are the practices?
4 What practices could you engage in that would lead to epistemic friction and support the development of epistemic virtues? What steps would you need to take to engage in these practices?
5 Who benefits when communities remain ignorant of oppressive conditions? How do the communal epistemic practices work against the maintenance of ignorance?
6 “Ignorance is bliss” is a phrase often used to justify apathy or inaction. How are our communities harmed rather than helped by ignorance?
7 What is your initial reaction to progressive stacking? What are some of the benefits? What are some ways this strategy could backfire? What are some ways to prevent this backfire? How does progressive stacking push back against or support some of your beliefs about discussion and conversation?
8 Have you ever been called-in or called-out? How does it feel? How are our feelings in reaction to receiving feedback, especially from people of color, tied into racial embodiment and stereotypes (e.g. how is it connected to stereotypes about “angry black people”)?
9 What are some potential challenges with “world”-traveling, especially for those who embody white identities? Shannon Sullivan (2001) argues that white people embody an “ontological expansiveness” in which they assume they have a right to be in all spaces, places, and cultures. This expansiveness means that white people are often dangerous when they “world”-travel. How does this idea inform our practice of responsible “world”-traveling? What are some epistemic virtues that need to be developed before one “world”-travels?
10 The authors give three examples of community practices that lead to the development of epistemic virtues. What are some other practices that also develop these same virtues?
11 The authors claim that when we know better, we are able to do better. What are some times in your life when you were able to do better because you knew better?
12 The authors claim that “world”-traveling is a way to develop open-mindedness. What are some opportunities on your campus for “world”-traveling?
REFERENCES

Code, L. (1987) Epistemic Responsibility. Hanover, NH: University Press of New England.
Code, L. (2006) Ecological Thinking: The Politics of Epistemic Location. Oxford: Oxford University Press.
Devine, P.G., Forscher, P.S., Austin, A.J., and Cox, W.T.L. (2012) Long-term reduction in implicit race bias: A prejudice habit-breaking intervention. Journal of Experimental Social Psychology, 48(6): 1267–1278.
DiAngelo, R. (2018) White Fragility: Why It’s So Hard for White People to Talk About Racism. Boston, MA: Beacon Press.
Dotson, K. (2011) Tracking epistemic violence, tracking practices of silencing. Hypatia, 26(2): 236–257.
Frye, M. (1983) The Politics of Reality: Essays in Feminist Theory. Trumansburg, NY: Crossing Press.
Kawakami, K., Dovidio, J.F., and van Kamp, S. (2007) The impact of counterstereotypic training and related correction processes on the application of stereotypes. Group Processes and Intergroup Relations, 10(2): 139–156.
Kelly, D. and Roedder, E. (2008) Racial cognition and the ethics of implicit bias. Philosophy Compass, 3(3): 522–540.
Levy, N. (2017) Implicit bias and moral responsibility: Probing the data. Philosophy and Phenomenological Research, 94(1): 3–26.
Lugones, M. (1987) Playfulness, ‘world’-travelling, and loving perception. Hypatia, 2(2): 3–19.
Lugones, M. (2003) Pilgrimages = Peregrinajes: Theorizing Coalition Against Multiple Oppressions. Lanham, MD: Rowman & Littlefield.
Madva, A. (2017) Biased against debiasing: On the role of (institutionally sponsored) self-transformation in the struggle against prejudice. Ergo, 4(6).
Mallett, R.K., Wilson, T.D., and Gilbert, D.T. (2008) Expect the unexpected: Failure to anticipate similarities leads to an intergroup forecasting error. Journal of Personality and Social Psychology, 94(2): 265–277. https://doi.org/10.1037/0022-3514.94.2.265
Medina, J. (2013) The Epistemology of Resistance. New York: Oxford University Press.
Mills, C. (1997) The Racial Contract. Ithaca, NY: Cornell University Press.
Ortega, M. (2006) Being lovingly, knowingly ignorant: White feminism and women of color. Hypatia, 21(3): 56–74.
Sullivan, S. (2001) The racialization of space: Toward a phenomenological account of raced and anti-racist spatiality. In S. Martinot (ed.), The Problems of Resistance: Studies in Alternate Political Cultures (pp. 86–104). Atlantic Highlands, NJ: Prometheus/Humanity Books.
Sullivan, S. (2007) White ignorance and colonial oppression: Or, why I know so little about Puerto Rico. In S. Sullivan and N. Tuana (eds), Race and Epistemologies of Ignorance (pp. 153–172). Albany, NY: State University of New York Press.
Tuana, N. (2006) The speculum of ignorance: The women’s health movement and epistemologies of ignorance. Hypatia, 21(3): 1–19.
Washington, N. and Kelly, D. (2016) Who’s responsible for this? Moral responsibility, externalism, and knowledge about implicit bias. In J. Saul and M. Brownstein (eds), Implicit Bias and Philosophy: Volume 2: Moral Responsibility, Structural Injustice, and Ethics (pp. 11–36). New York: Oxford University Press.
Zheng, R. (2016) Attributability, accountability, and implicit bias. In J. Saul and M. Brownstein (eds), Implicit Bias and Philosophy: Volume 2: Moral Responsibility, Structural Injustice, and Ethics (pp. 62–89). New York: Oxford University Press.
10 The Specter of Normative Conflict: Does Fairness Require Inaccuracy?
Rima Basu
Pierce Hawthorne: We all know what we’re really thinking. If, and I mean ‘if’, the culprit is among us, statistically speaking it’s Troy.
(Community, “Cooperative Calligraphy”)
A challenge we face in a world that has been shaped by, and continues to be shaped by, racist attitudes and institutions is that the evidence is often stacked in favor of racist beliefs. As a result, we may find ourselves facing the following conflict: what if the evidence we have supports something we morally shouldn’t believe? For example, it is morally wrong to assume, solely on the basis of someone’s skin color, that they’re a staff member. But what if you’re in a context where, because of historical patterns of discrimination, someone’s skin color is a very good indicator that they’re a staff member? When this sort of normative conflict looms, a conflict between moral considerations on the one hand and what you epistemically ought to believe given the evidence on the other, what should we do? It might be unfair to assume that they’re a staff member, but to ignore the evidence would mean risking inaccurate beliefs. Some, notably Tamar Gendler (2011), have suggested that we simply face a tragic irresolvable dilemma. In this chapter, I consider how these cases of conflict arise and I canvass the viability of suggested resolutions of the conflict. In the end, I argue that there’s actually no conflict here: moral considerations can change how we epistemically should respond to the evidence.
1 Setting up a Conflict

Let’s start with an obvious point: we form beliefs about other people all the time. I believe that at a busy intersection no less than three drivers will turn left when the light turns red. Why? Because I see it happen all the time. On similar grounds I believe that when it rains half of my students won’t show up for class. Why? Because in my experience no one in Southern California, and I include myself in this generalization, knows how to deal with rain. We often don’t think twice about forming beliefs on the basis of these sorts of statistical regularities or stereotypes. But maybe we should.
Consider, for example, whether I should believe that a black man standing outside of a restaurant is a valet. What if it’s the case that every valet I have interacted with outside of this restaurant has been black? What if I’m in a rush and I just need to get my car and I know that 90 percent of valets at this restaurant are black? Although this seems like a classic case of racial profiling, the evidence seems really strong. As Barack Obama has noted in an interview with People magazine, “There’s no black male [his] age, who’s a professional, who hasn’t come out of a restaurant and is waiting for their car and somebody didn’t hand them their car keys.” I might get it wrong, but given the probabilities, I’m also very likely to get it right.

But here’s the challenge: unjust social structures of our world gerrymander the regularities and the evidence an individual is exposed to in ways that reinforce racist and sexist beliefs (see Munton 2019; Basu 2019). What if it is the case that because of historical patterns of discrimination the vast majority of valets working at this restaurant are black? In such a context, if I mistake Barack Obama for a valet, have I done anything wrong? In one sense, yes. My belief was inaccurate. Further, my belief was an instance of racial profiling, and racial profiling is generally considered to be morally problematic. However, in another sense, in believing as I did I was at least aiming at having an accurate belief. To introduce a technical term, it was epistemically rational for me to believe what I did.

Suppose, for example, that you check the weather forecast and see that there’s a 90 percent chance of rain tomorrow. Given that 90 percent chance, you should believe that it will rain tomorrow. Further, it is appropriate to criticize you for being epistemically irrational were you to not believe it’ll rain tomorrow, e.g., if you started to plan a picnic. By disregarding the likelihood of rain, you are doing something epistemically wrong. It is epistemically irrational and irresponsible to ignore evidence that bears on the question of what to believe.

If we return to the racial profiling case, however, we now find ourselves in the following bind. Believing in accordance with the evidence—the high likelihood that a black man standing outside the restaurant is a valet—may lead you to hold a belief that is morally problematic. After all, assuming that a black man is a valet is a paradigmatic example of a racist belief, and racist beliefs are morally impermissible. The world we live in is pretty racist, and so it shouldn’t be surprising if the world presents us with a lot of evidence for pretty racist beliefs.

Thus, the stage is now set for the conflict between accuracy and fairness that is the subject of this chapter. Any black male standing outside a restaurant who is mistaken for a valet seems to have the legitimate complaint, no matter how much evidence you had for your belief, that it is unfair of you to assume that they must be a valet. But now we must ask, why is it unfair? It’s at least partly unfair for reasons that Lawrence Blum has noted. Blum says, “being seen as an individual is an important form of acknowledgment of persons, failure of such acknowledgement is a moral fault and constitutes a bad of all stereotyping” (2004: 282; see also Lippert-Rasmussen 2011). Similarly, it’s also unfair because we wish to be related to as we are, not as
we are expected to be on the basis of race, gender, sexual orientation, class, etc. When someone forms beliefs about us in the same way we form beliefs about planets—that is, as objects to be observed and predicted—they fail to relate to us as persons (see Basu 2019). If you’ve ever been mistaken for waitstaff or “the help” because, as Pusha T says, your melanin’s got a tint, you recognize that feeling of unfairness. If anyone’s ever thought that you’re not interested in science because you’re a woman, or made you feel self-conscious about being sad and wanting to cry because you’re a man (“men don’t cry”), you know what it feels like to have assumptions made about you. Sometimes they’re innocuous or trivial and you don’t feel hurt or restricted by them, but often they’re not so trivial. Sometimes they really hurt. Although it is notoriously tricky to pin down exactly what is meant by this requirement to treat others as individuals and what precisely it is that makes stereotyping morally wrong (for more see Beeghly 2015; 2018), we recognize this feeling of being wronged, of being reduced, of being treated as an object, when we are stereotyped. It’s, simply put, not fair.

This unfairness of believing something of another person simply on the basis of statistical evidence is also recognized by our judicial system. You cannot convict someone solely on the basis of statistical evidence, no matter how strong that statistical evidence is. Within legal scholarship this is known as the problem of naked statistical evidence (see Schauer 2006; Enoch et al. 2012; Buchak 2014). The common example goes as follows: while you were driving, a bus sideswiped your car. In your town there are only two bus companies: the Blue Bus Company and the Green Bus Company. You want compensation for the damages, so you decide to sue the company operating the bus that sideswiped your car. Here’s the problem, though. It was late at night and you couldn’t accurately identify the color of the bus. But you also know that 80 percent of the buses in the city are blue buses operated by the Blue Bus Company, whereas the other 20 percent are green and operated by the Green Bus Company. Given the balance of the probabilities, you can be fairly confident that you were sideswiped by a blue bus.

In a civil court you only need to demonstrate a preponderance of evidence: that is, that it is more likely than not that you were hit by a blue bus. The good news is that the statistics are on your side! It is, after all, 80 percent likely that you were hit by a blue bus, and 80 percent is greater than 50 percent. The bad news, however, is that this merely statistical evidence is inadmissible in the courtroom. If there had been an eyewitness who could testify that they saw a blue bus sideswipe your car, then you’d stand a much better chance at winning your case, even if that eyewitness testimony was less reliable than the mere statistical evidence. Why? Again, the answer seems to return to these considerations of fairness. It would be unfair to convict the Blue Bus Company just because it’s statistically likely that if a car is sideswiped by a bus it would be sideswiped by one of their buses. It is unfair to hand your keys to a black man assuming that he’s a valet just because it’s statistically likely that he’s a valet.
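Since the pull of the Blue Bus case is explicitly probabilistic, it may help to run the numbers. Only the 80/20 split comes from the example above; the 70 percent witness reliability is our hypothetical addition, chosen to be lower than the bare statistic, and the sketch is merely illustrative.

```python
# Worked numbers for the Blue Bus case. Only the 80/20 split is from
# the text; the 0.7 witness reliability is a hypothetical assumption.

p_blue = 0.80                      # base rate of blue buses
p_green = 1 - p_blue               # 0.20

# Naked statistical evidence alone already clears the civil
# "preponderance of the evidence" threshold (more likely than not),
# yet courts exclude it.
print(p_blue > 0.50)               # True

# Now add an eyewitness who identifies bus colors correctly 70% of
# the time and who testifies that the bus was blue. By Bayes' rule:
r = 0.70                                           # witness reliability
p_says_blue = r * p_blue + (1 - r) * p_green       # 0.56 + 0.06 = 0.62
p_blue_given_testimony = r * p_blue / p_says_blue
print(round(p_blue_given_testimony, 3))            # 0.903
```

The arithmetic underlines the oddity the chapter is pointing at: the admissible 70-percent-reliable witness moves the probability only from 0.80 to roughly 0.90, so the law’s preference for testimony over the bare statistic is hard to explain in terms of accuracy alone. That is why considerations of fairness look like the better explanation.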
Although you can’t find the Blue Bus Company guilty of sideswiping your car, perhaps you still have enough evidence to believe that it was a blue bus that sideswiped your car. It might be unfair to the Blue Bus Company, but that’s where the preponderance of evidence lies. Similarly, the world isn’t a fair or just place; the world has been shaped by racism and other discriminatory practices. Perhaps there’s some truth to the stereotypes we hold, and so if we want to do what’s epistemically right, i.e., believe in accordance with the evidence, we just have to make this tradeoff with moral considerations like fairness. And this brings us right back to the conflict of this chapter: in the interests of accuracy, must we give up on fairness, or, in the interests of fairness, must we give up on accuracy?

To answer these questions concerning the seeming conflict between accuracy and fairness, we first need to get clear on what is even meant by the claim that you should believe in accordance with the evidence. A common person to turn to when it comes to explicating the duty to believe responsibly is W.K. Clifford. Clifford (1877) asks us to consider an old-timey ship owner whose ship is about to ferry families to their new home. The ship owner knows that the ship is old, has a few faults, and has been in need of a number of repairs. He’s thought to himself that he should have the ship overhauled and refitted, but that would be a costly venture. As he watches the ship leave the dock, he pushes his doubts aside by reminding himself that she’s still a hardy ship and she’s safely made and returned from a number of journeys. So, perhaps he has no reason to be worried that this time she might not safely come home. I’ll leave the last bit of the example in Clifford’s own words:

In such ways he acquired a sincere and comfortable conviction that his vessel was thoroughly safe and seaworthy; he watched her departure with a light heart, and benevolent wishes for the success of the exiles in their strange new home that was to be; and he got his insurance-money when she went down in mid-ocean and told no tales. (1)

Now, is the ship owner guilty of the deaths of the passengers on his ship? He did, after all, sincerely believe that his ship was seaworthy. Nonetheless, Clifford argues, the ship owner’s sincerity means absolutely nothing because he had no right to believe on the basis of the evidence he had (for more cases like this, see Siegel, Chapter 5, “Bias and Perception”). Instead of gathering evidence about the seaworthiness of his ship, he came to his belief by stifling his doubts. Furthermore, even if his luck (and the luck of the passengers) had been different and the ship had safely made the journey, his guilt would not be diminished one bit. Whether the belief turned out to be true or false has no bearing on whether the ship owner had a right to believe on the evidence that was before him. As Clifford famously remarks, “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”
An important part, then, of having epistemically rational beliefs is ensuring that your beliefs are held on the basis of sufficient evidence. It seems pretty intuitive to say that you should believe in accordance with the evidence, and we seem to hold people to this standard all the time. For example, suppose someone were to insist that there has been no foreign interference in our elections. We wouldn’t just take their word for it. What we’d want to know is what reasons they have for thinking that, i.e., what evidence they are basing their beliefs upon.

But, to now practice some of the pedantry well-known to philosophy: what is evidence? Or, to borrow a popular meme, we can present the question as depicted in Figure 10.1. This is a difficult question to answer, but for our purposes we can appeal to a standard intuitive conception of evidence and how it relates to the epistemic rationality of belief. Ordinarily when we think of evidence we think of examples from crime procedurals, e.g., gun residue, fingerprints, DNA from a strand of hair. These examples of evidence are physical objects we can put in a bag, label as evidence, send to a lab, and present in front of a judge or jury. But that’s not all that evidence is. What I hear, see, and smell can be evidence that bears on a question under investigation.
Figure 10.1 Is this evidence?
This question of how to understand the nature of evidence is a big one (see Kelly 2016 for a survey of the topic), but in what follows we will work with the following intuitive gloss: evidence for a question under investigation is a reliable sign, symptom, or mark for information relevant to the question under investigation. For example, smoke is a reliable sign of fire and as such it is evidence of fire. Similarly, my dog’s barking is evidence of someone at the door, a distinctive whistle from the kettle is evidence that the water has boiled, and the sight of rain outside my window is evidence that it is raining outside. Now we can fill out our meme as depicted in Figure 10.2.

Returning to our opening example involving racial profiling, the question we must ask is whether you or I have sufficient evidence upon which to believe that the black man standing outside the restaurant is a valet. We’ve already noted that the belief is morally problematic because it seems unfair. Nonetheless, it also seems like it’s the belief you ought to have given the evidence.
Figure 10.2 Is this evidence? Take two. Evidence is a reliable sign, symptom, or mark for information relevant to the question under investigation.
Notice that although Clifford states it as a moral imperative to believe in accordance with the evidence, moral considerations—such as whether the belief would be a racist belief—play no role in determining whether a belief is epistemically rational or not. Epistemic rationality is just a matter of the evidence you have and whether that evidence provides adequate justification for your beliefs. As Ben Shapiro might say, the facts don’t care about your feelings. Translating this into our, and Clifford’s, vocabulary, we might say that the evidence, or the reasons for believing, don’t care about your feelings.

If that’s the case, the question can be put like this: when we have conflicts between fairness considerations, on the one hand, and what you should believe based on your evidence, on the other, what should you do? Broadly speaking, the options you have available for answering this question range from just accepting that this is an irresolvable dilemma to saying that there’s no fact of the matter about what you should do and you should just choose fairness or accuracy. These options can be mapped as depicted in Figure 10.3. Going forward, I’ll begin by canvassing the various options—starting at the extremes with The Dilemmist vs. The Pluralist—in an attempt to present each camp warts and all so the reader can decide for themselves what they find most convincing.
Figure 10.3 The fairness and accuracy conflict. A (partial) map of analytical space.
But I also think there is a right answer. My preferred answer, moral encroachment, requires a shift in how we understand the demands of fairness and accuracy. According to moral encroachment, there is no dilemma here, but there is a fact of the matter about what you ought to do. That sounds odd, but I’ll try to show it’s the most promising answer to this apparent conflict between fairness and accuracy.
2 The Dilemmist vs. the Pluralist

Dilemmas are familiar from our everyday lives. You want a decadent brownie but you also want to eat healthy. You can’t do both. You promised a friend you’d meet them for lunch, but then another friend has a crisis just before lunchtime. You can either keep your promise to the first friend or break it to go support the other. You love both your children equally, but because Nazis are evil, they’re threatening to kill both your children unless you choose one, and only one, for them to kill. These examples escalate quickly, but you get the point.

The possibility of genuine normative dilemmas is most clearly seen when both obligations are of the same normative kind, e.g., two moral demands. To illustrate this, let’s consider Ginny. Ginny has promised Fred she’d see the new Marvel movie with him and only him (A), but she also promised George that she’d see the new DC movie with him and only him (B). Imagine that Ginny lives in a town with only one movie theater and, because of her busy schedule, she has been putting off her obligations until tonight, the last night that both movies are playing. Unfortunately, she can either keep her promise to Fred and do A, or keep her promise to George and do B. There is no way for her to do both. Ginny finds herself torn between two incompatible options—A and B—and asks, “What ought I do? A or B?” Let us now imagine that she shouts out this question to the universe and, while she’s shouting, a proponent of The Dilemmist position happens to be walking by. Noting Ginny’s dilemma, they helpfully answer, “Well, you ought to do A, and you ought to do B. Basically, you’re just stuck between a rock and a hard place.” We could reasonably expect Ginny to reply by reminding this stranger offering advice that she can’t do both: she wants to know which she ought to do. If they were again to simply reply that she ought do A, and she ought do B, that is no help.

Now, why should we think that there are genuine dilemmas of this sort? Partly, we can follow a line presented by Bernard Williams (1965): in the case of moral dilemmas you might think that whatever Ginny does, she’ll feel regret at not having done the other. If she breaks her promise to George and goes to the movies with Fred, she’ll feel the need to make it up to George. Seeing that both incompatible acts are required does justice to why we would feel this regret. This case of promising involves two moral obligations, but what about when moral and epistemic obligations, such as fairness and accuracy, seem to collide?
A classic defender of this dilemmist position is Tamar Gendler. She introduces the following case that mirrors the conflicts we’ve been discussing so far, and argues that the characters in the case (taken from John Hope Franklin’s autobiography) simply face an irresolvable dilemma.

Social Club. Agnes and Esther are members of a swanky D.C. social club with a strict dress code of tuxedos for both male guests and staff members, and dresses for female guests and staff members. They are about to go on their evening walk so they head towards the coat check to collect their coats. As they approach the coat check, Agnes looks around for a staff member. All of the club’s staff members are black, whereas only a small number of the club members are black. As Agnes looks around she notices a well-dressed black man standing off to the side and tells Esther, “There’s a staff member. We can give our coat check ticket to him.” (see Gendler 2011 for the original case)

Gendler, in a familiar refrain, argues that given that we live in a society structured by racial categories, we simply face a tragic irresolvable dilemma. We must either (a) lose out on knowledge and pay the epistemic cost of failing to attend to certain sorts of statistical information about cultural categories (i.e., failing to encode the information that a minuscule fraction of club members are black whereas all the staff members are black) or (b) actively believe against the evidence and work at regulating the inevitable implicit biases which that information gives us. In other words, we must choose between doing what we epistemically ought (attend and respond to the background statistical information about the race of the staff members) and what we morally ought (not use someone’s race to make assumptions about them, such as that they are staff). This, she argues, places us in a tragic irresolvable dilemma. We cannot simultaneously fulfill both our moral and epistemic obligations, and there is no way to resolve this conflict. Both options have major downsides.

A challenge for interpreting these cases of normative conflict as genuine dilemmas is that it can undermine movements against racism and implicit bias. As Jennifer Saul (2018) argues, if we were to suggest that opposition to racism (what is morally required) leads one into (epistemic) irrationality, then the consequence is that the person committed to anti-racism will be irrational. As Saul (2018, 238–9) goes on to note,

Although it is clearly not Gendler’s intent, this fits exceptionally well with the right-wing narratives of politically correct thought-police attempting to prevent people from facing up to difficult truths; and of the over-emotional left, which really needs to be corrected by the sound common sense of the right. Anything that props up these narratives runs the risk of working against the cause of social justice.
However, there may be ways to acknowledge that moral and epistemic obligations can conflict without suggesting that adopting the moral option means you’re being irrational. One such way, pluralism, starts from the observation that obligations of all different sorts are in conflict all the time. Maybe, then, there’s nothing special about moral-epistemic conflicts. According to pluralism there is a plurality of oughts, and from the perspective of each ought you simply ought do what it prescribes. If we return to the flowchart from Figure 10.3, the pluralist says that the conflict is resolvable, but not because any one consideration (always, or even ever) takes priority over another: each ought settles what you should do by its own lights, and there’s just no further fact of the matter about what you should do in these cases of conflict.

For example, consider the Knights Who Say Ni from Monty Python and the Holy Grail. To say that the Knights Who Say Ni have some weird norms would be an understatement. For those unfamiliar with this important cultural reference, one of the standards governing the Knights Who Say Ni is to shout “Ni!” until their demands are met, i.e., a gift of shrubbery. As a result, that you are a Knight Who Says Ni generates a reason for you to shout “Ni!” until you are gifted shrubbery. Despite the silliness of the example, we recognize that a lot of prescriptive norms are like this. Perhaps the rules of etiquette only generate reasons for you—a reason to take off your hat when indoors, a reason to not pick your nose at the dinner table, etc.—if you care about the rules of etiquette. Perhaps these standards of fairness and accuracy are in the same boat.

A consequence of this style of pluralism is that if any standard can generate reasons for you, then we will have an infinite number of reasons generated by an infinite number of standards and we will constantly be pulled in every direction. Although this initially sounds bad, it does seem to reflect how we feel a lot of the time. Consider, for example, the familiar saying that you can have it cheap, fast, or good: you can only pick two! Alternatively, you can be well-rested, have good grades, or have a social life, but not all three.

A challenge for the pluralist is that they need to explain why our intuitions in cases involving a small good in one domain at the expense of a large cost in another domain seem to suggest that there is something you all things considered ought to do. These cases are referred to as the argument from notable-nominal comparisons (see Chang 1997; Scanlon 1998; Parfit 2011). For example, if you are walking by a lake and see a drowning child, you could either save the child or not. From the perspective of self-interest, perhaps you shouldn’t save the child—after all, you would get your clothes wet, and you prefer having dry clothes. From the perspective of morality, you should rescue the drowning child. The pluralist in this case must simply say that what you morally ought do is rescue the child, but self-interestedly you ought not rescue the child. This seems like the wrong conclusion. What we want to say in this case is that you just ought to do what you morally ought to do.
Thus, we are pushed towards a different answer: the conflict is resolvable, so there must be some consideration that takes priority over the other considerations. So, let us now turn to those three options.
3 Moral Priority, Epistemic Priority, and the All Things Considered or Just Plain Ought

To get an intuitive grasp on the idea that there is an all things considered ought or just plain ought that can tell us what we should do when we’re in a dilemma or feel conflicted between competing options, consider the ought we deploy when we offer advice to one another. It is the kind of ought that would issue from a wise guru who has weighed all the relevant considerations. For example, in the movie adaptation of The Notebook, Noah (played by Ryan Gosling) asks Allie (played by Rachel McAdams) what she really wants. He doesn’t want to know what it is that everyone wants, what she thinks he wants, or what her parents want. Those wants are all relative-wants; they are wants relative to other people. Noah wants to know what Allie just plain wants. This example suggests that we can make sense of the idea of an all things considered ought, or just plain ought, and maybe determining what we all things considered ought to do will help us in these cases of normative conflict between fairness on the one hand and accuracy on the other.

Starting with the all things considered ought, one consideration in favor of such an account is that it offers us a unified and comprehensive answer. That is, the other oughts, such as the moral ought (as captured by considerations of fairness) and the epistemic ought (as captured by considerations of accuracy), are partial or incomplete collections of all the relevant considerations. The all things considered ought, as its name suggests, is comprehensive; it is based on all the considerations that weigh either in favor or against. We can contrast this with something like the moral priority or the epistemic priority view. According to those views, when Agnes (the swanky club member looking to drop off her coat) is deciding what to do or what to believe, either the moral considerations are more weighty and take priority over the epistemic considerations (moral priority, which means she should ignore the statistical fact that black people are almost entirely employees of the club, rather than members) or the opposite (epistemic priority, which means she should just go with the statistics and ignore the potential harms for black club members like John Hope Franklin when others assume that they’re employees).

A challenge for going either of these two routes would be explaining why one takes priority over the other. Furthermore, this challenge is particularly difficult for anyone who wants to defend epistemic priority, i.e., that considerations of accuracy take priority over considerations of fairness. To briefly expand on this challenge, it is simply not clear whether the epistemic ought is powerful enough to take priority over other considerations.
According to an influential argument from Mark Nelson (2010), we have no positive epistemic duties to believe: that is, we are never required to believe anything. Evidence might give us a reason to believe something, but to say that I have a duty to believe everything for which I have evidence leads to a conclusion we ought to reject: that I am required to believe an infinite number of things. For example, suppose you have evidence that supports believing p and you are thereby required to believe p. Following simple rules of logic, you now also have evidence that supports believing p or q, and so you’re similarly required to believe p or q; you likewise have evidence that supports believing p or q or r, and so on. There are now infinitely many things we are required to believe.
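Spelled out formally, the regress looks like this (the notation is mine, not Nelson’s; read \(E \models p\) as “evidence E supports p”):

```latex
% Nelson's regress made explicit (illustrative notation, not Nelson's own).
% Each step uses only disjunction introduction: from p, infer p or q.
\[
  E \models p
  \;\Longrightarrow\;
  E \models p \lor q
  \;\Longrightarrow\;
  E \models (p \lor q) \lor r
  \;\Longrightarrow\; \cdots
\]
% If sufficient evidence always generated a duty to believe, each link
% in this chain would add a new required belief: infinitely many in all.
```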
Returning to the all things considered view, we can instead envision some third perspective from which to adjudicate these competing demands and values. A benefit of the all things considered account is that it offers a kind of comprehensive value (see Chang 2003). Further, this more comprehensive value comes with a kind of special authority that the other oughts and considerations lack. For every consideration, whether it is morality, epistemic rationality, etiquette, prudence, or whatever, we can always ask, “Why should I be moral?”, “Why should I take my hat off when I enter a church?”, “Why should I care about self-interest?”, etc. The reason we can ask these questions of these oughts and considerations is that we are implicitly granting the authority of a more comprehensive ought, of a third perspective from which to answer these should-questions.

However you wish to cash out this idea of an all things considered ought, there are some reasons for skepticism. Here I will canvass just two such reasons. First, there might be no common scale for weighing these considerations—moral, epistemic, aesthetic, legal, etc. That is, there is no further normative perspective from which we can both ask and answer the question of how these reasons should be combined. To see why one might doubt that there is a common scale for weighing moral and epistemic considerations together, consider trying to compare different colleges to attend. There are many dimensions along which we can compare various colleges, e.g., average class size, professor-to-student ratio, sports teams, Greek life, or the best damn band in the land. But how do we determine which college is all things considered the best? Similarly, imagine trying to judge who is the all-things-considered best superhero (it’s Captain America, btw, but I’m willing to hear arguments in favor of Thor). If we could answer that question, why do such arguments get so heated? If there were an answer, there wouldn’t be reasonable disagreement. Maybe just as we can’t make sense of the all things considered best college or superhero, we similarly can’t make sense of the all things considered ought.

Second, and relatedly, you might worry that the all things considered or just plain ought doesn’t make sense because it requires there to be some standard that is the most normatively important standard; and however you cash out this idea, it turns out to be incoherent. David Copp (1997) argues that whatever way we try to explain the authority that this most normatively important standard has will force us to embrace a contradiction. For example, consider the standard S. Let us suppose that S is the most normatively important standpoint. But, if S is the most normatively important standpoint, then there must be some more authoritative standard, A, that tells us that S is the most normatively important standpoint. The challenge we now face is answering whether A is identical to S. If A is identical to S, then A cannot play the role of the more important normative standard that establishes the normative supremacy of S. That kind of self-endorsement is characteristic of all the normative standards we’ve been considering and as such it is unimpressive, e.g., morality tells you to listen to morality and self-interest tells you to listen to self-interest. So, A cannot be identical to S. But, were A to be a standard other than S, that is similarly unimpressive. What we want to know is whether S is the most normatively important standard full stop. But if A is more authoritative, then S is not the most important standard. So, Copp concludes, there is no coherent way to cash out the idea that there is a normatively most important standard that dictates what we ought, all things considered, do.

To see this in play in Gendler’s original example involving John Hope Franklin, suppose you think that morality is the most normatively important standpoint. From that it follows that considerations of fairness take priority over considerations of accuracy. However, what makes morality the most normatively important standpoint? The answer can’t be morality itself, because that’s an unimpressive kind of self-endorsement characteristic of all normative standards. So, perhaps there’s some more authoritative standard than morality, and that more authoritative standard, The Authority, tells us that morality is the most normatively important standard. But now it turns out that morality isn’t the most normatively important standard; The Authority is. And what makes The Authority the most normatively important standard—some other authority, The Meta-Authority?
4 Moral Encroachment

As I noted earlier, I don’t personally find any of the previous options fully convincing. What leads those options astray, I believe, is the very framing of the scenarios under consideration as ones involving a conflict between competing norms. Rather, what I believe the valet case and the John Hope Franklin case present us with is just a morally fraught example of a very common occurrence: a high-stakes situation. There are many situations in which we would characterize what we should believe or what we should do as being high stakes. For example, consider the following classic example. Suppose you have an important mortgage payment due on Monday.
As a millennial, the idea of a mortgage payment sounds like a luxurious expense I’ll never have the opportunity to experience, so let me update the example to one that I expect more readers will be able to relate to: imagine that you have a meal-kit delivery subscription that automatically withdraws a certain amount from your bank account every Monday. Now, let’s say that it’s Friday afternoon, you’ve just been paid, but for some reason you don’t have direct deposit set up. You have two options: you can either try to go to the bank on your way home on Friday or hope that the bank is open on Saturday and find some time to go then. Alternatively, you can deposit your check through the mobile-banking app on your phone. If you do the former, the money will be in your account right away (either on Friday or on Saturday); if you do the latter, you know the check probably won’t clear until Tuesday.

Now imagine a low-stakes case: you’ve got more than enough money in your bank account, so you don’t risk overdraft fees if you don’t deposit your paycheck before Monday. As you and your wife are on the bus headed home, you’re about to pass the bank and have to make a decision about whether to request the stop or stay on the bus to get home earlier. It’s been a long day, and since you’re in no rush to deposit your paycheck, you casually ask your wife, “Hey, do you know if the bank is open on Saturday?” She tells you that she thinks she’s been by the bank before on a Saturday and it was open. Now contrast that case with a high-stakes case. You will face overdraft fees on Monday if the check is not deposited before then. If you decide not to stop in at the bank on Friday, and you decide to believe that your wife knows the bank will be open on Saturday, and it turns out it’s not open … bad news bears.

Here’s the thing, though: in both the low-stakes case and the high-stakes case, your evidential situation is exactly the same: you are relying upon the testimony of your wife. If all that matters to epistemic rationality is believing on the basis of your evidence, well, you have the same evidence in both cases, so what you should believe in both cases should be identical. Our intuitions, however, differ. In the low-stakes case it seems that your wife’s testimony is sufficient evidence to be justified in believing that the bank will be open on Saturday, but in the high-stakes case that very same evidence no longer seems sufficient. After all, if she’s wrong then you risk overdraft fees.

These sorts of examples have been used to argue for pragmatic encroachment, the idea that practical features of our situation can make a difference to (or “encroach” upon) whether we’re in a position to know or whether we’re in a position to believe on the basis of the evidence we have available to us (Fantl and McGrath 2002; 2009; Stanley 2005; Schroeder 2012). However, recall Clifford’s claim that “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.” What the pragmatic encroachers add to this observation from Clifford is that what counts as sufficient or insufficient evidence can vary according to the practical stakes of the belief.
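One way to picture what the encroachers are claiming, though this is an illustrative formalization of mine rather than any encroacher’s official statement, is to hold the evidential support fixed and let the threshold for justified belief move with the stakes:

```latex
% An illustrative threshold model of encroachment (my gloss, with
% made-up numbers; not the encroachers' official formulation).
% E = your evidence; p = "the bank is open on Saturday."
\[
  \text{You are justified in believing } p
  \iff
  \Pr(p \mid E) \;\ge\; t(\text{stakes})
\]
% Your wife's testimony fixes the same support in both cases, say:
\[
  \Pr(p \mid E) = 0.9,
  \qquad
  t(\text{low stakes}) = 0.85,
  \qquad
  t(\text{high stakes}) = 0.99
\]
% Same evidence, different verdicts: 0.9 clears the low-stakes threshold
% but not the high-stakes one.
```

On this picture, nothing about the evidence itself changes between the two cases; only the threshold moves. Moral encroachment, as discussed below, extends the idea by letting moral risk, and not just practical risk, raise the threshold.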
If we return now to the cases of the mistaken valet and John Hope Franklin, I believe these cases are similar to the cases that motivate pragmatic encroachment (and I’m not alone; see also Code 1987; Pace 2011; Moss 2018a; 2018b; Basu and Schroeder 2019; Fritz 2017; Bolinger 2018; Basu 2019). The moral risks present in the cases, i.e., the moral consideration that our belief that John Hope Franklin is a staff member would be unfair to John Hope Franklin for reasons previously discussed, make the cases high-stakes cases. Moral encroachment, then, understood as the thesis that morally risky beliefs raise the threshold for (or “encroach” upon) justification, makes sense of the intuitive thought that racist beliefs require more evidence and more justification than other beliefs.

What distinguishes this approach from others considered so far is that it provides an alternative route to simply throwing our hands up in the air and proclaiming these cases to be dilemmas. It also avoids the problems associated with moral and epistemic priority, because on this view neither consideration takes priority; rather, both considerations work together to determine what you should believe and what you should do. The moral considerations raise the epistemic standards in these high-stakes cases. We can preserve the thought that the facts don’t care about your (or other people’s) feelings while also recognizing that whether or not you are justified in believing on the basis of the evidence available to you is a question that is sensitive to non-factual or non-evidential considerations. Whether you have enough evidence to believe varies according to the stakes.

Returning to the courtroom, we see this intuitive thought in play. Criminal cases are high-stakes cases; that is why the standard the evidence must meet is higher than in civil cases. In a criminal case you must prove beyond reasonable doubt that the defendant committed the crime; in a civil case the standard is considerably lower. Similarly, if you are going to believe that someone is a valet on the basis of their skin color, although that might give you a lot of evidence, it’s not enough evidence to make the belief justified. Given the high moral stakes, you must look for more evidence. (For more discussion of this point, see Beeghly, Chapter 4, “Bias and Knowledge: Two Metaphors”; Siegel, Chapter 5, “Bias and Perception”.)

Despite my preference for moral encroachment, this view also faces some challenges. First, moral encroachment risks compounding and contributing to the unfairness that John Hope Franklin, and other folks who are constantly mistaken for staff or “the help,” experience. Moral encroachment recommends that, when it comes to beliefs about black men and other non-dominantly situated groups, the epistemic situations are high stakes. As a result, when it comes to forming beliefs about dominantly situated folks, our epistemic situations will be more free and less burdensome because we won’t constantly be walking on epistemic eggshells. This, however, also seems unfair. To answer this challenge, I think it’s a mistake to say that moral considerations only make it harder to believe; sometimes moral considerations might also make it easier to believe. For example, if we live in a world in which women’s testimony, and the testimony of victims of sexual assault more generally (regardless of the gender or sex of the victim), is routinely discounted, then perhaps we have a moral burden to lower our evidential standards for believing the victim (see Fricker 2007 for more on these forms of testimonial injustice).
However, if we go this route, this opens us up to more challenges: how do we determine when moral considerations cause the threshold to go up and when they cause the threshold to go down? Also, we shouldn’t downplay the worry that, if we allow considerations of morality and knowledge to encroach upon each other, our standards of justification might be easily manipulated.

Related to this epistemic-eggshell worry is a worry about the demandingness of moral encroachment. If one will fail to be justified in virtue of failing to appreciate the burden and risks that they impose on another, then almost all beliefs about other people—especially any belief about a person on the basis of their race or another protected class—are going to be high stakes. Moral encroachment, it seems then, is demanding. It requires moral agents to be fairly sophisticated in recognizing when they should occupy this kind of moral standpoint. However, as I’ve suggested elsewhere (Basu 2019), the moral encroacher should just bite this bullet. Morality is demanding. It should not be surprising, then, that a moral constraint on our epistemic practices would be similarly demanding. In our everyday lives and our day-to-day beliefs, we may often fall short of the moral and epistemic ideal. The ideal, however, exists as a standard we ought to strive to meet nonetheless. Similarly, consider Clifford’s response to the objector who says that he’s a busy man and can’t possibly be expected to take the time and effort to make sure he never believes on the basis of insufficient evidence. To such a character, Clifford simply offers the following rebuke: “then he should have no time to believe.”

This list of objections is not exhaustive (see Gardiner 2018 for more). Nonetheless, I sincerely believe that moral encroachment offers the best analysis of the cases in which fairness and accuracy seem to conflict. To finally answer the question contained in the title: fairness does not require inaccuracy. Nor does our desire for accurate beliefs require that we disregard considerations of fairness. Community’s Pierce Hawthorne isn’t a righteous jerk who at least is aiming for accurate beliefs when he accuses Troy of stealing Annie’s pen because Troy (a young black man) is the statistically likeliest candidate. Pierce Hawthorne isn’t exhibiting any epistemic virtues. He’s just being a jerk.
SUGGESTIONS FOR FUTURE READING

For more on the ir/rationality of racist beliefs:

• Basu, R. (2019) The wrongs of racist beliefs. Philosophical Studies, 176(9): 2497–2515. https://doi.org/10.1007/s11098-018-1137-0
• Bolinger, R. (2018) The rational impermissibility of accepting (some) racial generalizations. Synthese. https://doi.org/10.1007/s11229-018-1809-5
• Gendler, T. (2011) On the epistemic costs of implicit bias. Philosophical Studies, 156(1): 33–63.
• Munton, J. (2019) Beyond accuracy: Epistemic flaws with statistical generalizations. Philosophical Issues, 29(1): 228–240. https://doi.org/10.1111/phis.12150
• Silva, P. (2018) A Bayesian explanation of the irrationality of sexist and racist beliefs involving generic content. Synthese. https://doi.org/10.1007/s11229-018-1813-9
• Tetlock, P.E., Kristel, O.V., Elson, S.B., Green, M.C., and Lerner, J.S. (2000) The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78(5): 853–870.

For more on stereotyping and generalizations:
• Beeghly, E. (2015) What is a stereotype? What is stereotyping? Hypatia, 30(3): 675–691.
• Beeghly, E. (2018) Failing to treat persons as individuals. Ergo, 5(6): 687–711.
• Blum, L. (2004) Stereotypes and stereotyping: A moral analysis. Philosophical Papers, 33(3): 251–289.
• Leslie, S.-J. (2017) The original sin of cognition: Fear, prejudice, and generalization. Journal of Philosophy, 114(8): 393–421.

For more on moral encroachment:
• Basu, R. (2019) Radical moral encroachment: The moral stakes of racist beliefs. Philosophical Issues, 29(1): 9–23. https://doi.org/10.1111/phis.12137
• Fritz, J. (2017) Pragmatic encroachment and moral encroachment. Pacific Philosophical Quarterly, 98(S1): 643–661.
• Gardiner, G. (2018) Evidentialism and moral encroachment. In K. McCain (ed.), Believing in Accordance with the Evidence: New Essays on Evidentialism (pp. 169–195). Cham, Switzerland: Springer.
• Moss, S. (2018a) Probabilistic Knowledge. New York: Oxford University Press.
• Moss, S. (2018b) IX—Moral encroachment. Proceedings of the Aristotelian Society, 118(2): 177–205.
• Pace, M. (2011) The epistemic value of moral considerations: Justification, moral encroachment, and James’ ‘Will to Believe’. Nous, 45(2): 239–268.

For some alternative views that weren’t canvassed in this chapter:
• Madva, A. (2016) Virtue, social knowledge, and implicit bias. In J. Saul and M. Brownstein (eds), Implicit Bias and Philosophy, Volume 1 (pp. 191–215). New York: Oxford University Press.
• Puddifoot, K. (2017) Dissolving the epistemic/ethical dilemma over implicit bias. Philosophical Explorations, 20(sup1): 73–93.
DISCUSSION QUESTIONS

1 Illustrate a normative conflict with a dilemma that you’ve faced in your life. How did you try to solve it?
2 What is evidence?
3 Why does Tamar Gendler think that the case of John Hope Franklin is a tragic irresolvable dilemma?
4 The variation of the case from Tamar Gendler (2011), Social Club, is designed to set up a conflict between our moral obligations on the one hand and our epistemic obligations on the other. How convincing do you find the suggestion that these conflicts happen? Do you think these conflicts actually happen? Can you think of other ways of describing the case in which it isn’t a dilemma? How might we try to resist framing the example as an example of a dilemma?
5 Which account offered do you find most persuasive? What are your reasons for preferring this account, and how would you respond to the problems that are raised for your preferred account?
6 What are some of the reasons for skepticism offered against the account of an all things considered ought? Do you find the reasons for skepticism convincing?
7 According to the pluralist, any coherent standard is a reason-generating normative domain. Some argue that a standard is only reason-generating for you if you are partisan to that standard, e.g., you only have a reason to shout “Ni!” until your demands are met if you are a member of the Knights Who Say Ni. Extending this to morality, you might think that you only have reasons to be moral if you care about morality. What do you think of this extension? Do you find this a plausible account of moral reasons? How might someone try to resist this line of reasoning?
8 What are some examples of low-stakes cases, where practical or moral considerations don’t factor into or “encroach” upon what we should believe and do, and high-stakes cases, where the practical and moral considerations really matter?
9 One major challenge for moral encroachment is that it seems to require too much, that is, it is too demanding, because so many of our beliefs should factor in practical and moral considerations. Can you think of other challenges one might raise to moral encroachment?
10 In the opening epigraph, is Pierce Hawthorne really just saying what we’re all thinking? He might be a jerk, but is there anything he gets right when he’s at least aiming for accurate beliefs when he accuses Troy of stealing Annie’s pen because Troy (a young black man) is the statistically likeliest candidate?
REFERENCES

Basu, R. (2019) The wrongs of racist beliefs. Philosophical Studies, 176(9): 2497–2515. https://doi.org/10.1007/s11098-018-1137-0
Basu, R. (2019) What we epistemically owe to each other. Philosophical Studies, 176(4): 915–931.
Basu, R. (2019) Radical moral encroachment: The moral stakes of racist beliefs. Philosophical Issues, 29(1): 9–23. https://doi.org/10.1111/phis.12137
Basu, R. and Schroeder, M. (2019) Doxastic wronging. In B. Kim and M. McGrath (eds), Pragmatic Encroachment in Epistemology (pp. 181–205). New York: Routledge.
Beeghly, E. (2015) What is a stereotype? What is stereotyping? Hypatia, 30(3): 675–691.
Beeghly, E. (2018) Failing to treat persons as individuals. Ergo, 5(6): 687–711.
Blum, L. (2004) Stereotypes and stereotyping: A moral analysis. Philosophical Papers, 33(3): 251–289.
Bolinger, R. (2018) The rational impermissibility of accepting (some) racial generalizations. Synthese. https://doi.org/10.1007/s11229-018-1809-5
Buchak, L. (2014) Belief, credence and norms. Philosophical Studies, 169: 285–311.
Chang, R. (1997) Incommensurability, Incompatibility and Practical Reason. Cambridge, MA: Harvard University Press.
Chang, R. (2003) All things considered. Philosophical Perspectives, 18(1): 1–22.
Clifford, W.K. (1877/1999) The ethics of belief. In T. Madigan (ed.), The Ethics of Belief and Other Essays (pp. 70–96). Amherst, MA: Prometheus. http://people.brandeis.edu/~teuber/Clifford_ethics.pdf
Code, L. (1987) Epistemic Responsibility. Providence, RI: Brown University Press.
Copp, D. (1997) The Ring of Gyges: Overridingness and the unity of reason. Social Philosophy and Policy, 14(1): 86–101.
Enoch, D., Spectre, L., and Fisher, T. (2012) Statistical evidence, sensitivity, and the legal value of knowledge. Philosophy and Public Affairs, 40(3). https://doi.org/10.1111/papa.12000
Fantl, J. and McGrath, M. (2002) Evidence, pragmatics, and justification. The Philosophical Review, 111(1): 67–94.
Fantl, J. and McGrath, M. (2009) Knowledge in an Uncertain World. New York: Oxford University Press.
Fricker, M. (2007) Epistemic Injustice: Power and the Ethics of Knowing. New York: Oxford University Press.
Fritz, J. (2017) Pragmatic encroachment and moral encroachment. Pacific Philosophical Quarterly, 98(S1): 643–661.
Gardiner, G. (2018) Evidentialism and moral encroachment. In K. McCain (ed.), Believing in Accordance with the Evidence: New Essays on Evidentialism (pp. 169–195). Cham, Switzerland: Springer.
Gendler, T. (2011) On the epistemic costs of implicit bias. Philosophical Studies, 156(1): 33–63.
Kelly, T. (2016) Evidence. The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/fall2014/entries/evidence/
Lippert-Rasmussen, K. (2011) ‘We are all different’: Statistical discrimination and the right to be treated as an individual. Journal of Ethics, 15(1): 47–59.
Moss, S. (2018a) Probabilistic Knowledge. New York: Oxford University Press.
Moss, S. (2018b) IX—Moral encroachment. Proceedings of the Aristotelian Society, 118(2): 177–205.
Munton, J. (2019) Perceptual skill and social structure. Philosophy and Phenomenological Research, 99(1): 131–161. https://doi.org/10.1111/phpr.12478
Nelson, M. (2010) We have no positive epistemic duties. Mind, 119(473): 83–102.
Pace, M. (2011) The epistemic value of moral considerations: Justification, moral encroachment, and James’ ‘Will to Believe’. Nous, 45(2): 239–268.
Parfit, D. (2011) On What Matters. New York: Oxford University Press.
Saul, J. (2018) (How) should we tell implicit bias stories? Disputatio, 10: 217–244. https://doi.org/10.2478/disp-2018-0014
Scanlon, T. (1998) What We Owe To Each Other. Cambridge, MA: Harvard University Press.
Schauer, F. (2006) Profiles, Probabilities, and Stereotypes. Cambridge, MA: Harvard University Press.
Schroeder, M. (2012) Stakes, withholding, and pragmatic encroachment on knowledge. Philosophical Studies, 160(2): 265–285.
Stanley, J. (2005) Knowledge and Practical Interests. New York: Oxford University Press.
Williams, B.A.O. (1965) Ethical consistency. Proceedings of the Aristotelian Society, Supplementary Volumes, 39: 103–124.
11 Explaining Injustice: Structural Analysis, Bias, and Individuals

Saray Ayala-López and Erin Beeghly
Why does social injustice exist? What role, if any, do implicit biases play in the perpetuation of social inequalities? Individualistic approaches to these questions explain social injustice as the result of individuals’ preferences, beliefs, and choices. For example, they explain racial injustice as the result of individuals acting on racial stereotypes and prejudices. In contrast, structural approaches explain social injustice in terms of beyond-the-individual features, including laws, institutions, city layouts, and social norms. Often these two approaches are seen as competitors. Framing them as competitors suggests that only one approach can win and that the loser offers worse explanations of injustice. In this chapter, we explore each approach and compare them. Using implicit bias as an example, we argue that the relationship between individualistic and structural approaches is more complicated than it may first seem. Moreover, we contend that each approach has its place in analyses of injustice and raise the possibility that they can work together—synergistically—to produce deeper explanations of social injustice. If so, the approaches may be complementary, rather than competing.
1 Individuals and the Social, in Broad Strokes

To illustrate the individualistic and structural approaches and how they differ, we’ll start with two examples.

Lisa quits her job (adapted from Cudd 2006, discussed in Haslanger 2015)

Lisa is a middle-class woman in a heterosexual monogamous relationship with Larry. They live in a community with expensive childcare and a gender wage gap (i.e., men tend to be paid more than women, in some cases even when they are doing the same work in the same jobs). When they have a baby, Lisa quits her full-time job. One way to make sense of this outcome is to say that there is something about Lisa that makes her quit. For example, it could be that Lisa prefers to take care of the baby full time, or that she is determined to exclusively breastfeed and that requires staying at home. Perhaps Lisa even had a “transformative experience” (Paul 2015).
Before becoming a parent, she might have valued her job and planned to keep it. However, maybe the experience of holding a baby in her arms and being the main caretaker for that small being has given her new knowledge about herself and what she really wants. The experience has changed her, let’s suppose, to such a great extent that she no longer cares that much about her job and prefers to quit.

Another way to explain why Lisa quits her job is to look at the social system of which Lisa is a part, and understand the outcome as the result of the constraints this system imposes on Lisa. For example, in her society, being a woman positions her as someone with a lower salary compared to her male partner. Besides that, there is no affordable childcare, and babies cannot take care of themselves. All this imposes constraints on what Lisa can do: she cannot keep her job, have her partner quit his to care for the baby, and at the same time keep the most important part of the family income.

Pau tries to communicate their gender identity

Pau is trying to communicate their experiences to friends, in particular, their not identifying as either a woman or a man. Pau says things like “I don’t feel comfortable in public restrooms, I wish there was a non-gendered one I could use.” Their friends take Pau to be confused. They say, “Pau, you are making no sense, maybe you are homosexual, maybe that’s it, but you have to be either a woman or a man.” If we overheard this conversation, we might ask, “What’s gone wrong?” One way to make sense of the problem appeals to the friends’ beliefs and values. Perhaps Pau’s friends are prejudiced against agender people, or trans people more generally, so they interpret Pau’s statements as expressing confusion. Perhaps their binary assumptions prevent them from understanding what Pau has to say, namely, that their gender identity is non-binary.

A second way to understand what’s happening appeals to the wider social environment in which the conversation takes place. Suppose the exchange occurred at a dinner party in the 1990s in Barcelona. The right concepts for interpreting Pau’s experience may not have been available at that time. Though the gay liberation movement had been ongoing for decades and everyone knew what it meant to be “gay” or “lesbian” or “bisexual,” the concept of being “non-binary” was not in widespread use. The concept was missing, in part, because there was no socially acknowledged place to exist outside of the gender binary. Even the concept of “transgender” was largely defined to fit within a binary frame until recently (Stryker 2008). If so, the reason for the distorted interpretations of Pau’s friends is not in their minds, but outside: it’s a feature of the social milieu they inhabit.

These contrasting ways of analyzing Lisa and Pau’s situations offer two different pictures of society, and two different approaches to social justice. In the individualistic picture, we have individuals acting and constraining each other’s actions.
In the structural picture, there are other elements such as institutions, laws, social norms, shared concepts like being agender, and material features of environments (e.g., the layout of cities, systems of public transportation or health care). Such beyond-the-individual elements are loosely referred to as structural factors. In the structural picture, we look at individuals through a wider lens. Individuals are understood as situated in networks of relationships within an organized larger whole, i.e., a structure. In particular, structural analysis reveals how particular individuals are positioned in that structure, which we’ll call, following Sally Haslanger (2016), a “node.” Picture a web of social relations where each node is a type of person (see Figure 11.1). Individuals like Lisa and Pau, as members of different social types (e.g., woman, agender), occupy different nodes and, therefore, have different social positions and social roles. When trying to understand something about an individual, the structural approach asks us to look at the node someone occupies, how that node is connected to other nodes, as well as features of the system as a whole. Using a structural lens, we see how the behavior of any part depends on its interactions with other parts, and is constrained by the state of the whole. People occupy structural nodes corresponding to their social categories (e.g., sex, race, class, gender identity, national origin). Zach, who sleeps on a sidewalk on a piece of cardboard, occupies a node that is constituted by at least the following dimensions: homeless person, man, white, citizen of the country he lives in.
Figure 11.1 A social structure, depicted visually. Each node (black dot) corresponds to a social position, and the lines represent relations between nodes.
That he occupies this node, and that this node is defined along all those different dimensions, is going to affect how he navigates the city, which opportunities for action are and are not available to him, and how others treat him. For example, as a white man and citizen, he has in principle a significant amount of social power. However, as a homeless person, he will lack credibility, be denied opportunities like the ability to use restrooms in cafés, and be perceived in stigmatizing ways, for example, as dangerous. Dimensions of ability, sexual orientation, and gender identity have similar effects on how they position individuals in the social structure and, therefore, on how individuals are treated and what they can do.

The structural picture reveals forms of injustice that might escape the individualistic lens. Think about Lisa’s decision to quit her job. Suppose we explain that decision as a result of her beliefs and preferences. Nothing there seems to ring the “injustice alarm.” The structural picture highlights, however, that there is more going on. It’s not just Lisa’s beliefs or desires that cause her to quit. Her personal transformation may be no accident (Barnes 2015). Factors surrounding her invite a radical shift in her preferences, making quitting her job after having a baby the most rational decision for her. What rings the injustice alarm is that for Lisa (and many middle-class, married women in similar positions), the rational decision is one that keeps them subordinated, for example, by rendering them economically vulnerable and jeopardizing their careers. The structural explanation captures the system of factors affecting the vulnerable social positions women occupy. It also helps us appreciate that, independently of their personal beliefs and preferences (which may vary a great deal from one person to another), people situated in similar positions, and therefore with similar opportunities and constraints, tend to act in similar ways.

Structural analysis is revealing in a second, complementary way as well. Think about the example of Lisa and Larry. Cases like this one have played a prominent role in the history of feminism, and they have serious limitations. As bell hooks points out: “While this issue [of being subordinated in the home as housewives] was presented as a crisis for women, it really was only a crisis for a small group of well-educated white women” (2015a: 38; 2015b: 92). Her point is crucial. Working-class women—many of whom are women of color, some of whom may also be undocumented—may not even have paid maternity leave. For these women, work does not provide “freedom” or “economic security”; they are stuck in exploitative low-paying jobs. The choice to stay home with their children would be perceived as the opposite of oppression. It would be a treasured kind of liberty. A structural approach helps us see this. It calls attention to the fact that women who are positioned differently than Lisa—especially in terms of their socio-economic status and race—may be constrained in ways that may or may not overlap with economically privileged women like her (see Madva, Chapter 12, “Individual and Structural Interventions,” for further discussion).

There are at least two ways to interpret individualistic and structural approaches to social injustice. First, we can treat them as different metaphysical stories about the constitution of society.
In the individualistic picture, society and social processes are composed of nothing but individuals and their interactions. This is called ontological individualism. The structural picture goes beyond individuals and adds social structures and elements to the composition of society. Second, we can treat individualistic and structural approaches as offering two different ways to explain what’s going on in society. Whereas individualistic explanations analyze social processes in terms of interactions among individuals, structural explanations ask us to take seriously the role of groups in the production of social outcomes. They also adopt a more holistic frame, analyzing society as an interconnected system. Taken in the explanatory sense, the structural picture does not have to worry about questions concerning the metaphysical status of social structural factors. The role of structural factors is (merely) explanatory.

Though it may be tempting to portray structural and individualistic approaches as mutually exclusive, doing so distorts the debate. Proponents of each side sometimes characterize the opposing position in an overly simplistic way, turning it into a straw person. Real-world straw persons—namely, scarecrows—are inadequate copies of the real thing. Similarly, when someone’s portrayal of their opponents’ views or arguments is described as a “straw person,” it means that the portrayal is an inadequate copy of the real argument, and does not represent the strongest and most plausible version of the opponents’ position. Accordingly, if we said that individualistic approaches explain social injustice by exclusively appealing to individuals’ beliefs or preferences and how individuals interact with one another, while an advocate of the structural approach argues that only structural factors matter, it would be easy to defeat either of these extreme positions. In reality, things are richer and more complicated.
2 Implicit Bias and Social Structures: How They Might Relate

To see the complexity, consider the nature of implicit bias. At first, it might seem as if the existence of implicit bias gives straightforward support to a strictly individualistic analysis of injustice. Implicit biases are typically thought to reside “inside our heads.” Many are associated with stereotypes. To have a stereotype, psychologists argue, is to have a set of beliefs or associations with a social group. Consider Pau’s friends. They split gender into two and only two categories, “man” and “woman,” and possess a set of associated gender stereotypes. If so, features of their psychology cause them to act unfairly; hence, it would seem, the primary source of societal unfairness associated with bias resides inside people’s heads. Taking implicit bias seriously does not require such individualistic assumptions, however.

2.1 Bias as Internalized Social Structure

Biases enjoy a public existence. Cultural stereotypes, for example, exist as controlling images or ideas in wider society (Collins 2000).
Consider the image of a young mother breastfeeding her baby, gazing at the child with complete and utter devotion. The image conveys a message: her baby is all she needs, and it completely fulfills her. Stereotypes such as this are found in novels, movies, online articles, in the jokes and stories that people tell, and in the worksheets that children bring home from school. Intuitively, such images are structural for an obvious reason: they are part of the beyond-the-individual factors that need to be analyzed in order to understand the social world. Yet they may be structural in a more specific way as well. Return to the picture of a social structure depicted in Figure 11.1; the lines connecting the nodes represent social relations. To the extent that stereotypes and other social biases make social relations what they are, they partially constitute these relations. For example, Lisa and Larry’s relationship is mediated by gender norms; their relationship gets its particular nature, in part, from them. These norms and images are called “controlling” because they—as social structures do—play a role in influencing what individuals can and cannot do, as well as what they think, feel, hope, and expect from each other and themselves.

Social structures have this power to shape individuals’ lives, in part, because individuals internalize them. Think about Pau’s friends. “You are making no sense,” they say, “maybe you are just a homosexual, but you have to be either a man or woman.” Pau’s friends say this because they have absorbed controlling images and ideas that exist in their wider social milieu. Hence, we can think of their biases—whether they qualify as implicit or explicit—as a way in which the social structure manifests in them (Zheng 2018). Something similar might be true of Lisa. Perhaps she quits work, to some extent, because controlling images of motherhood resonate with her. She may be exercising her autonomy when she shapes her life to match the stereotype; however, in doing so, she may also act as an agent of the patriarchy.

What explains why people are so influenced by social biases, including stereotypes and norms? Human cognition, one story goes, evolved so as to facilitate group cohesion and cooperation (Zawidzki 2013; Haslanger 2020). If our minds didn’t attune us to our social environment, allowing us to “pick up” group norms and beliefs, our survival as a species would be compromised. Similarly, Lacey Davidson and Dan Kelly (2018) argue that the human mind contains innate mechanisms—modules—that allow individuals to perceive and follow a wide range of social norms, including norms of reasoning, thought, and action. It is no surprise, according to them, that the gender schema adopted by Pau’s friends is pervasive. The human mind is built to facilitate such uptake.

2.2 Bias as Gerrymandered Perception

Implicit bias connects to social structures in a second way as well. In politics, gerrymandering is a way of dividing up voting districts in a partisan way, so as to make the success of certain political parties more likely.
some theorists, the same kind of thing happens in visual perception (Munton 2019; for more on bias and perception, see Siegel, Chapter 5, “Bias and Perception”). Look around you, for example: don’t you see many women in submissive positions at work and in their personal lives? Why is that? According to a structural analysis, the social environment with its norms and arrangements constrains the lives of social groups in a systematic way, and this results in many of their members exhibiting certain properties (for example, being submissive). If middle-class and upper-class women in heterosexual relationships like Lisa tend to quit their jobs when a baby arrives, for example, their economic and social power is compromised. If Larry is making all the money and controls access to the family bank account, Lisa might have to politely ask him for permission to spend money. We may see Lisa doing this or hear her petitioning Larry. However, our eyes and ears cannot access the social backstory. All we see or hear is the outcome: Lisa acting submissively and deferentially toward Larry. This observation points to something troubling. Suppose you implicitly associate women with taking care of children or with character traits like submissiveness. You may have developed these associations, in part, because you look around the world and see that many—if not most—women embody these stereotypes. Similarly, Pau’s friends might see confirmation of the gender binary in their world. “There are just two genders,” they might argue if Pau pushes back, “just open your eyes and look around.” Statistical evidence might be on their side. However, to the extent that evidence is on their side, this is because agender, gender fluid, and transgender people are not tolerated, and so, too often, are not publicly visible. Moreover, Pau’s friends are forgetting all the ways in which children are socialized through the binary and, hence, how the gender binary is actively promoted and collectively reproduced. What they don’t consider is whether social reality has been gerrymandered—rigged—to make it appear as if social outcomes reflect unvarnished, unconstrained individual choice. One might object to calling accurate views of groups “biases.” But even true beliefs about groups may “incline,” hence bias, us towards judging individuals by group membership rather than by facts about them as individuals (Antony 2016; Beeghly, Chapter 4, “Bias and Knowledge: Two Metaphors”; Basu, Chapter 10, “The Specter of Normative Conflict: Does Fairness Require Inaccuracy?”). Likewise, habitual ways of seeing and thinking may become “sedimented” in us, making it harder to be open to evidence when we enter new environments in which our views of groups may not be accurate (Ngo 2017; Munton 2019; see also Leboeuf, Chapter 2, “The Embodied Biased Mind,” and Greene, Chapter 7, “Stereotype Threat, Identity, and the Disruption of Habit”). Finally, gerrymandered perception and cognition may constitute biases in that they cause us to think and act in ways that promote an unjust status quo.
2.3 Bias as a Contextual Feature of Social Environments

Now consider a third approach to implicit bias: “the bias of crowds” model. While traditional theories of implicit bias focus on what’s going on “inside the head” of particular biased individuals (Johnson, Chapter 1, “The Psychology of Bias: From Data to Theory”), this new model grounds bias “in the culture, community, and immediate social contexts people inhabit” (Payne and Vuletich 2017: 49). New data motivate the model. By now, millions upon millions of Implicit Association Tests have been completed on the Project Implicit website. This rich trove of Big Data has enabled researchers to study the geographic variability in individuals’ biases (briefly mentioned by Brownstein, Chapter 3, “Skepticism About Bias”). Combining isolated individuals’ IAT scores and explicit attitudes to study overall average social attitudes across regions, researchers are now uncovering more and more correlations between these average implicit bias scores and a range of regional outcomes and patterns. For example, in countries with larger achievement gaps between boys and girls in science and math, people tend to exhibit stronger implicit gender stereotypes associating men with science (Nosek et al. 2009). In conjunction with these new data, researchers have also found that individuals’ scores on implicit bias tests can be manipulated in various ways. For example, an individual’s implicit racial biases can shift dramatically depending on whether they take the IAT in a well-lit versus dark room (Schaller et al. 2003). What context effects such as this suggest is that the specific IAT score you get says somewhat less about your biases as an individual—and less about what you’re really like deep down and over the long term—and more about the thoughts and images that happen to be floating through your head at a given time. Imagine this scenario. You are a student attending a predominantly white university in the American South. As you walk to lecture every day, you see Confederate monuments. You perceive the faculty to be largely white. You know that many of your fellow students are financially stressed out, some are even homeless, while others are living in luxury. How might this state of affairs impact your biases? One set of researchers has examined the question. Here is what they found: average implicit bias scores among college campuses are predicted by broader environmental and structural factors such as the percentage of nonwhite faculty on campus, the presence (or absence) of highly visible Confederate monuments on campus, and the student body’s economic mobility (i.e., the percentage of students who grow up in low-income families but eventually become high earners) (Vuletich and Payne 2019). Significantly, implicit biases seem to be tracking salient inequalities and environmental markers of injustice. If so, not all places inspire bias equally. Modern-day regional IAT scores, for example, correlate with patterns of slavery in the USA at the dawn of the Civil War (Payne et
al. 2019). In counties and states that had higher proportions of slaves in 1860, white residents have stronger pro-white implicit biases to this day, whereas black residents in those same areas have stronger anti-white attitudes. There is, in fact, a sizeable and growing empirical literature tracing the psychological and material legacies of slavery across American time and space. If individuals’ biases vary with where they live, go to school, or work, we should perhaps think of biases as existing in environments and situations, rather than as existing in individuals’ minds. Advocates of this new model thus defend “a context-based perspective … an interpretation of implicit bias as the cognitive residue of past and present structural inequalities” (Payne et al. 2017; Payne et al. 2019: 1; see also Murphy et al. 2018). These three models underscore a crucial point. Though biases exist in individuals’ minds, they cannot be adequately understood as cut off from everything else. Each of these three models connects individuals’ implicit or explicit biases to their wider social environment. Psychology and structure are intertwined in deep and important ways. To miss this, or ignore it, is to misunderstand the nature of bias.
3 Comparing Individualistic and Structural Approaches: Three Criteria

Given the interconnection between structural and individualistic elements of bias, it is too simple to say that an approach has to be either structural or individualistic. The language of priority is more appropriate. Individualistic approaches prioritize or emphasize the individual, and in particular what is inside their mind, whereas structural approaches prioritize elements of the social reality beyond the individual (Madva 2016). How might we evaluate the strengths and benefits of each picture, individualistic and structural? This section articulates three dimensions along which the two approaches could be compared and evaluated. One comparison is how accurately each approach identifies what is morally relevant in unjust social situations. Call this the moral relevance criterion. For instance, Haslanger claims that an individualistic picture focused on implicit bias “fails to call attention to what is morally at stake” when individuals make choices in unjust social conditions (2015: 1). Recall how the structural picture reveals what is wrong in Lisa’s case: not that she cannot make a choice, but the way her choice architecture is constrained. A second way to compare the pictures is the explanatory adequacy criterion. Each approach—individualistic and structural—explains injustice and social inequality differently. Often these explanations are thought to be competing. If so, the question would be this: which one is superior? On the other hand, individualistic and structural explanations might be compatible. Perhaps we can keep both in our toolbox.
A third comparison looks at the interventions each proposes, and how effective they are (Ayala 2017). Call this the practical utility criterion. When considering interventions, we might have at least three different aims:

1 reducing, and ideally eliminating, individual negative attitudes and prejudices;
2 reducing, and ideally eliminating, inequalities (e.g. salary gaps, employment and education opportunities); and
3 finally, reducing social injustice altogether, and ideally attaining a just society.
These three aims are related, but they are also independent in important ways. (For more about the differences between these aims along with examples, see Madva, Chapter 12, “Individual and Structural Interventions.”) As we compare individualistic and structural approaches, it is important to consider what our aims are in order to determine whether an individualistic or a structural approach is more appropriate.
4 Evaluating Structural and Individualistic Approaches

We now have three criteria. In this section, we apply the criteria and see how the two approaches fare.

4.1 The Moral Relevance Criterion

Here is the first question. Which approach is better at identifying the morally relevant features of unjust social situations? According to Sally Haslanger’s view, structural approaches do better. Haslanger criticizes individualistic approaches for dwelling too much on the motives of wrongdoers (2015: 1). If she is right, these approaches ignore the fact that “the asymmetrical burdens and benefits and inegalitarian relationships imposed on groups” constitute “the normative core” of what’s wrong with racism and sexism (2015: 1–2). These group-level wrongs become visible only through structural analysis. How well does this objection work? Remember the three models discussed earlier: bias as internalized social structure, bias as gerrymandered perception, and bias as feature of social environments. Each model calls attention to deep connections between individual psychology and social structures. Because such approaches intertwine bias and structure, they do not hide how biased judgments and decisions relate to group dynamics and collective harms. On one hand, Haslanger need not be disturbed by this result. Her view is not that we must stop talking about bias altogether but that “an adequate account of how implicit bias functions must situate [bias] within a wider theory of social structures and structural injustice” (1). To the extent that newer accounts of bias do this, they do not ignore the “normative core” of
racism and sexism. Nevertheless, her objection still has merit. Early theories of implicit bias did characterize biases solely in terms of individual psychology, and these theories continue to be influential. Such theories ignore collective dynamics and are problematic for the reasons Haslanger notes. Exhibit A is the philosophical literature on implicit bias, which has been disproportionately focused on questions of individual responsibility (for continued reflection on this point, see McHugh and Davidson, Chapter 9, “Epistemic Responsibility and Implicit Bias”). On the response just given, Haslanger is open to—and even embraces—more complex accounts of implicit bias. But she has another option. Remember Pau’s friends. Imagine a theorist who argues that these friends have internalized widespread gender norms, i.e., parts of the social structure. When this theorist analyzes what’s ethically wrong with how Pau is treated, let’s suppose, they emphasize the ways in which Pau is harmed by their friends’ binary assumptions. At this point, Haslanger might say: “Ah ha, my point precisely! Explanations of injustice that appeal to implicit bias—no matter how complex—make folks more likely to focus on wrongs to individuals rather than group wrongs, even if they don’t necessarily do this, and even if the theories, when properly understood, push against that tendency. So the objection holds: explaining injustice via implicit bias prioritizes individual factors and, in so doing, obscures what’s most problematic about social biases.” We have now reached the heart of the issue. The thought is this. If we endorse an analysis of injustice that prioritizes individuals (and especially their mental states), then structuralists like Haslanger think we are encouraging theorists to remain at the periphery of social problems, ethically and politically speaking, rather than getting to their core. Let’s investigate this thought further. Start with the claim that there is a normative core to social injustice. For any injustice, there will be a range of harms and wrongs associated with it. Some of these will be group harms and wrongs. If agender people have no place to exist outside the binary, that harms them as a class. Yet individual wrongs and harms are also present. Pau’s friends harm Pau, for example, by acting in a way that defends a rigid gender binary. They fail Pau as friends. Pau can be resentful if they are silenced or remain misunderstood because their friends are dismissive. Likewise, if we want to understand what’s wrong with Pau being treated in this way, we ought to think about how it affects Pau’s wellbeing and in what specific ways. Perhaps Pau becomes depressed and socially alienated. Maybe there is a certain kind of bodily alienation that accompanies their experiences. If so, there is an imperative to pay attention to—and center in our analysis—Pau’s experiences as a particular individual. On this last point, we should note a powerful tradition in social science: critical race theory (Salter and Adams 2013; Delgado and Stefancic 2017). Theorists in this tradition, as well as feminist theorists, excavate and render visible the experiences of marginalized individuals for insights into how injustice operates. Writers like Frantz Fanon and Iris Marion Young, for
example, eloquently explore how bodily and social alienation feels and functions from the inside (Fanon 1952/2008; Young 2005). Such theorists foreground their own particular experiences; yet, quite explicitly, they suggest that these experiences are widely shared and reflect oppressive social dynamics (for additional examples, see Lorde 2007). Their methodology pushes back against the idea that one must center social structures—giving them maximum “air time” in one’s analysis—in order to reach the normative core of racism or sexism. These analyses also reveal that individual and group harms are overlapping and inextricable, so much so that it makes little sense to label group harms “core” while relegating all else to the periphery. To understand group harms, we must understand how oppression affects individuals; to understand the wrongs suffered by individuals, we must appreciate group dynamics. A second worry deserves to be mentioned here. Haslanger’s objection presumes that the normative core of injustice is stable across all contexts. While it’s a common assumption, it ought to be questioned. Suppose that we are trying to build a social movement to support gender equality. Our central concern might be law and policy. Perhaps we push for laws guaranteeing workplace protections for transgender employees. Maybe we agitate for more generous family leave policies or universal, government-subsidized childcare. To justify these policies, we appeal to how they benefit and provide justice to groups as a whole. Given our aims, collective benefits and burdens take center stage here—and rightfully so. Yet this might not always be the case. There could be some contexts in which individual wrongs and harm can and should take center stage, if we care about justice. Think about Lisa. Imagine that, instead of quitting her job, Lisa is fired when she has her baby. Perhaps, in this specific context, individualistic factors such as her employer’s beliefs about women, as well as his treatment of Lisa specifically, are of central moral relevance. To get justice in court, Lisa’s lawyer must prove that her employer fired her because of her pregnancy. If Lisa’s lawyer ignores what’s in the employer’s head and exclusively focuses on widespread group dynamics, she will lose the case. Justice for Lisa will not be served. Similarly, if Pau demands an apology from their friends, it would be very odd if the friends apologized only for harming agender people in general. When we tell a friend that we feel wronged by them, we are generally asking for acknowledgement of a wrong done to us, specifically, as an individual friend. Though group harm may be interwoven with this wrong, the fact remains that the injustice was done to us. Examples such as these lead into controversial territory. They warm readers up to the idea that what’s most morally relevant in a situation can change, depending on how you are trying to fight injustice. But the examples should worry us, too. Why should a flawed legal system get to dictate what is most morally relevant in cases of injustice, for example? Justice for many people is not served within the existing system, precisely because intentions are given excessive moral relevance. Employers are often smart enough not
to leave a paper trail stating their intentions. When Lisa’s lawyer litigates as if prejudice were the key factor in wrongful discrimination, it’s thus not necessarily a good thing. She plays into a flawed system and may be seen as perpetuating the false view that bad intentions are required for wrongful discrimination. Bad intentions are simply not always the problem. A group of people might be genuinely committed to social coordination and follow their community’s norms without any specific mental state that could be said to be discriminatory, and yet, their community’s norms and practices might be such that they disadvantage a subset of the group. These observations suggest that the moral relevance criterion cannot be used to decisively argue for the superiority of structural approaches. It is not obvious that group harm is always the most morally relevant feature of unjust social situations. What is morally relevant in the courtroom may not be of central moral importance when we are engaging in a collective act of protest. Even more crucially, individual and group harms appear to be interwoven so thoroughly that it makes little sense to lift up group harm as the essential and most important thing in any and all contexts whatsoever. Both kinds of harms matter, ethically. If so, the moral relevance criterion would push towards a more contextual answer to the question: which approach is better? We would need both approaches to understand what’s wrong with injustice, and they would be complements.

4.2 The Explanatory Adequacy Criterion

Perhaps the explanatory adequacy criterion tells a different story. Our two approaches—individualistic and structural—correspond to two kinds of explanations used by social scientists to explain the social world and make predictions about it. When two scientific theories offer an explanation of a phenomenon, how do we know which one is better? What are the most important explanatory virtues and vices? Such questions have long been explored by scientists and philosophers of science. One view is that structural explanations win: perhaps they offer the deepest and most complete explanation of injustice. Consider our three models of implicit bias. All of them locate the sources of biases outside minds and in social environments. When people internalize group norms and stereotypes, their minds take in controlling images from society at large. The same goes if biases exist environmentally, as residues of historical and ongoing inequalities. In each case, explanatory priority seems to lie in structures, not minds. Individuals only have the biases they do because they exist in particular social milieus. Serious explanatory gaps may remain if we rely on structural explanations alone. Different individuals respond to social norms and stereotypes differently. Some, like Lisa, embrace them. Others, like Pau, resist them. If we use a structural approach alone, we face serious challenges explaining why some individuals embrace conformity, while others do not. Likewise, there is a
strong argument to be made that individuals act on structures. The Stonewall rioters, for example, started a movement that eventually changed American attitudes towards homosexuality, queerness, and gender nonconformity (Stryker 2008). They also challenged unjust laws that permitted the brutalization of queer people. Not only do social structures shape individuals, individuals shape the structural aspects of social reality. Within a structural frame, we may therefore want to keep an eye on individual actors for various reasons (Beeghly 2020). Sometimes individuals are complicit and act as agents of structure. Sometimes they subvert structures. The possibility thus arises that we need both approaches. Maybe they are even compatible and can be used together—synergistically—to explain an event that would be less well explained if only one approach were used. Return to the example of Lisa, who quits her job when she has a baby. Maybe the individualistic picture tells us the proximal or immediate cause of her decision: Lisa quit her job because she prefers to take care of her baby. The structural picture might tell us about prior or more distant causes, for example, how Lisa was socialized to think about gender, or how the possible options for her are constrained to make quitting seem rational. Likewise, the individualistic picture would reveal the immediate cause of why Pau’s friends failed to accurately understand what Pau says: prejudice clouds their minds. The structural picture could tell us what caused those prejudices in the first place (see Kukla 2014; Ayala 2016), and it would alert us to the fact that there may not be appropriate concepts like “nonbinary” available in the context. In this way, the structural approach might be thought of as expanding the scope of individualistic explanations. Expansion here consists in including more causes, by pushing the causal chain back in time, seeking past structuring causes that add to the list of factors that result in the outcome. If so, the two approaches would again be complements rather than competitors. One objection to understanding explanatory expansion in this way is that it mischaracterizes the right role for structural factors. They are not just back-in-time causes leading to specific mental states in individuals, which ultimately produce an outcome. Structural factors play a role not only as distant causes, but are also present at the very moment the outcome we want to explain is happening (Ayala and Vasilyeva 2015; Ayala 2018). According to a second sense of expansion, we could see the structural approach as zooming out and including (any) external, outside-the-individual factors. However, as Vasilyeva (2016) and Vasilyeva, Gopnik, and Lombrozo (in prep.) point out, not any “expansion” of focus counts, as not all external factors are structural. Suppose, for instance, that Lisa were forced to quit her job because her malevolent sister locked her up at home for a week. If dynamics in Lisa’s family place her systematically in a submissive position relative to her sister, then a structural explanation could be offered for the outcome. However, if there is no such dynamic and it’s an aberration from how they usually interact, then there is no structural explanation available. Being locked up, and
having to quit her job, is just bad luck. Though an external force is to blame, that force is not structural. As a result, we cannot merely think of structural explanations as zooming out to consider all beyond-the-individual factors. A third sense of expansion is better for the structural picture: the structural lens neither just broadens the scope back in time, nor merely includes any external factors. Rather, it situates the outcome in a network of relationships within a larger whole, identifying how the relationship between the parts and the whole modifies the probabilities of certain kinds of behavior within the system. After all, one way to think of social environments and networks of relationships as influencing what we do in the moment is by changing the odds. Part of what makes structures hard to see and understand is that they often don’t force or require us to act in certain ways. Often structures just make it more or less likely for us to act in certain ways. Structures put their fingers on the scale: making some actions easier and some actions harder, some options more beneficial and some options less. Structures have these effects, moreover, not just on particular individuals, but for lots of other people in similar situations. They change the odds not just for Lisa’s choice but for all the people who occupy “nodes” like Lisa in similar networks of relationships. Significantly, this explanation is not necessarily causal. No causal mechanism for Lisa’s choice is cited. Instead what we seem to have on our hands is a probabilistic explanation. If so, individualistic and structural approaches provide explanations of different kinds: one explanation is causal, the other is non-causal. Some readers might be worried by this suggestion. How could a good scientific explanation not be concerned about causes? Answering this question is beyond the scope of this essay. Still, one thing is clear. Though causal explanations dominate the sciences, other kinds of explanations flourish in biology, physics, and yet other disciplines (for a wide variety of examples, see Lange 2016). Some explanations are functional; that is, they appeal to the larger purpose of an event or its role in a system as a whole to explain why it happened. Other explanations are mathematical or probabilistic. (See Fagan, in preparation, for a wider list of scientific explanations and further discussion.) Though some scientists and philosophers defend explanatory monism (the view that there is only one respectable kind of explanation), it’s not the only—or, arguably, the best—view. Explanatory pluralists argue that many kinds of explanation are useful and even necessary for science. A related view—called explanatory particularism (Fagan)—says that scientific explanations are the specialized products of particular scientific communities. For example, many psychologists produce explanations that frame social injustice in primarily individualistic terms, whereas sociologists tend to produce structural explanations. Good scientific explanations, according to explanatory particularism, are ones that enrich our understanding of the world when combined with others. If so, structural and individualistic approaches could potentially work together to promote a more comprehensive, deeper understanding of social injustice.
4.3 The Practical Utility Criterion

We now turn to the final criterion for evaluating the two approaches: the practical utility criterion. This criterion looks at what kinds of interventions each approach proposes and compares their effectiveness. Perhaps ironically, the three models of implicit bias we’ve examined point to structures—not individuals’ minds—as the locus for effective interventions. According to the bias-of-crowds model, individuals’ biases as measured by psychologists track the presence of past and present social inequalities in social environments. If so, it’s pointless to intervene directly on individual minds. To eliminate bias, we must modify social environments (see also Dasgupta 2013). On a second view, implicit biases are controlling images from wider culture, including stereotypes and social norms, which individuals internalize. Since the root of the problem is structural, so is the solution. If we could get rid of cultural stereotypes and social norms, we would stop them from being internalized. Finally, if morally problematic biases (e.g. associating women and being submissive) accurately track statistical realities in our world that result from unequal social arrangements, we find ourselves with “a pattern of social inequalities that we can and ought to change” (Antony 2016: 185). These reflections suggest that, ultimately, the structural picture is the most adequate when considering interventions to fix social injustice. However, before readers with structural affinities get too excited, a little cold water must be thrown on the proposal. There is a paradox surrounding structural change. While it’s one hundred percent true that structural approaches offer the most direct route to social change, a stubborn fact remains. Structures do not magically transform. Individuals must change them. For example, if an unjust law is to be abolished, a huge collective effort will have to be made. People will have to call their legislators and voice their concerns. Investigative reporters will have to publicize the ways in which the unjust law is unfair and harmful. Legislators will have to introduce a new law that invalidates or overrides the old one, and they will have to vote on it. Citizens may have to protest if the vote fails. Though the processes by which social norms and cultural stereotypes transform are less straightforward, the same kind of observation holds. Individuals must act if these aspects of reality are to be changed. For that reason, early advocates of gay rights argued that queer folks would have to come out to their friends and family in order to drum up sympathy for structural change. If your kid’s teacher, your brother, your favorite neighbor, or your daughter comes out as gay, the thought went, it would be harder for pernicious stereotypes to dominate the conversation about gay rights. For example, it would be harder for politicians to argue that employers could simply fire gay people if they wanted to because “gay people were deviants.” Examples such as these suggest that advocates of structural change must also pay attention to individuals and their mental states if they hope to change structures, even if changing structures directly would be in principle a faster and better way to go.
5 Synergies and Convergences

In this chapter, we investigated two approaches to injustice. At first, it seemed as if these approaches—individualistic and structural—were competitors. We have questioned that and invited readers to consider how the two approaches can work together, complementing each other. Here is one exciting example. In psychology, researchers are examining how individuals think about and react to structures. For example, Vasilyeva, Gopnik, and Lombrozo (2018; in prep) propose a psychological intervention aimed at counteracting the way people process social structures by promoting what they call “structural thinking.” Structural thinking acknowledges that people occupy specific social positions within the landscape of opportunities and obstacles shaped by structural constraints. It recognizes that people (including each one of us) don’t just do things because of underlying biological traits or idiosyncratic preferences (Vasilyeva and Ayala-López 2019; see also Madva, Chapter 12, “Individual and Structural Interventions” on accentuating the situation). Thus, one thing we have to persuade individuals to do is—think structurally! The imperative is to interpret each other not just in terms of beliefs and desires but also in terms of structural opportunities and constraints. A second lesson of our analysis is this: there is no one-size-fits-all solution for injustice. Sometimes structural approaches may fare better, given our aims. Sometimes individualistic approaches will be more effective. In yet other cases, a mixed strategy focusing simultaneously on individuals and social structures may work best. Our considered conclusion is therefore this: both approaches are necessary to explain what’s wrong with injustice, why inequalities occur, and how to transform our world (and ourselves) for the better.
SUGGESTIONS FOR FUTURE READING

If you’d like to explore individualistic approaches to injustice further, read:

• Katie Steele (2013) Choice models. In Nancy Cartwright and Eleonora Montuschi (eds), Philosophy of Social Science: A New Introduction (pp. 185–207). Oxford: Oxford University Press. Steele introduces readers to decision theory, social choice theory, and game theory and examines how these models seek to explain/predict human behavior.
• Cristina Bicchieri (2016) Norms in the Wild: How to Diagnose, Measure, and Change Social Norms. New York: Oxford University Press. Bicchieri offers a predominantly individualistic account of social norms, how they are established, and how to change them. An engaging book full of case studies.
• L.A. Paul (2014) Transformative Experience. Oxford: Oxford University Press. Paul argues that the big decisions in our lives like having a child and choosing a profession cannot be resolved by reason and require a “leap of faith.” She says that one’s preferences and views change radically after big life experiences in ways that cannot be reliably anticipated in advance.
• Mahzarin Banaji and Anthony Greenwald (2013) Blind Spot: Hidden Biases of Good People. New York: Delacorte Press. Banaji and Greenwald analyze bias with a heavily individualistic lens, namely, as a property of individuals’ minds.
If you’d like to explore structural approaches to injustice further, read:

• Richard Rothstein (2017) The Color of Law: A Forgotten History of How Our Government Segregated America. New York: Norton. Using historical documents and interviews, Rothstein explores how law and policy at local, state, and national levels created residential segregation in the United States throughout the twentieth century.
• Eduardo Bonilla-Silva (2018) Racism Without Racists: Color-Blind Racism and the Persistence of Racial Inequality in America (fifth edition). New York: Rowman & Littlefield. Bonilla-Silva analyzes story-telling practices and narratives that generate racial inequalities in the United States.
• Kate Manne (2017) Down Girl: The Logic of Misogyny. New York: Oxford University Press. Manne distinguishes sexism and misogyny. She argues that misogyny does not require negative attitudes or beliefs about women, much less hatred towards them; rather, misogyny is a feature of environments in which all genders are policed in ways that maintain gender hierarchy.
• Keith Payne, Heidi Vuletich, and Kristjen Lundberg (2017) The bias of crowds: How implicit bias bridges personal and systemic prejudice. Psychological Inquiry, 28(4): 233–248. https://doi.org/10.1080/1047840X.2017.1335568. This is a “target article” about the bias of crowds model that is followed by commentaries and replies to the commentaries.
• Sally Haslanger (2012) Resisting Reality: Social Construction and Social Critique. New York: Oxford University Press. An important collection of philosophical essays which analyzes social structures, social construction, objectification, and oppression related to race, gender, and other social categories.
• Elizabeth Anderson (2010) The Imperative of Integration. Princeton, NJ: Princeton University Press. A rich analysis of racial segregation in the United States, as well as what’s wrong with it and how it sustains injustice and inequality.
If you’d like to read the work of theorists who blend individualistic and structural approaches, read:

• Cherríe Moraga and Gloria Anzaldúa (eds) (2015) This Bridge Called My Back: Writings by Radical Women of Color. Albany, NY: SUNY Press. In this classic anthology from 1981, you’ll find essays, poetry, and philosophy that explore the experiences of women of color in order to better understand how oppression functions.
• Audre Lorde (2007) Sister Outsider: Essays and Speeches by Audre Lorde. Berkeley, CA: Crossing Press. Audre Lorde is one of the most celebrated poets of the twentieth century. In this collection of essays, she talks about her experiences as a black lesbian feminist, as well as her attempts to fight structural sexism and racism. See especially the essay “The Master’s Tools Will Never Dismantle the Master’s House.”
• Iris Marion Young (2005) On Female Body Experience: Throwing Like a Girl and Other Essays. Oxford: Oxford University Press. Collected essays of philosopher Iris Marion Young. Young argues that understanding and fighting injustice requires both structural analysis and attention to individuals’ embodied experiences.
• Michelle Alexander (2010) The New Jim Crow: Mass Incarceration in the Age of Colorblindness. New York: The New Press. Alexander explores how the United States prison system, as well as the legal system and policing practices, have been used to target African Americans and keep them “in their place” since Jim Crow laws were abolished in the 1960s. The core mechanism she describes, for both police officers and prosecutors, is that unchecked discretion plus implicit bias equals unfair treatment.
• Kristie Dotson (2012) A cautionary tale: On limiting epistemic oppression. Frontiers, 33(1). Dotson explores epistemic oppression and argues that popular ways of analyzing epistemic injustice in philosophy are inadequate. Dotson introduces a type of epistemic injustice (i.e. contributory injustice) that can be seen as a failure of both individuals and structures.
DISCUSSION QUESTIONS

1 How would you define the individualistic approach? And the structural approach? Ayala-López and Beeghly illustrate these approaches using the examples of Lisa and Pau. Can you think of an example from your own personal experience or fiction or the recent news that can be used to illustrate each approach?
2 Describe a particular social position in a social structure. Focus in particular on which possibilities this position enables or constrains.
3 Can you summarize the three models of implicit bias outlined in the chapter?
4 Pick one of those three models. Explain how it connects biases in individual psychology to social structures.
5 How can you compare individualistic and structural approaches?
6 Towards the end of Section 4, Ayala-López and Beeghly write, “what’s most morally relevant in a situation can change, depending on how you are trying to fight injustice.” Explain what that means. Do you agree that what’s most morally relevant about injustice might change, depending on the situation? If yes, give your reasoning why. If not, explain why not.
7 What are the senses in which the structural picture can be said to expand the scope of the individualistic picture? And why is a mere expansion to include external factors not adequate?
8 What argument would you give in support of the idea that individualistic and structural approaches are incompatible? What argument would you give in support of the idea that they are compatible?
9 How would you try to persuade someone that if the goal is to reduce injustice, intervening on people’s minds is not the best way to go? And how would you try to persuade them of the opposite?
REFERENCES

Anderson, E. (2010) The Imperative of Integration. Princeton, NJ: Princeton University Press.
Antony, L. (2016) Bias: friend or foe? Reflections on Saulish skepticism. In M. Brownstein and J. Saul (eds), Implicit Bias and Philosophy: Volume 1 (pp. 157–190). Oxford: Oxford University Press.
Archer, M. (1979) Social Origins of Educational Systems. London: Sage.
Ayala, S. and Vasilyeva, N. (2015) Explaining speech injustice: Individualistic vs. structural explanation. In D.C. Noelle, R. Dale, A.S. Warlaumont, J. Yoshimi, T. Matlock, C.D. Jennings, and P.P. Maglio (eds), Proceedings of the 37th Annual Conference of the Cognitive Science Society (pp. 130–136). Austin, TX: Cognitive Science Society.
Ayala, S. (2016) Speech affordances: A structural take on how much we can do with our words. European Journal of Philosophy, 24(4): 879–891. https://doi.org/10.1111/ejop.12186
Ayala, S. (2017) Comments on Alex Madva’s ‘A plea for anti-anti-individualism: How oversimple psychology misleads social policy’. The Brains Blog, March 6. Available at http://philosophyofbrains.com/wp-content/uploads/2017/03/Saray-Ayala-Lopez-Comments-on-Madva.pdf
Ayala, S. (2018) A structural explanation of injustice in conversations: It’s about norms. Pacific Philosophical Quarterly, 99: 726–748. https://doi.org/10.1111/papq.12244
Barnes, E. (2015) Social identities and transformative experience. Res Philosophica, 92(2): 171–187.
Beeghly, E. (2020) Embodiment and oppression: Reflections on Haslanger. Australasian Philosophical Review.
Collins, P.H. (2000) Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment (second edition). New York: Routledge.
Cudd, A. (2006) Analyzing Oppression. Oxford: Oxford University Press.
Dasgupta, N. (2013) Implicit attitudes and beliefs adapt to situations: A decade of research on the malleability of implicit prejudice, stereotypes, and the self-concept. Advances in Experimental Social Psychology, 47: 233–279.
Davidson, L. and Kelly, D. (2018) Minding the gap: bias, soft structures, and the double life of social norms. Journal of Applied Philosophy. Accessed via early view: https://doi.org/10.1111/japp.12351
Delgado, R. and Stefancic, J. (2017) Critical Race Theory: An Introduction. New York: NYU Press.
Fagan, M. (in preparation) Explanatory Particularism.
Fanon, F. (1952/2008) Black Skin, White Masks. New York: Grove Press.
Haslanger, S. (2015) Distinguished lecture: Social structure, narrative and explanation. Canadian Journal of Philosophy, 45(1): 1–15.
Haslanger, S. (2016) What is a (social) structural explanation? Philosophical Studies, 173(1): 113–130.
Haslanger, S. (2020) Cognition as a social skill. Australasian Philosophical Review.
hooks, b. (2015a) feminism is for everybody: passionate politics (second edition). New York: Routledge.
hooks, b. (2015b) ain’t i a woman: black women and feminism (second edition). New York: Routledge.
Kukla, R. (2014) Performative force, norm, and discursive injustice. Hypatia, 29(2): 440–457.
Lange, M. (2016) Because Without Cause: Non-Causal Explanations in Science and Mathematics. Oxford: Oxford University Press.
Lorde, A. (2007) Sister Outsider: Essays and Speeches by Audre Lorde. Berkeley, CA: Crossing Press.
Madva, A. (2016) A plea for anti-anti-individualism: How oversimple psychology misleads social policy. Ergo, 3(27): 701–728.
Munton, J. (2019) Perceptual skill and social structure. Philosophy and Phenomenological Research, 99(1): 131–161. https://doi.org/10.1111/phpr.12478
Murphy, M.C., Kroeper, K.M., and Ozier, E.M. (2018) Prejudiced places: How contexts shape inequality and how policy can change them. Policy Insights from the Behavioral and Brain Sciences, 5(1): 66–74. https://doi.org/10.1177/2372732217748671
Ngo, H. (2017) The Habits of Racism: A Phenomenology of Racism and Racialized Embodiment. New York: Lexington Books.
Nosek, B.A., Smyth, F.L., Sriram, N., Lindner, N.M., Devos, T., Ayala, A., … Greenwald, A.G. (2009) National differences in gender–science stereotypes predict national sex differences in science and math achievement. Proceedings of the National Academy of Sciences of the United States of America, 106(26): 10593–10597. https://doi.org/10.1073/pnas.0809921106
Paul, L.A. (2015) What you can’t expect when you’re expecting. Res Philosophica, 92(2): 149–170. http://dx.doi.org/10.11612/resphil.2015.92.2.1
Payne, B.K. and Vuletich, H.A. (2017) Policy insights from advances in implicit bias research. Policy Insights from the Behavioral and Brain Sciences, 5(1): 49–56. https://doi.org/10.1177/2372732217746190
Payne, B.K., Vuletich, H.A., and Brown-Iannuzzi, J.L. (2019) Historical roots of implicit bias in slavery. Proceedings of the National Academy of Sciences of the United States of America, 116(24): 11693–11698. https://doi.org/10.1073/pnas.1818816116
Payne, B.K., Vuletich, H.A., and Lundberg, K.B. (2017) The bias of crowds: How implicit bias bridges personal and systemic prejudice. Psychological Inquiry, 28(4): 233–248. https://doi.org/10.1080/1047840X.2017.1335568
Salter, P. and Adams, G. (2013) Towards a critical race psychology. Social and Personality Psychology Compass, 7(11): 781–793. https://doi.org/10.1111/spc3.12068
Schaller, M., Park, J.H., and Mueller, A. (2003) Fear of the dark: Interactive effects of beliefs about danger and ambient darkness on ethnic stereotypes. Personality and Social Psychology Bulletin, 29(5): 637–649. https://doi.org/10.1177/0146167203029005008
Stryker, S. (2008) Transgender History. New York: Perseus Books.
Thomasson, A.L. (2016) Structural explanations and norms: Comments on Haslanger. Philosophical Studies, 173(1): 131–139.
Vasilyeva, N. (2016) Structural explanations: Structural factors as moderators and constraints on probabilistic outcomes. Poster presented at the 42nd Annual Meeting of the Society for Philosophy and Psychology.
Vasilyeva, N. (in preparation) Situating structural explanations.
Vasilyeva, N., Gopnik, A., and Lombrozo, T. (2018) The development of structural thinking about social categories. Developmental Psychology, 54(9): 1735–1744.
Vasilyeva, N., Gopnik, A., and Lombrozo, T. (in prep.) When generic language does not promote essentialism.
Vasilyeva, N. and Ayala-López, S. (2019) Structural thinking and epistemic injustice. In B. Sherman and S. Goguen (eds), Overcoming Epistemic Injustice: Social and Psychological Perspectives (pp. 63–85). Washington, DC: Rowman & Littlefield.
Vuletich, H.A. and Payne, B.K. (2019) Stability and change in implicit bias. Psychological Science, 30(6): 854–862. https://doi.org/10.1177/0956797619844270
Young, I.M. (2005) On Female Body Experience: ‘Throwing Like a Girl’ and Other Essays. Oxford: Oxford University Press.
Zawidzki, T.W. (2013) Mindshaping: A New Framework for Understanding Human Social Cognition. Cambridge, MA: The MIT Press.
Zheng, R. (2018) Bias, structure, and injustice: A reply to Haslanger. Feminist Philosophy Quarterly, 4(1): 1–30. https://doi.org/10.5206/fpq/2018.1.4
12 Individual and Structural Interventions

Alex Madva
Changing the world is hard. Changing it for the better is usually harder than changing it for the worse. But why is positive change so difficult? Some answers are familiar. First, it’s hard to get people to care, especially about problems in different places (physically or socially distant neighborhoods or countries) that don’t confront us every day. Second, people are, understandably, wrapped up in pursuing personal goals (careers, families, hobbies). Third, and closely related to the first two reasons, many people feel like their votes and voices don’t matter because the system is rigged (by corporate donors, gerrymandered voting districts, etc.) to make their political efforts pointless. Fourth, making change to promote equality is especially hard, because the haves are typically motivated to hold onto their advantages, and even to see their advantages as fair. Even the have-nots are easily hoodwinked into thinking that their disadvantages are fair when they’re not (Jost 2015). We derive comfort from believing we live in a merit-based society where, as long as you work hard enough, put your head down, and don’t rile up political trouble, then your personal and professional life will go well. Fifth, people may perceive that the world is so big, the problems so entrenched, and here I am, just a tiny, insignificant individual. How can I possibly make a difference? This chapter helps to address such obstacles. But there’s another reason why changing the world is so hard: we often don’t know how to do it. After learning about implicit bias and related social ills, many people are persuaded that there is a problem, but they don’t know what to do next. This chapter is meant to start chipping away at these knowledge gaps, to provide concrete tools to become less biased on an individual level, as well as to start thinking about potential larger-scale reforms for combatting bias, discrimination, and injustice—for promoting a fairer world. But we shouldn’t oversell what we know. Another aim of this chapter is to highlight remaining gaps in our knowledge, and encourage you, the reader, to do your part to fill them. True progress requires that we adopt an experimental mindset: test out different strategies and see how they go, then go back to the drawing board, revise our strategies, and test them again. By contrast, people concerned about racism, sexism, and
other forms of bias and discrimination sometimes speak as if the changes we need to make are obvious. They’re not. Consider two examples.
1 Two Examples

1.1 Boxed Out

In the United States, we make it really hard for people with criminal records to get back, and stay, on their feet. One challenge is that many employers can, or sometimes must (by law), ask all applicants to “check a box” saying whether they have a criminal record. Asking ex-offenders to self-identify in job applications seems reasonable from the perspective of employers (especially for certain jobs: maybe I don’t want to hire someone convicted of decades of accounting fraud to be my accountant). But the result is that ex-offenders have a very hard time finding gainful employment, which in turn makes them more likely to become desperate to make ends meet, and then to re-offend and end up back in prison. Some ex-offenders have it worse than others. In one field study in Milwaukee, the odds of getting a callback for an interview were 34 percent for white male applicants with no criminal record but only 17 percent for white men with a criminal record. Moreover, the odds were 14 percent for black male applicants without a record, but only 5 percent for black men with a record (Pager 2003). The first finding to note is that white men with a record had slightly better odds than black men without. Two follow-up studies in New York City found much the same: white applicants who had literally just gotten out of prison were slightly more likely to get a callback or job offer than black and Latinx applicants with spotless records (Pager et al. 2009). These studies illustrate how devastating a criminal record can be for anyone’s job prospects, but especially for people of color. 19 out of 20 ex-offenders of color can’t make it past the initial screening, even just to land an interview, let alone secure the job. This strikes many of us as unfair. Haven’t these folks already paid their debt to society? But even if you don’t find this unfair (maybe you think they “should have thought of that before committing the crime”), you might agree that the current system is counterproductive when it comes to reducing recidivism. We all share the goal of living safe from violence and crime, and many folks from both the left and right agree that much about the current criminal justice system makes ex-offenders of all races more rather than less likely to re-offend (Bibas 2015). So what should we do about it? While the causes and remedies are complex, one intuitive piece of the puzzle is to restrict employers’ ability to demand that applicants “check the box.” Recent “Ban the Box” initiatives have prohibited employers from asking about applicants’ criminal record until after they get past the initial screening process. As of April 2019, 35 states and over 150 cities have adopted some form of ban-the-box policy
(Avery 2019). It has been encouraging to see these steps being taken, but the results may not be what we were hoping for. One study compared employment rates in regions before and after banning the box (Doleac and Hansen 2016), and found that low-skilled black and Latinx men were marginally less likely to be employed after banning the box than before. It might be that banning the box reduces discrimination against ex-offenders overall but increases discrimination specifically against applicants of color with clean records. Why might this be? Maybe because when employers cannot ask upfront about criminal histories, they may (consciously or unconsciously) just assume that black and brown applicants have sketchy backgrounds. The researchers point to “a growing literature showing that well-intentioned policies that remove information about negative characteristics can do more harm than good” (9). In response to such findings, what is an advocate for justice to do? Are we “damned if we do and damned if we don’t?” Before drawing such pessimistic conclusions, we should take a step back and begin by admitting that we don’t already know the best ways to address such problems. We should invest resources in studying the questions of how to facilitate ex-offenders’ reentry into public life, rather than patting ourselves on the back for making a small change that sounds good in the abstract but maybe does little to fix things, and might even make matters worse. Anticipating these counterproductive results, Michelle Alexander wrote that, “banning the box is not enough. We must also get rid of the mind-set that puts black men ‘in the box’” (Alexander 2012, 153; but see Hernandez 2017). It’s not enough to change rules and policies; we must also consider what’s going on in the minds of employers who consider applications from ex-offenders, namely, stereotypes and prejudices about people of color. We must, collectively, overcome the mindsets (the feelings, assumptions, and implicit and explicit biases) that “box out” people of color from equal opportunities.

1.2 Leaving Mothers Behind

Upon first entering the workforce, men and women typically earn similar salaries. As their careers progress, that changes. Gradually, men’s paychecks tend to grow significantly higher than women’s. Men are also more likely to be promoted; in fact, women make up only 5 percent of the CEOs of the 500 largest US corporations (Zarya 2018). One of the most significant factors here is parenthood. Women who become mothers tend to fall behind, whereas men who become fathers often do even better than men who don’t become fathers (Aravena 2015; Cudd 2006, chap. 5; Miller 2014; see also the Lisa and Larry example in Ayala-López and Beeghly, Chapter 11, “Explaining Injustice: Structural Analysis, Bias, and Individuals”). (Parenthood is not the only factor behind the gender pay gap. Even if you compare single, childless men to single, childless women with comparable performances in
comparable jobs, men still average about 5 percent higher salaries than women (e.g. Stewart and Valian 2018, chap. 4). Also note that effects like the “fatherhood pay bonus” are strongest for high-skilled, cis straight white biological fathers married to biological mothers (Killewald 2013; see also Gasdaglis and Madva forthcoming). There is some evidence for “breadwinner bonuses” and “caregiver penalties” in less gender-stereotypical parenting contexts (Bear and Glick 2017), but a striking Norwegian study found that for same-sex couples, any pay gaps between parents due to having a child disappear within a few years (Andresen and Nix 2019).) What should we do about these gaps? Again, the causes and remedies here are complex. Much attention goes to family-friendly policies, such as allowing more flexible work schedules (Kliff 2017), but especially to parental leave policies. Maybe if we give new parents more paid time off from work, then mothers will be less likely to fall behind. In fact, the USA is the only developed country that doesn’t guarantee paid leave for parents. Some employers opt to give mothers a few months of paid leave, but many don’t. The USA at least guarantees all parents 12 weeks of unpaid leave (although, of course, only parents who are not living paycheck-to-paycheck will be able to afford and take advantage of this policy). As with Banning the Box, however, attempts to reform leave policies may have unintended negative consequences. To take a simple example, requiring employers to let women take maternity leave might make them less likely to hire women at all, or to promote them up the ranks. Consciously or unconsciously, perhaps they would rather not be on the hook for hiring or promoting a woman who (they assume) can just leave the job for months on end, when they could instead hire a man who (they assume) will continue working without significant interruption. One study looked at employment and promotion rates for women before and after the USA passed its minimal policy guaranteeing unpaid leave (Thomas 2016), finding that women overall became a little more likely to remain employed (i.e., to not be fired or quit), but a little less likely to get promotions, perhaps because employers were reluctant to make investments in early-career women who they feared wouldn’t stick around. “The problem ends up being that all women, even those who do not anticipate having children or cutting back in hours, may be penalized,” said Thomas in an interview (Miller 2015). Moreover, if mothers are being financially supported to take time off from work, but fathers are not, then fathers are going to stay on the job and have more time to move up the ranks while new mothers lose experience, lose opportunities for promotion, and fall behind their male counterparts. The natural solution, then, would seem to be to encourage paternity leave in addition to maternity leave. Accordingly, Iceland includes 13 weeks of paid leave specifically for the non-childbearing parent, and most fathers take advantage of it (Kliff 2018a). (Note that this policy is also more inclusive for gender-nonconforming parents.) Due to this and other aggressive efforts, Iceland has one of the lowest gender pay gaps in the world. They haven’t
eliminated the gap completely (still hovering around 5 percent!), partly because mothers tend to take more leave than fathers (Bershidsky 2018).

But even completely gender-neutral parental benefits might not solve the problem. Everything depends on what parents do with the time set aside for caregiving. One study looked specifically at how certain gender-neutral parenting policies affected the odds of getting tenure for professors at 50 top-tier economics departments (Antecol et al. 2018). They found that some policies actually increased fathers' advantages over mothers, evidently because new fathers used the extra time to work on their research and strengthen their case for tenure, whereas mothers actually used (needed) the time to recover, breastfeed, and parent. You can lead a horse to water but you can't make it drink. Maybe you can lead a father to the home but you can't make him parent.

Or, much as Alexander said about ban-the-box campaigns, parental leave policies are not enough; we need to get rid of the mindsets that put one parent in the "breadwinner" box and the other in the "caregiver" box. We need to eliminate the conscious and unconscious expectations, habits, and preferences that treat fathers and mothers differently even when official laws and policies say they should be treated the same.

Recall that whereas mothers are often paid less than childless women, fathers are typically paid more than childless men. One field study found that, even when their résumés were otherwise identical, mothers were half as likely as childless women to be called back for an interview, whereas there was no penalty for fathers (Correll et al. 2007). Why might that be? Well, getting married and becoming a dad often strikes people as "responsible," sending the message that you'll be fully committed to your job because you need to support your family. By contrast, becoming a mom often sends the opposite message: employers think you are not fully committed to the job because you're going to take time off to take care of the kids. This is unfair nonsense. Most mothers work, and they are presumably just as committed to supporting their families as working fathers. And nearly two-thirds of fathers believe they should be spending more time with their children (Parker and Livingston 2018).

There are also social pressures. Fathers who take full advantage of parental leave are sometimes seen as failing to live up to their stereotypical breadwinning role, whereas mothers who don't take advantage of all their parental leave are sometimes seen as failing to live up to their stereotypical caregiving role. In fact, over 25 percent of both men and women worldwide (and over 20 percent of men in North America) explicitly believe that women should stay home altogether. On top of that, nearly half of American women think they should both work outside the home and maintain primary caregiving and housework responsibilities (Gallup-International Labour Organization 2017). (And remember that housework is work, even though it's often uncompensated and excluded from standard measures of economic output.)
Reforming parental leave policies won’t—all by itself—dislodge all these biased beliefs and attitudes. Instead, dislodging biased attitudes may be essential to encouraging individual fathers to take full advantage of these policies and take on their fair share (i.e., half) of household responsibilities, as well as to expanding support for the sorts of powerful policy changes that Icelanders and others have explored. In other words, we need interventions specifically aimed at changing individual hearts, habits, and minds, which may in turn be integral to bringing about necessary larger-scale social transformations and to ensuring that these transformations have the broadest and most durable impact on people’s lives.
2 Either/Or Versus Both/And

Taken together, examples like Boxed Out and Leaving Mothers Behind amount to an argument for the importance of individual-level debiasing strategies, which change individuals' biased assumptions, feelings, and habits, including their implicit and explicit social prejudices and stereotypes. These examples help to respond to an important criticism of the debiasing strategies we'll explore in this chapter, which is that they overlook more fundamental structural factors.

What are structural factors? The contrast is with individual factors. As Brownstein (Chapter 3, "Skepticism About Bias") and Ayala-López and Beeghly (Chapter 11, "Explaining Injustice: Structural Analysis, Bias, and Individuals") explain, different theorists draw the distinction between "individuals" and "structures" in different ways, but, to keep things simple here, we can think of structures as the contexts in which individuals operate. Individuals make choices, but they don't decide what the available options are. The range of available options, and how attractive or feasible each option is—that's part of the structure. Fathers in both Iceland and the USA can choose to spend 13 weeks at home with their newborn. However, fathers in Iceland will be paid during that time, whereas fathers in the USA will most likely not, and could instead be fired for being gone so long (past 12 weeks). Individuals have agency, the freedom to make choices, but it's the external context, or structure, that shapes which options are more or less available, feasible, and desirable.

Structures include all sorts of things, most obviously rules, procedures, and laws. They also include informal social norms and our physical and social environments. Communities of color are, for example, more likely to be exposed to environmental toxins than predominantly white communities (Ard 2015). Consider Flint, Michigan. The water source for this predominantly black city was changed in 2014, which in turn exposed thousands of adults and children to lead poisoning (Clark 2018). Being poisoned just by turning on the tap was a feature of their structure, part of the system in which they operated (and against which they protested, although authorities repeatedly insisted there was nothing to worry about). Other
structural factors include the availability and affordability of high-speed internet, public transportation, housing, childcare, and healthcare.

Roughly, structuralists argue that, when it comes to bringing about a more just society, we should deemphasize changing individual hearts and minds (think less about implicit bias, individual psychology, and ethics) and reemphasize changing structures (think more about sociology, urban planning, political philosophy, etc.). This means revising everything from social norms, to official laws, to the layout of physical space, to the broader systems governing how politicians are elected and how money and resources are extracted, generated, and distributed.

I am all in favor of overhauling existing social structures. In fact, basically everybody is! Disagreement revolves around which structures to change, and how. Libertarians and anarchists want fewer laws and regulations; their opponents want more. Both want structural change, but they disagree about which.

But let's grant structuralists the point that, for example, reforming family-friendly policies—like parental leave, work hour flexibility, and more affordable, accessible childcare—is necessary. We can also grant (just for the moment, for the sake of argument) that these structural reforms will do more to promote fairness than combating individuals' stereotypes about gender, careers, and caregiving. Does it follow that we should prioritize these structural reforms in general over changes in individual psychology in general? I don't see how (Madva 2016). Bringing about these policy changes requires, at a minimum, changes in the beliefs, motivations, or actions of those individuals poised to help change policy. Such structural reforms are more likely if the relevant individuals are persuaded that the reforms are possible and desirable, and start acting to help bring the reforms about. Such reforms are more likely to "stick" and change behavior in enduring ways insofar as the individuals affected "buy into" them, or at least don't actively resist them (as in the case of employers and coworkers who think worse of fathers who take paternity leave).

The most this sort of example could show, if it were correct, is that changing certain individual attitudes (in this case, stereotypes and personal preferences regarding gender, careers, and caregiving) is less relevant to bringing about necessary structural reform, and therefore to promoting fairness, than is changing other individual attitudes (e.g., changing individuals' motivation to reform parental leave policies!). But note that for this example to make sense, we must also assume that individuals' attitudes about gender, careers, and caregiving exert no meaningful influence on their beliefs and motivations surrounding the reform of parental leave policies. That is an implausible assumption. Changing individuals' gender biases might very well be integral to drumming up support for reforming parental leave policies. Remember how many men and women alike still think mothers should stay at home or, even if working outside the home, still do the brunt of the homemaking? If views like these remain pervasive, how can we expect to generate enough enthusiasm and
buy-in to make meaningful and durable changes to parenting policies? The fundamental reason that it doesn’t make sense to say things like, “don’t worry about individuals’ prejudices and stereotypes, just focus on changing structures,” is that individuals’ prejudices and stereotypes are some of the most powerful factors shaping their willingness to support (or oppose) political and structural change (Azevedo et al. 2019; Cooley et al. 2019; Harell et al. 2016; Monteith and Hildebrand 2019; Mutz 2018).
3 More Lessons

With this in mind, the next sections recommend concrete strategies for combating implicit (and explicit) bias at both the individual and structural levels. But I want to make a few more points about Boxed Out and Leaving Mothers Behind first.

What also becomes clear in these examples is that we don't yet know the best ways to address these problems. We need to study how to facilitate ex-offenders' reentry into public life, and how to avoid penalizing mothers for working and fathers for caregiving. Too often we revise our policies with the expectation that they'll make a difference but then don't bother to check if they're helping or hurting (Dobbin et al. 2015). What's worse, even when policy changes don't make any positive difference, they can still give members of privileged groups the false impression that others are now being unfairly advantaged over them, leading them to become more discriminatory toward the disadvantaged than they were before (Dover et al. 2014; Kaiser et al. 2013).

So for starters, we need a healthy dose of epistemic humility when it comes to thinking about how to remedy social injustice (Medina 2012; McHugh and Davidson, Chapter 9, "Epistemic Responsibility and Implicit Bias"). Humility means not overestimating how good you are at something, calibrating your level of confidence to your level of ability. Calling for epistemic humility, then, is a warning against overestimating how much we know, and against being arrogant and self-assured that the solutions to these problems are obvious.

The difficulty of fixing these problems also reveals how multifaceted they are. There is never a single law or stereotype or powerful individual solely to blame. We must figure out how the many puzzle pieces fit together, and then attack these problems in an accordingly multifaceted way. We must reject the forced choice: either pay attention to individuals or pay attention to structures. Resist that either/or framing and insist on a both/and framing. We should think about the complex ways that individuals fit into their structures, and how to change both individuals and structures in tandem to promote fairness (e.g., how to simultaneously undermine stereotypes and overturn discriminatory laws, norms, and built environments).

Relatedly, another lesson regards the importance of adopting an experimental approach to these issues. Since we can't know in advance what will work, we have to put interventions in place with every intention of testing
them, and then going back to the drawing board if they don't work. What makes this so challenging is that collective motivation to change is usually a transient phenomenon: a crisis happens, then a bunch of changes are put in place, and then people forget about it—regardless of whether the changes actually make things better or worse. Our initial plans have to include subplans to measure effects and re-assess. We need automatic checkpoints and triggers to determine whether our efforts are paying off.

Given our epistemic limitations and the multifaceted nature of these problems, when we try to make changes, we shouldn't put all our eggs in one basket. We should try out a bunch of things. We need what I call a diversified experimentalism (Madva 2019a). If you're saving for retirement, investment advisors say you shouldn't put it all in one company's stock, because that company might go bankrupt. Instead, you should diversify your investments across a bunch of companies. We should similarly diversify our experimental portfolio, and explore a bunch of different individual and social experiments and interventions to see which ones stick and which ones stink.

With these lessons in mind, what follows are strategies that—according to one person in 2019—have enough evidential support to be worth a try. Future evidence might fine-tune, enrich, or even overturn the suggestions to follow. Here are some questions for you to think about for each strategy. First, most of these will work better in some contexts than others, and sometimes they might even backfire. Sorting out when and when not to use these tools is incredibly important—something for us to test in formal scientific settings as well as in the labs of lived experience. Ask yourself when these might be more useful, when they might be less useful, and when they might be downright ill-advised. Second, although these tools are framed in terms of what we as individuals can do, you can also consider how we might redesign our social institutions to encourage everybody to try them (a question we'll investigate further in Section 5, on structural reform). Third, ask yourself how these tools might be usefully combined. Perhaps we can couple different techniques together to make them more powerful (for more on the importance of mutually reinforcing strategies, see Madva 2016; 2017; 2019b).
4 Six Debiasing Tools

4.1 Tool #1. The Life-Changing Magic of If–Then Plans

The first debiasing tool regards how to bridge the gap between intention and action. This tool is if–then plans (their official name is a mouthful: "implementation intentions"). These are concrete plans that specify when, where, or how we intend to put our broader aims and values into practice. The key idea is that we are more likely to follow through on our goals if we focus as concretely as possible on the contexts for action and the specific thoughts and behaviors we will execute in those contexts. To get a sense of the difference between vague and concrete plans, contrast the first with the second plan in each of these two pairs:
1a. I'd like to cut back on smoking!
1b. If I feel a craving for cigarettes, then I will chew gum!

Or:

2a. My New Year's resolution is to work out more!
2b. When I leave work on Tuesdays, then I will go to the gym!
Which of these plans do you think will be more effective? 1a or 1b? What about 2a vs. 2b? Note that in each of these pairs, the first option just vaguely identifies the broad goal you're aiming for, but it doesn't say anything about how you intend to follow through on it. By contrast, the second option (step 1) identifies the specific contexts and obstacles that you're interested in and (step 2) highlights a concrete, straightforward plan about what to do in those contexts.

Research suggests that if–then plans are easy to form (practice rehearsing them in your head a few times, or write them down), easy to remember, and easy to execute in the crunch. Meta-analyses consistently find that they can have dramatic effects on behavior and goal achievement (Gollwitzer and Sheeran 2006; Toli et al. 2016). If–then plans can be applied in pretty much any area of our lives where we recognize a gap between how we think we should act and how we actually do act. They are most studied in clinical settings related to healthy eating and substance use, but they also help combat implicit bias (Mendoza et al. 2010; Stewart and Payne 2008). For example, do you worry that you interrupt women more than men? Well, here's an if–then plan for you: "If she's talking, then I won't!" Do you sometimes suffer from stereotype threat or test anxiety? Before your next exam, mentally rehearse the following if–then plan, studied by Bayer and Gollwitzer (2007): "And if I start a new problem, then I will tell myself: I can solve it!"

But don't just take my word for it. It is incumbent upon us to formally and informally test these plans ourselves. Informally, that means trying these out in your daily life. See if they help. Whether they do or don't help, talk to someone you know about your personal experiment. Maybe find an if–then planning buddy to share your intentions and experiences with. Or, if you yourself are a budding social scientist, then do a study on if–then plans! If you're a computer-coding entrepreneur, found a start-up company around an if–then planning app! (For a rough sense of how such an app might represent these plans, see the sketch below.) As you read through the tools to follow, think about examples of if–then plans that might help you put these tools into practice. You might also consider ways to "gamify" these tools. Are there fun games or apps that might be designed to help us practice using these tools?
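For readers tempted by the app idea, here is a minimal sketch of how a hypothetical if–then planning app might represent plans internally. The Python class, field names, and example plans below are my own inventions for illustration; nothing here is drawn from an existing tool or from the studies cited above.

```python
from dataclasses import dataclass

@dataclass
class IfThenPlan:
    """One implementation intention: a concrete trigger paired with a concrete response."""
    trigger: str   # the "if": a specific context or obstacle
    response: str  # the "then": the behavior to execute in that context

    def rehearse(self) -> str:
        # Rehearsing the plan as a single sentence helps commit it to memory.
        return f"If {self.trigger}, then {self.response}."

# The two concrete plans from the pairs above, in structured form.
plans = [
    IfThenPlan("I feel a craving for cigarettes", "I will chew gum"),
    IfThenPlan("I leave work on Tuesdays", "I will go to the gym"),
]

for plan in plans:
    print(plan.rehearse())
```

The design point is simply that a plan is only as useful as its trigger is specific: a vague goal like "work out more" gives the app, and the planner, nothing concrete to act on.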
4.2 Tool #2: Approach Mindsets

The mindsets we take into our social interactions are essential for shaping how we get along, and whether we act in biased or unbiased ways. One study examined the different mindsets we might have when meeting or collaborating with someone from a different social group (e.g., a different political party, religion, or, in this case, a different race). One group was given the goal to "avoid appearing prejudiced in any way during the interaction." This is an avoidance or prevention-focused mindset, where people focus on what not to do. People in this group had more difficult interactions and were more mentally drained after the conversation. Another group, however, was encouraged to adopt an approach or promotion-focused mindset: "approach the interaction as an opportunity to have an enjoyable intercultural dialogue" (Trawalter and Richeson 2006, 409). People in this group found intergroup contact to be "rewarding rather than depleting" (411). Having an approach orientation makes conversations more likely to start off on the right foot and unfold in positive ways.

Approach mindsets have also been studied in clinical and habit-training contexts. For example, studies suggest that people with alcohol use disorder who (in conjunction with regular rehabilitative therapy) repeatedly practiced approaching non-alcoholic drinks and avoiding alcoholic drinks were less likely to relapse for at least one year (Eberl et al. 2013). Other studies find that simply telling participants that they are about to approach members of a different social group, or to approach healthy foods, has many of the same benefits as repeated practice, including reducing bias on the IAT (Van Dessel et al. 2017). In general, when we like something, we tend to approach it. These findings further suggest that when we practice or imagine approaching something, we also come to like it a bit more, too.

4.3 Tool #3: Common-Ground Mindsets

Every person has countless similarities and differences with everybody else, but which similarities and differences count? Which ones do we notice in day-to-day life? When people meet a member of a noticeably different social group, they are more likely to look for and notice their differences than to pick up on and build from what they share in common. Another potentially powerful mindset regards trying to find common ground when you meet a new person—even over something as trivial as whether you both prefer apples over oranges, or carpet over hardwood floors (Mallett et al. 2008). Are you both rooting for the same contestant on The Bachelorette, or against that team that seems to win the championship every year? Do you both see The Dress as black-and-blue or white-and-gold? The effects of common-ground mindsets may be even stronger for more "self-revealing" questions, like the ones that come up in Would You Rather? Would you rather a) be granted the answer to three questions or b) be granted the ability to resurrect one person (West et al. 2014)?

With research like this in mind, Gehlbach and colleagues (2016; compare Cortland et al. 2017; Robinson et al. 2019) had high school students and their teachers fill out a "get-to-know-you" survey with questions like, "If you could go to one sporting event, which of the following would you go
to?" or "What do you do to de-stress?" Some teacher–student pairs learned what they had in common (e.g., maybe both would choose to go to the FIFA World Cup soccer finals, or maybe both de-stress by going for a walk). Compared to a control group that did not learn any shared-in-common facts, the intervention led to increased perceptions of similarity between instructors and students. It also boosted student achievement, especially for black and Latinx students who were traditionally underserved at the school. Strategies for promoting mindsets of common ground across group differences represent key avenues for future research. If you, dear reader, are a college instructor or student, maybe you can try this strategy out at your next meeting in office hours, or develop a common-ground icebreaker game.

4.4 Tool #4: The Power of Perspective

Part of what makes approach and common-ground mindsets effective is their ability to prompt perspective-taking across group boundaries. It's easier for members to understand, communicate, and collaborate on shared social and political projects when they are able to see things from each other's point of view. And it can be that much harder to take another's perspective when people differ in some obvious way, like geographical origin, religion, nationality, or (dis)ability. Face-to-face cooperation (see structural reform #4, below) remains the gold standard for promoting perspective-taking, but narratives and games are also useful here (see also structural reform #3).

Community activists and scholars across the humanities and many social sciences have long emphasized the transformative power of narrative, and the empirical evidence bears them out. One study tested the effects of a 20-minute, online "choose-your-own-adventure" game, in which Hungarians in their mid-20s occupied the perspective of an individual in the Hungarian Roma minority (Simonovits et al. 2018). Both immediately after the game and at least one month later, participants reported much less anti-Roma prejudice, as well as less prejudice toward another social group (refugees) who were not mentioned in the game. Participants were even 10 percent less likely to intend to vote for Hungary's far-right white-supremacist party. Another study found that fictional, engaging narratives about intergroup contact and conflict can reduce explicit bias among young kids, high schoolers, and even undergraduates (in this case, students read passages about relations between Harry Potter's wizarding community and "Muggles," i.e., ordinary humans) (Vezzali et al. 2015). Perspective-taking interventions even reduce implicit bias, and lead, in turn, to more positive face-to-face interactions (Todd et al. 2011). Consider also that mock jurors encouraged to adopt the perspective of defendants become less likely to find them guilty (Skorinko et al. 2014). See also McHugh and Davidson's discussions of "epistemic friction" and "world-traveling" (Chapter 9, "Epistemic Responsibility and Implicit Bias").
4.5 Tool #5: Persuasion and Value

But what does it actually mean to try to occupy another's perspective? Which part of others' perspectives should we try to occupy? Most perspective-taking interventions focus on imagining how other people experience the world (e.g., imagining what it's like to be in the Roma minority), but research suggests that people from different backgrounds also tend to emphasize different sorts of ethical and political values when they reason about what to do (Feinberg and Willer 2015). This poses a problem for moral dialogue because each group tries to persuade the other in terms of the values they prioritize the most, rather than in terms of the values most salient to the person they're talking to. For example, left-leaning folks (e.g., typical "liberals" or "Democrats") tend to put a little more emphasis on fairness, reciprocity, and protecting the marginalized from harm, whereas right-leaning folks (e.g., "conservatives" or "Republicans") tend to place a bit more emphasis on patriotism, loyalty, and purity.

With this in mind, one useful task for enhancing perspective-taking and finding common ground may be to identify the moral values most central to the person you're talking to, and think about how to defend your goals in terms of those values. For example, a left-leaning person trying to persuade a right-leaning person to support marriage equality for everyone regardless of gender and sexual orientation might say, "Our fellow citizens of the United States of America deserve to stand alongside us, deserve to be able to make the same choices as everyone else can … . Our goal as Americans should be to strive for that ideal. We should lift our fellow citizens up, not bring them down" (Feinberg and Willer 2015, 2). Right-leaning folks might better persuade left-leaning folks to support military spending by emphasizing its employment prospects for members of underemployed groups, including racial minorities, and explaining that "through the military, the disadvantaged can achieve equal standing and overcome the challenges of inequality and poverty" (7).

Some people (in particular, pristinely principled philosophers and other academics) find this strategy unbecoming. It can sound cynical and manipulative. Shouldn't we defend the right policies for the right reasons—the reasons we truly stand behind—rather than exploit rhetorical techniques that we don't actually agree with in order to get others to think and do what we want? This is a reasonable concern! There may also be unintended consequences of relying too heavily on strategies like this. For example, appealing to concerns about purity might be useful for someone trying to persuade others to care about the environment and healthcare ("Keep our lakes and rivers pure and unpolluted!… Keep our fellow citizens free from infection and disease!"), but problematic in other contexts. Purity has historically been at play in some of our worst racist impulses, from laws against "miscegenation" (interracial partnering) to the rhetorical strategies used to justify genocide, which portray the outgroup
as an infestation of diseased, polluting insects that must be exterminated to preserve the ingroup's purity. Left-leaning folks may be understandably reluctant to let appeals to purity move back to the center of contemporary political discourse. So if we try this strategy, we must do so carefully.

But before rejecting it altogether, we should remember that making genuine headway toward improving intergroup communication and perspective-taking likely cannot just be about imagining others' experiences (e.g., imagining "what it's like" for both police officers and black civilians to fear for their safety). It must surely also include a central role for taking each other's deeply felt values into account. Trying to identify the laws and policies that appeal to the widest range of people, because these laws can be defended in light of the broadest range of values, may be vital for bridging contemporary partisan divides.

4.6 Tool #6: Accentuate the Situation

As I write this, four of the American National Football League's (i.e., NFL's) 32 starting punters are from Australia (http://www.espn.com/nfl/players/_/position/p, 2018), as are many punters on American college teams (Bishara 2018). Punters are highly specialized players who kick the ball both really far and with impressive accuracy to specific spots on the field. All the other starting punters are from the United States. Isn't it odd that Australians, who don't grow up playing American football, are making it in the NFL in this dedicated role? Why might this be?

If we tried to explain this odd phenomenon by appealing to internal and individual factors, we might hypothesize that, since Australians are descended from British criminals, maybe they've inherited genes for physical strength or psychological grit. Maybe they have strong legs from outrunning the law or hopping after kangaroos? Or maybe not! The real explanation is not about Australian players' genes or personality traits. It's about Australian players' situations: growing up in a country where "Aussie rules football," a game similar to rugby, is very popular. This sport also involves kicking long distances, in a similar way to American football (which itself grew out of rugby). Of course, the fact that this highly specific skill is valued in two distinct cultural contexts doesn't by itself explain why sure-footed Aussies would abandon the game they love to learn a new one. What else is involved? How about the fact that Australians can make up to five times more money in the USA and play professionally perhaps twice as long ("Australians in American football," 2018)? Once we consider their situations, there is nothing particularly idiosyncratic or odd about their choices. We don't have to appeal to any stereotypes about "Australian DNA" or "what Australian personalities are like" in order to explain their success in the NFL.
Too often we overlook situational factors and overplay individual factors when we try to explain things, especially when we are trying to explain the behavior of members of disadvantaged social groups. What's worse, the individual factors we appeal to are often stereotypes. For example, both men and women are more likely to explain a woman's anger in terms of her internal traits ("she is an angry person," "she is out of control"), but, when the very same reaction is exhibited by a man, people explain it in terms of his situation ("he is justifiably angry given the circumstances") (Brescoll and Uhlmann 2008).

So when we try to understand others, we do well to consider whether we are explaining their behavior in terms of internal traits or situational forces (Levontin et al. 2013; Stewart et al. 2010). Why did Jamal show up late to work? Is it because he's lazy and doesn't want to work, or because that citywide power outage last night stopped his alarm from going off? Maybe his brother's car broke down, which meant it fell to Jamal to drop his niece off at school. Why is Jamal acting uncomfortable now that he's arrived at work? Is it because he's got a challenging personality that doesn't "fit" our office culture, or because of anxiety that his predominantly white coworkers will assume that he came late just because he's black? Maybe he's uncomfortable because he rode up the elevator with a white woman who nervously clutched her purse as soon as he stepped in. With practice, we can make headway toward shifting our default orientation away from stigmatizing, internalizing explanations and toward accentuating the situation.
5 Structural Change

Speaking of redirecting our attention away from the idiosyncratic beliefs, feelings, and character traits that explain people's decisions and toward the broader situations that frame these decisions, this chapter now turns to concrete strategies for transforming our social institutions to combat bias and promote fairness. These strategies are roughly ordered from less to more transformational and impactful—and, accordingly, from less to more controversial.

5.1 Structural Reform #1. Decision-Making Criteria

All sorts of decisions can be influenced by implicit bias: Which candidate should I vote for? Is this defendant innocent or guilty? Which grade should I give this essay? How much should I tip my server? Should I swipe left or swipe right (Hutson et al. 2018)? Yet our decisions are less biased when we make them on the basis of clear criteria.

For example, one study asked participants to choose between two candidates for the job of chief of police (Uhlmann and Cohen 2005). One candidate had extensive "street" experience (street smarts) but little formal education, and one had extensive formal education (book smarts) but little street experience. In addition, one candidate was a man, and one was a
woman—and in different conditions they switched which was which. When the male candidate was street-smart and the female candidate was book-smart, participants reported that street smarts was the most important criterion for being chief of police, and recommended promoting the man. However, another group had to choose between a street-smart woman and a book-smart man, and this group reported that book smarts was most important, and still recommended hiring the man. What's going on here? Participants had a gut feeling about who was the right fit for the job (chief of police = man), and then they combed through the résumés to find something to "justify" that initial gut feeling. Nevertheless, this particular story has a happy ending. In a further condition, participants were asked to identify in advance which criterion (street experience or formal education) they valued most for the chief of police. When participants had settled on their decision-making criteria ahead of time, the bias in favor of hiring a man disappeared. Sometimes, preemptively settling on criteria—and sticking to them—is enough to eliminate the effects of bias.

Sometimes, but not always. Just because you've got some criteria doesn't necessarily mean they're any good! You might be baking in human biases or injustices without realizing it. This is a problem as organizations increasingly rely on computer algorithms to inform decisions, like whom to hire and whom to let out on parole (Johnson in preparation; O'Neil 2016). Many think that if a computer made the decision, it can't be biased, but everything depends on how we design these algorithms and what data we feed them. Algorithms, just like ordinary criteria, can make our decisions better or worse depending on how we use them.

We don't need high-tech examples to make the point. Consider law school admissions, which rely primarily on two criteria: LSAT performance and undergraduate GPA. Historically, LSAT scores have been given more weight in admissions (60 percent) compared to GPA (40 percent) (Crosby et al. 2003; Murphy et al. 2018). What's problematic about this weighting is that women have (on average) higher GPAs than men, but they also score (again, on average) worse on the LSAT. In other words, this weighting builds in an advantage for average male applicants over average female applicants. This is all the more troubling when we consider that women's LSAT scores might be somewhat negatively affected by factors like stereotype threat and impostor syndrome (see Greene, Chapter 7, "Stereotype Threat, Identity, and the Disruption of Habit"), and given the evidence that neither the LSAT nor undergrad GPA actually predicts future bar-exam performance (that is, neither criterion correlates well with the exam that ultimately determines who becomes a practicing attorney). Examples like this demonstrate the importance of revisiting and revising our criteria. (Maintain an experimental mindset!)
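To see how much the choice of weights matters, consider a minimal arithmetic sketch. The 60/40 split is the historical weighting mentioned above, but the applicants and their scores are invented purely for illustration; real admissions formulas normalize LSAT and GPA in their own ways.

```python
def composite_score(lsat, gpa, lsat_weight=0.6, gpa_weight=0.4):
    """Weighted average of two normalized scores (each on a 0-to-1 scale)."""
    return lsat_weight * lsat + gpa_weight * gpa

# Two invented applicants: A has the stronger GPA, B the stronger LSAT.
a = composite_score(lsat=0.70, gpa=0.90)  # 0.6*0.70 + 0.4*0.90 = 0.78
b = composite_score(lsat=0.85, gpa=0.70)  # 0.6*0.85 + 0.4*0.70 = 0.79

print(f"60/40 weighting: A = {a:.2f}, B = {b:.2f}")  # B edges out A

# Flipping the weights reverses the ranking:
a2 = composite_score(0.70, 0.90, lsat_weight=0.4, gpa_weight=0.6)  # 0.82
b2 = composite_score(0.85, 0.70, lsat_weight=0.4, gpa_weight=0.6)  # 0.76
print(f"40/60 weighting: A = {a2:.2f}, B = {b2:.2f}")  # A now comes out ahead
```

Which weighting is right is exactly the kind of substantive question that should be settled by checking the criteria against outcomes (for example, eventual bar passage), not by inertia.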
One of the great benefits of criteria is how much easier they make it to collect and analyze data. When we make all our decisions based on gut feelings, it's very hard to tell if or where our decisions are going wrong. Once we start relying on clear criteria (LSATs, GPA, etc.), we can see whether these criteria are actually helping us make the best decisions, or are unfairly stacking the deck for some groups over others. Then we can continue to tweak the criteria to make sure they're treating everyone fairly and delivering the most accurate outcomes. One study of a large company revealed that women, people of color, and immigrant employees were being awarded smaller raises than white American men despite earning equivalent performance scores (i.e., despite meeting the same on-the-job criteria) (Castilla 2008). In this case, the decision-making criteria seemed OK, but managers weren't acting on them properly. Only by collecting data was the company able to reveal this problem—after which point it reformed its practices and has now all but eliminated disparities in raises (Castilla 2015).

Part of what enabled this company to reform its practices was that working with clear decision-making criteria allows greater transparency for everybody involved about the reasons and procedures behind our decisions. Similarly, when teachers use a clear rubric for grading essays, it's easier to be more open and transparent with students about the reasons for their grades. Clear rubrics can also point students (or applicants for jobs, promotions, grants, etc.) toward the precise areas where there's most room for improvement. Criteria also promote accountability. It is much easier to hold ourselves and others accountable for our decisions when we can justify them by reference to reasonable, mutually agreed-upon standards.

Another virtue of criteria-based decision-making and data-collecting is that this strategy is relatively uncontroversial. People who are skeptical of ongoing efforts to combat bias and discrimination, or who are concerned about overcorrecting and think the world is becoming unfair to white men, might nevertheless be on board for working together to settle on decision-making criteria. The goal here is to focus our attention only on the factors we deem most relevant for success, and bracket everything we think is irrelevant.

That said, some criteria are more controversial than others. Which standards should universities use for admitting students, or for hiring professors? Should they only look at students' GPA and standardized test scores, or should they also consider, say, an applicant's demonstrated ability to overcome difficult circumstances, or to enrich the range of social perspectives on campus? Should hiring professors just be about easily measurable things like how many papers they publish or how often their papers get cited by others? Or should it count as a "plus" if, say, an instructor has a record of success in mentoring students who are the first generation in their family to go to college? When we are hiring a police chief, is book smarts or street smarts more important, or are they equally important? There are countless debates to have about these questions. A further benefit of having criteria is that they allow us to focus our discussions on which criteria we should use for making the best choice, and then to test out the criteria to see if they work.
5.2 Structural Reform #2. Anonymous Review

Another relatively uncontroversial tactic—which I swear by—is evaluating materials anonymously (see also Saul 2013). Literally every time I grade papers, I am surprised in two ways: first, by some of the outspoken students who are good at seeming smart in class but who evidently put less effort into their essays; and second, by the quieter students who blow me away with clear and insightful work. Simply put, the fact that I am surprised means that I'm biased. It means I have expectations. And those expectations can affect my grades without my noticing (Alesina et al. 2018; Carlana 2019; Forgas 2011; Harvey et al. 2016; van den Bergh et al. 2010). It's also, frankly, a relief to not have to think about these issues as I'm grading. This is speculative, but I think tamping down the salience of the author's identity sometimes frees up my mind to just dig in and focus on the paper itself.

Academic journals and professional conferences are increasingly moving toward anonymous review, and some early results are promising. Anonymous review seems especially well-suited to reducing prestige bias, as when people assume, for example, that a paper must be high quality because it was written by an Ivy League professor. Another potential benefit of anonymous review is its power to broadcast a commitment to fairness to all parties involved. Some studies find that white women and people of color become more likely to submit their work to top-tier journals after the journals have transitioned to anonymous review, partly because they come to have more trust that their work will be evaluated fairly and accurately, without bias (Stewart and Valian 2018, 391–397).

Obviously, anonymous review is not always possible (see also Dominguez, Chapter 8, "Moral Responsibility for Implicit Biases"). At some point, you may have to meet the person you're evaluating face-to-face, so anonymity goes out the window. Even so, you may be able to incorporate anonymous review partially, for example, by reviewing some application materials anonymously (using clear criteria!) before the interview stage. Where it's workable, anonymous review should again be a relatively uncontroversial tactic for those who are otherwise skeptical of efforts to address bias. Here we are not talking about giving any "extra consideration" to members of disadvantaged groups; we are just talking about how to maximize the chances that each individual essay or application is evaluated on its merits alone, rather than on less relevant factors like the prestige of the author's school. People who think they are already objective decision-makers might find anonymous review to be a nuisance, but they are less likely to protest that it's unjust.

Anonymous review may actually be more controversial among those already dedicated to resisting bias and injustice. This is because there are many facts that some people are better positioned to know than others (McHugh and Davidson, Chapter 9, "Epistemic Responsibility and Implicit
Bias"), which means that sometimes we should take people's social identities into account when evaluating their claims (Alcoff 2006). For example, if I want to know whether English is an easy language to learn, it would make more sense for me to ask a non-native English speaker than a native English speaker. People who had to learn English as a second language will know more about this topic than others. It would also make more sense for me to ask a linguist who has dedicated their career to studying second-language learning than it would to ask, say, an astrophysicist, brain surgeon, or four-star military general who, although maybe very smart, has not specialized in this area.

Rebecca Kukla applies this idea to the journal review process, arguing that "knowing who wrote a piece is often important to assessing the value and meaning of what it says" (Kukla 2018). Moreover, Kukla suspects that anonymous review is an essentially "conservative" policy that privileges people who write in a "mainstream" and "conventional voice," and may disadvantage people with less conventional styles or more radical ideas. Kukla further suggests that reviewers will often "be able to tell or at least make a strong guess about [the author's] identity. So the idea that anonymous review levels differences and removes biases is a myth." Just as appeals to racial colorblindness sometimes entrench whites' advantages over members of other races (Alexander 2012), appeals to total ignorance of identity might sometimes further marginalize those outside mainstream identities or styles.

While anonymous review is certainly not a cure-all, a few points might be made in response to Kukla. First, we may misjudge how easy it is to guess an author's identity (Goues et al. 2018), perhaps because we are more likely to remember our correct guesses than our incorrect guesses, and also to forget all the times we couldn't hazard a guess at all. There are many contexts in which we are prone to overestimate our accuracy and abilities (Pronin et al. 2002), and this might be one of them. Consider a similar but much higher-stakes context: one of the leading causes of wrongful criminal conviction is eyewitness misidentification (The National Registry of Exonerations 2019). Many witnesses who think they can identify the offender turn out to be wrong when the DNA evidence comes in.

Second, perhaps there are ways to incorporate Kukla's valuable points about identity and knowledge without completely sacrificing anonymity. For example, an author might mention an aspect of their social identity when it's relevant to their argument (e.g., "As an able-bodied cis white man, I think I am more likely to be believed by police officers …") without completely blowing their cover and announcing their name. Although partial self-identification like this would open the door to some biases, it might nevertheless inhibit others (like prestige bias). Lastly, Kukla does not actually defend fully de-anonymized review; she argues that attention to identity should play a role only at certain specific moments in the review process. Defenders of anonymous review should be open to the possibility of mixing anonymized and de-anonymized stages in our decision-making procedures—with the caveat that
precisely when and how to practice anonymous review is an open-ended, empirical question that we cannot conclusively settle on the basis of anecdotal experience alone.

Note: Social Categories and Hierarchy: Ingroup vs. Outgroup, High Status vs. Low

Decision-making criteria and anonymous review are structural changes for blocking the influence of bias. For other promising examples of structural changes along these lines, see Stewart and Valian (2018, chaps. 5–10) and Glaser (2014, chap. 8). They are vital first steps, and they can be defended on narrow merit-based and colorblind principles that resonate with people from a variety of social and political backgrounds. Putting them into place requires some upfront costs of time and resources, but once they're in place, sticking to them is relatively easy. That said, their overall impact on bringing about a more just and less biased society is unknown. Note in particular that both strategies aim to work around our biases, leaving our preexisting prejudices and stereotypes in place but limiting their influence on important decisions. More substantial structural changes might aim to reduce or eliminate our biases altogether, or stop them from forming in the first place. With that in mind, we turn next to more impactful (and, correspondingly, more difficult) reforms. These will inevitably be more controversial, in some cases because they are explicitly color- or identity-conscious (which makes them more likely to trigger complaints of "reverse" discrimination against historically privileged groups), or because they raise worries about objectionable forms of top-down "social engineering," where elitist administrators boss us around and try to remold our hearts and minds.

Figuring out how to uproot our biases requires understanding where they come from. Research on child development suggests that two of the most powerful factors behind the formation of implicit biases are ingroup–outgroup distinctions and status preferences (Dunham et al. 2008). From a very young age and continuing into adulthood (Axt et al. 2014), people tend to implicitly prefer ingroup members over outgroup members, and high-status group members over low-status group members (status is determined by factors like the group's average wealth, power, and visibility in celebrated jobs and positions). Since white people occupy a higher-status racial group, most whites implicitly prefer members of their ingroup over other races. Things are more complex for members of other racial groups, such as African Americans: for some, ingroup favoritism dominates social status (so they implicitly prefer blacks over whites); for others, status dominates ingroup favoritism (so they implicitly prefer whites over blacks); and many have no implicit preference either way (almost as if their ingroup favoritism and status biases cancel each other out). It stands to reason that reforms for combating implicit bias should be oriented toward disrupting these two major causes: by shifting widespread perceptions of the boundaries between "us" and "them," and by supporting egalitarian reforms that minimize status differences.
5.3 Structural Reform #3. Environmental Signals, Counterstereotypes, and Representation

One powerful set of tools for breaking down us-vs.-them dichotomies and status hierarchies resides in the cues and signals in our environments (Murphy et al. 2018; Vuletich and Payne 2019). What messages do our physical, social, and virtual spaces send about who does and doesn't belong, and who will and won't excel? Even subtle cues about belonging and identity can have powerful effects on our biases, behaviors, and sense of identity.

Recall a set of studies discussed by Nathifa Greene (Chapter 7, "Stereotype Threat, Identity, and the Disruption of Habit"). Students in high school or college were first brought into computer-science classrooms that were either filled with objects associated with science fiction and video games (Star Trek figurines and World of Warcraft posters) or decorated in a neutral fashion. Then the students answered a series of questions. Researchers found that being in the "geeky" classrooms dramatically reduced women's interest and expected success in computer science, but had no effect either way on men. In fact, girls and women were up to three times more likely to express interest in computer science in the neutral room (Master et al. 2016). Studies like this highlight the power of situations and environments to "influence students' sense of ambient belonging … or feeling of fit in an environment" (Cheryan et al. 2011, 1826). What messages are our environments sending about who "fits" in our neighborhoods, campuses, classrooms, offices, and dorm hallways?

Consider also a study on how attending an all-women's college affected undergraduate women's implicit biases regarding gender and leadership qualities (Dasgupta and Asgari 2004). Beforehand, participants were quicker to implicitly associate names like "Emily" with traits stereotypical of women leaders, like "nurturing," but to associate names like "Greg" with stereotypically masculine traits, like "assertive." After one year, these implicit biases vanished. The same study also found that attending a coed school had the opposite effect on undergraduate women. After a year at the coed school, most women's implicit gender-leadership biases grew even more pronounced. You might think the difference-maker was a more supportive atmosphere at the all-women's school. That's not what researchers found. Evidently, the difference boiled down to the total number of classes that students had with women math and science professors, because there was a larger pool of women math and science professors at the all-women's school. A closer look at the data showed that this was true regardless of which institution students attended: women at the coed school who managed to take a few math and science classes with women instructors also showed reduced implicit biases. Subsequent studies have consistently demonstrated that having a few role models "like you" can be very effective for increasing motivation,
success, and belonging, and for shaping implicit and explicit biases and goals (for a review, see Dasgupta 2013).

These studies reveal the debiasing power of counterstereotypes, exemplars in our social environments who buck our biased expectations. More broadly, consider audiences' profound reactions to blockbusters like Wonder Woman, Black Panther, and Crazy Rich Asians in 2017 and 2018, which prominently portrayed white women and people of color in a diverse range of counterstereotypical roles. Experimental and correlational studies demonstrate that exposure to outgroup members through engaging narratives can reduce bias, such as the effect of the sitcom Will & Grace on heterosexism (Schiappa et al. 2006; for a review, see Murrar and Brauer 2019). In fact, an examination of long-term, population-level attitudes found that, from 2007 to 2016, the largest reduction in both implicit and explicit biases was in biases against sexual minorities (Charlesworth and Banaji 2019). It is difficult to suppose that changes in media representation (including journalistic coverage of the marriage equality debate) did not contribute to this change. We should therefore demand more cultural, racial, and other kinds of variety in our media diets. We can also take it upon ourselves to create more original and counterstereotypical content. Become the writers, artists, actors, journalists, and video game designers who enrich our media environment and expand our sense of what's possible.

Unfortunately, the effects of seeing counterstereotypes sometimes differ for members of historically underrepresented versus historically advantaged groups. Sometimes whites perceive the increased prominence of people of color as a sign that whites are unfairly losing status, or they conclude that, if some historically disadvantaged folks can now become billionaires, CEOs, or President of the United States, then people of color in general no longer face serious obstacles (Alexander 2012, 244ff; Wilkins et al. 2017). Given these predictable kinds of backlash, we cannot expect changes in environmental cues or media representation to do all the work (Madva 2016). The empirical case for counterstereotypical cues may be better grounded in their power to provide possibility-expanding role models for members of disadvantaged groups than in their capacity to debias the advantaged.

5.4 Structural Reform #4. Intergroup Cooperation

Several of the individual debiasing tools listed above (such as 2 through 4, on approach and common-ground mindsets, and perspective-taking) are closely tied to a much broader and time-tested strategy: positive intergroup contact and cooperation. The basic recipe for reducing both explicit and implicit forms of bias is to get people from different groups to work together toward a common goal. Perhaps the most famous case was the desegregation of the US military in World War II. Soldiers who served in racially diverse units showed much less racial bias at the end of the war than
soldiers in segregated units. For discussion of these and other early findings, see Pettigrew (1998). More humdrum examples are diverse sports teams and first-year college roommates (Shook and Fazio 2008). Evidence also suggests that racially diverse juries are more likely than racially homogeneous juries to consider an array of perspectives and to accurately recall case facts and testimony (Sommers 2006).

Bearing in mind the potential for backlash discussed in the previous section, a key point to emphasize is that simply thrusting people from different groups into shared cubicles, classrooms, neighborhoods, or nation-states is not enough. We need intergroup cooperation, not just intergroup conversation, and definitely not just intergroup physical proximity (Enos 2017; Putnam 2007). In some contexts, intergroup contact and proximity can heighten rather than dampen bias, for example, when members of different groups are competing against each other for scarce resources. Productive intergroup contact must be 1) frequent, 2) on terms of relatively equal social status, 3) organized around cooperating toward a common goal, and 4) sponsored by authority figures who enforce conditions 1–3 (Allport 1979; Anderson 2010; Pettigrew 2018). Think of the role that good team coaches (authority figures) can play in holding their players accountable for having frequent cooperative contact and ensuring that all players are recognized and valued for their contributions (thereby promoting relatively equal status on the team). The challenge for all of us, then, is to constantly consider context: how can we structure our own social institutions to foster fruitful intergroup cooperation?

In this vein, I often daydream about two sorts of initiatives. The first is a campus-wide, team-based competition (e.g., a massive event modeled on pub team trivia, or a kickball tournament, or Assassin, or…?) in which all the different school clubs participate, but teams are composed of a mix of members from different clubs. So, e.g., some Campus Democrats and Campus Republicans would be on the same team, as would various athletes, members of the Model UN, Chess team, Black Student Union, Muslim Student Association, and Queer Student Alliance. The second initiative I envision is a large-scale American "domestic exchange program," in which high school students across the country spend one semester at another school in a very different situation, traveling between the center of the country and the coast, between north and south, between urban and rural, and so on. What would the world look like if everyone residing in the United States made a more concerted effort to learn face-to-face how the other half—or at least a few of the many, many other halves—live?

Some argue that we should make intergroup cooperation a central organizing principle for society, and strive to create racially integrated schools, businesses, residential neighborhoods, and voting districts (Adams 2006; Anderson 2010, chap. 6). Doing so might promote intergroup cooperation as well as ensure that members of otherwise marginalized groups have access to the same high-quality education, healthcare, and job opportunities as
Doing so might promote intergroup cooperation as well as ensure that members of otherwise marginalized groups have access to the same high-quality education, healthcare, and job opportunities as members of more advantaged groups. In this way, proactive integrationist strategies have the potential to tackle the two major causes of implicit bias mentioned earlier, by expanding our sense of who's "inside our group" and limiting the influence of inherited and unjust inequalities in social status.

However, it's one thing when an individual freely chooses to seek out more opportunities for productive intergroup cooperation; it's another thing when schools, businesses, and governments enforce top-down policies to integrate their citizens or subordinates. Shouldn't individuals have freedom of association, e.g., the ability to freely choose where to live and whom to befriend? One prominent strategy for promoting neighborhood integration is for the government to distribute vouchers which individuals can then use toward a down payment on a house, but only for houses in socioeconomically diverse communities. Some object to this policy on the grounds that it unfairly restricts people's choices (they say it's paternalistic), and because it is a conditional rather than unconditional distribution of goods (Goetz 2018; Shelby 2016; compare Madva 2019a). If we take seriously that folks who live in concentrated poverty, such as urban ghettos, are denied the same access to the educational, healthcare, and employment opportunities available to residents of wealthy suburbs, then just devoting our efforts toward distributing one-way tickets out of the ghetto might seem problematic. Does it, in effect, amount to telling these folks that their neighborhoods and schools are beyond saving, and that getting access to decent resources and opportunities requires uprooting their households, leaving behind their social networks, and moving away from services that cater to their distinctive needs and preferences (such as religious centers or hair salons)? What about the ghetto residents who would rather stay put, and what about the hostility or discrimination that those who move might face from new neighbors who don't want them there?

One of the perennial obstacles to top-down initiatives to promote intergroup contact is the resistance of the advantaged, such as through "white flight." Historically, a prominent strategy for racially integrating schools in residentially segregated areas was to bus students across town. When these busing policies were put in place, many wealthy white parents just sent their kids to private school rather than let them share classrooms with less privileged children of color. (This trend was as common in Boston, Massachusetts, as in Birmingham, Alabama.)

Productive intergroup cooperation is a powerful intervention, but creating conditions for it that are both effective in durable ways and fair to everyone involved has proven difficult. Some conclude that we should think less about integrationist strategies, which aim to move people to resources, and focus more on redistributive strategies, which move resources—and power—to the people (Young 2000).

5.5 Structural Reforms #5. Powers to the Peoples

Thus a final set of structural reforms tries to "cut to the chase" and focus on directly alleviating status hierarchies and inequalities rather than on breaking down psychological "us" vs. "them" dichotomies. These proposals are so varied that it is a disservice to lump them all together, but it is nevertheless worthwhile to draw readers' attention to a wider range of questions about how best to overhaul society.
First, consider ways to make the distribution of goods fairer. Some overhauls that might help to mitigate status differentials in society are universalistic and colorblind: guarantee everyone free healthcare, a basic income (Matthews 2017), and better financial support for students all the way through college and grad school (Miller and Flores 2016). Such proposals are seen as radically leftwing in the United States of 2019, but some are common and uncontroversial in other parts of the world. (For discussion of these and other proposals that would apply to everyone but likely have the most positive effects on the most oppressed, see Movement for Black Lives (n.d.).) Others argue that more targeted, color-conscious redistributions to members of disadvantaged groups are necessary, such as reparations for past injustices. For an accessible introduction and back-and-forth about the virtues and vices of taking racial reparations seriously—not just for slavery but for the many injustices that have continued since—see Coates (2014a), Williamson (2014), and Coates (2014b).

A different set of tools for combating status disparities has less to do with making sure that goods like healthcare and education are equally available to all, and more to do with the well-functioning of democracy—in overtly political contexts but also more broadly in workplaces and classrooms. Note, for example, that most workplaces have a strongly hierarchical structure, with a top boss who has nearly unlimited power to hire and fire whom they please, to force employees to work long hours in unpleasant conditions, and so on. In response, some call for movement toward greater workplace democracy (Anderson 2017; Frega et al. 2019). The idea is not for workers to vote democratically on every major decision. Instead, they might, for example, elect someone to represent their interests on the board that runs the company, or they might all jointly participate in deciding on the right decision-making criteria for getting promoted (structural reform #1). Defenders of these reforms argue that democracy is not just about "one person, one vote," but about a culture of equals actively participating and sharing in the governing and decision-making processes of social institutions, including businesses, schools, and even families. Such reforms do not eliminate the distinction between leader and led, but they might reduce the perception that those in leadership positions are inherently "better than" while others are "less than." That is, under these conditions, differences in leadership status would not entail differences in esteem or respect. Moreover, as mentioned in the previous section, getting people to cooperate on terms of social equality is key for reducing prejudice.

Perhaps a similar lesson should be applied in disadvantaged communities. Malcolm X (1963) famously contrasted "racial segregation" with "racial separation" in terms of community control. He argued that America's segregationist hierarchy was wrong not because it meant that whites and blacks lived separately (he disagreed that "separate is inherently unequal"), but because whites had an inordinate amount of control over black communities.
His defense of "separation" called for more local control within predominantly black communities. The recent Movement for Black Lives has similarly called for more community control. Perhaps we should commit to redistributing resources to oppressed communities, but make sure that members of those communities have the democratic freedom to collectively decide for themselves how they'd like to use those resources (Goetz 2018; Shelby 2016). This would again be about ensuring that residents of disadvantaged communities have the same status and respect as everyone else.

While we're at it, we might as well highlight broader reforms to promote well-functioning democracy, such as, in the USA, the elimination of the Electoral College (Prokop 2016), reforming how political campaigns are financed (Kliff 2018b), complex revisions to elections like ranked-choice voting (Nilsen 2018), or redrawing political districts to prevent one side from holding onto power even when they lose the popular vote (Druke 2017).

Of course, for many of these reforms, the more impactful they'll be, the harder they'll be to set in motion and sustain. Returning more power to more people means taking power away from the elite few, and they don't want that to happen. In fact, even among members of disadvantaged groups, it's often difficult to appreciate just how unfair the current system is, let alone to mobilize against it (Jost 2015). When it comes to more dramatic efforts to reduce power inequalities between historically advantaged and disadvantaged groups, color-conscious proposals like reparations or integration often face more backlash than universalistic and colorblind policies. But it is easy to overstate the disparities in divisiveness between colorblind and color-conscious proposals. Evidence suggests that many universalistic reforms, such as expanding healthcare access for all citizens, are often perceived by whites as delivering unfair benefits to people of color (Maxwell and Shields 2014). Even basic views about the reality of climate change are increasingly predicted by our attitudes about race (Benegal 2018). Our politics have become so polarized that a whole range of superficially non-racial political and scientific issues are thoroughly intertwined with beliefs and biases about race (and about gender and other social categories!). As a result, we should be skeptical about the possibility of identifying major structural transformations that somehow avoid divisive backlash altogether.

This returns me to a point I emphasized at the outset of this chapter, related to Boxed Out and Leaving Mothers Behind. Some argue that, when it comes to bringing about a more just society, we should focus more on changing structures and less on changing hearts and minds (Mallon 2018; Vuletich and Payne 2019). But such claims repeatedly fail to appreciate the extent to which our hearts and minds prop up these unjust structures (Mandalaywala et al. 2018; Plaut 2010; Ridgeway 2014). Eliminating status differences between historically privileged and oppressed groups may reduce individuals' biases, but, conversely, eliminating individuals' biases toward other social groups may increase support for measures to reduce status differences.
More generally, if we want to change the status quo, we have to convince enough people that the status quo is unfair. So we have to examine the psychological precursors that lead people to get angry at injustice and animated to do something about it (Jost et al. 2017; van Zomeren 2013). Structures change when attitudes change—and attitudes change when structures change when attitudes change when structures change! Our beliefs, habits, biases, and social structures are thoroughly interconnected and mutually reinforcing. Neither comes first; neither comes second; it must be both/and every step of the way.
6 Conclusion

Leaders have a powerful role to play in shaping how individuals interpret and respond to their own implicit biases. When authority figures send the message to their subordinates that their uncomfortable gut feelings are valid representations of social reality (e.g., telling straight people that the discomfort they feel toward gay people is justified), then vague implicit biases can transform into wholeheartedly endorsed explicit prejudices, and lead people to act in more discriminatory ways (Cooley et al. 2015; Madva 2019c). Leaders have the power to turn implicit biases into explicit prejudices and overt acts of discrimination. Such findings are disheartening when we think about people in leadership positions who stoke prejudice and division. But we can also respond to such findings as calls to action.

The upshot is that we need to become leaders. Become the formal or informal leaders who demand change. Run for club president or political office, or apply to move up the ranks at your job. When you get there, use your leadership status to broadcast your commitment to fairness and against bias. What signals are you sending by the policies you endorse, the opinions you respect, and the jokes you laugh at? How committed are you to fair, transparent, and shared decision-making and data-gathering, and to ensuring that everyone is treated with equal esteem and respect?

We don't just need leadership in politics, business, or education, i.e., the contexts where specific individuals are granted official supervisory status over others. Making headway will require a whole bunch of leaders tackling these complex problems from a whole bunch of angles, from many different positions in society. (Patrisse Cullors, co-founder of #BlackLivesMatter, calls for a movement that neither anoints a single figurehead leader nor becomes wholly leaderless; the movement must instead be leader-full.) We need scientists to study causes and potential interventions, and we need activists and politicians to make changes. We also need artists to envision emancipatory alternatives. We need filmmakers and videogame designers and vloggers and science-fiction writers and journalists and therapists and lawyers and spiritual leaders. We need all the ingenuity, creativity, experimentation, and exploration we can get. The problem is so multifaceted, with so many different aspects to work on, that many people with many different strengths are necessary.
Everyone can contribute, and contribute we must if meaningful, lasting change is going to come. We must also not forget that we're the ones who have to make these changes happen. They will not happen on their own. Recall Martin Luther King Jr.'s warning about the dangers of believing that progress comes, slowly but inevitably, with the passage of time:

Such an attitude stems from a tragic misconception of time, from the strangely irrational notion that there is something in the very flow of time that will inevitably cure all ills. Actually, time itself is neutral; it can be used either destructively or constructively. More and more I feel that the people of ill will have used time much more effectively than have the people of good will. We will have to repent in this generation not merely for the hateful words and actions of the bad people but for the appalling silence of the good people. Human progress never rolls in on wheels of inevitability; it comes through the tireless efforts of men willing to be co-workers with God, and without this hard work, time itself becomes an ally of the forces of social stagnation. We must use time creatively, in the knowledge that the time is always ripe to do right.

(King Jr. 1963)
SUGGESTIONS FOR FUTURE READING

Visions of structural reform

• Anderson, E. (2010) The Imperative of Integration. Princeton, NJ: Princeton University Press. A comprehensive argument for the value of integrationism, embedded in a broader ethical-political vision of how to improve society.
• Shelby, T. (2016) Dark Ghettos: Injustice, Dissent, and Reform. Cambridge, MA: The Belknap Press of Harvard University Press. A philosophical defense of community-based strategies for undoing injustice, which emphasizes the rationality and rights of the oppressed, and includes critical responses to integrationists like Anderson.
• Goetz, E.G. (2018) The One-Way Street of Integration: Fair Housing and the Pursuit of Racial Justice in American Cities. Ithaca, NY: Cornell University Press. An accessible introduction to the debate between integrationists and community-based approaches, focused specifically on housing.
• Alexander, M. (2012) The New Jim Crow: Mass Incarceration in the Age of Colorblindness. New York: The New Press. This book is a masterpiece in terms of illuminating an entire system made up of parts that lock together to create injustice, as well as explaining how the best account of injustice will integrate attention to structural injustice with attention to individual psychology and behavior.
• Glaser, J. (2014) Suspect Race: Causes and Consequences of Racial Profiling. New York: Oxford University Press. Integrates insights on implicit bias with the criminal justice system, focused in particular on the question of profiling.
• Stewart, A.J. and Valian, V. (2018) An Inclusive Academy: Achieving Diversity and Excellence. Cambridge, MA: The MIT Press. A rigorous analysis of the ongoing causes of inequity in academic contexts, rich with compelling proposals for reform. A must-read for academics and other professionals looking to recruit and retain excellent colleagues from diverse backgrounds.
• Gullo, G.L., Capatosto, K., and Staats, C. (2018) Implicit Bias in Schools (first edition). New York: Routledge. Although ostensibly directed to K–12 school administrators (e.g., principals), this book offers an accessible introduction to implicit bias as well as several chapters devoted to individual and institutional change—including firsthand reports of successes and setbacks in real-world attempts at reform—that will be useful for almost any individual within almost any social structure.
• Plaut, V.C. (2010) Diversity science: Why and how difference makes a difference. Psychological Inquiry, 21(2): 77–99. https://doi.org/10.1080/10478401003676501. An excellent analysis of the importance of integrating individual psychology with social and cultural analysis, which also explores some of the perils of colorblindness and virtues of multiculturalism. Following this "target article" are eleven insightful commentaries, as well as Plaut's replies.
DISCUSSION QUESTIONS

Discussion questions about individual debiasing tools:

1. Brainstorm if–then plans for all the debiasing tools and structural reforms discussed in this chapter. For example, tool #2 called for adopting approach mindsets during intergroup interactions. Here is a potential if–then plan to promote this mindset: "When I meet a new person, then I'll tell myself it's an opportunity to learn!" Does that sound like a good plan? Are there any potential downsides to it?
2. Can you think of strategies for "gamifying" these tools, such as an approach-oriented video game or an icebreaker game to find similarity with your classmates, instructors, and people with different social backgrounds from you?
3. Tool #5 is about persuading people with different values from you. Some people find this tactic distasteful. Here is a similar but perhaps less problematic exercise: try to think of the goals and policies that you might already share with people from a different political orientation. You might even sit down with them and try to make a list of all the goals and policies you share. You could start by writing down a list of all the people you know (family, friends, acquaintances, colleagues) who have different political views from you—and think about goals and values you might nevertheless share in common.
4. Is the emphasis in tool #6 on situational explanations problematic? Are we just making excuses for people rather than holding them responsible for their own actions? Is there room to both acknowledge the powerful role that our situations play in our lives and still respect each other's individual freedom and agency?
5. In Chapter 8, "Moral Responsibility for Implicit Biases: Examining Our Options," especially Section 4, Noel Dominguez raised numerous concerns about the limitations and shortcomings of various individual debiasing tools, similar to those discussed in Chapter 12. After reading both chapters, how powerful do you find Dominguez's concerns? Should we be optimistic or pessimistic about individuals' ability to take responsibility and overcome their biases, or should we suspend judgment until more evidence comes in?
Discussion questions about structural reforms:

1. Structural reform #1 calls for clear decision-making criteria, but the really hard work is figuring out what the criteria should be. What criteria should we use to decide who gets into college, who to hire and promote, and who to vote for? For example, do you think it's ever OK to take someone's social identity (race, gender, socioeconomic background, religion, etc.) into account in making these decisions?
2. An important limitation to structural reform #2, on anonymous review, is that sometimes taking people's social identities into account can be important for understanding the meaning and reasons behind their ideas. When do you think anonymous review might be valuable and when might it do more harm than good?
3. Structural reform #3 regards environmental cues and signals. What messages do the environments in your school, job, favorite websites, etc. send about who belongs and succeeds in that space? Try to think of as many signals—both welcoming, inclusive ones and problematic, exclusionary ones—as you can.
4. Chapter 6, "Epistemic Injustice and Implicit Bias," by Jules Holroyd and Kathy Puddifoot, addressed epistemic injustices, that is, kinds of unfairness related to knowledge. One prominent example is testimonial injustice, when one person gives another less credibility because of their social group (e.g., when the police don't believe you because you're black). Could any of the individual or structural reforms discussed in Chapter 12 help to combat testimonial and other forms of epistemic injustice? Why or why not?
5. This chapter suggested that, when it comes to changing society, the debate about whether to prioritize individual or structural reforms doesn't make sense, because both types of reform are necessary and mutually reinforcing. (Attitudes change when structures change when attitudes change when…) But can you think of potential reasons to prioritize one set of changes over another?
REFERENCES

Adams, M. (2006) Radical integration. California Law Review, 94: 261.
Alcoff, L.M. (2006) Visible Identities: Race, Gender, and the Self. New York: Oxford University Press.
Alesina, A., Carlana, M., Ferrara, E.L., and Pinotti, P. (2018) Revealing Stereotypes: Evidence from Immigrants in Schools (Working Paper No. 25333). Cambridge, MA: National Bureau of Economic Research. https://doi.org/10.3386/w25333
Alexander, M. (2012) The New Jim Crow: Mass Incarceration in the Age of Colorblindness. New York: The New Press.
Allport, G.W. (1979) The Nature of Prejudice. Boston, MA: Addison-Wesley Publishing Company.
Anderson, E. (2010) The Imperative of Integration (reprint edition). Princeton, NJ: Princeton University Press.
Anderson, E. (2017) Private Government: How Employers Rule Our Lives (and Why We Don't Talk about It). Princeton, NJ: Princeton University Press.
Andresen, M.E. and Nix, E. (2019) What Causes the Child Penalty? Evidence from Same Sex Couples and Policy Reforms [web document]. Statistics Norway, Discussion Papers No. 902. https://www.ssb.no/en/forskning/discussion-papers/what-causes-the-child-penalty [accessed 2 April 2019].
Antecol, H., Bedard, K., and Stearns, J. (2018) Equal but inequitable: Who benefits from gender-neutral tenure clock stopping policies? American Economic Review, 108: 2420–2441. https://doi.org/10.1257/aer.20160613
Aravena, F. (2015) The Impact of Fatherhood on Men's Earnings in Canada. Ottawa: University of Ottawa. https://doi.org/10.20381/ruor-4017
Ard, K. (2015) Trends in exposure to industrial air toxins for different racial and socioeconomic groups: A spatial and temporal examination of environmental inequality in the U.S. from 1995 to 2004. Social Science Research, 53: 375–390. https://doi.org/10.1016/j.ssresearch.2015.06.019
Australians in American football (2018). Wikipedia.
Avery, B. (2019) Ban the box: U.S. cities, counties, and states adopt fair hiring policies. National Employment Law Project. https://www.nelp.org/publication/ban-the-box-fair-chance-hiring-state-and-local-guide/ [accessed 4 July 2019].
Axt, J.R., Ebersole, C.R., and Nosek, B.A. (2014) The rules of implicit evaluation by race, religion, and age. Psychological Science, 25: 1804–1815. https://doi.org/10.1177/0956797614543801
Azevedo, F., Jost, J.T., Rothmund, T., and Sterling, J. (2019) Neoliberal ideology and the justification of inequality in capitalist societies: Why social and economic dimensions of ideology are intertwined. Journal of Social Issues, 75: 49–88. https://doi.org/10.1111/josi.12310
Bayer, U.C. and Gollwitzer, P.M. (2007) Boosting scholastic test scores by willpower: The role of implementation intentions. Self and Identity, 6: 1–19. https://doi.org/10.1080/15298860600662056
Bear, J.B. and Glick, P. (2017) Breadwinner bonus and caregiver penalty in workplace rewards for men and women. Social Psychological and Personality Science, 8: 780–788. https://doi.org/10.1177/1948550616683016
Benegal, S.D. (2018) The spillover of race and racial attitudes into public opinion about climate change. Environmental Politics, 27: 733–756. https://doi.org/10.1080/09644016.2018.1457287
Bershidsky, L. (2018) No, Iceland hasn't solved the gender pay gap. 4 January. Bloomberg.
Bibas, S. (2015) The truth about mass incarceration. 16 September. National Review.
Bishara, M. (2018) Why Australian punters are dominating American football. CNN.
Brescoll, V.L. and Uhlmann, E.L. (2008) Can an angry woman get ahead? Status conferral, gender, and expression of emotion in the workplace. Psychological Science, 19: 268–275. https://doi.org/10.1111/j.1467-9280.2008.02079.x
Carlana, M. (2019) Implicit stereotypes: Evidence from teachers' gender bias. Quarterly Journal of Economics. https://doi.org/10.1093/qje/qjz008
Castilla, E.J. (2015) Accounting for the gap: A firm study manipulating organizational accountability and transparency in pay decisions. Organization Science, 26: 311–333. https://doi.org/10.1287/orsc.2014.0950
Castilla, E.J. (2008) Gender, race, and meritocracy in organizational careers. American Journal of Sociology, 113: 1479–1526. https://doi.org/10.1086/588738
Charlesworth, T.E.S. and Banaji, M.R. (2019) Patterns of implicit and explicit attitudes: I. Long-term change and stability from 2007 to 2016. Psychological Science. https://doi.org/10.1177/0956797618813087
Cheryan, S., Meltzoff, A.N., and Kim, S. (2011) Classrooms matter: The design of virtual classrooms influences gender disparities in computer science classes. Computers & Education, 57: 1825–1835. https://doi.org/10.1016/j.compedu.2011.02.004
Clark, A. (2018) 'Nothing to worry about. The water is fine': How Flint poisoned its people. The Guardian.
Coates, T.-N. (2014a) The case for reparations. The Atlantic.
Coates, T.-N. (2014b) The case for American history. The Atlantic.
Cooley, E., Brown-Iannuzzi, J.L., and Boudreau, C. (2019) Shifting stereotypes of welfare recipients can reverse racial biases in support for wealth redistribution. Social Psychological and Personality Science. https://doi.org/10.1177/1948550619829062
Cooley, E., Payne, B.K., Loersch, C., and Lei, R. (2015) Who owns implicit attitudes? Testing a metacognitive perspective. Personality and Social Psychology Bulletin, 41: 103–115. https://doi.org/10.1177/0146167214559712
Correll, S.J., Benard, S., and Paik, I. (2007) Getting a job: Is there a motherhood penalty? American Journal of Sociology, 112: 1297–1339. https://doi.org/10.1086/511799
Cortland, C.I., Craig, M.A., Shapiro, J.R., Richeson, J.A., Neel, R., and Goldstein, N.J. (2017) Solidarity through shared disadvantage: Highlighting shared experiences of discrimination improves relations between stigmatized groups. Journal of Personality and Social Psychology, 113: 547–567. https://doi.org/10.1037/pspi0000100
Crosby, F.J., Iyer, A., Clayton, S., and Downing, R.A. (2003) Affirmative action: Psychological data and the policy debates. American Psychologist, 58: 93–115.
Cudd, A.E. (2006) Analyzing Oppression (first edition). New York: Oxford University Press.
Dasgupta, N. (2013) Implicit attitudes and beliefs adapt to situations: A decade of research on the malleability of implicit prejudice, stereotypes, and the self-concept. Advances in Experimental Social Psychology, 47: 233–279.
Dasgupta, N. and Asgari, S. (2004) Seeing is believing: Exposure to counterstereotypic women leaders and effect on automatic gender stereotyping. Journal of Experimental Social Psychology, 40: 642–658.
Dobbin, F., Schrage, D., and Kalev, A. (2015) Rage against the Iron Cage: The varied effects of bureaucratic personnel reforms on diversity. American Sociological Review, 80: 1014–1044. https://doi.org/10.1177/0003122415596416
Doleac, J.L. and Hansen, B. (2016) Does "Ban the Box" Help or Hurt Low-Skilled Workers? Statistical Discrimination and Employment Outcomes When Criminal Histories are Hidden (Working Paper No. 22469). Cambridge, MA: National Bureau of Economic Research. https://doi.org/10.3386/w22469
Dover, T.L., Major, B., and Kaiser, C.R. (2014) Diversity initiatives, status, and system-justifying beliefs: When and how diversity efforts de-legitimize discrimination claims. Group Processes & Intergroup Relations, 17: 485–493. https://doi.org/10.1177/1368430213502560
Druke, G. (2017) The Gerrymandering Project. FiveThirtyEight.
Dunham, Y., Baron, A.S., and Banaji, M.R. (2008) The development of implicit intergroup cognition. Trends in Cognitive Sciences, 12: 248–253. https://doi.org/10.1016/j.tics.2008.04.006
Eberl, C., Wiers, R.W., Pawelczack, S., Rinck, M., Becker, E.S., and Lindenmeyer, J. (2013) Approach bias modification in alcohol dependence: Do clinical effects replicate and for whom does it work best? Developmental Cognitive Neuroscience, 4: 38–51.
Enos, R.D. (2017) The Space Between Us: Social Geography and Politics. New York: Cambridge University Press.
Feinberg, M. and Willer, R. (2015) From gulf to bridge: When do moral arguments facilitate political influence? Personality and Social Psychology Bulletin, 41: 1665–1681. https://doi.org/10.1177/0146167215607842
Forgas, J.P. (2011) She just doesn't look like a philosopher…? Affective influences on the halo effect in impression formation. European Journal of Social Psychology, 41: 812–817.
Frega, R., Herzog, L., and Neuhäuser, C. (2019) Workplace democracy—The recent debate. Philosophy Compass. https://doi.org/10.1111/phc3.12574
Gallup-International Labour Organization (2017) Towards a better future for women and work: Voices of women and men.
Gasdaglis, K. and Madva, A. (forthcoming) Intersectionality as a regulative ideal. Ergo.
Gehlbach, H., Brinkworth, M.E., King, A.M., Hsu, L.M., McIntyre, J., and Rogers, T. (2016) Creating birds of similar feathers: Leveraging similarity to improve teacher–student relationships and academic achievement. Journal of Educational Psychology, 108: 342–352. https://doi.org/10.1037/edu0000042
Glaser, J. (2014) Suspect Race: Causes and Consequences of Racial Profiling. New York: Oxford University Press.
Goetz, E.G. (2018) The One-Way Street of Integration: Fair Housing and the Pursuit of Racial Justice in American Cities. Ithaca, NY: Cornell University Press.
Gollwitzer, P.M. and Sheeran, P. (2006) Implementation intentions and goal achievement: A meta-analysis of effects and processes. Advances in Experimental Social Psychology, 38: 69–119.
Goues, C.L., Brun, Y., Apel, S., Berger, E., Khurshid, S., and Smaragdakis, Y. (2018) Effectiveness of anonymization in double-blind review. Communications of the ACM, 61: 30–33. https://doi.org/10.1145/3208157
Harell, A., Soroka, S., and Iyengar, S. (2016) Race, prejudice and attitudes toward redistribution: A comparative experimental approach. European Journal of Political Research, 55: 723–744. https://doi.org/10.1111/1475-6765.12158
Harvey, K.E., Suizzo, M.-A., and Jackson, K.M. (2016) Predicting the grades of low-income–ethnic-minority students from teacher–student discrepancies in reported motivation. The Journal of Experimental Education, 84: 510–528. https://doi.org/10.1080/00220973.2015.1054332
Hernandez, P. (2017) Ban-the-Box "statistical discrimination" studies draw the wrong conclusions. National Employment Law Project. https://www.nelp.org/blog/ban-the-box-statistical-discrimination-studies-draw-the-wrong-conclusions/ [accessed 21 October 2018].
Hutson, J.A., Taft, J.G., Barocas, S., and Levy, K. (2018) Debiasing desire: Addressing bias & discrimination on intimate platforms. Proceedings of the ACM on Human–Computer Interaction, 2: 1–18. https://doi.org/10.1145/3274342
Johnson, G. (in preparation) Algorithmic Bias: On the Implicit Biases of Social Technology.
Jost, J.T. (2015) Resistance to change: A social psychological perspective. Social Research, 82: 607–636.
Jost, J.T., Becker, J., Osborne, D., and Badaan, V. (2017) Missing in (collective) action: Ideology, system justification, and the motivational antecedents of two types of protest behavior. Current Directions in Psychological Science, 26: 99–108. https://doi.org/10.1177/0963721417690633
Kaiser, C.R., Major, B., Jurcevic, I., Dover, T.L., Brady, L.M., and Shapiro, J.R. (2013) Presumed fair: Ironic effects of organizational diversity structures. Journal of Personality and Social Psychology, 104: 504–519.
Killewald, A. (2013) A reconsideration of the fatherhood premium: Marriage, coresidence, biology, and fathers' wages. American Sociological Review, 78: 96–116. https://doi.org/10.1177/0003122412469204
King Jr., M.L. (1963) Letter from a Birmingham jail. https://kinginstitute.stanford.edu/king-papers/documents/letter-birmingham-jail [accessed 6 January 2019].
Kliff, S. (2018a) A stunning chart shows the true cause of the gender wage gap. Vox.
Kliff, S. (2018b) Seattle's radical plan to fight big money in politics. Vox.
Kliff, S. (2017) The truth about the gender wage gap. Vox.
Kukla, R. (2018) Diversity and philosophy journals: How to avoid conservative gatekeeping. Blog of the APA. https://blog.apaonline.org/2018/08/30/diversity-and-philosophy-journals-how-to-avoid-conservative-gatekeeping/ [accessed 2 January 2019].
Levontin, L., Halperin, E., and Dweck, C.S. (2013) Implicit theories block negative attributions about a longstanding adversary: The case of Israelis and Arabs. Journal of Experimental Social Psychology, 49: 670–675. https://doi.org/10.1016/j.jesp.2013.02.002
Madva, A. (2016) A plea for anti-anti-individualism: How oversimple psychology misleads social policy. Ergo, an Open Access Journal of Philosophy, 3: 701–728. https://doi.org/10.3998/ergo.12405314.0003.027
Madva, A. (2017) Biased against debiasing: On the role of (institutionally sponsored) self-transformation in the struggle against prejudice. Ergo, an Open Access Journal of Philosophy, 4: 145–179. http://dx.doi.org/10.3998/ergo.12405314.0004.006
Madva, A. (2019a) Integration, community, and the medical model of social injustice. Journal of Applied Philosophy. https://doi.org/10.1111/japp.12356
Madva, A. (2019b) The inevitability of aiming for virtue. In B.R. Sherman and S. Goguen (eds), Overcoming Epistemic Injustice. Lanham, MD: Rowman & Littlefield.
Madva, A. (2019c) Social psychology, phenomenology, and the indeterminate content of unreflective racial bias. In E.S. Lee (ed.), Race as Phenomena: Between Phenomenology and Philosophy of Race (pp. 87–106). Lanham, MD: Rowman & Littlefield International.
Mallett, R.K., Wilson, T.D., and Gilbert, D.T. (2008) Expect the unexpected: Failure to anticipate similarities leads to an intergroup forecasting error. Journal of Personality and Social Psychology, 94: 265–277. https://doi.org/10.1037/0022-3514.94.2.265
Mallon, R. (2018) Constructing race: Racialization, causal effects, or both? Philosophical Studies, 175: 1039–1056. https://doi.org/10.1007/s11098-018-1069-8
Mandalaywala, T.M., Amodio, D.M., and Rhodes, M. (2018) Essentialism promotes racial prejudice by increasing endorsement of social hierarchies. Social Psychological and Personality Science, 9: 461–469. https://doi.org/10.1177/1948550617707020
Master, A., Cheryan, S., and Meltzoff, A.N. (2016) Computing whether she belongs: Stereotypes undermine girls' interest and sense of belonging in computer science. Journal of Educational Psychology, 108: 424. https://doi.org/10.1037/edu0000061
Matthews, D. (2017) A basic income really could end poverty forever. Vox. https://www.vox.com/policy-and-politics/2017/7/17/15364546/universal-basic-income-review-stern-murray-automation [accessed 8 January 2019].
Maxwell, A. and Shields, T. (2014) The fate of Obamacare: Racial resentment, ethnocentrism and attitudes about healthcare reform. Race and Social Problems, 6: 293–304. https://doi.org/10.1007/s12552-014-9130-5
Medina, J. (2012) The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations. Studies in Feminist Philosophy. New York and Oxford: Oxford University Press.
Mendoza, S.A., Gollwitzer, P.M., and Amodio, D.M. (2010) Reducing the expression of implicit stereotypes: Reflexive control through implementation intentions. Personality and Social Psychology Bulletin, 36: 512–523. https://doi.org/10.1177/0146167210362789
Miller, B. and Flores, A. (2016) How lessons from health care and housing could fix higher education affordability. Vox. https://www.vox.com/2016/5/26/11767096/higher-education-affordability [accessed 8 January 2019].
Miller, C.C. (2014) The motherhood penalty vs. the fatherhood bonus. The New York Times.
Miller, C.C. (2015) When family-friendly policies backfire. The New York Times.
Monteith, M.J. and Hildebrand, L.K. (2019) Sexism, perceived discrimination, and system justification in the 2016 U.S. presidential election context. Group Processes & Intergroup Relations. https://doi.org/10.1177/1368430219826683
Movement for Black Lives (n.d.) Platform. The Movement for Black Lives. https://policy.m4bl.org/platform/ [accessed 6 January 2019].
Murphy, M.C., Kroeper, K.M., and Ozier, E.M. (2018) Prejudiced places: How contexts shape inequality and how policy can change them. Policy Insights from the Behavioral and Brain Sciences, 5: 66–74. https://doi.org/10.1177/2372732217748671
Murrar, S. and Brauer, M. (2019) Overcoming resistance to change: Using narratives to create more positive intergroup attitudes. Current Directions in Psychological Science. https://doi.org/10.1177/0963721418818552
Mutz, D.C. (2018) Status threat, not economic hardship, explains the 2016 presidential vote. PNAS, 115: E4330–E4339. https://doi.org/10.1073/pnas.1718155115
Nilsen, E. (2018) Maine voters blew up their voting system and started from scratch. Vox.
O'Neil, C. (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Pager, D. (2003) The mark of a criminal record. American Journal of Sociology, 108: 937–975. https://doi.org/10.1086/374403
Pager, D., Western, B., and Bonikowski, B. (2009) Discrimination in a low-wage labor market: A field experiment. American Sociological Review, 74: 777–799. https://doi.org/10.1177/000312240907400505
Parker, K. and Livingston, G. (2018) 7 facts about American fathers. Pew Research Center. http://www.pewresearch.org/fact-tank/2018/06/13/fathers-day-facts/ [accessed 7 January 2019].
Pettigrew, T.F. (2018) The emergence of contextual social psychology. Personality and Social Psychology Bulletin, 44: 963–971. https://doi.org/10.1177/0146167218756033
Pettigrew, T.F. (1998) Intergroup contact theory. Annual Review of Psychology, 49: 65–85. https://doi.org/10.1146/annurev.psych.49.1.65
Plaut, V.C. (2010) Diversity science: Why and how difference makes a difference. Psychological Inquiry, 21: 77–99. https://doi.org/10.1080/10478401003676501
Prokop, A. (2016) Why the Electoral College is the absolute worst, explained. Vox.
Pronin, E., Lin, D.Y., and Ross, L. (2002) The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28: 369–381. https://doi.org/10.1177/0146167202286008
Putnam, R.D. (2007) E Pluribus Unum: Diversity and community in the twenty-first century—The 2006 Johan Skytte Prize Lecture. Scandinavian Political Studies, 30: 137–174.
Ridgeway, C.L. (2014) Why status matters for inequality. American Sociological Review, 79: 1–16. https://doi.org/10.1177/0003122413515997
Robinson, C.D., Scott, W., and Gottfried, M.A. (2019) Taking it to the next level: A field experiment to improve instructor–student relationships in college. AERA Open. https://doi.org/10.1177/2332858419839707
Saul, J. (2013) Implicit bias, stereotype threat, and women in philosophy. In K. Hutchinson and F. Jenkins (eds), Women in Philosophy: What Needs to Change? (pp. 39–60). New York: Oxford University Press.
Schiappa, E., Gregg, P.B., and Hawes, D.E. (2006) Can one TV show make a difference? Will & Grace and the parasocial contact hypothesis. Journal of Homosexuality, 51: 15–37. https://doi.org/10.1300/J082v51n04_02
Shelby, T. (2016) Dark Ghettos: Injustice, Dissent, and Reform. Cambridge, MA: The Belknap Press of Harvard University Press.
Shook, N.J. and Fazio, R.H. (2008) Interracial roommate relationships: An experimental field test of the contact hypothesis. Psychological Science, 19: 717–723.
Simonovits, G., Kézdi, G., and Kardos, P. (2018) Seeing the world through the other's eye: An online intervention reducing ethnic prejudice. American Political Science Review, 112: 186–193. https://doi.org/10.1017/S0003055417000478
Skorinko, J.L., Laurent, S., Bountress, K., Nyein, K.P., and Kuckuck, D. (2014) Effects of perspective taking on courtroom decisions. Journal of Applied Social Psychology, 44: 303–318. https://doi.org/10.1111/jasp.12222
Sommers, S.R. (2006) On racial diversity and group decision making: Identifying multiple effects of racial composition on jury deliberations. Journal of Personality and Social Psychology, 90: 597–612. https://doi.org/10.1037/0022-3514.90.4.597
Stewart, A.J. and Valian, V. (2018) An Inclusive Academy: Achieving Diversity and Excellence. Cambridge, MA: The MIT Press.
Stewart, B.D. and Payne, B.K. (2008) Bringing automatic stereotyping under control: Implementation intentions as efficient means of thought control. Personality and Social Psychology Bulletin, 34: 1332–1345.
Stewart, T.L., Latu, I.M., Kawakami, K., and Myers, A.C. (2010) Consider the situation: Reducing automatic stereotyping through Situational Attribution Training. Journal of Experimental Social Psychology, 46: 221–225. https://doi.org/10.1016/j.jesp.2009.09.004
The National Registry of Exonerations (2019) Exonerations contributing factors by crime. https://www.law.umich.edu/special/exoneration/Pages/ExonerationsContribFactorsByCrime.aspx# [accessed 2 January 2019].
Thomas, M. (2016) The Impact of Mandated Maternity Benefits on the Gender Differential in Promotions: Examining the Role of Adverse Selection. New York: Institute for Compensation Studies.
Todd, A.R., Bodenhausen, G.V., Richeson, J.A., and Galinsky, A.D. (2011) Perspective taking combats automatic expressions of racial bias. Journal of Personality and Social Psychology, 100: 1027–1042. https://doi.org/10.1037/a0022308
Toli, A., Webb, T.L., and Hardy, G.E. (2016) Does forming implementation intentions help people with mental health problems to achieve goals? A meta-analysis of experimental studies with clinical and analogue samples. British Journal of Clinical Psychology, 55: 69–90. https://doi.org/10.1111/bjc.12086
Trawalter, S. and Richeson, J.A. (2006) Regulatory focus and executive function after interracial interactions. Journal of Experimental Social Psychology, 42: 406–412. https://doi.org/10.1016/j.jesp.2005.05.008
Uhlmann, E.L. and Cohen, G.L. (2005) Constructed criteria: Redefining merit to justify discrimination. Psychological Science, 16: 474–480.
van den Bergh, L., Denessen, E., Hornstra, L., Voeten, M., and Holland, R.W. (2010) The implicit prejudiced attitudes of teachers: Relations to teacher expectations and the ethnic achievement gap. American Educational Research Journal, 47: 497–527.
Van Dessel, P., Gawronski, B., Smith, C.T., and De Houwer, J. (2017) Mechanisms underlying approach-avoidance instruction effects on implicit evaluation: Results of a preregistered adversarial collaboration. Journal of Experimental Social Psychology, 69: 23–32.
van Zomeren, M. (2013) Four core social-psychological motivations to undertake collective action. Social and Personality Psychology Compass, 7: 378–388.
Vezzali, L., Stathi, S., Giovannini, D., Capozza, D., and Trifiletti, E. (2015) The greatest magic of Harry Potter: Reducing prejudice. Journal of Applied Social Psychology, 45: 105–121. https://doi.org/10.1111/jasp.12279
Vuletich, H.A. and Payne, B.K. (2019) Stability and change in implicit bias. Psychological Science, 30: 854–862. https://doi.org/10.1177/0956797619844270
West, T.V., Magee, J.C., Gordon, S.H., and Gullett, L. (2014) A little similarity goes a long way: The effects of peripheral but self-revealing similarities on improving and sustaining interracial relationships. Journal of Personality and Social Psychology, 107: 81–100. https://doi.org/10.1037/a0036556
Wilkins, C.L., Hirsch, A.A., Kaiser, C.R., and Inkles, M.P. (2017) The threat of racial progress and the self-protective nature of perceiving anti-White bias. Group Processes & Intergroup Relations, 20: 801–812. https://doi.org/10.1177/1368430216631030
Williamson, K.D. (2014) The case against reparations. National Review.
X, M. (1963) Racial separation. Speech. Available online at https://www.blackpast.org/african-american-history/speeches-african-american-history/1963-malcolm-x-racial-separation/
Young, I.M. (2000) Residential segregation and regional democracy. In Inclusion and Democracy (pp. 196–235). New York: Oxford University Press. https://doi.org/10.1093/0198297556.001.0001
Zarya, V. (2018) The share of female CEOs in the Fortune 500 dropped by 25% in 2018. Fortune.
Glossary

Accountability (Chapter 8): Accountability is a certain kind of moral responsibility, which focuses on who can properly be held responsible, or held accountable, for good and bad actions and events. Focusing on accountability means focusing on the responsibilities that agents have towards one another, and the conditions under which they can be held responsible for failing to fulfill these responsibilities. Accountability is sometimes thought to come apart from attribution. For example, many believe that the US government is accountable for helping its citizens get back on their feet after they've lost homes and property to natural disasters or terrorist attacks, even if the US government didn't cause or choose for those bad outcomes to happen.

Agent (agency) (Chapters 8, 9, 11, 12): An agent is an individual with the ability to act, to make choices about what to do and how to do it based on their beliefs, preferences, and intentions. It makes sense to hold agents morally responsible for what they do. See also epistemic agent.

Alief (Chapter 1): The alief model of implicit attitudes views them as an associated mix of thoughts, feelings, and behavioral impulses. For example, although a given individual might have the belief that people deserve to be treated equally regardless of their race or religion, they might have negative gut reactions to members of certain races or religions. The alief model says these reactions have an associative structure, where seeing a person automatically activates aversive thoughts, feelings, and action tendencies. See also association, dual-construct model, implicit construct, and attitude.

All-things-considered ought (Chapter 10): In the face of dilemmas and normative conflicts, you might think there are no good options or no good ways of deciding. Or, you might think that there is some higher-level way of deciding between the options, and something that you really should do, once you consider all the relevant information. This would be an all-things-considered ought.
Association (Chapters 1, 5): Associations are psychological links between concepts, thoughts, and feelings. Our minds often make associations quickly, automatically, and without our conscious awareness. For example, when I say "salt," you automatically think "pepper;" or when I say "hip," you think "hop;" when I say "Tweedledee," you think…. You don't need to deliberate about what comes next; you just know. See also mental representation, implicit construct, alief, belief-based model, and proposition.

Attitude (Chapters 1, 8): Attitudes are dispositions or tendencies to react favorably or unfavorably, positively or negatively, toward things. We hold attitudes toward all kinds of things, like food and movies, but also to abstract objects (for example, some people have negative attitudes toward the number 13) and concepts (many people have positive attitudes toward justice). Attitudes include three main components: thoughts, feelings, and behavioral motivations. People with negative attitudes toward clowns might think they are weird, feel fear toward them, and have a behavioral impulse to run and hide from them. Importantly, we can also have (implicit or explicit) attitudes toward other individuals in virtue of their race, gender, or social identity. See also alief.

Attributionism (Chapter 8): Attributionist theories of moral responsibility claim you're responsible for actions that can be attributed to you, typically because some aspect of your "deep self," or your innermost character traits, is what caused them. According to attributionism, what matters for responsibility isn't whether you chose to do the action of your own volition, but whether the action reflects what you're really like. Attributionism might take us to be responsible for all kinds of unintentional actions, like laughing upon hearing that an enemy has died, or forgetting to call a friend on their birthday.

Base rate (Chapters 3, 4, 10): The probability of some event happening, such as having a heart attack, prior to intervention, such as taking medication to lower your cholesterol. Discussions of base rates often come up in debates about the accuracy of our stereotypes and biases. For example, if most women are as a matter of fact more empathetic than most men, then stereotyping women as empathetic might be in line with the actual "base rates" of empathy in the population. See also generalization.

Behaviorism (Chapter 1): A methodological paradigm in psychology popular in the mid-twentieth century. Behaviorists argued that psychology should only study observable stimuli and behavioral responses and not subjective, private mental states.
Belief-based models (Chapter 1): These models argue that implicit constructs and attitudes are just like, or at least similar to, ordinary explicit beliefs, except that one kind of belief is unconscious while the other is conscious. See also rationality, proposition, and generalization.

Bias as gerrymandered perception model (Chapters 4, 5, 11): In politics, gerrymandering is a way of dividing up voting districts in a partisan way, so as to make the success of certain political parties more likely. According to some theorists, the same kind of thing happens in visual perception. Our social environments have been rigged by unfair practices to make it appear as if people deserve what they get and that pernicious stereotypes are true. Stereotypes constitute biases, on this view, because they promote a kind of perceptual rigidity and, also, hide the true cause of social inequalities. See also base rates and generalization.

Bias as internalized social structure model (Chapters 4, 11): The theory that implicit biases in individual psychology reflect widespread cultural stereotypes, hence are ways in which individuals internalize social structures.

Bias of crowds model (Chapters 3, 11): The theory that measures of implicit constructs like the IAT primarily reflect situational variables rather than individual variables (see Individualistic Explanation and Structural Explanation). According to this theory, the differences in performance on measures of implicit bias are driven primarily by social structures. For example, average levels of racial bias on college campuses reflect the number of nonwhite faculty on campus, the number of Confederate monuments, and the overall economic mobility of students.

Centrality (Chapter 4): An entity's central features are the underlying features that cause or explain their other features. For example, we tend to think that an organism's DNA is a central feature that plays a huge role in making them the way they are. Notably, centrality comes apart from salience. Sometimes the most causally important features of an organism are less salient or noticeable to us. When you think of a lion, you might think of its mane, but you can shave the mane and the lion will still be a lion. Other less salient features can't be so easily removed; for example, having bones is a central feature of being a lion, even though it's not the first thing that pops in our head when we think of lions.

Content of experience (Chapters 2, 5): The things we perceive (including what we see, hear, etc.) are presented to us in certain ways. For instance, when we put a straight stick in water, our visual experience presents it as bent (we see it in a "bent" way), but what we believe or judge is that it's straight. The contents of our experiences are the different ways we can perceive things. They characterize our perceptual perspective on the world, and they can be accurate or inaccurate, ill-founded or well-founded.
Contributory injustice (Chapter 6): Contributory injustice is a form of epistemic injustice that occurs when someone is willfully ignorant in a specific way, namely, they stick to using their own concepts and refuse to consider other concepts, which then blocks others' ability to contribute knowledge and new concepts. Whereas hermeneutical injustice arises when there are simply gaps in our shared concepts and hermeneutical resources, contributory injustice occurs specifically when people are unable to contribute to the dominant shared resources—because those around them willfully refuse to consider alternative ways of thinking and to use new concepts for understanding the world.

Corporeal schema (Chapter 7): Discussed by Maurice Merleau-Ponty and Frantz Fanon, the corporeal schema is closely related to the notion of habit and bodily skill, and refers to ways of being aware of our own bodies, and how our bodily understandings relate us to the world. It is an orientation toward a task in which we know the position of our own body in relation to an object that we might reach out to grasp. You can think here about the postures, positions, and bodily orientations we use to navigate the world, like the different ways we learn to hold a glass of ice water versus a mug of hot tea. Fanon argues that because of how they are perceived by whites, people of color have a disrupted or fragmented sense of their own bodies. It feels different to have a black body than to have a white body because blacks are perceived and treated differently by other people. See also epidermal racial schema, stereotype threat, double consciousness, embodied cognition, and perceptual habit.

Cultural stereotypes (Chapters 4, 11): Controlling images and ideas that enjoy a social existence and express how groups are represented in conversation, literature, online, and beyond. Cultural stereotypes are often considered to be part of social structures and structural explanations because they are part of the beyond-the-individual aspects of reality that matter for explaining social injustice. For example, the image of "the happy slave" was a cultural stereotype used to cover up and justify the enslavement of African and African American peoples. See also stereotypes.

Debiasing strategies (Chapters 7, 9, 12): These are strategies for reducing, removing, or changing individuals' biased assumptions, feelings, habits, and attitudes, including their implicit and explicit social prejudices and stereotypes. Some strategies (like anonymous review) aim to bypass or work around our biases, whereas others (like intergroup cooperation) aim to reduce or eliminate our biases altogether.

Deep self (Chapter 8): A person's "deep self" or "true self" refers to their innermost beliefs, desires, and traits—the facts that are really true of them, deep down. These are central features of the person. The notion of the "deep self" is key to attributionist theories of moral responsibility.
Dilemma (Chapter 10): A dilemma is any situation where we feel pulled between opposing desires, obligations, plans, or sometimes even beliefs. See also normative conflict.

Divergence (Chapter 1): Divergence occurs when our implicit (unconscious or automatic) mental states differ, or diverge, from our explicit (consciously-held or reflective) mental states. See also dual-construct models.

Diversified experimentalism (Chapter 12): Rather than "putting all our eggs in one basket" when we try to bring about social change, diversified experimentalism calls on us to (formally and informally) test a bunch of different individual and social experiments and interventions.

Double consciousness (Chapter 7): Coined by W.E.B. Du Bois, "double consciousness" refers to a variety of experiences and injustices faced by black Americans. For one, it refers to the degree to which black Americans must internalize the way that white people perceive them into their own sense of self, so that black Americans walk with two different perspectives on themselves (their own, and white people's). For another, it refers to the divided impulses to assimilate to white mainstream society and to rebel against it completely. In Chapter 7, Greene explains how double consciousness is an intellectual precursor to stereotype threat. See also corporeal schema, epidermal racial schema, and perceptual habits.

Dual-construct model (Chapter 1): Many theories of the mind hypothesize that there are two basic systems or processes (two types of mental construct, mental representation, or attitude). There are different ways of understanding the difference between the two types, and also different labels. A popular distinction is between implicit System 1 and explicit System 2. Implicit constructs are often thought to be unconscious, automatic, uncontrollable, effortless, and associative (see also association and alief), whereas explicit constructs are often thought to be conscious, reflective, deliberate, controllable, effortful, and propositional (see also rationality, epistemic rationality, and belief-based models).

Embodied cognition (Chapter 2): Embodied cognition refers to the idea that we can't understand how our minds work unless we understand how our minds are situated in our bodies, and the extent to which our physical bodies, perceptual habits, and skillful actions shape the way we think about the world. For example, psychologists have studied how many of our metaphors and ways of speaking about the world seem to depend on the body, such as the fact that we describe affectionate people as warm—as if our ideas about what it means to be a nice person are shaped in part by the literally warm, physical affection we get from caretakers and loved ones.
Epidermal racial schema (Chapter 7): Coined by Frantz Fanon, the epidermal racial schema refers to the way that black people (and other people of color) are immediately aware of how their skin and bodies appear in a world full of anti-black racism. It is harder for black people to just "be" or "be in the flow" in their own bodies, because others often perceive and treat them differently because of their skin color. See also stereotype threat, double consciousness, corporeal schema, embodied cognition, and perceptual habits.

Epistemic agent (Chapter 9): Potential knowers; individuals who are capable of making choices about the amount and kinds of knowledge they have and how they go about getting it. See also agent and epistemology.

Epistemic appropriation (Chapter 6): Epistemic appropriation is an epistemic injustice that occurs when someone produces and shares knowledge but does not get credit for it. One example is "he-peating": a woman puts forward an idea at a meeting and no one responds, but then a man repeats the same idea later and gets credit for it.

Epistemic diligence (Chapter 9): Epistemic diligence is an epistemic virtue, the trait or habit of persistently seeking the truth even in the face of challenges to our knowledge, for example by seeking out new information when we have reason to suspect we might be wrong, or by responding to requests from others for justification or self-critique.

Epistemic duties (Chapter 10): There are different ways of thinking about the relationship between our evidence and our beliefs. You might think that evidence can give us reasons to believe something (for example, if I look outside and see that the ground is wet, that gives me a reason to think it recently rained), but that we're still not required to believe whatever the evidence says; we're still free to reserve judgment, perhaps. But if you think we are sometimes required to believe in line with the evidence, then you might think we have epistemic duties or responsibilities, much like we have moral duties or responsibilities (e.g., the duty not to step on other people's toes just for the fun of it). Perhaps, because the evidence is so strong, we have an epistemic duty to believe that human-caused climate change is a real and dangerous threat to the earth. See also epistemic rationality, rationality, warrant, and ill-founded.

Epistemic exploitation (Chapter 6): Epistemic exploitation is an epistemic injustice that occurs when members of disadvantaged groups bear the burden of educating members of dominant groups about the injustices they face, e.g., when white people expect their black acquaintances to explain what's wrong with dressing up in blackface, rather than just looking up the answers themselves.
Epistemic friction (Chapter 9): A form of cognitive dissonance, or psychological tension or discomfort, that arises when individuals come into contact with alternative viewpoints, e.g., with communities, individuals, ideas, and knowledge systems different from their own.

Epistemic humility (Chapters 9, 12): "Humility" means not overestimating how good you are at something, calibrating your level of confidence to your level of ability. Calling for epistemic humility, then, is a warning against overestimating how much we know, and a call to be less arrogant and self-assured that the solutions to the problems we face are obvious. It is an epistemic virtue that requires being self-reflective about one's own limitations.

Epistemic injustice (Chapters 4, 6, 11): Epistemic injustice deals with the intersection of questions about fairness and questions about knowledge (epistemology), and the influence of social power on who gets believed and who gets ignored. Chapter 6 explores several different kinds of epistemic injustice, including testimonial injustice, hermeneutical injustice, epistemic exploitation, epistemic appropriation, contributory injustice, and the epistemology of ignorance.

Epistemic objection (Chapters 4, 10): Epistemic objections are objections concerning knowledge and belief. They can include arguing that someone's belief or testimony is false, ill-founded, or unreliable, or that it lacks justification or warrant, or that it is inconsistent with the evidence, or that it doesn't fit with other background beliefs and ideas. See also rationality and epistemic rationality.

Epistemic practices (Chapter 9): Habits or practices that help individuals and communities gain knowledge about themselves, their communities, and the worlds they inhabit.

Epistemic rationality (Chapter 10): Epistemic rationality requires having true, justified beliefs about the world, building knowledge based on an accurate view of things. It also requires changing our beliefs in light of changes in our evidence. Suppose you check the forecast and see there's a 90 percent chance of rain. Given that 90 percent chance, you should believe that it will rain. Further, it would be appropriate to criticize you as epistemically irrational if you did not believe it will rain, e.g., if you started to plan a picnic. By disregarding the likelihood of rain, you are doing something epistemically wrong, or irrational. See also warrant, ill-founded, and reliable.

Epistemic responsibility (Chapter 9): A set of traits, habits, and practices of the mind for cultivating epistemic virtues, such as open-mindedness, epistemic humility, and diligence, which help knowers seek information about themselves, others, and the world.
Epistemic virtue (Chapter 9): Traits, habits, or practices that reliably lead to knowledge and truth (such as open-mindedness, epistemic humility, and diligence). Epistemic virtues contrast with epistemic vices, which are traits, habits, or practices that less reliably lead to knowledge (such as close-mindedness, arrogance, laziness, and the epistemology of ignorance). See also epistemic responsibility, rationality, and epistemic rationality.

Epistemology (epistemic) (Chapters 4, 5, 6, 9, 10): The word "epistemic" comes from the Greek word "episteme," meaning "knowledge." Epistemology is the study of knowledge and justified (warranted, well-founded) belief. Epistemologists ask about the general nature of knowledge and how to get it. How can I know whether I'm dreaming right now? Is science the best (and only) way to gain knowledge about the world? Are implicit biases fundamentally obstacles to knowledge that distort the evidence, or can they sometimes be accurate and reliable guides to truth?

Epistemology of ignorance (Chapter 9): A deliberately paradoxical term coined by Charles Mills that reflects the lengths to which some individuals and communities go in order to remain ignorant about the way the world really is. This might include, for example, white communities that tacitly collaborate to conceal the pervasiveness and seriousness of anti-black racism. This active construction of ignorance is an epistemic vice that is opposed to epistemic objections, epistemic virtue, epistemic responsibility, and epistemic rationality.

Ethics (or morality): Investigates individual normative questions, such as which actions are right and wrong, and what kind of person I should be. Ethics is not about what's legal or illegal, but about what should be legal or illegal. It's not about whether the Constitution says we have a certain right, but about whether we ought to have that right. See also normative conflict and moral responsibility.

Evidence (Chapter 10): Evidence for a question under investigation is a reliable sign, symptom, or mark of information relevant to that question. For example, smoke is a reliable sign of fire, and so it is evidence of fire. Similarly, my dog's barking is evidence of someone at the door, a distinctive whistle from the kettle is evidence that the water has boiled, etc. See also warrant, ill-founded, epistemic rationality, epistemic virtue, and epistemic responsibility.

Explanatory monism (Chapter 11): The view that there is only one respectable kind of explanation. Some monists think that the only true explanations are physical explanations in terms of fields, forces, and particles (see also unified theories). Some monists about social science think that only individualistic explanations, which focus on individual preferences and decisions, are good explanations, and argue against appealing to social structures in our explanations. See also explanatory pluralism and explanatory particularism.
Explanatory particularism (Chapter 11): The view that scientific explanations are the specialized products of particular scientific communities. Each scientific discipline (physics, biology, economics, sociology) has its own specific types of explanation. See also explanatory pluralism and explanatory monism.

Explanatory pluralism (Chapter 11): The view that many kinds of explanation are useful and even necessary for science. Which kinds of explanations are best depends on our scientific and other aims. See also explanatory monism, explanatory particularism, and pluralism.

Explicit construct (explicit bias, explicit attitude) (Chapter 1): Many theories of the mind (dual-construct models) hypothesize that there are two basic systems or processes (two types of mental construct or attitude). Explicit constructs—as opposed to implicit constructs—are often thought to be conscious, reflective, deliberate, controllable, effortful, and propositional. You can think of carefully thought-out decisions, stable beliefs, or deeply held preferences and commitments. Explicit constructs are typically thought to give rise to performance on direct or explicit measures of attitudes and bias, like self-report measures. An example of a self-report measure is, "How much pain are you feeling, on a scale from 1 to 10?"

Generalization (Chapter 4): After we encounter a few specific examples of something, we might notice that the different examples seem to share common properties, and we might then form a general belief, or make a generalization, according to which we judge and predict that other examples are also likely to share those properties. If every swan you ever see is white, you will probably form the generalization that all swans are white (and in this case you'd be wrong: some swans are black, but you just haven't seen any yet). Broadly speaking, making generalizations is essential to survival and getting around in the world. We also make generalizations about people, however, and some of these may be problematic stereotypes. See also base rate, bias as gerrymandered perception model, and normative conflict.

Habitus (Chapter 2): Pierre Bourdieu's term, derived from the Latin word for "habit" or "comportment," for the system of dispositions and skills that people develop for perceiving, appreciating, and navigating the world. These individual-level dispositions are shaped by larger social structures but also help to shape and support social structures in turn. See also corporeal schema, perceptual habit, and social structure.

Hermeneutical (Chapters 6, 11): Hermeneutics (not to be confused with "heuristic"!) is the study of interpretation: for example, how best to interpret religious or philosophical texts, or works of art, but also how best to interpret each other and understand the world. "Hermeneutical" is a synonym for "interpretive." See also hermeneutical resources, hermeneutical injustice, and contributory injustice.
Hermeneutical injustice (Chapters 6, 11): "Hermeneutical" is a synonym for "interpretive," and hermeneutical injustice is an epistemic injustice that occurs when individuals lack the concepts they need to interpret and understand their own experiences. Specifically, this situation becomes unjust when there is a gap in the shared conceptual, hermeneutical resources, which is in turn due to some groups having an unfair amount of power and influence over which concepts get formed and shared. For example, the concept of sexual harassment came to prominence in the 1970s only after women worked collectively to understand and articulate the experiences that were making their participation in the workplace so difficult and costly. Before that time, men might describe that very same behavior as harmless flirting or joking fun.

Hermeneutical resources (Chapters 6, 11): Hermeneutical resources are the shared social resources that communities have for understanding and interpreting their own experiences and social reality. In particular, they involve shared concepts, narratives, scripts, and cultural stereotypes. The concepts that we have shape the way that we understand and communicate our experiences. For example, different communities have different resources for understanding gender. Some communities only have the concepts "man" and "woman," whereas other communities have a wider set of concepts that make room for a wider set of gender possibilities, including being agender, genderqueer, and so on. These different communities have different hermeneutical resources, i.e., different options available for people to interpret themselves, their experiences, and each other.

Heuristic (Chapter 4): A heuristic (not to be confused with "hermeneutical"!) is a simple rule of thumb or cognitive shortcut for making decisions. It is often difficult or impossible to go through extensive cost-benefit calculations about what we should believe or how we should act. It is usually easier, and sometimes even more reliable, to use simple rules to guide us, even if we know that a rule won't work 100 percent of the time. Arguably, the Golden Rule ("treat other people the way you want to be treated") is a pretty good rule of thumb for ethical action, but there are lots of contexts where that rule won't work. According to many psychologists and economists, human beings rely on a wide range of simple heuristics without even realizing it. These heuristics lead to gut feelings, intuitions, and hunches about what to believe or do without having to spend time deliberating. See also dual-construct models.

Homunculus fallacy (Chapter 1): A fallacy associated with theories of mind that attempt to explain human consciousness by positing equally intelligent causes, i.e., "little people" in the brain.
Ill-founded (versus well-founded) (Chapter 5): A belief, judgment, or perception is ill-founded if it is formed or maintained epistemically badly; in contrast, it is well-founded if it is formed and maintained epistemically well. An example of an ill-founded judgment occurs when a police officer judges that an unarmed person is carrying a gun, when really it's just a cell phone. Being ill-founded or well-founded is distinct from being true or false. True beliefs can be ill-founded, and well-founded beliefs can be false. See also warrant, reliable, epistemic rationality, epistemic objection, epistemic virtue, and epistemic responsibility.

Implicit Association Test (IAT) (Chapters 1, 2, 3, 5, 11): A measure of how quickly and accurately people associate pairs of concepts. Instead of relying on our definition, you should try one for yourself: Google "Implicit Association Test" or go to https://implicit.harvard.edu/implicit/. See also association, attitude, alief, and implicit construct.

Implicit construct (implicit bias, implicit attitude) (Chapter 1): Many theories of the mind (dual-construct models) hypothesize that there are two basic systems or processes (two types of mental construct or attitude). There are different ways of understanding the difference between the two types, but implicit constructs—as opposed to explicit constructs—are often thought to be unconscious, automatic, uncontrollable, effortless, and associative (see also association and proposition). You can think here of automatic habits (including perceptual habits), spontaneous gut reactions, or knee-jerk intuitions. Implicit constructs are typically thought to give rise to performance on indirect or implicit measures of attitudes and bias, like the Implicit Association Test (IAT).

In-between belief (Chapter 1): Whereas belief-based models view implicit attitudes as ordinary beliefs, and association or alief models view implicit attitudes as more simplistic associative links, the in-between belief model views implicit attitudes as a kind of vague in-between case. Sometimes they act more like beliefs, but sometimes they act more like associations. See also dual-construct models and proposition.

Indirect control (Chapter 8): Being morally responsible for something usually requires that you have some control over what happens. Often theorists are thinking about being able to control your own behavior in the moment and choose what to do. But we can also take indirect control over our actions. In particular, even if we cannot always directly control how our implicit biases affect us in the moment, maybe we can take steps, like anonymous paper grading, to indirectly prevent the biases from affecting us.
Individualistic explanation (Chapter 11): Individualistic explanations for behavior and social outcomes focus on psychological features of individuals, such as their beliefs, preferences, goals, traits, habits, experiences, associations, and attitudes. See also structural explanations and explanatory monism.

Inequalities (Chapters 3, 6, 11): Simple differences between individuals or groups, which may or may not be fair. For example, some people are taller than others, which is an inequality but is not inherently unfair. Inequalities contrast with inequities, which are unjust differences, e.g., when cars are designed in ways that prevent very short or very tall individuals from using them properly.

Inequities (Chapters 3, 6): Unfair differences between individuals or groups. For example, when laws are written so that only white men can inherit wealth, that is an unfair difference. Inequities contrast with inequalities, which are mere differences that may or may not be fair. For example, if you buy a house from someone, and as a result they are now wealthier than you, this inequality all by itself may not be unfair.

Liability (Chapter 8): Liability is the legal sense of responsibility. You are liable for something if the law says it's your responsibility to fix it, replace it, pay for it, etc. This notion of responsibility is narrower than moral responsibility. See also accountability.

Mental construct (Chapter 1): When psychologists build theories about what's inside the mind, they hypothesize the existence of specific items, events, and processes in the mind, including mental representations, concepts, beliefs, associations, propositions, attitudes, thoughts, perceptual experiences, and implicit and explicit biases. These are all examples of mental constructs. They are pieces of our psychological theories, which we typically can't observe directly, so we must infer their existence on the basis of behavior and other considerations (like brain imaging).

Mental representation (Chapter 1): A psychological stand-in for a potential fact. A fundamental assumption among most psychologists today is that humans have mental states that represent the world as being a certain way, and that those representations of the world affect how they think and act in it. For example, you might explain your roommate's going to Chipotle for lunch by citing her belief (a mental representation) that Chipotle makes the best guacamole. See also mental construct, association, and proposition.
Moral encroachment (Chapter 10): Moral encroachment is one way of responding to normative conflicts between ethics and epistemology. The view is that what you're justified in believing shifts depending on whether you're in a situation with high or low moral stakes: moral considerations "encroach" upon epistemic considerations. For example, maybe you should raise the bar on how much evidence you require to believe something if the belief is potentially morally problematic, like the belief that someone is a valet, formed on the basis of their skin color. See also warrant, reliable, epistemic rationality, epistemic virtue, and epistemic responsibility.

Moral responsibility (Chapters 8, 9): An agent is morally responsible for an action when they are eligible for some sort of morally laden response, such as praise, blame, gratitude, or resentment. This notion is broader than liability and legal or criminal responsibility, and deals with all areas of human conduct, big and small. While other areas of ethics tell you what makes your actions right or wrong, theories of moral responsibility tell you what makes your actions yours. See also attributionism, volitionism, accountability, revisionism, taking responsibility, epistemic agent, epistemic responsibility, and individualistic explanation.

Normative (Chapters 4, 5, 6, 7, 8, 9, 10, 11): Unlike descriptive claims, normative or prescriptive claims concern what you should do. (When a doctor "prescribes" medicine, they're saying you should take this medicine, whether you actually do or not.) Whereas psychology and political science describe, explain, and predict what individuals and governments actually do, ethics and political philosophy make claims about what individuals and governments should do. See also rationality, evidence, warrant, ill-founded, epistemic virtue, and epistemic responsibility.

Normative conflict (Chapter 10): In general, a normative conflict is any dilemma or "tragic situation" where an individual is pulled between two opposing but apparently equally strong considerations about what they should do. Is it ever acceptable, for example, to kill one innocent person in order to save five other people? In Chapter 10, Basu focuses on the apparent normative conflict between ethics and epistemology. For example, we often think it is wrong to jump to conclusions and make assumptions about people whom we haven't met, but what if our assumptions seem to be based on good statistical evidence and base rates?

Open-mindedness (Chapter 9): An epistemic virtue, the trait or practice of seeking and considering viewpoints that contrast with one's own. See also epistemic responsibility.
Perceptual habit (Chapter 2): Our perception of the world is not just a passive event that happens to us automatically. The ways we perceive the world are built over time through our active engagement with it. Over time we come to acquire habits for perceiving the world: learned, skillful ways of perceiving what we take to be important or salient and ignoring everything else. Perceptual habits are necessary for navigating the world, but they can also lead to serious biases. See also embodied cognition and habitus.

Perceptual hijacking, or the problem of hijacked experience (Chapter 5): Siegel says that perceptual experience is "hijacked" when emotions (e.g., fear), desires (e.g., wishful thinking), or background assumptions (e.g., stereotypes) exert an improper influence on what we perceive, which in turn leads us to form inaccurate and ill-founded beliefs and judgments. The epistemic problem here is whether these hijacked perceptual experiences give us reasons to believe. For example, if someone feels fear upon seeing a person of color, and that fear "hijacks" their experience and leads them to misperceive a cell phone as a gun, is it rational or reasonable for them to believe that the person is holding a gun? Many judges and juries (and philosophers) have said "yes," but in Chapter 5 Siegel argues "no."

Phenomenology (Chapter 2): Phenomenology is the study of lived experience, "from the inside" or "from the first-person point of view," whereas neuroscience, psychology, and other social sciences can study experience from the "outside," or "third-person," perspective. From the inside, we can introspect and study the difference between experiencing a dull aching pain versus a stinging pain. From the outside, scientists can poke and prod us, listen to what we say, and study our brain responses.

Pluralism (Chapter 10): In philosophy, pluralism refers to the idea that we cannot give a clean, unified theory of something, because there is no single overarching principle to explain every case we're interested in. In Chapter 10, Basu considers the possibility of pluralism about normative topics. Maybe there is one kind of "should" for ethics and another kind of "should" for epistemology, and there is no all-things-considered "should" to choose between them. In Chapter 11, Ayala-López and Beeghly consider the possibility of explanatory pluralism, which says that there is not just one kind of good explanation, and different questions call for different types of explanations (e.g., individualistic versus structural explanations). See also explanatory monism.
Predictive validity (Chapters 1, 3): The degree to which researchers are able to predict some phenomenon on the basis of some other phenomenon. For example, in the case of implicit bias, many researchers are interested in the extent to which measures of psychological biases predict discriminatory behaviors in the real world.

Proposition (Chapter 1): Propositions contrast with associations in that they are mental representations (see also mental constructs) that have a sentence-like structure. For example, the propositions "Cats love dogs" and "Dogs love cats" contain the same words, but they express different thoughts because they have different linguistic structures. They are two distinct propositions. See also explicit construct and belief-based models.

Psychometrics (Chapters 1, 3): Psychometrics is the measurement of the mind, addressing questions such as the best way to measure implicit biases (or to measure personality traits, levels of pain, intelligence, etc.). See also mental construct, mental representation, implicit construct, and explicit construct.

Rationality (Chapters 1, 4, 8): Rationality has different meanings in different contexts. In Chapter 1, Johnson uses "rationality" to refer to the fact that implicit constructs (surprisingly) sometimes seem to respond in sensible or logical ways to different interventions, such as revising in response to good evidence or strong arguments (see also proposition and belief-based model). In Chapter 8, Dominguez discusses rationality in a stronger sense, to refer to the overall fit among a given individual's beliefs, desires, and commitments. For example, if I have a long-term goal of buying a house but I never save any of my money, then there is a lack of "rational" coherence between my long-term plans and my short-term behavior. There are important ways in which these different senses of rationality are related: plausibly, the kind of rationality that Johnson discusses is necessary for the kind of rationality that Dominguez discusses. See also epistemic rationality, epistemic virtue, and epistemic responsibility.

Reliable (and unreliable) (Chapters 4, 10): In epistemology, reliability refers to whether certain ways of forming and revising our beliefs are good or bad at tracking the truth. Some ways of trying to acquire knowledge are much more reliable, or truth-tracking, than others. Is your belief true because of mere good luck, or is it true because it was formed on the basis of a reliable process? For example, asking an expert who has devoted their life to studying the climate is a more reliable way to gain knowledge about climate change than asking a radio-show host with no background in climate science. And making decisions on the basis of our "gut feelings" may be a relatively reliable strategy in some contexts, but an unreliable one in others. See also warrant, evidence, ill-founded, and epistemic rationality.
Revisionism (Chapter 8): Many philosophical theories try to explain or fit with our pretheoretical intuitions, practices, and commonsense ideas. By contrast, a "revisionist" view of something argues that our intuitive, commonsense ways of thinking about it are significantly or even radically mistaken, and need to be revised. In debates about moral responsibility, revisionism might mean, for example, that we should give up the idea that individuals have to be in control of their behavior in order to be responsible for it (see volitionism).

Salience (Chapter 4): Salience refers to the way certain things "stand out" and grab our perceptual or cognitive attention. When you think about a tiger, what is salient about them? Probably their stripes and their big claws and jaws. When you think about a lion, probably their mane is salient—although lots of lions don't even have manes (what if they are cubs, or females, or just shaved?). Our metaphors and explanations of things also tend to make some factors salient and other factors less noticeable. For example, individualistic explanations make individuals' beliefs, preferences, biases, stereotypes, and attitudes salient, and they risk obscuring background social structures—whereas structural explanations do the opposite.

Social norms (Chapter 11): Expectations and informal rules and patterns regarding either how most people do act in certain situations, or how they should act. See also normative.

Social structures (Chapters 2, 3, 11, 12): Beyond-the-agent factors and networks, including institutions, laws, social norms, shared concepts, and material features of environments (e.g., city layouts, systems of public transportation, health care). These are the contexts or situations that shape which individual actions are possible, feasible, and desirable (versus impossible, infeasible, or undesirable).

Stereotype (all chapters): Stereotypes in individual psychology contain the pieces of information we associate with social groups. Some psychologists even think stereotypes are nothing more than concepts or associations. For example, your concept of a doctor is the image that pops into your head when you hear the word "doctor," and that very same image constitutes a stereotype. Often the stereotypes in individuals' minds are manifestations of cultural stereotypes that exist in the social world more broadly. See also mental construct, mental representation, generalization, proposition, belief-based models, and bias of crowds model.
Stereotype threat (Chapter 7): Stereotype threat occurs when being reminded of one's social identity and the stereotypes associated with it (such as gender and racial stereotypes) leads to anxiety, alienation, and underperformance. Stereotype threat has most often been studied in the context of test-taking, but in Chapter 7, Greene argues that it is a much broader phenomenon, which also occurs when being perceived according to stereotypes disrupts a person's easy and habitual way of moving through the world. In this sense, stereotype threat is similar to what happens when athletes "choke" under pressure. See also double consciousness and epidermal racial schema.

Structural explanation (Chapters 3, 11): Structural explanations for behavior and social outcomes focus on the opportunities and constraints external to individuals, including laws, norms, layouts of physical space, and institutions. See also social structures, cultural stereotypes, and individualistic explanations.

Taking responsibility (Chapters 8, 9): In the debate about moral responsibility for implicit bias, some think that we should stop asking backward-looking questions about who's to blame (see also volitionism and attributionism) and instead ask forward-looking questions about what we should do next. These theories emphasize "taking responsibility" above all.

Testimonial injustice (Chapter 6): When a speaker attempts to impart knowledge, the audience assesses whether the speaker is credible. Judgments of credibility involve judgments of the speaker's reliability and trustworthiness as a knower. Testimonial injustice—a term coined by Miranda Fricker—is an epistemic injustice that occurs when speakers are treated as less reliable and trustworthy than they really are, and are not believed when they should be, due to prejudice. This happens, for example, when the police don't believe you because you're black.

Transformative experience (Chapter 11): A transformation that changes an individual's point of view, especially their most basic preferences and attitudes, and that can only be understood by actually undergoing the transformation (knowing "what it's like" to go through the experience). See also phenomenology and individualistic explanation.

Unified theory (versus non-unified theory) (Chapter 4): Scientists and philosophers create theories, and they typically prefer theories that are unified in clear and simple ways. Unified theories identify a single property or set of properties that all cases have in common. For example, physicists currently have one theory that is good at explaining the movements of subatomic particles and another that is good at explaining the movements of planets and stars, but they have struggled to build a single, unified theory that explains everything. Non-unified theories (see also pluralism and explanatory pluralism) give up the idea that all the relevant cases must share the same properties, and instead identify multiple properties that appear across cases. For example, in Chapter 4, Beeghly suggests that there may be no single property that all "bad" cases of bias share.
Volitionism (Chapter 8): Volitionist theories of moral responsibility claim that agents are responsible for actions that are the product of their volitions, or intentions. What makes agents responsible for an action is that they intentionally chose to do it ("of their own volition"). See also attributionism.

Warranted (and unwarranted) (Chapter 4): In Chapter 4, Beeghly uses the terms "warranted" and "unwarranted" in an epistemic sense, to refer to whether beliefs are warranted, or justified, in light of the evidence. See also reliable, ill-founded, epistemic rationality, epistemic virtue, and epistemic responsibility.
Index
Page numbers in italics indicate Figures.

accountability for implicit bias 165–166, 176–177, 249
accuracy, fairness vs. 192, 194, 197–198, 201, 203
ad-hoc 33, 35
advertising, hidden biases in 1
affect heuristic 86–87
agency: individuals having 238; as ourselves embodied in action 156; racialized attitudes and 161; responsible 156; role in acquiring biases 157–158
age overestimation 106
Ahmed, Sara 147
Alexander, Michelle 235
aliefs 35
all things considered ought 201–203
Al-Saji, Alia 44
Anderson, Elizabeth 64, 89
anonymous review 250–252
Antony, Louise 90
approach mindsets 242–243
association 25–26
association/associationism: age overestimation and 106; between concepts 105; discussion questions 112–113; future readings 111–112; looking deathworthy 107; racialized bias in 105–107; shooter task and 106; stereotypical 105; between thoughts 105–106; see also belief-based models
associationism 20–21, 32–33
associative activation 31
Associative-Propositional Evaluation (APE) Model 31–32, 33, 63, 67, 69–70
attention 103, 104, 108
attributability theories 157, 162, 165–166
availability heuristic 86
avoidance mindset 243
Ayala-López, Saray 64
Banaji, Mahzarin 26–27, 77
Ban the Box initiatives 234–235
base rate neglect 86
base rates 59–60, 86 see also generalizations
behavior: IAT correlations and 70; predicting 69; prejudice affecting 4–5
behaviorism 22–24
belief, as a mental state 22, 23
belief-based models: ad-hoc 33, 35; explaining rationality data 33; of rationality 21–22, 32–33; see also generalizations, propositional model, rationality of bias
beliefs 107, 191–193, 204, 206
bias as fog: bias as shortcuts vs. 84; causing associations 80; defined 79; effect of 80; future readings 93; see also stereotype(s)
bias as shortcuts: bias as fog vs. 84; cognitive misers 83; defined 83; future readings 93; see also biased judgments, fast thinking
biased cognition, perceptions and 9
biased judgments: epistemic objections to 93–94; future readings 93–94; motivations of 8; perception and 88; as permissible 78; power relationships 91; as problematic 84–85; quick thinking and 90; undermining knowledge 83; unreliability of 85, 87; as warranted 85, 90
biased perceptions see perception(s)
bias-of-crowds model 218, 226
Big Data 218
bigotry 2
Black Americans 65
Black Skin, White Masks (Fanon) 142–143, 145
blameworthiness for biases 167–168, 186–187
Blindspot (Banaji and Greenwald) 26–27
blindspot bias 84
Blum, Lawrence 81, 192
body, as object of awareness 145
body language 47–48
body schema 143–145
Bourdieu, Pierre 49–50
Brown, Michael 109
Brownstein, Michael 161–163
bypass 103
calling in 13, 184–185
calling out 13–14, 184–185
Camp, Elisabeth 78–79
centrality 79, 80
Chetty, Raj 65
Chugh, Dolly 77, 80, 81
Civil Rights Movement 24
classic conditioning 28–30
Clifford, W. K. 194
cognitive gaps 175, 179, 180, 182, 185
cognitive misers 83
cognitive penetration 103, 104, 108
cognitive science 101–107
Cole, Jonathan 144
collective epistemic irresponsibility 178–179
collective hallucinations 80–81
Collins, Patricia Hill 116, 122
common-ground mindsets 243–244
commonsense thinking 164–165
congruent blocks 25
consciousness: of the body 143; as directed structure 141; phenomenological structure of 141–143; third-party 142–143
content of experience 103
contributory injustice 127–130
Copp, David 203
corporeal schema 143–145 see also double consciousness, embodied cognition, perceptual habits, stereotype threat
counterstereotypes 254
credibility 119–120
credibility deficits 118–119
crime-suggestive acuity 104
criminality, base rates for 59
criteria-based decision-making 248–249
criticisms of implicit bias: discussion questions 72–73; examples of 7; future readings 72
cultural analysis 99–101, 112
cultural stereotypes 215–216, 226
Dasgupta, Nilanjana 78
Davis, E. 121
debiasing strategies: accentuate the positive 246–247; approach mindsets 242–243; common-ground mindsets 243–244; discussion questions 261–262; finding common ground 245; if-then plans 241–242; individual-level 238; persuasion and value 245–246; power of perspective 244; see also interventions, structural reforms
decision-making criteria 247–249, 252
deep self theories 161–164, 167 see also attributability theories
developmental psychology 4
Dewey, John 156
differential evaluations 120
Dilemmist vs. Pluralist 198–201
directed structure 141
direct measures: of people's attitudes 5–6; predictive power of 28; for social bias 24
disbelief 103
discrimination: banning the box example 235; epistemic exploitation and 124; hermeneutical injustice 127; historical patterns of 192; intentional 63; social structures causing 64–67; wrongful 222–223
discussion questions: association/associationism 112–113; debiasing strategies 261–262; epistemic injustices 132, 229–230; epistemic responsibility 188–189; habit(s) 54–55; moral responsibility 170–171; normative conflict 208; psychological explanation 36–37; stereotype threat 151; structural reforms 262–263
disowned behavior 103, 104
disruption 11
divergence: associationism and 20–21; direct measures and 26; dual-construct models of 21–22; motivational 27, 30; predictive validity 28; reportability 30; social sensitivity and 27–28, 30
diversified experimentalism 241
dominant conceptual resources 127–128
Dotson, Kristie 124, 127, 131
double consciousness 141–143 see also epidermal racial schema
dual-construct models: ad-hoc 33; direct vs. indirect measures in 30–32; of divergence 21–22; explaining rationality data 33
Du Bois, W.E.B. 141–143
Eberhardt, Jennifer 104
economic inequities 65–66
Eidelson, Benjamin 88–89
emancipatory alternatives 259
embodied cognition 48
embodied view of bias 6–7
employment: Ban the Box initiatives 234–235; of ex-offenders 234–235; gender-neutral parental benefits 236–237; gender pay gap 235–237, 249; leave policies 236–238; promotion rates 236; workplace democracy 257
environmental signals 253–254 see also stereotype threat
epidermal racial schema 143
epistemic agents, defined 174
epistemic appropriation 121–123, 131–132
epistemic detox 180
epistemic diligence 177, 185, 186
epistemic evaluations of bias 85
epistemic exploitation 123–125, 131–132
epistemic friction 179, 180–181, 181, 182–184
epistemic humility 177, 184, 186, 240
epistemic injustices: anonymous review and 250–251; contributory injustice 127–130; credibility deficits 119–120; credibility denial 121; discussion questions 132, 229–230; epistemic exploitation 123–125; future readings 131–132, 227–229; hermeneutical power 125–127; racial segregation and 117; remedies for 130–131; stereotype threat and 149; testimonial injustice 118–120; testimonial smothering 124; see also Hidden Figures (film)
epistemic norms vs. ethical norms 14
epistemic objections 78, 84–90, 92, 93 see also epistemic rationality of belief
epistemic ought 201–203
epistemic practices 90, 116, 174–175, 185–186, 187
epistemic rationality of belief 195–197, 204 see also ill-founded
epistemic resistance 187
epistemic responsibility: calling in 184–185; calling out 184–185; collective irresponsibility 178–179; condition-based approaches to 177–179; discussion questions 188–189; future readings 187–188; on individual level 178; individual responsibility vs. 179–180; mitigating implicit biases 180; progressive stacking 183–184; truth-tracking judgments 179; white fragility 185; world-traveling and 182–183; see also moral responsibility, social epistemology
epistemic state of ignorance 178–179
epistemic vices 178
epistemic virtues 177–178, 180, 181, 186 see also epistemic rationality of belief, epistemic responsibility
epistemology: appropriation 10; beliefs and 107; bypass scenario 107–108; exploitation 10; future readings 111–112; injustices 10, 118–120; metaphors and (see metaphors); objections 78, 84–90 (see also heuristics); perception and 9–10; perceptions and 108; use-all-your-information conception 89–90; see also epistemic responsibility, knowledge, moral responsibility, social epistemology
ethical norms vs. epistemic norms 14
Evaluative Priming Task (EPT) 27
evidence: beliefs based on 203–205; defined 196; statistical 193–194, 197; see also epistemic rationality of belief, epistemic responsibility, ill-founded
ex-offenders 234–235
experimental mindset 16, 233–234
explanatory adequacy criterion 219, 223–225
explanatory monism 225 see also explanatory particularism, pluralism
explanatory particularism 225 see also explanatory monism, pluralism
explicit bias: causal role in discriminatory practices 62–63; causing discriminatory behavior 63; implicit bias vs. 4–5; inequities explained by 62–64
explicit constructs 30
face-to-face cooperation 244
fairness: accuracy vs. 192, 194, 197–198, 201, 203; etiquette rules and 200; Pusha T on 193; statistical evidence and 193–194, 197
Fanon, Frantz 142–144, 221–222
fast thinking 83, 87–88, 90
fatherhood pay bonus 236
fearful thinking 109
Festinger, Leon 33
Fiske, Susan 83
Frankish, Keith 84
freedom of association 256
Fricker, Miranda 118, 125
future readings: association/associationism 111–112; biased judgments 93–94; criticisms of implicit bias 72; cultural analysis 112; epistemic appropriation 131–132; epistemic exploitation 131–132; epistemic injustices 131–132, 227–229; epistemic responsibility 187–188; epistemology 111–112; habit-formation 52–54; metaphors 92–93; moral responsibility 168–170; oppression 150; perceptions 52; stereotypes 207; stereotype threat 150; structural reforms 260–261
Gallagher, Shaun 144
Gawronski, B. 70
gender bias: in academic hiring 62; changing individuals' 239; in tenure-track jobs 61–62; women leaders in the workplace 79
gender equality 222
gender habitus 49–50
gender-leadership biases 253
gender pay gap 235–237, 249
gender stereotypes 85, 122, 215
Gendler, Tamar 35, 191, 199
generalizations 81, 84–85, 88–89, 92, 207 see also base rates, normative conflict
gerrymandering perception 216–217 see also base rates, generalizations
Glenn, John 117
Goble, Katherine see Hidden Figures (film)
Greenwald, Tony 26–27, 77
group norms 216, 223 see also stereotype(s)
habit(s): acquisitions of 44; bodily movement repetitions 46–47; defined 43, 146; discussion questions 54–55; disruption of 140–141, 146–148; embodied account of 45–48; formation of 46–47, 53; future readings 52–54; of a social group 50–51; as social groups property 49; stereotype threat disrupting 140–141, 146–148
Habits of Racism, The (Ngo) 47–48
habit-training 243
habitual body 143–144
habitual tasks 43–44
habitus 7, 49–51 see also perceptual habits, social structures
Harper, Shaun 80
Haslanger, Sally 64, 220–221
hasty judgment 103
Hermanson, Sean 59
hermeneutical injustice 125–127, 131 see also contributory injustice, epistemic injustices
hermeneutical resources 125, 130
heuristics 36, 85–88 see also dual-construct models, systematic ways of reasoning
hidden biases, marketing and 1
Hidden Figures (film): contributory injustice 128–129; epistemic appropriation 121; epistemic exploitation 123–124; epistemic injustices in 116–117; hermeneutical injustice 126; racial segregation 117; testimonial injustice 118
hijacked experiences 108–109
Holroyd, Jules 157–158
homunculus fallacy 23, 34, 35
hooks, bell 214
housing, racial segregation in 1–2
identity contingencies 137, 142, 147
identity threat see stereotype threat
if-then plans 241–242
ill-founded 9–10, 107–108
implementation intentions 158, 241–242
Implicit Association Test (IAT) 24–25, 41–43, 66–67, 70, 158, 218–219
implicit bias: defined 41; explicit bias vs. 4–5; measuring 69–71; as in the mind 41
implicit bias research: bigotry and 2; critics of 1–2
implicit bias tools 68–71 see also Implicit Association Test (IAT)
implicit bias trainings 1
implicit constructs 30
imposter syndrome 248
in-between belief 34 see also dual-construct models
incongruent blocks 25
indirect control of biases 159–160
indirect measures: examples of 25–26; of people's attitudes 6; predictive power of 28; simple instructions changing 29; for social bias 24–25; strength of argument and 29–30
indirect responsibility for bias 158–159
individualistic explanations 215, 224, 225 see also explanatory monism, structural explanations
inductive reasoning 88
inequalities: inequity vs. 57–58; primary drivers of 7; see also segregation
inequities: economic 65–66; explained by explicit bias 62–64; group-based 59–61; inequalities vs. 57–58; prejudice affecting 65; racial 64–65; see also segregation
in-group-out-group biases 51
inhibitions 136
injustices see epistemic injustices
Inside Out (film) 23
intergroup cooperation 254–256
interpretative frames 78–79
interventions: criterion for 219–220; debiasing 159–160; experimental approach to 240–241; perspective-taking 244; see also debiasing strategies
introspective error 103
Jackson, Mary 117 see also Hidden Figures (film)
judgments: hasty 103; of trustworthiness 120; truth-tracking 179; see also biased judgments, testimonial injustice
Jussim, Lee 67
just plain ought 201–203
Kahneman, Daniel 83
kinaesthetic awareness 144
King, Martin Luther, Jr. 260
knowledge: biased judgments undermining 83; discussion questions 94–95; epistemic agents 174; epistemic practices 174; racism hindering access to 117; social power and 116; see also bias as fog, bias as shortcuts
Kristof, Nicholas 67
Kuhn, Thomas 22
Kukla, Rebecca 251
law school admissions 248
Lippert-Rasmussen, Kasper 89
lived body 144–145
Logic of Practice, The (Bourdieu) 49–50
MacDonald, Heather 58–59
Malcolm X 257–258
malleability of implicit bias 21
Mallon, Ron 64–65
marginalized groups: contributory injustice 129–130; credibility deficits 130; epistemic exploitation and 10, 124; epistemic friction 182; hermeneutical injustice 125; intergroup cooperation and 255–256; in job market 60
marketing: hidden biases and 1; hidden biases in 1
Mason, E. 166
McHugh, Nancy 182
measurement of implicit bias 70, 71
Medina, José 179–180
mental constructs 23–26
mental states 22–23
Merleau-Ponty, Maurice 44, 46–47, 49, 143–144
metaphors: attributing centrality 79; fog as oppression 80–81; future readings 92–93; interpretative frames 78–79; see also bias as fog, bias as shortcuts
Mills, Charles 80, 178–179
mind bugs 77–78
minimal associations see association/associationism
moral encroachment 198, 203–206, 207–208 see also epistemic rationality of belief, epistemic responsibility, epistemic virtues
moral relevance criterion 219, 220–223
moral responsibility: accountability 165–166; actions caused by racial bias 153–155; anonymous review and 250; by apology 166; attributability theories 157, 162, 165–166; attributionism and 157, 162, 165–166; blameworthiness for biases 167–168; collective dimensions of 13; conditions of 156–157, 176–177; deep self theories 161–164; defined 12; discussion questions 170–171; future readings 168–170; indirect activation of biases and 158–159; as individual matter 13, 153; influencing implicit biases 164, 175–176; intentional actions vs. implicit bias 157–160; as justifiable 176; revising nature of 167; revisionist theories of 164–166, 168; taking ownership of bias 166; toxic environments and 163; toxic social environments and 163; see also accountability for implicit bias, attributability theories, epistemic responsibility, epistemology, individualistic explanations, social epistemology, volitionist theories
motivation 27
Mullainathan, Sendhil 61
narratives, cultural production of 99–100
NASA (National Aeronautics and Space Administration) 116–117 see also Hidden Figures (film)
Nelson, Mark 201–202
New York Times, The, Who Me? Biased? (videos) 77, 80, 82, 84
Ngo, Helen 47–48, 147
Nix, Justin 60
Noë, Alva 46
non-unified theories 90
normative conflict: belief formation and 191–193; as dilemmas 199; dilemmas 198–200; discussion questions 208; moral encroachment 203–206; pluralism and 200; racial profiling 192; statistical evidence 193–196
Obama, Barack 192
ontological individualism 214–215
open-mindedness 177, 182–183, 186 see also epistemic responsibility
oppression: characteristics of 147; defined 147; future readings 150; group 91
paradigms 22
Payne, Keith 102
perception(s): biased cognition and 9, 194; conscious aspects of 102; epistemic impact of 108; epistemology and 9–10; future readings 52; gerrymandering 216–217; hijacked experiences and 108–109; Implicit Association Test as form of biased 42–43; irrationality of 109–110; judgment and 102, 104; law and 109–110; Merleau-Ponty's conception of 44–45; phenomenology of 46; prior state influencing 102–104; racial attitudes impacting 101–104; racial bias and 42; visual appearances 102, 216–217; see also Rationality of Perception view
perceptual habits 6, 41–45 see also embodied cognition
perceptual hijacking 109
perspective-taking interventions 244
phenomenology: defined 45–46; habit-formation 46–47; of perception 46; stereotype threat and 146
pluralism 200 see also explanatory monism
Pluralist vs. Dilemmist 198–201
Pohlhaus, Gaile 128
police shootings, bias contributing to 60–61
power inequalities 257–258
power relations 118
practical utility criterion 220, 226
pragmatic encroachment 203–204
predictive power 28
predictive validity 28, 68
prejudice: inequities and 65; residue of 4–5; social equality reducing 257
prestige bias 250
prevention-focused mindset 243
proactive integrationist strategies 255–256
progressive stacking 183–184
Project Implicit 41, 67, 218
propositionalization 31
propositional model 32
psychological biases 66
psychological explanation 5, 20, 22–24, 35, 36–37
psychological precursors 259
psychometrics 68 see also explicit constructs, implicit constructs, mental constructs
purity 245–246
Pusha T 193
quick thinking see fast thinking
racial colorblindness 251
Racial Contract, The (Mills) 80–81, 178–179
racial injustice, defined 211
racial perception see biased perceptions
racial segregation: Hidden Figures (film) 117; in housing 1–2; racial separation vs. 257–258; in United States 4; see also segregation
racial separation 257–258
racism/racial disparity: contributory injustice and 128–129; economic inequities 64–65; hindering access to knowledge 117; in policing 58–59; racial attitudes 101–104; racial profiling 139, 192, 196; racist body language 47–48; in tenure-track jobs 61–62
racist beliefs 192–193, 206–207
rational interventions 28–29, 32
rationality of bias: belief-based models of 21–22; classic conditioning 28–30; see also epistemic rationality of belief, epistemic responsibility, epistemic virtues
Rationality of Perception view 109–111
reasoning, systematic ways of 86
recidivism 234
relational information 32
representativeness heuristic 85–86, 88
Reshamwala, Saleem 77, 80, 84
residential segregation 1–2
responsibility see moral responsibility
Rice, Tamir 109
Rubber Hand Illusion 43
salience 11, 44, 79–80, 84, 250
Saul, Jennifer 199
Schroer, Jeanine Weekes 140
Schwitzgebel, E. 34
science communication 67–68
segregation 64, 65–66, 130 see also inequities, racial segregation
self-attention 143
self-confidence 135, 139
self-consciousness 136, 137, 138
self-observation 141–142
sensory perceptions 80
sentence-like structures 32
separation see racial separation
sexual harassment 125, 127–128
Shapiro, Ben 197
shooter task 106
Singal, Jesse 62
Skinner, B. F. 22
Slovic, Paul 87
social biases: direct measures 24; divergence and 26–28; empirical data of 24–30; fairness vs. accuracy in 14; indirect measures 24–25; influence of 216; rationality of 28–30; reportability 26–27; tests for 24–26
social cues 135–136
social environments 218–219
social epistemology 116 see also epistemology
social equality 257
social identity 135
social injustice: approaches to 214–215; defined 211; examples of 211–213; individualistic approaches to 15, 211–212, 214–215; normative core to 221–222; structural approaches to 15, 212–213, 215
social norms 238 see also normative conflict
social perception, racialized bias in 99–101
social power, knowledge and 116
social privileges 65
social sensitivity 27–28
social structures: as cause of discrimination 64–67; epistemic injustices and 130; explanatory adequacy criterion 219; gerrymandering and 216–217; implicit bias and 215–219; internalized 215–216; moral relevance criterion 219; overhauling existing 239; practical utility criterion 220; prioritizing reforms 239; psychological biases and 66; racist 126; relationships between parts and the whole 225; shaping individuals' lives 216, 224; stereotype threat and 149; structural thinking and 227; unjust 192; visual depiction of 213
socioeconomic disparities 58, 59
Souls of Black Folk, The (Du Bois) 141, 143
spontaneous evaluation response 31
statistical evidence 193–194, 217
Steele, Claude 134, 136–138
stereotype(s): in advertising 81; bias as fog 81; biased judgments motivating 8; controlling and/or reducing 122; counterstereotypes 254; cultural 215–216, 226; emotions and 87; evaluative function of 91; explaining behavior through 247; fast thinking and 83, 87–88; future readings 207; gender 85, 122, 215; group 81; inductive reasoning and 88; misleading/false 81; racial 122; as shortcuts 83–84; unwarranted 84–85; see also belief-based models, bias-of-crowds model, generalizations, mental constructs, propositional model
stereotype threat: as anxiety trigger 139; classical phenomenological descriptions of 141–146; as conceptual paradigm 140; consequences of 139, 147; corporeal schema and 143–145; defined 11, 134; discussion questions 151; disruption and 11; effects of 135, 146; epistemic injustices and 149; future readings 150; gendered 139; as habit disruption 140–141, 146–148; harm of 139; identity contingency 137; as inhibition 136; as lack of self-confidence 135; law school admissions 248; self-consciousness of 137; self-doubt and 138; social identity triggered by 135; social structures and 149; standard view of 138–141; strategies to minimize 148–149; test-taking anxiety 140; toxic remedies and 148–149; as underperformance 136; see also double consciousness, epidermal racial schema
Stonewall rioters 224
structural analysis 213–215
structural explanations 215, 219, 223, 225 see also cultural stereotypes, individualistic explanations, social structures
structural factors 213, 238–239
structuralists 65
structural reforms: alleviate status hierarchies 256–257; anonymous review 250–252; combating status disparities 257; counterstereotypes 254; criteria-based decision-making 248–249; decision-making criteria 247–249; discussion questions 262–263; environmental signals 253; future readings 260–261; imposter syndrome and 248; intergroup cooperation 254–256; journal review process and 251; powers to the peoples 256–259; proactive integrationist strategies 255–256; psychological precursors and 259; uprooting biases by 252; see also debiasing strategies, interventions
symmetry 8–9, 88
systematic ways of reasoning 86
Take the Lead 79
Taylor, Shelley 83
testimonial injustice 10, 118–120
testimonial smothering 124
third-party punishment 51
Tokio Kid 81, 82
toxic environments 163
transconceptual communication 131
transformative experience 211–212 see also individualistic explanations, phenomenology
trustworthiness 120
Tversky, Amos 83
unconscious mental association 20
underperformance 11, 135–136, 139, 149
unified theories 90
universality of bias 84
use-all-your-information conception 89–90
Vasilyeva, Nadia 224, 227
Vaughan, Dorothy see Hidden Figures (film)
visual appearances 102, 216–217
visual tuning device 104
volitionism 156–157, 162, 167–168 see also attributability theories
War on Cops, The (MacDonald) 58–59
warranted/unwarranted beliefs 67, 78, 84–85 see also epistemic rationality of belief, epistemic responsibility, epistemic virtues, evidence, ill-founded
weapon bias 42–43
weapon categorization 102–103
Weiss, Gail 43
Weldon, Michele 79
Whistling Vivaldi (Steele) 137–138
white flight 256
white fragility 185
Who Me? Biased? (videos) 77
willful ignorance 128
Williams, Bernard 198
Wilson, Darren 109
wishful thinking 109
workplace democracy 257
world traveling 13, 182–183
Would You Rather? questions 243–244
Yancy, George 99
Young, Iris 147, 221–222
Young, Marion 50
Zheng, Robin 165–166, 176–177