Morality by Degrees
Morality by Degrees
Reasons without Demands
Alastair Norcross
Great Clarendon Street, Oxford, OX2 6DP, United Kingdom
Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries
© Alastair Norcross 2020
The moral rights of the author have been asserted
First Edition published in 2020
Impression: 1
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above
You must not circulate this work in any other form and you must impose this same condition on any acquirer
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America
British Library Cataloguing in Publication Data
Data available
Library of Congress Control Number: 2019954573
ISBN 978–0–19–884499–0
Printed and bound in Great Britain by Clays Ltd, Elcograf S.p.A.
Preface

In 1988, I was in graduate school in Syracuse, in a seminar on consequentialism, taught by Jonathan Bennett. I was already convinced that utilitarianism was the correct general approach to ethics, but I was still exploring what form it should take. For that seminar, the students wrote short weekly papers. One week, in papers on supererogation, both I and another student, Frances Howard (now Howard-Snyder), suggested that, perhaps, consequentialist theories in general (and utilitarianism in particular) would be better served by jettisoning the maximizing account of rightness, and any other account, and concentrating instead on purely comparative judgments of alternatives. Much of the criticism of consequentialist approaches seemed to focus on the demandingness objection, which is really an objection to maximizing approaches. Although that can be defused by retreating to a satisficing approach, there seemed to be good reasons to reject that approach too, and to reject the idea of demands altogether. Later in the semester, we read portions of Michael Slote's book, Common-sense Morality and Consequentialism, and found that he had coined a term for this approach (which he didn't himself endorse): "scalar morality." Frances and I decided to write a co-authored paper, exploring and arguing for the idea. I also included a brief defense of the scalar approach in my dissertation (on moral conflicts). The paper that Frances and I wrote received valuable feedback from many people, notably including Jonathan Bennett, Shelly Kagan, Michael Slote, Robert Audi, and Dan Howard-Snyder. It was rejected by many journals, before eventually being published in the Journal of Philosophical Research. We knew we were onto something when one journal (Mind, I think) rejected it with two referee reports, one claiming that our approach was so outlandish that no-one (besides the two of us, presumably) could take it seriously, and the other claiming that our approach was so boringly obvious that everyone already accepted it.
(A good journal editor, when faced with two reports like that, would either accept immediately, or at least send the paper to a third, but I digress.) Even though Frances and I didn't pursue our collaboration beyond that one paper, this book clearly owes a large debt to her, as I doubt whether I would have continued to develop the scalar approach, if we hadn't persisted with our original paper. After publishing that co-authored paper, I continued to explore various aspects of the scalar approach, on and off, while also pursuing different research projects. Several years later, Derek Parfit generously offered to read my work, after refereeing one of my papers (not on scalar morality) for a journal. He subsequently encouraged me to develop the scalar approach into a book for Oxford, and put me in touch with Peter Momtchiloff. He also gave me much valuable feedback on early drafts of several chapters. I (like so many others) owe Derek an enormous debt of gratitude. One of my biggest regrets is that it has taken me so long to finish this book that he is no longer around to see it.

I am also considerably indebted to my wonderful colleagues at Southern Methodist University, Rice University, and the University of Colorado, who have given me great feedback on various papers, which have morphed into chapters of this book. I would like especially to mention Steve Sverdlik, Mark Heller, Doug Ehring, George Sher, Chris Heathwood, and David Boonin. I have also received valuable comments from Julia Driver, Shelly Kagan, Ben Bradley, and Peter Singer. The scalar approach that I have advocated in various articles has attracted some scholarly attention, both critical and sympathetic (sometimes both at once), and I have been repeatedly pressed to "hurry up and finish the damn book." Part of what has taken me so long is that I keep getting distracted by other projects, both in theoretical ethics and applied ethics, most notably projects concerning ethics and animals. Another part is that it has taken me a long time to settle on the form I wanted this book to take. You will notice that it is quite a short book. I had considered making it much longer, with detailed responses to the various published criticisms of the approach. In the end, however, I decided that I wanted a short book, motivating and arguing for the approach I take. This is not because I think the published criticisms aren't worthy of response. That is a different project. In my experience, it is highly unusual for many (any?) people to read the whole of a philosophy book that runs to more than about 200 pages.
I am of the opinion that everything in this book is worth reading, and so I would like to maximize the chances (even though I reject maximization) that a reader will get through the whole thing, before succumbing to the kind of intellectual fatigue that has prevented so many of us from reading every page of some of the much heftier tomes that adorn our shelves. (I am reminded of the standard quip of the philosopher who, when asked about a book on their shelf whether they have read it, replies "read it? I haven't even taught it!") A few years ago, I was part of an author-meets-critics panel at the American Philosophical Association on a book that ran to over 500 pages. The other two critics were broadly sympathetic to the approach of the book, and I was not. I felt rather guilty about the fact that I had read only about 300 of the 500-plus pages (the other 200-odd pages were detailed responses to criticisms of the author's earlier work). But, at the session, I was cheered to hear from the other two critics that they had actually read considerably less of it than I had.

I would also like to acknowledge the tremendous debt I owe to my family. My parents were not only two of the most loving and supportive people that I have ever known, but were also always encouraging of my intellectual pursuits. They were thrilled when I decided to pursue a degree in classics (or, as they still call it at Oxford, "literae humaniores"), and even more thrilled when I decided to continue my studies in philosophy at graduate school. Whenever I talk to students, who tell me that they really want to pursue philosophy in college, but their parents are pressuring them to study business, or law, or engineering, and sneering at philosophy (which happens distressingly often), I remind myself how fortunate I was to have such supportive parents. I am sorry that it has taken so long to finish this book that they are both gone. I still miss them deeply. My son, David, has also always been supportive of and interested in my work. I only hope that he is at least a little as proud of me as I am of him. Finally, nothing I do would be possible, meaningful, or even slightly enjoyable without the love and support of my wonderful wife Diana. I still have no idea what I did to deserve her, but I am endlessly grateful for her. This book is for her above all.
Table of Contents

Acknowledgments xi
1. Introduction 1
2. The Scalar Approach to Consequentialism 14
3. Good and Bad Actions 48
4. Harm 82
5. Contextualism: Good, Right, and Harm 108
6. Contextualism: Determinism, Possibility, and the Non-Identity Problem 128
Bibliography 153
Index 155
Acknowledgments

The arguments in Chapter 2 draw heavily on material earlier published in Alastair Norcross, "Reasons Without Demands: Rethinking Rightness," in: James Dreier (ed.), Contemporary Debates in Moral Theory, pp. 38–53, Copyright © 2006 by Blackwell Publishing Ltd.

The arguments in Chapter 3 draw heavily on material earlier published in Alastair Norcross, "Good and Bad Actions," The Philosophical Review, Volume 106, Issue 1, pp. 1–34, doi: 10.2307/2998340, Copyright © 1997, Duke University Press.

Chapter 4 contains significant material revised from Alastair Norcross, "Harming in Context," Philosophical Studies, Volume 123, Issue 1–2, pp. 149–73, doi: 10.1007/s11098-004-5220-3, Copyright © 2005. Reprinted by permission from Springer Nature: Springer.

Chapter 5 contains significant material revised from Alastair Norcross, "Contextualism for Consequentialists," Acta Analytica, Volume 20, Issue 2, pp. 80–90, doi: 10.1007/s12136-005-1023-1, Copyright © 2005. Reprinted by permission from Springer Nature: Springer.
1 Introduction

1.1 The Consequentialist Approach

In one of the first great works in ethics in the Western tradition, The Nicomachean Ethics, Aristotle begins Book I, chapter 1 as follows: "Every art and every inquiry, and similarly every action and pursuit, is thought to aim at some good; and for this reason the good has rightly been declared to be that at which all things aim." He continues, in chapter 2:

If, then, there is some end of the things we do, which we desire for its own sake (everything else being desired for the sake of this), and if we do not choose everything for the sake of something else (for at that rate the process would go on to infinity, so that our desire would be empty and vain), clearly this must be the good and the chief good. Will not the knowledge of it, then, have a great influence on life?1
Likewise, early in the first chapter of his treatise Utilitarianism, John Stuart Mill declares:

From the dawn of philosophy, the question concerning the summum bonum, or, what is the same thing, concerning the foundation of morality, has been accounted the main problem in speculative thought . . . All action is for the sake of some end, and rules of action, it seems natural to suppose, must take their whole character and colour from the end to which they are subservient.2
1 Aristotle, Nicomachean Ethics.
2 Mill 1861.
Both Aristotle and Mill are talking about what is intrinsically good, that is, what is good for its own sake, simply in and of itself.3 Another way to put this, as Mill does, is to talk about what is intrinsically desirable, or worth desiring or pursuing for its own sake. Most, though not all, philosophers believe that some things are intrinsically good. The most common such thing, and the foundation of the best-known consequentialist theory, Utilitarianism, is happiness or well-being, often taken to include the well-being of nonhuman animals too. Likewise, most philosophers also believe that some things are intrinsically bad, such as unhappiness, or suffering. This doesn't entail that happiness can't be instrumentally bad, or that unhappiness can't be instrumentally good. That is, a particular person's happiness may lead to unhappiness for others. Suppose that a mugger gets a lot of happiness out of stealing from others. The fact that he gets such happiness could lead him to steal more and more, causing unhappiness to others. Likewise, a talented blues singer and composer may be very unhappy, leading her to compose and sing wonderful songs, which cause others to be happy.

3 What is intrinsically good is noninstrumentally good, and good impartially.

The claim that some things, at least including happiness, are intrinsically good, and others, at least including unhappiness or suffering, are intrinsically bad, is appealing, and almost certainly true. To appreciate the appeal of the view, suppose that a friend tells you that they heard yesterday of a momentous event occurring in a distant country. Although they distinctly remember hearing of such an event, they can't quite remember what it was, because they were a little distracted and inebriated at the time. You press them to remember, and they tell you that they are pretty sure that it was one of two things: either (i) there was a terrible natural disaster in the relevant country, painfully killing and maiming hundreds of thousands of people and animals, or (ii) a discovery was made, enabling the country to eradicate a terrible disease, which had been painfully killing and maiming hundreds of thousands of people and animals. You, obviously, have no control over which of the two events actually took place. It would be strange, however, if you didn't hope that it was (ii) rather than (i).
And, surely the reason you would hope it was (ii) rather than (i) is that you believe it would be better if hundreds of thousands of people and animals were spared from painful death and maiming rather than having it inflicted on them. You believe it would be overall better, that is, not just better for you, or for the people involved. The state of affairs, as philosophers like to say, involving the sentient creatures being spared from death and maiming is, other things being equal, better than the state of affairs involving the sentient creatures suffering from death and maiming. And the reason for this is that the suffering caused by painful death and maiming is intrinsically bad, and the happiness caused by eradicating such things is intrinsically good. So far, I have been explaining, and at least partially motivating, the following axiological thesis:

Value: There is such a thing as intrinsic value, and some things have it. Some things are just good, or desirable, or worth having or pursuing for their own sakes. At least (and perhaps at most), one of these things is well-being or happiness. Likewise, some things are just bad, or undesirable, or worth eschewing or avoiding for their own sakes. At least (and perhaps at most), one of these things is unhappiness, suffering, or being badly off ("ill-being" seems to be an infelicitous expression). Furthermore, everything else that is either good or bad, but not intrinsically so, is so only because of its contribution (or at least some form of relation) to intrinsic value, either good or bad.
Value is relatively uncontroversial. Most philosophers, and certainly most who work in ethics, believe that some things, such as happiness and unhappiness, are intrinsically good and bad, and so they believe that, other things being equal, things go better (that is, the world is better) when there's more of the former and less of the latter. This view is shared by consequentialists and most nonconsequentialists. Some nonconsequentialists do reject Value, or at least they claim to, as it applies to the notion of one state of affairs being overall better or worse than another. Philippa Foot's notorious thesis in "Utilitarianism and the Virtues," that the notion of a "good state of affairs" is incoherent, or at least empty, seems to be a rejection of Value.
I have long wondered whether she, and the small number of philosophers who claim to agree with her, really do reject Value, or whether they simply claim to (or perhaps mistakenly believe themselves to), because they see that rejection as the best or only way of rejecting consequentialism—a thesis to which they are emotionally averse. In fact, Foot makes it perfectly clear that a large part of her motivation for rejecting Value is that she thinks that an acceptance of that thesis leads inexorably to consequentialism. Although this is an important topic, worthy of further discussion, I won't be exploring it further in this book. The arguments of this book are addressed mostly to those who are, at least, already attracted to Value.

So far, so good. But in a thesis such as Value, we don't yet have the materials for a complete ethical theory. Ethics is not only concerned with discovering or evaluating situations, or states of affairs, as good or bad, but is also, some would say especially, concerned with evaluating conduct. Knowing that some things are intrinsically good and others are intrinsically bad, and that the world would be better if there's a larger net quantity of good, doesn't, by itself, tell me what choice to make in any particular situation. Among other things, an ethical theory is supposed to help guide actions, by giving criteria for evaluating choices. Consider, then, the following deontic thesis:

Act Relevance: Intrinsic value provides intrinsic reasons for action. That one outcome contains more intrinsic goodness than another is, or at least provides, a reason to act in such a way that the former rather than the latter occurs.
Both Value and Act Relevance are a central part of every consequentialist theory with which I am familiar. Their truth is also acknowledged by many, perhaps most, non-consequentialist approaches as well. If you accept Value, it's hard to see how you could reject Act Relevance. In fact, I don't think I can really make sense of the claim that intrinsic value exists, but doesn't provide any reasons for action. What could it mean to say that something was worth having or pursuing for its own sake, but that there was no reason to have or pursue it? As I said, many philosophers, not just consequentialists, accept both Value and Act Relevance. To get a distinctively consequentialist approach to ethics, we need to add another deontic thesis:

Act Irrelevance: Nothing other than intrinsic value provides intrinsic reasons for action.
At this point, I should say a little about the difference between intrinsic and extrinsic reasons for action. To say that a difference in intrinsic value between two outcomes provides an intrinsic reason to choose one over the other is just to say that the reason is to be found in the difference in intrinsic value, considered in and of itself, and not anything that the difference might lead to, indicate, exemplify, or the like. Contrast this with an example involving extrinsic reasons. That a trusted advisor recommends action A over action B may well provide me with a reason to prefer A over B, but only because, and to the extent that, I believe the trusted advisor to be sensitive to intrinsic value. The advice, then, does not provide an intrinsic reason to prefer A to B, but rather is evidence that there is such an intrinsic reason.
1.2 Core Consequentialism

If we combine Value, Act Relevance, and Act Irrelevance, we can get a distinctively consequentialist approach to an ethics of action. We connect an axiological theory of the goodness (or badness) of states of affairs (including the maximal state of affairs that philosophers usually refer to as "the world") with a deontic theory of the moral evaluation of different options in a situation of choice as follows:

Core Consequentialism (CC): An action is morally better or worse than available alternatives, and thus there is greater or lesser (moral) reason to opt for it, entirely to the extent that the world containing it is overall better or worse (contains more or less net intrinsic value) than the worlds containing the alternatives.4

4 I take it that indirect consequentialist theories, such as rule consequentialism, and motive consequentialism, reject Act Irrelevance, because some sort of indirect relation between intrinsic value and states of character, or group dispositions, or the like, also provides intrinsic reasons for action. This distinguishes them from theories such as Hare's two-level approach, or Railton's sophisticated consequentialism.
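Put schematically (a rough sketch for clarity, in notation that is not used elsewhere in this book: let $W_a$ be the world containing available action $a$, and $V(W_a)$ its net intrinsic value), CC makes two comparative claims:

$a$ is morally better than $b$ if and only if $V(W_a) > V(W_b)$; and the moral reason to opt for $a$ rather than $b$ is stronger the greater the difference $V(W_a) - V(W_b)$.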
Most accounts of consequentialist approaches to ethics talk of "the consequences" of actions, and propose evaluating actions by comparing their consequences with the consequences of alternatives.
This is potentially misleading, because it suggests that there is a sharp distinction between an action itself and its consequences. Whether it's possible to make such a distinction or not, it is no part of the consequentialist approach that actions themselves are not part of the states of affairs to be evaluated as better or worse than alternatives. An action itself may, for example, be pleasurable or painful. Clearly the pleasure or pain of the action is just as relevant to assessing it as is any pleasure or pain that follows the action. Likewise, talk of "the world" that contains the action makes clear that any difference an action makes to overall states of affairs is at least potentially relevant to its evaluation compared with available alternatives. Although I may sometimes talk about "the consequences" of an action, or compare the consequences of one action with the consequences of another, this will only be for convenience. The strict interpretation will always be in terms of worlds containing the different actions.

I would like to point out two crucial features of CC. First, the fundamental evaluation of actions, for a consequentialist, is always comparative. This is because the role of an ethical theory, when it comes to actions, is to guide choices, and it does this by providing reasons to act one way rather than another. Thus, reasons are also fundamentally comparative. Talk of "a reason to do A" should always be understood as shorthand for "a reason to do A rather than B." Second, there is no mention in CC of an action being "right" or "wrong," of morality "demanding" or "requiring" or "commanding" that certain things be done or avoided, or even of morality telling agents what they "ought" to do. This is not because consequentialists don't talk about such things. Most (though not all) of them do. But what makes consequentialism a distinctive kind of ethical theory is its approach to the general moral evaluation of actions, not to specific categories of evaluation, such as right and wrong, permissible, impermissible, demanded, required, etc. For a consequentialist, the only consideration of direct relevance to the moral evaluation of an action (as opposed to the character, blameworthiness, etc. of the agent) is the comparative value of its consequences.
In evaluating the consequentialist approach, as opposed to particular consequentialist theories (such as maximizing utilitarianism), it is important to see that no particular account of right and wrong, permissible, impermissible, and the like (including an account which ignores those notions) is essential to the approach itself. Although, as I will presently explain, many consequentialists, and other philosophers, seem to assume otherwise.

Here then, roughly, is the motivating idea behind consequentialist approaches to morality. We care, or at least we should care, about how good or bad the world is. It matters to us. Through our behavior, we can make a difference, often small, but sometimes significant, to how good or bad the world is. To the extent that we can make a difference, it is better, morally better that is, for us to make the world a better place rather than a worse place, and the bigger the positive difference we can make, the better. It is hard to deny the force of this motivating idea. Most philosophers feel its attraction, and accept that the difference our behavior makes to the overall goodness of the world provides moral reasons for choosing to behave in some ways rather than others. Where some disagree with consequentialism is with the claim that such reasons are the only reasons that are directly relevant to the moral evaluation of behavior. Perhaps, they claim, there are other considerations that, at least sometimes, compete with consequentialist reasons, undercutting or overriding them.
1.3 The Demandingness Objection

As I said above, most consequentialist theories do include accounts of what makes actions right or wrong, permissible, impermissible, and related notions. For example, in introducing consequentialism, Derek Parfit gives as its "central claim":

(C1) There is one ultimate moral aim: that outcomes be as good as possible.5 (My emphasis)

As applied to acts, Parfit says, this gives us:

(C2) What each of us ought to do is whatever would make the outcome best.6 (My emphasis)
5 Parfit 1984, 24.
6 Ibid.
Henry Sidgwick, in the first great systematic exposition of utilitarianism, the most common and influential consequentialist theory, defines the theory as follows:

By "utilitarianism" I mean the ethical theory according to which in any given circumstances the objectively right thing to do is what will produce the greatest amount of happiness on the whole.7 (My emphasis)
Likewise, Peter Carruthers, a critic of utilitarianism, says:

An act-utilitarian holds that there is only one duty, that is binding at all times, namely to maximize utility.8 (My emphasis)
These three philosophers, like most contemporaries who either advocate or criticize a consequentialist approach to ethics, take for granted that any such approach will center on an account of right action, duty, permissibility, what one "ought" to or "should" do, what the theory "demands" or "requires," and related notions. They also take for granted that such an account will be maximizing in form. Let us briefly examine, then, one of the currently most common, and influential, consequentialist theories, maximizing act utilitarianism, because it turns out that one of the most common objections to consequentialism is really an objection to the particular structure of this theory, and not to consequentialist approaches in general.

Utilitarianism, most famously articulated and defended by Jeremy Bentham and John Stuart Mill, is a consequentialist theory with a welfarist account of the good. That is, according to pretty much all versions of utilitarianism, the only thing that is intrinsically good is the well-being, or happiness, of creatures capable of conscious awareness (or "sentient" creatures, for short). Unsurprisingly, the only thing that is intrinsically bad is unhappiness, or suffering. The early utilitarians defined happiness in terms of pleasure, and unhappiness in terms of pain, though they didn't restrict the notions of pleasure and pain to physical or bodily sensations.

7 Sidgwick 1981, Bk. IV ch. I, sec. 1.
8 Carruthers 1992, 32.
Some modern utilitarians follow in this tradition (broadly defined as "hedonism"), while others give differing accounts of well-being or welfare. As far as the deep disagreements between consequentialists and nonconsequentialists are concerned, nothing turns on the differences between rival accounts of well-being. Most modern utilitarians adopt a maximizing account of rightness, which can be roughly expressed as follows:
MU (Maximizing Utilitarianism) tells us that only the best action in any given choice is right (unless there is a tie for first place, in which case all tied optimal actions are right). The notion of a right action is often left unexplained, but is usually understood to mean an action that the moral theory demands be done, or an action that must be done, in order for the demands of morality to be met. Thus, if you don’t perform the action with the best consequences, you haven’t met the demands of morality, and thus have behaved wrongly. It seems that MU is quite demanding; in fact many would say too demanding. Consider taking a trip to the movies, at which you spend a total of $12 on a ticket. Given the amount of unrelieved suffering in the world, and the existence of well-organized and efficient charities, how likely is it that you couldn’t do more good by donating the $12 to charity than by spending it on a movie ticket (or three or four cups of coffee)? But, if you can do more good by donating the money to charity, it would be wrong to spend it on a movie ticket, and MU thus demands that you don’t do it. Perhaps you’ll say that’s fine. It would be wrong to spend the money on a movie when you could, perhaps, supply a much-needed vaccination to an at-risk child instead. But the same reasoning will apply to the next time you consider seeing a movie, or buying a cup of coffee, or eating out, or a whole host of other seemingly unremarkable things. In fact, given how much unrelieved suffering there is in the world, and how little most affluent people are doing about it, MU seems to tell all of us that we should be working almost full-time for the benefit of others, and only devote as much time to our own well-being as is strictly neces sary to maintain our physical and emotional health in order to keep working for the benefit of others. To the extent that this seems to count
To the extent that this seems to count against MU, it is known as the "demandingness objection," and is one of the most commonly cited reasons for rejecting the theory (and other versions of maximizing consequentialism). Notice that the objection only applies, if it applies at all, to a consequentialist theory that demands maximization. A consequentialist theory that gives a different, perhaps nonmaximizing, theory of rightness, or one that denies that consequentialism demands anything at all, wouldn't be subject to this criticism.

It is interesting to note that neither Jeremy Bentham nor John Stuart Mill employed a maximizing account of rightness. Here is Bentham:

An action...may be said to be conformable to the principle of utility... when the tendency it has to augment the happiness of the community is greater than any it has to diminish it.9
And here is Mill:

The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.10
Neither Bentham nor Mill demands that an agent maximize, and Bentham's formulation doesn't suggest any demands at all. Mill's talk of actions being "right in proportion" to their consequences suggests that he thinks that actions can be more or less right or wrong, without any suggestion that morality "demands" maximal rightness. It is clearly possible to combine CC with an account of rightness that demands maximization, or with an account that sets the standard for rightness as, at least sometimes, somewhat short of maximization (perhaps as "good enough"), or with some other nonmaximizing account of rightness, or with no account of rightness at all, and no claim about demands. The last option is the one I favor, and will explain and argue for in this book.

9 Bentham 1789, ch. 1, para. 6.
10 Mill 1861, ch. 2.
1.4 Brief Outline of the Book

The focus of this book is to argue that consequentialist ethical theories should not be interpreted as theories of either the rightness or goodness of actions, but instead as scalar theories that evaluate actions as better or worse than possible alternatives. Chapters 2 and 3 focus on the notions of right action and good action respectively. Modern moral philosophy (the last three hundred years or so) has been centrally concerned with theories of the rightness and goodness of actions. The debate between consequentialist theories, such as utilitarianism, and deontological theories, such as that of Immanuel Kant, has centered on their different accounts of rightness and goodness. For at least the last hundred years the predominant consequentialist account of rightness has been that right actions are those that maximize the good (which, for a utilitarian, is usually construed as happiness). The consequentialist account of rightness has been criticized for being both too demanding and too permissive, and for failing to account for notions such as integrity and supererogation. Satisficing and maximizing versions of consequentialism have both assumed that rightness is an all-or-nothing property. I argue, in Chapter 2, that this is inimical to the spirit of consequentialism, and that, from the point of view of the consequentialist, actions should be evaluated purely in terms that admit of degree.

Although there hasn't been much discussion of the question of which actions are good, many consequentialists have made use of a notion that naturally lends itself to such an account, that of an action's consequences being on balance good (or of an action producing more good than bad). I argue, in Chapter 3, that consequentialism cannot provide a satisfactory account of the goodness of actions, on the most natural approach to the question. On this approach, to say that an action is good is to say that better consequences resulted than if it had not been performed. It turns out that there is no nonarbitrary way to identify the relevant comparison world (the world in which the action in question is not performed). I also argue that, strictly speaking, a consequentialist cannot judge one action to be better or worse than another action performed at a different time or by a different person.
I employ similar reasoning, in Chapter 4, to show that consequentialism, at the fundamental level, has no room for the notion of an action harming someone (or something).

Thus I argue that consequentialist theories should be seen as providing a much more radical alternative to other moral theories than has previously been acknowledged. Instead of providing rival accounts of the rightness and goodness of actions, consequentialist theories, at the deepest level, do away with such terms of moral evaluation altogether, and judge actions as simply better or worse than possible alternatives. The obvious objection to this approach is that it seems to undercut the action-guiding nature of morality. However, I argue that consequentialism, on my approach, still provides reasons for action. Such reasons are both scalar and essentially comparative. Thus, the judgment that a possible action, A, would be better than an alternative, B, provides a stronger reason to perform A than B, how much stronger depending on how much better A would be than B. This brings moral reasons into line with prudential reasons, which are clearly scalar.

Does this leave consequentialism unable to say anything about rightness, duty, obligation, moral requirements, goodness (as applied to actions), and harm? Not entirely. While such notions have no part to play at the deepest level of the theory, they may nonetheless be of practical significance. By way of explanation, in Chapter 5, I provide a contextualist account of these notions, drawing on contextualist approaches to the epistemic concepts of knowledge and justification. Roughly, to say that an action is right, obligatory, morally required, etc. is to say that it is close enough to the best. What counts as close enough is determined by the context in which the judgment is made. Similarly, to say that an action is good is to say that it resulted in a better world than would have resulted had the appropriate alternative been performed. To say that an action harmed someone is to say that the action resulted in that person being worse off than they would have been had the appropriate alternative been performed. In each case, the context in which the judgment is made determines the appropriate alternative. A contextualist approach to all these notions makes room for them in ordinary moral discourse, but it also illustrates why there is no room for them at the level of fundamental moral theory.
If the truth value of a judgment that an action is right or good varies according to the context in which it is made, then rightness or goodness can no more be properties of actions themselves than thisness or hereness can be properties of things or locations themselves. Finally, in Chapter 6, I extend the scalar contextualist approach to demonstrate that a consequentialist has nothing to fear from the threat of determinism, and that the so-called "non-identity problem," on which so much ink has recently been spilled, is not in the least problematic. I also briefly consider the practical upshots of adopting the scalar contextualist approach.
2 The Scalar Approach to Consequentialism

2.1 Introduction

Consequentialist theories such as utilitarianism have traditionally been viewed as theories of right action. Consequentialists have employed theories of value, theories that tell us what things are good and bad, in functions that tell us what actions are right and wrong. The dominant consequentialist function from the good to the right, at least since Sidgwick, has been maximization: an act is right if and only if it produces at least as much good as any alternative available to the agent, otherwise it is wrong. According to this maximizing function, rightness and wrongness are not matters of degree. Consequentialists are not alone on this score. Deontologists concur that rightness and wrongness are not matters of degree. There is an important difference, though. In typical deontological theories, properties that make an action right and wrong—e.g., being a keeping of a binding promise, a killing of an innocent person, or a telling of a lie—are not naturally thought of as matters of degree. So one wouldn't expect the rightness or wrongness of an act to be a matter of degree for deontology.1 But this is not the case with consequentialism. Goodness and badness, especially in the utilitarian value theory, are clearly matters of degree. So the property of an act that makes it right or wrong—how much good it produces relative to available alternatives—is naturally thought of as a matter of degree. Why, then, are rightness and wrongness not matters of degree?

I will argue that, from the point of view of a consequentialist, actions should be evaluated purely in terms that admit of degrees.

1 Though the approach of W. D. Ross might plausibly be interpreted in a scalar fashion.
My argument is, first and foremost, directed towards those who are already attracted to consequentialism. I am not here undertaking to argue for consequentialism against any rival moral theory. However, there are at least two respects in which my arguments should be of interest to at least many adherents of alternative moral theories. First, most non-consequentialists think that one among other duties is a duty of general beneficence to do good for the world in general. Much of what I discuss in the book will clearly apply at least to this duty of beneficence. Furthermore, much of what I discuss, especially in Chapters 4 and 5, will be relevant to other deontological principles, such as prohibitions on harming.2 Second, some philosophers, who would otherwise have been attracted to a version of consequentialism, have rejected the approach, largely because of what they perceive as the overdemanding implications of its account of rightness and wrongness. Perhaps they may be more attracted to a theory constructed along the lines that I suggest.

I shall conduct my discussion in terms of utilitarianism, since this is the most popular form of consequentialism, and the one to which I adhere. However, since none of my points rely on a specifically utilitarian value theory, my argument applies quite generally to consequentialist theories.

2 I thank an anonymous referee for this book for pushing me to make this point more explicit.
2.2 The Demandingness Objection

Since, according to maximizing utilitarianism, any act that fails to maximize is wrong, there appears to be no place for actions that are morally admirable but not required, and agents will often be required to perform acts of great self-sacrifice. This gives rise to the common charge that maximizing utilitarianism is too demanding. But how, exactly, are we to take this criticism? Utilitarianism is too demanding for what? If I take up a hobby, say mountain climbing, I may well decide that it is too demanding for me. By that, I mean that I am simply not willing to accept the demands of this hobby.
I may, therefore, decide to adopt the less demanding hobby of reading about mountain climbing instead. However, unless we adopt a radically subjectivist view of the nature of morality, according to which I am free simply to pick whichever moral theory pleases me, this approach will not work for the claim that utilitarianism is too demanding. When critics object to what they see as utilitarianism's demands, they are not simply declaring themselves unwilling to accept these demands, but are claiming that morality doesn't, in fact, make such demands. We are not, they claim, actually required to sacrifice our own interests for the good of others, at least not as much as utilitarianism tells us. Furthermore, there really are times when we can go above and beyond the call of duty. Since utilitarianism seems to deny these claims, it must be rejected.

How should a utilitarian respond to this line of criticism? One perfectly respectable response is simply to deny the claims at the heart of it. We might insist that morality really is very demanding, in precisely the way utilitarianism says it is. But doesn't this fly in the face of common sense? Well, perhaps it does, but so what? Until relatively recently, moral "common-sense" viewed women as having an inferior moral status to men, and some races as having an inferior status to others. These judgments were not restricted to the philosophically unsophisticated. Such illustrious philosophers as Aristotle and Hume accepted positions of this nature. Many utilitarians (myself included) believe that the interests of sentient nonhuman animals should be given equal consideration in moral decisions with the interests of humans. This claim certainly conflicts with the "common-sense" of many (probably most) humans, and many (perhaps most) philosophers. It should not, on that account alone, be rejected. Indeed, very few philosophers base their rejection of a principle of equal consideration for nonhuman animals merely on its conflict with "common-sense." Furthermore, it is worth noting that the main contemporary alternative to a (roughly) consequentialist approach to morality is often referred to as "common-sense morality."3 Those who employ this phrase do not intend the label itself to constitute an argument against consequentialism.
3 My apologies to the proponents of virtue ethics, the third-party candidate of ethical theories.
Perhaps this dismissal of the demandingness objection is too hasty, though. It might appear that the previous paragraph contains more bluster than substance.4 Of course, many deeply held moral intuitions have proved to be unreliable, even outrageous. But that, alone, doesn't provide good reason to reject this particular deeply held intuition (the rejection of demandingness). Consider this construal of the demandingness objection: our confidence in the proposition that morality is not as demanding as act-utilitarianism says is higher than our confidence in act-utilitarianism itself and higher than our confidence in any argument for act-utilitarianism.5

All I have done so far is to motivate the view that we might be mistaken in the intuition that morality isn't as demanding as maximizing utilitarianism says it is. But we might be mistaken about many things, both moral and nonmoral. What reason do we have to think that we are mistaken? Why should we lower our confidence in the nondemandingness claim? I admit that, if all we had to go on were our subjective confidences in these various claims, then some, perhaps many, would be quite justified in sticking to the claim they had most confidence in. But that is not all we have to go on. I will here offer a rough sketch of a line of argument to suggest that we have considerably more reason to doubt the intuition that morality isn't demanding (in the way act utilitarianism says it is) than simply the lengthy history of confidently held intuitions that have turned out to be mistaken. I do not pretend to be giving a comprehensive argument. I merely aim to show that a maximizing act utilitarian has considerable resources with which to counter the nondemandingness claim.

Suppose that we initially find ourselves simply comparing strengths of confidences in various claims, and find that our confidence in the nondemandingness claim is stronger than our confidence in either maximizing act utilitarianism or in any argument for that theory. Should we leave it at that? I suggest that we shouldn't, for at least two reasons.
4 I'd like to thank an anonymous referee for this book for pushing me to say more on this point.
5 I owe this wording to an anonymous referee for this book.
First, we may well come to realize that the strength of our confidence in the relevant intuition can be explained by considerations that are consistent with the intuition being false. This wouldn't prove that the intuition was false, but, depending on the details, it might considerably undermine its relevance.

Consider an extreme example. I meet a new colleague, Brian, for the first time, and immediately have the overwhelming intuitive feeling that he is a terrible person. This feeling persists, and even strengthens. After a while, during which I am convinced that Brian is, in fact, a terrible person, I discover that a different colleague, Julia, has been manipulating me. Starting a month before I first met Brian, Julia has been drugging me, showing me pictures of Brian, making me nauseous, and repeating all kinds of damning claims about him. The drug is the kind that removes all conscious memories of what happens while under its influence. This discovery should, at least, considerably undermine my confidence in my intuition (which persists) that Brian is a terrible person. It is, of course, quite consistent with this story that Brian really is a terrible person. Perhaps Julia knows that Brian is terrible, and that he is expert at hiding his terribleness from everyone (except her). The only way she can get me to see the truth is by manipulating me as she has done. So my discovery of her tactics doesn't conclusively prove that Brian isn't terrible. But it should make me discount my intuitive feeling as the grounds for any belief that he is terrible.

The example of Brian and Julia is, as I said, an extreme one. Is there anything relevantly similar regarding the intuition (for those who have it) that morality isn't very demanding? Clearly, people aren't being drugged and manipulated in the same way as in my example. But there is a plausible explanation for the nondemandingness intuition that might at least undermine its force. If morality is demanding, in the way maximizing act utilitarianism says it is, the burdens of morality would fall mostly on those agents with the most resources, while the benefits (of agents complying with the demands of morality) would accrue mostly to those patients (both human and nonhuman) with the least resources. Philosophers who debate this issue are certainly not (with rare exceptions) among the very most affluent in the world, but they are (most of them at least) way above average. If I believe that morality demands that I sacrifice large quantities of my time and other resources for the benefit of others, I will either do so, or probably experience feelings of guilt for not doing so.
Neither option is likely to be in my self-interest. So, for myself, and most other academic philosophers (and indeed, most residents of industrialized nations) there is a powerful self-interested motivation to believe that morality isn't anywhere near as demanding as maximizing act utilitarianism says it is.

The considerations of the previous paragraph are quite consistent with the truth of the nondemandingness intuition. Just because it is in my self-interest to believe something doesn't show that thing to be false. At the very least, though, we should be wary of trusting moral intuitions that conveniently line up with our self-interest. A related point is that our moral intuitions are undoubtedly at least influenced (if not largely shaped) by powerful cultural forces. Those who stand to benefit the most from general acceptance of the belief that morality is nowhere near as demanding as utilitarianism says it is are those with the most resources. They are also those best placed to influence the kinds of cultural factors that play a large role in shaping our intuitions. This consideration also applies to the intuitions of those who stand to benefit from acceptance of moral demandingness. I have no idea whether the least affluent people, who consider the topic, have the nondemandingness intuition. I suspect that many do not. But even if most do, the fact that it is in the interests of the most powerful to foster such an intuition should make us wary of putting much epistemic weight on it. The phenomenon of powerful interests influencing many people to believe and act against their own self-interest is certainly not limited to recent political events (though these do serve as striking illustrations of it). Organized religions have, for millennia, been highly successful at channeling the beliefs and behavior of their adherents to the benefit of the religious establishments, and the detriment of most followers.

I said above that there are at least two reasons why we shouldn't be content simply to measure the strength of our intuitions regarding the demandingness of morality, the plausibility of utilitarianism, and the arguments for it. I have been explaining, in rough preliminary form, some considerations that might undermine our confidence in the epistemic relevance of the nondemandingness intuition (if we have it). The second reason is that the nondemandingness intuition shouldn't stand on its own. Nondemandingness, if it is a fact, isn't a free-floating one.
If morality isn't as demanding as utilitarianism says it is, there must be some explanation for that fact. In particular, some alternative moral theory must be the correct one.

There may well be many alternative theories that share nondemandingness as a feature. So we don't need to be justified in accepting one in particular, rather than any of the others, in order to accept nondemandingness. But we do need to be justified in accepting some feature of a theory (which might be shared by many theories), such as a principle, which entails nondemandingness. One of the most popular principles, related to nondemandingness, is the principle that the distinction between doing and allowing is morally relevant. To be more precise, the principle that it is, other things being equal, morally worse to do harm than to allow harm to occur. It should be clear how a rejection of this principle is closely connected to the demandingness of a moral theory. The principle has been the subject of much discussion, often in the form of discussing the so-called "killing versus letting die" distinction (which is both broader and narrower than the doing harm versus allowing harm distinction, on the plausible assumption that death isn't necessarily a harm).6 No consensus has yet emerged regarding what, precisely, the doing/allowing distinction amounts to, much less whether it can bear any moral weight (if there is a single distinction in the metaphysical vicinity).

Perhaps someone will produce an account of the doing/allowing distinction that is both metaphysically robust, and clearly morally relevant. Or perhaps someone will articulate and defend a different principle that entails nondemandingness. Perhaps someone thinks they have already done so. I am deeply skeptical, but I don't have space here to explore all such possibilities exhaustively. As I said above, I aimed merely to sketch a line of argument available to a utilitarian who wishes to challenge the move from a simple weighing of the strengths of various intuitions to the rejection of utilitarianism on the grounds of demandingness. I hope I have at least motivated the view that a utilitarian can do more than simply insist that their intuitions have different strengths from those of their opponents, and thus that they have reached an argumentative impasse. As I said, a perfectly respectable utilitarian response to the criticism that utilitarianism is too demanding is simply to insist that morality really is very demanding.

6 See, for example, Norcross 2013.
However, there are powerful reasons to take a different approach altogether. Instead of either maintaining the demands of maximizing utilitarianism, or altering the theory to modify its demands, we should reject the notion that morality issues demands at all. In order to see why this might be an attractive option, I will briefly examine the alleged category of supererogatory actions, and an attempted modification of utilitarianism to accommodate it.

Maximizing utilitarianism, since it classifies as wrong all acts that fail to maximize, leaves no room for supererogation. A supererogatory act is generally characterized as an act which is not required, but which is in some way better than the alternatives. E.g. a doctor who hears of an epidemic in another town may choose to go to the assistance of the people who are suffering there, although in doing so he will be putting himself at great risk.7 Such an action is not morally required of the doctor, but it produces more utility than the morally permissible alternative of remaining in his home town. The category of the supererogatory embodies two connected intuitions that are at odds with maximizing utilitarianism. First, it seems that people sometimes go beyond the call of duty. Maximizing utilitarianism would not allow that. To do your duty is to do the best thing you can possibly do. And second, people who fail to make certain extreme sacrifices for the greater good are usually not wrong. It seems harsh to demand or expect that the doctor sacrifice his life for the villagers.

The utilitarian can avoid these consequences by retreating to a form of satisficing utilitarianism.8 For example, one can allow that the boundary between right and wrong can in some cases be located on the scale at some point short of the best. This would allow that an agent can do her duty without performing the best action available to her, and it would make it possible for her to go beyond the call of duty. The position of the boundary between right and wrong may be affected by such factors as how much self-sacrifice is required of the agent by the various options, and how much utility or disutility they will produce. For example, it may be perfectly permissible for the doctor to stay at home, even though the best option would have been to go and help with the epidemic.

7 See, for example, Feinberg 1961, 276–88.
8 Slote 1985a, ch. 3 discusses this suggestion.
On the other hand, if all the doctor could do and needed to do to save the villagers were to send a box of tablets or a textbook on diseases, then he would be required to do all he could to save them. Satisficing versions of utilitarianism, no less than the traditional ones, assume that the rightness of an action is an all-or-nothing property. If an action does not produce at least the required amount of good, then it is wrong; otherwise it is right. On a maximizing theory the required amount is the most good available. On a non-maximizing theory what is required may be less than the best. Both forms of utilitarianism share the view that a moral miss is as good as a mile. If you don’t produce as much good as is required, then you do something wrong, and that’s all there is to it, at least as far as right and wrong are concerned.
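The shared structure can be put in a rough shorthand (only as an illustration; write U(a) for the utility an act a would produce, A for the set of available alternatives, and t for the required amount of good):

\[
a \text{ is right} \iff U(a) \ge t, \qquad a \text{ is wrong} \iff U(a) < t,
\]

where the maximizing theory sets \(t = \max_{b \in A} U(b)\), and a satisficing theory sets \(t\) somewhere below that maximum. On either setting of \(t\), an act that falls short of the threshold by a single hedon and one that falls short by a million receive exactly the same verdict.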
2.3 Scalar Utilitarianism

Here’s an argument for the view that rightness and wrongness aren’t an all-or-nothing affair.9 Suppose that we have some obligations of beneficence, e.g. the wealthy are required to give up a minimal proportion of their incomes for the support of the poor and hungry. (Most people, including deontologists such as Kant and Ross, would accept this.) Suppose Jones is obligated to give 10 percent of his income to charity. The difference between giving 8 percent and 9 percent is the same, in some obvious physical sense, as the difference between giving 9 percent and 10 percent, or between giving 11 percent and 12 percent. Such similarities should be reflected in moral similarities. A moral theory which says that there is a really significant moral difference between giving 9 percent and 10 percent, but not between giving 11 percent and 12 percent, looks misguided. At least, no utilitarian should accept this. She will be just as concerned about the difference between giving 11 percent and 12 percent as about the difference between giving 9 percent and 10 percent. To see this, suppose that Jones were torn between giving 11 percent and 12 percent and that Smith were torn between giving 9 percent and 10 percent. The utilitarian will tell you to spend the same amount of time persuading each to give the larger sum, assuming that other things are equal.

9 I take the term “scalar” from Slote 1985b, who discusses scalar morality in ch. 5.
This is because she is concerned with certain sorts of consequences, in this case, with getting money to people who need it. An extra $5,000 from Jones (who has already given 11 percent) would satisfy this goal as well as an extra $5,000 from Smith (who has given 9 percent). It does not matter whether the $5,000 comes from one who has already given 11 percent or from one who has given a mere 9 percent.

A related reason to reject an all-or-nothing line between right and wrong is that the choice of any point on the scale of possible options as a threshold for rightness will be arbitrary. Even maximization is subject to this criticism. One might think that the difference between the best and the next best option constitutes a really significant moral difference, quite apart from the difference in goodness between the options. We do, after all, attach great significance to the difference between winning a race and coming second, even if the two runners are separated by only a fraction of a second. We certainly don’t attach anything like the same significance to the difference between finishing, say, seventh and eighth, even when a much larger interval separates the runners. True enough, but I don’t think that it shows that there really is a greater significance in the difference between first and second than in any other difference. We do, after all, also attach great significance to finishing in the top three. We give medals to the top three and to no others. We could just as easily honor the top three equally and not distinguish between them. When we draw these lines—between the first and the rest, or between the top three and the rest, or between the final four and the others—we seem to be laying down arbitrary conventions. And saying that giving 10 percent is right and giving only 9 percent is wrong seems analogously conventional and arbitrary.

An all-or-nothing theory of right and wrong would have to say that there was a threshold, e.g., at 10 percent, such that if one chose to give 9 percent one would be wrong, whereas if one chose to give 10 percent one would be right. If this distinction is to be interesting, it must say that there is a big difference between right and wrong, between giving 9 percent and giving 10 percent, and a small(er) difference between similarly spaced pairs of right actions, or pairs of wrong actions. The difference between giving 9 percent and 8 percent is just the difference between a wrong action and a slightly worse one; and the difference between giving 11 percent and 12 percent is just the difference between one
supererogatory act and a slightly better one. In fact, if the difference between right and wrong is at all significant, it must be possible for it to offset at least some differences in goodness. For example, if the threshold for rightness in a case of charitable giving were $10,000 for both Smith and Jones, the difference between Smith giving $10,000 and giving $9,000 must be more significant than the difference between Jones giving $9,000 and giving somewhat less than $8,000. But suppose that Smith is wavering between giving $10,000 and giving $9,000, and that Jones is wavering between giving $9,000 and giving $7,999. No utilitarian would consider it more important to persuade Smith to give the higher amount than to persuade Jones to give the higher amount. This applies equally whether the threshold in question is at maximization or some point short of maximization.

It might be objected that the argument of the previous paragraph doesn’t demonstrate that the supposed difference between right and wrong is not significant for a utilitarian, just that the rightness or wrongness of other actions doesn’t figure in the determination of whether a particular action is itself right or wrong. My action of persuading either Smith or Jones to give the larger sum is judged by the goodness of its consequences. The fact that if I persuade Smith, there will be one more right action than if I persuade Jones, is not relevant to the goodness of the consequences of my action. However, this might not show that the difference between right and wrong is not morally significant in some other respect. But in what could this significance consist? It is clearly not relevant to decisions about what to do. Worse, it is not even relevant to what a utilitarian would hope for. To see this, suppose that both Smith and Jones have decided to tie their decisions to the coin toss in the Super Bowl. Smith has decided that, if the coin lands heads, he will give $10,000 rather than $9,000; and Jones has decided that, if the coin lands tails, he will give $9,000 rather than $7,999. Assuming no other foreseeable morally relevant consequences of the coin toss, a utilitarian will clearly hope that the coin lands tails. That is because she cares about utility, not rightness. Tails will produce one more dollar, which will produce more utility. The fact that heads will produce one more right action is simply irrelevant. The same thing applies even to an example in which a utilitarian is thinking about her own actions. Suppose that you, a maximizing utilitarian, wake up one day with a wicked hangover. You
remember that yesterday you were grappling with two decisions. In the first one, you were inclined to perform act A, but were considering B instead, and in the second you were inclined to perform act C, but were considering D instead. In the first decision, B was better than A by 10 (net) hedons, and was the best of all your available options. In the second decision, D was better than C by 11 (net) hedons, but was actually second best to act E of all your available alternatives (but you had definitely decided against E). You remember that a friend managed to talk you up to the better of the two options in one of the decisions but not in the other. But you don’t remember which it was. In thinking about this, qua utilitarian, you should clearly hope that you did D instead of C, rather than B instead of A. Even though this means that in neither decision did you do the right thing. The situation would remain the same, even if the difference between C and D were only 10.1 (net) hedons, or 10.01.

So we are left with the bare claim that the difference between right and wrong is itself morally significant, even though it is irrelevant to decisions about what to do, and irrelevant to the value of states of affairs, and thus irrelevant to what a utilitarian, qua utilitarian, cares about or hopes for. If, despite accepting all this, a utilitarian were to insist that the difference between right and wrong is still “significant,” the appropriate response would be to quote the character Inigo Montoya, from the movie The Princess Bride. “You keep using that word. I do not think it means what you think it means.”

By contrast with all-or-nothing conceptions of right and wrong, good and bad are scalar concepts, but as with many other scalar concepts, such as rich and tall, we speak of a state of affairs as good or bad (simpliciter). This distinction is not arbitrary or conventional. The utilitarian can give a fairly natural account of the distinction between good and bad states of affairs. For example: consider each morally significant being included in the state of affairs. Determine whether her conscious experience is better than no experience. Assign it a positive number if it is, and a negative one if it isn’t. Then add together the numbers of all morally significant beings in the state of affairs. If the sum is positive, the state of affairs is good. If it is negative, the state of affairs is bad.
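Put in a rough shorthand (only as an illustration of the account just sketched; write v_i for the positive or negative number assigned to the i-th morally significant being in a state of affairs S):

\[
V(S) = \sum_i v_i, \qquad S \text{ is good if } V(S) > 0, \qquad S \text{ is bad if } V(S) < 0.
\]

Nothing in this shorthand privileges the zero point: the move from \(V(S) = -1\) to \(V(S) = +1\) is no larger, in value, than the move from \(V(S) = +1\) to \(V(S) = +3\).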
Note that although this gives an account of a real distinction between good and bad, it doesn’t give us reason to attach much significance to the distinction. It doesn’t make the difference between a minimally good state of affairs and a minimally bad state of affairs more significant than the difference between pairs of good states of affairs or between bad states of affairs. To see this, imagine that you are consulted by two highly powerful amoral gods, Bart and Lisa. Bart is trying to decide whether to create a world that is ever so slightly good overall or one that is ever so slightly bad overall. Lisa is trying to decide whether to create a world that is clearly, but not spectacularly, good, or one that is clearly spectacularly good. They each intend to flip a coin, unless you convince them one way or the other in the next five minutes. You can only talk to one of them at a time. It is clearly more important to convince Lisa to opt for the better of her two choices than to convince Bart to opt for the better of his two choices.

If utilitarianism only gives an account of goodness, how do we go about determining our moral obligations and duties? It’s all very well to know how good my different options are, but this doesn’t tell me what morality requires of me. Traditional maximizing versions of utilitarianism, though harsh, are perfectly clear on the question of moral obligation. My obligation is to do the best I can. Even a satisficing version can be clear about how much good it is my duty to produce. How could a utilitarian, or other consequentialist, theory count as a moral theory, if it didn’t give an account of duty and obligation? After all, isn’t the central task of a moral theory to give an account of moral duty and obligation?

Utilitarians, and consequentialists in general, seem to have agreed with deontologists that their central task was to give an account of moral obligation. They have disagreed, of course, sometimes vehemently, over what actually is morally required. Armed with an account of the good, utilitarians have proceeded to give an account of the right by means of a simple algorithm from the good to the right. In addition to telling us what is good and bad, they have told us that morality requires us to produce a certain amount of good, usually as much as possible, that we have a moral obligation to produce a certain amount of good, that any act that produces that much good is right, and any act that produces less good is wrong. And in doing so they have played into the hands of their deontological opponents. A deontologist, as I said earlier, is typically concerned with such properties of an action as whether it is a killing of an innocent person, or a telling of a lie, or a keeping of a promise. Such properties do not usually come in degrees. (A notable exception is raised by the so-called
duty of beneficence.) It is hard, therefore, to construct an argument against particular deontological duties along the lines of my argument against particular utility thresholds. If a utilitarian claims that one has an obligation to produce x amount of utility, it is hard to see how there can be a significant utilitarian distinction between an act that produces x utility and one that produces slightly less. If a deontologist claims that one has an obligation to keep one’s promises, a similar problem does not arise. Between an act of promise-keeping and an alternative act that does not involve promise-keeping, there is clearly a significant deontological distinction, no matter how similar in other respects the acts may be to each other. A utilitarian may, of course, claim that he is concerned not simply with utility, but with maximal utility. Whether an act produces at least as much utility as any alternative is not a matter of degree. But why should a utilitarian be concerned with maximal utility, or any other specific amount?

To be sure, a utilitarian cannot produce an account of duty and obligation to rival the deontologist’s, unless he claims that there are morally significant utility thresholds. But why does he want to give a rival account of duty and obligation at all? Why not instead regard utilitarianism as a far more radical alternative to deontology, and simply reject the claim that duties or obligations constitute any part of fundamental morality, let alone the central part? My suggestion is that utilitarianism should be treated simply as a theory of the goodness of states of affairs and of the comparative value of actions (and, indeed, of anything appropriately related to states of affairs, such as character traits, political systems, pension schemes, etc.), which rates possible alternatives in comparison with each other. This system of evaluation yields information about which alternatives are better than which and by how much. In the example of the doctor, this account will say that the best thing to do is to go and help with the epidemic, but it will say neither that he is required to do so, nor that he is completely unstained morally if he fails to do so.

If a utilitarian has an account of goodness and badness, according to which they are scalar phenomena, why not say something similar about right and wrong: that they are scalar phenomena but that there is a point (perhaps a fuzzy point) at which wrong shades into right? Well, what would that point be? I said earlier that differences in goodness should be
reflected by differences in rightness. Perhaps the dividing line between right and wrong is just the dividing line between good and bad. In fact, Mill’s statement of the principle of utility might seem to suggest such an approach:

The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.10

10 Mill, Utilitarianism, ch. 2, para. 2.
There are, of course, notorious difficulties with interpreting Mill’s statement. Talk of tendencies of actions to promote happiness, for example, could lead to a form of rule utilitarianism. However, since my concern is not to offer an interpretation of Mill, I will set aside issues of scholarship, and focus instead on the suggestion that an action is right just in case it is good, and wrong just in case it is bad. There are two reasons to reject this suggestion. The first is that it seems to collapse the concepts of right and wrong into those of good and bad respectively, and hence, to make the former pair redundant. The second is that, on the account of good and bad states of affairs I offered the utilitarian, it is not clear that there is any satisfactory account of the difference between good and bad actions with which to equate the difference between right and wrong actions. An argument to this conclusion is the main topic of the next chapter.

Is there an alternative account of wrongness? In the next two sections, I consider two possible linguistic analyses. First, I examine different versions of the view that an action is wrong if and only if it is blameworthy (or ought to be punished). Second, I briefly examine the view that “right” means “supported by the best (or strongest) reasons.”
2.4 Wrongness as Blameworthiness

We tend to blame and punish agents for their wrong actions and not for actions that are not wrong. And we tend to consider it wrong to blame or punish someone for an action that was not wrong.
Perhaps, then, a utilitarian (or other theorist) can analyze wrongness in terms of blameworthiness. Consider this well-known passage from Mill:

We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow-creatures; if not by opinion, by the reproaches of his own conscience.11
Although, as I said earlier, my main concern is not Mill scholarship, it is worth noting that a close reading of the paragraphs immediately preceding this passage suggests that Mill is not here proposing an analysis of wrongness that fits with utilitarianism. He is, rather, pointing out some features of the ordinary usage of the term “wrong.” Nonetheless, an analysis of wrongness in terms of punishment is worth considering. This suggests the possibility of a scalar conception of wrongness. Since censure (and other forms of punishment) comes in degrees, perhaps wrongness might also come in degrees. Consider the following definition of wrong action:

WA: An action is wrong if and only if it is appropriate to impose various sanctions on the agent.

What does it mean to say that it is “appropriate” to sanction? Since appropriateness is a normative notion, the most natural understanding is to think of it as meaning “obligatory.”12 In that case, WA would be:

WA1: An action is wrong if and only if we ought to impose various sanctions on the agent.

But if WA is to be understood as WA1, it leads to a definitional circle or regress. It tells us to understand what is wrong in terms of what it is wrong not to do. (I take it that “wrong” and “ought not to be done” are interchangeable.)

11 Ibid., ch. 5, para. 14.
12 Alternatively, we could understand it as meaning “permissible.” I shall focus on the obligatory reading here, since it seems the most popular. Note that the chief objection to the obligatory reading also applies to the permissibility reading.
But we don’t have a better grasp on the notion of “ought to sanction” than we have on the notions of “ought to keep promises” or “ought to feed the hungry.” Trying to understand the wrongness of one action in terms of the wrongness of other actions is unenlightening.

There is an alternative account of appropriateness according to which it is still normative. That account says that an action is appropriate if and only if it is optimific. WA would then amount to:

WA2: An action is wrong if and only if it is optimific to punish the agent.

This suggestion avoids the uninformative circularity of WA1.13 Let us suppose that WA2 expresses the sort of connection between wrongness and censure that people have in mind. Can it provide the utilitarian with an adequate account of wrongness? I believe that a utilitarian should not embrace WA2. For he cannot identify wrong actions with actions which it is optimific to sanction. To see this consider what Sidgwick says about praise.

From a Utilitarian point of view, as has been before said, we must mean by calling a quality, ‘deserving of praise’, that it is expedient to praise it, with a view to its future production: accordingly, in distributing our praise of human qualities, on utilitarian principles, we have to consider primarily not the usefulness of the quality, but the usefulness of the praise.14
The utilitarian will, of course, say the same about censure as Sidgwick says about praise: we should assess whether it is good to punish or blame someone by assessing the utility of doing so. Punishing or blaming are actions just like promise-keeping or killing and, like those actions, their value is determined by their consequences, their power to produce utility.

13 At first sight, (WA2) appears to endorse maximization. That is, it might look as though it enjoins legislators, judges, police, and ordinary people in everyday life to punish in a way that produces the best consequences. But it doesn’t. According to (WA2), to say that judges and others ought to punish in an optimific manner means that it is optimific for third parties to punish them if they failed to do so, and it may well turn out that this is not the case.
14 Sidgwick 1981, 428.
If there is a conceptual connection of the sort asserted by WA2 between an action’s being wrong and its being appropriate to punish the agent, and if Sidgwick’s account of when it is appropriate to punish is correct, then we should be able to determine whether an act is wrong by determining whether punishing the agent of that act will produce more utility than any alternative. On the other hand, if there is good reason to reject this method of deciding whether someone has done wrong, then there is reason to reject WA2. I submit that there is reason to reject the claim that the wrongness of an action is determined by whether punishing the agent would produce more utility than not. This is because our concept of wrongness is constrained by one or both of the following principles which conflict with WA2.

1. If action x is wrong, then an action y done by someone in exactly similar circumstances, with the same intention and the same consequences, is also wrong. We might call this the principle of universalizability. It might, however, be optimific to punish the agent of x but not the agent of y.15 Hence, according to WA2, x would be wrong, but y would not.

2. If someone does the best she can, and does very well indeed, then she has done nothing wrong. But it can sometimes be optimific to punish a utility-maximizer. For example, imagine that Agnes has always produced as much utility as it was possible for her to produce. Moreover, none of her actions has led to any unfortunate consequences, such as someone’s untimely death or suffering. Punishing her as a scapegoat might nevertheless produce more utility than not doing so. It is absurd to say that she has done something wrong just in virtue of the fact that it is appropriate or optimific to punish her.

15 It might be objected that, if it is optimific to punish the agent of x but not the agent of y, there must be something different in the circumstances of x and y (I owe this objection to an anonymous referee for this book). While technically true, this is the kind of difference, like a difference in mere identity, which is not relevant to the principle of universalizability. For example, I can’t claim that the mere fact that it is I, Alastair, who is robbing the bank, makes my action relevantly different from the otherwise similar act of my sister, Joanna, whose bank robbery last week I roundly condemned. I likewise can’t claim that the fact that I am beloved, and that therefore punishing me would distress many others, is a morally relevant difference, when it comes to comparing my action with that of Joanna, whose punishment would delight many others. (My actual sister, Joanna, is much beloved, but also a good sport, so I’m sure she doesn’t mind appearing fictitiously in this example.)
Given any one of these constraints on any recognizable understanding of wrongness, the utilitarian cannot say that whether an action is wrong is determined by whether it is optimific to punish the agent for doing it.16

It is interesting to note that something like WA2 could allow the utilitarian to make room for the existence of intractable moral dilemmas, which utilitarians traditionally reject. Roughly, a moral dilemma is a situation in which an agent cannot help but do wrong. It is a situation in which each of an agent’s available alternatives is morally impermissible. On the traditional maximizing version of utilitarianism such situations cannot exist.17 The right action is the best alternative available to the agent (or one of the best, in the event of a tie for first place). All other actions are wrong. Therefore, there is always at least one permissible alternative available to the agent. If we determine rightness and wrongness, not by whether the consequences of the action in question are better than those of any available alternative, but by whether the consequences of the further action of punishing the action in question are better than any available alternative (to the action of punishing), we can easily imagine moral dilemmas. Consider Jimmy, who is faced with a choice between helping a little old lady across the street or beating her brains in and stealing her purse. On pretty much any moral theory, including just about every version of utilitarianism, the choice is easy. The right thing to do is to help the little old lady across the street. At the very least, that wouldn’t be wrong. However, consider what WA2 has to say about this choice, specifically about whether either action would be wrong. The crucial question in each case is whether it would be optimific to punish Jimmy for performing the action in question. Suppose that Jimmy is widely disliked (whether fairly or not). In this case, it may well be optimific to punish him for mugging the little old lady and to punish him for helping her. In fact, according to WA2, people who are widely disliked may, for that very reason, face moral dilemmas all the time.

16 It might occur to the reader to wonder whether a satisficing version of (WA2) would work better than the “optimific” version. The principles I have expressed here would produce conflicts with such a version also.
17 For a suggested variation of utilitarianism that accommodates moral dilemmas, see Slote 1985b, 161–8. For a criticism of Slote’s suggestion, see Norcross 1995, 59–85.
My own view is that utilitarians are correct to reject moral dilemmas. However, even a friend of dilemmas would object to the account of them provided by WA2.

Perhaps the problem with the current suggestion is that it makes the optimality of punishing an agent both a necessary and sufficient condition of wrongness. What if it merely functioned as a constraint, as a necessary condition on wrongness? (This suggestion would seem to fit Mill’s wording.) In that case, we would need at least one other consideration to determine the deontic status of an action. Perhaps we could reintroduce maximization to fill that role, in such a way as to make room for all three of the traditional deontic categories. Consider:

WA3: An action is right if and only if it is optimific. An action is wrong if and only if (i) it fails to be optimific, and (ii) it is optimific to punish the agent. An action is merely permissible if and only if it is neither right nor wrong.

WA3 would allow us to say that many actions that fail to maximize utility are nonetheless not wrong, even though they are not right either. For example, those who give substantial portions of their income to help others, but could do even more good by giving more, would not be acting wrongly, unless it was optimific to punish their failure to optimize, which would be unlikely. However, a wealthy person who fails to give anything to help others may well be acting wrongly, if it is optimific to punish him (most likely by social sanctions). WA3 avoids violating principle 2 (that if someone does the best she can, she does no wrong), but it does violate principle 1. Consider two characters, Philo and Miso, faced with the same choice among an array of alternatives. The consequences of their choices are identical. Clearly, if they each choose their best option, they each do the right thing, according to WA3. However, what if they each choose their second-best option? In this case, whether they act wrongly will depend on the consequences of punishing them. Suppose that Philo is much loved, and that punishing him for pretty much anything will fail to be optimific. Suppose further that Miso is much hated, and that punishing him for pretty much anything (including for doing the best he can) will be optimific. In this case, Miso and Philo may act identically, in terms of consequences, but Miso’s action will be wrong and Philo’s won’t. Unlike WA2,
WA3 doesn’t render it impossible for Miso to avoid wrongdoing, but it may render it far more difficult for Miso than for Philo. In fact, it may be impossible for Philo to act wrongly. This is clearly a result that no utilitarian should accept.

There is a further, perhaps more fundamental, reason why a utilitarian (or any consequentialist) should not accept an analysis of right or wrong in terms of punishability. It is at least part of the essence of any consequentialist view that the moral status of anything (whether an action, a character trait, an institution, or anything else) is entirely determined by the consequences of that thing itself. In particular, an action’s moral status, compared with the status of alternative possible actions, is determined by comparing the consequences of that action with the consequences of the relevant alternatives.18 The suggestion that the wrongness of an action is determined, even in part, by the consequences of punishing (or otherwise censuring) that action entails that the wrongness of an action is not solely determined by the consequences of that action, but at least in part by the consequences of a different action, the action of punishing.

I submit that there can be no conceptual connection, for the utilitarian, between wrongness and punishability or blameworthiness.
2.5 Rightness and Reasons

Even if we reject a conceptual connection between wrongness and punishment, isn’t there a simple conceptual analysis of rightness in terms of reasons? If a utilitarian accepts that one possible action is better than another just in case it has better consequences, must she not also accept that there is more (moral) reason to perform the better action? Furthermore, must she not accept that there is the most (moral) reason to perform the best action available to the agent? But doesn’t “(morally) right” simply mean “supported by the strongest (moral) reasons” or “what we have most (moral) reason to do”? In which case, we seem to have arrived back at the maximizing conception of rightness.

18 This is at least part of the reason why so-called “rule-consequentialism” is not a consequentialist theory of the moral status of actions, although it may well appeal to a consequentialist theory of the moral status of institutions.
The problem with this suggestion is that it is highly implausible that there is such a simple conceptual connection between rightness and maximal reason. Consider the example of supererogation discussed above. Supererogatory acts are supposed to be morally superior to their merely permissible alternatives. There will be more (moral) reason to perform putative supererogatory acts than to perform their morally inferior alternatives. But if “right” simply means “supported by the strongest reasons,” supererogatory acts will be right (unless there are even better alternatives) and the morally inferior, but supposedly acceptable, alternative acts will be wrong. One response to this, which I discuss below, is to claim that rightness is an ideal, and that duty doesn’t require acting rightly. But if “right” and “duty” are equated, then, by definition, there can be no acts that go “above and beyond” duty, in the sense of being better than duty. So it would seem that those who argue for the category of supererogatory actions are simply confused about the meanings of words. But this is highly implausible. Likewise, those who criticize utilitarianism’s maximizing account of rightness as being too demanding are surely not simply mistaken as to the meaning of the word “right.”
2.6 Morality and Publicity

If utilitarianism is interpreted as a scalar theory that doesn’t issue any demands at all, it clearly can’t be criticized for being too demanding. Does this mean that the scalar utilitarian must agree with the critic who claims (i) we are not frequently required to sacrifice our own interests for the good of others; and (ii) there really are times when we can go above and beyond the call of duty? Strictly speaking, the answers are “yes” to (i), and “no” to (ii). (i) It may frequently be better to sacrifice our interests for the good of others than to perform any action that preserves our interests. Sometimes it may be much better to do so. However, these facts don’t entail any further facts to the effect that we are required to do so. (ii) As for supererogation, the scalar utilitarian will deny the existence of duty as a fundamental moral category, and so will deny the possibility of actions that go “beyond” our duty, in the sense of being better than whatever duty demands. The intuition that
drives the belief in supererogation can, however, be explained in terms of actions that are considerably better than what would be reasonably expected of a decent person in the circumstances. A contextualist account of notions such as supererogation may shed some light on how a consequentialist can incorporate them into the theory, without admitting them at the fundamental level. I explore such a contextualist approach in Chapters 5 and 6.

Utilitarianism should not be seen as giving an account of right action, in the sense of an action demanded by morality, but only as giving an account of what states of affairs are good and which actions are better than which other possible alternatives and by how much. The fundamental moral fact about an action is how good it is relative to other available alternatives. Once a range of options has been evaluated in terms of goodness, all the morally relevant facts about those options have been discovered. There is no further fact of the form “x is right,” “x is to-be-done,” or “x is demanded by morality.” This is not to say that it is a bad thing for people to use phrases such as “right,” “wrong,” “ought to be done,” or “demanded by morality,” in their moral decision-making, and even to set up systems of punishment and blame which assume that there is a clear and significant line between right and wrong. It may well be that societies that believe in such a line are happier than societies that don’t. It might still be useful to employ the notions of rightness and wrongness for the purposes of everyday decision-making. If it is practically desirable that people should think that rightness is an all-or-nothing property, my proposed treatment of utilitarianism suggests an approach to the question of what function to employ to move from the good to the right. In different societies the results of employing different functions may well be different. These different results will themselves be comparable in terms of goodness. And so different functions can be assessed as better or worse depending on the results of employing them.

The suggestion of the previous paragraph might seem to raise a familiar problem for utilitarianism. It has been famously argued by Bernard Williams and Michael Stocker that utilitarianism might require that no-one believe it. After all, if the happiest world is one in which no-one believes utilitarianism, or perhaps even in which everyone believes utilitarianism to be false, the theory itself seems to demand that no-one
believe it. Stocker uses the term “esoteric” to apply to theories that display this feature. Both Williams and Stocker seem to regard esotericism as a grave defect in a moral theory. At first glance, there certainly seems to be something defective about a moral theory that demands that no-one believe it. If I am correct that utilitarianism, at the fundamental level, makes no demands, we will have to rephrase the objection in terms of reasons for believing the theory, but the spirit of it would remain. It is fairly easy to see, though, that this objection, even the original version, is misguided. Why should it even count as a strike against a moral theory that it might require that no-one believe it? To be sure, this certainly seems to violate what Rawls calls the “publicity condition” for a moral theory.

Doesn’t it seem right that if a moral theory is correct, it should be possible for everyone, or at least most people, to act as it prescribes and to believe that it is correct? Perhaps this is at least contingently true, but it is hard to see how it could be a necessary feature of the correct moral theory. To see this, imagine that there are two deities, Bart, who is good, and Lisa, who is bad.19 Imagine, further, that the correct moral theory is the following version of divine command theory:

BARTISM: An act is wrong iff it is forbidden by Bart; otherwise it is permissible.

Many people believe BARTISM and act on it. Many other people believe a false moral theory, LISANITY, which has the same structure as BARTISM, but which centers on the commands of Lisa. Both Lisa and Bart regularly appear to the people and issue their very different commands. There is nothing in the story so far to suggest that BARTISM could not be the correct moral theory. Suppose now that the people are getting better and better. In fact, most people now believe BARTISM and very few believe LISANITY. This annoys the hell out of Lisa, who desperately wants people to do what She says, so She works the following piece of trickery on the minds of the people. Every time Lisa appears to the people, they believe they are seeing Bart, and vice versa.

19 See what I just did? You were expecting Lisa to be the good one, weren’t you?
Lisa, who, though evil, is also more powerful than Bart, also fixes Bart so that He is not aware of the people’s reversed perceptions. She also messes with their memories, so they aren’t aware that the commands they now accept are inconsistent with the ones they accepted previously (not that inconsistency has ever presented problems for religious theories). Now most, if not all, people who believe in BARTISM will actually fail to act as it prescribes. Conversely, those who believe LISANITY will usually do what BARTISM requires. Has this exercise of evil power by Lisa rendered the previously true BARTISM false? This would be a very strange conclusion. If BARTISM was true before, it is still true now, it’s just that now it’s better if people believe LISANITY instead.20, 21

So far, I have been speaking of the “truth” of utilitarianism (or BARTISM), and asking whether it would matter if the truth of a moral theory required that people not believe it. However, my argument is not aimed only at moral realists, who believe that moral theories are objectively true. Suppose, for example, that I regard morality as fundamentally chosen, rather than discovered. Perhaps my moral commitments express something deeply rooted in my character. Wouldn’t such a moral anti-realist have good reasons for embracing something like the publicity constraint?22

It will be easier to see both the appeal and the failing of the publicity constraint, if we pause briefly to consider the role of moral theories, or at least one central aspect of their role. Both moral realists and anti-realists (of various kinds) agree that moral theories are action-guiding in the following sense: they provide reasons for acting.

20 Although I talk of the “truth” of a moral theory, the example could be recast to apply to moral anti-realist approaches.
21 It might be objected that my example of BARTISM and LISANITY is question-begging, on the grounds that I began by supposing the truth of a moral theory that doesn’t have publicity built in to it. According to this objection, I should, instead, have assumed a version of divine command theory, according to which rightness and wrongness are equated with the revealed will of a deity. But this, in turn, seems to beg the question in favor of the publicity condition. More importantly, my example neither assumes publicity nor rules it out. Furthermore, the versions of divine command theory, with which I am familiar, simply equate moral properties with the will of a deity, saying nothing about whether or how this will must be revealed. Of course, someone who is immovably wedded to publicity may simply insist that nothing could even count as a moral theory, unless it had publicity built into it. My argument is aimed at those for whom the truth of the publicity condition is at least open to question.
22 A notable moral anti-realist, Jonathan Bennett, explicitly rejects the publicity condition in Bennett 1995; see, especially, p. 22.
If my moral theory contains a prohibition on coveting my neighbor’s ass, I have a reason not to covet my neighbor’s ass (I’m not sure whether coveting is a kind of action, but bear with me). But if, according to my moral theory, I shouldn’t even believe my moral theory, how is it supposed to supply me with reasons? And if it can’t supply me with reasons, how can it be a moral theory? The obvious answer to this is to point out that reasons don’t have to be embodied in consciously held beliefs, or even unconscious beliefs, in order to apply. The smoker who doesn’t believe that smoking is bad for her has the same reason to quit as the better informed (or less self-deceived) smoker. At this point, the moral anti-realist will probably point out that the harmful effects of smoking are a matter of objective fact, whereas moral theories inhabit (according to him) an entirely different realm. The reasons supplied by moral theories are more like the reason I have for benefiting someone I care deeply about than the reason I have for quitting smoking. If I care deeply about Smith and you don’t, I have a reason for benefiting Smith that simply doesn’t apply to you. But this example can be modified to illustrate how moral theories can provide reasons for acting to those who don’t accept them, even given moral anti-realism. Suppose I care deeply about Smith and want her to be happy, above all else. However, I also know, from bitter past experience, that when I care deeply about someone, I become irrational, possessive, violent, and obsessive. In fact, everyone for whom I have cared deeply has suffered terribly as a result. Given that I really do want Smith to be happy, I judge that it would be better if I could get myself not to care about her at all. Perhaps I succeed in this endeavor, and no longer care about Smith. As a result, she is a lot happier than she would have been. My emotional commitment to Smith provided the reason for me to change my feelings, and continues to provide reasons for my behavior, even though such reasons are now inaccessible to me. This suggests that the assumption that moral anti-realists must be committed to the publicity condition stems from an impoverished view of how reasons for behavior can operate.

Now consider how a moral anti-realist might view my example. Given the kind of being Bart is, and the kind of person I am, BARTISM is my chosen theory. My acceptance and advocacy of BARTISM express something deeply rooted in my character. But what if Lisa were tricking me in the manner described above? Even though I embrace BARTISM, I judge that, were Lisa to be tricking me, it would be better, according to my chosen theory, if I were to embrace LISANITY instead. Perhaps I am
told that Lisa will begin Her trickery tomorrow (the trickery will, of course, include erasing my memory of being told this). There is a rigorous course of drug and behavioristic treatment that I can undergo today. This treatment has a 95 percent chance of changing my character in such a way that I will embrace LISANITY. Given that I currently embrace BARTISM, I have a very good reason to submit myself to the treatment. The reason is supplied by BARTISM itself. I might regard it as regrettable that Lisa’s power has driven me to this, but I don’t consider BARTISM any less appropriate as a moral theory because of it. If the treatment is successful, and I come to embrace LISANITY (and therefore act as BARTISM requires), there is a very clear sense in which BARTISM is still providing me with reasons for acting, even though I would then believe otherwise.
2.7 Rightness and Goodness as Guides to Action

At this point, someone might object that I have thrown out the baby with the bath water. To be sure, scalar utilitarianism isn’t too demanding; it’s not nearly demanding enough! How can a theory that makes no demands fulfill the central function of morality, which is to guide our actions?

It is clear that the notions of right and wrong play a central role in the moral thinking of many. It will be instructive to see why this is so. There are two main reasons for the concentration on rightness as an all-or-nothing property of actions: (i) a diet of examples which present a choice between options which differ greatly in goodness; (ii) the imperatival model of morality. Let’s consider (i). When faced with a choice between either helping a little old lady across the road on the one hand, or mugging her on the other, it is usually much better to help her across the road. If these are the only two options presented, it is easy to classify helping the old lady as the “right” thing to do, and mugging her as “wrong.” Even when there are other bad options, such as kidnapping her or killing and eating her, the gap between the best of these and helping her across the road is so great that there is no question as to what to do. When we move from considering choices such as these to considering
choices between options which are much closer in value, such as helping the old lady or giving blood, it is easy to assume that one choice must be wrong and the other right.

Let us move now to (ii). Morality is commonly thought of as some sort of guide to life. People look to morality to tell them what to do in various circumstances, and so they see it as issuing commands. When they obey these, they do the right thing, and when they disobey, they do a wrong thing. This is the form of some simple versions of divine command ethics and some other forms of deontology. Part of the motivation for accepting such a theory is that it seems to give one a simple, easily applicable practical guide. Problems arise, of course, when someone finds herself in a situation in which she is subject to two different commands, either of which can be obeyed, but not both. In these cases we could say that there is a higher-order command for one rather than the other to be done, or that the agent cannot help doing wrong. The effect of allowing higher-order commands is to complicate the basic commands, so “do not kill” becomes “do not kill, unless . . . .” The effect of allowing that there could be situations in which an agent cannot help doing wrong is to admit that morality may not always help to make difficult choices. In either case, one of the motivations for accepting an imperatival model of morality—simplicity, and thus ease of application—is undermined.

Unless one does espouse a simple form of divine command theory, according to which the deity’s commands should be obeyed just because they are the deity’s commands, it seems that the main justification for the imperatival model of morality is pragmatic. After all, if we don’t have the justification that the commands issue from a deity, it is always legitimate to ask what grounds them. That certain states of affairs are good or bad, and therefore should or should not be brought about, seems like a far more plausible candidate to be a fundamental moral fact than that someone should act in a certain way. However, it is generally easier to make choices if one sees oneself as following instructions. It may well be, then, that the imperatival model of morality, with the attendant prominence of the notions of right and wrong, has a part to play at the level of application. It may in fact be highly desirable that most people’s moral thinking is conducted in terms of right and wrong. On the other hand, it may be desirable that everyone abandon the notions of right
and wrong. I do not wish to argue for either option here, since the issue could probably only be settled by extensive empirical research.

The approach of the last few paragraphs might seem merely to relocate a problem to a different level. I have been claiming that, although morality doesn’t actually tell us what we ought to do, there may be pragmatic benefits in adopting moral practices that include demands. Societies that adopt such practices may be better (happier, more flourishing, etc.) than those that don’t. But surely this doesn’t solve anything. We want to know whether we ought to adopt such practices. Scalar utilitarianism seems to be silent on that question. Since scalar utilitarianism doesn’t tell us what we ought to do, it can’t guide our actions (including our choices of what moral practices to adopt and/or encourage in society). But any adequate moral theory must guide our actions. Therefore the theory should be rejected. This argument has three premises:

1. If a theory doesn’t guide our action, it is no good.
2. If a theory doesn’t tell us what we ought to do, it doesn’t guide our action.
3. Utilitarianism, as I have described it, does not tell us what we ought to do.

To assess this argument we need to disambiguate its first premise. The expression “guide our action” can mean several things. If it means “tell us what we ought to do” then premise (1) is question-begging. I shall construe it to mean something more like, “provide us with reasons for acting.” On that reading, I shall concede (1), and shall argue that (2) is false. Here is Sidgwick in defence of something like (2):

Further, when I speak of the cognition or judgement that ‘X ought to be done’ – in the stricter ethical sense of the term ought – as a ‘dictate’ or ‘precept’ of reason to the persons to whom it relates, I imply that in rational beings as such this cognition gives an impulse or motive to action: though in human beings, of course, this is only one motive among others which are liable to conflict with it, and is not always – perhaps not usually – a predominant motive.23

23 Sidgwick, The Methods of Ethics, p. 34.
As Sidgwick acknowledges, this reason can be overridden by other reasons, but when it is, it still exerts its pull in the form of guilt or uneasiness. Sidgwick’s point rests on internalism, the view that moral beliefs are essentially motivating. Internalism is controversial. Instead of coming down on one side or the other of this controversy, I shall argue that, whether one accepts internalism or externalism, the fact that a state of affairs is bad gives reason to avoid producing it as much as would the fact that producing it is wrong.

Suppose internalism is correct. In that case the belief that an act is wrong gives one a reason not to do it. Furthermore, such a reason is necessarily a motivating reason.24 It seems that the utilitarian internalist should take the position that the belief that a state of affairs is bad is also a motivating reason to avoid producing it, and the belief that one state of affairs is better than the other may well give the believer a stronger reason to produce the first than the second. If the fact that an act is wrong gives us reason to avoid it, then the fact that it involves the production of a bad state of affairs, by itself, gives us reason to avoid it.

Now let’s suppose externalism is true. In that case the fact that an act is wrong gives one a motivating reason to avoid doing it if one cares about avoiding wrongdoing. If this is what wrongness amounts to, then it seems no defect in a theory that it lacks a concept of wrongness. For it may be true that one cannot consistently want to avoid doing wrong, believe that an act is wrong and do the act without feeling guilt.

24 There can be reasons that are not necessarily motivating, e.g. prudential reasons. You may have a prudential reason to act in a certain way, be aware of the reason, and yet be not in the least motivated so to act. I am not here thinking of cases in which other motivations—moral, aesthetic, self-indulgent, and the like—simply overwhelm prudential motivations. In such cases you would still be motivated to act prudentially, but more motivated to act in other ways. If you simply didn’t care about your own well-being, prudential reasons would not be in the least motivating. But someone who didn’t care about her own well-being could still have, and even be aware of, prudential reasons. Similarly, if you are asked what is the sum of five and seven, you have a reason to reply “twelve,” but you may be not in the least motivated to do so, for you may not care about arithmetic truth, or any other truth. There may be reasons other than moral reasons that are necessarily motivating. For example, the belief that a particular action is the best way to satisfy one of your desires may provide a necessarily motivating reason to perform that action. The motivation may be outweighed by other motivations.
But this doesn’t provide a distinctive account of wrongness, because we can replace each occurrence of the word “wrong” and its cognates in the above sentence with other moral terms such as “an action which produces less than the best possible consequences” or “much worse than readily available alternatives” and the principle remains true. If the agent cares about doing the best he can, then he will be motivated to do so, feel guilt if he doesn’t, and so on. It is true that few of us care about doing the best we can. But then, many of us do not care about doing what we ought either.25

Whether internalism is correct or not, it looks as if premise (2) in the above argument is false. Abolishing the notion of “ought” will not seriously undermine the action-guiding nature of morality. The fact that one action is better than another gives us a moral reason to prefer the first to the second. Morality thus guides action in a scalar fashion. This should come as no surprise. Other action-guiding reasons also come in degrees. Prudential reasons certainly seem to function in this way. My judgement that pizza is better for me than cauliflower will guide my action differently depending on how much better I judge pizza to be than cauliflower. Whether moral facts are reasons for all who recognize them (the debate over internalism) is an issue beyond the scope of this book, but whether they are or not, the significance each of us gives to such moral reasons relative to other reasons, such as prudential and aesthetic reasons, is not something which can be settled by a moral theory.
2.8 Two More Pleas for a Theory of the Right

There are two other reasons I have encountered for requiring utilitarianism to provide an account of the right. The first might be expressed like this: “If utilitarianism is not a theory of the right, it must only be a theory of the good. Likewise, different consequentialist theories will be different theories of the good. But then how do we explain the difference between consequentialist and non-consequentialist theories in general? Since there are no restrictions on the kind of good that any particular version of consequentialism may be a theory of, we are left with nothing that is distinctive about consequentialism.”26
25 Slote, 1985a points this out. 26 I have heard this objection from Daniel Howard-Snyder and Shelly Kagan.
This is not correct. I can still claim this distinctive feature for consequentialism: it includes the view that the relative value of an action depends entirely on the goodness of its consequences. Of the acts available to the agent, the best action will be the one that produces the best consequences, the next best will be the one that produces the next best consequences, and so on. I can also claim that the better the action, the stronger the moral reason to perform it. This is not to concede the point to my opponents. The fact that there is a moral reason to perform some action, even that there is more moral reason to perform it than any other action, doesn’t mean that one ought to perform it. (Recall the discussion above of the suggested analysis of “right” in terms of reasons.) This distinguishes consequentialism from deontology, which allows that one may have a stronger moral reason to perform an action which produces worse consequences. For example, if faced with a choice between killing one and letting five die, the deontologist may acknowledge that five deaths are worse than one, but insist that the better behavior is to allow the five to die. According to that view, morality provides stronger reasons for allowing five deaths than for killing one.27 One advantage of the suggestion I offer here over, say, the view that it is of the essence of consequentialism to insist that the agent ought always to do whatever will produce the best consequences, is that it allows satisficing consequentialists and scalar consequentialists to count as consequentialists. I have also encountered the following reason for requiring utilitarianism to provide an account of the right as well as the good: The utilitarian will have to provide a function from the good to the right in order to compare her theory with various deontological alternatives. Our chief method for comparing moral theories, according to this suggestion, consists in comparing their judgements about which acts are right or wrong. It is true that contemporary discussions of the relative merits of utilitarianism and deontology have often focused on particular examples, asking of the different theories what options are right or wrong. However, to assume that a moral theory must provide an account of the right in order to be subjected to critical scrutiny begs the question 27 The full story about what distinguishes consequentialism from deontology will have to be more complicated than this. It will have to incorporate the claim that the consequentialist ranking of states of affairs is not agent-centered. See Scheffler 1982 for a discussion of this notion.
against my proposed treatment of utilitarianism. That utilitarians have felt the need to provide accounts of rightness is testimony to the pervasion of deontological approaches to ethics. Part of what makes utilitarianism such a radical alternative to deontology, in my view, is its claim that right and wrong are not fundamental ethical concepts.
2.9 Rightness as an Ideal

In this chapter, I have argued that utilitarianism is best conceived as a theory of the good, that judges actions to be better or worse than possible alternatives, and thus provides reasons for actions. I have argued that the traditional utilitarian account of rightness as an all-or-nothing property, whether the maximizing or satisficing version, should be abandoned. However, there may be an alternative account of rightness that is particularly congenial to a scalar utilitarian approach. If, instead of conceiving of rightness as a standard that must be met (perhaps to avoid censure), we conceive of it as an ideal to which we aspire, we may be able to accommodate it within a scalar framework. The suggestion is that the ideally right action is the maximizing action, and alternatives are more or less right, depending on how close they come to maximizing. Although the ideal itself is often difficult to attain, the theory cannot be charged with being too demanding, since it doesn’t include the demand that one attain the ideal. Nonetheless, the ideal functions as a guide. This would be similar to the approach taken by many Christians, who view Christ as a moral exemplar. A common articulation of this view is the question “what would Jesus do?,” often abbreviated on bracelets, bumper stickers, handguns, and the like as “WWJD.” Inasmuch as the extant accounts of Christ’s life provide a basis for answering this question, the answer is clearly supposed to function as an ideal towards which we are supposed to aspire, and not as a demand that must be met in order to avoid wrongdoing. The closer we come to emulating the life or the actions of Christ, the better our lives or our actions are. The utilitarian version (WWJSMD?) might be easier to apply, both epistemically and practically. There are, of course, well-known epistemic problems with even a subjective expected-utility version of utilitarianism, but these pale into insignificance compared with the difficulty of
figuring out what Jesus would do, whether the (presumably) actual historical figure, or the literary composite portrayed in the biblical (and other) sources. As for the practical problems with viewing Christ as an exemplar, it may turn out that the ideal is not simply difficult to attain, but in some cases impossible. On the assumption that Christ had divine powers, an assumption that is undoubtedly accepted by most adherents of the Christ-as-exemplar moral theory, we may sometimes be literally unable to do what Jesus would have done. For example, suppose I am attending a wedding in Lubbock (TX), and the wine runs out. Amid the wailing and the gnashing of teeth I glance at the “WWJD” engraved on my cowboy boots. Well, it’s clear what Jesus would do in this case (John: 2, 1–10), but I simply can’t turn water into wine. However, the utilitarian ideal is, by definition, possible. In this case it might involve driving outside the city limits (Lubbock is dry in more than one sense) to one of the drive-through liquor stores, loading up on the surprisingly good local wines, and returning to spread cheer and much-needed intoxication to the wedding festivities. Or perhaps, more plausibly, it might involve sending the money to famine relief.
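The ideal reading lends itself to a simple schematic gloss. The following Python sketch is purely illustrative: it assumes, as I have been assuming for the sake of argument, that the consequences of each available act can be scored numerically, and the particular measure of closeness to the ideal (shortfall from the best available score) is just one convenient choice, not a commitment of the scalar approach.

    # A minimal sketch of "rightness as an ideal": the maximizing act is the
    # ideal, and every alternative is more or less right according to how
    # close it comes to that ideal. All figures are invented for illustration.

    def shortfall_from_ideal(options):
        """Map each available act to its distance from the best available act."""
        best = max(options.values())
        return {act: best - value for act, value in options.items()}

    # Toy example loosely modelled on the wedding case above.
    options = {
        "send the money to famine relief": 100,
        "fetch wine from outside the city limits": 60,
        "do nothing": 10,
    }

    for act, gap in sorted(shortfall_from_ideal(options).items(), key=lambda kv: kv[1]):
        status = "the ideally right act" if gap == 0 else f"{gap} short of the ideal"
        print(f"{act}: {status}")
    # No act is demanded; the ideal simply orders the alternatives as closer
    # to, or further from, what the maximizer would do.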
3 Good and Bad Actions

3.1 Introduction

It is usually assumed to be possible, and sometimes even desirable, for consequentialists to make judgments about both the rightness and the goodness of actions. Whether a particular action is right or wrong is one question addressed by a consequentialist theory such as utilitarianism. Whether the action is good or bad, and how good or bad it is, are two others. I argued, in the previous chapter, that consequentialism should not employ the notions of right and wrong. I will argue in this chapter that consequentialism cannot provide a satisfactory account of the goodness of actions, on the most natural approach to the question. I will also argue that, strictly speaking, a consequentialist cannot judge one action to be better or worse than another action performed at a different time or by a different person. Even if such theories are thought to be primarily concerned with rightness, this would be surprising, but in the light of my rejection of the place of rightness in consequentialism, it seems particularly disturbing. If actions are neither right (or wrong) nor good (or bad), what moral judgments do apply to them? Doesn’t the rejection of both rightness and goodness, as applied to actions, leave consequentialism unacceptably impoverished? On the contrary, I will argue that consequentialism is actually strengthened by the realization that actions can only be judged as better or worse than possible alternatives.
3.2 Goodness and Rightness

Consequentialism has traditionally been viewed as a theory of right action. Consequentialists have employed theories of value, theories that tell us what things are good and bad, to provide inputs for functions
Goodness and Rightness 49 whose outputs tell us what actions are right and wrong. The theory of the good is usually taken to be a theory of the goodness of states of affairs. The most common consequentialist function from the good to the right embodies the maximizing requirement: R An act is right iff there are no available alternatives that produce a greater balance of goodness over badness. That is, the right action is simply the best action.1 So, what should consequentialism say about good actions? What, if any, is, or rather should be, the connection between the consequentialist accounts of right actions and good actions? There are three different approaches to these questions with prima facie appeal, which can be roughly characterized as follows: (i) “Right” and “good,” as applied to actions, are interchangeable, except for the fact that “good” admits comparative and superlative forms. (ii) The goodness of an action is a function of the goodness of the motive (or maybe even the whole character) from which it sprang. (iii) The goodness of an action is a function of the goodness of its consequences. (iii) seems to me the most natural approach for a consequentialist to adopt, and so the argument of this chapter, apart from the brief remarks of the next two paragraphs, is directed towards (iii). I leave open the possibility that a consequentialist account of good actions along the lines of (i) or (ii) could be provided, although, as I will now explain, even such accounts will be affected by my arguments regarding (iii). (i) has a certain intuitive appeal, rooted in the ordinary usage of the terms “right” and “good” beyond a merely consequentialist framework, though our intuitions (at least mine and those of others I have consulted) are by no means univocal on this point. The upshot of adopting (i), for a consequentialist, is that all and only right actions are good actions, and that some good actions are better than others. This is particularly counterintuitive on the most popular consequentialist account of rightness, R. I will have more to say about a maximizing account of good actions in Section 3.3. Furthermore, how should a maximizing 1 For convenience, I will ignore the possibility of ties.
50 Good and Bad Actions consequentialist understand the claim that one good/right action is better than another? The obvious answer seems to be that the first action leads to a greater balance of good over bad2 than does the second. However, I will argue, in Sections 3.3–3.5, that a consequentialist cannot give a satisfactory account of the notion of an action’s leading to a balance of good over bad. I will also argue, in Section 3.6, that it follows from this that a consequentialist cannot say, strictly speaking, that one actual action is either better or worse than another. These difficulties also affect the attempt to tie (i) to a satisficing account of rightness/ goodness, on the most intuitive reading of such an account.3 (ii) has been suggested by both consequentialists and nonconsequentialists as an account of good actions. On the nonconsequentialist side we have W. D. Ross: Now when we ask what is the general nature of morally good actions, it seems quite clear that it is in virtue of the motives that they proceed from that actions are morally good. (Ross 1973, 156)4
A notable consequentialist who takes a similar type of approach is J. J. C. Smart: We can also use ‘good’ and ‘bad’ as terms of commendation or discommendation of actions themselves. In this case to commend or discommend an action is to commend or discommend the motive from which it sprang. (Smart 1973, 49)
The first problem that springs to mind with this approach is the problem of individuating motives. For example, take the case of the mother of a vicious killer, who lies to the police to keep her son out of jail, despite 2 Or smaller balance of bad over good. 3 Such an account would specify that, in order to be right/good, an action would have to produce a certain balance of good over bad. Other satisficing accounts might demand that an action be no further than a certain distance from the best option, in order to be right/good. This might be a fixed distance, or it might vary with the context of the choice, or even of the evaluation itself. This type of approach, except for the variation that has the rightness/goodness of actions vary with the context of evaluation (see Section 3.6), is not affected by the argument of this chapter. 4 Ross sees his account as being in the tradition of that most famous of non-consequentialists, Immanuel Kant.
knowing exactly what he did. Consider the following two, equally plausible, accounts of her motive:
A She lied because she wanted to protect her child from harm.
B She lied because she wanted to protect her child from the consequences of his own terrible wrongdoing.
Whether her lie is good or bad, according to Smart’s approach, could depend on which account of her motive we accept. The problem here is not just an epistemic one. It’s not as if we just don’t know whether her motive was really to protect her child from harm or to protect him from the consequences of his own terrible wrongdoing, but if we did, we could evaluate the motive. To ask which of the alternatives was really her motive might be like asking whether it was really the President of the United States or Abraham Lincoln who delivered the Gettysburg Address. However, let’s assume that we have a satisfactory method for individuating motives. The next question for the proponent of this approach is whether the motive is evaluated with reference to the specific agent or a broader class of agents. A motive which, in most people, leads to good results could lead to terrible results in a few, or vice versa. Finally, what evaluation of the motive is required in order for the action to be good? Smart seems to make the mistake of assessing the motive in terms of the right actions to which it leads. As an explanation of why we should approve of the desire to save life, even though it sometimes leads to doing the wrong thing, he says, “in general, though not in this case, the desire to save life leads to acting rightly” (Smart 1973, 49). But a motive’s production of right actions is not the main consideration for a consequentialist. To see this, consider two possible competing motives, A and B, which are relevant to three different choices, each between two different actions. For the first two choices, A would produce the better action and B the worse action. For the last choice, B would produce the better action and A the worse action. So, in this simplified context, A would lead to more right actions (on a maximizing theory such as Smart’s) than would B. Should a consequentialist prefer A to B (assuming that the same motive has to be operative in each choice)? That depends on the details of the three choices. If the better, and therefore right, action in the first two choices is only slightly better than the alternative, but the better action in the last choice is much better than the alternative, the consequentialist would clearly prefer motive B over A. In fact, the best motive from a consequentialist perspective could be one that leads to no
right actions at all. Consider three possible alternative motives, M1, M2, and M3. M1 leads to the best (and therefore right, on a maximizing account) action half the time, and the worst (by a long way) action the other half of the time. M2 is the reverse. It leads to the best action on all the occasions when M1 leads to the worst, and to the worst, when M1 leads to the best. M3 leads to an action that is a very close second to the best every time. M3 never leads to the right action, but is clearly the best of the three possible motives. So, clearly a consequentialist would compare motives in terms of the goodness of their consequences, not the rightness. Must the motive (in this agent or some wider class of agents) lead to a mere balance of good over bad, a particular positive balance of good over bad, or the greatest possible balance of good over bad, in order for the action it produces to be classified as good? Only the maximizing alternative will be unaffected by my treatment of option (iii) above (the goodness of an action is a function of the goodness of its consequences), since my arguments against the notion of an action’s leading to a balance of good over bad can be adapted to apply to the notion of a motive’s leading to a balance of good over bad. I leave open the possibility, therefore, however distant, that some account of good actions could be produced that ties the goodness of an action to some consequential feature of the motive from which it sprang. Let’s return to (iii): The goodness of an action is a function of the goodness of its consequences. How might a consequentialist employ her value theory to give a theory of good actions? It doesn’t follow from the fact that we have a theory of the goodness or badness of states of affairs that we have a theory of the goodness or badness of actions, but there might seem to be an easy method for constructing the latter out of the former: G An act is good iff it produces more goodness than badness; an act is bad iff it produces more badness than goodness. So far as I know, no consequentialist has advocated G in print,5 though the central concept, that of an action’s producing a balance of goodness 5 Though Michael Slote suggested it in comments on an early version of Howard-Snyder and Norcross (1993).
Goodness and Rightness 53 over badness, is either explicit or implicit in much consequentialist literature. Indeed, if we consider what maximizing consequentialism tells us about assessing rightness, the procedure seems to be as follows: first determine how much goodness and badness each possible act will produce; next rank them according to the sum, positive or negative, of goodness over badness; finally declare the act with the greatest sum to be the right act. Thus Sidgwick: By Utilitarianism is here meant the ethical theory, that the conduct which, under any given circumstances, is objectively right, is that which will produce the greatest amount of happiness on the whole;. . . . by Greatest Happiness is meant the greatest possible surplus of pleasure over pain, the pain being conceived as balanced against an equal amount of pleasure. (Sidgwick 1981, 411 and 413)
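The procedure just described, together with the classification proposed by G, can be set out schematically. The Python sketch below is only an illustration, resting on the chapter’s simplifying assumption that goodness and badness can be summed numerically; the figures are invented.

    # A schematic rendering of the maximizing procedure and of G.
    # Each act is paired with the (invented) amounts of goodness and badness
    # it would produce.

    acts = {"A": (10, 2), "B": (7, 1), "C": (3, 8)}

    # Step 1: determine each act's balance of goodness over badness.
    balances = {act: good - bad for act, (good, bad) in acts.items()}

    # Step 2: rank the acts by that balance.
    ranking = sorted(balances, key=balances.get, reverse=True)

    # R: the act with the greatest balance is declared right.
    right_act = ranking[0]

    # G: an act is good iff its balance is positive, bad iff negative, neutral otherwise.
    def classify(balance):
        return "good" if balance > 0 else "bad" if balance < 0 else "neutral"

    for act in ranking:
        tag = " (right, on R)" if act == right_act else ""
        print(f"{act}: balance {balances[act]}, {classify(balances[act])}{tag}")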
If each act produces either goodness or badness (or both), and it is possible for amounts of goodness and badness to cancel each other out, we ought to be able to classify acts as either good, bad, or neutral, according to whether the sum of goodness over badness is positive, negative, or neither. This possibility is suggested by Bentham’s claim that An action . . . may be said to be conformable to the principle of utility . . . when the tendency it has to augment the happiness of the community is greater than any it has to diminish it. (Bentham 1789, ch. 1, para. 6)
Bentham is usually understood to be talking about rightness.6 This would seem to be a non-maximizing account of rightness, that locates the threshold between right and wrong actions at the same point as the threshold between good and bad actions, according to G. A combination of G with Bentham’s account of rightness could give us a version of (i), discussed above. All and only right actions are also good actions, but some are better than others. It also seems possible simply to equate rightness with goodness, according to G, (and wrongness with badness). In fact, Mill’s statement 6 See, for example, Quinton 1989, 1–3.
of the principle of utility can be interpreted as suggesting such a scalar interpretation of rightness (and wrongness): The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. (Mill 1861, ch. 2)
Another possibility is to incorporate both the maximizing requirement and the distinction between good and bad actions in an account of rightness. Slote suggests a modification of utilitarianism to the moral theory that results if one demands of a right action both that it produce consequences no less good than those producible by any alternative act available to a given agent and that those consequences be, on balance, good. (Slote 1985b, 162)
The distinction between good and bad actions, judged from a consequentialist point of view, might even figure in a non-consequentialist theory of right action. Shelly Kagan describes a theory that incorporates a “zero threshold constraint against doing harm” (Kagan 1989, 191ff.). Here’s some of what he has to say about this constraint: the zero threshold constraint against doing harm forbids doing harm in those cases where this will lead to an overall loss in objective good. It does not, however, provide any barrier to doing harm in those cases where this will result in an overall gain . . . so long as the harm brings about better consequences overall, it need not bring about the best. For so long as the suboptimal act of harm-doing will, on balance, bring about more good than harm, it will not be ruled out by the zero threshold constraint. (Kagan 1989, 191–2)
None of these philosophers is here explicitly advocating a theory of good actions, but they all seem to be using something like the concept involved in G, that is, the concept of an action producing a balance of goodness over badness. It would appear that a consequentialist can classify as good any action that will on balance, bring about more good than
Goodness and Comparisons 55 harm, or whose consequences are, on balance, good, or whose tendency to augment the good of the community is greater than any it has to diminish it. Indeed, these phrases appear to capture a single common and commonly accepted consequentialist concept. But that appearance is deceptive. I will argue that there are, in fact, several concepts that they might capture, but, since none of these yields a plausible version of G, there is no satisfactory way for a consequentialist to use G, or anything like it, to judge actions as simply good or bad, as opposed to better or worse than specific alternatives.
3.3 Goodness and Comparisons

How might we explain what it is to augment the good of the community, or for the consequences of an action to be, on balance, good? For the sake of simplicity, I will assume happiness and unhappiness to be the only things of intrinsic value and disvalue. Consider an agent, called Agent, whose action affects only herself and one other person, Patient. Agent is faced with a range of options that do not affect her own happiness, but have dramatically different effects on Patient’s happiness. This case seems simple enough. The good actions are those that make Patient happy, the bad are those that make him unhappy. But this won’t yet do. It seems to assume that Patient was neither happy nor unhappy to begin with. Let’s modify the account slightly. The good actions are those that make Patient happier, the bad are those that make him unhappier. Happier than what? One obvious answer is happier than he was before the action. If Agent does something that increases (or augments) Patient’s happiness, she has done a good thing. To generalize, we simply compare the welfare of all those affected by a particular action before and after the action. If the overall level of welfare is higher after than before, the action is good. If it is lower, the action is bad. If it is the same, the action is neutral. But this still won’t do. Consider again a restricted case involving only Agent and Patient. Call this case Doctor: Patient is terminally ill. His condition is declining, and his suffering is increasing. Agent cannot delay Patient’s death. The only thing she can do is to slow the rate of increase of Patient’s suffering by administering various drugs. The best
56 Good and Bad Actions available drugs completely remove the pain that Patient would have suffered as a result of his illness. However, they also produce, as a side-effect, a level of suffering that is dramatically lower than he would have experienced without them, but significantly higher than he is now experiencing.7 So the result of administering the drugs is that Patient’s suffering continues to increase, but at a slower rate than he would have experienced without them. The very best thing she can do has the consequence that Patient’s suffering increases. That is, after Agent’s action Patient is suffering N amount of suffering as a direct result of Agent’s action, and N is more than Patient was suffering before the action. Has Agent done a bad thing if she slows the rate of increase of Patient’s suffering as much as she can? This hardly seems plausible. It is consistent with the schematic description of this case to imagine that Agent has done a very good thing indeed. Clearly, we can’t simply compare states of affairs before and after a particular action. Agent has made Patient happier: not happier than he was, but happier than he would have been. We compare states of affairs, not across times, but across worlds. Agent has done a good thing, because she has made Patient happier than he would have been had she done something else. Even though Patient is now suffering more than he was, he would have been suffering even more, if Agent had done anything else instead. As I said, an evaluation of Agent’s action involves a comparison of different worlds. But which world (or worlds) do we compare with the world containing Agent’s action?8 With what do we compare Patient’s 7 The rate-of-increase of pain is essential to the example. It is important that Patient suffer more after the treatment than before, because the view I am arguing against involves a simple comparison of Patient’s welfare before and after the action. If the treatment left Patient in less pain after the action than before, it would count as a good action, both intuitively and according to the account in question. Also, why not simply have an example involving a regular painkiller, that removes some, but not all, of the pain that Patient would have suffered? It would avoid possible complications if Patient’s later states are uncontroversially caused by Agent’s action. According to some intuitively appealing accounts of causation and mental state identity, ordinary pain-killers do cause the later painful states. However, there are theories of caus ation and mental state identity according to which this is not so, and I don’t want my example to depend on the truth of any particular controversial metaphysical view. 8 For the sake of simplicity, I pretend throughout this chapter that the world is deterministic. Thus, I talk of the world in which the action is performed. Perhaps, though, an acceptance of indeterminism will provide a method of assessing the goodness or badness of actions. Consider the following sketch of an account, suggested both by an anonymous referee for the Philosophical Review and by Mark Brown: A possible action determines a cone of worlds: all
suffering? There are several other ways for Agent to behave. Which of these alternatives provides the relevant comparison? In this case it doesn’t seem to matter, because Agent has done the best thing possible. None of her other available options would have resulted in less suffering for Patient than did her actual behavior. But we don’t want to demand of a good action that there be no better alternatives. This would be a maximizing account of good actions, that would equate goodness with the maximizing notion of rightness. To see why this is unacceptable, consider the following case, that I will call Self-sacrifice: Agent is able, at the cost of some considerable effort and pain to herself, to make Patient moderately happy. She does so. This is nearly the best thing that she could do, but not quite. One alternative course of action would have made Patient considerably happier, while all her other alternatives would have resulted in far less happiness for him, and some would even have led to unhappiness. The action that would have resulted in more happiness for Patient, however, would have involved a fair bit more effort and pain for Agent. The extra effort and pain for Agent would have been slightly

8 (cont.) possible histories of the universe coinciding with the actual world up to the point of action and in which the action gets done. The value of the action, in terms of goodness or badness, is that of its cone. The value of the cone is determined by the value of the post-act part of the worlds in it, probably by integrating their value weighted by their probability of being the actual world. Unfortunately, the example of Doctor can be easily modified to show this kind of approach to be inadequate. Imagine that Agent and Patient are the last two living sentient beings, and that Agent is suffering from a similar condition to Patient’s, but less advanced. After they are both dead, there will be no more morally relevant beings, ever. Further imagine that Agent is unable to kill either herself or Patient. The best she can do for Patient is to administer the drugs that result in a slowed rate of increase of Patient’s suffering. It is highly plausible that the action cone for Agent’s act of administering the drug to Patient is bad. The probability of a miraculous recovery for either Agent or Patient is negligible. All, or almost all, of the post-act portions of the worlds in the action cone are bad. So, any plausible method of integrating the value of the worlds in the cone will yield the result that the cone is overall bad. Of course, all the action cones of the alternative possible actions will be even worse. But it is highly implausible to suggest that Agent’s action is bad, but not as bad as the alternatives. If we have an account of good and bad actions, it should judge her action to be good. More generally, the problem with the action cone approach is that it judges many intuitively good actions to be bad and many intuitively bad actions to be good. Consider a situation in which it’s overwhelmingly likely that, no matter what I do, goodness will outweigh badness throughout the future of the world. The worst thing that I can do is to torture and kill five people. Even if I do that, however, there will probably be such an abundance of goodness throughout the rest of the world, maybe in terms of happiness, that the post-act portions of the worlds in my action cone will be, almost exclusively, good. The integrated value of my action cone is good.
But my action is clearly not good. My action is judged as good, on this approach, because the goodness of states of affairs that are unaffected by it outweighs the badness I bring about. This suggests a modification of this approach that counts only those states of affairs that are affected by an action. I discuss such an approach in Section 3.7.
58 Good and Bad Actions outweighed by the extra happiness for Patient, but only slightly. So there is something that Agent could have done that would have had even better, albeit marginally better, consequences than what she did. But do we really want to say that Agent didn’t do a good thing in sacrificing her own comfort for the sake of Patient’s happiness? She didn’t do the best thing, but what she did seems to be pretty good. This is not to say, of course, that there are no situations in which only the best action is plausibly regarded as good, whether by consequentialists or others. There may even be situations in which great self-sacrifice is required in order to do good (the example of Lifeboat, discussed below, is possibly such a case). Such cases will typically involve the prevention of some great harm to another. In Self-sacrifice, however, Agent is already providing Patient with a considerable benefit, at no small cost to herself. The only motivation I can see for insisting that her action is not good, on the grounds that she can do even better, is the determination to equate the notions of the right and the good as applied to actions. If we are to give a consequentialist account of good actions, we should accommodate the intuition that at least some suboptimal acts are nonetheless good. The example of Self-sacrifice demonstrates that optimization is not an appropriate standard of goodness, but it also suggests a different approach. The reason why Agent doesn’t have to optimize in order to do good, it might be claimed, is that optimization involves, in this case, a greater sacrifice of her own interests than is required for mere goodness. (This leaves open the possibility that optimization is nonetheless required for rightness.) At this point we might be tempted to adapt Samuel Scheffler’s agent-centered prerogative to apply instead to goodness.9 Consider the following account: GAC: An act is good iff either (i) it is optimal,10 or (ii) producing better consequences would require showing less than a certain proportionate bias toward consequences for the agent. According to GAC, Agent’s action in Self-sacrifice could still be good, so long as the better action would have required showing less than the 9 Scheffler 1982. 10 Without (i) GAC would give the strange result that many sub-optimal acts were good while their optimal alternatives were not.
Goodness and Comparisons 59 relevant degree of bias towards herself. It seems plausible to assume that, whatever the relevant degree of bias is, it will be greater than Agent would show in performing the best action. This is because pretty much any bias towards herself would result in Agent preferring the secondbest over the best action. Recall that the extra burden that the best action would have imposed on Agent would only just have been outweighed by the extra benefit for Patient. Despite the success of GAC in coping with Self-sacrifice, I don’t think consequentialists should embrace it as an account of good actions. There are two reasons for this. First, GAC is an agent-relative account of good actions. The classical utilitarians all endorsed a non-agent-relative standard for assessing actions. Most famously, Bentham required “every body to count for one, and nobody for more than one,” and Mill said of the utilitarian agent, “As between his own happiness and that of others, utilitarianism requires him to be as strictly impartial as a disinterested and benevolent spectator.” (Mill 1861, ch. 2) These claims were made in connection with assessing the rightness of actions, but they embody a central feature of consequentialist ethical theories. Scheffler’s agentcentered prerogative is seen as a departure from consequentialism, not simply because it rejects maximization, but because the rejection of maximization is achieved by allowing agents a degree of partiality towards themselves. Those consequentialists for whom the disinterested benevolent spectator provides the appropriate model of moral assessment of actions will be loath to abandon that model when it comes to assessing the goodness of actions. I don’t wish to claim that it would be inappropriate to call a view incorporating agent-relative standards “consequentialist.” I suspect that non-agent-relativity in all action judgments is part of what distinguishes consequentialist theories from other ethical theories, but that is beyond the scope of the current work.11 At the very least, it is worth noting that, even if GAC were otherwise acceptable as an account of good actions, it would be unappealing to those consequentialists who embrace non-agent-relativity in all action judgments. But GAC is not otherwise acceptable. Optimization is unacceptable as the standard of goodness, because it excludes too much, such as Agent’s action in Self-sacrifice. GAC expands 11 For a useful discussion of this issue, see Howard-Snyder 1994.
60 Good and Bad Actions the realm of good actions to include this action and others like it. However, GAC both excludes too much and includes too much, as the following examples demonstrate: Self-sacrifice 2: Agent is able, at the cost of some considerable effort and pain to herself, to make Patient moderately happy. She does so. This is nearly the best thing that she could do, but not quite. Her best option involves shifting the burden of making Patient happy from herself onto a third person, Other. The pain and sacrifice for Other in the best option would have been fractionally less than Agent bore in the second-best option. The happiness for Patient would have been identical. Agent’s action is not optimal, nor would producing better consequences require her to show less than the requisite amount of bias towards herself, since the best action—shifting the burden to Other—involves far more bias towards herself than does the second-best action. According to GAC, therefore, Agent’s action of bearing the burden of making Patient happy rather than imposing a fractionally smaller burden on someone else is not good. But this is highly implausible. How could a supporter of GAC defend this result? Perhaps she could argue as follows: Optimization is the default standard of goodness. However, agents are permitted a certain bias towards themselves. Thus, some departures from optimization can still be good, if better actions would have required excessive self-sacrifice. This is what allows Agent’s action in Selfsacrifice to count as good. In Self-sacrifice 2 Agent can do better at less cost to herself, so her decision to bear the burden of making Patient happy is just pointless masochism. I have two replies to this. First, why should optimization be the default standard of goodness? Aren’t there many non-optimal actions that are intuitively good, even when better actions wouldn’t involve significant self-sacrifice? Isn’t my giving $50 to a worthy charity a good action, even though I could have given $51 without significant self-sacrifice? Second, Agent’s action in Self-sacrifice 2 is not pointless masochism. The point is to spare Other the burden of helping Patient. This becomes clearer when we fill in the details of the case. Suppose my young child is sick and miserable and needs comfort in the middle of the night. My wife could provide the same amount of comfort as me at fractionally less cost to herself (she has a slightly less burdensome day ahead). Nonetheless, I drag myself out of bed and let her get a good night’s rest.
GAC also classifies as good some actions that clearly aren’t, such as the following example: Lifeboat: Agent and Patient are adrift in a lifeboat, with only enough food to sustain one of them until help arrives. If they attempt to share the food, they will both die. Agent is a secondhand car salesman, who specializes in selling lemons. Patient is a dedicated physician, who runs a free clinic for poor children in the inner-city. While Patient is sleeping, Agent tips her over the side of the boat, thus ensuring his own survival. This is not the best action, but to do better, Agent would have had to have sacrificed his own life. Such a sacrifice would clearly have involved showing less than the permitted bias towards himself. According to GAC, then, Agent’s action of tipping Patient over the side of the boat is good. But this won’t do. However excusable we may deem Agent’s action, it is, by no stretch of the imagination, good.
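Before leaving GAC, it may help to see its structure schematically. The Python sketch below reads the “proportionate bias” as a weighting factor that the agent may give to her own welfare, which is one natural way, though not the only way, of cashing out Scheffler’s prerogative; the weighting factor and all of the figures are invented purely for illustration. With these stipulations the sketch reproduces the verdicts just discussed: the act in Self-sacrifice comes out good, the act in Self-sacrifice 2 does not, and the act in Lifeboat does.

    # Illustrative sketch of GAC. "Proportionate bias" is modelled as a factor
    # M > 1 by which the agent may multiply her own welfare (an assumption
    # about how to formalize the prerogative, not part of GAC as stated).
    # All numbers are invented.

    M = 10  # assumed permitted degree of bias towards the agent

    def impartial(o):
        return o["agent"] + o["others"]

    def weighted(o, bias=M):
        return bias * o["agent"] + o["others"]

    def gac_good(options, chosen, bias=M):
        """Good iff the chosen act is optimal, or no alternative beats it once
        the agent's permitted bias towards her own welfare is factored in."""
        best_impartial = max(impartial(o) for o in options.values())
        best_weighted = max(weighted(o, bias) for o in options.values())
        o = options[chosen]
        return impartial(o) == best_impartial or weighted(o, bias) == best_weighted

    cases = {
        # Self-sacrifice: the chosen act is second-best impartially, but the
        # marginally better act costs the agent more than any bias permits.
        "Self-sacrifice": {"chosen": {"agent": -10, "others": 20},
                           "best": {"agent": -21, "others": 32}},
        # Self-sacrifice 2: the best act shifts a fractionally smaller burden
        # onto Other (Patient +20, Other -9), so bias cannot rescue the chosen act.
        "Self-sacrifice 2": {"chosen": {"agent": -10, "others": 20},
                             "best": {"agent": 0, "others": 11}},
        # Lifeboat: tipping Patient overboard is far worse impartially, but a
        # large enough bias towards the agent's own survival makes it "good".
        "Lifeboat": {"chosen": {"agent": 60, "others": -400},
                     "best": {"agent": -60, "others": 400}},
    }

    for name, options in cases.items():
        print(f"{name}: good according to GAC? {gac_good(options, 'chosen')}")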
3.4 Agency and Inaction

When we think of someone doing a good or a bad thing, I suggest that an underlying concept is that of making a difference to the world. It is natural to think of a good action as one that makes the world better than it would have been if the action hadn’t been performed. This suggests the following interpretation of G: GC: An act A is good iff the world would have been worse if A hadn’t been performed; A is bad iff the world would have been better if A hadn’t been performed. This gives the right result in Doctor. If Agent hadn’t done what she did, the world would have been worse. Patient would have been suffering even more. GC assesses the goodness of an action by comparing the world in which it occurs with a world in which it doesn’t occur. But which world in which it doesn’t occur is the relevant one? In Doctor, for example, there are many different ways that Agent could have failed to administer the pain-reducing drugs to Patient. There are many different things that she could have done instead, including doing nothing. In this case, we don’t have to know just what Agent would have done instead,
because we know that she did the best she could, so anything else would have been worse. The intuitive reading of GC involves a comparison with the world in which the agent is inactive. When we ask what the world would have been like if the action hadn’t been performed, we are considering a world in which the agent simply doesn’t exercise her agency. So, what is it not to exercise one’s agency? One obvious possibility is to remain completely immobile. But this clearly won’t do. Consider the following case, that I will call Button pusher: Agent stumbles onto an experiment conducted by a twisted scientist, named Scientist. He is seated at a desk with ten buttons, numbered “0” through “9,” in front of him. He tells her that the buttons control the fates of ten people. If no button is pressed within the next thirty seconds, all ten will die. If the button marked “9” is pressed, only nine will die; if “8” is pressed, eight will die, and so on down to “0.” He was, he explains, about to sit and watch as all ten died. However, to honor her arrival, he turns control of the buttons over to Agent. She is free to press any button she wishes, or to press none at all. Agent pushes “9,” killing nine people. If she had remained immobile, all ten would have died. According to GC, then, her action is good, since the world would have been worse, if she hadn’t performed it. But her action is not good. It led to the deaths of nine people who needn’t have died. She could have pressed “0” instead. Any satisfactory account of good actions has to judge this to be a bad action. Perhaps we should compare the results of Agent’s action with what would have happened if Agent hadn’t even been on the scene. There seem to be two ways to interpret this suggestion: (i) We imagine a world identical to the actual world before t, in which the agent miraculously vanishes from the scene at t; (ii) We imagine a world as similar as possible to the actual world before t, in which the agent is non-miraculously absent from the scene at t. That is, we imagine what would have had to have been different before t in order for the agent to have been absent at t. (i) runs foul of Button pusher. If Agent had miraculously vanished, instead of pushing button “9,” all ten people would have died. But this consideration clearly doesn’t incline us to judge that Agent did a good thing. (i) will also judge some intuitively good actions to be bad. Consider the following case, called Point Guard: Agent is a point guard and Patient is a power forward. Agent has just thrown a perfect alley-oop
Agency and Inaction 63 pass to Patient, who has successfully dunked the ball. They are running towards each other preparing to slap hands high in the air.12 The slapping of Agent’s hand gives Patient a good deal of pleasure. Does Agent do a good thing in jumping up and slapping Patient’s hand? It would be very strange if we were forced to say that he does a bad thing. But let’s say that the sight of Agent suddenly vanishing would have caused a lot of enjoyment to Patient and everyone else. That enjoyment would have been considerably greater than the fairly trivial pleasure of hand slapping. Would we then have to say that it was a bad thing to slap Patient’s hand? In neither Button pusher nor Point Guard does it seem right to say that the goodness or badness of Agent’s action is affected by such counterfactual judgments. The fact that everyone would have had a good laugh if the point guard had suddenly vanished before their eyes doesn’t lead us to judge his hand slap to be bad. (ii) seems more promising. How do I know whether I have done a good thing? I ask myself whether I have made things better than they would have been if I hadn’t even been here in the first place. But this won’t do, either. Once again, it gives the wrong results in Button pusher and in Point Guard. In Button pusher, If Agent hadn’t even shown up in the first place, Scientist would have let all ten die, but we don’t on that count judge Agent’s action to be good. In Point Guard, assume that Patient was the backup point guard, and would have enjoyed playing point guard far more than merely being slapped on the hand by Agent. If Agent hadn’t been on the scene, Patient would have played point guard. But that consideration doesn’t lead us to judge Agent’s hand slapping to be a bad thing. These counterfactuals don’t capture what we mean when we ask what would have happened if the agent hadn’t exercised her agency. In Button pusher Agent kills nine people, but ten would have died had she been inactive, either through immobility or absence from the scene. The problem is not just that inactivity gives unacceptable results in particular cases, but rather that the comparisons it invites do not seem 12 For those who are unfamiliar with basketball, imagine that Agent is a wing forward who has just crossed the ball to Patient, a center-forward, who scores a goal. The rest of the story is the same. For those who are unfamiliar with both basketball and soccer, I would like to point out that an appreciation of sports, especially those two sports, contributes greatly to a rich inner life.
64 Good and Bad Actions relevant to the goodness or badness of actions. If I do something that seems to be very bad, such as killing nine people with the press of a button, why should it matter that the consequences would have been even worse if I had been immobile or absent from the scene? Whether it is a good or a bad thing to kill nine people doesn’t seem to depend on whether even more would have died if I had been inactive, unless, perhaps, my killing nine is the only alternative to more deaths. In Button pusher, however, Agent could easily have prevented any deaths. These counterfactuals, then, don’t seem relevant to the goodness or badness of the actions. Both immobility and absence from the scene have failed to give us a satisfactory neutral point with which to compare the results of Agent’s actions. Not only do they give the wrong moral judgments about some actions, but they simply don’t seem to capture what we mean when we ask what would have happened if the agent hadn’t exercised her agency. The comparisons they invite are simply not the correct ones. When I wonder what would have happened if I hadn’t exercised my agency, I don’t suppose myself to be immobilized, or to be removed from the scene, miraculously or not.
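The shape of the objection can be put schematically. The following Python sketch renders GC with an “inaction” baseline of the sort just rejected; the scoring of outcomes by deaths avoided is an invented simplification, used only to display the structure of the comparison.

    # GC with an inaction baseline, applied to Button pusher. Outcomes are
    # scored (artificially) by deaths avoided among the ten people at risk.

    def value(deaths):
        return 10 - deaths  # deaths avoided: more is better

    def gc_verdict(action_deaths, baseline_deaths):
        """GC: the act is good iff the world is better than it would have been
        under the baseline (here, the agent remaining immobile or absent)."""
        difference = value(action_deaths) - value(baseline_deaths)
        return "good" if difference > 0 else "bad" if difference < 0 else "neutral"

    # Agent presses "9": nine die. Under the immobility baseline all ten die.
    print('pressing "9":', gc_verdict(action_deaths=9, baseline_deaths=10))
    # GC calls the act good, even though pressing "0" (no deaths) was available,
    # which is precisely the verdict the chapter rejects.
    print('pressing "0":', gc_verdict(action_deaths=0, baseline_deaths=10))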
3.5 The Course of Nature

If we are to judge an action by comparing its consequences with what would have happened in the absence of agency, we need an intuitively acceptable account of the latter notion. One account with a lot of intuitive appeal is to be found in the writing of Alan Donagan. Donagan defines an action as “a deed done in a particular situation or set of circumstances; . . . [consisting] partly of [the agent’s] own bodily and mental states.”13 He continues: Should he be deprived of all power of action, the situation, including his bodily and mental states, would change according to the laws of nature. His deeds as an agent are either interventions in that natural process or abstentions from intervention. When he intervenes, he can 13 Donagan 1977, 42.
The Course of Nature 65 be described as causing whatever would not have occurred had he abstained; and when he abstains, as allowing to happen whatever would not have happened had he intervened. Hence, from the point of view of action, the situation is conceived as passive, and the agent, qua agent, as external to it. He is like a deus ex machina whose interventions make a difference to what otherwise would naturally come about without them.14
In considering what would have happened if Agent hadn’t acted, Donagan doesn’t imagine Agent to be immobile or absent from the scene. Instead, he asks what would have happened in “the course of nature” (his phrase). The course of nature can include not only Agent’s physical presence, but also changes in her “bodily and mental states.” It is the exercise of human agency that gives Agent the option to intervene in the course of nature or to allow nature to take its course. All of Agent’s deeds are either interventions or abstentions. Those that make a difference to the course of nature, or what would have happened anyway, are interventions; those that leave the course of nature unchanged are abstentions. Whatever problems there are with this analysis of the difference between doing and allowing, it seems to have a good deal of intuitive support. How, then, does this account of what would have happened if I hadn’t exercised my agency fare as part of an analysis of the difference between good and bad actions? The major problem with this latest suggestion as a reading of GC is that it entails that letting nature take its course is never good or bad.15 According to GC, we assess the goodness or badness of an action by comparing the resulting world with the world that would have resulted if the agent hadn’t exercised her agency. According to Donagan’s account of agency, when an agent abstains from intervening in the course of nature, that is when she lets nature take its course, the resulting world is as if she hadn’t exercised her agency. So the world that results from letting nature take its course is neither better nor worse than the world 14 Ibid., 42–3. For a useful discussion of this passage and the distinction between doing and allowing contained therein, see Bennett 1993. 15 This is not to say that Donagan’s account itself entails that letting nature take its course is always morally neutral. It is only when we plug Donagan’s account into GC that we get this result.
66 Good and Bad Actions that would have resulted if the agent hadn’t exercised her agency. If Agent allows to happen what would have happened anyway, her behavior is neither good nor bad. This may be more amenable to anticonsequentialists, who are more inclined to invest the doing/allowing distinction with moral significance than are consequentialists, but it will be unacceptable nonetheless. Even those who claim that it is generally worse to do bad things than to allow them to happen (and that it is better to do good things than to allow them to happen?) will admit that some allowings can be very bad and some can be very good. A couple of examples will suffice. Freedom fighter: Suppose that Agent is a lazy good-for-nothing, who just happens to be the spitting image of a courageous freedom fighter who is wanted by the oppressive government. Both Agent and the freedom fighter are in a bar one night when the forces of evil arrive, but the freedom fighter happens to be in the toilet. The policeman announces that they have been given reliable information that the freedom fighter is in the bar, and they won’t leave until they kill her. He immediately spots Agent. The police thrust Agent against a wall and tell her to hurry up and say whatever she has to say before they shoot her, because they are in a tearing hurry to get to the donut shop before it closes. Agent’s immediate strong inclination is to reveal her identity and direct the police to the bathroom. Instead, however, seizing her chance to redeem her pitiful life, she mumbles “it is not only a far far better thing that I do now than I have ever done before, but it’s also a pretty damn good thing.”16 The police shoot Agent, and leave the bar satisfied, before the freedom fighter, unaware of what has happened, emerges from the rest room. Agent had the chance to reveal her identity and save her life, but she chose to let things happen as they would have happened had she not exercised her agency at all. According to the analysis of good and bad actions we are considering, this would be neither good nor bad. It is, however, hard to imagine a better candidate for a good action than this one. If we have a theory of good and bad actions, it had better classify this as a good action. If you are uncomfortable with the fact that the consequences of Agent’s behavior run through the wills of other agents, consider this 16 My apologies to Charles Dickens.
Goodness and Counterfactuals 67 alternative: Agent is out mountain-climbing, and notices that the freedom fighter is directly below her. She immediately looks up to see a large rock about to hit her. As in the previous example, her immediate impulse is to move and save herself at the expense of the freedom fighter. She fights this impulse, however, allowing herself to be hit, and killed, by the rock, thus saving the freedom fighter. Note that the consequentialist’s reason for classifying this or the previous example as a good action has nothing to do with the fact that it is a case of self-sacrifice. It is because Agent has saved the freedom fighter that the action seems to be good. The freedom fighter will do a lot more good with the rest of her life than Agent would have done if she had lived. If, by contrast, Agent had sacrificed herself to save an even more worthless character than herself, no consequentialist would be tempted to say she had done a good thing. (Consequentialists can, however, praise the motive involved in self-sacrifice on the grounds that such a motive will usually be beneficial.) We also tend to think that allowings can be very bad. Return to the scenario of Button pusher. Suppose that Agent doesn’t press any button. That would be a very bad thing to do, according to any plausible account of good and bad action. But the account we are considering judges it to be morally neutral. What is perhaps even more counterintuitive, if Agent presses “9,” she has done a good thing! It seems, then, that Donagan’s account of what would have happened if Agent hadn’t exercised her agency doesn’t help the consequentialist provide a satisfactory account of the difference between good and bad actions.
3.6 Goodness and Counterfactuals

There are other ways to read the counterfactuals in GC, that will give different accounts of goodness and badness. The most obvious alternative reading involves a judgment about which other possible world is closest to the world in which the action occurs.17 Instead of comparing the world in which the action occurs with a world in which the agent is either immobile or absent from the scene, we compare it with a world that is as much like it as possible, consistent with the action not 17 See, for example, Lewis 1973.
occurring. Sometimes that will be the world in which the agent is immobile, but often it will be a world in which the agent does something else instead. Let’s say I hit the bullseye while playing a game of darts. Given that I’m not a particularly good darts player, the closest world in which I don’t hit the bullseye is probably one in which I just miss it and hit the collar around it instead. According to this interpretation of GC, we consider what the agent would have done instead, if she hadn’t performed the action in question. How does this interpretation of GC handle the examples of Self-sacrifice and Button pusher? Consider Self-sacrifice first. Would the world have been worse if Agent hadn’t done what she did? Since she didn’t do the best possible thing, that depends on what she would have done instead. What Agent did in Self-sacrifice was very nearly the best. It involved a considerable amount of effort and pain for herself. It might seem plausible to assume, then, that if she hadn’t done that, she would have done something worse. A deviation from her actual action that required less self-sacrifice would have been easier, and thus more likely, than one that required more. A world in which she sacrifices less would seem to be closer to the actual world than would one in which she sacrifices more. But these are guesses, based on a very sketchy description of the case. Let’s add a couple of details to my previous description of Self-sacrifice. Agent is a committed consequentialist, highly predisposed towards self-sacrifice. There are no options available to Agent that are only slightly worse than what she did. In fact, the next best thing that she could do is much worse, involving suffering for Patient. The addition of these details makes it much less likely that Agent would have done something worse, if she hadn’t done what she did. In fact, it now seems overwhelmingly likely that she would have done even better, if she hadn’t done what she did. According to GC, then, her action was bad, since the world would have been better if it hadn’t been performed. But her action wasn’t bad. The fact that Agent’s character made it extremely unlikely that she would have done worse than she did doesn’t alter our intuitive judgment that her action was good. Now for Button pusher. Once again, let me add some details to my previous description of the case. Agent is highly misanthropic. She delights in the misfortunes of others, especially their deaths. Her initial inclination is to refrain from pushing any buttons, so that all ten will die. She is dissatisfied, though, that this will
Goodness and Counterfactuals 69 involve, as she sees it, merely letting people die. She wants as many as possible to die, but she also wants to kill them. At the last second she changes her mind, and pushes “9.” If she hadn’t pushed “9,” she wouldn’t have pushed any button. She didn’t even consider the possibility of pushing a different button. The only question she considered was whether she should kill 9 or let 10 die. Clearly, the closest world in which Agent doesn’t push “9” is one in which she doesn’t push any button, and all ten die. Once again, GC judges Agent’s action to be good. But we are no more inclined to believe that her action is good than we were before we knew about her character defects. The fact that Agent’s character made it highly probable that she would have done even worse than she did doesn’t alter our intuitive judgment that her act of killing nine people was bad. What is particularly disturbing for a consequentialist about this latest reading of GC is that it makes the character of the agent relevant to the goodness of the action.18 The better the agent, the harder it is for her to do something good, and the worse she is, the easier it is. I don’t deny that the character of agents can influence some judgments about actions or other events in a fairly natural way. For example, our judgments about whether it’s a good thing that something happened are often influenced by our prior expectations. If we would have expected an outcome of an event or action to be worse, whether because of our knowledge of the characters of the agents involved or because of our prior experience of similar events, we may be pleased to discover that things aren’t as bad as they might have been. Thus, we might claim that it’s a good thing that only ten people were killed in the plane crash, or that the Republican Congress cut entitlements by only 80 percent. We don’t mean by this that the budget cut was good, just that it wasn’t as bad as we were expecting. It seems that none of the interpretations of GC can provide the consequentialist with a satisfactory account of what it is for an action to be
18 The problem here is both that the proposal makes character relevant to the goodness of actions at all, and that it does so in a particularly counterintuitive way. For a consequentialist, the first problem is more significant. This criticism also applies to the use of Donagan’s account of the course of nature. Since the course of nature includes both the physical presence of agents and changes in their “bodily and mental states,” an agent’s dispositions will affect what would have happened in the course of nature.
good. The intuition on which they were based is that a good action makes the world better. The difficulty lies in producing a general formula to identify the particular possible world (or worlds) than which the actual world is better, as a result of a good action. Any unified theory requires a way of fixing the contrast point, but the contrast point varies from situation to situation. Part of the problem is that our intuitions about the goodness or badness of particular actions are often influenced by features of the context that it would be difficult to incorporate into a general account. I will have more to say about this in Section 3.8.
3.7 In Search of a Noncomparative Account
I began this chapter with some examples of philosophers who seemed to be making use of the concept of an action’s consequences being, on balance, good. I have tried various interpretations of this concept, and found that none of them provides a plausible consequentialist account of what it is for an action to be good. Each of these interpretations has involved a comparison between the world that results from the action and a different, uniformly specified, possible world. It might be objected at this point that this is the wrong approach. If I claim that an action’s consequences are, on balance, good, I am making a claim about the absolute value of certain states of affairs, not about their comparative value. If I am a hedonist, for example, I am not claiming that the world that resulted from the action contained a greater balance of pleasure over pain than any other particular world; I am claiming that the consequences of the action contained a balance of pleasure over pain. The fact that none of the comparative accounts of an action’s goodness proved satisfactory is entirely to be expected. We should be looking for a noncomparative account. If we take this objection seriously, we must ask what the consequences of an action are. Recall the scenario in Doctor. Agent administers a drug to Patient, who endures a great deal of suffering before eventually dying. Is Patient’s suffering a consequence of Agent’s action? It is hard to see how it couldn’t be. The drug that Agent administered produced the suffering. Patient experiences a good deal of pain and no pleasure as a result of Agent’s action. Assume that Agent experiences neither pleasure
In Search of a Noncomparative Account 71 nor pain. Agent and Patient are the only people involved. It would seem that the consequences of Agent’s action contain a balance of pain over pleasure. But Agent is doing the best she can, she is slowing the rate of increase of Patient’s suffering as much as possible. How could this be a bad thing to do? Perhaps the problem is that we have given only an incomplete description of the states of affairs that constitute the consequences of Agent’s action. It is true that Patient is suffering a good deal, but it is also true that he is not suffering even more. Perhaps this outweighs the suffering. More needs to be said, though. If I inflict some gratuitous suffering on you, it is true both that you suffer to a certain degree and that you don’t suffer any more than that. This is so no matter how much suffering I inflict on you. But there’s a crucial difference between Doctor and the infliction of gratuitous suffering. The suffering inflicted by Agent on Patient is not gratuitous. It is needed to prevent greater suffering. Perhaps what outweighs Patient’s suffering is not the simple fact that he is not suffering even more, but the more complex fact that greater suffering is prevented. This latter fact underlies our intuition that Agent has done a good thing. Patient was going to suffer even more, and Agent’s action prevents that suffering. When I inflict gratuitous suffering on you, I don’t prevent further suffering, because you weren’t going to suffer even more. But now it seems we no longer have a non-comparative notion of consequences. What distinguishes the case of Doctor from the gratuitous infliction of suffering is that Patient was going to suffer even more if Agent hadn’t administered the drug, but you weren’t going to suffer even more if I hadn’t inflicted suffering on you. If the fact that Patient doesn’t suffer even more is to count as a consequence of Agent’s action, but the fact that you don’t suffer even more is not to count as a consequence of my action, our notion of consequences must involve comparisons with other possible worlds. Similar problems plague any attempt to produce a non-comparative account of the notion of harming, as I will argue in Chapter 4. There is one more approach I will consider in search of a non-comparative notion of the goodness or badness of actions.19 If we can identify particular concrete states of affairs as the consequences of an action, we can evaluate those states of affairs as either on balance good, 19 I owe the following suggestion to Peter Vallentyne.
on balance bad, or neither. What prevents some (maybe most) states of affairs from being consequences of any particular previous action is that the action doesn’t affect them. In Doctor, Agent’s action affected Patient’s conscious mental states, but it didn’t affect the pleasures or pains of people thousands of miles away. The world that resulted from Agent’s action contained Patient’s suffering. It also contained the suffering and the pleasures of millions of other people, but, with respect to Agent’s action, these states of affairs were unavoidable.20 Agent’s action did not affect them. Consider an action A performed by an agent. A state of affairs S is avoidable with respect to A iff there is some action B that the agent could have performed in place of A, such that B would not have been followed by S.21 This suggests the following account of good actions:
GS: An action A is good iff the states of affairs that are avoidable with respect to A are, on balance, good; A is bad iff the states of affairs that are avoidable with respect to A are, on balance, bad.
Is GS an acceptable consequentialist account of good and bad actions? Is it a non-comparative account? It appears to be non-comparative. To determine whether a particular action is good, a consequentialist simply has to evaluate a set of states of affairs in one world. No comparison of values across worlds is needed. There is, however, an element of comparison across worlds involved in determining which states of affairs are avoidable with respect to an action. What makes a state of affairs avoidable with respect to an action is the fact that there is a possible world, accessible to the agent at the time of the action, that doesn’t contain that state of affairs. I’m not sure whether this challenges GS’s claim to be a non-comparative account of good and bad actions. It isn’t worth pursuing the point here, since GS is unacceptable, as I will explain.
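Before explaining why, it may help to have GS in compact form. The notation here, including the value function v and the additive reading of “on balance,” is my own shorthand rather than part of the official formulation:
\[
\textit{Avoidable}(S, A) \;\iff\; \exists B \in \textit{Alt}(A)\ \text{such that}\ B\ \text{would not have been followed by}\ S
\]
\[
\textit{GS:}\quad A\ \text{is good} \;\iff\; \sum_{S:\ \textit{Avoidable}(S,A)} v(S) > 0; \qquad A\ \text{is bad} \;\iff\; \sum_{S:\ \textit{Avoidable}(S,A)} v(S) < 0
\]
Here Alt(A) is the set of actions the agent could have performed in place of A, and v(S) is the value of the state of affairs S, restricted, as note 21 requires, to states of affairs temporally subsequent to A.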
20 It would be more accurate to say that the states of affairs are either avoidable or unavoidable with respect to an agent at a time, since the crucial question is whether the agent could have acted at that time in such a way that the state of affairs would not have obtained. It makes for a less cumbersome formulation, however, to tie avoidability to the action. 21 I mean here to exclude the use of backtracking counterfactuals. Only temporally subsequent states of affairs can be avoidable with respect to any particular action.
Intuitive Judgments 73 Consider the following example, called Party: Agent is at a party with one hundred other guests. The party is very isolated, so Agent is not in a position to affect the welfare of anyone who is not there. The other guests are all having a wonderful time. Agent possesses one dose of stomach ache powder that she drops in the glass of another guest, Patient, when Patient is not looking. Patient develops a severe stomach ache as a result of ingesting the powder, and is very unhappy for the rest of the party. The other ninety-nine guests, unaware of Patient’s suffering, are extremely happy. Agent chose Patient, because Patient was the most susceptible to the powder. She could have dropped the powder in the glass of any of the guests. Different guests would have suffered to different degrees, but none as much as Patient. She could also have disposed of the powder without harming anyone. If we have an account of good and bad actions, it must judge Agent’s action to be bad, though perhaps not fiendishly so. GS, though, judges Agent’s action to be good. Since Agent could have dropped the powder in anyone’s glass, each guest’s happiness or misery is an avoidable state of affairs with respect to Agent’s action. Since ninety-nine of them are exceedingly happy and only one is very unhappy, the avoidable states of affairs are, on balance, good. (If you are inclined to think that the misery of one guest would outweigh the happiness of ninety-nine, simply add more happy guests until you change your mind.) GS also judges many intuitively good actions to be bad. Agent’s action in Doctor is one example. I know of no other account of what it is for an action’s consequences to be, on balance, good, or for the act to bring about more good than harm, or for its tendency to augment the general happiness to be greater than any it has to diminish it, or of any of the other related notions. Perhaps a satisfactory account can be produced, but I doubt it. Despite the widespread use of such notions, I conclude that they are, in fact, unavailable to consequentialists as accounts of the difference between good and bad actions.
3.8 Intuitive Judgments
Where does this leave a consequentialist account of the moral status of actions? If the arguments of the previous chapter were convincing,
74 Good and Bad Actions consequentialists should abandon (at least at the fundamental level) the notions of right and wrong actions. If the arguments of this chapter are convincing, consequentialists should also abandon the notions of good and bad actions. They can judge actions to be better or worse than alternatives, and better or worse by certain amounts, but not to be good or bad simpliciter. Just how surprising is this latter result? Perhaps it is not surprising at all. After all, moralities in which ascriptions of goodness to acts are fundamental are concerned with intentions or moral character in a way that consequentialism is not.22 Perhaps, then, it is not surprising that consequentialism can provide no satisfactory account of the goodness of actions. But I have been arguing for this conclusion by considering whether consequentialism can provide a satisfactory account of what it is for an action’s consequences to be, on balance, good, or for the act to bring about more good than harm, or any of the other related notions with which I began the chapter. That these notions, which are widely used by consequentialists, admit of several different interpretations, none of which provides a satisfactory consequentialist account of good actions, is certainly surprising. In fact, there are more surprises. Recall Sydney Carton’s thoughts in A Tale of Two Cities. “It is a far, far better thing that I do, than I have ever done.” He might plausibly have added (as does Agent in my example of Freedom fighter), “And it’s a pretty damn good thing, too.” I have been arguing that the latter claim is, strictly speaking, unavailable to a consequentialist. Despite appearances, Carton’s action is neither good simpliciter nor good to a certain degree (moderately, fairly, very, pretty damn, etc.). In fact, his original, comparative, claim is also unavailable. Consequentialists can judge actions to be better or worse than alternatives, not better or worse than other actions performed at different times or by different people. Why does this last result follow? Intuitively, a consequentialist should say that one actual action is better than another just in case the first produces a greater balance of goodness over badness (or smaller balance of badness over goodness) than does the second. I have argued, however,
22 I don’t mean that consequentialism is not concerned with intentions or character. I mean that consequentialism, at least the most familiar versions of it, doesn’t tie the evaluation of acts to the evaluation of character or intentions in the same direct fashion as some other approaches.
Intuitive Judgments 75 that the notion of producing a balance of goodness over badness admits of several different interpretations, none of which provides the consequentialist with a plausible account of the goodness of actions. If Carton’s action is neither good simpliciter nor good to a certain degree, what would be the basis for comparison with any of his past actions or anyone else’s actions? In that case, it might be objected, what is the basis for comparison with any of his possible alternative actions? I do, after all, claim that an action can be assessed as better or worse than possible alternatives. Consider Carton’s action in A Tale of Two Cities compared with the possible alternative of revealing his identity. His actual action is better than the alternative, just in case the world that results from it is better than the world that would have resulted from the alternative. If, as seems plausible, the world in which Carton is guillotined and Evremonde lives is better overall than the world in which Carton goes free and Evremonde is guillotined, then Carton’s action is better than the alternative. How much better it is depends on how much better is the world that results from it. So, why not apply this technique to the comparison of different actual actions? In fact, it might even be easier to perform the comparison in this context. When we compare an actual action with a possible alternative, we are comparing the world that results from the action, the actual world, with the possible world that would have resulted from the alternative. For all the difficulty we have in assessing the actual world, given the difficulty of predicting the future for example,23 we have an even harder time assessing a merely possible world. When it comes to comparing different actual actions, however, we are not called on to assess merely possible worlds. Every actual action leads to an actual world, the actual world in fact. But this is precisely why the technique of comparing worlds will not give an acceptable means for comparing different actual actions. Every action leads to a temporal segment of the same world. Consider two actions, one performed before the other. The only difference between the world that results from the earlier action and the world that results from the later one is that the first world includes a temporal segment not included in the second. If this segment is overall good, the first world is better than 23 For an argument that the difficulty of predicting the future should not unduly worry consequentialists, see Norcross 1990.
the second; if the segment is overall bad, the first world is worse than the second.24 If we apply the technique of comparing worlds to a comparison of different actual actions, we get the following unacceptable result: consider a temporal segment of the actual world, bounded by times t1 and t2, such that the segment is overall good. Any action performed at t1 is better than any action performed at t2. Worse still, any two simultaneous actions are equally good. Clearly, then, we can’t compare different actual actions in the same way that we compare possible alternatives. What Carton should have said was “it is a far far better thing that I do than anything I could have done instead.” I am not claiming that there are no comparisons for a consequentialist to make between different actions. Given the possibility of comparing an actual action with a possible alternative, a consequentialist can construct methods for comparing two different actual actions. She can compare them with respect to their distances from the best alternative in each case, or the worst alternative, or some other alternative. If it seems intuitively obvious that one action is better than another, it will probably also be obvious which comparison grounds that judgment. Different contexts will make different comparisons appropriate. In some contexts perhaps no comparison will be appropriate. Let me illustrate this point. First, consider an assessment of two different actions in which the relevant comparison is with the best alternative in each case. Recall the scenario in Button pusher. Agent can push any one of ten buttons, killing between none and nine people, or push no button at all, with the result that ten people die. Suppose she pushes “5.” Now consider a variation. Agent is faced with only four buttons, labeled “1” to “4.” If she pushes no button, five people die. Suppose she pushes “2.” Intuitively, pushing “5” in the original case is worse than pushing “2” in the variation. Here, the relevant comparison in each case seems to be with the best alternative. The difference between pushing “5” and the best alternative of pushing “0” is greater than the difference between pushing “2” and the best alternative of pushing “1.” Pushing “2” is also closer to the worst alternative than is pushing “5,” but that doesn’t play a part in our judgment.
24 I am assuming here that the morally relevant future is not infinite. For a discussion of possible problems for consequentialism if the morally relevant future is infinite, see Nelson 1991. For a suggested solution to these problems, see Vallentyne 1993.
Intuitive Judgments 77 Next, consider a case in which the relevant comparison is with the worst alternative in each case. Suppose there are ten people trapped in a burning building. Agent can rescue them one at a time. Each trip into the building to rescue one person involves a considerable amount of effort, risk, and unpleasantness. It is possible, albeit difficult and risky, for Agent to rescue all ten. Suppose she rescues a total of five. Now imagine a similar situation, except that there are twenty people trapped in the building. Once again, Agent can rescue them one at a time. Each trip into the building involves the same amount of effort, risk and unpleasantness as each trip in the last example. It is possible for Agent to rescue all twenty, though this would be even harder and more risky than rescuing ten. Suppose she rescues a total of seven. Intuitively, the rescue of seven in the twenty person case is better than the rescue of five in the ten person case. The relevant comparison here seems to be with the worst alternative in each case. The difference between rescuing seven and the worst alternative of rescuing none is greater than the difference between rescuing five and the worst alternative of rescuing none. Rescuing seven is also further from the best alternative of rescuing twenty, than rescuing five is from rescuing ten, but that comparison isn’t relevant to our intuitive judgment. Finally, consider a case in which the relevant comparison is with neither the best nor the worst alternative in each case. Suppose that Ross Perot gives $1,000 to help the homeless in Dallas and I give $100. Intuitively, Perot’s action has better consequences than mine.25 Perot’s action is further both from the best and from the worst alternative than mine. It is plausible to assume that his immense riches give him a much greater range of options than is available to me. If we think that his action has better consequences than mine, we are not swayed by the fact that it falls short of his best alternative by more than does mine. But neither are we influenced by the fact that Perot’s action is better than his worst option by a much greater amount than mine is better than my 25 This is not to say that Perot’s action is more praiseworthy or that it shows him to be a better person. $1,000 is nothing to Perot, whereas $100 is a significant amount to me. This consideration seems to affect the praiseworthiness of the action, not the goodness. Perot’s action might not be more praiseworthy than mine, but isn’t it still better? Rich people are simply better placed to do good than poor people. This is not a reason to praise them or their actions (except inasmuch as such praise will encourage them to do more), or to denigrate poor people or their actions.
78 Good and Bad Actions worst option. After all, if he gave only $50, the gap between that and his worst alternative would still be far greater than between my giving $100 and my worst option,26 and yet we would now judge my action to be better. So, why does Perot’s action seem to have better consequences than mine? The natural comparison in this case seems to be with the alternative in which we do nothing with the money. If we compare the world with Perot’s donation to the world in which the money simply sits in the bank, the increase in goodness is greater than if we compare the world with my donation to the corresponding world. My claim that some comparisons are more “natural” than others might suggest a contextualist approach to the evaluations of actions as good or bad, and better or worse than other actual actions. Indeed, in Chapter 5, I will explore a contextualist approach to “good” and “bad,” as applied to actions, “right” and “wrong” (and the related notions of permissibility, impermissibility, and supererogation), “harm,” and even “possibility.” I will argue that such an approach can explain the appropriateness of using such terms, and perhaps even render statements that use them (non-vacuously) true or false. It cannot, however, accommodate them at the fundamental, action-guiding level of moral theory.
3.9 Conclusion
I have argued in this chapter that a consequentialist cannot give a satisfactory account of the goodness of actions in terms of the goodness of their consequences. My arguments also affect a consequentialist attempt to equate goodness (of actions) with rightness, or to give an account of the goodness of actions in terms of the value of the motives from which they spring. If my claims are correct, and, further, if the reasons I gave in the previous chapter for a consequentialist to reject rightness prove compelling, what can a consequentialist say about the moral status of actions? It appears that she can truly say of an action only how much better or worse it is than other possible alternatives. But common-sense
Conclusion 79 tells us that actions are (at least sometimes) right or wrong, good or bad. We look to moral theories to give us an account of what makes actions right or wrong, and good or bad. Indeed, it might even be thought that “right action” and “good action” are basic and indispensable moral concepts. How, then, can consequentialism claim to be a moral theory, if it can give no clear sense to such notions? To the extent that you are moved by this objection, you may take the argument of this and the previous chapter to be a contribution to the vast body of anti-consequentialist literature. Let me briefly explain why I don’t so take it. It is true that “right action” and “good action” are concepts central to modern moral philosophy.27 But are they indispensable, or do they just appear to be so, because pretty much every competing theory, consequentialist and non-consequentialist alike, gives some account of them? One approach to this question is to ask what is the central function of morality, and how do the concepts of “right action” and “good action” relate to it. I suggest, in common with many other moral theorists, that the central function of morality is to guide action, by supplying reasons that apply equally to all agents.28 The judgments that certain actions are right or good might seem to supply such reasons, but they are by no means the only sources of action-guiding reasons. Consequentialism, on my account, can tell an agent how her various options compare with each other. Concerning a choice between A, B, and C, consequentialism can tell the agent, for example, that A is better than B, and by how much, and that B is better than C, and by how much. She is thus provided with moral reasons for choosing A over B and B over C, the strength of the reasons depending on how much better A is than B and B than C. Doctors seem to provide prudential reasons of just this scalar nature, when they tell us, for example, that the less saturated fat we eat the better.29
27 Ancient moral philosophy, on the other hand, is centered around questions of virtuous character and the good life. 28 Thus, if it is morally right to do x, all agents, for whom x is an option, have a reason to do x. The claim that morality provides reasons for action is different from the claim that particular moral codes, accepted by particular societies, provide reasons for action. Such codes may well provide reasons for action, but these reasons only apply to members of the relevant societies. The reasons provided by morality apply equally to all agents. 29 For a more detailed discussion of the action-guiding nature of scalar morality, see Howard-Snyder and Norcross 1993, 119–23.
80 Good and Bad Actions “Nonetheless,” you might say, “even if consequentialism, on your account, survives as a moral theory, aren’t its chances of being true diminished by your arguments? Moral common-sense, at least the contemporary version appealed to by western moral philosophers, tells us that some actions are good or bad, and better or worse than other actual actions. You say that consequentialism can make no sense of such judgments. To that extent, consequentialism clashes with moral common-sense, and should be rejected.” It is true that my version of consequentialism clashes with moral common-sense in its denial that actions are, strictly speaking, good or bad, or better or worse than other actual actions. To the extent that you are immovably wedded to (contemporary western academic) moral common-sense, my arguments should give you reasons to reject consequentialism. However, those who are immovably wedded to the verdicts of particular versions of common-sense morality do not need my arguments to reject consequentialism. Consequentialists are prepared to accept a considerable amount of disagreement with common-sense morality. I am not claiming, however, that consequentialists should simply ignore the deliverances of common-sense morality. At the very least, it is desirable for consequentialists to explain why it sometimes seems appropriate to make (and even express) the judgments, that I have argued are strictly unavailable to them. That is why I will also argue that a consequentialist can employ a contextualist approach to explain the appropriateness of making judgments, such as that the action was good, or better than a previous action, or worse than a different action performed by somebody else. I don’t wish to make too much of this latter claim, though. Linguistic and moral appropriateness are two entirely different matters. To explain why a particular judgment sounds appropriate is not to justify the practice of making such judgments. One of the great strengths of consequentialist ethical theories is that they explain and justify some of our intuitions while challenging others. I suspect that there is some value in making the sorts of judgments about actions that I have argued are strictly unavailable to a consequentialist, but that overall we would be better off if we concentrated our attention on how our actions compare with other possible alternatives. When we contemplate a course of action, instead of asking whether we are doing better or worse than other people, we should ask whether there are better alternatives that we are willing to
Conclusion 81 undertake. By focusing on each situation of choice, we may be less likely to become disheartened by what appears to be our inability to do as much good as others, or complacent at our ability to do more. At the very least we won’t be tempted to justify our behavior by pointing out that many others are doing no better. In the next chapter I will briefly show how many of the arguments of this chapter apply also to the notion of harmful actions.
4 Harm
4.1 Introduction
I argued in the previous chapter that consequentialism cannot provide a satisfactory account of the goodness of actions, on the most natural approach to the question, and that, strictly speaking, a consequentialist cannot judge one action to be better or worse than another action performed at a different time or by a different person. In this chapter, I will demonstrate that similar arguments apply to the standard consequentialist account of what it is for an action (or an agent) to harm someone. The standard consequentialist approach to harm is illustrated by the following principle, defended by Derek Parfit:
(C6) An act benefits someone if its consequence is that someone is benefited more. An act harms someone if its consequence is that someone is harmed more. (69)
How should we understand what it is for someone to be “harmed more”? Intuitively, it is for someone to be made worse off by the action. But worse off than what? With what do we compare the result of the action? One suggestion that we can quickly dispense with is that we should compare the welfare of the victim before and after the action. Recall the example of Doctor from the previous chapter. Patient is terminally ill. His condition is declining, and his suffering is increasing. Doctor cannot delay Patient’s death. The only thing she can do is to slow the rate of increase of Patient’s suffering by administering various drugs. The best available drugs completely remove the pain that Patient would have suffered as a result of his illness. However, they also produce, as a side-effect, a level of suffering that is dramatically lower than he would have
experienced without them, but significantly higher than he is now experiencing. So the result of administering the drugs is that Patient’s suffering continues to increase, but at a slower rate than he would have experienced without them. The very best thing she can do has the consequence that Patient’s suffering increases. That is, after Doctor’s action Patient is suffering an amount N as a direct result of Doctor’s action, and N is more than Patient was suffering before the action. Has Doctor harmed Patient if she slows the rate of increase of Patient’s suffering as much as she can? This hardly seems plausible. It is a far more plausible description of this case to claim that Doctor has in fact greatly benefited Patient. Clearly, we can’t simply compare Patient’s welfare before and after a particular action. Doctor has made Patient better off: not better off than he was, but better off than he would have been. Just as in the case of good and bad actions, we compare levels of welfare, not across times, but across worlds. Doctor has benefited Patient, because she has made Patient better off than he would have been had she done something else. Even though Patient is now suffering more than he was, he would have been suffering even more, if Doctor had done anything else instead. In thinking about harm, then, what most consequentialists, including Parfit, have in mind is the idea of someone being worse off than they would otherwise have been. Thus, we get the following:
HARM An act A harms a person P iff P is worse off, as a consequence of A, than she would have been if A hadn’t been performed. An act A benefits a person P iff P is better off, as a consequence of A, than she would have been if A hadn’t been performed.
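Put schematically (the notation is mine, not Parfit’s): let w_P(x) be P’s overall welfare in world x, let a be the world that results from the performance of A, and let a* be the world that would have resulted had A not been performed. Then:
\[
A\ \text{harms}\ P \;\iff\; w_P(a) < w_P(a^{*}); \qquad A\ \text{benefits}\ P \;\iff\; w_P(a) > w_P(a^{*})
\]
As with good and bad actions, the work is done by the counterfactual: everything turns on which world counts as the one that would have resulted had A not been performed.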
At first glance, this seems perfectly straightforward. I shoot and kill you. As a consequence of my act, you are worse off than you would otherwise have been, that is, than you would have been if I hadn’t shot you. However, on closer investigation, things turn out not to be so straightforward. I will investigate the standard consequentialist approach to harm, by first focusing on cases in which it appears that a group of people can together harm someone, even though none of the members of the group harms anyone. I will examine Derek Parfit’s approach
to group harms, and argue that it is unsuccessful. I will argue for an alternative account of harm that applies both to individual acts and to group acts.
4.2 Harm in a Respect or All Things Considered
Before getting to the main argument, I will address a possible source of confusion. The notion of harm with which I am concerned, and which is of most interest to consequentialists, is the notion of harm all things considered, as opposed to harm in some respect or other. Let me illustrate this distinction with an example that might appear to challenge the consequentialist account of harm I have suggested.1 Tonya and Nancy are rivals in the cut-throat world of competitive ice-skating. Tonya, seeking to ensure victory in an upcoming contest, attacks one of Nancy’s knees with a baseball bat. Nancy’s injuries keep her out of competition, and require extensive medical treatment. In the course of the treatment, a tumor, unrelated to the injury, is discovered and successfully treated. If Nancy hadn’t been injured and undergone extensive tests, the tumor wouldn’t have been discovered in time, and she would have suffered a painful and career-ending illness (her knee, though, would have been fine). It appears, then, that, as a consequence of Tonya’s savage attack, Nancy is better off than she would otherwise have been, and thus that Tonya’s attack didn’t harm Nancy, but rather benefited her. But surely, we might object, Tonya’s attack harmed Nancy. It severely mangled her knee. Doesn’t this type of case show that HARM cannot be the correct account of harm and benefit? Not at all. Tonya’s attack benefited Nancy in the long-term, while it certainly harmed her in the short-term. It also harmed Nancy’s knee, while benefiting Nancy overall. We may be reluctant to judge that Tonya benefited Nancy, but that is because we believe, erroneously, that such a judgment must somehow redound to Tonya’s credit. The reason why we correctly believe that Tonya’s action doesn’t ground a positive judgment of her character is, of course, that the benefits to Nancy were entirely unforeseen by Tonya. Suppose it were
1 I owe the example to Frances Howard-Snyder (and, indirectly, to the cut-throat, knee-smashing world of competitive ice skating itself).
otherwise. Tonya is deeply concerned for Nancy’s welfare, and knows that Nancy has a nascent tumor, although she is unable to persuade Nancy of this or to persuade her to undergo tests (perhaps Nancy, being somewhat snooty, suspects that Tonya is trying to undermine her confidence in order to gain a competitive advantage). Tonya realizes that only a severe knee injury will result in the discovery and treatment of Nancy’s tumor. She therefore attacks Nancy, knowing that she herself will be caught, banned from competitive ice-skating, and even sent to jail. To this version of the story, we have no difficulty responding that not only did Tonya benefit Nancy, but that her action was an heroic piece of self-sacrifice. While it is clear that HARM may be adapted to render restricted judgments about harm or benefit in certain respects (harm to the knee versus benefit to the person overall, harm in the short-term versus benefit in the long-term), it is not clear why these should be of interest to a consequentialist. Unless we take the implausible (at least from a consequentialist perspective) position that we have stronger reasons not to harm than we do to produce comparable benefits,2 judgments of overall harm or benefit will provide the same reasons for action as the sum of all the relevant restricted judgments. Henceforth, therefore, I should be understood to be talking about harm all things considered, unless I explicitly say otherwise.
4.3 Parfit on Group Harms
Derek Parfit claims, in Reasons and Persons, that an act that makes no difference to anyone’s welfare can nonetheless be wrong “because it is one of a set of acts that together harm other people.”3 This claim
2 An asymmetry between reasons not to harm and reasons to benefit is also implausible from a non-consequentialist perspective that treats harms in certain respects as more basic than overall harms. Consider an approach to harm that classifies causing pain (among other things) as harming, and preventing pain as benefiting, and that claims that our reasons not to harm are stronger than our reasons to benefit. Such an approach may well, depending on the size of the supposed asymmetry between reasons, judge that Doctor has stronger reason not to administer the pain-relieving drugs than to administer them. This should clearly be unacceptable to non-consequentialists as well as consequentialists.
3 Parfit 1984, 70.
86 Harm (part of what Parfit calls “(C7)”) is illustrated by the following case of overdetermination: Case One. X and Y shoot and kill me. Either shot, by itself, would have killed me. (70) Even though neither X nor Y harms me, since I would have been killed by the other, even if one had not shot me, Parfit claims that it is absurd to conclude “that X and Y do not act wrongly” (70). He continues: X and Y act wrongly because they together harm me. They together kill me . . . On any plausible moral theory, it is a mistake in this kind of case to consider only the effects of single acts. On any plausible theory, even if each of us harms no one, we can be acting wrongly if we together harm other people. (70)
This is also supposed to apply in cases of preemption, where the acts are not simultaneous.4 Parfit presents the following illustration: Case Two. X tricks me into drinking poison, of a kind that causes a painful death within a few minutes. Before this poison has any effect, Y kills me painlessly. (70) Here, as with Case One, Parfit claims that neither X nor Y harms me, but also that they both act wrongly “because they together harm me. They together harm me because, if both had acted differently, I would not have died” (71). Against the objection that Y does harm me, since he kills me, Parfit presents: Case Three. As before, X tricks me into drinking poison of a kind that causes a painful death within a few minutes. Y knows that he can save your life if he acts in a way whose inevitable side-effect is my immediate and painless death. Because Y also knows that I am about to die painfully, Y acts in this way. (71)
4 Parfit doesn’t use the term “preemption.”
Parfit on Group Harms 87 In this case, Y not only doesn’t harm me, but he acts as he ought to. Y doesn’t harm me because, “if Y had acted differently, this would have made no difference to whether I died.” (71) X, on the other hand, both harms me and acts wrongly, “because it is true that, if X had not poisoned me, Y would not have killed me” (71). Since Y affects me in the same way in Case Two as in Case Three, Y doesn’t harm me in Case Two either. What makes Y’s act wrong in Case Two is that “he is a member of a group who together harm me” (71). What prevents this being true of Y in Case Three? After all, it is true in Case Three that, if both X and Y had acted differently, I would not have been harmed. In response to this, Parfit points out that it is also true that, if X, Y, and Fred Astaire had all acted differently, I would not have been harmed. It doesn’t, of course, follow that Fred Astaire is a member of a group who together harm me. We need a clearer account of what it is to be a member of a group who together harm, or benefit, others. Parfit attempts to provide such an account with: (C8) When some group together harm or benefit other people, this group is the smallest group of whom it is true that, if they had all acted differently, the other people would not have been harmed, or benefited. (71–2)
This group consists of X and Y in Case Two, but only of X in Case Three. Parfit’s (C7) and his treatment of Cases One, Two, and Three might appear to be in tension with his rejection, in the preceding section of Reasons and Persons, of what he calls the “Share-of-the-Total view,” as it applies to the following case:5 The First Rescue Mission: I know all of the following. A hundred miners are trapped. . . . If I and three other people go to stand on some platform, this will . . . save the lives of these hundred men. If I do not join this rescue mission, I can go elsewhere and save, single-handedly, the lives of ten other people. There is a fifth potential rescuer. If I go elsewhere, this person will join the other three, and these four will save the hundred miners. (67–8) 5 This was the thesis of Ben Eggleston’s paper (Eggleston 1999), to which I delivered the response. The current chapter has grown out of my reply to Eggleston’s paper.
The Share-of-the-Total view would have me help save the hundred miners, since my share of the credit would thus be twenty-five lives, as opposed to a mere ten, if I save the other ten single-handedly. This, as Parfit points out, is clearly mistaken. What I should do is save the ten, since this will make a difference of ten lives saved over helping to save the hundred. If an act can be wrong because, although it doesn’t harm, it is a member of a group that together harms, can’t an act be right because, although it doesn’t benefit, it is a member of a group that together benefits? Parfit has something to say about this. He presents: The Third Rescue Mission. As before, if four people stand on a platform, this will save the lives of a hundred miners. Five people stand on this platform. (72) This case, Parfit says, demonstrates the need to add a further claim to (C8), because “there is not one smallest group who together save the hundred lives” (72). Parfit returns to this kind of case in a later section (section 30), where he claims that the crucial question in such cases concerns what an agent knows or has reason to believe. He offers the following additional principle: (C13) Suppose that there is some group who, by acting in a certain way, will together benefit other people. If someone believes that this group either is, or would be if he joined, too large, he has no moral reason to join this group. A group is too large if it is true that, if one or more of its members had not acted, this would not have reduced the benefit that this group gives to other people. (83)
Parfit is here talking of moral reasons in the subjective sense. To have a moral reason in this sense is to have a reason that is epistemically available in a robust sense. Such a reason may also be an objective reason. If, in (C13), the agent’s belief about the size of the group is true, then he also has no objective moral reason to join the group. One can have objective moral reasons that are not epistemically available, in which case one doesn’t have the corresponding subjective reasons. One can also have subjective moral reasons that don’t correspond to objective reasons. If the agent’s belief about the size of the group is false, she may
Parfit on Group Harms 89 have an objective reason to join the group, but no subjective reason. It is easy to see now that Parfit’s notion of group harms and benefits, as articulated in (C7), (C8), and (C13), does not imply that I have a moral reason (either objective or subjective) to join the others in The First Rescue Mission, since it is part of the description of the case that I know all the relevant facts. In both Case One and Case Two, however, nothing is said about what X and Y know or believe (in contrast to Case Three). On the most intuitive readings of the cases, neither knows about the other’s actions. In which case, both have subjective moral reason not to kill me. Even if they did know of the other’s actions, their actions wouldn’t be rescued from wrongness by (C13), which only concerns benefits and not harms. This asymmetry, though, appears ad hoc, so a charitable reading would have (C13) apply to harms and benefits. Parfit’s rejection of the Share-of-the-Total view is not inconsistent with (C7). A worry remains, however, about the motivation for his treatment of group harms and benefits. If the agents’ belief states are as important in the ascription of moral reasons as (C13) would have them be, why not appeal directly to the fact that both X and Y believe themselves to be harming me to justify the judgment that what they both do is wrong? In fact, in section 10 of Reasons and Persons, Parfit, speaking of how he will use moral terms, declares that “wrong will usually mean subjectively wrong, or blameworthy” (25). Why does Parfit bother with (C7) and (C8), if he can do the job by a simple appeal to the distinction between subjective and objective wrongness? I suspect that the actions of both X and Y in Case One and, perhaps, in Case Two, would be intuitively judged wrong, even if it were specified that each knew about the other’s action. It is probably this intuitive judgment that Parfit is trying to capture in his notion of group harms. Furthermore, it seems clear that I am harmed in Case One. After all, I am killed. What could be a clearer case of harm than that? Since neither X nor Y individually harm me, according to C6, we need an account of who does the harming. Recall the difference between Y’s action in Case Two and in Case Three. (C8) tells us that Y is a member of a group that harms me in Case Two, but not in Case Three, because, if X hadn’t poisoned me, Y would still have shot me in Case Two, but not in Case Three. So it turns out that whether Y acts wrongly in the actual world depends on Y’s behavior in
90 Harm certain possible worlds—most likely the closest worlds in which X doesn’t poison me. This is a somewhat strange result. To see just how strange it is, let’s consider some variations on Case Two. Case Two-and-a-Quarter. As before, X tricks me into drinking the poison that will shortly result in my painful death. Before any of the real unpleasantness can set in, however, the slightly bitter taste of the poison prompts me to visit the nearest soda machine to purchase a Coke to take the taste away. Y is lurking by the soda machine waiting for a victim on whom to practice his assassination skills. Y shoots and kills me instantly and painlessly. If I hadn’t drunk the poison, I wouldn’t have visited the soda machine, and wouldn’t have been shot. In fact, if I hadn’t visited the soda machine, Y would have grown tired of waiting for a victim, and would have decided to become an accountant instead of an assassin.
If we apply (C8) to Case Two-and-a-Quarter, we get the result that the group who harms me consists just of X. If X had not poisoned me, I wouldn’t have been harmed. And yet, the intuition that Y acts wrongly in Case Two applies equally to Case Two-and-a-Quarter, which is quite consistent with Case Two. We don’t, in general, excuse behavior that appears to be wrong, if we discover that the agent wouldn’t have had the opportunity to perform the wrong act, were it not for the seemingly unrelated behavior of someone else. Consider an even more challenging case. Case Two-and-a-Half. X tells me a particularly funny joke. I laugh so much that I become hoarse, and visit the soda machine to purchase a Coke. Y, once again lurking to practice his assassination skills, shoots and kills me painlessly. The Coke that I had purchased, and was about to drink, had been infected with a deadly poison in a freak undetected soda canning accident.
Y doesn’t harm me in this case, since I would have died of Coke poisoning, if he hadn’t shot me. The smallest group who harms me consists of just X, since, if X hadn’t told me the joke, I wouldn’t have needed to visit the soda machine. In this case, X harms me and Y doesn’t. What is more,
Parfit on Group Harms 91 Y isn’t even a member of a group who harm me (at least according to (C8)). It seems to follow that X acts wrongly and Y doesn’t. Do we have to accept these rather counterintuitive results, if we follow Parfit’s approach? My cases Two-and-a-Quarter and Two-and-a-Half, and Parfit’s Cases One and Two are underdescribed with respect to a crucial question. Does Y (and X in Case One) believe that he is harming me? That is, does Y know that I am about to die anyway? The most natural reading of the cases, as I said earlier, assumes that Y believes he is harming me. On this reading Parfit can say that Y acts wrongly in the subjective sense. However, as we have seen, Parfit’s appeal to group harms can be at least partially motivated by the desire to judge both X’s and Y’s actions to be wrong in Case One and Case Two, on the assumption that they both know about the other’s action. Let’s consider, then, what to say about these cases, on the assumption that Y knows that I will die anyway. Parfit’s appeal to (C7) and (C8) implies that the crucial difference between Y in Case Two and in Case Three is that his killing me in Case Three depends on X having already made my death inevitable, but in Case Two it doesn’t. In Case Three, if X hadn’t already poisoned me, Y wouldn’t have been willing to bring about my death in the course of saving you. My examples, though, should make us suspicious of this claim. Y’s killing me in both my examples depends on X having already made my death inevitable, and yet Y’s behavior in my cases is intuitively judged on a par with case Two, not Case Three. Isn’t the intuitively crucial difference between Case Two and Case Three that Y’s killing me in Case Three depends on his belief that my death is already inevitable, whereas it doesn’t in Case Two? This raises an interesting question concerning Y’s status in Case Three. According to Parfit, Y isn’t a member of the group that harms me, because, according to (C8), that group consists of only X. If X hadn’t poisoned me, Y wouldn’t have killed me. However, suppose that Y’s knowledge of X’s poisoning me comes from overhearing X plotting to poison me. Suppose further that the closest possible world in which X doesn’t poison me is one in which he changes his mind at the last minute, after Y has overheard him plotting. In that world, Y kills me anyway, because he still believes that I am going to die from poisoning. These details are quite consistent with Parfit’s description of
the case.6 Is Y, in the actual world, a member of the group that harms me? It seems that he is, since it is not true of X that if he had acted differently, I would not have been harmed. What is worse, one plausible reading of the relevant counterfactual yields the result that Y alone is a member of the group that harms me. Suppose that the closest possible world in which Y doesn't kill me is one in which he doesn't overhear X plotting to poison me. This is because X doesn't plot to poison me (and doesn't poison me) in this world. So it is true of Y that, if he had acted differently, I would not have been harmed. We could attempt to block this move by banning the use of backtracking counterfactuals in applying (C8). When we consider what would have happened if someone had acted differently, we must suppose the world changed only at and after the time of the action. While this move may prevent Y from being the only member of the group that harms me, it doesn't address the previous point, that Y may be a member of that group, because his killing me depends not on X's already having made my death certain, but on Y's belief that this is so. In fact, banning backtracking counterfactuals blocks one line of defense against this result. When we consider the closest world in which X doesn't poison me, we can't consider a world in which he doesn't plot to poison me, since the plotting precedes the actual poisoning. If, as I say, the intuitively crucial difference between Case Two and Case Three is that Y's killing me in Case Three depends on his belief that my death is already inevitable, whereas it doesn't in Case Two, we can see why Y's behavior in my two cases, Two-and-a-Quarter and Two-and-a-Half, is intuitively on a par with his behavior in Case Two. In my cases, even though Y would not have killed me if X hadn't made my death inevitable, he would still have killed me, if he hadn't believed that my death was already inevitable. That is, given that I was at the soda

6 It might be objected that my elaboration of the example prevents Y's true belief in the actual world that X has poisoned me from being knowledge. If the closest world in which X doesn't poison me is one in which Y believes that X has poisoned me, it seems that Y's actual belief doesn't track the truth in the right way to be knowledge. This objection relies on a controversial theory of knowledge. It's not even clear that it succeeds in the context of that theory. Given that the contexts in which we consider whether Y has knowledge and in which we consider whether Y is a member of the group that harms me are different, different possible worlds may be relevant to each. Furthermore, if we simply changed Case Three to specify either that Y simply believes that I am about to die painfully, or that Y knows that it is almost certain that I am about to die painfully, our intuitive judgements of Y's behavior would remain unchanged.
machine, Y's belief that I was about to die anyway (in Case Two-and-a-Half suppose that I had already drunk the tainted Coke by the time Y spotted me) played no part in his decision to shoot me. Suppose that it did. Suppose that, in Case Two-and-a-Quarter, Y is a sniper in a crack troop of marines who are only used against vicious terrorists who are holding innocent hostages. About to be dispatched on a vital mission, Y discovers that his aim is off. He needs practice on human targets, but he would never willingly harm an innocent person. Along I come, about to die painfully anyway. Y would never have considered shooting me, if he hadn't known this. How are we now to judge Y's behavior in shooting me? Far from being wrong, it appears now to be quite admirable. Not only doesn't it involve harm (it actually saves me from a painful death), but it is motivated by a desire to save the lives of innocent people. Furthermore, it considerably improves the chances of saving such lives. The appeal to group harms seems to be motivated by a desire to accommodate the intuition that both X and Y act wrongly in Case One, even if each knows that the other will shoot. It is not clear, though, that utilitarianism can't accommodate such intuitions as are worth accommodating without an appeal to group harms.7 So, how should a utilitarian judge Case One, on the assumption that both X and Y believe that the other will shoot? At first glance, it would appear that their actions are both objectively and subjectively right (assuming that they couldn't have been doing something better, if they hadn't been shooting me). Both these judgments can be challenged, though. X may believe that Y is going to shoot me anyway, but it is unlikely that he believes that there is no chance that Y will fail to kill me. Perhaps Y's gun will jam, or I will make an unexpected movement at the moment of shooting, or Y will simply change his mind. No matter how small these chances may be, X is not entitled to ignore them completely. X is doing what he expects will make my death even more likely than it already is. This is enough to make his action subjectively wrong. (The same applies to Y.) Similar things can be said about one type of objective wrongness. If the world is not completely deterministic, there are objective probabilities other than 1 and 0. If the right act is the act with the highest objective expected utility, it is unlikely that X's and Y's actions will be right. It seems that

7 Frank Jackson makes a similar point in Jackson 1997.
the utilitarian can judge both X and Y to have acted wrongly, without appealing to the problematic notion of group harms. However, the original spirit of the problem can be resurrected, if we modify Case One. Consider Case One modified to Case One-and-a-Half as follows. Suppose that X's shooting me has the side-effect of curing the paralysis in the left leg of an innocent child, Suzie. Suppose further that Y's shooting me has the side-effect of curing the paralysis in Suzie's right leg. If neither shoots me, Suzie will be permanently unable to walk. If both shoot me, Suzie will live a physically normal life. If only one shoots me, Suzie will walk with the aid of a crutch. Now, suppose that both X and Y know of the side-effect of their own act of shooting, but neither knows that the other even exists. If they both shoot me, we can say that they both act wrongly in the subjective sense. They do what they believe will make things worse, assuming that curing the paralysis in one leg does not justify killing an innocent person. However, each act is right in the objective sense, since it didn't harm me, given that the other also shot me, and it made things better for Suzie. One important difference between Case One and Case One-and-a-Half is that X's and Y's actions in the latter are right in the objective expected utility sense, given certain reasonable probability estimates. Suppose that there is only a one percent chance that Y will not shoot. Suppose further that the chance of X's shot curing Suzie's paralysis in her left leg is at least 99.99 percent. It is plausible now that the objective expected utility of X's shooting me is higher than that of any alternative not involving shooting me. The tiny chance that I would not have died if he hadn't shot me is outweighed by the near certainty that Suzie's leg will be cured if he does shoot me. Perhaps someone will object that a one in a hundred chance of death is not outweighed by the near certainty of a cure for a paralyzed leg. If so, we can adjust the probabilities accordingly. Someone may object that any chance of death outweighs a cure for a paralyzed leg. Such a view is not only highly implausible,8 it is also not likely to be held by a utilitarian. Now, suppose that X and Y both know that the other will shoot me, and what the effects of both actions will be. How do we intuitively judge

8 See Norcross 1997a for arguments against this view.
them? That depends on what role their knowledge of the other's action played in their decision. Suppose first that neither is influenced in his decision by the knowledge of what the other will do. Even though he knows that he is making things better than they would have been, that is not why he acts as he does. He would have shot me anyway, even if he had not known of the other's existence. The normal intuitive judgment, in this case, may well be that they both act wrongly. Even though a utilitarian would have to say that their actions are right both objectively and subjectively, she can still render a negative judgment on their characters. Since they are both quite willing to do what they believe will make things much worse, the fact that they don't believe their current actions to make things worse doesn't excuse them. The utilitarian thus explains the common intuitive judgment (if there is one) that X and Y both act wrongly as the result of the all too common confusion of judgments of actions with judgments of character. If we know all the facts about the agents' motivations, we can see that their actions, in some sense, spring from bad characters, even though the very same actions could have come from good characters. A consequentialist account of character will ultimately connect evaluations of character with evaluations of actions. For example, we might say that a character trait, C1, is better than another, C2, just in case the possession of C1 makes one likely, ceteris paribus, to perform better actions than does the possession of C2.9 Given the limited plasticity of human nature, it is clear that a good character trait may sometimes lead to an action that is significantly suboptimal, and that a bad character trait may sometimes lead to performing the best, or close to the best, action available. Suppose now that both X and Y know that the other will shoot me, and that is why they are also prepared to shoot me. If X believed there was much of a chance of Y not shooting me, he wouldn't shoot me. So X's shooting me depends on his belief that Y will also shoot me, and the same goes for Y. Each shoots me because he believes (correctly) that he

9 This approach can be subject to many variations. For example, do we compare C1 and C2 with respect to a particular person, a particular type of person, the "average" person, etc.? Do we compare propensities with respect to the circumstances a particular individual is likely to encounter, given what we know about her, given her social position, given "normal" circumstances, etc.?
is not thereby harming me (or that the risk of harming me is very small), and that he is greatly benefiting Suzie. Each does what is subjectively right and objectively right, both as regards actual results and objective expected utility. Furthermore, neither appears to display a bad character, at least not what a utilitarian should call a bad character. The willingness to shoot someone who is going to be shot at the same time anyway, in order to achieve a great benefit—curing a paralyzed leg—seems to be an admirable character trait from a utilitarian perspective. Perhaps this last judgment could be challenged, on the grounds that the possession by both X and Y of this character trait on this occasion makes the world worse. If neither had this character trait, I wouldn't have been shot. However, this is a rather peculiar perspective from which to assess character traits. It is tied both to this occasion, and, even more peculiarly, to regarding X and Y as a group. If only X hadn't had this character trait, the world would have been worse. I would still be dead, and Suzie would have been paralyzed in one leg. The same goes for Y. Given the limited plasticity of human nature, the appropriate perspective for assessing character traits will not be one that is tied to highly specific situations.10 It appears, then, that a utilitarian should judge both X's and Y's behavior positively, unless she appeals to (C7) with its problematic notion of group harms. Why not judge X's and Y's behavior positively? Given all the assumptions of the preceding paragraph about their beliefs and motivations, it's not clear that there's even a common-sense moral intuition that they act wrongly, or that they display bad character. The problem, of course, is that X's and Y's supposedly admirable behavior leads to a worse state of affairs than would have resulted if they had both acted badly, according to utilitarianism. Recall that Parfit's explanation of Case One, which would also apply to Case One-and-a-Half, is that, although neither X nor Y individually harms me, they harm me as a group. Although we have seen that Parfit's account of group harms is fraught with problems, the intuition remains that the group consisting of X and Y does harm me. How else do I end up being harmed?
10 For discussion of this point see Norcross 1997b.
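Before turning to individual harm, the expected-utility comparison appealed to above in Case One-and-a-Half can be made explicit. The rendering below is my own, not the text's: the symbols \(v_{\mathrm{leg}}\) (the value of curing one of Suzie's legs) and \(v_{\mathrm{life}}\) (the disvalue of my death in the one contingency where it genuinely depends on X's shot) are illustrative placeholders, while the probabilities are the ones stipulated in the example.

\[
EU(\text{X shoots}) - EU(\text{X refrains}) \;\approx\; 0.9999\,v_{\mathrm{leg}} \;-\; 0.01\,v_{\mathrm{life}},
\]

which is positive just in case \(v_{\mathrm{life}} < 99.99\,v_{\mathrm{leg}}\). On these figures X's shooting has the higher objective expected utility whenever my death, in the unlikely event that it depends on his shot, is less than roughly a hundred times as bad as a cured leg is good; and, as noted above, if that threshold seems too low, the probabilities can simply be adjusted until the inequality holds.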
4.4 Individual Harm

I have argued that Parfit's account of group harms runs into some serious problems. Part of the reason for this is that he builds his account from the standard consequentialist approach, as given by HARM:

HARM An act A harms a person P iff P is worse off, as a consequence of A, than she would have been if A hadn't been performed. An act A benefits a person P iff P is better off, as a consequence of A, than she would have been if A hadn't been performed.
There are, however, some obvious problems with HARM, considered just as an account of individual harms. HARM approaches the question of whether an act harms by comparing the world in which it occurs with a world in which it doesn't occur. But which world in which it doesn't occur is the relevant one? The options here seem to be similar to those I considered in the previous chapter. Perhaps the intuitive reading of HARM involves a comparison with the world in which the agent is inactive. When we ask whether P would have been worse off if the act hadn't been performed, we are considering a world in which the agent simply doesn't exercise her agency. So, what is it not to exercise one's agency? One obvious possibility is to remain completely immobile. But, as in the case of good and bad actions, this clearly won't do. Consider a variation on the case of Button pusher, from the previous chapter. Call this one Button pusher 2:

An agent, named Agent, stumbles onto an experiment conducted by a twisted scientist, named Scientist. Scientist is seated at a desk with one hundred buttons, numbered “0” through “99,” in front of him. He tells her that the buttons control the amount of pain to be inflicted on a victim, named Victim. If no button is pressed within the next thirty seconds, Victim will suffer excruciating agony. If the button marked “99” is pressed, Victim will suffer slightly less; if “98” is pressed, Victim will suffer slightly less, and so on down to “0,” which will inflict no suffering on Victim. He was, he explains, about to sit and watch as Victim suffered the maximum amount. However, to honor her arrival, he turns control of the buttons over to Agent. She is free to press any button she wishes, or to press none at all. Agent pushes “99”, inflicting almost maximal suffering on Victim.
If she had remained immobile, Victim would have suffered even more. According to HARM, on the inactivity reading of what it is for an agent not to perform an action, Agent's act doesn't harm Victim, and even benefits him, since he would have suffered even more, if she hadn't pushed any button. But surely her act doesn't benefit Victim. It led to excruciating agony for him, when he needn't have suffered at all. She could have pressed “0” instead. If any act harms, it seems clear that this one does. Again, as with the case of good and bad actions, consider the suggestion that we should compare the results of Agent's act with what would have happened if she hadn't even been on the scene. There seem to be two ways to interpret this suggestion: (i) We imagine a world identical to the actual world before t, in which the agent miraculously vanishes from the scene at t; (ii) We imagine a world as similar as possible to the actual world before t, in which the agent is non-miraculously absent from the scene at t. That is, we imagine what would have had to have been different before t in order for the agent to have been absent at t. (i) runs foul of Button pusher 2. If Agent had miraculously vanished, instead of pushing button “99,” Victim would have suffered even more. But this consideration clearly doesn't incline us to judge that Agent's act benefited Victim. (ii) seems more promising. How do I know whether I have harmed or benefited someone? I ask myself whether they are better or worse off than they would have been if I hadn't even been here in the first place. But this won't do, either. Once again, it gives the wrong results in Button pusher 2. If Agent hadn't even shown up in the first place, Scientist would have let Victim suffer the maximum amount, but we don't on that count judge Agent's act to benefit Victim. In Button pusher 2 Agent inflicts excruciating agony on Victim, but he would have suffered even more had Agent been inactive, either through immobility or absence from the scene. The problem is not just that inactivity gives unacceptable results in particular cases, but rather that the comparisons it invites do not seem relevant to whether an act harms or benefits. If I do something that seems to be very harmful, such as inflicting excruciating agony on someone with the press of a button, why should it matter that they would have suffered even more if I had been immobile or absent from the scene? Whether it is harmful to inflict
pain on someone doesn't seem to depend on whether they would have suffered even more if I had been inactive, unless, perhaps, my inflicting such pain on them is the only alternative to more suffering. In Button pusher 2, however, Agent could easily have prevented Victim from suffering altogether. These counterfactuals, then, don't seem relevant to the question of whether an act harms or benefits. There are other ways to read the counterfactuals in HARM that will give different accounts of harm and benefit. The most obvious alternative reading involves a judgment about which other possible world is closest to the world in which the action occurs.11 Instead of comparing the world in which the act occurs with a world in which the agent is either immobile or absent from the scene, we compare it with a world that is as much like it as possible, consistent with the act not occurring. Sometimes that will be the world in which the agent is immobile, but often it will be a world in which the agent does something else instead. So, how does a standard possible worlds analysis of the counterfactuals in HARM hold up? Consider another example: suppose you witness the following scene at Texas Tech University:

A member of the Philosophy department, passing Bobby Knight on campus, waves cheerily and says “Hey, Knight.” Bobby Knight, turning as red as his sweater, seizes the hapless philosopher around the neck and chokes her violently, while screaming obscenities. By the time Bobby Knight has been dragged away, the philosopher has suffered a partially crushed windpipe and sustained permanent damage to her voicebox, as a result of which she will forever sound like Harvey Fierstein.

Has Bobby Knight's act harmed the philosopher? The intuitive answer is obvious, and HARM seems to agree. The philosopher is much worse off than she would have been had Bobby Knight not choked her (unless, perhaps, she has always wanted to sound like Harvey Fierstein). But suppose we discover that Bobby Knight has recently been attending anger management classes. Furthermore, they have been highly successful in getting him to control his behavior. When he becomes enraged, he holds himself relatively in check. On this particular occasion (only the third violent outburst of

11 See, for example, the accounts of counterfactuals developed by David Lewis and Robert Stalnaker.
the day), he tried, successfully, to tone down his behavior. In fact, if he hadn't been applying his anger management techniques, he wouldn't have choked the philosopher, but would rather have torn both her arms from her body and beaten her over the head with them. Since it took great effort on Bobby Knight's part to restrain himself as much as he did, it seems that the closest possible world in which he doesn't choke the philosopher is one in which she is even worse off. HARM, in this case, seems to give us the highly counterintuitive result that, not only does Bobby Knight's act of choking not harm the philosopher, it actually benefits her. Now apply this reading of HARM to Button pusher 2. Let me add a couple of details to my previous description of the case. Agent delights in the suffering of others. She is initially inclined to press no button, so that Victim will suffer maximally, but she's dissatisfied that this will involve, as she sees it, merely letting Victim suffer, rather than actually making him suffer. She wants Victim to suffer as much as possible, but she also wants to make him suffer. At the last second she changes her mind, and pushes “99.” If she hadn't pushed “99,” she wouldn't have pushed any button. She didn't even consider the possibility of pushing a different button. The only question she considered was whether she should make Victim suffer excruciating agony or let him suffer even more. Clearly, the closest world in which Agent doesn't push “99” is one in which she doesn't push any button, and Victim suffers even more. Once again, HARM judges Agent's act to benefit Victim. But we are no more inclined to believe that her act is beneficial than we were before we knew about her character defects. The fact that Agent's character made it highly probable that she would have done even worse than she did doesn't alter our intuitive judgment that her act of inflicting excruciating agony harms Victim. What is particularly disturbing for a consequentialist about this latest reading of HARM is that it makes the character of the agent relevant to whether the act harms or benefits.12 The better the agent, the harder it is for her to benefit someone, and the worse she is, the easier it is.
12 The problem here is both that the proposal makes character relevant to whether actions harm or benefit, and that it does so in a particularly counterintuitive way. For a consequentialist, the first problem is more significant.
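The pattern of failure in these readings of HARM can be made concrete with a small sketch. The sketch below is mine, not the book's: the numerical welfare levels are invented stand-ins for Victim's fate in Button pusher 2, and the baselines correspond to the readings just canvassed (the agent's immobility or absence, and the closest world given her character, all of which coincide here on the no-button outcome).

```python
# Illustrative sketch (not from the text): HARM's verdict depends entirely on
# which "world in which the act isn't performed" is used as the comparison.
# Welfare numbers are invented placeholders; lower = more suffering for Victim.

WELFARE = {
    "push_0": 0,        # no suffering inflicted
    "push_99": -99,     # almost maximal suffering (Agent's actual act)
    "no_button": -100,  # maximal suffering (what happens if no button is pressed)
}

def harms(act, baseline):
    """HARM: act A harms P iff P is worse off as a consequence of A
    than P would have been in the chosen baseline world where A isn't performed."""
    return WELFARE[act] < WELFARE[baseline]

actual_act = "push_99"

# Immobility/absence readings: if Agent does nothing (or isn't there),
# no button is pressed and Victim suffers even more.
print(harms(actual_act, "no_button"))  # False: the act comes out as a benefit

# Closest-world reading: given Agent's character, had she not pushed "99"
# she would have pushed no button at all, so the baseline is the same.
print(harms(actual_act, "no_button"))  # False again

# The comparison that drives the intuitive verdict is with an easily
# available alternative, such as pressing "0".
print(harms(actual_act, "push_0"))     # True: relative to "0", the act harms Victim
```

Nothing in this sketch goes beyond the cases as described; the point is just that, so long as the baseline is fixed by a general formula rather than by the salient alternative, HARM classifies the act as a benefit whenever inactivity or the agent's character would have produced something even worse.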
4.5 The Influence of Context on the Appropriateness of Harm Ascriptions

It seems that none of the interpretations of HARM can provide the consequentialist (or anyone else) with a satisfactory account of what it is for an act to harm or benefit. The intuition on which they were based is that a harmful act makes someone worse off than they would otherwise have been. The difficulty lies in producing a general formula to identify the particular possible world (or worlds), with which to compare the world that results from a harmful act. Any unified theory requires a way of fixing the contrast point, but the contrast point varies from situation to situation. Part of the problem is that our intuitions about whether particular acts harm or benefit are often influenced by features of the context that it would be difficult to incorporate into a general account. The key to solving this puzzle is the realization that the fundamental consequentialist account of harm is an essentially comparative one. Harm is always relative to some alternative. An act, A, harms someone, relative to an alternative act, B, if it results in their being worse off than they would have been if B had been performed instead. Act A may harm someone relative to alternative act B, and benefit that same individual relative to a different alternative act C. There is no fundamental noncomparative moral fact of the form “act A harms person X.” The fundamental moral facts, as regards harm, are of the form “act A results in a worse world for X than alternative act B, and a better world than alternative act C. The A-world is worse for X than the B-world by a certain amount n, because X is worse off in A than in B by amount n. The A-world is better for X than the C-world by a certain amount n2, because X is better off in A than in C by amount n2.” Our intuitive judgments about what acts really harm what people are explained by appeal to conversational context. If I say that Booth's shot harmed Lincoln, the context selects, as an appropriate alternative act of Booth, pretty much anything else except shooting Lincoln. It may be true that Booth could have shot Lincoln in such a way as to lead to a much more agonizing death than the one he in fact suffered. This alternative, however, is normally not salient (and may never be). Likewise, in discussing Bobby Knight's behavior, very few (if any) contexts make the arm-tearing alternative the appropriate comparison. Even though it is, in some obvious
psychological sense, more likely that Bobby Knight will tear the arms off the source of a perceived slight than that he will not assault her at all, when we discuss his behavior, we probably have in mind an alternative that would have been more likely for most other people. Sometimes different, equally normal, contexts can give the result that one act is appropriately judged, in one, to be a harming, and, in another, to be a benefiting. For example, my father writes a will, in which I receive half his estate. This is the first will he has written. Had he died intestate, I would have received all of his estate. Two among his many other options were to leave me none of his estate or all of it. Does my father's act of will-writing harm me or benefit me? Imagine a conversation focused on my previous plans to invest the whole estate, based on my expectation that I would receive the whole estate. It might be natural in such a context to describe my father's act as harming me. I end up worse off than if he had left me all his estate, which I had expected him to do, either by not making a will at all, or by making one in which he left me the whole shebang. It might be natural in such a context to say that my father “deprived me of half of his estate” by writing the will. Imagine, though, a different, but equally natural, conversation focusing on my lack of filial piety and the fact that I clearly deserve none of the estate. In this context it may be natural to describe my father's act as benefiting me. In writing the will, he gave me half his estate. After all, he should have left me nothing, such a sorry excuse for a human being I was. Return now to the case of group harms. In Case One, the supposed problem is that neither X nor Y seems to make me worse off than I would otherwise have been, and thus neither X nor Y harms me (on the most natural counterfactual comparative account of harm), and yet I clearly end up harmed. The appeal to group harms is supposed to solve this problem, by allowing us to say that, even though neither X nor Y individually harms me, the group consisting of X and Y harms me. This is grounded in the fact that, if neither X nor Y had shot me, I wouldn't have died. But, as we saw, this is too quick. If neither X nor Y nor Fred Astaire had shot me, I also wouldn't have died. But we don't, on that account, say that the group consisting of X, Y, and Fred Astaire harmed me. Parfit's attempt to specify which group is the relevant one in cases of group harms is, as we saw, unsuccessful. But if we apply what we have learned about individual harm to cases of group harm, we see that there
is simply no need to specify the conditions under which a group counts as the group responsible for a harm. It is true that, if X and Y hadn't shot me, I wouldn't have died. It is also true that, if X hadn't shot me, but Y had, I would have died. It is also true that, if X, Y, and Fred Astaire hadn't shot me, I wouldn't have died. There are countless other true counterfactuals concerning the behavior of X and Y (and sometimes others), and my death or survival. In different contexts, it may be appropriate to make claims about group or individual harms based on different counterfactuals. For example, suppose, first, that Y wouldn't have considered shooting me, if he hadn't believed that X was also shooting me, but that X was determined to shoot me come what may. Suppose, second, that Y wanted to be part of a group that caused death, but only if his participation made no negative difference to the victim. It is fairly easy to imagine a conversation, focused on the second assumption, in which it is claimed that the group consisting of X and Y harmed me. It is also easy to imagine a different conversation, focused on the first assumption, in which it is claimed that X harmed me, but Y didn't. The appropriateness of asserting various claims about harm may well depend, at least in part, on which counterfactuals are salient in a particular conversational context. At this point an objection may arise. Introducing the example of my father writing a will, I said that the same act can be appropriately judged, in one context, to be a harming, and, in a different context, to be a benefiting. Given that I am talking about harm all things considered, am I claiming that one act can be both a harming and a benefiting? No. First, there is an important difference between saying that it is appropriate to judge that an act is P, and that it is true to judge that an act is P. Second, on the contextualist approach that I will suggest in the next chapter, while it is possible that one act may be truly described, in one context, as a harming, and truly described, in a different context, as a benefiting, it doesn't follow that that act is both a harming and a benefiting. I will postpone explanation of this point until the next chapter.
4.6 Appropriate, or True?

In this, and the previous two, chapters, I have argued that, strictly speaking, a consequentialist should not judge acts to be right or wrong,
permissible, obligatory, supererogatory, good or bad, harmful or beneficial. I have also claimed, however, that a consequentialist may be able to explain, and perhaps even justify, the appropriateness of the practice of making such judgments. So, what could it be for it to be appropriate to make a particular judgment, other than the judgment being true? First, there is a kind of appropriateness that I am not interested in, except tangentially. Suppose that your grandmother loves wearing elaborate hats, decorated with all kinds of absurd paraphernalia (flowers, birds, etc.), and that her friends of a similar age, with whom she spends most of her time, all love such hats too. Suppose further that you are her favorite grandchild, and that she really likes to please you. You visit her one day to discover her wearing a particularly monstrous creation. You can barely stand to look at it. She is just off to a party with her (no doubt similarly behatted) friends, and asks you “do you like my hat?” It would obviously be false to say that you like it, and yet it may be perfectly appropriate to reply “I do, I do like that party hat.” There is no question of the truth of what you say. It is false, plain and simple. And yet it is appropriate, because it spares her feelings, and doesn't risk hurting her in the long run. If she were heading to a party, where people would sneer at her for that hat, it would be a different matter, but her party will be full of people who share her taste in hats. The appropriateness of the utterance here is a matter of the morality of it, and doesn't (except indirectly) concern the truth or falsehood of what is said. Now consider a different kind of appropriateness. Suppose we are discussing infant mortality statistics for different nations. Suppose further that nations A and B calculate infant mortality by looking at the number of infants who experienced a live birth, but didn't survive until age one. Nation A has an infant mortality rate of 7 per thousand, while nation B has a rate of 11 per thousand. It appears that we can say, with confidence, that nation B has a higher infant mortality rate than nation A. But now suppose that we discover two further facts. Nation B has both a higher rate of premature birth than nation A, and a much more aggressive policy of reviving seemingly still-born premature infants, although most of the most premature don't survive for more than a day or two. Nation A has a policy of not classifying as a live birth any infant that doesn't survive more than an hour or two. Given these differences in information gathering methods, it doesn't seem appropriate to say
that B has a higher infant mortality rate than A. This isn't because we simply don't know which has a higher rate than the other. It is because there is no such thing as the infant mortality rate simpliciter. There is a rate calculated one way, and a rate calculated a different way, and so on. But what if we discovered that B's rate, if calculated the A way, would be only 6, and that A's rate, if calculated the B way, would be 12? Couldn't we then say, appropriately, that B definitely has a lower infant mortality rate than A? Or perhaps we discover that B's rate, holding the methodology the same, has dropped from 13 to 11 in the last twenty years. Couldn't we then say that B's rate has definitely dropped in the last twenty years? Not necessarily. Take the latter case first. Suppose we dig into the details a bit more, and discover a few changes that have taken place over the last twenty years. First, in-utero testing for life-shortening conditions, combined with therapeutic abortions, has increased considerably. The result has been a noticeable drop in infants born with conditions that will kill them within a year. Second, there has been a significant increase in aggressive treatment of conditions that would formerly have resulted in death in under a year. Much of this treatment is very expensive, and extends the life of the infant to no more than around eighteen months. Third, there has been a significant decrease in spending on pre-natal and neo-natal care for low-income mothers, with the result that significant numbers of infants are born with life-threatening conditions, who could have been much healthier with relatively low cost interventions. Many of these infants die before they reach one year, many more die in their second year of life, and many others suffer ill health their entire lives. So, now we see that the drop in the raw statistic, from 13 to 11, is made up of at least three very different factors. First, many fetuses that formerly would have survived to live birth, but not much beyond that, are no longer part of the infant mortality rate. Second, many infants that formerly wouldn't have survived a year, are now being kept alive a little longer, and so are not part of the infant mortality rate. Third, many infants that would formerly have lived healthy lives, are now doomed to premature death or a life-long health deficit, some of whom are now part of the infant mortality rate. Suppose that the entire drop, and more, consists in those fetuses and live births that won't live past two anyway. Some are not counted, because they are aborted, and some because their lives are stretched out beyond the age of one on
multiple machines, but not much beyond that. Of those that could quite easily have been given a long healthy life, there is a marked increase in the numbers who die before one, and before two. To say that the infant mortality rate has dropped significantly in the last twenty years would be misleading at best. It's true that the number has dropped, with no change in methodology. But the drop consists in multiple factors, which point in different directions. There is no such thing as the infant mortality rate simpliciter. How appropriate it is to use cross-temporal or cross-national comparisons, even using the same methodology, depends on our purposes in referring to this statistic. If nation A has a higher rate than nation B, using B's methodology, it may be because A doesn't expend a lot of resources keeping infants alive for just over a year, while ignoring those who can live a long healthy life, with a little well-timed intervention. Philosophers, and public policy analysts, often cite infant mortality rates to compare the quality of the healthcare of one nation with that of another. But, given that we may be far more concerned with effective interventions than with ultimately futile ones, such crude measures may serve to hide, rather than illuminate, the important underlying facts. According to this approach to appropriateness, it could be appropriate to claim that nation A has improved its infant mortality rate, but not appropriate to claim that nation B has done so, even if they both show the same drop in the rate over the same time period, as calculated the same way. This could be, for example, because B's drop is entirely due to an increase in aggressive treatment of infants that serves only to keep some alive just past one year, hooked up to expensive machines, and experiencing a low, possibly negative, quality of life (or perhaps no quality of life at all, because in a persistent vegetative state), who would otherwise have died late in their first year. On the other hand, suppose that A's drop is entirely due to better pre-natal and infant care that results in infants who would formerly have died in their first year living normal healthy lifespans. The reason it is appropriate to make the claim about A, but not about B, is that our purpose in making a claim about improved infant mortality rates is to draw attention to a worthwhile change of some sort. It is hard to see a short extension of a life lived at a low (perhaps zero, or even negative) experiential quality as a worthwhile change. By contrast, the change from a short unhealthy life to a long healthy one seems far more worthwhile.
Returning to the case of harm, it may be appropriate, in some contexts, to say that my father's act of will-writing harmed me, because I would have been better off if he hadn't written a will at all, or had written one in which he left his whole estate to me. And it may be appropriate, in other contexts, to say that his act benefitted me, because I would have been worse off, if he had written a will in which he left me what I deserved. Consider one of the latter contexts. A group of my acquaintances have spent considerable time discussing my filial failings, my father's awareness of them, and the almost-universal expectation that my father would write a will in which he left me nothing. In this context, the truth of the claim that I would have been worse off, if my father had done what I deserved (and almost everyone expected), explains the appropriateness of the claim that his act of will-writing benefitted me. It is still, of course, true, in this context as in all others, that I would have been better off, if he hadn't written a will at all, or had written one in which he left me everything. It's just that the truth of those counterfactuals is irrelevant to the appropriateness of making the claim about harm in this context. There is no fundamental fact of the form “my father's act really harmed me, or really benefitted me.” There are just (many) different true counterfactuals, undergirding the appropriateness, in different contexts, of making seemingly contradictory claims about harm. Does this commit us to the view that such claims about harm express propositions that are either false or trivially true (because predicating a property of actions that isn't real)? Or perhaps to the view that such claims have no propositional content at all? Not necessarily, as I will explain in the next chapter.
5 Contextualism: Good, Right, and Harm
5.1 Introduction

I argued in the previous three chapters that consequentialism is not fundamentally concerned with such staples of moral theory as rightness, duty, permissibility, obligation, moral requirements, goodness (as applied to actions), and harm. In fact, I argued that the standard consequentialist accounts of these notions are either indeterminate (in the case of the latter two) or redundant. What is fundamental to a consequentialist ethical theory is a value theory, for example hedonism or some other form of welfarism, and the claim that the objects of moral evaluation, such as actions, characters, institutions, etc. are compared with possible alternatives in terms of their comparative contribution to the good. For example, one action is better than another, just in case, and to the extent that, the world that contains it is better than the world that contains the other from the time of the choice onwards. (This assumes determinism, for the sake of simplicity. If indeterminism is true, we will have to replace talk of the world containing an action with talk of a set of worlds.) Furthermore, our (moral) reasons for choosing between alternative actions, institutions, etc. are essentially comparative, and correspond to the comparative consequential value of the options. I might have a better reason for choosing to do A than to do B, and better by a certain amount, but neither reason is either good or bad simpliciter. So, if all a consequentialist moral theory supports at the fundamental level are comparative evaluations of actions, characters, institutions (and thus also comparative reasons for choosing among them), what, if anything, does it have to say about such notions as right and wrong, duty, obligation, good and bad actions or harm?
5.2 Error Theory

There are two main options, the second of which will be the main focus of this chapter and the next. The first, which I will only briefly discuss here, is a form of eliminativism, combined with an error theory regarding our common usage of these terms. The consequentialist could simply say that there's no such thing as right and wrong actions, good and bad actions, harmful actions, etc. It doesn't, of course, follow from this that “anything goes,” if that is taken to mean that everything is permissible, and so, for example, it's perfectly permissible to torture innocent children. Just as no actions are either right or wrong, none are permissible or impermissible either. Neither does it follow that anything goes, if that is taken to mean that morality has nothing to say about actions. The action of torturing an innocent child will almost certainly be much worse than many easily available alternatives, and thus strongly opposed by moral reasons when compared with other options. It does, however, follow that descriptions of actions (or characters, or institutions) as being right or wrong, good or bad, harmful, required, permissible, and the like are all mistaken (either false, trivially true, or meaningless). This might seem to be a rather uncomfortable result. We can understand how some, perhaps many, claims about the rightness or goodness or permissibility of actions are mistaken, but all claims? Even people who disagree, perhaps vehemently, about which particular actions are right or wrong, for example, agree in claiming that some actions are right, and some others are wrong. Is it plausible that we have all been mistaken all this time? I don't find this possibility particularly implausible. Similar things may well be true for certain areas of theological or scientific discourse. If there is no God, for example, all claims about what God loves or hates are mistaken (either false or meaningless). A conservative Protestant theologian and a liberal Jewish theologian may disagree vehemently over what God actually loves or hates, but agree that God really does love some things and hate others. This doesn't make me any the more inclined to agree that there is a God who loves some things and hates others. Nor should it. Whatever reasons there may be to believe in a God who loves and/or hates, they are neither bolstered nor diminished by the agreement of my two otherwise disagreeing
theologians. Similarly, much scientific discourse assumes the existence of entities that may turn out not to exist. One of the best-known historical examples involves the non-existent (as we now know) substance phlogiston. Future scientists may well regard some of the currently postulated entities (or properties) of twenty-first century physics in the same way current scientists regard phlogiston. It might, perhaps, be argued that the situation is different for morality. While theology and fundamental physics are unashamedly concerned with unobservable, or at least difficult to observe, entities, morality is concerned with everyday properties that require little or no expertise to discern. I don't find such considerations particularly compelling. I see no reason whatsoever to believe that moral properties, if they exist, should be easy to discern. Nor is it even remotely plausible to suppose that moral truths are easily accessible, without the benefit of considerable training and reflection. In fact, the history of change and development in commonly accepted systems of moral belief, combined with the copious amount of disagreement that still persists among both academics and nonacademics, suggests that moral truths are either themselves nonexistent, or at least difficult to discover. If we discover that currently commonly accepted moral categories, such as right, wrong, permissible, harm, and the like, don't, in fact, pick out any fundamental moral facts, that would really be no more surprising than discovering that the gods, or fundamental particles (or physical properties), whose existence seems to be assumed by the discourse of theologians or scientists, don't really exist. A form of eliminativism, then, about such moral properties as rightness, goodness (of actions), or harmfulness (of actions) is a perfectly reasonable response to the arguments of Chapters 2–4. Perhaps all claims about the rightness of particular actions really are false, trivially true, or meaningless. But eliminativism is not forced on us by my arguments. My claims are that a consequentialist (at least) has no room for the relevant notions at the fundamental level of her theory. It is nonetheless possible to give a reductivist account of these notions, from which it follows that it is possible, even quite common, to express substantively true or false propositions involving them. In what follows, I will explore what I consider to be the most natural, and promising, way for a consequentialist to accommodate some of the commonly accepted moral properties, despite excluding them from the fundamental level of the theory.
5.3 Contextualism: Good and Bad

What I propose is a form of contextualist analysis of the relevant moral terms, similar in form to some recent contextualist approaches to the epistemological notions of knowledge and justification. Roughly, to say that an action is right, obligatory, morally required, etc. is to say that it is at least as good as the appropriate alternative (which may be the action itself). Similarly, to say that an action is good is to say that it resulted in a better world than would have resulted had the appropriate alternative been performed. To say that an action harmed someone is to say that the action resulted in that person being worse off than they would have been had the appropriate alternative been performed. In each case, the context in which the judgment is made determines the appropriate ideal or alternative. I will illustrate first with the cases of good actions and harmful actions, since they are structurally similar. First, let's revisit the main arguments of Chapters 3 and 4, concerning good and bad actions, and harmful and beneficial actions, to remind ourselves why satisfactory noncontextualist accounts of such notions are not available to the consequentialist. If the goodness of an action is to be a consequentialist property, something like the following account suggests itself:

G An act is good iff it produces more goodness than badness; an act is bad iff it produces more badness than goodness.

The general idea expressed in G is used by philosophers, both consequentialist and non-consequentialist, though not necessarily as an explicit account of good and bad actions. But what does it mean to produce more goodness than badness, or, to put it another way, to have consequences that are on balance good? The obvious answer is that for an action to have on balance good consequences is for it to make a positive difference in the world, that is, to make the world better. But better than what? Recall the example of a doctor, Agent, who is slowing the rate of increase of Patient's suffering as much as she can. Even though Patient's suffering continues to increase, Agent's action intuitively seems to be good. The reason why it seems as if Agent's action is good is that it makes the world better than it would
have been if the action hadn't been performed. This suggests the following account of good actions:

GC: An act A is good iff the world would have been worse if A hadn't been performed; A is bad iff the world would have been better if A hadn't been performed.
If Agent hadn’t administered those drugs to Patient, Patient would have suffered even more. But this is an easy case, which hides a crucial problem with GC. According to GC, whether an action is good or bad depends on what the world would have been like if it hadn’t been performed. So, what would the world have been like, if Agent hadn’t administered those drugs to Patient? That depends on what Agent would have done instead. She might have tried a different course of treatment, which was less effective. She might have simply sat and watched while Patient’s suffer ing increased. She might have tried a different course of treatment that actually increased the rate of increase of Patient’s suffering (either inten tionally or not). In this case, we don’t need to know precisely what Agent would have done instead, because we know that she did the best she could, and thus that the world would have been worse, if she had done anything else. But other examples are not so easy. Recall: Button Pusher. Agent can push any one of ten buttons (labeled “0” through “9”), killing between none and nine people, or push no button at all, with the result that ten people die. No button is any more diffi cult to push than any other, nor is there any pressure (either physical or psychological) exerted on Agent to push any particular button.
Suppose that Agent pushes the button labeled “9,” with the result that nine people die. Intuitively, this seems like a pretty bad action. However, suppose also that Agent is highly misanthropic, and wants as many people as possible to die. Her initial inclination was to press no button at all, so that all ten would die. She also enjoys being personally involved in the misfortunes of others, however, and believes that pressing a button would involve killing, whereas refraining from pressing any button would involve “merely” letting die, which, from her misguided perspective, is
less personally involving. She struggled long and hard over her decision, weighing the advantage of one more death against the disadvantage of less personal involvement. She never contemplated pressing any button other than “9.” It's clear, then, that if Agent hadn't pressed “9,” she would have pressed no button at all. So the world would have been worse, if she hadn't pressed “9.” But this doesn't incline us to judge her action to be good. Although Button Pusher might suggest that anything less than the best action is bad, we are not likely to endorse that as a general principle. Consider:

Burning Building. There are ten people trapped in a burning building. Agent can rescue them one at a time. Each trip into the building to rescue one person involves a considerable amount of effort, risk and unpleasantness. It is possible, albeit difficult and risky, for Agent to rescue all ten.
Suppose that Agent rescues nine people, and then stops, exhausted and burned. She could have rescued the tenth, so doesn't do the very best she can, but do we really want to say that her rescue of nine people wasn't good (was actually bad)? None of the different interpretations of GC can provide the consequentialist with a satisfactory account of what it is for an action to be good. The intuition on which they are based is that a good action makes the world better. The difficulty lies in producing a general formula to identify the particular possible world (or worlds), than which the actual world is better, as a result of a good action. Any unified theory requires a way of fixing the contrast point, but the appropriate contrast point varies with the situation of evaluation. Part of the problem is that our intuitions about the goodness or badness of particular actions are often influenced by features of the context that it would be difficult to incorporate into a general account. This suggests the following contextualist account of good and bad actions:

G-con: An action is good iff it is better than the appropriate alternative. An action is bad iff it is worse than the appropriate alternative.

As examples for which different conversational contexts are unlikely to change the appropriate alternative, consider again Button Pusher and
Burning Building. Suppose that Agent pushes “5” in Button Pusher. It is hard to imagine a conversational context in which anything other than pushing “0” is selected as the appropriate alternative. Pushing “5” would clearly be judged a bad action in just about any plausible conversational context. Now suppose that Agent rescues three people in Burning Building. In most conversational contexts the appropriate alternative will be rescuing none (or perhaps one), and so the rescue of three will be judged to be good. Now consider an example for which a change in conversational context might change the appropriate alternative. Recall the case from Chapter 3 of Ross Perot's and my donations to the homeless. Let's flesh it out with some more details.

Perot. Ross Perot gives $1,000 to help the homeless in Dallas and I give $100.

In most conversational contexts both of our actions will be judged to be good, because the appropriate alternatives will be ones in which we give no money. But consider again Perot's donation. Let's add a couple of details to the case: (i) Perot has a firm policy of donating up to, but no more than, $1,000 per month to charity. (Some months he gives less than $1,000, even as little as nothing at all, but he never gives more than $1,000.) (ii) He had been intending to give $1,000 this month to complete construction on a dam to provide water for a drought-stricken village in Somalia, but changed his mind at the last minute. As a result of Perot's switching the money this month to the homeless in Dallas, the dam takes another month to complete, during which time twenty children die of dehydration. Now it is not nearly so clear that we should say that Perot's action was good. A change in the description of the action might change the appropriate comparison. The extra details about the dam in Somalia make it unclear how to evaluate the action. It is still true that giving the $1,000 to the homeless is better than leaving it in the bank, but it is unclear whether this continues to ground the judgment that Perot's action is good. In fact, it is very tempting to say that Perot did a bad thing by diverting the money from the dam to the homeless. The point here is not just that learning the details of the dam in Somalia changes the appropriate comparison. The point is rather that what comparisons are appropriate can change with a change in the linguistic context,
even if there is no epistemic change. For example, different descriptions of the same action can make different comparisons appropriate. If we ask whether Perot's diversion of the $1,000 from the starving Somalians to the Dallas homeless was good, we will probably compare the results of the actual donation with the alternative donation to the Somalians. If, however, we ask whether Perot's donation to the Dallas homeless was good, we may simply compare the donation to the alternative in which the money sits in the bank, even if we know that Perot had previously intended to send the money to Somalia. It might be objected at this point that there are theories of action individuation, according to which Perot's diversion of the $1,000 from the starving Somalians to the Dallas homeless is not the same action as Perot's donation to the Dallas homeless. According to such theories, my example involves a switch from one action to another (spatiotemporally coextensive) one, rather than a mere switch in the way of describing a single action. However, there can clearly be changes in linguistic context that affect the appropriateness of comparisons, without affecting which action is being referred to, on any plausible theory of action individuation. There may be a change in the appropriate comparison even without a change of action description. Suppose that, just before asking whether Perot's donation to the Dallas homeless was good, we have been discussing his prior intention to give the money to the Somalians. In this context, we are quite likely to compare the actual donation with the better alternative. On the other hand, suppose that, just before asking whether his donation was good, we have been discussing the fact that Perot has made no charitable contributions at all in four of the last six months, and small ones in the other two. In this context, we will probably compare the actual donation with a worse alternative.
5.4 Contextualism: Harm and Benefit Consider now the most common consequentialist approach to harm: HARM An act A harms a person P iff P is worse off, as a consequence of A, than she would have been if A hadn’t been performed. An act A benefits a person P iff P is better off, as a consequence of A, than she would have been if A hadn’t been performed.
As I demonstrated in Chapter 4, the same problems that apply to the consequentialist account of good and bad actions apply to the consequentialist account of harmful and beneficial actions. Recall the example of Bobby Knight, the erstwhile basketball coach at Texas Tech University: A member of the Philosophy department, passing Bobby Knight on campus, waves cheerily and says "Hey, Knight." Bobby Knight, turning as red as his sweater, seizes the hapless philosopher around the neck and chokes her violently, while screaming obscenities. By the time Bobby Knight has been dragged away, the philosopher has suffered a partially crushed windpipe and sustained permanent damage to her voicebox, as a result of which she will forever sound like Harvey Fierstein. Has Bobby Knight's act harmed the philosopher? The intuitive answer is obvious, and HARM seems to agree. The philosopher is much worse off than she would have been had Bobby Knight not choked her. But suppose we discover that Bobby Knight has recently been attending anger management classes. Furthermore, they have been highly successful in getting him to control his behavior. When he becomes enraged, he holds himself relatively in check. On this particular occasion (only the third violent outburst of the day), he tried, successfully, to tone down his behavior. In fact, if he hadn't been applying his anger management techniques, he wouldn't have choked the philosopher, but would rather have torn both her arms from her body and beaten her over the head with them. Since it took great effort on Bobby Knight's part to restrain himself as much as he did, it seems that the closest possible world in which he doesn't choke the philosopher is one in which she is even worse off. HARM, in this case, seems to give us the highly counterintuitive result that, not only does Bobby Knight's act of choking not harm the philosopher, it actually benefits her. Likewise, recall Button Pusher 2, in which Agent is faced with the choice of pushing any of 100 buttons, or no button at all, with different resulting levels of suffering for Victim, from excruciating agony if no button is pushed, all the way to no agony if button "0" is pushed. Agent, remember, wants Victim to suffer, but also wants to be actively involved in bringing about that suffering. She debates between pressing "99" and pressing no button, never even considering pressing a different button. HARM seems to give the result that Agent doesn't harm Victim, if she presses "99," even though this results in almost maximally excruciating agony. Victim is actually better off
than he would have been if Agent hadn't pressed "99." If she hadn't pressed "99," she wouldn't have pressed any button, and Victim would have suffered even more excruciating agony. As with good and bad actions, the consequentialist account of harmful and beneficial actions includes a comparison with an alternative possible world. To harm someone is to make her worse off than she would have been. The alternative with which we are to compare the actual action, though, is not always plausibly identified by the counterfactual. Features of the conversational context in which a particular action is being assessed can affect which alternative is the appropriate one.1 Now let's consider a contextualist account of harm: H-con: An action A harms a person P iff it results in P being worse off than s/he would have been had the appropriate alternative been performed. Many straightforward examples involve actions for which the conversational context is most unlikely to change the appropriate comparison, or at least unlikely to change it so as to produce a different judgment. For example, chancing to encounter you at a philosophy conference, I kill and eat you. It is hard to imagine a conversational context in which the appropriate alternative action is worse for you than being killed and eaten. Likewise, to use a real example, if I say that Booth's shot harmed Lincoln, the context selects, as an appropriate alternative act of Booth, pretty much anything else except shooting Lincoln. It may be true that Booth could have shot Lincoln in such a way as to lead to a much more agonizing death than the one he in fact suffered. This alternative, however, is normally not salient (and may never be). However, it's also a fairly straightforward matter to produce an example for which the appropriate alternative does change with a change in conversational context. Sometimes different, equally normal, contexts can render one act a harming or a benefiting. Recall the example from Chapter 4 of my father writing a will, in which I receive half his estate. This is the first 1 Counterfactuals themselves are, of course, infected by context, but not always in the same way as judgments about harm. For example, simply entertaining a counterfactual may change the context in a way in which considering a judgment of harm without explicitly entertaining the counterfactual may not.
will he has written. Had he died intestate, I would have received all of his estate. Two among his many other options were to leave me none of his estate or all of it. Does my father's act of will-writing harm me or benefit me? Imagine a conversation focused on my previous plans to invest the whole estate, based on my expectation that I would receive the whole estate. In such a context, the appropriate alternative, with which to compare the actual act of leaving me half the estate, may well be one in which I receive all the estate. In which case, my father's act resulted in me being worse off than I would have been, had he performed the appropriate alternative. Thus, the claim that my father harmed me by writing the will may express a true proposition. Imagine, though, a different, but equally natural, conversation focusing on my lack of filial piety and the fact that I clearly deserve none of the estate. In this context, the appropriate alternative may well be one in which he writes me out of his will altogether. In which case, my father's act resulted in me being better off than I would have been, had he performed the appropriate alternative. Thus, in this context, the claim that my father benefitted me by writing the will may express a true proposition. Introducing the previous example, I said that different contexts can render one act a harming or a benefiting. Given that I am talking about harm all things considered, how can I claim that one act can correctly be described as both a harming and a benefiting? Wouldn't this be contradictory? Likewise, in discussing Perot's donation to the Dallas homeless, I said that the context in which it is discussed can determine whether the appropriate comparison is with a better or a worse alternative, and thus whether Perot's action is correctly described as good or bad. Again, it seems that I am claiming that one action can be correctly described as both good and bad. Isn't this contradictory? No. In order to see why not, we need to be precise about what I am committed to. I say that one act can be correctly described in one conversational context as good, and can be correctly described in a different conversational context as bad. Likewise, one act can be correctly described in one conversational context as a harming, and can be correctly described in a different conversational context as a benefitting. The reason why no contradiction is involved is that a claim of the form "act A was good," or "act A harmed person P," can express different propositions in different contexts. On my suggested account of good actions, to claim that act A
was good is to claim that A resulted in a better world than would have resulted if the appropriate alternative to A had been performed. Likewise, to claim that act A harmed person P is to claim that A resulted in P being worse off than s/he would have been if the appropriate alternative to A had been performed. Given the context-relativity of "the appropriate alternative," claims about harm (and benefit) have an indexical element. In this case, "appropriate" functions as an indexical, in much the same way as "today" functions as an indexical. Features of the context of utterance help to determine the referents of the terms. In the case of "today," the relevant feature is simply the day on which it is uttered (or written). In the case of "appropriate," the relevant contextual features may be more complicated. But, just as "today is a good day to die" can express different propositions in different contexts of utterance, so can "Smith's act of will-writing harmed his son," or "Perot's donation to the Dallas homeless was good." How, then, can we apply this approach to the cases of group harms discussed in Chapter 4? Just as there are facts of the form "act A results in person P being worse off than he would have been if alternative act B had been performed," there are facts of the form "the combination of X's doing A and Y's doing B results in person P being worse off than if X had done C and Y had done D." In discussing whether the group consisting of X and Y harmed P, by doing A and B respectively, some conversational contexts may pick out the appropriate alternative as the one in which X does C instead and Y does D instead, while other contexts may pick out alternatives in which only one of X and Y acts differently from how they in fact acted. Consider again Case One. Both X and Y shoot me simultaneously. I die. If one, but not the other, had shot me, I would still have died. Only if neither had shot me, would I have survived. Since I end up dead, and death is considered a paradigm case of a harm, we need to assign the harm to something. X's shooting me is naturally contrasted with his not shooting me (and doing pretty much any other non-lethal thing instead), as is Y's shooting me. Since it's not a fact that if X hadn't shot me, I wouldn't have died, nor is it a fact that if Y hadn't shot me, I wouldn't have died, we need to find another fact in the vicinity that hooks up with my being dead rather than alive. Such a fact is that if neither X nor Y had shot me, I wouldn't have died. There may be many
other facts that also hook up with my death. For example, if my neighbor had been more persistent in getting me to fix her leaking faucet, I would have been delayed and would not have been in a position to be shot by X and Y. This fact is unlikely to be salient. Neither is the fact that, if X and Y had not shot me and Fred Astaire had done something different too, I would not have died. The salient fact is that, if neither X nor Y had shot me, I wouldn't have died. There is, however, no fundamental moral fact of the form "the group consisting of X and Y really harmed me." Even when it is contextually appropriate to take a fact about what would have happened if the members of a particular group had acted in certain different ways as grounding a claim about group harms, there may be no simple recipe for assigning credit or blame to the members of the group. In the version of Case One-and-a-Half in which both X's and Y's actions depend on their knowledge that the other will shoot me, it is probably inappropriate to blame either one, although it may be appropriate to assign the harm to the group consisting of the two of them. In Case One, it is probably appropriate to blame both X and Y, and to assign the harm to the group consisting of the two of them. The contextualist approach can also deal with those cases in which, as Parfit says, there is not one smallest group who together bring about the harm. Consider a harming version of the Third Rescue Mission: Overkill. If four people stand on a platform, this will kill a hundred miners. Five people stand on this platform. There is not one smallest group of whom it is true that, if they had all acted differently, the other people would not have been harmed. Nevertheless, it seems natural to claim that the five people together harm the hundred miners. The salient fact that would make such a claim appropriate is of the form "if any two of that group had acted differently (done almost anything except stood on the platform), the miners would not have been killed." At this point I should clarify the role of salience in my contextualist account of moral terms. I mean by salience, roughly, the degree to which the participants in a conversational context consciously focus on an alternative. There may be more sophisticated accounts of salience, but this is certainly a common one. Salience often plays a role in determining
which alternative the context selects as the appropriate one, but salience may not be the only determining factor. To see this, consider an example that might be thought to pose a problem for my account, if salience is solely responsible for selecting the appropriate alternative.2 Imagine a group of comic-book enthusiasts talking about how great it would be if their leader, Ben, had the abilities of Spiderman. After an hour or three of satisfying fantasizing, they are joined by Ben himself, who apologizes for being late. He explains that he was on his way when his grandmother called him on his cellphone. She had fallen, and she couldn't get up without his help. It took him more than an hour to get to her, because of traffic congestion, during which time she had been lying uncomfortably on the floor. Once he helped her up, though, she was fine. He is sorry that he is late, but the rest of the group, who are also devoted grandsons, must agree that benefiting his grandmother is a good excuse. "Au contraire," reply his friends, that is the "worst excuse ever." He didn't benefit his grandmother at all, but rather harmed her, since he would have reached her a lot sooner, and prevented much suffering, if he had simply used his super spider powers to swing from building to building, instead of inching his way in traffic. Furthermore, he would have reached the meeting on time. Clearly, something is amiss here. Even though the alternative in which Ben swings through the air on spidery filaments is, in some sense, salient, it is not thereby the appropriate alternative with which to compare his actual behavior. We can't make an alternative appropriate simply by talking about it, although we may be able to make it salient that way. Perhaps we should add to salience, among other things, a commitment to something like "ought implies can." Since Ben cannot swing through the air on spidery filaments, this is ruled out as an appropriate alternative.3 While this may explain why the super spidery alternative is not appropriate in this case, I am reluctant to endorse a general principle, according to which causal inaccessibility rules out alternatives. As I will argue in the next chapter on contextualism and free will, and contextualism and the non-identity "problem," there may be conversational
2 I owe at least the general idea of this example, though not the details, to Ben Bradley. He suggested something like this in discussion as a problem for my account. 3 I owe this suggestion to Julia Driver.
contexts in which a causally inaccessible, or even nomologically impossible, world is the appropriate comparison. However, since my purpose here is simply to sketch the contextualist approach to various ethical notions, I won't attempt a more detailed account of exactly how context helps fix the referent of "appropriate alternative," and how salience differs from appropriateness. So, where does this discussion leave the place of harm in consequentialist theory? I am not proposing that we do away with all talk of harm or benefit in our ordinary moral discourse. What I am claiming is that, for the purposes of ethical theorizing, harm and benefit do not have the kind of metaphysical grounding required to play fundamental roles in ethical theory, nor do judgments of harm or benefit make any distinctive contributions to reasons for action. If, in considering whether to do A, I correctly judge that A would harm P, I am judging that A would result in P being worse off than s/he would have been if I had performed the appropriate alternative action. This is certainly a relevant consideration from a consequentialist perspective, but it is also one that I would already have taken into account, if I had considered all my available alternatives.
5.5 Contextualism: Rightness and Supererogation Consider a contextualist analysis of “right”: R-con: An action is right iff it is at least as good as the appropriate alternative. The idea here is that the concept of right action (and duty, permissibility, obligation, and the like) invokes a standard, against which the action in question is judged. The standard maximizing consequentialist theory is a non-contextualist4 theory of the right, which fixes the standard
4 This may be a little hasty. I argue in the next chapter that the notion of an "available alternative" is itself affected by context. It follows that a maximizing version of consequentialism, according to which an action is right iff there are no better available alternatives, will itself be partly contextualist.
as optimizing. For the maximizer, the appropriate ideal is always the optimal option. By contrast, the contextualist approach I am suggesting allows the conversational context to affect the standard. It seems likely that most (ordinary) contexts will be sensitive to such factors as difficulty (both physical and psychological), risk, and self-sacrifice in establishing the appropriate ideal. For example, most, if not all, contexts will establish the act of pushing "0" as the appropriate ideal in Button Pusher, so that any other action will be judged wrong. Burning Building is a little trickier, but it is hard to imagine many ordinary contexts that set the rescue of everyone as the appropriate alternative. The standard criticism of maximizing consequentialism, that it fails to accommodate supererogation, is based in the intuition that there are cases in which duty, or right action, doesn't demand maximizing. It is possible, it seems, in at least some of these cases, to go "above and beyond" the demands of duty. In particular, in cases where going beyond the demands of duty involves producing more good for others, at the expense of some (perhaps much) cost (or risk) to oneself, and perhaps with great effort, such behavior is deemed particularly praiseworthy. Burning Building seems to be a good example of such a case. In order to get a context that would set the rescue of all ten people as the appropriate ideal, we could imagine a conversation among committed maximizing consequentialists, or perhaps among proponents of a Christ-as-ideal moral theory, or perhaps it will be enough to imagine a conversation in a philosophy class that has just been presented with maximizing consequentialism. Just as the epistemological contextualist presents classroom contexts as setting particularly demanding epistemic standards, and thus as being ones in which "I don't know that I have hands" can be uttered truly, so the ethical contextualist can present classroom contexts in which the maximizing alternative determines the truth value of claims of rightness. Of course, classroom contexts might also set very low standards. A discussion of the demandingness objection to consequentialism might set a pretty lax standard. Many ordinary contexts may also set pretty low standards. I have often been told by colleagues that my behavior was supererogatory, when I have agreed to a committee assignment which, although not strictly required by the terms of my employment contract, nonetheless involved no personal risk, and fairly minimal effort.
It is important to stress that the contextualism I am suggesting is a linguistic thesis. I am not suggesting that there is a property of rightness (or goodness, etc.) of a particular action, which can vary with the context in which it is discussed. I am suggesting that a sentence such as "John's act of donating $100 to CARE on June 1st 2017 was right" may express different propositions when uttered in different contexts. The rightness of John's act doesn't vary with the context in which it is discussed. That is because the context in which the previous sentence was uttered (or read) determined the property picked out by "rightness" in that context. Assume that John's act possessed that property. If so, no change in linguistic context can change the fact that John's act possessed that property. A change in linguistic context can make it the case that a different utterance of "rightness" will pick out a different property. Now consider a case, which may seem to raise problems for this contextualist approach to rightness, or at least to permissibility: Button Pusher 3: Agent is confronted with a panel of 11 buttons, labeled from "0" to "10." She is told that, if no button is pressed in the next 10 seconds, Victim will experience a fairly, but not overwhelmingly, painful shock for one minute. If button "5" is pressed, Victim will experience this same shock for a minute. If button "10" is pressed, Victim will experience an intensely agonizing shock for one minute. If button "0" is pressed, Victim will experience no shock. The other buttons all correspond, in the manner appropriate to their labeling, to varying levels of unpleasant shock between intense agony and nothing at all. Furthermore, if Agent presses a button, any button, she herself will experience a shock for one minute, at level "1" (the level of shock that Victim will experience, if button "1" is pressed). This is mild, but still noticeable enough to be unpleasant. Clearly, the best thing that Agent can do is press "0," resulting in a mildly unpleasant shock for herself, and no shock for Victim. The worst thing she can do is press "10," resulting in an intensely agonizing shock for Victim, and a mildly unpleasant shock for herself. Notice something curious about this example. It is fairly easy to envisage conversational contexts, in which pressing "0" is picked out as the appropriate alternative, and thus in which "Agent acted wrongly in doing nothing" would express a true proposition. Pressing "0" inflicts a mild pain on Agent, but Victim will suffer a much worse pain, if Agent doesn't press any button. One doesn't have to be a maximizer to think
that it's reasonable to expect someone to endure a little suffering in order to prevent someone else from suffering a lot more. It's also fairly easy to envisage contexts in which not pressing any button is picked out as the appropriate alternative. It's true that Victim will suffer, as a result of no button being pressed, considerably more than Agent will suffer as a result of pressing any button. But it's just for a minute, and it's not life-threatening. Surely, someone might say, no-one is required to endure even mild suffering themselves to prevent even considerable non-fatal suffering to others. I know many people who would say this. In fact, I know some people who would say that Agent wouldn't even be required to press a button, if Victim would suffer at level 10 as a result of Agent's inaction. I find this attitude misguided, to say the least, but there's no denying that it's fairly common, especially among those attracted to some form of (ethical) libertarianism. (Some more extreme libertarians would even say that Agent isn't required to press any button, even if Victim would die otherwise, and Agent herself wouldn't suffer any unpleasantness from pressing a button.) But now, try to envisage a context in which pressing, say, "3" is picked out as the appropriate alternative in my original version of Button Pusher 3, in which Victim will suffer at level 5 if no button is pressed. If Agent presses "3," Victim will suffer less than if Agent presses no button, and the total amount of suffering (3 for Victim and 1 for Agent) will also be less. If there can be contexts in which pressing no button is picked out as the appropriate alternative, shouldn't there also be contexts in which doing something slightly better overall is instead picked out as the appropriate alternative? Yet, it seems natural to say that, if Agent is going to press any button, the only acceptable button is "0." Since Agent will suffer the same, minor, amount, no matter which button she presses, there's simply no excuse for pressing "3" rather than "0." It's fairly easy, then, to imagine a context in which people are inclined to say that it's morally acceptable not to press any button, supererogatory to press "0," but unacceptable to press "3." But, if the conversational context merely determines which option is picked out by "appropriate," and the ranking of the options is determined by their consequences, shouldn't pressing "3" be considered to be better than not pressing any button, and thus to be at least morally acceptable? The overall consequences of pressing "3" are, after all, better than those of not pressing any button.
One option, in response to a case like this, is to embrace a far more radical version of ethical contextualism than the one I have proposed. Perhaps the conversational context doesn't simply determine which option is the "appropriate" one in interpreting "right" (or "permissible," etc.), but it also, at least sometimes, determines the deontic ordering of alternatives as better or worse.5 The most likely contexts in which the option of not pressing any button was picked out as the "appropriate" one would also provide a ranking of alternatives that didn't simply track the overall value of the worlds containing them. Thus, the option of pressing "3" could be ranked as deontically worse than not pressing any button, even though the overall state of affairs in which Agent suffers at level 1 and Victim suffers at level 3 is better than the one in which Victim suffers at level 5 and Agent doesn't suffer at all. Such a radical approach is clearly a departure from the basic consequentialist framework which I endorse, and within which the arguments of this book are situated. There are good reasons to embrace the consequentialist approach, which I sketched in the introductory chapter, and thus not to take the more radical contextualist approach. The main reason to take the more radical approach is to accommodate what might seem to be common intuitive judgments about hypothetical cases, such as the one under discussion. But this is a particularly weak reason to depart from an otherwise well-supported approach, especially if the intuitions, such as they are, can be explained in a way that doesn't require such a departure, as I will now argue. I said above that it is fairly easy to imagine a context in which it is judged permissible not to press any button, supererogatory to press "0," but impermissible to press "3." The puzzle seems to be that, according to the overall consequentialist approach I am working with, pressing "3" is better than not pressing any button. But recall that I also explained that seemingly puzzling conjunction of judgments by pointing out that, if Agent was going to suffer the pain involved with pressing a button, there was simply no excuse for pressing "3" rather than "0." This suggests that, if there is anything to the judgment that it is worse to press "3" than not to press any button, it has to do with an evaluation of character rather than of action. 5 Peter Unger suggests something similar in Unger 1996, ch. 7.
What kind of person would choose to undergo the mild suffering of pressing a button, and then not minimize Victim's suffering, given that it makes no difference to how much they themselves would suffer (or how much effort they would have to expend)? A person who would choose to press "3" would probably have a worse character than one who would choose not to press any button. Given that judgments of character can come apart from judgments of action, the apparent puzzle disappears. It really is worse not to press any button than to press "3." If Agent tells you that he is debating between those two options, and has definitively ruled out any other option, you would try to persuade him to press "3." If you simply heard that Agent was debating between those two options (and only those two), and you had no opportunity to influence his decision, you would likewise hope that he chooses to press "3." Nonetheless, if you learn of someone faced with the choice in Button Pusher 3, who chooses to press "3," you would probably think worse of them than of someone who chooses not to press any button. According to the contextualist approach I am considering, the conversational context in which a judgment is made (e.g. of the goodness or permissibility of an action) can affect which of an agent's options is the relevant one with which to compare her behavior, but it can't affect the ordering of the options themselves. Which of two possible options is better, and by how much, is determined by the value of the worlds containing the options, in terms of net goodness. Talk of "the ordering of the options" might suggest that, in any choice situation, there is a certain array of choices, ordered from best to worst according to the axiology of the relevant version of consequentialism. Conversational context affects which among that array is the relevant one with which to compare the actual choice. But it might do more than that. Even though context, on my suggestion, can't change an ordering of already given options, it might affect which options get into the ordering in the first place. That is, it might affect which options count as "possible" alternatives to the actual choice made by the agent. This helps to explain why the possibility that causal determinism is true doesn't pose a threat to consequentialist approaches to ethics, as I will explain in the next chapter.
6 Contextualism: Determinism, Possibility, and the Non-Identity Problem
6.1 Contextualism: Free Will and the Threat of Determinism Determinism is thought to pose a problem for moral responsibility to the extent that we agree with the principle that someone is only to be held morally responsible for an action, if s/he could have done otherwise. The worry, of course, is that if determinism is true, nobody could ever have done otherwise. Utilitarians, and other consequentialists, might seem to be in a better position than other, less enlightened, theorists in this regard. The standard utilitarian response to the possibility of determinism (or indeterminism), well articulated by Sidgwick, is twofold. First, holding someone responsible, and related notions such as desert, praise, blame, punishment, and reward, are all actions that themselves can be assessed in terms of their consequences. the Determinist can give to the terms "ill-desert" and "responsibility" a signification which is not only clear and definite, but, from an utilitarian point of view, the only suitable meaning. In this view, if I affirm that A is responsible for a harmful act, I mean that it is right to punish him for it; primarily, in order that the fear of punishment may prevent him and others from committing similar acts in future. (ME Bk. I ch. V, sec. 4)
So, the question of whether to hold someone responsible for an action is to be settled by reference to the consequences of the act of holding someone responsible. Whether someone could have done otherwise is,
at best, indirectly related to the question of whether and how to hold them responsible. But what of the question of what makes an act right or wrong? If we say that someone is morally responsible for an act just in case it is right to punish him for it, we need to know what makes such an act of punishing right, as well as what makes any act right. Consider, then, the standard maximizing account of rightness embraced by Sidgwick: an act is right just in case it is the best act (in terms of its consequences) that the agent could have performed. Of course, this seems to bring back the question of whether the agent could have done otherwise. If determinism is true, it might be thought that every action is both optimal and pessimal. Every action is both the best and the worst of all the acts that the agent could have performed, because every action is the only action that the agent could have performed. If optimality is a sufficient condition for rightness, all our actions will be right, which entails that all acts of blaming, punishing, holding responsible, etc. will also be right. This would lead to the rather counterintuitive conclusion that every act that is punished is both blameworthy and right! It is blameworthy, because it is rightfully punished, and it is rightfully punished, because punishing it was the best possible action (because the only possible action). It is right, because it is the best possible action (again, because it is the only possible action). Sidgwick's response to this is: As regards action generally, the Determinist allows that a man is only morally bound to do what is "in his power"; but he explains "in his power" to mean that the result in question will be produced if the man choose to produce it. And this is, I think, the sense in which the proposition "what I ought to do I can do" is commonly accepted: it means "can do if I choose", not "can choose to do." (ME Bk. I ch. V, sec. 3)
Likewise, G. E. Moore has a similar account of these notions: all along we have been using the words “can,” “could,” and “possible” in a special sense. It was explained in Chapter I (¶¶ 17–18), that we proposed, purely for the sake of brevity, to say that an agent could have done a given action, which he didn’t do, wherever it is true that he could have done it, if he had chosen; and similarly by what he can do,
or what is possible, we have always meant merely what is possible, if he chooses. Our theory, therefore, has not been maintaining, after all, that right and wrong depend upon what the agent absolutely can do, but only on what he can do, if he chooses. (Ethics, ch. 6, sec. 3)
So, on the maximizing account, an act A is right just in case there is no better act B, such that if the agent chose to do B, she would have done B. Likewise, on the more general contextualist approach I have been exploring, an act A is right just in case it is at least as good as the appropriate alternative B, such that if the agent chose to do B, she would have done B. This allows the consequentialist to talk of the range of actions that are available to an agent in a given situation, even if determinism is true. Actions may be non-trivially optimific, or truly suboptimific. The standard maximizing conception of rightness can be used to judge actions, as can a satisficing conception, or a scalar conception. The range of available alternatives to a given action is all those actions of which it is true that, if the agent had chosen to perform them, she would have succeeded in performing them. Whether she could have chosen to perform them is irrelevant. But things are not so simple. Consider a standard Frankfurt-style example designed to undercut the Principle of Alternate Possibilities. George is considering whether to sign into law a statute requiring homosexuals to be branded with a pink triangle on their foreheads and atheists to be branded with a scarlet "A." Karl has a completely reliable mind-control device with which he can guarantee that George will choose to sign the law. If the device detects that George is about to choose to veto the law, it will make him choose to sign it. Otherwise, the device is inactive. In the standard version of this example, George chooses without any help from Karl's device. That is, George chooses to sign the bill, and he would have so chosen, even if Karl's device hadn't existed. The example is supposed to pump our intuitions that George is morally responsible for signing the bill (or choosing to sign it), even though he couldn't have done otherwise. Likewise it could pump our intuitions that George's act of signing is wrong. Now consider the case in which the device actually operates to ensure that George chooses to sign the bill. That is, the device detects that George is about to choose to veto the bill, so it intervenes to make
George choose to sign it. Now, although George chooses to sign the bill, it is no longer true that he would have so chosen even if Karl's device hadn't existed. This version certainly seems to pump our intuitions that George is not morally responsible in this case. Leaving aside intuitions about moral responsibility, what should we say about how George's behavior (in either version) compares with his possible alternatives? Do we say that it is suboptimal, because, if he had chosen to veto the bill, homosexuals and atheists wouldn't have been persecuted, at least not quite so much? But how do we evaluate that counterfactual? The closest world in which George chooses to veto the bill is one in which Karl's device doesn't work. Given that Karl's device is completely reliable, the closest world in which it doesn't work may well be one in which it doesn't even exist. Given that Karl's device has been highly instrumental in gaining political power for George, if the device hadn't existed, or if it had been unreliable, George wouldn't have had political power, and so wouldn't have been in a position to sign or veto the law in question. In fact, given that Karl's device has been highly instrumental in gaining political power for those who wrote and passed the law in the first place, if the device hadn't existed or had been unreliable, there would have been no such law for anyone to sign or veto. So, the closest world in which George chooses to veto the law may be very far indeed from the actual world in which George chooses to sign the law. Given the ways in which the world would have had to have been different in order for George and all the homophobic theocrats to have gained power without Karl's device, it may even be the case that that world with the law vetoed is worse than the actual world with the law signed. (The level of homophobic and religious persecution in the distant veto world may be so high, even without the branding law, that not only the overall wellbeing, but even the wellbeing of homosexuals and atheists may be higher in the actual signing world.) If the closest world in which George chooses to veto the law is as far from the actual world as all that, why would we claim that vetoing is a relevant alternative with which to compare George's act of signing? Compare this case with a straightforward case of physical disability limiting a range of choices. Suppose that Mary is a kindergarten teacher. An explosion in the kindergarten causes a joist to sever her left leg below
the knee, and traps two children, Bill and Ben, in a burning room. Mary quickly wraps a tourniquet around her leg, and hops into the room to save Bill, who is closer to the door. However, by the time she has got Bill to safety, Ben is dead. If she had had both legs, she could have run into the room and had time to save both Bill and Ben. However, given her recent loss of her leg, her saving Bill is the best she could do, and is pretty heroic to boot. It simply wasn't in her power to save both, because it wasn't in her power to run (as opposed to hop) into the room. But what about the counterfactual "if she chose to run into the room, she would have succeeded in running into the room"? That may well be true. After all, Mary wouldn't have chosen to run into the room, unless she hadn't lost a leg. She is level-headed enough not to choose to do something that she knows full well she cannot do. The world in which Mary doesn't lose a leg, perhaps because she moved just before the joist fell, and does choose to run into the room to save both Bill and Ben, is much closer to the actual, hopping world than the closest George veto-choosing world is to the actual George signing-choosing world. But the fact that there is a relatively close world in which Mary doesn't lose her leg and thus does choose to run doesn't ground the claim that it was in Mary's power to run in the actual world. It might be objected at this point that I am unfairly bringing in details of what the world would have had to have been like in order for George to have vetoed the law. Sidgwick's suggestion is about the meaning of "in his power," and is intended to apply to the question of what actions are in our power, even if determinism is true. Perhaps we should simply stipulate that we ban backtracking counterfactuals, and hold everything about the actual world constant, except for the choice itself. In this case, despite Karl's device both existing and operating, George chooses to veto the law. Quite apart from the considerable whiff of adhocery about this move, it leaves open the question of how to assess the option of vetoing the law. Clearly, we are not supposed to hold everything about the actual world except for the choice the same. We are supposed to alter things after the choice. So, as a result of choosing to veto the law, George actually does veto the law. As a result of George vetoing the law, it doesn't go into effect, and many homosexuals and atheists are persecuted less than in the actual world. This is what is supposed to ground our judgment that George's act of signing the law is suboptimal. But what of
Karl's reaction to what he sees as a violation of natural law? Since we can't suppose that Karl's device malfunctions in any way prior to George choosing to veto the law, Karl may well conclude from the fact that George chooses to veto that the natural order has broken down. Perhaps he takes this as a sign of the end times. The rapture has occurred, and he is still down on earth! Who knows what such beliefs would push him to do? We certainly can't be sure that the results would be better overall than those of George signing the original law. Furthermore, in the version of the George example in which Karl's device detects that George is about to choose to veto, and so works its neurophysiological magic to ensure that George chooses to sign, are we really prepared to say that it was in George's power not to sign? A more humdrum example can also illustrate a problem with Sidgwick's suggestion. Frances Howard-Snyder considers an example in which she is playing chess against Karpov.1 Suppose that something really important depends on whether she beats Karpov. Perhaps some innocent lives will be saved just in case she beats Karpov, and she knows it. Given that, it would clearly be wrong of her not to beat Karpov, if she had it in her power to beat him. However, we are all supposed to agree, Frances cannot beat Karpov. He is the world chess champion, and she is a lowly philosophy professor from Bellingham. No matter how hard she tries, she cannot beat him. But this is not because he is unbeatable. After all, Big Blue has beaten him,2 as have some very good human chess players. There are some sequences of moves, such that those sequences would result in beating Karpov. Each of those sequences consists of moves that Frances could make, if she chose to do so. Call one such sequence "A." If Frances executes A, she beats Karpov. Furthermore, if she chooses to execute A, she can execute A. So, why isn't it in her power to beat Karpov? I suggest that the solution to these problems for the consequentialist is to appeal to the conversational context of praising, blaming, judging right and wrong, holding responsible, and the like. Even if, strictly 1 Howard-Snyder 1997. 2 I know, I know, it was Kasparov who lost to Big Blue (thank you pedants, who claim to be able to tell the difference between Russian chess players), but if you take an "a" and the "s" out of "Kasparov" and rearrange the remaining letters just a little, you get "Karpov." Besides, I bet that Big Blue could also beat Karpov.
speaking, an agent couldn't have done otherwise, conversational context may select certain counterpossible alternatives as the relevant ones with which to compare the action. We may, therefore, be able to make sense of a negative (or positive) judgment of an action based on a comparison of the action with an alternative that was not, strictly speaking, available to the agent. So, how does this contextualist approach apply to the examples I discussed earlier, or to the possibility that determinism is true? Consider first Frances's chess match against Karpov. Even though there is at least one sequence of moves, A, that would have beaten Karpov, and Frances could have executed A if she had chosen to do so, given that she didn't know what A was, and couldn't reasonably have been expected to know, no conversational context would select A as an appropriate alternative to whatever sequence of moves Frances did make. That is because our judgments about rightness are constrained by our judgments about abilities, and our judgments about abilities are constrained in at least some ways by our judgments about knowledge. The examples of George signing the bill are a bit more complex. Some contexts may be sensitive to the presence of Karl's device in such a way that it renders George's act not wrong, though it may not be right either. I suspect that most conversational contexts presuppose that such devices aren't present. Consider what would happen if the parties to a conversation concerning an acknowledged wrong action discovered that the agent's choice had been ensured by such a device. Of course, since such devices don't exist, we don't have any hard data on how such a discovery would affect our judgments, but we can make intelligent guesses. Furthermore, there are realistic cases in which we discover that an agent's choice was strongly influenced by mental abnormalities, such as the belief that God told her to do it, that she is actually Napoleon, or that tax cuts for the wealthy will "trickle down" and eventually benefit the poor. Such discoveries often do affect our judgments of the rightness or wrongness of actions, or even of the appropriateness of making such judgments at all. Similarly, it may be a presupposition of all normal conversational contexts that our actions are not determined. Sidgwick may seem to be suggesting something like this when he says: Certainly when I have a distinct consciousness of choosing between alternatives of conduct, one of which I conceive as right or reasonable,
I find it impossible not to think that I can now choose to do what I so conceive . . . I can suppose that my conviction of free choice may be illusory: that if I knew my own nature, I might see it to be predetermined that, being so constituted and in such circumstances, I should act on the occasion in question contrary to my rational judgment. But I cannot conceive myself seeing this, without at the same time conceiving my whole conception of what I now call "my" action fundamentally altered: I cannot conceive that if I contemplated the actions of my organism in this light I should refer them to my "self"—i.e. to the mind so contemplating—in the sense in which I now refer them.3 A classroom context in which evil demon scenarios are discussed may make denials of even quite simple knowledge express true propositions, even though such denials would express false propositions in just about every other context. Likewise, a classroom context in which the claim that all our actions are determined is accepted may make the denial of moral judgments about our actions express true propositions. In such contexts, there may be no relevant alternatives to an agent's actual behavior. In just about every other context, even among people who believe that determinism is true, certain alternatives to an agent's actual behavior may be regarded as relevant. Suppose that Mary is angry with Joseph, and considers reprimanding him verbally, but instead pokes him in the eye with a sharp stick. Suppose also that a world, similar to the actual world before the eye-poking, but containing instead the verbal reprimand, is significantly better than the actual eye-poking world. Even if the verbal reprimand world is causally inaccessible to Mary at the time of the eye-poking, it nonetheless might be the basis of the "appropriate" alternative, with which Mary's action is compared, in the judgment that Mary acted wrongly in poking Joseph in the eye with a sharp stick. Recall the contextualist analysis of right action: R-con: An action is right iff it is at least as good as the appropriate alternative. We can either stipulate that, if an action is not right, it is wrong, or provide a parallel contextualist analysis of permissible action (with the
A classroom context in which evil demon scenarios are discussed may make denials of even quite simple knowledge express true propositions, even though such denials would express false propositions in just about every other context. Likewise, a classroom context in which the claim that all our actions are determined is accepted may make the denial of moral judgments about our actions express true propositions. In such contexts, there may be no relevant alternatives to an agent’s actual behavior. In just about every other context, even among people who believe that determinism is true, certain alternatives to an agent’s actual behavior may be regarded as relevant. Suppose that Mary is angry with Joseph, and considers reprimanding him verbally, but instead pokes him in the eye with a sharp stick. Suppose also that a world, that is similar to the actual world before the eye-poking, but containing instead the verbal reprimand, is significantly better than the actual eye-poking world. Even if the verbal reprimand world is causally inaccessible to Mary at the time of the eye-poking, it nonetheless might be the basis of the “appropriate” alternative, with which Mary’s action is compared, in the judgment that Mary acted wrongly in poking Joseph in the eye with a sharp stick. Recall the contextualist analysis of right action: R-con: An action is right iff it is at least as good as the appropriate alternative. We can either stipulate that, if an action is not right, it is wrong, or provide a parallel contextualist analysis of permissible action (with the 3 Sidgwick 1981, Bk. I, ch. 5, sec. 3.
stipulation that, if an action is not permissible, it is wrong). Given that Mary's actual action of poking Joseph in the eye with a sharp stick is not at least as good as the alternative of issuing him a verbal reprimand, the judgment that Mary acted wrongly will express a true proposition, so long as the verbal reprimand is picked out as the appropriate alternative. My claim is that many (probably most) conversational contexts are such that the verbal reprimand (or other alternative, that is also clearly better than eye-poking) will be picked out as the appropriate alternative. This even applies to many contexts, in which the participants accept causal determinism, but don't consciously focus on it. Thus the scalar approach to consequentialism, combined with a contextualist analysis of the relevant notions, diffuses the threat of determinism.
6.2 Contextualism and the Non-identity "Problem": Right and Wrong Derek Parfit's exploration of the so-called "non-identity problem," especially in Reasons and Persons, has generated an enormous literature. Recently, my own department alone produced a whole book on the issue, by David Boonin,4 and two doctoral dissertations, by Chelsea Haramia and Duncan Purves. I must confess to being rather bemused by the prodigious quantity of intellectual effort expended on this topic over the years. A consequentialist analysis, even of the maximizing variety, has always seemed to me to provide an adequate explanation of why there simply is no "problem" in the vicinity. However, the scalar approach, combined with the kind of contextualism I am advocating in this chapter, serves to illustrate even more clearly why this is nothing more than a pseudo problem. I take it that my readers are already familiar with the non-identity problem, but, for ease of discussion, I will briefly sketch it, as introduced in Boonin's opening chapter. Here is his opening statement: Our actions sometimes have an effect not only on the quality of life that people will enjoy in the future, but on which particular people will 4 Boonin 2014.
exist in the future to enjoy it. In cases where this is so, the combination of certain assumptions that most people seem to accept can yield conclusions that most people seem to reject. When this happens, we have a problem. (1)
Most writers focus their discussion on Parfit’s examples, or variations on them, and Boonin is no exception. He introduces variations on two of Parfit’s examples. The first involves a choice of whether to conceive now a child, who will be born with a disability, or to conceive later a different child, who won’t have the disability. Here are the relevant details: Wilma has decided to have a baby . . . as things now stand, if Wilma conceives, her child will have . . . the kind of disability that clearly has a substantially negative impact on a person’s quality of life [but] not so serious as to render the child’s life worse than no life at all. The child’s life will . . . clearly be worth living. Wilma can prevent this from happening. If she takes a tiny pill once a day for two months before conceiving, her child will be perfectly healthy. The pill is easy to take, has no side effects, and will be paid for by her health insurance. Fully understanding all of the facts about the situation, Wilma decides that having to take a pill once a day for two months before conceiving is a bit too inconvenient and so chooses to throw the pills away and conceive at once. As a result of this choice, her child is born with a significant and irreversible disability. (2)
Wilma’s actual child is a girl, named “Pebbles.” Had Wilma waited, taken the pills, and conceived two months later, she would have had a different child, a boy, named “Rocks.” Boonin’s second example involves a choice between two different energy policies for a wealthy society: A wealthy society is running out of the fossil fuels that have made its affluence possible, and it is choosing between two sources of energy to replace them. One option is a source of energy that would enable its current citizens to continue to enjoy a high standard of living and which would have no negative impact on future generations. The second option is a source of energy that would enable its current citizens to enjoy a slightly higher standard of living but which would generate
a significant amount of toxic waste. The waste could be safely buried for a long period of time, but it is known that after five hundred years, the waste would leak out and that of the millions of people who would be exposed to it, tens of thousands would be painlessly killed as a result once they reached the age of forty . . . I will refer to [the first option] as the safe policy . . . I will refer to [the second option] as the risky policy. Although the difference in terms of the quality of life that the two policies would make possible for the current members of this society is relatively minor, over time it is enough to have a significant impact on a variety of choices that indirectly determine which people will be conceived in the somewhat distant future . . . I will stipulate that over time, the effects of these subtle differences will be enough to generate two entirely distinct sets of people: the set of people who will exist five hundred years from now if the safe policy is selected, and the completely different set of people who will exist five hundred years from now if the risky policy is selected. Knowing that the risky policy will generate toxic waste that will eventually leak and painlessly kill tens of thousands of innocent people in the future, the current members of the wealthy society nonetheless decide to select that option because doing so will enable them to enjoy a slightly higher quality of life. As a result of their choice, the toxic waste that they create and bury leaks out five hundred years later and painlessly kills tens of thousands of innocent people once they reach the age of forty. (5–6)
It seems clear, Boonin says, that both Wilma’s choice to conceive now, instead of waiting, and the wealthy society’s choice of the risky policy are morally wrong. However, he claims, there are convincing arguments in each case that the choices are not wrong. Here is his argument for that conclusion in the conception case:
P1: Wilma’s act of conceiving now rather than taking a pill once a day for two months before conceiving does not make Pebbles worse off than she would otherwise have been.
P2: If A’s act harms B, then A’s act makes B worse off than B would otherwise have been.
C1: Wilma’s act of conceiving now rather than taking a pill once a day for two months before conceiving does not harm Pebbles.
P3 [stipulated]: Wilma’s act of conceiving now rather than taking a pill once a day for two months before conceiving does not harm anyone other than Pebbles.
C2: Wilma’s act of conceiving Pebbles does not harm anyone.
P4: If an act does not harm anyone, then the act does not wrong anyone.
C3: Wilma’s act of conceiving Pebbles does not wrong anyone.
P5: If an act does not wrong anyone, then the act is not morally wrong.
C4: Wilma’s act of conceiving Pebbles is not morally wrong. (3–5)
C4, says Boonin, seems implausible. But, “The premises seem right. The premises entail the Implausible Conclusion. The Implausible Conclusion seems wrong. That’s the problem.” (5) What should a consequentialist say to this? Clearly, that P5 doesn’t even seem right, let alone be right. For a maximizing consequentialist, the question of whether an option is right turns on whether there are better available alternatives. What makes an alternative better is that it leads to a world with a greater net amount of good. Nothing in the theory requires that the good be distributed amongst the same population in each option. Perhaps P5 could gain a faint veneer of intuitive plausibility by focusing on simplified hypothetical choices that involve the same population of moral patients existing in each alternative. But, at least since Sidgwick drew attention to it in the Methods of Ethics,5 it has been well known that our choices have the potential to affect who exists. No consequentialist, then, would accept P5. This is not to say that this response is only available to consequentialists. Most ethicists, of whatever theoretical inclination, accept that consequences are relevant to the moral character of actions. Likewise, most ethicists accept that, other things being equal, a world with ten million very happy people is better than a world with a different ten million only slightly happy people, or than a world with a different five million only slightly happy people (Boonin and Parfit both claim that the difference between same number cases
5 See Sidgwick 1981, Bk. IV, ch. 1, sec. 2.
and different number cases is significant in discussing the non-identity problem, but for the purposes of my argument this difference is irrelevant). Most ethicists can accept that the consequences of the safe policy are better than the consequences of the risky policy, and that the consequences of waiting to conceive Rocks are better than the consequences of conceiving Pebbles, and thus that there is at least some moral reason to favor the first over the second option in each case. Boonin accepts this, but claims that it doesn’t help with the problem: I will simply concede that it would have been morally better for Wilma to have waited and conceived Rocks. The problem is that even if we agree that it would have been better if she had waited, we seem unable to justify the claim that she did something positively immoral by failing to do so. And most people seem to think not only that Wilma could have made a better choice, but that her failing to do so was morally wrong.
As I said, no consequentialist would accept P5, so no consequentialist would be persuaded by Boonin’s argument to the “Implausible Conclusion” (C4). We don’t seem in the least unable to justify the claim that Wilma did something positively immoral by failing to wait. Even though Wilma neither harmed nor wronged Pebbles (or anyone else) by conceiving her, she acted wrongly, because she failed to pursue the better alternative of waiting and conceiving Rocks two months later. At least, this is what a maximizing consequentialist would say. So what’s the problem with this response? It might seem that the maximizing consequentialist solution to the non-identity problem, simple, clear, and obvious as it is, comes at too high a price. Even though it gives the results we want in the cases of Wilma and the wealthy society, it commits us to unpalatable results in other cases. In particular, it commits us to all kinds of procreative obligations that we really don’t want. It’s one thing to say that Wilma, who has decided to have a child and is faced with the two options in the example, would act wrongly by choosing the worse option. But what should we say of the initial choice of whether to have a child at all? Suppose that, if Wilma opts to have a child, the child will have a life worth living, and furthermore there will be more net good in the world
than if Wilma opts not to have a child and instead does pretty much anything else with her life. In this case, the maximizing consequentialist approach tells us that Wilma has a moral obligation to have a child. Conversely, suppose that, if Wilma opts to have a child, there will be less net good in the world than if she pursues some other option. This might be so, even if Wilma’s child would have a thoroughly happy life. In this case, the maximizing consequentialist approach tells us that Wilma has a moral obligation not to have a child, but to pursue whichever other option would result in the most net good. But surely, we might think, if anything is entirely up to me, it is my procreative choices. Morality can’t demand that I have children, or not have them (except, perhaps, in extreme circumstances). I can’t have an obligation to have a child (or not to have one). If maximizing consequentialism is unacceptable for these reasons, we can’t appeal to it to provide an answer to the non-identity problem. The argument of the previous paragraph is a version of the demandingness objection against maximizing consequentialism. How should a maximizer respond (other than by adopting the scalar approach I advocate)? As I said in Chapter 2, a maximizer could simply insist that morality is very demanding. After all, what reason do we have to assume that the demands of morality are easy to meet? I think we can say a little more than this, especially with regard to the present topic of reproductive obligations. To the extent that this objection to maximizing consequentialism has intuitive force, part of it might come from a confusion about what it would mean for my reproductive choices to be not entirely “up to me.” Suppose that our reproductive choices were not up to us, in the sense that we were forced, either by actual physical force, or by coercive legal measures, to reproduce when and how someone else, say a government, determined. This condition, which has applied to many women (and continues to apply in various places), would be morally intolerable. If the claim that my reproductive choices are up to me amounts to the claim that there are powerful moral reasons to object to physical or legal coercion in the reproductive realm, it seems highly plausible. However, a moral theory doesn’t coerce or force behavior. If I am morally obligated to donate to charity, be considerate to my grandmother, or to have two children (rather than one or three), it is still entirely up to me whether to donate, to be considerate, and how many children to have (though the last one is probably equally up to at least
one other person too). No moral theory can force or coerce me into doing any of these things. If I fail to fulfill a moral obligation, I don’t face the threat of jail. I suspect that a large part of the objection to maximizing consequentialism as “too demanding” stems from a confusion between coercion and force on the one hand, and mere judgment on the other. That would not be surprising. When a person or an institution demands something of someone else, the demand is often backed up by the threat of external sanctions. However, even if morality does, in some sense, make demands, it doesn’t threaten sanctions, at least not the kind of coercive external sanctions that seem so unacceptable. Nonetheless, we might simply have a strong intuition that morality doesn’t actually make such strong demands. As I argued in Chapter 2, I think the maximizing consequentialist has ample resources to challenge and undermine such an intuition. Maximizing consequentialists, then, should probably stick to their guns, when it comes to the non-identity problem. They should insist that their theory explains why Wilma and the wealthy society act wrongly, and simply bite the bullet on the implication that we do, in fact, have moral obligations in the reproductive realm, which most of us probably fail to fulfill. If the non-identity problem provided the only reason to abandon the maximizing version of consequentialism, it shouldn’t worry them. But, as I argued in Chapter 2, there are powerful reasons to abandon the idea that morality, especially if some version of consequentialism is true, makes demands at all, whether of the maximizing variety or any other. It provides reasons for action. There are the strongest reasons to do the best of one’s options. But there is no demand, over and above this, that one do what one has the strongest reasons for doing. As I have been arguing in this, and the previous, chapter, we can make sense of the idea that many standard moral claims express substantively true (and false) propositions, by deploying a contextualist semantics. So, what should the scalar consequentialist say about the non-identity problem? First, it’s clear that both Wilma and the wealthy society take the option that is opposed by strong moral reasons. It would be far better, morally, for Wilma to wait to conceive, and for the wealthy society to select the safe policy. Recall, though, that Boonin concedes this, but thinks it doesn’t go far enough, saying “most people seem to think not
only that Wilma could have made a better choice, but that her failing to do so was morally wrong.” The scalar consequentialist could remind him at this point that what “most people seem to think” about the morality of Wilma’s choice is no better a guide to moral truth than what most scientists used to think about phlogiston was a guide to physical truth, or than what most theologians think about a deity is a guide to theological truth. But we can do more than that. We can explain how “Wilma acted wrongly” can express a true proposition, even within the scalar approach. Recall, again, the contextualist analysis of “right”:
R-con: An action is right iff it is at least as good as the appropriate alternative.
In pretty much any conversational context in which Wilma’s choice is discussed, waiting to conceive will be picked out as the appropriate alternative. Thus “Wilma acted wrongly in conceiving Pebbles” will express a true proposition. However, in discussing reproductive choices in general, very few conversational contexts will select the maximizing alternative as the appropriate one. “It is permissible to choose not to have a child, even if it would be better overall to have one” will usually express a true proposition. So, for the scalar consequentialist, the non-identity problem simply doesn’t present a problem, at least not as far as accounting for judgments of right and wrong goes.
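The shape of this contextualist treatment can be put schematically. The notation is merely illustrative shorthand for R-con, not an addition to the theory: let c be a conversational context, let alt(A, c) be the alternative to A that c selects as appropriate, and let V(x) be the net goodness of the outcome of option x. Then:
“A was right,” uttered in c, expresses a truth iff V(A) ≥ V(alt(A, c))
“A was wrong,” uttered in c, expresses a truth iff V(A) < V(alt(A, c))
In most contexts in which Wilma’s particular choice is discussed, alt(conceiving Pebbles, c) is waiting and conceiving Rocks, and V(conceiving Pebbles) < V(waiting), so the wrongness claim comes out true. In most contexts in which reproductive choices in general are discussed, the maximizing option is not selected as the appropriate alternative, so the permissibility claim comes out true as well.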
6.3 Contextualism and the Non-identity “Problem”: Harm
Scalar consequentialism, combined with a contextualist semantics for moral terms like “right” and “wrong,” can explain how “Wilma acted wrongly in conceiving Pebbles” and “the wealthy society was wrong to choose the risky option” both express true propositions, without being committed to what seem like objectionable claims about reproductive obligations. However, the non-identity problem might seem to pose a further problem. Consider Wilma again. Compare Wilma’s choice with a slightly different one, faced by Wanda. Suppose that Wanda has already conceived, and receives the results of an early in utero test. The doctor
informs Wanda that her fetus is suffering from a genetic condition that, if untreated, will result in the kind of disability that clearly has a substantially negative impact on a person’s quality of life, but not so serious as to render the child’s life worse than no life at all. The child’s life will clearly be worth living. In fact, it is the very same disability that Pebbles has. Wanda can prevent this from happening. If she takes a tiny pill once a day for two weeks, her child will be perfectly healthy. The pill is easy to take, has no side effects, and will be paid for by her health insurance. Fully understanding all of the facts about the situation, Wanda decides that having to take a pill once a day for two weeks is a bit too inconvenient and so chooses to throw the pills away. As a result of this choice, her child, Marbles, is born with a significant and irreversible disability. Whatever our reaction to Wilma’s choice, it seems we have the same reaction to Wanda’s choice. Both make choices that are significantly morally inferior to an easily available alternative. In pretty much any conversational context, both “Wilma acted wrongly in conceiving Pebbles” and “Wanda acted wrongly in failing to cure Marbles” would express true propositions. In fact, we might think that both Wilma’s choice and Wanda’s choice are wrong for the very same reason. But notice that we wouldn’t just say that Wanda acted wrongly, but also that she harmed Marbles by choosing not to take the pills. The contextualist analysis of harm can accommodate this:
H-con: An action A harms a person P iff it results in P being worse off than s/he would have been had the appropriate alternative been performed.
Just about any conversational context will select, as the appropriate alternative, one in which Wanda takes the pills, and cures Marbles. Marbles is thus worse off, as a result of Wanda’s choice, than she would have been, if Wanda had chosen the appropriate alternative. But we can’t say the same thing about Pebbles, in Wilma’s choice. The appropriate alternative for Wilma is not just taking the pills, but also waiting to conceive. If she waits to conceive, she will conceive Rocks, not Pebbles. Given that Pebbles’ life is worth living, Pebbles is not worse off than she would have been, had the appropriate alternative been performed. So, it seems that “Wanda harmed Marbles” will usually express a true
proposition, but “Wilma harmed Pebbles” will not. But, if Wilma’s and Wanda’s choices are wrong for the same reason, shouldn’t we also be able to say the same things about harm concerning their choices, at least if we are speaking of both in the same conversational context? Likewise, consider again the wealthy society’s choice of the risky policy. The results of the risky policy are clearly significantly worse than the results of the safe policy. Pretty much every conversational context would select the safe policy as the appropriate alternative, so “the wealthy society acted wrongly in opting for the risky policy” would express a true proposition. But what of the tens of thousands of people, five hundred years in the future, who live lives worth living, but die painlessly at forty from the leaked toxic waste that results from the risky policy? Doesn’t the risky policy harm them? It seems not. Recall that they would never have existed, had the safe policy been chosen. But, it might seem to some, the risky policy does harm those people in the future. They die at forty, rather than later. This seems like harm. Their deaths are caused by the toxic waste, which itself is the result of the risky policy. So shouldn’t we be able to say that the risky policy harmed them? Notice that these results, both in the case of Wilma and of the wealthy society, pose a problem, if at all, for any version of the counterfactual comparative account of harm, not just for the contextualist account I have been exploring. So, what should we say about harm in the various non-identity cases? Many people are, initially at least, inclined to say that the risky policy harms the future people. However, I think this is because most people simply don’t understand the fragility of existence, and, even if they eventually do, their intuitions aren’t necessarily guided by their understanding. Because of the morally loaded nature of harm, the intuition that there is something bad about enacting the risky policy, which is pretty accurate, can lead to the intuition that the policy harms the future people, which isn’t. Consider an analogous case. The university president hires a dean in order to bring financial efficiency to the institution. The president knows that the dean will, among other things, concentrate on hiring instructors with terminal master’s degrees to relatively secure positions with much higher teaching loads and lower salaries than tenure-track professors with doctorates. As a result, the faculty is made up of instructors who have decent jobs, but not as good jobs as professors
who would have been hired instead. If one of those instructors, learning about the president’s appointment of the dean a few years earlier, were to say that the appointment harmed her, because it led to her having a higher teaching load and lower salary than she would have had, she would be simply mistaken. She wouldn’t have been hired, if the previous policy of hiring professors with doctorates had been in place. Also, assume that she would have had a less secure and worse paid instructor position elsewhere, if she hadn’t been hired by this particular university. Likewise, consider an analogous case to that of Wilma and Pebbles. Suppose that, as a teenager, a woman conceives and carries to term a child, Penny. She raises Penny as a single mother, and Penny has a difficult childhood, in comparison with her peers at school. Money is tight, so Penny is always dressed in second-hand clothes. There is no money for vacations, trips to the movies, or eating out. Seeing the lives of her school friends, Penny may grow resentful. She may believe that her mother made a morally bad decision to get pregnant with her when she was a teenager, rather than waiting to have a child. But if Penny also believes that her mother’s decision harmed her, because she would have been better off, if her mother had waited a few years to conceive, she is simply mistaken (unless her life is actually not worth living). If she says to her mother “I wish you’d waited a few years before conceiving, because then I would have had a better childhood,” the mother could appropriately respond “If I’d waited, you wouldn’t have existed.” Both Wilma’s and Wanda’s choices are morally problematic, for the very same reason. In each case, they could easily have made a much better choice. This is also what allows “Wanda harmed Marbles” to express a true proposition in most contexts. But it doesn’t follow that “Wilma harmed Pebbles” expresses a true proposition in any context. That is because all the moral work is done by the comparative facts about the values of the worlds containing both Wilma’s and Wanda’s choices. This won’t satisfy everyone (some people are never satisfied). Some philosophers may be so determined to get the result that both Wilma and the wealthy society harm the relevant people that they will construct ever more outlandish accounts of harm. For example, someone might claim that the victim is not a particular individual, but rather an office-holder. Put another way, they might say that a victim de dicto is harmed, rather than de re. Thus, Pebbles is harmed qua Wilma’s first child, because she
is worse off than Wilma’s first child (who would have been Rocks) would have been, had Wilma taken the pills and waited. I don’t have the space to consider such attempts here. They have spawned an enormous literature. I would like to point out that, if someone is intractably convinced that Wilma harms Pebbles and the wealthy society harms the future people, the contextualist account can accommodate that. Recall that I said, in Section 6.1 of this chapter, that conversational context might select a counterpossible alternative as the appropriate one with which to compare the actual action. Context might also select an astronomically distant alternative as the appropriate one. Consider Penny again. It is, in some sense, possible, though vanishingly unlikely (perhaps even nomologically impossible), that she could have been conceived ten years later. The same goes for Pebbles being conceived two months later. Likewise, we can imagine the world of the safe policy in five hundred years, populated by the same people as in the risky policy. We might, then, compare the actual life of Penny, or Pebbles, or the tens of thousands of toxic waste victims, with what their lives would have been like, if they had somehow existed in the futures of the better choices. Once we understand the details of the radical contingency of human identity, however, unless we are stubbornly determined to say that the victims were harmed, the linguistic context is almost certain to rule out such worlds (possible or not) as being the appropriate ones to form a judgment about harm. And rightly so. The risky policy is bad, because it leads to a worse world than the safe policy, but not necessarily a world in which anyone is harmed by the policy.
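The same illustrative shorthand used for R-con above can be applied to H-con, again as a gloss rather than an addition to the account: let wb(P, w) be P’s level of well-being in world w, let w_A be the world that results from A, and let alt(A, c) be the alternative that context c selects as appropriate. Then:
“A harmed P,” uttered in c, expresses a truth iff wb(P, w_A) < wb(P, w_alt(A, c))
In virtually any context, the appropriate alternative to Wanda’s choice is one in which she takes the pills, and Marbles is better off in that world, so “Wanda harmed Marbles” comes out true. The appropriate alternative to Wilma’s choice is one in which Pebbles never exists; since Pebbles’ actual life is worth living, she is not worse off than she would have been in that world, so “Wilma harmed Pebbles” comes out false, except in the rare contexts just described, in which a counterpossible or astronomically distant alternative is selected.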
6.4 The Possible Practical Upshot of Adopting Scalar Contextualist Utilitarianism
You might wonder what the practical upshot of the arguments of this book is. I have been arguing that, at the level of fundamental moral theory, consequentialist theories such as utilitarianism should be understood as scalar, rather than maximizing or satisficing. But I have also argued that we should combine a contextualist semantics with the scalar approach, thus rendering claims such as “action X was wrong” both meaningful and truth-apt. My arguments are clearly relevant to the theoretical
disputes between consequentialist approaches and their rivals, for example with regard to the demandingness objection. But do they affect how a consequentialist should engage in ordinary moral discourse? Since I am not proposing that we do away with the use of such terms as “right,” “wrong,” “permissible,” “harm,” and the like, but rather that we understand them in contextualist fashion, you might think that adopting my scalar contextualist version of utilitarianism, as opposed to a traditional maximizing version, will make no difference to ordinary moral discourse. This would be a mistake, as I will explain. Maximizing utilitarians are sometimes faced with the following choice, when asked to give practical moral advice to an audience: (i) tell the truth (according to their theory) about what moral obligations apply, and risk alienating their audience (because the demand to maximize will be seen as unreasonable/absurd), or (ii) lie about the extent of their audience’s moral obligations, by downplaying them in order to produce at least some moral improvement, and run the risk of the lie being discovered and their advice disregarded. For example, when asked how much of our disposable income we “ought” to give to charity, a utilitarian may think it more effective to recommend something like 10 percent or even 5 percent than whatever amount would actually do the most good (which would be something pretty close to 100 percent of disposable income). Although a utilitarian would approve such a lie (if effective), it would nonetheless run counter to a good character, and would run the risk of instability. Peter Singer, for example, is sometimes criticized for neither doing as much as his theory commits him to, nor even publicly advocating for doing that much. If a maximizing utilitarian claims that someone is morally permitted to do something which they know to be suboptimal, and which is easily identifiable by others as suboptimal, they run the risk of their advice being disregarded as hypocritical. Whether the charge of hypocrisy is fair or not is irrelevant. What matters is its potential to undermine the force of the utilitarian’s moral guidance. Now consider the approach of a scalar contextualist utilitarian. When asked how much money we ought to give to charity, the contextualist will pay attention to the linguistic context, which will set the standard for the appropriate option. If all she cares about is saying something
true, she will calibrate her advice appropriately. But, of course, that’s not all she cares about (in fact she only cares about truth instrumentally). She cares about promoting the good. So she will be concerned not just to identify the linguistic context within which she is communicating, but also often to change it. For example, by drawing attention to the relative ease with which we can produce more net utility than the option picked out as appropriate, she may succeed in changing the context sufficiently to identify a better option as the appropriate comparison. Let me illustrate. I have, on a few occasions, given a public lecture entitled “How to be Good.” I started out by claiming that it’s hard to be good, because of all the preventable suffering in the world, and the fact that most people with the resources to make a positive difference make little to no difference. If all, or most, of those who could help to prevent suffering actually contributed a comparatively modest amount (proportionally speaking) of their resources, no-one would be unduly burdened (on even a fairly lax understanding of “unduly”). The problem is that most people are bastards, who don’t respond to, or even recognize, genuine moral reasons, and so morality can seem to be highly burdensome for those who do recognize its reason-giving force. One of the examples of easily preventable suffering that I discuss in this talk is the horrendous suffering of vast numbers of animals raised for human consumption, used as experimental subjects, and used as objects of human entertainment. In discussing this example, Mylan Engel pointed out that, rather than emphasizing how hard morality is, given the indifference of most people, I should emphasize how easy it is to make a significant difference. After all, we all need to eat, and we all need to make decisions about what to eat. Choosing to eat a plant-based meal rather than one with animal products is, for most people, relatively easy (with a little education). Drawing attention both to the suffering of farmed animals, and to the relative ease with which most of us can choose plant-based meals, has a good chance of changing the conversational context of a discussion of these things to the point where the choice of a meal with animal products would be below the level of moral acceptability. In general, then, a contextualist utilitarian should pay close attention to conversational context, and be alert to the possibilities of changing
contexts to set higher standards for permissibility. This might seem like rhetorical manipulation, and perhaps it is. But conversational contexts aren’t fixed, nor are they given from on high. They are the products of shared assumptions and background beliefs. As a party to the conversation, the contextualist utilitarian is legitimately entitled to shape the context. Drawing attention to the suffering of animals, for example, and the relative ease with which we can make a difference to that suffering is really no different from a geographer in a conversation with flat-earthers drawing attention to the evidence against the flat-earth hypothesis. Recall the following passage from Mill, discussed in Chapter 2:
We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow-creatures; if not by opinion, by the reproaches of his own conscience.6
In Chapter 2, I argued against a consequentialist analysis of “wrongness” based on an interpretation of this passage. The problem, of course, is that, for a consequentialist, the question of when someone “ought to be punished” depends on the consequences of punishing. I also suggested that the most plausible interpretation of the passage is that, rather than suggesting an analysis of wrongness, Mill is pointing out some features of the ordinary usage of the term “wrong.” Consider the passage again, in the light of a suggested contextualist semantics for “wrong”:
W-con: An action is wrong iff it is not at least as good as the appropriate alternative.7
Given that the context selects which alternative is “appropriate,” it is possible that thoughts about sanctions play an important role.
6 Mill 1861, ch. 5, para. 14.
7 Notice that, when combined with the contextualist account of “right” (R-con), this leaves open the possibility that there may be actions which are, in some contexts, correctly described as neither right nor wrong. “Appropriate” in R-con may pick out a different alternative from that picked out by “appropriate” in W-con, in the same conversational context. Roughly, in some contexts, the standards set to avoid wrongness may be lower than those set to achieve rightness. A contextualist account of “permissible” would, in such contexts, identify as the “appropriate” alternative the same alternative as that picked out in W-con.
Perhaps
we are guided by the thought that, to be wrong, an action must be bad enough that at least the internal sanction of guilt or shame is appropriate. Ordinary thinking is at most only partly consequentialist. So what guides the thought that guilt or shame is appropriate for performing a particular action is unlikely to be wholly, or even mostly, the consequences of experiencing such emotions. In pointing out the ease with which we can avoid certain bad consequences, a contextualist utilitarian may shift the context to the point that guilt or shame is seen as the appropriate reaction to producing such consequences. The suggestion that ordinary thinking about rightness and wrongness is at least implicitly guided by thinking about sanctions and rewards has much plausibility. If true, it suggests tactics of shaping conversational contexts to better promote the good.
6.5 Conclusion
In this book I have argued that consequentialists in general, and utilitarians in particular, should embrace a scalar version of their theories, which leaves no room for notions of rightness and wrongness, goodness and badness (of actions), harm (as applied to actions), and the like at the level of fundamental moral theory. I have also explored a contextualist approach to the semantics of terms such as “right,” “permissible,” “good” (as applied to actions), “harm,” and “possible.” Such a contextualism can explain how the scalar approach to consequentialism may be the correct ethical theory, and yet ordinary sentences, such as “Jeffrey Dahmer acted wrongly,” or “Jeffrey Dahmer harmed his victims” can express true propositions. A contextualist approach to all these notions makes room for them in ordinary moral discourse, but it also illustrates why there is no room for them at the level of fundamental moral theory. If the truth value of a judgment that an action is right, or good, or harmful varies according to the context in which it is made, then rightness, or goodness, or harm can no more be properties of actions themselves than thisness or hereness can be properties of things or locations themselves. To be more accurate, since “right” (and the other terms I have discussed) can be used to pick out different properties when used in different contexts, many actions will possess a property that can be legitimately
picked out by “right” (or “good,” “harmful,” etc.) and lack many other such properties. Which properties we are interested in will vary from context to context. But we are not mere passive observers of, and adherents to, conversational contexts. We can also play an active role in shaping those contexts. In fact, a fruitful avenue for promoting the good is the very activity of context shaping.
Bibliography
Aristotle, Nicomachean Ethics. Widely reprinted.
Bennett, Jonathan. 1993. “Negation and Abstention: Two Theories of Allowing,” Ethics 104 (1); reprinted in Bonnie Steinbock and Alastair Norcross (eds.), Killing and Letting Die, 2nd ed. (Fordham, 1994).
Bennett, Jonathan. 1995. The Act Itself. Oxford: Clarendon Press.
Bentham, Jeremy. 1789. Introduction to the Principles of Morals and Legislation. Widely reprinted.
Boonin, David. 2014. The Non-Identity Problem and the Ethics of Future People. Oxford: Clarendon Press.
Carruthers, Peter. 1992. The Animals Issue: Moral Theory in Practice. Cambridge: Cambridge University Press.
Donagan, Alan. 1977. The Theory of Morality. Chicago: University of Chicago Press.
Eggleston, Ben. 1999. “Does Participation Matter? An Inconsistency in Parfit’s Moral Mathematics,” presented at the APA Eastern Division, December 1999.
Feinberg, Joel. 1961. “Supererogation and Rules,” Ethics 71: 276–88.
Foot, Philippa. 1983. “Utilitarianism and the Virtues,” Proceedings and Addresses of the American Philosophical Association 57 (2): 273–83.
Hare, R. M. 1982. Moral Thinking: Its Levels, Methods and Point. Oxford: Oxford University Press.
Hitchcock, Christopher. 1996. “Farewell to Binary Causation,” Canadian Journal of Philosophy 26: 267–82.
Howard-Snyder, Frances. 1994. “The Heart of Consequentialism,” Philosophical Studies 76: 107–29.
Howard-Snyder, Frances. 1997. “The Rejection of Objective Consequentialism,” Utilitas 9 (2): 241–8.
Howard-Snyder, Frances and Norcross, Alastair. 1993. “A Consequentialist Case for Rejecting the Right,” The Journal of Philosophical Research 18: 109–25.
Jackson, Frank. 1997. “Which Effects?,” in Reading Parfit, ed. Jonathan Dancy. Oxford: Wiley-Blackwell, 42–53.
Kagan, Shelly. 1989. The Limits of Morality. Oxford: Oxford University Press.
Lewis, David. 1973. Counterfactuals. Cambridge, MA: Harvard University Press.
Mill, J. S. 1861. Utilitarianism. Widely reprinted.
Moore, G. E. 2005. Ethics, ed. William Shaw. Oxford: Oxford University Press.
Nelson, Mark. 1991. “Utilitarian Eschatology,” American Philosophical Quarterly 28: 339–47.
Norcross, Alastair. 1990. “Consequentialism and the Unforeseeable Future,” Analysis 50: 253–6.
Norcross, Alastair. 1995. “Should Utilitarianism Accommodate Moral Dilemmas?,” Philosophical Studies 79 (1): 59–85.
Norcross, Alastair. 1997a. “Comparing Harms: Headaches and Human Lives,” Philosophy and Public Affairs 26 (2): 135–67.
Norcross, Alastair. 1997b. “Consequentialism and Commitment,” Pacific Philosophical Quarterly 78 (4): 380–403.
Norcross, Alastair. 1997c. “Good and Bad Actions,” The Philosophical Review 106 (1): 1–34.
Norcross, Alastair. 2005a. “Contextualism for Consequentialists,” Acta Analytica 20 (2): 80–90.
Norcross, Alastair. 2005b. “Harming in Context,” Philosophical Studies 123 (1–2): 149–73.
Norcross, Alastair. 2006. “Reasons without Demands: Rethinking Rightness,” in James Dreier (ed.), Blackwell Contemporary Debates in Moral Theory. Oxford: Wiley-Blackwell, 38–53.
Norcross, Alastair. 2013. “Doing and Allowing,” in Hugh LaFollette (ed.), International Encyclopedia of Ethics. Oxford: Wiley-Blackwell.
Parfit, Derek. 1984. Reasons and Persons. Oxford: Oxford University Press.
Quinton, Anthony. 1989. Utilitarian Ethics (2nd ed.). La Salle, Illinois: Open Court.
Railton, Peter. 1984. “Alienation, Consequentialism, and the Demands of Morality,” Philosophy and Public Affairs 13 (2): 134–71.
Ross, W. D. 1973. The Right and the Good. Oxford: Oxford University Press.
Scheffler, Samuel. 1982. The Rejection of Consequentialism: A Philosophical Investigation of the Considerations Underlying Rival Moral Conceptions. Oxford: Clarendon Press.
Sidgwick, Henry. 1981. The Methods of Ethics (7th ed.). Indianapolis: Hackett Publishing Company.
Slote, Michael. 1985a. Common-sense Morality and Consequentialism. Boston: Routledge and Kegan Paul.
Slote, Michael. 1985b. “Utilitarianism, Moral Dilemmas, and Moral Cost,” American Philosophical Quarterly 22: 161–8.
Smart, J. J. C. 1973. “An Outline of a System of Utilitarian Ethics,” in J. J. C. Smart and Bernard Williams, Utilitarianism For and Against. Cambridge: Cambridge University Press, 3–74.
Steinbock, Bonnie, and Norcross, Alastair (eds.). 1994. Killing and Letting Die. New York: Fordham University Press.
Stocker, Michael. 1990. Plural and Conflicting Values. Oxford: Oxford University Press.
Unger, Peter. 1996. Living High and Letting Die. Oxford: Oxford University Press.
Vallentyne, Peter. 1993. “Utilitarianism and Infinite Utility,” Australasian Journal of Philosophy 71: 212–17.
Williams, Bernard. 1973. “A Critique of Utilitarianism,” in J. J. C. Smart and Bernard Williams, Utilitarianism For and Against. Cambridge: Cambridge University Press, 77–150.
Index
Absence from the scene as contrast point for good and bad actions 62–4 as contrast point for harm and benefit 98–9 Action individuation, different theories of 115 Agency 61–7, 97–8 Agent-relativity 59 Arbitrariness Counterfactual comparisons 11–12 Distinction between good and bad states of affairs 25 Thresholds for rightness 23 Aristotle 1–2, 16 Astaire, Fred 87, 102–3, 119–20 Backtracking counterfactuals, see Counterfactuals Bart 25–6, 37–40 Bartism 37–40 Basketball 63, 116 Bennett, Jonathan 38n.22, 65n.14 Bentham, Jeremy 8–10, 53, 59 Big Blue 133, 133n.2 Blame/Blameworthiness 6–7, 28–34, 36, 89, 120, 128–9 Boonin, David 136–40, 142–3 Booth, John Wilkes 101–2, 117–18 Bradley, Ben 121n.2 Cana, Wedding at 46–7 Cannibalism, at philosophy conferences 117–18 Carruthers, Peter 8 Cauliflower 44
Character 5n.4, 6–7, 21, 27, 34, 38–40, 49, 68–9, 74, 79n.27, 84–5, 94–6, 100, 108, 126–7, 139–40, 148 Christ, Jesus 46–7 Comparisons actions 74–8 affected by conversational context 114–15, 117–18, 148–9 cross-temporal 55–6 cross-world 11–12, 56–8, 62–4, 70–2, 97–9, 101–2, 117 with inaccessible or impossible alternatives 133–4 Context, conversational 12–13, 76, 101–7, 111, 113–15, 117–27, 133–6, 143–52 Contextualism ethical 35–6, 78, 80–1, 103 epistemological 12–13, 111, 123 Counterfactuals 62–4, 67–70, 91–2, 98–100, 102–3, 107, 117, 131–3, 144–5 Course of nature 64–7 Coveting, re my neighbour’s ass 38–9 Demandingness 7–12, 15–22, 42, 46, 123, 141–2, 147–8 Deontology/deontologist/ deontological 11–12, 14–15, 22–3, 26–7, 41, 45–6 Determinism 56n.8, 93–4, 108, 127–36 Divine Command Ethics 37, 38n.21, 41 Doing/Allowing 20, 45, 64–7 Donagan, Alan 64–7
Driver, Julia 121n.3 Duty 8, 12–15, 21–2, 26–7, 35–6, 108, 122–3 Eliminativism about moral properties 109–10 End times 132–3 Engel, Mylan 149 Error Theory about moral terms 109–11 Esotericism 36–7 Externalism, motivational 43–4 Fierstein, Harvey 99–100, 116 Frankfurt examples 130 Free will 128–36 Harm group 83–97, 102–3, 119–20 Counterfactual account of 101–2, 144–5 In non-identity cases 143–7 Howard-Snyder, Daniel 44n.26 Howard-Snyder, Frances 59n.11, 79n.29, 84n.1, 133 Ideal rightness as 35, 46–7 contextually determined 123 Immobility as contrast point for good and bad actions 62–4 as contrast point for harm and benefit 97–9 Imperatival model of morality 40–2 Inaction in theory of good and bad action 61–4 in theory of harm 97–9 Indeterminism 56n.8, 108, 128 Indexical 118–19 Instrumental value 2 Internalism, motivational 43–4 Intrinsic value 2–5, 8–9, 55 Intrinsic and extrinsic reasons for action 5
Kagan, Shelly 44n.26, 54 Karpov, Anatoly 133–4 Kasparov, Garry 133n.2 Killing/letting die 20, 45, 68–9, 100, 112–13 Knight, Bobby 99–102, 116–17 Libertarianism, ethical 124–5 Lincoln, Abraham 50–2, 101–2, 117–18 Lisa 25–6, 37–40 Lisanity 37–40 Lubbock, Texas 46–7 Maximization/maximizing 6–12, 14–24, 32–5, 46, 48–54, 56–8, 122–3, 128–30, 136, 139–43, 147–8 Mill, JS 1–2, 8–10, 27–9, 33, 53–4, 59, 150 Moore, GE 129–30 Moral dilemmas 32–3 Motive 5n.4, 49–52, 66–7, 78–9 Non-identity problem 12–13, 121–2, 135–47 Obligation 12–13, 22–3, 26–7, 103–4, 108, 111, 122–3, 140–4, 148 Ought Implies Can 121–2 Overdetermination 85–6 Parfit, Derek 7, 82–97, 120, 136–7, 139–40 Perot, Ross 77–8, 114–15, 118–19 Pizza 44 Preemption 86 Principle of Alternative Possibilities 130 Publicity 35–40 Punishment 28–34, 128–9, 150–1 Rapture 132–3 Rawls, John 37 Reductionism about moral properties 110 Responsibility 102–3, 128–31, 133–4 Ross, WD 14n.1, 22–3, 50
Salience 101–3, 117–22 Sanctions 29–30, 33, 141–2, 150–1 Satisficing 11–12, 21–2, 26, 45–6, 49–50, 130, 147–8 Scheffler, Samuel 45n.27, 58–9 Self-sacrifice 15–16, 18–19, 21–2, 35–6, 56–61, 66–9, 84–5, 123 Sidgwick, Henry 8, 14, 30–1, 42–3, 52–3, 128–9, 132–5, 139–40 Slote, Michael 21n.8, 22n.9, 32n.17, 44n.25, 52n.5, 54 Smart, JJC 50–2
Spiderman 121–2 Stocker, Michael 36–7 Supererogation/supererogatory 11–12, 20–1, 23–4, 35–6, 78, 103–4, 123–6 Texas Tech University 99–100, 116 Threshold 23–4, 26–7, 53 Unger, Peter 126n.5 Williams, Bernard 36–7