Moral Responsibility and the Flicker of Freedom
Moral Responsibility and the Flicker of Freedom Justin A. Capes
Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries. Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America. © Oxford University Press 2023 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer. CIP data is on file at the Library of Congress ISBN 978–0–19–769796–2 DOI: 10.1093/oso/9780197697962.001.0001 Printed by Integrated Books International, United States of America
To Marla. It’s you and me (and the kids), kid.
Contents

Acknowledgments
1. A Flicker of Freedom
2. The Symmetry Argument
3. Objections and Replies
4. Frankfurt Cases
5. Confirmation Not Counterexample
References
Index
Acknowledgments Were it not for Michael Robinson and Philip Swenson, this book would never have been written. It was Michael who first convinced me that the flicker of freedom strategy defended here deserves to be taken seriously, and it was Philip who helped me see that considerations concerning omissions could be used to motivate and defend the strategy. Philip and I first argued for that claim in our coauthored paper “Frankfurt Cases: The Fine-Grained Response Revisited,” Philosophical Studies 174 (2017): 967–981. Portions of that paper appear here, in revised form, and I thank both Philip and the paper’s publishers for permission to reuse this material. Similar thanks are owed to the publishers of the following articles of mine, revised portions of which are also incorporated here: “The Flicker of Freedom: A Reply to Stump,” Journal of Ethics 18 (2014): 427– 435; “Blameworthiness and Buffered Alternatives,” American Philosophical Quarterly 53 (2016): 270– 280; and “Against (Modified) Buffer Cases,” Philosophical Studies 179 (2022): 711–723. At a workshop in 2017, Philip casually suggested that I should write a monograph on the topic of moral responsibility and alternative possibilities. That offhand remark planted the seed for this book, a seed Michael McKenna watered at another workshop later that same year. My thanks to both Philip and Michael for encouraging me to embark on the project and for providing a lot of helpful feedback along the way. Taylor Cyr, John Fischer, and Carolina Sartorio each read and commented on a draft of the manuscript. I’m grateful to them for doing so and for the excellent comments they provided. All three are trenchant critics of the view here defended, and I appreciate them for always keeping me on my toes. The book also benefited from two anonymous referees for Oxford University Press, both of whom provided useful feedback that helped shape the final product. Thanks, whoever you are. 
Others who commented on portions of the book or who provided helpful discussion that influenced the ideas in it include Andrew Bailey, Randy Clarke, Chris Franklin, Carl Ginet, David Hunt, Stephen Kearns, Doug
Keaton, Andrew Khoury, Joseph Long, David McNaughton, Al Mele, Dan Miller, Dana Nelkin, David Palmer, Derk Pereboom, Travis Rodgers, Seth Shabo, Dan Speak, Patrick Todd, Luke Van Horn, and David Widerker. My thanks to you all. No doubt there are others I’ve forgotten to mention. As you’ll see if you continue reading, whether I’m blameworthy for doing so depends in part on whether I could have done otherwise. Finally, and most importantly, I thank my wife, Marla, to whom the book is dedicated, for her constant love and encouragement, for putting up with me through the ups and downs of what proved to be a longer than anticipated writing process, for being a sounding board for many of the ideas contained herein, and especially for doing what she could to curb my perfectionist tendencies so that I could actually finish the book. Evidently, her efforts on this last score, futile as they seemed to her at the time, paid off, for here the book is, warts and all.
1
A Flicker of Freedom

Once upon a time virtually everyone would have agreed that “a person is morally responsible for what he has done only if he could have done otherwise” (Frankfurt 1969: 829). In those days this principle of alternative possibilities (PAP), as it has come to be known, was universally accepted, or very nearly so, a point of common ground for people with otherwise divergent views about free will and moral responsibility. It was seen as a promising way of cashing out the popular but inchoate thought that free will is required for moral responsibility, and, in keeping with that thought, was said to provide a principled explanation of the exculpatory force of pleas like “I couldn’t help it!,” “I had to!,” “There was no alternative!,” and “I had no choice!” (Or so legend has it. I leave it to historians of philosophy to determine the extent to which this legend is based in fact.) Nowadays, however, the principle is a subject of considerable controversy.1

The turning point was 1969. In December of that year the Journal of Philosophy published an article by Harry Frankfurt titled “Alternate Possibilities and Moral Responsibility” in which Frankfurt argues that PAP is false. He does so by constructing a series of hypothetical examples involving coercion or potential coercion culminating in one that he claims is a clear counterexample to the principle, a possible case in which a person is morally responsible for what he has done even though, due to potential coercion, he couldn’t have done otherwise.

The impact that Frankfurt’s article has had on subsequent discussions in the philosophy of action about free will and moral responsibility is difficult to overstate and has often been compared to the impact that Edmund Gettier’s classic article “Is Justified True Belief Knowledge?” has had on subsequent discussions in epistemology about the nature of knowledge. The comparison is in many ways apt. Like Gettier, Frankfurt challenged what (at
1 For an overview of that controversy, see Robb (2020).
Moral Responsibility and the Flicker of Freedom. Justin A. Capes, Oxford University Press. © Oxford University Press 2023. DOI: 10.1093/oso/9780197697962.003.0001
least according to legend) was a well-established and seemingly unshakable bit of philosophical orthodoxy by identifying what he claims are straightforward counterexamples to it. And, as with Gettier’s famous examples, there has emerged an enormous and extremely complex literature surrounding examples of the sort to which Frankfurt drew our attention (the “Frankfurt cases” as they are now known). There is, however, one notable difference between the two sorts of examples and the scholarly discussion surrounding them. Whereas it’s widely agreed that Gettier cases are counterexamples to the idea that knowledge is justified-true-belief, there remains, more than fifty years later, much debate about whether Frankfurt cases are counterexamples to PAP.2 I weigh in on that debate in this book. But before I do, I want to say a bit more about PAP and why the principle is important. Attention to these preliminary matters will help us more fully appreciate the significance of Frankfurt’s claim to have identified a counterexample to the principle and will also put us in a better position to assess that claim and various responses to it.
1.1 The Principle of Alternative Possibilities

PAP is a principle about the conditions under which a person is morally responsible for what he has done. But what is it to be “morally responsible” for something? In the sense at issue, it’s to deserve (or merit or be worthy of) praise, blame, sanction, or reward for that thing.3,4

2 For an overview of the central aspects of that debate, see Sartorio (2017a) and the introduction to Widerker and McKenna (2003).
3 The expression “morally responsible” has several different senses corresponding to different kinds of moral responsibility. For discussion of these different kinds of moral responsibility, see Pereboom (2021), Shoemaker (2015), Watson (1996), and Zimmerman (1988, 2015). The sort of moral responsibility with which PAP is concerned is sometimes referred to as moral accountability. Some may therefore prefer to replace the more generic expression “morally responsible” as it appears in the principle with the somewhat more precise “morally accountable.” However, I’ll continue to use the more generic expression to maintain terminological continuity with the existing literature on PAP, which often makes no mention of different kinds of moral responsibility.
4 Dana Nelkin (2008) and Susan Wolf (1990) restrict PAP to blameworthiness. In their view, it’s not possible for a person to be blameworthy for what he has done if he couldn’t have done otherwise, but it is possible for a person to be praiseworthy for what he has done even if he couldn’t have done otherwise. Thus, as they see it, there is an asymmetry when it comes to the requirements for praiseworthiness and the requirements for blameworthiness. But I disagree. Although I’ll focus mainly on cases of blameworthiness, it seems to me that what I have to say about PAP in what follows works equally well whether it’s praiseworthiness or blameworthiness that’s at issue.

I take no stand on exactly how the terms “praise” and “blame” are to be understood in this context except to say that a person deserves the responses to which those terms refer if and only if he deserves certain reactive attitudes. In the case of praise the relevant attitudes include gratitude and the sort of pride one might take in doing a commendable deed, while in the case of blame they include resentment, indignation, and guilt. I leave open the precise relationship between praise and blame and the reactive attitudes just mentioned. I leave open, in particular, the question of whether praise and blame just are (i.e., are identical to) those attitudes. What I’ve said here is consistent with the claim that they are and consistent as well with its denial.

The terms “sanction” and “reward,” as I employ them, refer to a wide range of responses. These include formal punishments and honors of the sort doled out by the state and other institutions (corporations, universities, clubs, etc.). But they also include informal penalties and benefits of the sort that we as individuals regularly bestow on one another (e.g., a cutting remark, giving someone the cold shoulder, a gift given as a display of gratitude, etc.).

It’s common in discussions of PAP to distinguish direct and indirect moral responsibility and to apply this distinction when formulating the principle. A person is said to be directly morally responsible for a certain state or event X if he is morally responsible for X but not solely in virtue of being morally responsible for something else, whereas a person is said to be indirectly morally responsible for X if he is morally responsible for X at least in part because he is morally responsible for something else. (Note that, understood in this way, direct and indirect moral responsibility aren’t mutually exclusive; a person can be both directly and indirectly morally responsible for the same thing.)

Direct moral responsibility is often characterized as basic, original, nonderivative, or uninherited, whereas indirect moral responsibility is said to be nonbasic insofar as it’s derived or inherited from moral responsibility for other things. An example will help illustrate the distinction. In the absence of any exculpatory considerations, an assassin who shoots the mayor, killing her instantly, is morally responsible for the mayor’s death. The mayor’s death is the assassin’s fault, we might say; it’s something for which the assassin is to blame. However, the assassin is morally responsible for the mayor’s death only in virtue of being morally responsible for shooting her. Had he not been culpable for the shooting (if, e.g., he had been completely insane at the time), he wouldn’t be culpable for the resulting death either. His responsibility for that event is thus indirect, whereas his responsibility for the shooting, or perhaps for the decision to carry it out, is direct, as it doesn’t depend on or derive from his being morally responsible for anything else.

PAP is often restricted to direct moral responsibility. However, I see no compelling reason to do so. Still, the distinction between direct and indirect responsibility will make an appearance from time to time in the subsequent discussion, which is why I introduce it here.

PAP is often regarded as a principle exclusively about moral responsibility for actions. However, I see no compelling reason to restrict the principle in that way either. The classic statement of the principle quoted at the outset is written in terms of moral responsibility for what a person has done, where the expression “what he has done,” as it appears in that statement of the principle, is a placeholder for verbs like “walking” and verb phrases like “walking with purpose.” Such verbs and verb phrases often pick out an agent’s actions and features of or facts about his actions. However, they can also pick out an agent’s omissions and facts about his omissions (e.g., “omitting to water the flowers”), as well as nonvoluntary activities and facts about those activities (e.g., “snoring loudly during the colloquium”).5 Thus, on a straightforward reading of PAP, it’s a principle about moral responsibility for behavior (and/or facts about behavior) construed broadly to include actions, omissions, and even certain nonvoluntary activities.
As we’ll see, there is good reason to prefer this more inclusive reading of the principle.6 PAP is usually written as a material conditional, but no one treats it as such, and for good reason. Properly construed, the principle states a conceptually necessary condition for an agent to be morally responsible for what he
5 The example of snoring during the colloquium is from Clarke (2014: 4).
6 I use the term “omission” here and throughout to refer to all instances of inaction. This is a stipulative use of the term, one that doesn’t precisely track standard usage. As Clarke (2014: ch. 1) points out, our conception of an omission, as reflected by how we ordinarily use the term, doesn’t treat every instance of inaction as an omission. However, it’s simpler for my purposes here to ignore this complication. As far as I can tell, nothing of significance hinges on my taking this terminological shortcut.
has done. It claims not just that there are no cases in which a person is morally responsible for what he has done and in which the person couldn’t have done otherwise, but also, and more fundamentally, that whether a person could have done otherwise is among the essential determinants of whether the person is morally responsible for his behavior, so that, necessarily, if a person is morally responsible for what he has done, this is in part because he could have done otherwise, whereas, necessarily, if the person couldn’t have done otherwise, he is, at least partly for that reason, not morally responsible for what he has done.7 Perhaps the most difficult element of PAP to interpret is the expression “could have done otherwise.” Three aspects of the expression require clarification, the first of which concerns time. According to Alfred Mele, “ ‘Could have done otherwise’ in PAP is typically given a synchronic reading. That is, PAP is typically understood to assert that a person is morally responsible for what he did at [time] t only if, at t, he could have done otherwise then.” But, as Mele points out, “when PAP is read that way, it is counterintuitive for mundane reasons.” To illustrate the point, he cites the case of “a drunk driver who, owing to his being drunk, runs over and kills a pedestrian he does not see.” Mele notes that, in an ordinary case of this sort, “the driver is morally responsible for killing the pedestrian,” even if, as t drew near, it was “too late for the driver to do otherwise at t than hit and kill the pedestrian” (2006: 84). However, the synchronic reading of PAP implies otherwise (i.e., it implies that the driver is not morally responsible for killing the pedestrian). That reading of the principle is therefore false. As examples like this make clear, “could have done otherwise” in PAP must be given a diachronic reading if the principle is to be defensible. 
That is, the principle must be understood to assert that a person is morally responsible for what he did only if he could, at some point (but not necessarily at or immediately prior to the time of action), have done otherwise.8 When PAP is read that way, it isn’t refuted by mundane cases like the one involving the drunk driver; for while the driver in that case may be blameworthy for hitting and killing the pedestrian, there was a time earlier in the evening when he could have arranged things so that he wouldn’t have hit and killed anyone later that evening. For example, he could, at some time well prior to t, have called a cab, or arranged for a designated driver, or refrained from drinking altogether. Had he done any of those things, he would have made it home without incident. He therefore could, at that earlier time, have done otherwise than hit and kill the pedestrian later that evening, and it’s partly for that reason, defenders of PAP would say, that he is blameworthy for hitting and killing her.

7 For similar points, see Fischer (1994: ch. 7) and Leon and Tognazzini (2010).
8 As Mele (2006: 86) points out, a different way of understanding PAP in light of the drunk driver case is as a principle about direct moral responsibility. When PAP is understood in that way, the drunk driver case poses no difficulty for the principle, since the driver is only indirectly responsible for hitting and killing the pedestrian. But as I mentioned earlier, I see no reason to restrict the principle in that way, provided it’s read diachronically.

The second aspect of “could have done otherwise” that requires clarification is the word “could.” The word is equivocal; it can pick out various kinds of possibility (as in “There could be life on other planets”), permissibility (as in, “Mom told me I could watch TV after school”), ability (as in, “I didn’t know you could play the guitar”), and opportunity (as in “I could have asked her yesterday, but I chickened out”). So, what does the word pick out in PAP? Here there is disagreement among defenders of the principle. Some take the “could” in PAP to pick out only a general ability to do otherwise (see, e.g., Fara 2008), while most others take it to pick out an opportunity to do otherwise, where the opportunity to X is commonly taken to require an ability to X of one kind or another.9 I side with the majority on this issue. As I understand the principle, the “could” at issue is the “could” of ability plus opportunity, or, as I’ll sometimes put it for ease of expression, the “could” of agential options. Agents who have the option to do otherwise may be said to have more than one course of action open to them, to have alternative possibilities for action, and to be free (have the freedom) to do otherwise. Not just any ability and opportunity to do otherwise will do, however.
The opportunity must be what David Brink and Dana Nelkin (2013) refer to as a “fair opportunity” to do otherwise. A fair opportunity to do otherwise, as Brink and Nelkin characterize it, has two main elements: “normative competence” and “situational control.” Normative competence is a matter of having certain cognitive and volitional capacities, including an ability to recognize and reflect on at least some of the (moral) reasons one has to do otherwise
9 Some authors take the “could” in PAP to pick out what’s known as a specific ability to do otherwise, where a specific ability to do otherwise is an ability to do otherwise in the specific circumstances in which one finds oneself. Such an ability is commonly taken to include an opportunity to do otherwise. For more on specific abilities, see Mele (2003; 2006: 17–18).
and an ability to govern one’s behavior in accordance with those reasons. R. J. Wallace refers to these abilities collectively as “the powers of reflective self-control” (1994: 157).10 Situational control is, as the label suggests, a matter of whether the situation in which one finds oneself affords one a reasonable chance or opening or occasion to exercise the powers of reflective self-control to do otherwise.

There is a long-standing debate about whether having alternative possibilities for action is compatible with a deterministic view of human agency (i.e., a view according to which everything we do is ultimately determined by things beyond our control, such as God or events in the distant past together with immutable laws of nature).11 Some insist that it is, others that it isn’t. I stay neutral on this issue here. Although I’m an incompatibilist, both about determinism and the freedom to do otherwise and about determinism and moral responsibility, I won’t invoke my incompatibilist views in this book. I would, however, like to make two points that I think should guide our judgments about agents’ options, points that, as far as I can tell, are consistent with my professed procedural neutrality on the compatibilism/incompatibilism issue.

The first point is this (where “B” is a placeholder for verb phrases that pick out a person’s behavior and features of or facts about that behavior, phrases like “walking with purpose,” “omitting to water the flowers,” and “snoring loudly during the colloquium”): if there are aspects of a person’s situation over which the person has no control that would inevitably prevent him from B if he tried to B, it’s safe to say that B isn’t an option for the person in that situation. To see this, consider Zac, who is locked in an empty room with no way to escape.
Given Zac’s predicament, it seems clear that leaving the room isn’t an option for him, since the locked door (together with other relevant aspects of the situation, like his inability to pick locks or break sturdy wooden doors off their hinges, etc.) would prevent him from leaving the room if he attempted to leave. The same is true in a variation of the case in which the door isn’t locked but in which someone would inevitably lock it if Zac gave any indication that he was going to try to leave the room. Assuming everything else is the same as before, it’s obvious that leaving the room isn’t an option for Zac in this case either, even though the door isn’t currently locked.

10 The abilities in question here need only be simple abilities. For more on simple abilities and how they differ from other, more complex abilities, see Mele (2003) and Metz (2020).
11 Throughout the book, I’ll use the term “determinism” to refer to a deterministic view of human agency. This is a somewhat nonstandard use of the term, since “determinism,” as standardly defined in the literature on free will and moral responsibility, isn’t equivalent to the claim that everything we do is necessitated by factors beyond our control. However, it’s simpler for my purposes here to ignore this complication. As far as I can tell, nothing of significance hinges on my taking this terminological shortcut. For the standard definition of “determinism” (at least as the term is often used in the literature on free will and moral responsibility), see van Inwagen (1983: 3).

The second point is this: sometimes B isn’t an option for a person even if the person is capable of B and even if nothing would prevent the person from B if he attempted to B. This can happen when there are circumstances over which the person has no control that make it impossible for him to even attempt to B in those circumstances (see Lehrer 1968). Here’s a mundane example to illustrate the point. Suppose I’ve locked myself out of my office and that campus security won’t arrive to let me in for another ten minutes. Since Peter van Inwagen’s An Essay on Free Will is on the shelf in my office, and since I have no way of getting into my office or accessing the books therein until the security officer arrives to unlock the door, reading a few pages of that book while I await the officer’s arrival isn’t an option for me. And this is so even though I’m perfectly capable of reading a few pages of the book and even though nothing would prevent me from reading a few pages of it while I wait if I attempted to do so. The problem isn’t that I can’t read or that someone or something would prevent me from reading the book if I tried. The problem is that my current situation makes it impossible for me to even try.12

The third and final aspect of “could have done otherwise” that requires clarification is the notion of doing otherwise.
As I understand it, to say that a person could have done otherwise is to say that he could have avoided doing what he did. In the case of actions or activity more generally, this entails only that the person could have refrained from or omitted the relevant action or activity, not that he could have engaged in some other activity in its place. In the case of omissions, it entails that the agent could have done what it is he omitted to do.
12 The point made in this paragraph is often made by those who insist that being free to do otherwise is incompatible with determinism. However, I think compatibilists about determinism and the freedom to do otherwise should embrace the point as well. The example I’ve used to illustrate it doesn’t presuppose incompatibilism.
Putting the preceding points together, we can now restate the PAP a bit more precisely, albeit somewhat more cumbersomely, as follows:

PAP: Necessarily, whether a person deserves praise, blame, sanction, or reward for B is determined in part by whether the person had, at some point, a fair opportunity to avoid B, so that if the person deserves even a little praise, blame, sanction, or reward for B, he deserves it in part because he had, at some point, a fair opportunity to avoid B, whereas if the person never had a fair opportunity to avoid B, he is, at least partly for that reason, not deserving of any praise, blame, sanction, or reward for B.
That’s rather a mouthful, I realize. So, for ease of expression, I’ll typically revert to the classic and much simpler statement of the principle quoted at the outset of the chapter. Keep in mind, though, that it’s this interpretation of it that I have in mind going forward.
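For readers who find symbolic shorthand useful, the logical skeleton of the restated principle can be sketched as follows. The notation is mine, not the author’s: R(s, B) abbreviates “s deserves praise, blame, sanction, or reward for B,” and F(s, B) abbreviates “s had, at some point, a fair opportunity to avoid B.” Note that this captures only the necessary-condition component of the principle; the further explanatory claim, that the fair opportunity is part of why the person deserves the response, resists simple symbolization.

```latex
% Schematic symbolization (shorthand is mine, not notation from the text):
%   R(s, B): s deserves praise, blame, sanction, or reward for B
%   F(s, B): s had, at some point, a fair opportunity to avoid B
% Necessary-condition component of the restated PAP:
\Box\,\forall s\,\forall B\,\bigl(R(s, B) \rightarrow F(s, B)\bigr)
% Equivalently, by contraposition: an agent who never had a fair
% opportunity to avoid B does not deserve any such response for B:
\Box\,\forall s\,\forall B\,\bigl(\neg F(s, B) \rightarrow \neg R(s, B)\bigr)
```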
1.2 Why It Matters

Thus construed, PAP is important for several reasons, five of which I’ll mention here. The first concerns the project of formulating an adequate theory of moral responsibility. One of the main things a theory of moral responsibility should do is identify the determinants of responsibility (i.e., the conditions a person must satisfy in order to be, and in virtue of which the person is, morally responsible for things), thereby enabling us to distinguish considerations that are inculpatory and exculpatory from those that aren’t. If true, PAP is part of the correct theory of moral responsibility, or at least is entailed by it, as it articulates a determinant of deserved praise, blame, sanction, and reward. Figuring out whether the principle is true is thus part of the larger project of giving an account of the conditions under which people deserve those responses.

The second reason PAP is important has to do with what I’d be happy to call “free will” if it weren’t for the fact that that term has been used by different people to mean very different things. So, to avoid confusion, I’ll use the expression “the freedom to do otherwise,” where a person is free to do
otherwise if and only if he has the option to (nonaccidentally) avoid behaving as he does. It certainly seems that we are sometimes free to do otherwise. This isn’t to deny, of course, that there may be situations here and there in which we aren’t free to do otherwise, situations, that is, in which there is only one course of intentional action available to us. A doting father who notices his beloved toddler about to wander into a busy intersection may be powerless to resist the paternal urge to pull the child to safety, for example. In cases like that, it may be that the person did the only thing he could have done. Often enough, though, there is a range of possible actions available to us, a variety of different things we could do. Our future is a garden of forking paths, to use a familiar metaphor, and it’s sometimes up to us which path we follow. Or, again, so it would seem.

But does anything of significance hinge on whether this seeming is veridical? Does it really matter whether we are ever free to do otherwise? It does if PAP is true, for if we never have more than one option for action, if what we do in any given situation is the only thing we could have done in that situation, if, in short, we are never free to do otherwise, it follows from PAP that we are never morally responsible for what we do and thus never deserve praise, blame, sanction, or reward for our behavior. And, if no one is ever deserving of any of those responses, then many of our institutional and interpersonal practices (e.g., punishment of criminals, excoriating a friend for betraying a confidence, etc.)
lack the sort of moral justification we often take them to have.13 If, however, PAP is false, if we could be morally responsible for what we have done even if we couldn’t have done otherwise, then the claim made by some philosophers that we are in fact never free to do otherwise would be a much less troubling proposal, since, in that case, it wouldn’t pose any obvious threats to our commonsense belief in moral responsibility or to the social and political practices that we take that belief to underwrite. The question of whether we are sometimes free to do otherwise is therefore much more morally significant if PAP is true than it is if the principle is false.
13 To be clear, this isn’t to say that these practices have no moral justification whatsoever if moral responsibility doesn’t exist. It’s just to say that, in the absence of moral responsibility, they don’t have the sort of justification we ordinarily suppose them to have, namely, a justification based on moral desert. Whether there are other justifications of the practices in question, justifications that don’t appeal to moral desert, is a question for another occasion. For a topnotch discussion of this issue, one with which I have considerable sympathy, see Vargas (2013: ch. 6).
A third and closely related reason PAP is important has to do with the classic problem of human freedom and divine foreknowledge. The problem, in a nutshell, is this: how can we reconcile the thesis that an infallible, all-knowing God exists with the commonsense thesis that some of us sometimes have free will? The two theses appear to be in conflict, for it seems that, necessarily, if God exists, is infallible, and always knows ahead of time exactly how we are going to behave, then we are never free to avoid behaving as we do. It seems, in short, that we can’t have both God (as traditionally conceived) and free will (as traditionally conceived).14

This is an interesting philosophical problem, one that has attracted the attention of theists and nontheists alike. But it becomes much more morally significant if PAP is true, for if moral responsibility requires the freedom to do otherwise, as the principle says it does, and if such freedom is indeed incompatible with the existence of an infallible, all-knowing God, then so too is moral responsibility. In this way, the problem of human freedom and divine foreknowledge takes on an added moral dimension if PAP is true, one it wouldn’t possess if, contrary to what the principle entails, moral responsibility doesn’t require the freedom to do otherwise.

It’s worth mentioning in this connection the contention of some philosophers that rejecting PAP provides a solution to the problem of human freedom and divine foreknowledge. On their view, the sort of freedom or free will that’s required for moral responsibility (which, as they see it, is the sort of freedom with which we are primarily concerned) doesn’t involve the freedom to do otherwise and isn’t threatened by the supposition that God exists and has infallible foreknowledge of our future behavior.
We can therefore have the requisite sort of free will, they claim, and can be morally responsible for our behavior, even if God exists and has exhaustive and infallible foreknowledge, and even if that’s incompatible with our being free to do otherwise.15 Whether they are right about that obviously depends in part on whether PAP is true.

14 This, of course, is only the barest sketch of the problem. For the classic twentieth-century statement of it, see Pike (1965). See Fischer and Todd (2015) for more recent developments.
15 See, e.g., Hunt (1999) and Zagzebski (1991: ch. 6).

A fourth reason the principle is important concerns the long-standing debate about whether moral responsibility is compatible with a deterministic view of human agency. Many people, of course, believe that it isn’t, and PAP has traditionally been an important part of their thinking about the matter. Traditional incompatibilists about responsibility and determinism claim that, necessarily, if determinism is true, then, appearances to the contrary, we only ever have one course of action open to us at any given moment and thus are never free to do otherwise. The conjunction of that claim with PAP entails the incompatibilist conclusion that, necessarily, we are never morally responsible for what we do if determinism is true. Call this the traditional argument for the incompatibility of moral responsibility and determinism. Is the argument sound? That depends in part, obviously, on whether PAP is true. If it is, then the case for the incompatibility of moral responsibility and determinism is that much stronger. If, however, the principle is false, then the argument is unsound even if determinism is incompatible with the thesis that we are sometimes free to do otherwise. This brings us to a fifth and closely related point concerning the importance of PAP. Traditional compatibilists about determinism and moral responsibility accept the principle but reject the claim that we are never free to do otherwise if determinism is true. We might think of them as dual compatibilists, insofar as they think that the freedom to do otherwise and moral responsibility are both compatible with determinism.16 Other compatibilists about determinism and responsibility, though, are open to the view that determinism is incompatible with the freedom to do otherwise. These compatibilists have thus sought to challenge the traditional incompatibilist argument outlined above by objecting to PAP and its ilk.
They argue that moral responsibility doesn’t require the freedom to do otherwise and is compatible with determinism even if the freedom to do otherwise isn’t.17 Their position is known as semicompatibilism.18 Whether it’s viable depends in part on whether PAP, or some principle very much like it, is true.

16 Prominent defenses of this traditional compatibilist position include List (2019), Nelkin (2011), and Vihvelin (2013).
17 I should note that some incompatibilists reject PAP, too. They agree with semicompatibilists that moral responsibility doesn’t require the freedom to do otherwise and, consequently, that the traditional argument for the incompatibility of moral responsibility and determinism is unsound. However, they go on to insist that determinism is incompatible with moral responsibility for other reasons, reasons having to do with the idea that to be morally responsible for our behavior, we must be the source of what we do in a way we can’t be if determinism is true. See, e.g., Pereboom (2001, 2014), Stump (1996), and Zagzebski (2000). Mele (2006) articulates an incompatibilist view of this sort as well, though he doesn’t endorse it. He is agnostic about whether incompatibilism is true.
18 Prominent defenses of semicompatibilism include Fischer (1994, 2006), Fischer and Ravizza (1998), McKenna (2013), and Sartorio (2016a). Mele (2006) articulates a semicompatibilist view of this sort as well, though he doesn’t endorse it. He is agnostic about whether compatibilism is true.

1.3 Frankfurt Cases: The Fine-Grained Analysis

A lot, then, hinges on whether PAP is true. So, is it? Frankfurt thinks not. He contends that there are cases in which a person couldn’t have done otherwise but in which the person did what he did for reasons of his own, because he really wanted to do it, and not at all because he couldn’t help doing it. The person thus would have done the same thing, in the same way, and for the same reasons, even if he could have done otherwise. And in cases like that, Frankfurt says, the person can be morally responsible for what he did even though he couldn’t help doing it.19 Here’s a relatively simple case of the sort Frankfurt envisions.

Revenge: Jones is tired of Smith always meddling in his affairs. So, he decides to kill Smith. Jones makes this decision in the normal way (whatever exactly that is), just as he would have had things been perfectly normal. But, unbeknownst to Jones, things aren’t normal. While Jones was sleeping last night, a nefarious neuroscientist surreptitiously installed a neural control device in Jones’s brain that enables the neuroscientist to read Jones’s thoughts and, if need be, control his behavior. If Jones hadn’t decided on his own to kill Smith, the neuroscientist would have used this device to compel Jones to decide to kill Smith, and Jones would have been powerless to resist the device’s influence.20

19 Frankfurt grants that pleas like “I couldn’t help it!” or “I had no choice!” are sometimes exculpatory. But when they are, Frankfurt says it’s not because they indicate that the person couldn’t have done otherwise. Rather, it’s because they indicate that the person did what he did only because he had to do it and not because it was what he really wanted to do. Strawson (1962) says something similar. He suggests that the plea “I couldn’t help it” is exculpatory because it reveals that the agent behaved as he did not because of a lack of moral concern but only because the situation left him with no feasible alternative but to behave that way. For a discussion of the ways in which the positions of Frankfurt and Strawson intersect, see McKenna (2005).
20 This case is based loosely on one from Widerker and McKenna (2003: 4), which, in turn, is based on Frankfurt’s original example. I postpone discussion of Frankfurt’s original case until §4.1, as it has certain features that I think are best ignored for now.
This story is thin on details. In chapter 4, I’ll consider several strategies for developing the example and others like it. However, this basic version of the case will do for now. Because Jones decided on his own to kill Smith, without any “help” from the neural control device, just as he would have had the device not been there, it seems that, given certain acceptable background assumptions (e.g., that Jones is sane, knows right from wrong, etc.), he is blameworthy, and thus morally responsible, for deciding to kill Smith. But Jones couldn’t have done otherwise than decide to kill Smith, for if he hadn’t decided on his own to do so, the neuroscientist would inevitably have used his neural control device to compel Jones to decide to kill Smith, and Jones would have been powerless to resist the device’s influence.21 Revenge thus appears to be a straightforward counterexample to PAP, a possible case in which a person is morally responsible for what he did even though he couldn’t have done otherwise. Appears to be, but is it really? Maybe not. Defenders of PAP insist that if we attend more carefully to certain crucial features of the case, we’ll see that it poses no threat to the principle. Consider, first, the stipulation that Jones couldn’t have resisted the influence of the neural control device had it been used to compel his decision. Some question the legitimacy of this stipulation. According to Maria Alvarez, for example, “it is not legitimate for a Frankfurt-style case simply to stipulate that, in the counterfactual case, the agent would be caused to perform an action that he cannot avoid performing. . . . Rather, any example needs to tell a compelling story that makes the suggestion plausible without begging the issues at hand” (2009: 67). Okay, but couldn’t Revenge easily be augmented so that it satisfies this desideratum?
Taking our cue from Frankfurt, couldn’t we suppose that if Jones hadn’t decided on his own to kill Smith, the neuroscientist would have used the neural control device to manipulate “the minute processes of Jones’s brain . . . , so that causal forces running in and out of his synapses and along the poor man’s nerves determine that he chooses to” kill Smith (1969: 835–836)? Alvarez and several others think not.22 They contend that, although it may be possible for a neuroscientist to manipulate an agent’s brain and thereby render inevitable certain agent-involving events, no event caused in that way would be an action on the part of the agent. It would be a mere occurrence, more like an involuntary twitch than an exercise of the agent’s agency. If they are right about that, then the neural control device couldn’t have caused Jones to decide to kill Smith, or if it could have, it couldn’t have done so in a way that Jones would have been powerless to resist, in which case we no longer have any reason to suppose that Jones couldn’t have done otherwise than decide to kill Smith.

21 Jones presumably had the powers of reflective self-control. What he didn’t have, though, was an opportunity to exercise those powers to avoid deciding to kill Smith. He thus lacked situational control over whether he decided to kill Smith. In some versions of the story, this is because, if Jones had tried to avoid deciding to kill Smith, the neural control device would have prevented his efforts from being successful. In other versions of the story, it’s because the neural control device makes it impossible, circumstances being what they are, for Jones to even make the effort. Either way, not deciding to kill Smith wasn’t an option for Jones, which means that (contrary to what Brink and Nelkin [2013: 308–309] claim) Jones lacked a fair opportunity to avoid deciding to kill Smith.

This first response to Frankfurt cases turns on the claim that the neural control device couldn’t have caused Jones to decide to kill Smith, or if it could have, that it couldn’t have done so in a way that Jones would have been powerless to resist. But why think that? Some appeal here to the idea that unavoidable actions are impossible.23 As these philosophers see it, performing an action involves the exercise of a two-way power—a power “to act or to refrain from acting,” as Helen Steward (2012a: 160) puts it—so that what a person does at t counts as an action of his only if he could have avoided doing it at t. Others seem to think that actions are, by their very nature, events that can’t be compelled by outside forces like a neural control device.24 Neither view is plausible, though. There are good reasons to think that unavoidable actions are possible, and no good reason to think that they aren’t. Nor is there any good reason to think that actions can’t be compelled by outside forces like a neural control device.
I therefore see nothing conceptually problematic or impossible about Frankfurt’s suggestion that such a device could manipulate an agent’s brain in a way that would ensure that the agent performs a certain action. In the preceding few sentences, I expressed some opinions of mine. But I haven’t argued for those opinions, not here anyway, nor do I plan to, as that would take us too far afield.25 Instead, I want to highlight a different problem with this first response to Frankfurt cases.

22 In addition to Alvarez (2009), see also Larvor (2010) and Steward (2008, 2009, 2012a, 2012b). I address these challenges to Frankfurt’s argument in more detail in Capes (2012). See also Shabo (2011, 2016).
23 For this view, see Alvarez (2009) and Steward (2008, 2009, 2012a, 2012b).
24 Larvor (2010) seems to have something like this in mind.

Alvarez and others take issue with the stipulation in Revenge that Jones would have been powerless to resist the influence of the neural control device had it been used to compel his decision. But Frankfurt’s basic challenge to PAP doesn’t require that stipulation. To get a counterexample to the principle (as I’ve suggested the principle be understood), we don’t need a possible case in which a person is morally responsible for what he did even though he was powerless to do otherwise. A possible case in which a person is morally responsible for what he did even though he lacked a fair opportunity to do otherwise would do just as well, and this is so even if, in the final analysis, the person had it within his power to avoid doing what he did. To see this, consider

Revenge 2: Jones is tired of Smith always meddling in his affairs. So, he decides to kill Smith. Jones makes this decision in the normal way (whatever exactly that is), without being coerced or compelled to make it. But, unbeknownst to Jones, if he hadn’t decided on his own to kill Smith, a neuroscientist who is monitoring Jones’s thoughts would inevitably have coerced Jones into deciding to kill Smith by pronouncing a credible threat so terrible that Jones would have had no (reasonable) choice but to comply with the threat by deciding right then and there to kill Smith.26
According to Alvarez, a threat like that might succeed in getting Jones to decide to kill Smith, but it wouldn’t render Jones powerless to avoid deciding to kill Smith. “A threat works,” she says, “by making non-compliance . . . highly unpalatable to the agent—not by eliminating its possibility” (2009: 72). Suppose she’s right about that. Suppose Jones had it within his power to refuse to comply with the neuroscientist’s threat, and, consequently, that he had it within his power to avoid doing what he did. Even so, the case still seems to be a counterexample to PAP. Because Jones decided on his own to kill Smith, without being coerced by the neuroscientist, it seems that he is blameworthy for deciding to kill Smith. (Remember, in the actual sequence of events, Jones decided for reasons of his own to kill Smith, just as he would have had the neuroscientist not been there, no threat was issued, and Jones had no idea that one would have been issued had he not decided on his own to kill Smith.) However, although Jones could have done otherwise than decide to kill Smith, in the sense that he had both an ability and an opportunity to avoid deciding to kill Smith, he still lacked a fair opportunity to do otherwise. Having a fair opportunity to do otherwise, you’ll recall, entails having situational control, which in turn entails that the situation in which one finds oneself affords one a reasonable chance or opening or occasion to do otherwise. It’s this last element that’s missing in Revenge 2. The only way Jones could have avoided deciding to kill Smith was to not decide on his own to kill Smith and to then not comply with the neuroscientist’s threat once it was issued. But assuming the threat would have been both sufficiently credible and sufficiently terrible, it would have been unreasonable (for those who know all the morally relevant details of the case) to expect Jones not to comply with it. Jones therefore lacked a reasonable chance to avoid deciding to kill Smith.27

25 I do, however, argue for several of them in Capes (2012).
26 This version of the case comes from Frankfurt (1969: 835).

Revenge 2 thus appears to be a counterexample to PAP, a possible case in which a person is blameworthy for what he did even though he didn’t have a fair opportunity to avoid doing it. Cases like Revenge 2 also make trouble for an importantly different response to Frankfurt cases: the so-called Dilemma Defense. There are different versions of the defense, depending on the precise Frankfurt case at issue, but here’s the basic idea as applied to Revenge:

The Dilemma Defense: Either Jones was causally determined to decide to kill Smith, or he wasn’t. If he was, then it would beg the question against incompatibilists about causal determinism and moral responsibility to claim that Jones is morally responsible for deciding to kill Smith.
If, however, Jones wasn’t causally determined to decide to kill Smith, then he could have done otherwise than decide to kill Smith. Either way, Revenge isn’t a clear counterexample to PAP, a possible case in which a person is clearly morally responsible for doing something he couldn’t have avoided doing.28

27 Cavil: it’s always reasonable to expect a morally competent agent like Jones not to decide to kill an innocent person, for killing the innocent is always wrong. Response 1: it’s doubtful that killing the innocent is always wrong. If the only way you can prevent a nuclear holocaust is to kill an innocent person who will soon die anyway, it’s not clear to me that it would be wrong for you to kill that person. Response 2: deciding to kill an innocent person isn’t always wrong even if actually killing such a person is. If the only way you can prevent nuclear holocaust is to decide to kill an innocent person, and you also have a way of ensuring that you won’t carry out your decision, then surely you are permitted to decide to kill the innocent person. (I leave it to readers to fill in the details of how such a case might go.) Response 3: just replace deciding to kill Smith with a different action, one we couldn’t have reasonably expected Jones not to perform given the potential threat.
Although I’m an incompatibilist, and although I accept the conclusion of The Dilemma Defense, I don’t think the Defense itself succeeds in establishing that conclusion (at least not if PAP is understood in the way I’ve suggested it be understood). One problem with it concerns the meaning of “causally determined.” If “causally determined” means “deterministically caused,” then Jones wasn’t causally determined to decide to kill Smith. Jones decided on his own to kill Smith, and we can assume that whatever neural or cognitive processes may have led him to decide as he did weren’t deterministic. So, if Jones’s decision to kill Smith was caused at all, it wasn’t deterministically caused. Claiming that Jones is blameworthy for deciding to kill Smith therefore wouldn’t beg the question against incompatibilists about causal determinism and moral responsibility. But none of this changes the fact that Jones couldn’t have avoided deciding to kill Smith, for it doesn’t change the fact that the neural control device would have deterministically caused Jones to decide to kill Smith had Jones not decided on his own to do so, nor does it change the fact that Jones was powerless to resist the device’s influence. (I’m again assuming that, contrary to what Alvarez and others claim, the presence of the neural control device renders Jones’s action unavoidable.) Suppose, however, that “causally determined” means something like “inevitable given the circumstances.” Then Jones was causally determined to decide to kill Smith, as circumstances beyond his control made it inevitable that, one way or another, he would decide to kill Smith, either on his own or as a result of the neural control device. Still, given that Jones decided on his own to kill Smith, without being influenced by the device, it certainly seems that he is nonetheless blameworthy for deciding to kill Smith.
So, why shouldn’t we simply regard Revenge as a counterexample to both PAP and the thesis that causal determinism is incompatible with moral responsibility?

28 The Dilemma Defense was originally proposed by Robert Kane (1985: 51). It has subsequently been developed and defended by others, including Ginet (1996), Goetz (2002, 2005), Kane (1996, 2003), Widerker (1995), and Wyma (1997).
Those who do aren’t begging the question. They are, rather, making an ordinary argument by counterexample, which I take to be a perfectly legitimate style of argument. This, of course, isn’t to say that their argument is sound (again, I don’t think it is). But whatever difficulties there may be with it, begging the question isn’t among them. A second difficulty with The Dilemma Defense is that, like the first response to Frankfurt cases we considered, it too is geared only to alleged counterexamples to PAP like Revenge in which the person (allegedly) couldn’t have avoided doing what he did. It therefore does nothing to address alleged counterexamples to PAP like Revenge 2 in which the person could have done otherwise (in the sense that he had an ability and opportunity to do otherwise) but in which the person lacked a fair opportunity to do otherwise. In Revenge 2, Jones isn’t causally determined to kill Smith (or so we may suppose), and, if Alvarez and others are to be believed, he wasn’t completely powerless to avoid deciding to kill Smith. But, as we have seen, the case still appears to be a counterexample to PAP, insofar as it seems that Jones is morally responsible for deciding to kill Smith even though he didn’t have a fair opportunity to do otherwise. A more promising response to Frankfurt cases (to my mind, anyway), one that applies to cases like Revenge 2 no less than to cases like Revenge, has to do with what John Martin Fischer calls “the flickers of freedom.” Fischer points out that Frankfurt cases like Revenge “seem at first to involve no alternative possibilities. 
But upon closer inspection it can be seen that, although they do not involve alternative possibilities of the normal kind, they nevertheless may involve some alternative possibilities.” Thus, “even in the Frankfurt-type cases, there seems to be a ‘flicker of freedom.’ ” And, as Fischer goes on to point out, several philosophers have argued “that these alternative possibilities (the flickers of freedom) must be present, even in the Frankfurt-type cases, in order for there to be moral responsibility,” a claim that, if true, would seem to vindicate PAP or a principle very much like it (1994: 134).

To see one way in which this “flicker of freedom strategy,” as Fischer calls it, can be developed, consider Revenge again.29 Note that, although Jones couldn’t have avoided deciding to kill Smith, things didn’t have to go exactly the way they did either; there were still some alternative possibilities open to Jones. As Frankfurt himself observes, “What action [Jones] performs is not up to him,” though “it is in a way up to him whether he acts on his own or as a result of [the neural control device]” (1969: 836). Evidently, then, Jones could have avoided deciding on his own to kill Smith (where the expression “on his own” indicates that Jones’s decision was an exercise of his own, natural, unassisted agency and thus wasn’t a result of some outside or artificial mechanism like the neural control device). Jones also could have tried harder to avoid deciding to kill Smith (e.g., by seriously reflecting on the reasons not to kill Smith and thereby attempting to get himself into a motivational position to make a morally better decision than the one he made), and, depending on how the details of the case are filled in, he could have avoided deciding at t to kill Smith, t being the precise moment at which Jones made his decision.

29 For other ways of developing the strategy, see Fischer (1994: 136–140). See also Otsuka (1998) and Wyma (1997).

Several defenders of PAP have seized on these observations, insisting that they hold the key to an accurate assessment of cases like Revenge (and Revenge 2, but to keep things simpler, we can just focus on Revenge). Their central claims, as applied to that case, are as follows. Jones isn’t morally responsible for deciding to kill Smith, and this is so at least in part because he lacked a fair opportunity to do otherwise. He is, however, morally responsible for other, closely related things (things like deciding on his own to kill Smith, not trying harder to avoid making such a morally bad decision, and perhaps too for deciding at t to kill Smith), which could explain why it may seem as if he is morally responsible for deciding to kill Smith even though he isn’t, the idea being that we are perhaps confusing moral responsibility for these other things with moral responsibility for deciding to kill Smith.
Moreover, Jones is morally responsible for these other things at least in part because he had a fair opportunity to avoid them.30 I call this the fine-grained analysis of Frankfurt cases because it insists that we must be very precise, very fine-grained about what agents in those cases are morally responsible for. This is something we find it necessary to do when making moral judgments about people and their behavior in many ordinary situations. (“It’s not what you said, it’s how you said it” is a familiar accusation, as is “she did the right thing but for the wrong reason.”) And it’s equally necessary, say proponents of the fine-grained analysis, in more fanciful scenarios like Revenge. If the fine-grained analysis of Frankfurt cases is correct, then such cases aren’t counterexamples to PAP and, indeed, would seem to provide further confirmation of the principle. They aren’t counterexamples to PAP because they aren’t cases in which a person is morally responsible for what he has done even though he didn’t have a fair opportunity to do otherwise. And they provide further confirmation of the principle because the range of things the agent in such cases is morally responsible for is determined in just the way we would expect it to be if PAP were true. The question, then, as I see it, is whether the analysis is correct. The remaining chapters of this book are devoted to arguing that it is. In chapter 2, I advance an argument in favor of the analysis. In chapter 3, I defend the analysis against objections. And in chapter 4, I argue that the analysis is no less plausible when applied to more detailed Frankfurt cases. I conclude, in chapter 5, with a brief summary and some reflections on the overall dialectic concerning whether Frankfurt cases are counterexamples to PAP.

30 Peter van Inwagen (1978: 224, n. 24) was the first to suggest that what agents in Frankfurt cases are really blameworthy for is acting on their own. See also van Inwagen (1983: 181). Naylor (1984) develops the claim in greater detail. More recent defenders of it include Cain (2014), Capes (2014), Capes and Swenson (2017), O’Connor (2000: 19–20), Robinson (2012, 2019), Speak (2002), and Swenson (2019). Not all proponents of this view would deny that Jones is blameworthy for deciding to kill Smith. Robinson (2012: 186–189; 2019) proposes a version of the view according to which Jones may be indirectly blameworthy for deciding to kill Smith in virtue of being directly to blame for making the decision on his own. This is consistent with PAP, as Robinson understands the principle, which he takes to apply only to direct responsibility. For the claim that Jones is blameworthy for deciding at t to kill Smith, see Franklin (2011), Ginet (1996, 2002), and Palmer (2011, 2013).
2
The Symmetry Argument

Why think the fine-grained analysis of Frankfurt cases (as opposed to the anti-PAP analysis championed by Frankfurt and others) is correct?1 The answer, in a word, is symmetry. To see what I’m getting at, consider the following case, which is a slightly expanded version of a story spun by Fischer and his coauthor Mark Ravizza (1998: 125).

Sharks: John is walking on the beach when he sees a child struggling in the water. He believes that he could easily pull the child to safety and that he ought to do so, but he’s late for an important job interview and so decides not to help the child. To assuage his guilt, he tells himself that the child’s parents must be around somewhere and will soon turn up to make sure the child is okay. But the child is not okay, and she soon drowns. Unbeknownst to John, though, he couldn’t have rescued the child even if he had tried his very best to do so, as there was a school of hungry sharks hidden in the water that would inevitably have prevented him from rescuing the child had he tried to rescue her.2
According to Fischer and Ravizza, “The facts of [this] case exert pressure to say that John is not morally responsible for failing to save the child: after all, the child would have drowned, even if John had tried to save it.” Of course, John may very well be morally responsible for other, closely related things, things like deciding not to save the child, not deciding to save her, and not trying to save her, a fact Fischer and Ravizza readily acknowledge; “but,” in their estimation, “[John] is not morally responsible for not saving the child,” which seems right (1998: 125).

1 Hunt and Shabo (2013: 607) press this question. The argument of this chapter provides an answer.
2 An earlier version of the story appears in Fischer (1986: 253). Readers looking for a more realistic example can get rid of the sharks and instead imagine that by the time John spotted the drowning child, she had already been without oxygen too long to survive. (This case, too, is from Fischer and Ravizza [1998: 125].) Readers can then compare that case with one in which the child would have survived had John taken the trouble to pull her to safety.

Moral Responsibility and the Flicker of Freedom. Justin A. Capes, Oxford University Press. © Oxford University Press 2023. DOI: 10.1093/oso/9780197697962.003.0002

Things would be different, though, had rescuing the child been an option for John. This becomes evident when we consider an ordinary version of the case in which there are no sharks in the water and in which John could easily have rescued the child. In that case, which I’ll refer to as All Clear, John is clearly blameworthy for not rescuing the child (and not just for things like deciding not to rescue her, not deciding to rescue her, and not trying to rescue her).3 But why is that? Why is it that John isn’t blameworthy for not rescuing the child in Sharks but is blameworthy for not rescuing her in All Clear? An obvious answer is that in Sharks John couldn’t have rescued the child, whereas in All Clear he could have rescued her. The scope of an agent’s moral responsibility (i.e., the range of things the agent is morally responsible for) in omission cases like Sharks would thus appear to be determined in part by what the agent could and couldn’t have done. The agent in such cases decides not to perform a certain action A, doesn’t try to A, and doesn’t A, though he could have decided instead to A and could have tried to A. However, the agent couldn’t have A-ed even if he had tried his very best to do so. Consequently, it seems that while the agent might be morally responsible for deciding not to A, not deciding to A, and not trying to A, he isn’t morally responsible for not A-ing. This fine-grained analysis of omission cases like Sharks, together with a plausible symmetry thesis according to which moral responsibility is determined by the same sorts of considerations whether it’s moral responsibility for actions or moral responsibility for omissions that’s at issue, supports the fine-grained analysis of action cases like Revenge.
If the scope of an agent’s moral responsibility in omission cases like Sharks is indeed determined in part by what the agent could and couldn’t have done, as it appears to be, and if the symmetry thesis is true, then the scope of an agent’s moral responsibility in action cases like Revenge must be similarly determined in part by what the agent could and couldn’t have done. The agent in such cases performs an action A on his own at t, though he could have avoided A-ing on his own, could 3 This variant of the example is from Fischer and Ravizza (1998: 125). Its title is due to Clarke (2014: 141).
24 Moral Responsibility and the Flicker of Freedom
have tried harder not to A, and, depending on the details of the case, could have avoided A-ing at t. However, the agent couldn’t have avoided A-ing even if he had tried his very best to do so. Consequently, while the agent may be morally responsible for A-ing on his own, for not trying harder not to A, and perhaps too for A-ing at t, he isn’t morally responsible for A-ing. This symmetry argument, as I’ll call it, has three main premises. The first is that while John might be blameworthy in Sharks for things like deciding not to save the child, not deciding to save her, and not trying to save her, he isn’t blameworthy in that case for not saving her. The second is that John isn’t blameworthy in Sharks for not saving the child at least in part because he couldn’t have saved her. And the third is the symmetry thesis, according to which the determinants of moral responsibility for actions are the same as those for omissions. All three premises are intuitively quite plausible, and all three, I’ll argue, hold up under further scrutiny.
2.1 A Minority Report Most people who have considered Sharks agree that John isn’t blameworthy in that case for not saving the child. There is, however, a minority report according to which John is blameworthy in Sharks for not saving the child, just as he is in All Clear. What might be said in favor of this alternative and, at least at first glance, counterintuitive judgment about the case? The argument typically given for it turns on the following irrelevance thesis: things that are causally irrelevant to what a person did or didn’t do are irrelevant to the person’s moral responsibility for what he did or didn’t do.4 Proponents of the argument claim that, in Sharks, the sharks are causally irrelevant to John not saving the child. (John didn’t save the child, they say, because he was too concerned about his interview, not because of the patrolling sharks.) The conjunction of that claim with the aforementioned irrelevance thesis entails that the sharks have no bearing on John’s moral responsibility for not saving the child. And so, since John would have been
4 This principle is originally due to Frankfurt (1969: 837).
The Symmetry Argument 25
blameworthy for not saving the child had the sharks not been there, as illustrated by All Clear, it follows that he is no less blameworthy in Sharks for not rescuing her.5 One difficulty with this argument is that the irrelevance thesis on which it’s based is false. As I’ll explain in section 3.1, things that are irrelevant to the etiology of a person’s behavior can be relevant to whether the person is morally responsible for that behavior.6 That by itself would be enough to undermine the argument. But the argument’s other main premise—that the sharks are causally irrelevant to John not saving the child—is also questionable. Who or what causes it to be the case in Sharks that John doesn’t rescue the child? Arguably, it’s the sharks (together, of course, with other relevant circumstances, like John’s inability to overcome or bypass a school of ravenous sharks). True, the sharks don’t play a causal or explanatory role in John’s decision not to rescue the child or in his not trying to rescue her. However, they arguably do cause it to be the case that John doesn’t rescue the child. A surefire way to cause a person not to A or to bring it about that the person doesn’t A (when it hasn’t already been settled whether the person will A) is to create conditions in advance, before the person has settled on trying to A, that render futile any attempt the person might make to A. Doing so causes it to be the case that the person doesn’t A. To illustrate, suppose you want to make sure that your preschooler doesn’t eat any cookies this evening. You can do so by putting the cookies out of reach or, better still, by removing them from the house entirely. If, in taking such measures, you make it
5 For this argument, see Cyr (2021) and Kearns (2011). 6 In an effort to illustrate the plausibility of the irrelevance thesis as applied to cases like Sharks, Taylor Cyr invites us to “suppose that, when approached by the child’s parent, John [who we are to imagine has now been made aware of the sharks] says that he could not have done otherwise than omit to save the child.” Cyr claims that “it would be a moral failing of John to cite the sharks’ presence as an excuse for what he failed to do,” given that “he knew full well . . . that it was not at all because of ” the sharks that he didn’t save the child (2021: 358). It might indeed be “a moral failing of John to cite the sharks’ presence as an excuse for what he failed to do.” But that would be so even if John isn’t blameworthy for not saving the child and even if the presence of the sharks provides him with a good excuse for not saving her. It would be disingenuous of John to offer the sharks’ presence as an excuse, if his aim in doing so is to convince us that he is blameless. That would be deceitful, since we know there are things in the story for which John is deserving of blame (e.g., his decision not to save the child). Moreover, there is often something distasteful about trying to excuse yourself to those who have suffered serious loss or harm, even if you are not to blame for their suffering. It would thus be inadvisable for John to offer excuses to the child’s grieving father. But that doesn’t mean John doesn’t have a good excuse for not rescuing the child.
physically impossible for your preschooler to eat any cookies this evening, you will have caused it to be the case that he doesn’t eat any cookies this evening. And this is so regardless of whether he tries to find any cookies to eat this evening. This is how the sharks cause it to be the case in Sharks that John doesn’t rescue the child. They don’t thwart his attempt to save the child, for he makes no such attempt. They do, however, create conditions that render futile any attempt John might make to save the child. To further illustrate the point, compare Sharks with a structurally similar example. Make It Rain: Camelot and its environs are experiencing an extreme drought that’s wreaking havoc on the peasantry. The wizard Merlin casts an unbreakable spell at noon that will block any other magical attempts to make it rain. Later that evening, Morgana, who is completely unaware of the spell Merlin cast earlier in the day, considers casting a rain-making spell to end the drought but ultimately decides not to do so.
Here’s a fact: Morgana doesn’t make it rain. Who or what causes that to be a fact? Merlin, of course. By casting his spell at noon, he makes it the case that neither Morgana nor anyone else will make it rain. He does so by creating a situation in which it’s impossible for others to make it rain. Similarly, the presence of the sharks in Sharks makes it the case that neither John nor anyone else rescues the child. The sharks do this by helping to create a situation in which it’s physically impossible, human abilities being what they are, for anyone to rescue the child. Even so, it could be argued that John doesn’t save the child in part because of how he behaves (i.e., because he decides not to save the child and/or because he doesn’t try to save her). This might be true even if, as I just argued, it’s also true that the sharks are a cause of his not saving the child. John’s not saving the child might be overdetermined. And if it is, then we have the makings of another argument for the conclusion that John is blameworthy for not saving her. We would need two additional premises to complete the argument. The first is uncontroversial, namely, that John is blameworthy in Sharks for his bad behavior (i.e., for deciding not to rescue the child and for not trying to rescue her). The second is the claim that a person is blameworthy for the
expected bad outcomes of behavior for which he is blameworthy. Since we can safely assume that John expected his bad behavior to result in his not rescuing the child, the conjunction of these two premises with the claim that John’s behavior is among the causes of his not rescuing the child entails that John is blameworthy for not rescuing her.7 Of this argument’s three premises, two are questionable. The first is the claim that John’s behavior is a cause of his not rescuing the child. The second is the claim that a person is blameworthy for the expected bad outcomes of behavior for which he is blameworthy. I’ll address this second claim and others like it later in the chapter (see §2.3). My focus at present is on the claim that John’s behavior in Sharks is among the causes of his not rescuing the child. That claim is false. To see this, consider Make It Rain again. By deciding not to make it rain and/or by not trying to make it rain, Morgana doesn’t thereby make it the case that she doesn’t make it rain, as that result (Morgana not making it rain) has already been secured by Merlin and the spell he cast earlier in the day. Her decision isn’t even an overdetermining cause of her not making it rain, since the decision is made after the outcome has already obtained and thus occurs too late to have any causal impact on that outcome. (Outcomes can’t precede their causes.)8 Similarly, by deciding not to save the child and by not trying to save the child, John doesn’t contribute to its being the case that he doesn’t save her, as that result (John not saving the child) has already been secured by the sharks (together with John’s inability to overcome them, etc.). John’s behavior, like Morgana’s, occurs too late to be a cause of the relevant outcome.9 R. J. Wallace advances an argument similar to the one just considered. 
He contends that in cases in which a person chooses not to do what he is supposed to do but subsequently discovers that, due to some physical constraint 7 An argument of this sort is suggested by Kearns (2011: 316–317). Wallace (1994: 142–143) defends a similar argument. I discuss Wallace’s argument below. 8 Some more examples: If I purchase a red shirt from the manufacturer and immediately dye it the exact same shade of red, I don’t thereby cause it to be the case (or make it the case) that the shirt is red at some time or other, as that result (the shirt’s being red at some time or another) has already been secured by the manufacturer. Or, if I press the off button on a working TV remote for a TV that’s already off and that would have remained off even if I hadn’t pressed the off button, I don’t thereby cause the TV to be off, nor do I make it the case that the TV is off, as that result (the TV’s being off) has already been secured by someone or something else. 9 Sartorio (2005, 2016b) and Whittle (2018) both argue for the closely related conclusion that John’s decision not to help the child isn’t a cause of the child’s death.
of which he was unaware, he couldn’t have performed the omitted action even if he had tried, the person is still blameworthy for not doing what he was supposed to do. Why? Because “In these cases, despite the presence of physical constraints, one’s omission nevertheless expresses precisely the kind of choice that our moral obligations prohibit.” Wallace concludes that “physical constraints all make it inevitable, in some sense, that one will omit to do something that is morally obligatory; but they will only provide valid excuses when they alone account for the omission” (1994: 142–143). Although Wallace doesn’t discuss cases like Sharks specifically, that appears to be the sort of case he has in mind. John elects not to save the child, though, due to a constraint of which he was unaware (viz., the sharks), he couldn’t have saved her. However, it could be argued that John’s omission “nevertheless expresses precisely the kind of choice that our moral obligations prohibit,” and, consequently, that the sharks alone don’t account for his omission, which, on Wallace’s view, means that they don’t provide John with a valid excuse for not saving the child. Wallace’s argument assumes that, in the sort of case at issue, the agent’s omission is overdetermined; the omission is accounted for by the constraint but also by the agent’s choice and thus is an expression of that choice. But, as we have seen, that assumption is false. The agent’s omission isn’t overdetermined. The constraint alone, I’ve argued, accounts for the omission (at least if “accounts” is a causal term), as the agent’s choice not to perform the relevant action occurs too late to help account for his not performing it.
The omission therefore isn’t an expression of the agent’s choice (at least if “expression” is a causal notion), and, as I’ll argue in more detail momentarily, isn’t something for which the agent is morally responsible.10 There is, of course, a sense in which the agent’s omission in these sorts of cases reflects his choice. That the agent doesn’t perform the relevant action is an outcome that matches the content of the choice the agent made. John, for example, chose not to help the child, and, as we know, he didn’t help her. The outcome thus matches the content of his choice. But the fact that an outcome reflects an agent’s choice in this way doesn’t entail that the choice is a cause of 10 The agent’s decision does account for his not trying to perform the action, which perhaps explains why it might seem appealing to suppose that the choice not to perform the action also accounts for the agent’s not performing it. The choice thus does influence what the agent doesn’t do, just not in the same way it would have had performing the omitted action been an option for the agent.
the relevant outcome and isn’t sufficient to render the agent morally responsible for that outcome. To see this, consider the following case. Suppose you choose to press a button and then press it, believing that this will detonate a bomb that will kill your mortal enemy. But you are mistaken; pressing the button does nothing. Coincidentally, however, someone else detonates the bomb, killing your enemy, though you don’t know this; you think you detonated the bomb. The outcome in this case—the explosion and the resulting death of your enemy—reflects the content of your choice. That’s exactly the outcome you chose to bring about. However, you weren’t the one who brought about that outcome, nor are you morally responsible for it, though you may, of course, be morally responsible for your choice and for carrying out that choice. So far, then, we haven’t seen a compelling reason to accept the minority report on Sharks. I do, however, think there are good reasons to reject it and to retain our initial judgment about the case (the judgment that John isn’t blameworthy for not saving the child). I’ll briefly sketch three arguments in support of that initial judgment. Although the first argument almost certainly depends on the third, I included it because I think it highlights some of the counterintuitive implications of claiming that John is blameworthy in Sharks for not rescuing the child. That first argument features yet another variation of the story: Too Far Away: John is on the other side of the world from the drowning child. He knows the child is drowning, but given his distance from the situation, he can do nothing to save her. (John knows the child is drowning because he has a video camera set up at that exact spot on the beach for his wildlife research and just happens to be watching the feed.) He is too far away to physically intervene and has no other way to help the child.
John decides not to try to help the child, not because he knows he can’t help her, but simply because trying to save the child would distract him from his research. John wouldn’t have tried to save the child even if (he believed) he could have saved her.
What should we say about John in this version of the story? I think everyone would agree that John is a horrible person and perhaps too that he is culpable for deciding not to help the child and for not trying to help her. Surely,
though, he isn’t to blame for not helping the child. After all, he was halfway around the world; there is nothing he could have done to help her. Suppose you accept this judgment. Then you should accept a similar judgment about Sharks, for this version of the story isn’t relevantly different from Sharks. In both cases, John couldn’t have rescued the child. In Sharks this is because he couldn’t have overcome or bypassed the sharks, while in Too Far Away it’s because he is too far away to help the child. In neither case, though, is it within John’s power to physically rescue the child, nor does he have any other means of rescuing her. Notice, too, that in neither case does John’s lack of options for saving the child figure in his reasons for deciding not to intervene. In both cases, John decides as he does because of his own moral deficiency, not because saving the child isn’t an option for him. One difference between Sharks and this latest version of the story is that, here, John is aware that he can’t rescue the child, whereas in Sharks he isn’t. But that shouldn’t affect whether he is blameworthy for not rescuing her. What if John had believed (mistakenly, of course) that he could help the child (e.g., by calling the closest lifeguard station)? That might highlight the fact that John is blameworthy for deciding not to help the child and for not trying to help her. But I don’t think it would support the conclusion that he is blameworthy for not helping her.11 So, one argument for our initial judgment about Sharks goes like this. John isn’t blameworthy in Too Far Away for not saving the child. There is no relevant difference between that case and Sharks. Hence, John isn’t blameworthy in Sharks for not saving the child. A second argument for that judgment concerns remedial obligations. If John were blameworthy in Sharks for not saving the child, he would also be blameworthy for the child’s death.
That isn’t true in every version of the case, of course. For example, it isn’t true in a version in which someone else saves the child.12 But it certainly seems true in Sharks. Being blameworthy for a harm gives rise to a pro tanto obligation to repair the damage, or, where repair isn’t possible, to somehow compensate those 11 For those who don’t accept this judgment, consider another case. Suppose I’m deluded into thinking that I have magical rain-making powers, but that I decide not to exercise these powers at present. Might I be blameworthy for not making it rain? Surely not. However, there is no relevant difference between this case and Too Far Away. So, if you accept the judgment about this case, then you should also accept the judgment about that one. 12 Clarke (2014: 143) makes a similar observation.
one has harmed. (It might be possible to have remedial obligations of this sort even without being blameworthy for the relevant harm. There may be such a thing as strict moral liability.13 Still, if a person is to blame for a particular harm, it seems the person has a pro tanto reparatory/compensatory obligation to those who have suffered the harm.) This means that if John were blameworthy for the child’s death, he would also have a pro tanto obligation to compensate the child’s parents for their loss or to somehow redress the harm they have suffered at his hands (insofar as that’s possible). Surely, though, John has no such obligation. One way to support this claim is to note that John wouldn’t have had an obligation to compensate the child’s family had he known about the sharks, and it seems strange to suppose that his ignorance of the sharks’ presence could make a difference to whether he has any such obligation to the child’s family. That John has no obligation to compensate the family also follows from the fact that he doesn’t causally contribute to the child’s death and couldn’t have prevented it. If an agent neither causally contributes to X nor could have prevented X, it’s plausible that the agent has no obligation to compensate those harmed by X.14,15 Hence, John has no obligation to compensate the child’s family for their loss. A second argument, then, for accepting our initial judgment about Sharks goes like this. If John were blameworthy for not saving the child, he would be obligated to compensate the child’s parents for her death or to provide them with some other form of redress for the loss of their child. But he isn’t obligated to do that. Hence, he isn’t blameworthy in Sharks for not saving the child. A third argument in support of our initial judgment about Sharks involves the notion of control and can be summarized as follows. A person is blameworthy for X only if the person had, at some point, some kind of control over
13 See Capes (2019) for a discussion of strict moral liability. 14 The idea of causal contribution at work in this sentence should be understood broadly to accommodate cases (if there are any) in which the transitivity of causation fails, cases, that is, in which an agent causes X, X causes Y, but in which the agent doesn’t cause Y. If such cases are possible, I would still count that as “causally contributing” to Y. 15 But don’t we sometimes hold parents liable for damage done by their children, even when the parents didn’t contribute causally to the damage and even if they couldn’t have prevented it? Indeed, we do. But, first, it’s not entirely clear to me that we should, and second, even if I’m wrong about that, the sorts of reasons that would justify this sort of strict parental liability seem not to be in play in Sharks. Thus, even if the principle to which this note is appended needs to be revised accordingly, I don’t think that would affect the point being made.
X. At no point in Sharks did John have any control over not rescuing the child. Therefore, John isn’t blameworthy in Sharks for not rescuing her. I take the first premise of this argument for granted, as, I think, do most of my interlocutors. But I do want to say something in defense of its second premise. Why think John had no control in Sharks over not rescuing the child? There are, it seems, two basic kinds of control. Following Fischer (1986: 261), I’ll call them “actual causal control” and “regulative control.” As Fischer explains, an agent has actual causal control over X if and only if he causally contributes to X in an appropriate way—for example, when a pilot, as a result of normal deliberation and decision about the matter, steers the plane he is flying west. An agent has regulative control over X if and only if the agent has some control over whether X—for example, when a pilot has control over whether the plane he is flying goes west or not. At no point in Sharks does John have either kind of control over not rescuing the child. Given that John couldn’t have rescued the child, he obviously never has any control over whether he rescues her; that is, he never has regulative control over not rescuing her. But neither does he have actual causal control over not rescuing the child, for as I argued above, he doesn’t causally contribute to his not rescuing the child. John therefore never has any control over his not rescuing the child and is, for that reason, not the least bit blameworthy for not rescuing her.
2.2 Frankfurt Omission Cases John, I’ve argued, isn’t blameworthy in Sharks for not rescuing the child. But, again, things are different in All Clear. In that case, John could easily have jumped in the water and pulled the child to safety. What’s more, he understood that he ought to rescue the child and that it was within his power to do what he ought to do. It appears, then, that, unlike in Sharks, John has no excuse for not rescuing the child and, consequently, that he is blameworthy for not rescuing her. But why is that? Why is John blameworthy in All Clear for not saving the child but not in Sharks? Again, the answer seems to stare us in the face. Surely,
it’s because John couldn’t have saved the child in Sharks, whereas he could have saved her in All Clear. That, it seems, is the crucial difference between the two cases that explains why John isn’t blameworthy in the first case for not saving the child but is blameworthy in the second case for not saving her. Not everyone agrees, though. Frankfurt, for example, acknowledges that “John bears no moral responsibility [in Sharks] for failing to save the child.” However, he denies that this is because John couldn’t have saved the child in that case (1994: 622–623). Why, then, is John off the hook in Sharks for not saving the child? “The real reason,” Frankfurt says, is that “what [John] does has no bearing at all upon whether the child is saved,” as the sharks “see to it that the child drowns no matter what John does” (1994: 623). Even if John had tried his hardest to save the child, he still would have failed to save her. Matters are otherwise in All Clear, though. If John had tried to save the child in that case, he would have succeeded in saving her, as there is no barrier between attempt and success in that case. His behavior in All Clear therefore does have a bearing on whether the child drowns. This, Frankfurt claims, is the crucial difference between the two cases that explains why John isn’t blameworthy in Sharks for not saving the child but is blameworthy in All Clear for not saving her. Alison McIntyre (1994) and Randolph Clarke (1994, 2011) say something similar. Both agree that John isn’t blameworthy in Sharks for not saving the child but that he is blameworthy in All Clear for not saving her. However, they deny that this difference in what John is blameworthy for in the two cases is explained by the fact that John couldn’t have saved the child in Sharks but could have saved her in All Clear. So, what do they think explains it? 
According to McIntyre, an agent is blameworthy for not A-ing only if “the agent could have done A if he or she had decided to do so in the actual circumstances” (1994: 467). Since John couldn’t have rescued the child in Sharks even if he had decided to do so, it follows that he isn’t blameworthy in that case for not rescuing her. Things are different in All Clear, though. Had John decided to save the child in that case, he could have saved her. It’s this difference between the two cases, McIntyre would say, that explains why John isn’t blameworthy in Sharks for not saving the child but is blameworthy in All Clear for not saving her. Clarke takes a similar view of the matter. As he sees it, a person is morally responsible for not A-ing, in the sort of case at issue, only if the person would have A-ed had he intended to A and had he tried to carry out that
intention.16 Since John wouldn’t have rescued the child in Sharks had he intended to rescue her and had he tried to carry out that intention, it follows that he isn’t blameworthy in that case for not rescuing the child. But, once again, things are different in All Clear. In that case, if John had intended to rescue the child, and if he had tried to carry out that intention, he would have rescued the child. It’s this difference between the two cases, Clarke says, that explains why John isn’t blameworthy in Sharks for not saving the child but is blameworthy in All Clear for not saving her. In assessing these views, I’ll focus mainly on Frankfurt’s position, but most of what I have to say about it also applies, mutatis mutandis, to the views of both McIntyre and Clarke. According to Frankfurt, the reason John isn’t blameworthy in Sharks for not saving the child is that “what [John] does has no bearing at all upon whether the child is saved,” as the sharks “see to it that the child drowns no matter what John does” (1994: 622). It’s worth pointing out that this last claim—the claim that, in Sharks, “the child drowns no matter what John does”—is false. Had John bypassed or somehow overcome the patrolling sharks, he would have succeeded in rescuing the child. True, he couldn’t have bypassed or overcome the sharks; he hasn’t the ability to do such things. Still, had he performed some such feat, the remainder of his rescue effort would have come off without a hitch. What is true about Sharks is that the child would have drowned no matter what John had done among those things he could have done. But that’s just to say that John couldn’t have prevented the child from drowning. This feature of Sharks therefore doesn’t offer us an alternative explanation of why John isn’t blameworthy in that case for not saving the child.
There is, of course, a sense in which John’s behavior has a bearing in All Clear on whether the child is saved, a bearing it doesn’t have in Sharks. Whether John saves the child counterfactually depends in All Clear but not in Sharks on whether he attempts (decides, intends) to rescue her. If John had tried (decided, intended) to save the child in All Clear, he would have saved her. Not so in Sharks. But why think such dependence matters, except insofar as it’s indicative of the fact that John could have rescued the child in All Clear but not in Sharks? 16 Clarke notes that this principle “requires restriction, revision, and refinement” (2011: 612). However, for the sake of simplicity, I omit the necessary restrictions, revisions, and refinements, as they aren’t relevant to what I have to say here about his position.
The answer concerns what are sometimes known as Frankfurt omission cases, cases like Sloth: John is walking on the beach when he sees a child struggling in the water. He believes that he could easily pull the child to safety and that he ought to do so, but he’s late for an important job interview and so decides not to help the child. To assuage his guilt, he tells himself that the child’s parents must be around somewhere and will soon turn up to make sure the child is okay. But the child is not okay, and she soon drowns. Unbeknownst to John, however, he couldn’t have rescued the child. Indeed, he couldn’t have tried or even decided to rescue her. In order for him to decide to rescue the child and then try to rescue her, John would first have had to seriously consider making the effort, which he never does. But if he had, a neuroscientist who is monitoring John’s thoughts would have picked up on this and would have used a neural control device (the same one used in Revenge) to force John to decide not to rescue the child and then to carry out that decision, and there is nothing John could have done to prevent this from happening.17
John couldn’t have saved the child in this case either. However, some philosophers claim that, here, unlike in Sharks, John is blameworthy for not rescuing the child.18 If John is indeed blameworthy in Sloth for not saving the child despite the fact that he couldn’t have saved her, the case would support Frankfurt’s position (as well as McIntyre’s and Clarke’s) vis-à-vis Sharks. To see why, note that, in both All Clear and in Sloth, John not saving the child counterfactually depends on whether he tries (decides, intends) to save her. It’s true that, in Sloth, John couldn’t have tried (decided, intended) to save the child. Still, if he had tried (decided, intended) to save her in that case, he would have saved her. But, as we have seen, things are different in Sharks. Had John tried to save the child in that case, he would have failed to save her. Here, then, we have a structural difference between cases like All Clear and Sloth on the one hand and cases like Sharks on the other. And, if John really is blameworthy in 17 This example is from Clarke (2014: 139), which, in turn, is based on a case from Frankfurt (1994: 622). 18 See, e.g., Clarke (1994; 2011; 2014: ch. 6), Fischer and Ravizza (1998: ch. 5), Frankfurt (1994), Glannon (1995), Haji (1992), McIntyre (1994), and Zimmerman (1994).
the former two cases for not saving the child but not in the latter, this structural difference, and not whether saving the child was an option for John, provides the obvious explanation as to why that's so. But is John blameworthy in Sloth for not rescuing the child? It's not at all obvious that he is. Indeed, when I reflect on that example, I have no inclination—none whatsoever—to deem John blameworthy for not rescuing the child. It seems pretty obvious to me that John isn't blameworthy in that case for not rescuing her. Why do some people think otherwise? A reason that's sometimes given for thinking that John is blameworthy in Sloth for not saving the child is that his not saving the child is an expected result of his decision not to save her, a decision for which he is blameworthy. The idea, spelled out a bit more fully, seems to be that because John is blameworthy for his decision not to help the child, and since that decision causes, in the expected way, John not to help the child, John's blameworthiness for the decision carries over to his not helping her, so that he is blameworthy both for his action and for his omission.19 This argument parallels one considered in section 2.1 for the conclusion that John is blameworthy in Sharks for not saving the child. It should be unsurprising, then, that this argument fails for the same reasons that earlier argument fails.20 As we saw in connection with that earlier argument, John's decision in Sharks not to save the child isn't a cause of his not saving her in that case. But there is no relevant difference, as regards causation, between Sharks and Sloth. Hence, John's decision in Sloth not to save the child isn't a cause of his not saving her in that case.21,22 Moreover, as we'll see in section 2.3, just because a person is 19 See, e.g., Frankfurt (1994: 623), Glannon (1995: 266), Haji (1992: 492), and Zimmerman (1994: 219). 20 It fails for an additional reason as well. 
In Sharks, John could have avoided deciding not to help the child and is blameworthy for deciding not to help her. But in Sloth (since it's a Frankfurt case), John couldn't have avoided deciding not to help the child and so it's going to be controversial whether he is blameworthy for so deciding. 21 What, then, does cause John not to save the child? The neuroscientist (together, of course, with other relevant background conditions, such as John's inability to bypass the neuroscientist). And I say this for the same reason that I say the sharks cause John's not saving the child in Sharks. While the neuroscientist doesn't cause John's decision not to rescue the child, he arguably does make it the case that John doesn't rescue the child. He does this by creating conditions in advance, well before John decides what to do, that make it impossible for John to even try to save the child. Consequently, it had already been settled, well before John decided what to do, that John wouldn't rescue the child. His decision not to rescue her thus occurs too late to be a cause of the relevant fact. 22 See Sartorio (2005) and (2017b) for a different argument for the conclusion that John's decision doesn't cause his not rescuing the child.
blameworthy for behavior that causes, as expected, a specific bad outcome, it doesn’t follow that his blameworthiness for his behavior will carry over to the bad outcome. A second reason that has been offered in support of the conclusion that John is blameworthy in Sloth for not rescuing the child is based on a comparison of that case with Frankfurt action cases, such as Hero: Matthew is walking on the beach when he sees a child struggling in the water. He knows that he could easily rescue the child and that he ought to do so, but he’s late for an important job interview. After a brief moment of deliberation, Matthew decides to do the right thing and rescue the child. He immediately carries out this decision and rescues the child. Unbeknownst to Matthew, though, he couldn’t have done otherwise than rescue the child, for if he had deliberated a moment longer or given any other indication that he might not help the child, he would have been overwhelmed with feelings of guilt, which in turn would have produced in him an irresistible desire to save the child.23
Hero is similar to Sloth in that, in both cases, the agent makes a decision in an ordinary way and then carries out that decision (again, in an ordinary way), even though, unbeknownst to the agent, he would have been compelled to perform those same actions, had he not performed them in the ordinary way that he did. Sharks, you’ll note, isn’t similar in that respect, as there is nothing that would have compelled John to ignore the child if he hadn’t done so on his own. Some take these similarities between Hero and Sloth to support the judgment that, in Sloth, John is blameworthy for not saving the child even though saving her was never an option for him. According to Frankfurt, for instance, Matthew is “clearly praiseworthy for his action despite the fact that” he couldn’t have avoided it. “But if Matthew is clearly praiseworthy,” Frankfurt says, “then John is as clearly blameworthy [in Sloth] for [not saving the child] despite the fact that” he couldn’t have saved her (1994: 622). Similarly, Clarke contends that, given the similarities between Hero and Sloth, “Hero is a better candidate to guide our judgment in Sloth 23 This case is from Fischer and Ravizza (1998: 58). In a variant of the case, it’s a neuroscientist rather than overwhelming feelings of guilt that would have compelled Matthew to rescue the child if he hadn’t done so on his own.
than is Sharks. And since in Hero an agent is responsible for doing something despite being unable to do otherwise, we have support here for the verdict that in Sloth the agent is responsible for not doing something despite being unable to do it" (2014: 151). The main difficulty with this argument is that it's easy to turn on its head. One could argue, in response, that John clearly isn't blameworthy in Sloth for not saving the child (though he may be blameworthy for other things in the story). So, given the similarities between Sloth and Hero, we should conclude that Matthew likewise isn't praiseworthy in Hero for saving the child (though, of course, he may be praiseworthy for other things in the story). I say this counterargument is sound. Whether I'm right about that depends, in part, on whether we have good reason to think that John, whatever else he might be blameworthy for in Sloth, isn't blameworthy in that case for not saving the child. Later, I'll argue that we do. A third reason that has been offered in support of the conclusion that John is blameworthy in Sloth for not rescuing the child is due to Clarke (2014). "In Sloth," Clarke says, "John intentionally doesn't save the child," but "It isn't clear that this is so in Sharks." That's because, "In Sharks, whether John intends one thing or another is irrelevant to whether he saves the child. No matter what he had intended, and no matter how hard he might have tried, he wouldn't have been able to carry out the rescue" (2014: 151–152). The same, though, can't be said about Sloth, as we have seen. Although John couldn't have intended to rescue the child in that case, and although he couldn't have tried to rescue her, it's still true that, had he intended to rescue her and had he tried to carry out that intention, he would have rescued her. 
Here, then, we have a potentially relevant difference between All Clear and Sloth on the one hand and Sharks on the other, a difference that could warrant the conclusion that John is blameworthy in the former two cases for not rescuing the child but not in the latter. Clarke summarizes the argument as follows: “The further case for the judgment that in Sloth [and in All Clear] John is responsible for not saving the child . . . is that he decides on his own not to do so and intentionally doesn’t do so. And what might be said to explain the different judgment in Sharks is that the second of these claims [that John intentionally doesn’t save the child] isn’t true in that case” (2014: 152–153). I won’t dispute Clarke’s claims about the application of the adverb “intentionally.” But what nonlinguistic features of the cases are doing the real work here? Apparently, it’s the fact that “In Sharks, whether John intends one thing
or another is irrelevant to whether he saves the child,” in the sense that “No matter what he had intended, and no matter how hard he might have tried, he wouldn’t have been able to carry out the rescue,” whereas this isn’t true in Sloth. In Sloth, if John had intended to save the child, and if he had tried his hardest to save her, he would have saved her. However, I fail to see why this difference between the cases matters given that John couldn’t have intended in Sloth to save the child and couldn’t have tried to save her. If the fact that John couldn’t have carried out the rescue in Sharks “No matter what he had intended, and no matter how hard he might have tried” is a good reason to think that John isn’t blameworthy in that case for not rescuing the child, then I should think the fact that John couldn’t even have intended to rescue the child in Sloth and couldn’t have tried to rescue her is at least as good a reason to think he isn’t blameworthy in that case for not rescuing her. The fact that John would have saved the child in Sloth if he had intended to save her and if he had tried to save her strikes me as irrelevant given that John could neither have intended nor tried to save her. One final point before proceeding. Clarke claims that John is blameworthy in Sloth for not saving the child but isn’t blameworthy in Sharks for not saving her. However, I don’t think Clarke has given us a good reason to accept that claim. According to Clarke, the relevant difference between the two cases has to do with whether John intentionally omits to save the child. In Sloth, he does, but in Sharks, he doesn’t. Again, I’m willing to grant (for the sake of argument) that, in Sharks, John doesn’t intentionally not save the child. But if that’s enough to get John off the hook for not saving the child, then it should be enough to get other agents in other cases off the hook for what they have done or failed to do. Evidently, though, it’s not. 
To see this, consider the following pair of cases: Voting Booth 1: Al votes for Gore by pulling “the Gore lever in a Florida voting booth” (Mele 2006: 25). He does so intentionally, and there’s nothing fishy going on. Voting Booth 2: Intending to vote for Gore, [Al] pulled the Gore lever in a Florida voting booth. Unbeknownst to Al, that lever was attached to an indeterministic randomizing device: pulling it gave him only a 0.001 chance of actually voting for Gore. Luckily, he succeeded in producing a Gore vote. (Mele 2006: 25)
In Voting Booth 1, Al intentionally votes for Gore, and (we may suppose) is morally responsible for voting for Gore. In Voting Booth 2, however, Al didn't intentionally vote for Gore, for as Mele observes, "Al's voting for Gore [in that case] was too lucky to count as an intentional action" (2006: 25). This, however, doesn't get Al off the hook for what he did. He can still be morally responsible for voting for Gore even though he didn't do so intentionally. Whether Al's action is intentional in these cases makes no difference to whether he is morally responsible for that action, and there is no reason to think things would be any different in omission cases like Sloth and Sharks. So, if, as Clarke contends, John is morally responsible in Sloth for not saving the child, then the fact that, in Sharks, John doesn't intentionally not save the child should make no difference to whether he is morally responsible in that case for not saving her. And, by the same token, if, as Clarke also contends, John isn't morally responsible in Sharks for not saving the child, then the fact that, in Sloth, John intentionally doesn't save the child shouldn't render him morally responsible in that case for not saving her. I've now considered three arguments for the judgment that John is blameworthy in Sloth for not rescuing the child and have found them all wanting. I'll now lay out the case against that judgment. I've already alluded to one reason for thinking that John isn't to blame in Sloth for not saving the child, namely, the similarities between that case and Sharks. John isn't blameworthy in Sharks for not rescuing the child. But there is no difference between the two cases that would warrant different judgments about whether John is blameworthy for not rescuing the child. So, if John isn't blameworthy in Sharks for not saving the child, then he isn't blameworthy in Sloth for not saving her either. 
Hence, John isn’t to blame in Sloth for not saving the child.24 I’ve already defended the first premise of this argument, the claim that John isn’t blameworthy in Sharks for not saving the child (see §2.1). And anyway, most people who think John is blameworthy in Sloth for not rescuing the child agree that he isn’t blameworthy in Sharks for not rescuing her. So, let’s focus on the argument’s second premise, the claim that if John isn’t blameworthy in Sharks for not rescuing the child, then he isn’t blameworthy in Sloth for not rescuing her either. Why think that?
24 Swenson (2015, 2016a) defends a similar argument.
Well, for starters, rejecting the premise has some strange consequences. As Carolina Sartorio points out, “the view entails that, if the neuroscientist had decided that it was too much trouble to monitor [John’s] brain and,” in an effort to ensure that John didn’t save the child, “had released the sharks instead, then [John] wouldn’t have been responsible for” not saving the child (2017b: 135). But, as Sartorio goes on to point out, it seems strange to suppose that whether John is blameworthy for not saving the child hinges on which of these two methods the neuroscientist uses to ensure that John doesn’t save the child. John isn’t blameworthy in Sharks for not rescuing the child, and he isn’t blameworthy for not rescuing her, it seems, because he couldn’t have rescued her even if he had tried his best to do so. How much more reason, then, do we have to exonerate John in a case like Sloth, in which he couldn’t have even tried to rescue the child? Again, if the fact that any attempt John might have made to save the child in Sharks would have been futile is a good excuse for his not saving her—and it definitely is—then I should think the fact that John couldn’t have even attempted to save the child in Sloth would be at least as good an excuse for his not rescuing her. Think about it like this. In both Sharks and Sloth there is an insurmountable barrier to John saving the child. In Sharks the barrier is between attempt and success, whereas in Sloth it’s between deliberation and attempt. But it seems odd to suppose that the exact placing of the barrier should make a difference as to whether John is blameworthy for not saving the child. How could simply pushing the barrier earlier in the potential sequence of events make the difference as to whether John is blameworthy for not saving the child? Intuitively, it couldn’t. Or think about it like this. 
John couldn’t have saved the child in either case, but in Sharks he could have at least tried to save her, whereas in Sloth he couldn’t have done even that, since the neuroscientist would have prevented him from trying to save the child had he seriously considered doing so. John thus seems to have even less control over things in Sloth than he has in Sharks. Given this feature of the cases, it’s plausible that if John isn’t blameworthy in Sharks for not rescuing the child, then he isn’t blameworthy for not rescuing the child in Sloth either. For how could John having less control over things in Sloth than he has in Sharks somehow render him blameworthy for more things in Sloth than he is blameworthy for in Sharks? Intuitively, it couldn’t. If an agent is blameworthy for n number of things in one case and has even
less control over what happens in an otherwise similar comparison case, it's implausible to suppose that the agent could somehow be blameworthy for n + 1 things in the comparison case. The point about control can be developed in a different way. Earlier I gave the following argument for the conclusion that John isn't blameworthy in Sharks for not saving the child:

1. A person is blameworthy for X only if the person had (at some point) some control over X.
2. At no point in Sharks does John have any control over not rescuing the child.
3. Therefore, John isn't blameworthy in Sharks for not rescuing the child.

If this argument is sound as applied to Sharks (and, of course, it is), then it should apply, mutatis mutandis, to Sloth as well. The first premise is the same in both cases, and everything I said earlier in defense of 2 applies to the claim that John never has any control in Sloth over his not rescuing the child. Given that John couldn't have rescued the child, he obviously doesn't have regulative control over not rescuing her. But neither does he have actual causal control over his not rescuing the child, since he doesn't causally contribute to his not rescuing her. So, at no point does John have any control in Sloth over not rescuing the child (i.e., over the fact that he doesn't rescue her), which means that he isn't blameworthy in that case for not rescuing her. Sloth has been invoked both to show that a person can be blameworthy for not A-ing even though he couldn't have A-ed and to support explanations of the sort put forward by Frankfurt, McIntyre, and Clarke for why John isn't blameworthy in Sharks for not rescuing the child, explanations that make no mention of the fact that rescuing the child wasn't an option for John. 
The story accomplishes the second task only if it accomplishes the first, and I’ve argued that it doesn’t accomplish the first, as John isn’t blameworthy in Sloth for not rescuing the child.25 So, we now have two cases—Sharks and Sloth—in which John isn’t blameworthy for not saving the child, and in both cases, John couldn’t have saved her. A telling pattern, it seems to me. Indeed, if John isn’t blameworthy in 25 This would be a natural point at which to discuss an influential account of moral responsibility for omissions proposed by Fischer and Ravizza (1998: ch. 5) that yields the result that John is blameworthy in All Clear and in Sloth for not saving the child but isn’t blameworthy in
either case for not saving the child, the obvious explanation for why that's so is that he couldn't have saved the child in either case. Or so it would seem. There is, however, at least one alternative explanation that needs to be considered, one that appeals to the causal structure of these scenarios.
2.3 A Causal Asymmetry I’ve argued that, in both Sharks and Sloth, John’s behavior (i.e., his decision not to help the child and his failure to try to help her) isn’t a cause of his not saving the child, and, consequently, that John doesn’t exercise actual causal control over his not saving her. The causal structure of All Clear, though, is different. In that case, John’s behavior is a cause of his not saving the child, and John arguably does exercise actual causal control over his not saving her. Perhaps, then, it’s this difference between the cases, and not the fact that John could have saved the child in All Clear but couldn’t have saved her in Sharks or in Sloth, that explains why John is blameworthy in All Clear for not saving the child but isn’t blameworthy in Sharks or in Sloth for not saving her. It could be argued that, in All Clear, John, in virtue of having actual causal control over his not saving the child, has sufficient control over his omission to be morally responsible for it. But, as we have seen, things are different in both Sharks and Sloth, for in those cases John doesn’t have any control over his not saving the child and, consequently, isn’t morally responsible in those cases for not saving her. We thus have a principled explanation of why John is morally responsible for not saving the child in All Clear but isn’t morally responsible for not saving her in Sharks or in Sloth, an explanation that doesn’t appeal to the fact that John could have saved the child in the former case but couldn’t have saved her in the latter two cases. Carolina Sartorio (2005, 2016a, 2016b) has developed and defended a position along these lines. Central to it is the following principle of transmission of responsibility (PTR).
Sharks for not rescuing her. However, there are well-known objections to that account to which I have nothing to add. Indeed, Fischer (2017) himself has been convinced by these objections. So, to save space in an already long chapter, I’ve omitted any discussion of Fischer and Ravizza’s account here. For the relevant objections, see Clarke (2011; 2014: ch. 5) and Swenson (2016a).
PTR: If an agent is blameworthy for X, X causes Y, and the agent satisfies the relevant epistemic conditions for responsibility, then the agent is also blameworthy for Y.26
Sartorio uses this principle to explain why John is blameworthy in All Clear for not saving the child but isn’t blameworthy in Sharks for not saving her. In All Clear, John’s decision not to help the child is among the causes of his not helping her and of the child’s death, and, since John is blameworthy in that case for his decision, and since we may safely assume that the relevant epistemic conditions for responsibility are satisfied, PTR tells us that John is also blameworthy in All Clear for not rescuing the child and for her death. However, this argument doesn’t work in Sharks, for in that case, John’s decision not to save the child isn’t among the causes of his not saving her. And since, as we have seen, John lacks any control in Sharks over his not saving the child, we can further conclude that John isn’t blameworthy in that case for not saving her. So, again, we have a principled explanation of why John is blameworthy in All Clear for not saving the child but isn’t blameworthy in Sharks for not saving her, one that doesn’t appeal to the fact that John could have saved the child in All Clear but couldn’t have saved her in Sharks. Hence, we needn’t appeal to that fact to explain our differing moral judgments about these cases. The key component of Sartorio’s position is PTR. Although that principle is initially quite appealing, on closer inspection several difficulties become apparent. As Sartorio herself notes, “there are some cases that show that [PTR] needs some tinkering.” One sort of case is when Y is a good outcome. To illustrate the problem, Sartorio invites us to “Imagine that, if an agent A does X, then 10 starving children will be fed (let this be Y); however, if A doesn’t do X, then 100 starving children will be fed.” Sartorio then points out that, “Assuming A knows all this and does X freely, she seems blameworthy for doing X, but she doesn’t seem blameworthy for Y . . . 
, given that Y is not itself a bad outcome.” But PTR has the rather counterintuitive implication that the agent in this case is blameworthy for Y (10 starving children being fed). Sartorio concludes that PTR “would have to be understood in a way 26 Sartorio (2016b: 542) initially formulates the principle in terms of responsibility, but immediately restricts it to blameworthiness. Elsewhere, Sartorio refers to the principle as the principle of derivative responsibility (2016a: 76).
that avoids this counterintuitive implication,” perhaps by restricting it to bad outcomes (2016a: 77, n. 35). The problems don’t end there, however. Consider a case in which Y is a bad outcome. Two Buttons, One Bomb: Murdoch is addicted to pressing buttons, so much so that he is compelled (in a way that isn’t responsive to reasons) to press buttons whenever he has the opportunity to press them. Murdoch has just been given the choice between pressing a red button or a green one. Given his addiction, Murdoch can’t help pressing one of the two buttons, but which to press? If he presses the red button, that will detonate a bomb killing hundreds of innocent people, including Stepney, Murdoch’s sworn enemy, and it will also deduct $100 from the bank account of a struggling family. If he presses the green button, that will deactivate the bomb, but it will deduct $1,000 from the account of the struggling family. Knowing all of this, Murdoch freely presses the red button in an effort to kill Stepney, despite knowing that this is the morally wrong thing to do.
Murdoch is blameworthy for pressing the red button and blameworthy as well for the resulting explosion and loss of life. It's doubtful, though, that he is blameworthy for the family losing $100, even though it's an expected bad outcome of an action for which he is blameworthy. If he had done the right thing and pressed the green button, the family would have lost even more money. Even more troubling for PTR are cases like Two Buttons, One Bomb 2: Murdoch is addicted to pressing buttons, so much so that he is compelled (in a way that isn't responsive to reasons) to press buttons whenever he has the opportunity to do so. Murdoch has just been given the choice between pressing a green button or a red one. He knows that if he presses the green button, this will cause $10,000 to be deposited into the bank account of a needy family, but it will also detonate a bomb that will kill hundreds of innocent people. Murdoch also knows that if he presses the red button, this will cause $10,000 to be deposited into his bank account (Murdoch is reasonably well-off and doesn't need the money), but it will still detonate the bomb. Given Murdoch's addiction to pressing buttons, he can't help pressing one or the other of the two buttons, but it's up to him (and
he's reasons responsive with respect to) whether he presses the green button or the red one. Thinking about the fun stuff an extra ten grand would buy him, Murdoch freely presses the red button in an effort to secure the extra cash, and he does so despite being aware that this is the morally wrong thing to do.
Here, too, Murdoch is blameworthy for pressing the red button, and here too his doing so detonates the bomb, causing the explosion and resulting loss of life. Moreover, we may safely assume that the relevant epistemic conditions for responsibility are satisfied. PTR thus implies that Murdoch is blameworthy in this version of the case for the explosion and loss of life. However, as I'll now argue, Murdoch isn't blameworthy in this case for those outcomes. Murdoch would have caused the explosion and loss of life even if he had done the best he could have done in the situation, which was to press the green button. But if an agent would have caused a certain outcome even if he had done the best he could have done in the situation, it's plausible that the agent isn't blameworthy for the outcome in question.27 It follows that Murdoch isn't blameworthy for the explosion or the resulting loss of life, even though he is blameworthy for their immediate cause and even though the epistemic conditions for responsibility are satisfied. PTR, then, is false. However, there is another version of Sartorio's argument that isn't vulnerable to the difficulties highlighted by the preceding pair of Two Buttons, One Bomb cases. Central to this alternative version of the argument is the principle of derivative blameworthiness (PDB). PDB: If an agent is blameworthy for performing an act X partly because she could foresee that it would likely causally result in Y, and X resulted in
27 To see the plausibility of this claim, imagine you’re the president of a company and that you have three competing policies you are thinking about implementing. You choose the second- best policy because it’s easiest for you, a choice that results in Chris being laid off. Suppose, though, that choosing the best policy would still have resulted in Chris being laid off. The only way Chris wouldn’t have been laid off is if you had chosen the worst of the three policies. Now, while you might be blameworthy for choosing the second-best policy and for not choosing the best policy, you’re not blameworthy for Chris being laid off, for while that may have been an expected bad outcome of your bad decision, you would still have caused Chris to be laid off even if you had chosen the best policy.
Y in roughly the way that she anticipated, then she is blameworthy for Y. (Sartorio 2016a: 77)
Unlike PTR, this principle doesn't imply that Murdoch is blameworthy in Two Buttons, One Bomb for the family losing $100. While Murdoch is blameworthy for pressing the red button, and while that causes the family to lose $100 in the anticipated way, his blameworthiness for pressing the red button isn't due even in part to the fact that he could foresee that pressing it would result in the family losing $100. Rather, he is blameworthy for pressing the button in part because he could foresee that this would unjustifiably result in the death of innocent people.28 Nor does PDB imply that Murdoch is blameworthy in Two Buttons, One Bomb 2 for the explosion and resulting loss of life. Again, while Murdoch is blameworthy in that case for pressing the red button, and while that causes the explosion and loss of life, Murdoch isn't blameworthy for pressing the button because he could foresee that doing so would have those outcomes. Instead, he is blameworthy for pressing the button because he could foresee that doing so would make him richer at the expense of a struggling family in need of financial assistance. Note, moreover, that PDB, no less than PTR, can be used to explain why John is blameworthy in All Clear for not rescuing the child but isn't blameworthy in Sharks for not rescuing her. It could be said that, in All Clear, John is blameworthy for his actions (e.g., his decision not to rescue the child) and that he is blameworthy for them in part because he could foresee that they would result in his wrongly not rescuing the child and in the child's death. Since his actions resulted in his not rescuing the child and in the child's death in exactly the way he anticipated they would, PDB tells us that John is blameworthy in All Clear for not rescuing the child and for the child's death. The same isn't true in Sharks, though, since John's actions in that case aren't a cause of his not rescuing the child or of the child's death. 
PDB thus enables us to explain why John is blameworthy in All Clear for not saving the child but isn’t blameworthy in Sharks for not saving her, and it enables us to do so without appealing to the fact that John could 28 Below I raise doubts about whether the agent in these sorts of cases is blameworthy for the relevant action in part because he could foresee that it would have certain bad outcomes. For now, though, I set those doubts aside.
have saved the child in All Clear but couldn't have saved her in Sharks. We therefore needn't appeal to that fact to explain our moral judgments about these cases. A crucial step in this version of the argument is the claim that John is blameworthy for his actions in All Clear in part because he could foresee that they would result in his not saving the child and in the child's death. But, as I'll now argue, that claim is false. John is blameworthy for his actions in All Clear, but not because he could foresee that they would have those outcomes. Rather, John is blameworthy for his actions in All Clear in part because he justifiably believes that they will result in his not rescuing the child and, quite possibly, in her death. John is equally blameworthy for his actions (i.e., for his decision not to rescue the child and for continuing to his interview without trying to help the child, etc.) in both All Clear and in Sharks, and it's plausible that he is blameworthy for those actions for the very same reasons in both cases. Adding the sharks to the scenario shouldn't affect why John is blameworthy for the relevant actions. But the reason John is blameworthy for his actions in Sharks isn't that he could foresee that they would likely result in his not saving the child and in the child's death, since, as we have seen, his actions in that case aren't likely to cause those outcomes. (I'm assuming here that "foresee" is a success term, so that a person can foresee that his behavior will have a certain result only if his behavior [likely] will have that result.) Rather, what partly explains why John is blameworthy for his actions in Sharks is that he reasonably believes that those actions will result in his wrongfully not rescuing the child and perhaps too in the child's death. 
Hence, what partly explains why John is blameworthy for his actions in All Clear is that he reasonably believes that they will result in his wrongfully not rescuing the child and perhaps too in her death. To accommodate this point, Sartorio would need to revise PDB accordingly, perhaps to

PDB*: If an agent is blameworthy for performing an act X partly because she reasonably believes that it will likely causally result in Y, and X results in Y in roughly the way that she anticipated, then she is blameworthy for Y.
Since John is blameworthy in All Clear for his actions partly because he justifiably believes that they will result in his not saving the child and in the child's death, and since his behavior results in those outcomes in the anticipated way, it follows from PDB* that John is blameworthy in All Clear for not rescuing the child and for the child's death. But, again, a similar line of reasoning won't work in Sharks, since John's behavior in that case isn't a cause of the child's death.

The crucial step in this version of Sartorio's argument is PDB*. However, that principle is false for much the same reason that PTR is false. To see this, consider a third (and final) version of Two Buttons, One Bomb.

Two Buttons, One Bomb 3: Murdoch is addicted to pressing buttons, so much so that he is compelled (in a way that isn't responsive to reasons) to press buttons whenever he has the opportunity to do so. Murdoch has just been given the choice between pressing a green button or a red one. He knows that if he presses the green button, this will cause $10,000 to be deposited into the bank account of a needy family. He also knows that if he presses the red button, this will cause $10,000 to be deposited into his bank account (Murdoch is reasonably well-off and doesn't need the money), but that it will also detonate a bomb that will kill hundreds of innocent people, one of whom is Stepney, Murdoch's mortal enemy. Unbeknownst to Murdoch, though, pressing the green button will also detonate the bomb and kill Stepney. Given Murdoch's addiction to pressing buttons, he can't help pressing one or the other of the two buttons, but it's up to him (and he's reasons responsive with respect to) whether he presses the green button or the red one. Murdoch freely presses the red button in an effort to enrich himself and to rid the world of Stepney, despite knowing that this is the morally wrong course of action.
Murdoch is again blameworthy for pressing the red button, and this is so in part because he reasonably believes that pressing the button will result in Stepney's death. Pressing the button results in Stepney's death in roughly the way that Murdoch anticipated it would. However, contrary to what PDB* implies, Murdoch isn't blameworthy for Stepney's death (though he may, of course, be blameworthy for other things, like intending to kill Stepney), and this for the same reason that he isn't blameworthy for the bad outcomes in Two Buttons, One Bomb 2. Murdoch would have caused Stepney's death even if he had done the best he could have done in the situation, which was to press the green button instead. But, again, if an agent would have caused a particular outcome even if he had done the best he could have done in the situation, it's plausible that the agent isn't blameworthy for the outcome in question. It follows that Murdoch isn't blameworthy for Stepney's death and, consequently, that PDB* is false.

There is a causal asymmetry between cases like All Clear and Sharks. In All Clear, John causes (is a cause of) his not rescuing the child, whereas that isn't so in Sharks. But this difference between the cases isn't what explains why John is blameworthy in All Clear for not saving the child but isn't blameworthy for not saving her in Sharks; for, as we have seen, causing a bad outcome, even when the outcome is caused in an ordinary way, and even when the agent is aware that his behavior will result in that outcome, isn't enough to render an agent blameworthy for the outcome. By itself, then, the causal asymmetry between cases like All Clear and Sharks doesn't account for the difference in what the agent is and isn't blameworthy for in those cases. What does account for that difference? Here's a plausible suggestion to which we have yet to see a more compelling alternative: in All Clear John could have saved the child, whereas in Sharks he couldn't have saved her.
2.4 The Symmetry Thesis

The scope of an agent's moral responsibility in omission cases, I've argued, is determined in part by what the agent could and couldn't have done. From here it's a short step to the conclusion that the scope of an agent's moral responsibility in action cases, including Frankfurt cases like Revenge, is also determined in part by what the agent could and couldn't have done. All we need is the symmetry thesis, according to which the determinants of moral responsibility (i.e., the conditions a person must satisfy in order to be, and in virtue of which the person is, morally responsible for things) are the same whether it's responsibility for omissions or responsibility for actions that's at issue. Let us, then, see what can be said for (and against) that thesis.

The symmetry thesis is surely the default position, for, as David Hunt observes, it seems "there should be some unitary account of what it is to be morally responsible that would encompass both [action and omission] cases" (2005: 139). To reject the thesis absent a compelling reason to do so would needlessly complicate our theory of moral responsibility. We should therefore reject it only if other theoretical demands require us to do so.

Perhaps, though, such theoretical demands are not far to seek. An action is an event of a certain kind, whereas, on a popular view of omissions, omissions are the absences of actions, and absences of events aren't themselves events of any kind.29 What, then, are they? There are several possibilities here. Some say that absences are facts (e.g., the fact that the relevant event didn't occur), others that they are negative states of affairs, others that they are unactualized possibilities, and still others that they are nothing at all.30 We needn't settle on one of these views here. The important thing to note for present purposes is that, on all these views of what omissions are, omissions turn out to be ontologically different from actions. This is especially true when it comes to the last position on the list, according to which omissions are literally nothing. If that view is correct, then actions and omissions are about as ontologically different as they could be. It's the difference between existing and not. Would it, then, really be all that surprising if the determinants of moral responsibility for actions and the determinants of moral responsibility for omissions were asymmetrical in important respects?

Some think not. Clarke, for example, contends that if omissions aren't entities of any kind, "it is to be expected that there might well be major differences between what is required for responsibility for actions and what is required for responsibility for omissions" (2011: 622). Omissions may indeed be nothing. But if they are, then when an agent is morally responsible for omitting to perform a certain action, it's not an omission for which he is morally responsible (there being no such entity for the person to be morally responsible for).
He is, rather, morally responsible for the fact that he didn't perform the relevant action.31 There needn't be anything in the world that is the agent's omission for that to be true. Much the same thing can be said on the other views of omissions mentioned earlier. This is obviously true if omissions are just facts about an agent not performing certain actions. If, instead, omissions are negative states of affairs, then it's the relevant state of affairs for which the agent is morally responsible. And, if omissions are unactualized possibilities for action, the agent is morally responsible for the fact that the relevant possibilities for action remain unactualized.

29 For this view, see Clarke (2014: chs. 1–2).
30 See Bernstein (2015a) and Clarke (2014) for discussion of these and other views about the nature of absences.
31 Clarke (1994: 196–197) takes a similar position.

On all these views, though, when an agent is morally responsible for not performing a certain action, what the agent is responsible for is a certain fact or state of affairs. Once we recognize this, we can see that there is no relevant difference here between moral responsibility for actions and moral responsibility for omissions. In a typical case in which a person is morally responsible for acting in a certain way, one thing the person is morally responsible for is the fact that he acted in that way (or the state of affairs of his acting in that way). In both action and omission cases, then, it's certain facts about the person's behavior (or certain states of affairs involving that behavior) for which the person is morally responsible. Thus, what the agent is morally responsible for is of the same ontological kind whether we have in view moral responsibility for actions or instead moral responsibility for omissions.

Well, almost. In cases of moral responsibility for action, the agent is also morally responsible for the action itself, an event, and not just for the fact that he performed that action. In cases of moral responsibility for omitting to perform some action, though, there is no event corresponding to the fact that the agent didn't perform the relevant action.
This follows from the view presently being assumed that an omission isn't itself an event (e.g., it's not some sort of negative event) but instead the absence of the relevant event.32 Thus, in typical cases of moral responsibility for action, the agent is morally responsible both for the fact that he performed the relevant action and also for the corresponding action, whereas in typical cases of moral responsibility for omission, the agent is morally responsible only for the fact that he didn't perform the relevant action, there being no action or other event corresponding to that fact in the way there is in cases of moral responsibility for action. This difference between cases of moral responsibility for actions and cases of moral responsibility for omissions hardly seems relevant, though, unless there is a difference between events on the one hand and facts and states of affairs on the other that might ground an asymmetry in the determinants of moral responsibility for actions and the determinants of moral responsibility for facts and states of affairs. But as far as I can tell, there is no such difference. I see nothing about actions qua events to suggest that the determinants of moral responsibility for them differ significantly from the determinants of moral responsibility for states of affairs or facts about agents' behavior.

32 I'm assuming this view only because it's the basis for the objection to the symmetry thesis presently being considered. If it should turn out that omissions aren't absences but events of a more familiar sort, this objection, which turns on the assumption that actions and omissions are ontologically very different, would be dead in the water. For a recent defense of the view that omissions are ontologically very similar to actions, see Silver (2018).

An importantly different objection to the symmetry thesis, one that, if sound, enables us to retain much of what is plausible about the thesis, is due to Fischer (1986) and Fischer and Ravizza (1991).33 Fischer and Ravizza observe that, in cases like Revenge, the agent seems to be morally responsible for doing what he couldn't have avoided doing, whereas, in cases like Sharks, the agent seems not to be morally responsible for not doing what he couldn't have done. These judgments are initially quite puzzling, as they imply a seemingly inexplicable asymmetry between actions and omissions when it comes to moral responsibility. However, Fischer and Ravizza contend that there is a plausible theoretical framework that both explains and justifies this initially puzzling asymmetry by showing that it "follows from a symmetrical deep principle," one that "connects responsibility with control in a certain way" (1991: 271). The "deep principle" is that "moral responsibility requires control," either actual causal control or regulative control (Fischer 1986: 266). "This principle," Fischer and Ravizza point out, "treats acts and omissions symmetrically at a deep level," insofar as it implies that moral responsibility for both actions and omissions requires some sort of control (1991: 271). The asymmetry concerns the type of control that's required. Moral responsibility for actions requires actual causal control, whereas moral responsibility for omissions requires regulative control. But why is that?
Why would moral responsibility for actions require one kind of control and moral responsibility for omissions require another? The answer, according to Fischer and Ravizza, has to do with the fact that actions and omissions involve different relations between the agent and certain events associated with the relevant action. When an agent acts freely, the agent exercises actual causal control of the action and events involved therein. For example, if an assassin freely kills a victim, the assassin exercises actual causal control over the victim's death. Things are different in the case of omissions, though. When an agent omits an action, the agent doesn't exercise actual causal control over the relevant events. Often this is because those events don't occur. If the assassin in the preceding example had omitted to kill the victim, the assassin wouldn't have exercised actual causal control over the victim's death. In the simplest case this would be because the victim wouldn't have died. Actions and omissions thus "involve different relations between an agent and some possible [event]; in actions the relation is a certain sort of causation, and in omissions the relation is the lack of this sort of causation" (1991: 270).

This difference between actions and omissions, Fischer and Ravizza contend, entails a difference in the sort of control required to be morally responsible for each. When an agent acts freely, he exercises actual causal control over the relevant events, and so satisfies the control condition on moral responsibility. This is so, moreover, even if the agent doesn't have regulative control over those events. Actual causal control is enough. Thus, an agent can be morally responsible for his action even if he lacked regulative control over it and so even if he couldn't have avoided performing that action. But things are different in the case of omissions. When an agent omits to act, he doesn't exercise actual causal control of the relevant event. "So in the case of an omission, if the agent is to have any sort of control of the relevant event . . . , he must have regulative control over it" (1991: 271). And, as Fischer and Ravizza go on to explain, having regulative control over an omission requires having the option to perform the omitted act.

33 In their later work, Fischer and Ravizza repudiate this asymmetrical view. See Fischer and Ravizza (1998: ch. 5). More recently, though, Fischer (2017) has returned to the view (or something close to it).
Thus, whereas moral responsibility for A-ing doesn’t require that the agent could have avoided A-ing, moral responsibility for not A-ing does require that the agent could have avoided not A-ing. This account provides a nice explanation of our intuitions about cases like Revenge and Sharks. In Revenge, Jones exercises actual causal control over his decision to kill Smith and thus (assuming other conditions for moral responsibility are satisfied) is morally responsible for that decision. But, in Sharks, John doesn’t have actual causal control over the movements that would be involved in his rescuing the child, since he doesn’t perform those movements, and neither does he have regulative control over them, since he couldn’t have done everything that needed to be done to rescue the child. So, in Revenge, Jones has sufficient control over his decision to kill Smith, at least by Fischer and Ravizza’s lights, to be morally responsible for that decision even though
he couldn’t have avoided it, whereas John, in Sharks, has no control over his omission and thus doesn’t have sufficient control over that omission to be morally responsible for it. Ultimately, however, Fischer and Ravizza’s asymmetrical view is untenable. They may be correct that when an agent performs an action, he brings about a related event. They may also be correct that “when an agent omits to do something, he (typically) fails to bring about the same type of possible event,” and thus that “Actions and omissions . . . involve different relations between an agent and some possible [event].” But it doesn’t follow that when an agent omits to do something he doesn’t exercise actual causal control over relevant items. As we have seen, agents can have actual causal control of their omissions (or of the relevant facts about what they have omitted to do). In All Clear, John decides not to save the child and carries out that decision, thereby causing (in an appropriate way) his not saving the child. In doing so, he exercises actual causal control of his omission (or, if omissions aren’t entities of any kind, of the fact that he omitted to perform the action).34 Agents can thus have actual causal control of their actions and of their omissions. So, if, as Fischer and Ravizza (1991: 267) claim, having actual causal control satisfies the control requirement for moral responsibility in the case of actions, I can see no reason why the same wouldn’t be true in the case of omissions. And, by the same token, if, as our Two Buttons, One Bomb cases seem to indicate, actual causal control isn’t enough to satisfy the control requirement, if some sort of regulative control (e.g., having a fair opportunity to do otherwise) is required, I can see no reason why it would be required for moral responsibility for omissions but not moral responsibility for actions. 
Thus, whatever kind of control is required for moral responsibility, it’s arguably the same sort of control whether it’s moral responsibility for actions or moral responsibility for omissions that’s at issue. Symmetry is again preserved.
34 In their later work, Fischer and Ravizza (1998: 148) recognize this point.

2.5 Preliminary Conclusions

The agent in omission cases like Sharks decides not to A and doesn't try to A, though he could have instead decided to A and could have tried to carry out that decision. But the agent couldn't have A-ed even if he had tried. Consequently, it seems that while the agent might be morally responsible for deciding not to A, not deciding to A, and not trying to A, he isn't morally responsible for not A-ing. The scope of an agent's moral responsibility in such cases thus appears to be determined in part by what the agent could and couldn't have done. Moreover, it's plausible that the determinants of moral responsibility are the same whether it's moral responsibility for actions or moral responsibility for omissions that's at issue. Thus, we may further conclude that the scope of an agent's moral responsibility in action cases like Revenge is also determined in part by what the agent could and couldn't have done. The agent in such cases performs an action A on his own at t, though he could have avoided A-ing on his own, could have tried harder not to A, and perhaps too could have avoided A-ing at t. But he couldn't have avoided A-ing even if he had tried. Consequently, while the agent is perhaps morally responsible for A-ing on his own, for not trying harder not to A, and for A-ing at t, he isn't morally responsible for A-ing, which, of course, is just what PAP predicts. Thus, far from being counterexamples to that principle, cases like Revenge seem to provide further confirmation of it.
3 Objections and Replies

Having argued for the fine-grained analysis of Frankfurt cases, I now respond to eight objections to it. Although a couple of these objections may highlight the need for some relatively minor modifications to the analysis, none of them gives us good reason to doubt its core tenets.
3.1 The Irrelevance Argument

The first objection is due to Frankfurt (1969) and takes the form of an argument for the conclusion that Jones is blameworthy in Revenge for deciding to kill Smith, contrary to what the fine-grained analysis implies. Frankfurt notes that the features of Revenge that make it impossible for Jones to do otherwise—the neuroscientist and his neural control device—aren't among the causes of the decision Jones makes to kill Smith and make no difference to whether Jones decides to kill Smith. But, according to Frankfurt, things that are "in this way irrelevant to the problem of accounting for a person's action" are also irrelevant to whether the person is morally responsible for his action (1969: 837). Hence, Frankfurt concludes that the neuroscientist and his device are irrelevant to whether Jones is morally responsible for deciding to kill Smith. And so, since Jones presumably would have been blameworthy for deciding to kill Smith in an ordinary version of the story in which the neuroscientist and his device are absent and in which Jones could have done otherwise than decide to kill Smith, Frankfurt further concludes that Jones is no less blameworthy in Revenge for deciding to kill Smith. Following Michael McKenna (2008: 772), I call this Frankfurt's irrelevance argument.

The difficulty with the argument, which I think is now widely recognized, is that, contrary to what Frankfurt claims, facts that make no difference to whether a person behaves as he does and that play no role in causing or otherwise explaining the person's behavior often figure in our assessment
of the person's moral responsibility for his behavior, and rightly so, it would seem. A person who performs an action while being aware of features of the action that make it immoral may act despite his awareness of those features, not because of them. The person's awareness of the wrong-making features of the action needn't have anything to do with why the person performs the action, nor need it have an impact on whether the person performs the action. Yet the fact that the person is aware, at the time of action, of the wrong-making features of his action is morally significant and may have a bearing on whether the person is blameworthy for performing the action. Indeed, it seems to be just the sort of fact we would need to take into account when assessing whether and to what extent a person is blameworthy for his behavior.1 Here's an example of Carolina Sartorio's (2016a: 36) that nicely illustrates the point:

Squeaky Button: Carolina really loves squeaky noises. She knows that pushing a certain button will result in a squeaky noise, so she pushes the button to hear the noise. She also knows that pushing the button will result in a remote village being destroyed. However, she still pushes the button, not because she wants the village to be wiped out, but simply because she really wants to hear that squeaky sound she loves so much.
Carolina pressed the button in this case not because she was aware that doing so would result in a village full of innocent people being destroyed but rather despite her awareness of that fact. Her awareness of what would happen to the village if she pressed the button thus had nothing to do with how her action came to be performed or why she performed it and is therefore irrelevant to an explanation of why she performed the action. Nor did her awareness of what would happen to the village make a difference to whether she did what she did. She would have done the same thing, in the same way, and for the same reason, even if she hadn't been aware of the danger to the village. Carolina's awareness of what would happen to the village if she pressed the button is nevertheless clearly relevant to an accurate assessment of her moral responsibility for pressing the button. Carolina is blameworthy (and thus morally responsible) for pressing the button, and this is so in part because she was aware of what would happen to the village if she pressed it. The fact that she was aware of what would happen to the village if she pressed the button also seems relevant to whether she is blameworthy for pressing the button; for if, through no fault of her own, she had been ignorant of what would happen to the village if she pressed the button, then she wouldn't be blameworthy for pressing it. So, contrary to what Frankfurt claims, facts that are irrelevant to an explanation of why a person did what she did, and that make no difference to whether the person behaved in that way, are nevertheless sometimes relevant to an assessment of whether and to what extent the person is morally responsible for her behavior.2

1 This objection to Frankfurt's claim is originally due to Widerker (2000: 190). See also Sartorio (2016a: 36).

Frankfurt has acknowledged these difficulties with his argument and has revised it accordingly. "What counts in the assessment of a person's moral responsibility," Frankfurt now says, "is not only what causes, reasons, or motives led to his action. It is also important to appreciate what sort of act he thought he was performing. A morally pertinent explanation of what a person has done must include an account of what he believed himself to be doing" (2003: 342–343). But the fact that Jones couldn't have avoided deciding to kill Smith doesn't tell us anything about "what sort of act he thought he was performing," nor does it tell us anything about "what causes, reasons, or motives led to his action," since he would have done the same thing for the same reasons even if he could have avoided deciding to kill Smith. Hence, Frankfurt concludes that that fact is irrelevant to whether Jones is morally responsible for his action. And, again, since Jones would have been blameworthy for the action in a version of the story in which he could have done otherwise, it follows that he is no less blameworthy for it in Revenge.
This revised version of the irrelevance argument relies implicitly on the following premise: a fact is relevant to the assessment of a person's moral responsibility only if it tells us something about "what causes, reasons, or motives led to his action" or about the sort of thing the agent took himself to be doing. This premise, you'll notice, isn't threatened by our judgments about cases like Squeaky Button. Although the fact that Carolina was aware of what would happen to the village if she pressed the button doesn't help explain why she pressed the button (i.e., it doesn't tell us "what causes, reasons, or motives" led her to press it), that fact does tell us something about the sort of thing she took herself to be doing, which would explain why it's relevant to her moral responsibility for pressing the button. The revised irrelevance argument therefore isn't vulnerable to the sorts of difficulties that afflict the original version.

2 The point also calls into question an argument of Fischer's (2010) about Frankfurt cases. Applied to Revenge, Fischer's claim is that if Jones isn't morally responsible for deciding to kill Smith, it isn't because Jones couldn't have done otherwise. In support of that claim, Fischer contends that while the neuroscientist and his device made it inevitable that Jones would decide to kill Smith, these things "are irrelevant to Jones's moral responsibility" (2010: 330). But why think that? One way to defend the contention would be to point out that the neuroscientist and his device played no role in the production of Jones's decision. But as we have just seen, the fact that something plays no role in the production of an agent's action doesn't mean that that thing is irrelevant to whether or why the person is morally responsible for the action. We are, then, left to wonder why the neuroscientist and his device are "irrelevant to Jones's moral responsibility." For a similar reply to Fischer, see Palmer (2014).

There are, however, other difficulties with the revised argument. Consider, again, its central premise: facts that don't tell us anything about what causes, reasons, or motives led a person to behave as he did, or about the sort of thing the person took himself to be doing, are irrelevant to an assessment of the person's moral responsibility for what he did. But that's just not so. To see this, consider the following variants of Sartorio's Squeaky Button.

Minimal Decency: Carolina loves squeaky noises. She knows that pushing the button now in front of her will result in a squeaky noise. She also knows that pushing the button will prevent a remote village from being wiped out. She is in no way tempted to do otherwise than press the button now, and she presses it straightaway, both so that she can hear that squeaky sound she loves so much and also to save the village from destruction.

Heroic Effort: Unlike Carolina, Juan hates squeaky noises. Indeed, he's positively terrified of them, and the desire he experiences to avoid hearing them is nigh irresistible for him.
Juan knows that pressing the button now in front of him will result in a squeaky sound, but he also knows that pressing the button will prevent a remote village from being wiped out. So, he gathers his courage, and though it causes him great discomfort, he presses the button in an effort to save the village from destruction.
Carolina, it seems to me, isn’t praiseworthy for pressing the button in Minimal Decency. After all, it was the minimally decent thing to do, and
a person doesn’t get any credit for minimal decency, at least not in the absence of serious pressure to do otherwise. Juan, however, is a different story. It seems to me that he is praiseworthy for pressing the button in Heroic Effort, for while pressing the button was the minimally decent thing to do, he was under serious pressure to do otherwise. It took some real courage and moral fortitude for him to put aside his discomfort and his fear of squeaky noises and to do what was morally required of him. For that reason, it seems to me that he deserves at least some credit for pressing the button and saving the village. The fact that Juan is so afraid of squeaky noises is relevant to whether he is morally responsible for pressing the button, for had he not been so afraid of them, pressing the button wouldn’t have required any courage or moral fortitude, in which case he wouldn’t have been praiseworthy for pressing it. By itself, however, that fact doesn’t tell us anything about what causes, reasons, or motives led him to behave as he did (he pressed the button despite his fear, not because of it), nor does it tell us anything about the sort of thing he took himself to be doing.3 Heroic Effort is thus a counterexample to the main premise of the revised irrelevance argument.4 Contrary to what Frankfurt claims, facts that play no role in explaining a person’s behavior and that don’t tell us anything about the sort of thing the person took himself to be doing can be relevant to an assessment of the person’s moral responsibility for the behavior in question. Once we recognize this, it’s no longer clear why the neuroscientist and his device should be deemed irrelevant to whether Jones is morally responsible for what he did. True, they don’t causally explain Jones’s action, nor do they in any way impact the actual sequence of events that led Jones to behave as he did. 
But as we have just seen, that by itself doesn’t mean that they couldn’t be relevant to whether he is morally responsible for deciding to kill Smith.
3 Does the fact that Juan is afraid of squeaky noises tell us that he took himself to be performing a courageous act? And if so, wouldn't that call into question my claim that Juan's fear of squeaky noises doesn't tell us anything about the sort of thing he took himself to be doing? Indeed, it would. But the fact that Juan is afraid of squeaky noises doesn't itself tell us that he took himself to be acting courageously. Juan might have been so overwhelmed by the situation that it never even occurred to him that he was acting courageously.
4 Indolence, discussed in §3.7, is also a counterexample to the premise. See note 18.
62 Moral Responsibility and the Flicker of Freedom
3.2 Robustness

Perhaps the most well-known objection to the fine-grained analysis of Frankfurt cases is due to Fischer (1994: 140–147). Fischer is willing to grant, at least for the sake of argument, that, in a case like Revenge, what the agent is really morally responsible for is performing the relevant action on his own. He also acknowledges that there is an alternative possibility to the agent acting on his own. However, Fischer contends that that alternative is "not sufficiently robust to ground the relevant attributions of moral responsibility" (1994: 140). So, while the agent may be morally responsible for acting on his own, this isn't due even in part to the fact that he could have done otherwise than act on his own, which, if true, would be enough to falsify PAP.

The crucial claim here is that the alternative possibility for action in which the agent doesn't act on his own but is instead compelled to act by the neural control device is "not sufficiently robust to ground the relevant attributions of moral responsibility." But why think that? Fischer's answer is that the alternative isn't one in which the agent freely does otherwise. According to Fischer, for an alternative possibility to be robust, in the sense that it could help ground the agent's moral responsibility for what he did, it must be one in which the agent acts freely or at least freely refrains from or freely avoids doing something.5 But Fischer argues that the only alternative available to the agent in cases like Revenge (viz., the alternative in which the agent's decision is compelled by the neural control device) doesn't satisfy this requirement; it isn't one in which the agent acts freely or freely avoids doing something and, consequently, it's irrelevant per se to whether or why the agent is morally responsible for what he did. This argument has two premises.
The first is that for an alternative to be robust, it must be one in which the agent acts freely or freely refrains from or freely avoids doing something. The second is that the only alternative available to the agent in cases like Revenge isn't one in which the agent acts freely or in which the agent freely refrains from or freely avoids doing anything. It's this second premise with which I take issue here. I'll argue that there is an intelligible sense in which Jones, in the alternative sequence of events in which his decision to kill Smith is caused by the neural control device, freely avoids deciding on his own to kill Smith.

5 The term "robust" has become a technical term in the literature on Frankfurt cases. The definition of it given in this sentence is the standard one. For a different use of the term, see Mele (2006: 92).

Why think that, in the alternative sequence of events in which Jones's decision to kill Smith is caused by the neural control device, Jones doesn't freely avoid deciding on his own to kill Smith? Fischer (1994: 143) correctly notes that, in the alternative sequence, the agent in Frankfurt cases doesn't first deliberate about and choose not to act on his own. That by itself, however, isn't enough to establish that, in the alternative sequence, the agent doesn't freely avoid acting on his own, as not all omissions are the result of deliberation and choice.6 This is especially true of omitting to decide. Typically, when an agent omits or refrains from deciding at t to A, the agent doesn't first choose not to decide at t to A. Instead, the agent either makes a different decision at t (e.g., he decides at t not to A), or he makes no decision at all at t.

To illustrate, consider Marla, who is deliberating about whether to attend a party this evening. Part of her wants to go; it will be a fun party, and she knows she'll have a good time. Another part of her, though, would prefer a quiet evening at home. At t, Marla doesn't decide to attend the party. There are various ways to fill in the story at this point. Perhaps Marla doesn't decide at t to attend the party because she falls asleep then. Or perhaps she doesn't decide at t to attend the party because she decides then not to attend. But it could just be that she doesn't make a decision either way at t and continues deliberating or just starts doing something else. Suppose that's what happened. Marla, though still wide awake, didn't decide at t one way or the other; she remained, for a time anyway, undecided about whether to go or to stay. Call this version of the story Indecision. In it, did Marla freely avoid deciding at t to go to the party? I see no reason why she couldn't have done so.
To drive the point home, let’s add a few details to the case. Suppose Marla not deciding at t to go to the party isn’t a result of coercion, manipulation, or any other obviously freedom-subverting factor. Suppose, too, that Marla had the option to decide at t to attend the party and thus could have decided at t to go. Given these additional details, there are important respects in which Marla did freely avoid deciding at t to attend the party, even though she didn’t deliberate about and choose not to decide at t to attend.7
6 Robinson (2014: 439–440) also makes this point.
7 Clarke (2014: 96–97) makes similar observations about a different sort of case.
Something similar, I contend, is true of Jones in the alternative sequence of events in which his action is compelled by the neural control mechanism. Jones could have decided on his own in that case to kill Smith, and his not doing so isn't a result of coercion, manipulation, or any other freedom-subverting factor. (True, the decision to kill Smith that Jones makes in the alternative sequence is a product of manipulation by the neuroscientist and his neural control device. But that doesn't establish that Jones not deciding on his own is a result of manipulation, since whether the coercive mechanism causes Jones's decision is contingent on whether Jones decides on his own. Jones not deciding on his own in the alternative sequence thus isn't triggered by coercion but is itself a trigger of the coercion.) It seems, then, that there is indeed a legitimate sense in which Jones, in the alternative sequence of events, freely avoids deciding on his own to kill Smith, even though he doesn't deliberate about and choose not to decide on his own.

I've just identified some parallels between Indecision and Revenge. But there are important differences as well. In Indecision, Marla is aware of what her salient options are. She is aware, in particular, that deciding at t to attend the party is among her options. However, according to Taylor Cyr (2022), the same isn't true of Jones in Revenge. Jones, being unaware of the neural control device, is unaware that his salient options are to decide on his own to kill Smith or be compelled by a coercive device to decide to kill Smith, a fact that, according to Cyr, undermines the claim that Jones, in the alternative sequence, freely avoids deciding on his own.
Cyr’s argument is based on the plausible idea that an agent freely acts or avoids acting in a certain way only if he knew (or believed or was aware), or could have reasonably been expected to know (or believe or be aware), that acting or not acting in that way was among his options. To see the appeal of this claim, consider someone “who backs his car out of his garage unaware that a tiny kitten is snoozing beneath the rear tire” (Fischer and Ravizza 1998: 12). It sounds odd to say that the person in this case freely ran over the kitten, given that he didn’t know, and couldn’t have been expected to know, beforehand that running over the kitten was an option. Similarly, if Jones didn’t know (or at least believe) beforehand that not deciding on his own to kill Smith was an option for him, then it would seem equally odd to say that, in the alternative sequence of events in which he doesn’t decide on his own to kill Smith, Jones freely avoids deciding on his own to kill Smith. The
question, then, is whether Jones knew (or believed or was aware) that not deciding on his own to kill Smith was among his options. I say he did. Jones was unaware of the neural control device, to be sure. However, that doesn't support the claim that Jones didn't know (or at least believe) that one of his options was to not decide on his own to kill Smith. Jones believed (mistakenly, of course) that not deciding to kill Smith was among his options, and he is smart enough to infer from this that he could have avoided deciding on his own to kill Smith. (Obviously, if a person can avoid A-ing, then he can avoid A-ing-on-his-own.) Jones thus did believe, or could reasonably have been expected to believe, at least at some level, that he could have avoided deciding on his own to kill Smith. His ignorance of the neural control device is no barrier to his having that particular belief.

According to Fischer, for an alternative possibility to be robust, it must be one in which the agent acts freely or at least freely refrains from or freely avoids doing something. I've argued that, in cases like Revenge, the agent could have freely avoided deciding on his own, which in turn suggests that that alternative possibility is sufficiently robust to ground the agent's moral responsibility for deciding on his own. A slightly different argument for this conclusion emerges when we consider what, if anything, we could reasonably have expected Jones to do instead of what he did.8

When it comes to blameworthiness, a plausible test for whether an agent had a robust alternative to what he did is to ask whether there was something we (i.e., those of us who know all the relevant details of the case) could reasonably have expected the agent to do instead (where this something might just be not doing what he did).
If there were no alternative possibilities for action fitting this description, that would be evidence that the agent lacked a robust alternative, as it would indicate that the agent lacked a reasonable opportunity to avoid doing what he did. If, on the other hand, there was something the agent could have done such that we could reasonably have expected him to do that instead of doing what he did, this, I suggest, would be an excellent candidate for a robust alternative, for then we could plausibly explain why the agent is blameworthy for doing what he did in part by appealing to the fact that he had a reasonable opportunity to avoid doing it.
8 See Widerker (2000, 2003) for some related considerations in favor of PAP.
Let's apply this reasonable expectations test to Revenge. Is there something we could reasonably have expected Jones to do instead of deciding on his own to kill Smith? Indeed, there is. Jones could have not decided on his own to kill Smith, and we could reasonably have expected him not to decide on his own to kill Smith. It follows from the reasonable expectations test just proposed that not deciding on his own is indeed a robust alternative possibility for Jones.

The preceding points apply to the claim that Jones is blameworthy in Revenge for deciding on his own to kill Smith. It's worth mentioning that similar points can be made with respect to the claim that Jones is blameworthy in Revenge for deciding at t to kill Smith. For all I've said about the case thus far, Jones could have freely decided at t not to kill Smith. Of course, had he done so, the neural control mechanism would have kicked in and compelled him to change his mind and to decide a moment later, at t + 1, to kill Smith. Still, he could, we are now supposing, have freely avoided deciding at t to kill Smith. He thus had a robust alternative (by Fischer's lights) to deciding at t to kill Smith. This conclusion is also supported by the reasonable expectations test for robustness; for supposing, as we now are, that Jones could have freely decided at t not to kill Smith, we could reasonably have expected Jones to do that at t instead of deciding then to kill Smith. There is, then, no obvious barrier to claiming that Jones is blameworthy in Revenge for deciding at t to kill Smith, and that he is blameworthy for doing so at least in part because he had a reasonable opportunity to avoid deciding at t to kill Smith.
3.3 Action Individuation

According to Eleonore Stump, the fine-grained analysis of Frankfurt cases "requires the supposition that doing an act-on-one's-own is itself an action of sorts," an action that is distinct from the one the agent would have performed had the neuroscientist's device compelled the agent's behavior. She then argues that this supposition is either "confused and leads to counterintuitive results; or, if the supposition is acceptable, then it is possible to use it to construct [Frankfurt cases] in which there is no flicker of freedom at all" (1999: 301–302).
Central to Stump’s objection to the fine-grained analysis is her claim the analysis relies on “the supposition that doing an act-on-one’s-own is itself an action of sorts,” one that’s distinct from the action the agent would have performed had the neuroscientist’s device compelled the agent’s behavior. Later I’ll argue that the fine-grained analysis of Frankfurt cases doesn’t require any such supposition. For now, though, I leave the claim unchallenged and focus instead on Stump’s claim that the supposition in question either has counterintuitive results, or, if it turns out to be acceptable, can be used to construct Frankfurt cases with no flicker of freedom. Consider, first, Stump’s argument for the conclusion that the supposition has counterintuitive results. She contends that if A-ing-on-one’s-own is an action distinct from A-ing, that would have the counterintuitive result that Jones would have had alternative possibilities for action even if the neural control device had compelled his decision. In support of her contention, she invites us to consider the counterfactual scenario in which the device does compel Jones’s decision to kill Smith. Stump says, “if there were two alternative possibilities available to [Jones] in a standard [Frankfurt case] . . . , then there ought to be the same two alternative possibilities available to Jones in [the counterfactual scenario].” However, Stump thinks it’s clear that Jones lacks alternative possibilities in the counterfactual scenario in which his decision is compelled by the device. 
She claims that if the neuroscientist's coercive mechanism had compelled Jones's decision, "Jones would be entirely within his rights in claiming, afterwards, that he couldn't have done otherwise than he did, and he wouldn't be moved to rescind that claim by our insistence that there was an alternative possibility for his action" in the original version of the story in which his decision isn't compelled by the neural control device (1999: 315).

This first argument of Stump's can be summarized as follows. If Jones had alternative possibilities for action in the actual sequence of events, then he should have those same alternatives in the counterfactual sequence of events in which his decision is compelled by the neuroscientist's device. But Jones doesn't have alternative possibilities for action in the counterfactual sequence. So, he doesn't have them in the actual sequence either.

The second premise of this argument, which says that Jones lacked alternative possibilities in the counterfactual sequence, is false. In both the actual sequence of events in which Jones decides on his own and in the counterfactual sequence of events in which the neural control device compels Jones's
decision, the device is rigged to compel Jones's decision if, but only if, Jones doesn't decide on his own to kill Smith. The main difference between the two sequences of events, then, is whether Jones takes advantage of the opportunity to decide on his own. In the actual sequence he does, whereas in the counterfactual sequence he doesn't. Up until the point of decision, though, Jones has the same options available to him in both the actual sequence and the counterfactual sequence, for in both cases it's up to him whether he decides on his own at t to kill Smith or whether his decision to kill Smith is produced by the neuroscientist's coercive device instead. But what's implausible or counterintuitive about that? To be sure, in the counterfactual scenario, Jones's decision is compelled by the device. However, contrary to what Stump suggests, that's compatible with Jones having had alternative possibilities, since whether the device compels the decision is contingent on whether Jones decides on his own to kill Smith, and whether he decides on his own to kill Smith is at least partly up to him.

I turn now to Stump's argument for the conclusion that if A-ing-on-one's-own and A-ing are distinct actions, then we can build a Frankfurt case with no flicker of freedom whatsoever. Let D stand for the decision to kill Smith, let O stand for Jones deciding on his own to kill Smith, and let's assume, as Stump believes proponents of the fine-grained analysis must, that D and O are distinct actions. Stump claims that if O is a separate action from D, we can construct a Frankfurt case in which Jones is powerless to avoid O-ing. Because O is a mental act of deciding, it will, Stump says, be correlated with a neural sequence the occurrence of which could in principle be initiated by outside forces like the neuroscientist's device.
The neuroscientist could thus compel Jones to O by rigging the device to bring about the pertinent neural sequence. But if so, it would seem we have all the materials necessary to construct a case in which the counterfactual intervener [i.e., the neuroscientist] desires not just some act [D] on the part of the victim but also the further act O, as well as the act of doing O-on-his-own if there is such an action and any further iterated acts of doing on one’s own. We can stipulate that the counterfactual intervener controls all these acts in virtue of controlling the firings of neurons in the neural sequences correlated with each of these acts. If the victim doesn’t do these acts, the coercive neurological mechanism will produce them. (1999: 317)
In a case like this, Stump says, "there are no alternative possibilities for action of any sort" (1999: 317). But if Jones does O in this new case, without being compelled to do so by the coercive device, it should be clear, Stump thinks, that he could be morally responsible for O-ing, his lack of alternative possibilities notwithstanding. Here, then, we seem to have a Frankfurt case with no flicker of freedom and thus one to which the fine-grained analysis doesn't apply.

A central assumption underlying Stump's attempt to produce a Frankfurt case with no flicker of freedom is that a neuroscientist could cause Jones to O using the coercive mechanism implanted in Jones's brain. This assumption merits further investigation. Recall that O stands for Jones deciding on his own to kill Smith, where, again, the locution "on his own" indicates that Jones's decision was an exercise of his own, natural, unaided agency and thus wasn't a product of external coercion or force by the likes of the neuroscientist and his neural control device. Having been reminded of this, we should ask ourselves whether the neuroscientist could use his neural control device to cause Jones to O. It would seem not, for O seems to be something that, by its very nature, isn't caused by such outside forces. But if that's right, then Stump is mistaken to suppose that a neuroscientist could cause Jones to O, and, accordingly, is mistaken to think that she has provided us with a blueprint for producing Frankfurt cases in which there is no flicker of freedom. For if it's impossible for an outside force like the neuroscientist and his neural control device to cause Jones to O, then the decision he causes Jones to make in the counterfactual sequence of events isn't identical to O, in which case Stump's new Frankfurt case isn't one in which Jones had no alternative to O. Stump, however, contends that the neuroscientist could have used his device to cause Jones to O.
On her view, if the neuroscientist had done this, O couldn't then coherently be described as something Jones did on his own. But it would be the very same action as the one Jones actually performed. Compare: if Earl were to give me the shirt off his back, it would still be the same shirt, but it could then no longer be accurately described as the-shirt-Earl-is-wearing. Similarly, Stump's thought seems to be that if the neuroscientist had caused Jones's decision, the decision he caused would have been the same one that Jones actually made, though it couldn't then be accurately described as a decision Jones made on his own. Attention to two ways that we might try to render plausible the assumption that O and D are distinct
actions (of the same type) will enable us to see that this response of Stump's doesn't circumvent the objection at issue.

One way to get O to come out as a distinct action would be to adopt a historical approach to event and action individuation, one according to which events, actions included, are individuated in part by their causes. On this approach to event and action individuation, part of what makes X the event it is is its causal history. A similar event in a different possible world with a different causal history thus wouldn't be X (though it might be the same type of event as X). It should be obvious that if this is the right way to individuate actions and other events, Stump is mistaken to suppose that the neuroscientist could have caused Jones to perform the very same action that Jones performed on his own. Any action the neuroscientist caused Jones to perform would have a different causal history than the action Jones performed on his own, and thus, according to the historical approach to event and action individuation just adumbrated, wouldn't be identical to the action Jones performed on his own.

Another way of getting O to come out as a separate action would be to first adopt a fine-grained account of action individuation, one according to which X and Y are distinct actions if they exemplify different act-properties, and then to assume that A-ing on one's own is a different act-property than A-ing. Given that assumption, the fine-grained account of action individuation implies that O (Jones deciding on his own to kill Smith) is indeed a different action than D (his deciding to kill Smith).
Notice, however, that it also implies that the neuroscientist couldn't cause Jones to O, for whatever action the neuroscientist might have caused Jones to perform would not be something Jones did on his own, and so, according to the present version of the fine-grained account of action individuation, would not be identical to the action Jones performed on his own, as the two actions wouldn't have all the same act-properties.

On both the historical and fine-grained approaches to event and action individuation, O may indeed be a different action than D. However, neither approach is consistent with Stump's claim that the neuroscientist could compel Jones to O, an assumption that is necessary if her new Frankfurt case is to be one in which there is no flicker of freedom. Since these are the only two approaches to action individuation that I know of which would allow us to say that O is not identical to D, and since Stump is assuming for the sake of argument that O is not identical to D, I conclude that she has failed to produce a Frankfurt case with no flicker of freedom.
My discussion of Stump’s objection has thus far been conducted on the assumption that O and D are distinct actions. But suppose that assumption is mistaken and that the decision Jones makes on his own in the actual sequence of events is the same decision he would have made had he been compelled by the neuroscientist’s coercive mechanism. Would this new supposition cast doubt on the adequacy of the fine-grained analysis of Frankfurt cases? Stump thinks it would, for on her view, the analysis “requires the supposition that doing an act-on-one’s-own is itself an action of sorts,” an action distinct from the one the agent would have performed had the neural control device been among the causes of the agent’s behavior (1999: 301). Contrary to what Stump claims, though, the fine-grained analysis of Frankfurt cases doesn’t require that supposition. So, even if the supposition is objectionable, that’s no problem for the analysis. Recall that, according to the fine-grained analysis, although Jones isn’t blameworthy in Revenge for deciding to kill Smith, he is blameworthy for deciding on his own to kill Smith (among other things, perhaps). Admittedly, there are various ways of interpreting that claim, one of which is that, although Jones isn’t blameworthy for making a decision to kill Smith, he is blameworthy for the token decision to kill Smith that he made. That, it seems, is how Stump is interpreting the claim. But that’s not how I’m interpreting it, nor, I think, is it how most other proponents of the fine-grained analysis interpret it. As I interpret it, the claim is that Jones isn’t blameworthy for deciding to kill Smith but is blameworthy for the fact that his decision to kill Smith was an exercise of his own, natural, unassisted agency. On this way of interpreting the claim, Jones is blameworthy not for his decision but for a feature of or fact about that decision. 
Here’s a slightly different way of putting the point: according to the fine- grained analysis, Jones isn’t blameworthy for the fact that he decided to kill Smith, since, through no fault of his own, he couldn’t have prevented that fact from obtaining, though he is blameworthy for the fact that he decided on his own to kill Smith, as he could have prevented that more fine-grained fact from obtaining. Notice that, in making this claim, proponents of the fine- grained analysis aren’t committed to treating Jones deciding on his own to kill Smith as a distinct action, nor are they committed to saying that Jones is blameworthy for a decision he made, one he would have avoided making if only he hadn’t decided on his own to kill Smith. So, even if Stump is right that A-ing and A-ing-on-one’s-own aren’t distinct actions, by itself that does
nothing whatsoever to impugn the fine-grained analysis of Frankfurt cases, since the fine-grained analysis doesn't require us to treat A-ing and A-ing-on-one's-own as separate actions.

Stump criticizes the fine-grained analysis of Frankfurt cases on the grounds that it relies on the assumption that "doing an act-on-one's-own is itself an action," an assumption which she regards as problematic. I've argued that Stump hasn't given us a compelling reason to suppose that the assumption is problematic and that, even if it is, that only impugns one version of the fine-grained analysis. But the analysis itself doesn't require anything resembling the claim that doing an act-on-one's-own is itself an action and so emerges from Stump's criticism unscathed.
3.4 An Epistemic Issue

According to Carl Ginet, it would be a mistake to say that Jones is blameworthy for deciding on his own to kill Smith rather than as a result of the neuroscientist's coercive device. Why? Because Jones neither knew nor should have known at the time that he was making the decision to kill Smith on his own rather than as a result of a neural control device, and, according to Ginet, Jones could be blameworthy for deciding on his own rather than as a result of such a device "only if, at the time, he knew or should have known that he was doing so" (1996: 407).

Ginet's objection to the fine-grained analysis relies on two premises. The first is that Jones neither knew nor should have known at the time that he was deciding on his own to kill Smith rather than as a result of the coercive mechanism. The second is that Jones is blameworthy for deciding on his own to kill Smith rather than as a result of the coercive mechanism only if he knew or should have known at the time that he was doing so. This second premise appears to be an instance of a more general principle, which I'll refer to as the knowledge requirement for blameworthiness, according to which a person is blameworthy for behaving in a certain way only if he knew or should have known at the time that he was behaving that way.9
9 See also Ginet (2000).
One difficulty with Ginet's objection has to do with the knowledge requirement on which it hinges. Here's a straightforward counterexample to that requirement:

Million Dollar Button: Kathy knows that if she presses the button in front of her, a million dollars will immediately be deposited into her bank account, no questions asked. But there's a catch. There's a twenty-five percent chance that pressing the button will also result in the destruction of a small village. Moreover, if Kathy presses the button, she'll have no way of ascertaining whether the village was destroyed. Kathy elects to press the button, and presses it, despite being aware of the risks. As a result, she becomes a millionaire, and a small village is destroyed.
In this case, Kathy neither knew nor should have known at the time that she was destroying a small village. At most what she knew or should have known is that there was a chance that, by pressing the button, she would be destroying a small village. She is plausibly to blame for destroying the village nevertheless, contrary to what the knowledge requirement implies.

Kathy is blameworthy for destroying the village in this case, but only in virtue of being blameworthy for pressing the button. Had she not been culpable for pressing it, she wouldn't have been culpable for destroying the village either. Her blameworthiness for destroying the village is therefore indirect, inherited from the responsibility she bears for pressing the button in the first place. Million Dollar Button therefore doesn't threaten a version of the knowledge requirement restricted to direct or uninherited blameworthiness; but others do. Consider:

Arsenic: Rob believes that the stuff in the sugar bowl is arsenic, though he isn't warranted in believing this, and he puts a heaping dose of it in Dan's coffee with the intention of killing Dan. As it turns out, the stuff in the sugar bowl is arsenic and Dan dies as a result.
Rob is directly blameworthy for putting arsenic in Dan’s coffee, despite the fact that he didn’t know or even reasonably believe at the time that he was doing so; otherwise, it’s hard to see how he could be blameworthy for killing Dan, and he is derivatively blameworthy for doing that. So, even when restricted to nonderivative responsibility, the knowledge requirement is false.
Presumably, though, there is some weaker epistemic requirement for blameworthiness, perhaps something like this: a person is directly blameworthy for behaving in a certain way only if the person was aware, or should have been aware, at least at some level, that he was behaving in that way. And it seems that we can use this weaker requirement to run an updated version of Ginet's objection. Jones, it might be said, was neither aware nor should he have been that he was deciding on his own rather than as a result of the coercive device. This claim, in conjunction with the weaker epistemic requirement for blameworthiness just adumbrated, entails that Jones isn't directly blameworthy for deciding on his own to kill Smith rather than as a result of the device.

That conclusion seems right, so far as it goes. However, it poses no difficulty for the fine-grained analysis of Revenge being defended here. The analysis doesn't say that Jones is blameworthy for the contrastive fact that he decided on his own rather than as a result of the neural control device. It says, rather, that Jones is blameworthy for deciding on his own to kill Smith. But, as I'll argue momentarily, Jones was aware, at least at some level, that he was deciding on his own to kill Smith. The weaker epistemic requirement for blameworthiness is therefore consistent with the version of the fine-grained analysis being defended here.

It could be objected that if Jones is blameworthy for deciding on his own to kill Smith, as the fine-grained analysis claims, then he must also be blameworthy for the contrastive fact that he made the decision to kill Smith on his own rather than as a result of the neural control device. But we have already established that Jones isn't to blame for that contrastive fact. Hence, we should concede that Jones isn't blameworthy for deciding on his own to kill Smith either. This objection is unsound.
Jones can be blameworthy for deciding on his own to kill Smith even though he isn’t blameworthy for the contrastive fact that he made the decision to kill Smith on his own rather than as a result of the neural control device. To see this, consider Cheesesteak: Jymmie is in Philadelphia deliberating about where to get one of those famous Philly cheesesteaks she has heard so much about. Some local cheesesteak aficionados tell her that the best two places to go are Geno’s Steaks and Tony Luke’s. Jymmie decides to go to Geno’s. Another classic option is Pat’s King of Steaks. However, for reasons that will forever remain mysterious, none of the locals Jymmie talked to bothered to tell her about Pat’s. She thus never knew that Pat’s was an option.
Jymmie was aware at the time that she was eating a cheesesteak at Geno’s, and no doubt we could concoct a version of the story in which she is blameworthy for eating a cheesesteak at Geno’s. But Jymmie wasn’t aware that she was eating at Geno’s rather than Pat’s, since she wasn’t aware that eating at Pat’s was an option. It follows, given the weaker epistemic requirement on blameworthiness, that Jymmie isn’t blameworthy for eating a cheesesteak at Geno’s rather than at Pat’s. But that’s consistent with her being blameworthy for eating a cheesesteak at Geno’s. In a similar way, Jones can be blameworthy for deciding on his own to kill Smith, even if, owing to his ignorance of the neuroscientist and his neural control mechanism, Jones isn’t blameworthy for deciding on his own rather than as a result of that mechanism. While Jones had no idea that deciding as a result of the neural control device was an option, he presumably was aware, at least at some level, that he was deciding on his own to kill Smith. Hence, the fact that Jones wasn’t aware that he was deciding on his own rather than as a result of a coercive mechanism poses no threat to the fine-grained analysis of Frankfurt cases. Some people balk at the claim that Jones was aware that he was deciding on his own to kill Smith. They claim that we don’t normally have any beliefs about or awareness of whether we are doing things on our own. But I disagree. The belief, of course, is typically dispositional. Few of us, though, are agnostic about this aspect of the causal history of our behavior. Absent any reason to suppose differently, we often believe (don’t we?) that our actions are instances of our own, natural, unassisted agency, and Jones, we may assume, is no different in this regard.
3.5 An Artificial Separation

At the heart of the fine-grained analysis of Revenge is the claim that Jones is blameworthy for deciding on his own to kill Smith but not for deciding to kill Smith. Some have worried that this claim slices things a bit too thin, and that it can’t plausibly be maintained that Jones is blameworthy for deciding on his own to kill Smith but not for deciding to kill Smith simpliciter. According to
Michael Otsuka, for example, the claim “is controversial, since it is arguable that one needs to draw too fine a distinction in order to maintain that Jones is blameworthy for killing Smith on his own while at the same time denying that he is blameworthy for killing Smith” (1998: 690). In a similar vein, Robert Kane contends that the fine-grained analysis “artificially separates” moral responsibility for doing something on your own from moral responsibility for doing it. “In general,” he says, “if we are responsible for doing something on our own, we are responsible for doing it.” And the same is true of Jones, he thinks. He insists that if Jones acted on his own, there is no reason to say that he is not blameworthy for his action (1996: 41).10 The fine-grained analysis does indeed slice things pretty thin. There’s no denying it. But what exactly is objectionable about that? Jones decides on his own to kill Smith and Jones decides to kill Smith are two related but nevertheless distinct states of affairs. Why should it not be possible for Jones to be blameworthy for the former state of affairs but not the latter? It may be true, as Kane claims, that, in general, a person who is blameworthy for A-ing-on-his-own is also blameworthy for A-ing. But that’s neither here nor there. From the fact that two things typically go together we can’t infer that it’s impossible to prize them apart.11 Ordinarily, an agent who is blameworthy for not trying to A and for deciding not to A is also blameworthy for not A-ing. All Clear is a case in point. In that case, John is blameworthy for not trying to save the child, for deciding not to save her, and also for not saving her. But we can’t generalize from relatively ordinary cases like that. Sometimes an agent is blameworthy for not trying to A, and for deciding not to A, and yet isn’t blameworthy for not A-ing. This is illustrated by cases like Sharks.
Recall that, in Sharks, John isn’t to blame for not saving the drowning child, even though he is to blame for not trying to save her and for deciding not to save her. Similarly, it may be true that, in general, a person who is morally responsible for A-ing-on-his-own is also morally responsible for A-ing, but by itself that doesn’t support the conclusion that any agent who is morally responsible for A-ing-on-his-own must also be morally responsible for A-ing.
10 Hunt and Shabo (2013) raise a similar objection to the claim that agents in certain Frankfurt cases are blameworthy for A-ing at t but not for A-ing simpliciter. The points made in what follows apply, mutatis mutandis, to the Hunt/Shabo version of the objection.
11 Robinson (2012: 184) makes a similar point.
According to the fine-grained analysis, Jones is blameworthy for deciding on his own to kill Smith but not for deciding to kill Smith. It’s instructive to note that ordinary moral judgments are often no less fine-grained, no less precise about what, exactly, a person is morally responsible for. “It’s not what you said, it’s how you said it” is a familiar accusation. So is “he did the right thing but for the wrong reason.” But if a person can be blamed for the way in which he performed a certain action and/or for performing the action for the wrong reason yet not blamed for the action itself, we are again left to wonder what’s so implausible about the claim that an agent could be morally responsible for A-ing-on-his-own but not for A-ing. Perhaps the worry is that the distinction between blameworthiness for A-ing and blameworthiness for A-ing-on-one’s-own is somehow ad hoc or unmotivated. This possibility is suggested by Kane’s remark that the fine-grained response “artificially separates” responsibility for performing an action from responsibility for performing the action on one’s own. But if that’s the worry, it’s unfounded, for there is a principled reason to suppose that blameworthiness for A-ing and blameworthiness for A-ing-on-one’s-own come apart in Frankfurt cases like Revenge. A plausible explanation of why blameworthiness for not trying to A and blameworthiness for not A-ing come apart in cases like Sharks is that while the agent featured in those sorts of cases had at least some control over whether he tried to A, the agent had no control over whether he A-ed. Now, normally, having it within your power to try to save a drowning child and having it within your power to save her go together, which explains why, normally, someone who is blameworthy for not trying to save a child is also blameworthy for not saving her.
But in a case like Sharks, while the agent had some control over whether he tried to rescue the child, he had no control over whether he successfully rescued her, which explains why he can be blameworthy for not trying to save the child even though he isn’t blameworthy for not saving her. A similar explanation is available for why blameworthiness for A-ing-on-one’s-own and blameworthiness for A-ing come apart in cases like Revenge. Jones had no control over whether he decided to kill Smith, though he apparently did have some control over whether he decided on his own to kill Smith. Put somewhat differently, Jones had no say concerning whether the state of affairs Jones decides to kill Smith obtains, though he evidently did have some say about whether the state of affairs Jones decides on his own to
kill Smith obtains. According to the fine-grained response, it’s this fact that grounds the difference in blameworthiness for the two states of affairs. The difference therefore isn’t artificial or unmotivated; it’s grounded in the difference in control Jones had over the two states of affairs in question. Because Jones had no control over whether the state of affairs Jones decides to kill Smith obtains, he isn’t blameworthy for that state of affairs. However, since he did have some control over whether the state of affairs Jones decides on his own to kill Smith obtains, there is no obvious barrier to his being blameworthy for that more fine-grained state of affairs. In ordinary cases in which there is no evil neuroscientist waiting in the wings to make sure that things happen in a particular way, we tend to assume that the agent had control over whether he A-ed and thus over whether he A-ed on his own. This plausibly explains why blameworthiness for A-ing and blameworthiness for A-ing-on-one’s-own typically go together. But there is arguably a morally significant difference between such ordinary cases and Frankfurt cases like Revenge. In the ordinary cases, the agent presumably had it within his power to avoid A-ing and thus had it within his power to avoid A-ing-on-his-own. So, in those cases we have no reason to suppose that the agent is morally responsible for A-ing-on-his-own but not for A-ing. But things are importantly different in cases like Revenge. In those cases, the agent couldn’t help A-ing, which is why he isn’t to blame for doing so. However, the agent apparently could have avoided A-ing on his own. Hence, the reason for thinking that he isn’t blameworthy for A-ing can’t be extended to show that he isn’t blameworthy for A-ing-on-his-own.
Proponents of the fine-grained analysis therefore have a principled explanation of why blameworthiness for A-ing and blameworthiness for A-ing-on-one’s-own tend to go together, even though they arguably come apart in Frankfurt cases like Revenge.
3.6 The Moral Luck Objection

According to Linda Zagzebski, in a case like Revenge:

It is only an accident that [the neuroscientist] exists, and if he had not existed the agent would have had alternate possibilities. And if he had had alternate possibilities he would have done the very same thing in the same way. He is, therefore, just as responsible as he would have been if he had had alternate possibilities. To say otherwise is to permit the agent too great a degree of positive moral luck. He can’t get off the moral hook that easily. (2000: 245)
The basic idea here seems to be this: denying that Jones is blameworthy for deciding to kill Smith permits “too great a degree of positive moral luck.” Hence, we ought to grant that Jones is blameworthy for so deciding, the fact that he couldn’t have avoided doing so notwithstanding. Denying that Jones is blameworthy for deciding to kill Smith does indeed require that we tolerate a certain kind of moral luck. There’s no denying it. The question, then, is whether the kind of moral luck it requires us to tolerate is intolerable. I’ll argue that it isn’t. Moral luck is typically defined as luck that affects how much praise or blame an agent deserves. It’s controversial whether such luck is possible, but suppose for the sake of argument that it is. Then I think critics of PAP like Zagzebski will be hard pressed to identify anything problematic about attributing moral luck to agents in Frankfurt cases. To illustrate the point, consider a particular kind of moral luck—what’s sometimes known as “resultant moral luck.” Resultant luck is when luck affects the consequences of one’s behavior. Here’s a standard example of it. A highly skilled sniper takes aim at his intended victim, fires, but, luckily for the victim, a bird flies in the path of the bullet knocking it off course, giving the victim time to escape. Had luck in the form of the bird not intervened, the sniper’s act of pulling the trigger would have resulted in the victim’s death. That’s resultant luck. Resultant moral luck is when resultant luck affects how much praise or blame an agent deserves. So, for example, if the sniper in the preceding case deserves less blame than he would have had he succeeded in killing his intended victim, that would be an instance of resultant moral luck. It’s controversial whether resultant moral luck is possible, but suppose for the sake of argument that it is. 
Suppose, for example, that while the sniper deserves some blame for trying to kill his intended victim, he would have deserved even more blame had he succeeded in killing her. The sniper is thus lucky that he isn’t blameworthy for killing the victim and lucky as well not to deserve more blame than he does, and this is true, you’ll note, even though
his motives and intentions are just as they would have been had he been blameworthy for killing the victim. On the supposition that resultant moral luck is possible, it’s hard to see anything problematic about the sort of moral luck entailed by the fine-grained analysis. If that analysis is correct, then Jones is lucky not to be blameworthy for deciding to kill Smith, and this is true even though his motives and intentions are just as they would have been had he been blameworthy for deciding to kill Smith. But what, exactly, is problematic about that? If resultant moral luck is possible, why not this kind of moral luck as well (Frankfurt moral luck, we might call it)? Suppose, though, that moral luck (understood as luck that affects how much praise or blame an agent deserves) isn’t possible. Even so, it seems that luck can affect what an agent is morally responsible for without affecting how much praise or blame the agent deserves.12 To see this, compare the case in which the bird flies into the path of the sniper’s bullet with one in which it doesn’t and in which the sniper successfully shoots and kills the victim. If resultant moral luck is impossible, the sniper is equally blameworthy (i.e., deserves the same amount of blame) in both cases. However, he arguably isn’t blameworthy for the same things in these two cases. Barring any exculpating considerations, the sniper is blameworthy in the first case for trying to kill the victim but not for killing her (since he didn’t kill her in that first case), whereas he is blameworthy in the second case for killing the victim. Luck thus seems to impact what the sniper is blameworthy for even if it doesn’t affect how much blame he deserves. Fischer (1986) makes much the same point using the following example. Broken Phone: Peter looks out the window of his house one evening and notices a man being assaulted by several powerful-looking thugs.
It occurs to Peter that he had better call the police, but not wanting to be inconvenienced and fearing that the thugs might find out and seek vengeance on him and his family, Peter decides to let sleeping dogs lie. Unbeknownst to Peter, though, and through no fault of his own, his telephone line had been cut, so he couldn’t have called the police even if he had tried.13
12 Fischer (1986: 256) and Zimmerman (2002) make similar claims. See also Swenson (2019).
13 This example is adapted from van Inwagen (1983: 165–166).
As Fischer notes, “any inclination to believe that [Peter] is morally responsible for failing to call the police results from” a failure to distinguish different things for which Peter might be to blame. “[Peter] acts reprehensibly,” Fischer says, “and is morally responsible for something. [Peter] is morally responsible, for instance, for failing to try to call the police, for failing to dial the police number, etc. But,” given that he couldn’t have called the police, he “is not responsible for not calling the police, i.e., for not successfully reaching the police” (1986: 254–255). Notice, though, that it’s largely a matter of luck that Peter gets off the hook for not contacting the police. After all, it was a fluke that his phone wasn’t working, and if it had been working, he would have had the option of calling the police, in which case he would have been blameworthy for not calling them (and not just for deciding not to call them and for not trying to call them). Peter is thus extraordinarily lucky to escape blame for not contacting the police. However, Fischer contends that Peter doesn’t deserve less blame than he would have deserved had he also been blameworthy for not calling the police. Peter deserves the same amount of blame in either case. It’s just that, in the actual case, he is blameworthy for fewer events and states of affairs than he would have been if he had had the option of contacting the police. If he had had that option, he would have been blameworthy for at least three things: deciding not to call the police, not trying to call the police, and not calling them. But since he didn’t have the option to call the police, he is only blameworthy for the first two things. Fischer summarizes the point this way: “whereas a certain kind of moral luck applies to the specification of the content of moral responsibility, it does not apply to the extent or degree of blameworthiness” (1986: 256). Something similar can be said about Jones in Revenge. 
Jones acts reprehensibly in that case and is no doubt blameworthy for something. He may be blameworthy for deciding on his own to kill Smith, for not trying harder to avoid deciding to kill Smith, and perhaps too for deciding at t to kill Smith. But he isn’t blameworthy for deciding to kill Smith. He isn’t blameworthy for doing so because, as luck would have it, the neuroscientist and his device were on the scene, making it impossible for him to do otherwise than decide to kill Smith. Jones is thus lucky to escape blame for deciding to kill Smith (just as the sniper is lucky to escape blame for killing his intended victim and just as Peter is lucky to escape blame for not calling the police). But, as we have seen, it doesn’t immediately follow that Jones is worthy of less blame than he would have been had he been blameworthy for deciding
to kill Smith. All that follows is that he is blameworthy for fewer things than he would have been blameworthy for in that case. Proponents of the fine-grained analysis can therefore agree with Zagzebski that Jones is “just as responsible as he would have been if he had had alternate possibilities,” if this means that Jones is worthy of just as much blame as he would have been in an ordinary version of the case in which the neural control device is absent and in which he could have avoided deciding to kill Smith. It’s just that Jones isn’t blameworthy for quite the same things he would have been blameworthy for in that ordinary case. On this view, the presence of the neuroscientist and his device affects what Jones is blameworthy for but not necessarily the amount of blame of which Jones is worthy. But what’s objectionable about that? Nothing that I can see.14 It’s worth pointing out that, if the position just articulated is defensible, proponents of the fine-grained analysis have the makings of a plausible error theory concerning our initial intuitions about cases like Revenge. Perhaps the reason it seems intuitively plausible that Jones is blameworthy in that case for deciding to kill Smith is that we are running together a true judgment about the degree to which Jones is blameworthy (i.e., a judgment about how much blame Jones deserves) with a false judgment about the scope of Jones’s blameworthiness (i.e., a judgment about which things Jones is blameworthy for). The fact that Jones lacked the option to avoid deciding to kill Smith doesn’t in this case have an impact on the degree to which Jones is blameworthy (i.e., on how much blame he deserves). It’s plausible that Jones deserves just as much blame in Revenge as he would have in an ordinary version of the story in which the neuroscientist and his device are absent and in which Jones could have avoided deciding to kill Smith.
But in acknowledging this fact it can be easy to slide into the further judgment that Jones must therefore be blameworthy in Revenge for the same things he would have been blameworthy for in that ordinary version of the story. But, as we have seen, this slide, though natural, isn’t always warranted. Just because Jones deserves the same amount of blame in Revenge as he would have had the neuroscientist and his device been absent, it doesn’t follow that he is deserving of blame for the same things he would have been blameworthy for in their absence.15
14 Andrew Khoury (2018) objects to the position just articulated on the grounds that it commits its adherents to the implausible thesis that the scope of a person’s moral responsibility (i.e., the range of things the person is morally responsible for) is morally irrelevant. (Frankfurt [1988: 100] says something similar.) In response, I deny that proponents of the position are committed to that implausible thesis. I develop this response at length in an as yet unpublished paper in which I offer a partial theory about how the scope and degree of an agent’s moral responsibility are determined, one that explains how luck can affect the scope of an agent’s moral responsibility without affecting the degree to which the agent is morally responsible. Including that material here, though, would take us too far afield.
3.7 The No-Good-Excuse Argument

McKenna (2005: 175–176; 2008: 773), taking his cue from Frankfurt (1969, 2003), identifies yet another argument for the conclusion that Jones is blameworthy in Revenge for deciding to kill Smith (despite not being free to avoid doing so). McKenna calls it the no-good-excuse argument. Central to the argument is the following principle about excuses, which McKenna labels “PM (for presence of excusing factor must morally or causally matter).”

PM: Something counts as an excuse for what an agent did only if its presence reveals either that the agent did not act from a culpable motive, or that the agent’s action arose from a causally deviant source. (2008: 782)
The fact that Jones couldn’t have avoided deciding to kill Smith doesn’t reveal that he didn’t act from a culpable motive (he did), nor does it indicate that his action arose from a causally deviant source (it didn’t; it arose from his own natural, unaided agential resources). So, if PM is true, Jones can’t appeal to that fact as an excuse for deciding to kill Smith. Since Jones doesn’t have any other excuse for deciding to kill Smith, it seems that he is blameworthy for doing so. What to make of this argument? Jones’s lack of options doesn’t reveal that he didn’t act from a culpable motive, nor does it reveal that his behavior was produced in a causally deviant fashion. Jones did act from a culpable motive, and his action wasn’t produced in a causally deviant way. The question, then, is whether it follows from these facts that his lack of options doesn’t count as an excuse for what he did. The question, in other words, is whether PM is true.
15 See Swenson (2019) for a similar view.
Arguably, it isn’t. Compare again Minimal Decency with Heroic Effort (see §3.1). Neither Carolina’s nor Juan’s action in those cases is produced in a causally deviant fashion, and both act from laudable motives (both want to save the village), yet Carolina is “excused” from praise, whereas Juan isn’t.16 What “excuses” Carolina is the fact that she wasn’t under any pressure not to do the minimally decent thing. That fact, however, doesn’t indicate that her action was produced in a causally deviant way, nor does it tell us anything about the moral quality of her motives. In particular, it doesn’t reveal that she didn’t act from a morally good motive (she did). Evidently, then, there can be “excuses” from praise that don’t indicate that the person’s behavior was produced in a causally deviant fashion and that don’t tell us anything about the moral quality of the person’s motives. But if that’s true for praise, why not for blame, too? And, indeed, I think it is true for blame. Consider an example of David Widerker’s (2003: 61) in which a man, Green, stays home from work because he is too lazy to go in but subsequently discovers that he is sick (with COVID, let’s imagine), a fact that gives him a sufficiently good reason to stay home. Call this case Indolence. Widerker points out that, although Green’s illness isn’t among his reasons for staying home, it seemingly does provide him with a good excuse for not going to work. Let’s assume, for the moment, that Widerker is right about that. (Don’t worry; we’ll revisit this assumption in the next several paragraphs.) Note that Green’s illness doesn’t reveal that he didn’t act from a culpable motive (he definitely did), nor does it reveal that his action was produced in a causally deviant fashion (we may safely assume that it wasn’t). Indolence thus appears to be a counterexample to PM. Appearances, as we know, can be deceiving.
In response to Widerker, McKenna (2008: 779–780) claims that Green is blameworthy for staying home. After all, Green stayed home simply because he was too lazy to go to work. He is therefore just as blameworthy as he would have been had he not been sick. McKenna then argues, with some plausibility, that the reason it might initially seem as if Green’s illness provides him with a good excuse is that we are conflating the question of whether Green is blameworthy with the question of whether it would be appropriate for Green’s boss to actually blame Green for staying home. McKenna agrees that, for various reasons, it might not be appropriate for Green’s boss to hold Green responsible for staying home (e.g., by blaming or sanctioning him). However, McKenna goes on to point out that just because it wouldn’t be appropriate for others to blame Green for staying home from work, it doesn’t follow that Green isn’t blameworthy for staying home. A person can be blameworthy even if it isn’t appropriate for others to overtly blame him. As I indicated in the previous paragraph, McKenna’s handling of Widerker’s example isn’t without plausibility. However, I want to suggest an alternative take on the example, one that I find even more plausible and that supports Widerker’s initial assessment of the case. Luck, we have seen, can affect which things a person is blameworthy for even if it doesn’t affect how much blame the person is worthy of, and I suggest that the same sort of luck highlighted in cases like Broken Phone is also on display in Indolence. Green isn’t blameworthy for missing work, and any inclination to think that he is blameworthy for doing so is the result of a failure to distinguish different things for which Green might be blameworthy. Green acted irresponsibly and is no doubt blameworthy for something. He is no doubt blameworthy for his initial decision to stay home from work or at least for making it for the reason he did (remember, he made that decision before he was aware that he was ill) and perhaps too for initially not going to work. But, as luck would have it, he isn’t blameworthy for the fact that he missed work on the day in question, given that he (luckily) had a very good reason to miss work that day.
16 An excuse, as I use that term, is a consideration showing that an agent isn’t blameworthy for his bad behavior, and a person is excused when there is such a consideration. There are no parallel terms when it comes to praise, though there should be. Since I’m unable to come up with any good ones, I use “excuses” (in scare quotes) to indicate considerations showing that an agent isn’t praiseworthy for her good behavior, and “excused” (in scare quotes) to indicate that there is such a consideration and thus that the agent isn’t praiseworthy for the relevant behavior.
As we have also seen, it doesn’t follow from this that Green is worthy of less blame than he would have been had he not been sick (in which case he would have been blameworthy for missing work). All that follows is that he is blameworthy for fewer things than he would have been blameworthy for in that case. We can therefore agree with McKenna that Green is just as blameworthy (i.e., worthy of just as much blame) as he would have been had he not been sick. He just isn’t blameworthy for the same things he would have been blameworthy for in that case. Green’s illness thus affects what he is blameworthy for but not the amount of blame he deserves. Why prefer this analysis of Widerker’s example to the one McKenna proposes? One reason is that it enables us to capture what’s most plausible about
86 Moral Responsibility and the Flicker of Freedom both authors’ positions. It captures the initial intuition to which Widerker appeals that Green isn’t to blame for missing work given that he was sick with a contagious illness and thus had a very good reason not to go in, while also capturing what’s most plausible about McKenna’s position, which I take to be that there is something in the story for which Green is blameworthy and that Green is arguably deserving of just as much blame as he would have been if he hadn’t been ill. A second reason to prefer this analysis of Widerker’s example concerns the fact that it fits with another plausible principle about excuses. It’s plausible that a person has an excuse for not performing an action A, an action he would ordinarily be obligated to perform, if the person lacked a reasonable opportunity to A. (This, you’ll note, would explain why Peter isn’t blameworthy in Broken Phone for not contacting the police and why John isn’t blameworthy in Sharks for not rescuing the drowning child.) But even though Green could have gone to work, and thus had the opportunity to go to work (remember, could implies ability and opportunity), he clearly lacked a reasonable opportunity to go work given that he was ill.17 He therefore isn’t blameworthy for not eventually going, whatever else he might be blameworthy for.18 Consider, finally, a different story. Brown, through no fault of his own, is sick as dog and so, for that reason, stays home from work. End of story. Brown, I take it, is blameless in this case for staying home from work. His illness provides him with a good reason to stay home, and he stays home (at least in part) for that very reason. (In this respect he is unlike Green, who stays home not because he’s ill but because he’s lazy.) However, the fact that Brown is sick doesn’t reveal that he “did not act from a culpable motive,” nor does it reveal that Brown’s action “arose from a causally deviant source.” It’s
17 This sentence highlights the fact that the notion of a reasonable opportunity at play here isn’t equivalent to the notion of an agent’s options. Strictly speaking, going to work may have been an option for Green, but that doesn’t mean he had a reasonable opportunity to go to work, as it typically wouldn’t be reasonable to expect someone who is sick with a contagious and potentially dangerous illness to go to work. 18 It’s worth noting that if this analysis of Indolence is correct, then the case is also a counterexample to the main premise of the revised irrelevance argument considered in §3.1. The fact that Green was ill doesn’t help explain why he missed work, nor does it tell us anything about the sort of thing Green took himself to be doing. Yet, it is relevant to an assessment of whether Green is blameworthy for missing work.
Objections and Replies 87
simply silent on those issues. Yet it still seems that it provides Brown with an acceptable excuse for staying home from work. Hence, PM is false.

Why does Brown's illness provide him with an acceptable excuse, though? The principle about reasonable opportunity identified a moment ago supplies a plausible answer. That Brown was sick as a dog reveals that he didn't have a reasonable opportunity to go to work (insofar as it's typically unreasonable to expect people to go to work when they are so sick, even if it's true that they could go), which in turn suggests that he isn't blameworthy for not going.19
3.8 Responsibility for Act-Features

The final objection to the fine-grained analysis that I want to consider is due to Sartorio (2019). The basic problem with the analysis, Sartorio says, is that it identifies "the ultimate locus of our responsibility in things that are not acts . . . but 'features' of acts, or facts concerning the causal history of acts." And, as she goes on to point out, "these are not typically regarded as the kinds of things that we can be basically responsible for." She thus finds it "hard to see how they can be the ultimate locus of our responsibility, as the [fine-grained analysis] says" (2019: 100).

Sartorio agrees that we can be morally responsible for facts about the causal histories of our behavior. However, she contends that when we are morally responsible for such facts, our responsibility for them is inherited from the responsibility we bear for other states and events. But the fine-grained analysis maintains not just that Jones is morally responsible for deciding on his own to kill Smith (a fact about the causal history of his action), but also that his moral responsibility for so deciding is basic, meaning that it isn't derived from his moral responsibility for anything else. So, if Sartorio is right that moral responsibility for such facts must be derivative, the fine-grained analysis can't hope to succeed, not as it stands anyway.
19 Franklin (2013, 2018) appeals to the notion of reasonable opportunity in developing a theory of excuses. The points I’ve made here are indebted to his discussion of these issues. However, unlike Franklin, I have doubts about whether the principle of reasonable opportunity adequately accounts for the full range of recognized excuses.
A central assumption of Sartorio's objection is that moral responsibility for facts about the causal history of an agent's action can't be basic. But why not? Sartorio notes that "a pretty standard assumption of theories of responsibility is that what we are basically responsible for is (if anything) certain acts of ours (perhaps including acts of omission); most commonly, these are taken to be mental acts such as decisions" (2019: 100). If this assumption were correct, we would have an answer to the question at hand. It seems to me, however, that the assumption isn't correct and that moral responsibility for facts about the causal histories of our behavior needn't always be inherited. Having argued for that claim, I'll then explain how, even if I'm mistaken on this point, the fine-grained response can be modified to take Sartorio's contention into account.

Why think we are basically responsible only for our behavior (actions and omissions)? The answer, I think, has to do with control. It's plausible that we are only directly (or basically) morally responsible for things over which we have direct control. The appeal of that claim can be illustrated by reflecting on responsibility for outcomes that are the consequences of our behavior. Responsibility for outcomes seems always to be inherited and thus not basic. Why is that? A plausible answer is that we don't have direct control over outcomes. We have control over them only indirectly, in virtue of having control over the behavior from which they result. So, if we only have direct control over our behavior or some subset of our behavior, such as our decisions, then we could only ever be basically morally responsible for our behavior. Responsibility for facts about the causal history of our behavior would then always have to be nonbasic. The crucial assumption here is that we only have direct control over our behavior.
But I contend that Frankfurt cases like Revenge contradict that assumption. Whether the neural control device produces Jones’s decision depends on whether Jones decides on his own to kill Smith, and whether Jones decides on his own appears to be at least partly up to Jones. The question, though, is whether the control that Jones has over whether he decides on his own is direct or indirect. I say it’s direct. Why? Because Jones doesn’t have control over whether he decides on his own to kill Smith in virtue of having control over whether he does or doesn’t do something else. He can just decide on his own, or not. He can act on the “unaided” motivations (unaided, i.e., by the neuroscientist and his neural control mechanism) that actually move him to decide at t to kill Smith, or he can resist acting on
those motivations, in which case the neuroscientist's device will kick in and compel Jones to decide to kill Smith (perhaps by enhancing Jones's existing motivations so that they are irresistible).

Note that an agent might resist acting on a desire simply by not acting on it. There may, of course, be cases in which an agent can resist acting on certain desires only by performing or omitting to perform some further action of resistance. Perhaps you can resist lashing out on social media only if you omit watching network news programs, and a thief may be capable of resisting the urge to steal only if he reads a passage from his favorite philosopher.20 In many cases, though, we needn't do anything further to resist the desire to A. We can just resist it. To illustrate, consider Indecision again. Recall that Marla is deliberating about whether to attend a party this evening. Part of her wants to go to the party, while another part of her would prefer a quiet evening at home. At t, Marla omits to decide to attend the party, but whether she acts at t on her desire to attend the party by deciding then to go to the party is partly up to her. She could have acted on that desire at t, but she didn't; she continued to deliberate instead.

Bearing these points in mind, return to Revenge. There may be things that Jones could have done to help him resist his unaided desire to kill Smith. Perhaps he could have actively reminded himself that Smith has a family who loves him, or perhaps he could have imagined how disappointed his own mother would be with him for deciding to kill an innocent person in cold blood. But it seems that Jones could also have resisted his unaided desire to kill Smith in a more basic way, simply by not acting on it, just as he might have resisted acting on that desire in an ordinary version of the story in which there is no neuroscientist or neural control device present.
Of course, in that ordinary version of the story, if Jones had resisted acting on his unaided murderous desire at t, he very well might have omitted to decide to kill Smith, whereas, in Revenge, if he hadn't acted on that unaided desire at t, the neural control device would have compelled him to decide a moment later, at t + 1, to kill Smith. Still, I see no reason why he couldn't have resisted his unaided desire to kill Smith in this basic way, just as he could have done in an ordinary version of the story. And, if he could have resisted that unaided
20 The example of the thief is due to Sartorio (2019: 101).
desire, then it seems he had some direct control over whether he decided on his own to kill Smith.

Think about it like this. In ordinary cases, having direct control over whether you act on a particular motivation goes hand in hand with having control over whether you perform the relevant action. Not acting on the motivation typically involves not performing the action in question. For example, if Marla doesn't act on her desire to attend the party, she presumably won't attend, as there is no neuroscientist waiting in the wings to force her to attend if she doesn't do so on her own. But in cases like Revenge, things are different. Because the neural control mechanism is there, waiting to cause Jones's decision if he doesn't act on his own, Jones's not acting on his own unaided motivations won't result in his not deciding to kill Smith, for in that case, the neural control mechanism will force him to so decide.

Sartorio says that if Jones is directly morally responsible for deciding on his own to kill Smith, "the question arises: What could possibly ground that basic responsibility fact?" (2019: 103). The points I just made about control provide a partial answer to the question. Part of what grounds Jones's moral responsibility for deciding on his own to kill Smith is that he could have avoided doing so, and he could have avoided doing so without first having to do anything else. He thus had some direct control over whether he decided on his own to kill Smith. It's that fact, I contend, that partly grounds his moral responsibility for deciding on his own to kill Smith. Note that this is much the same answer we would give to the question of what could ground basic responsibility for other things. According to most theories of responsibility, part of the answer about what grounds basic responsibility for X has to do with the fact that the agent had the requisite sort of control over X.
Proponents of the fine-grained analysis can say the same thing about what grounds Jones's responsibility for the fact that he decided on his own to kill Smith. Part of what grounds his responsibility for that fact is that he had the requisite sort of control over it; he could have avoided deciding on his own at t to kill Smith. Critics may, of course, reject this answer to the question. But we have yet to see anything implausible about it.

So far, I've argued that agents in Frankfurt cases like Revenge can be directly morally responsible for acting on their own in virtue of having some direct control over whether they act on their own unaided motivations. Suppose, though, that Sartorio is correct that we are only ever indirectly or derivatively morally responsible for facts about the causal histories of our
behavior. Even so, I think there is a way of modifying the fine-grained analysis to accommodate this point. According to Sartorio, "a thief could be responsible for the fact that his desire to steal caused his decision to steal, but this could be because he failed to resist that (resistible) desire. In that case, I submit, he is basically responsible for his failure to resist the desire to steal, and only derivatively for the fact about the causal history of his decision." To illustrate the point, she invites us to "imagine that the thief knows that he is generally able to resist the urge to steal if he looks at a passage on Kantian philosophy . . . , and on that particular occasion he decided not to look at the passage." Sartorio says that the thief "seems responsible for the fact that his desire to steal was causally efficacious," but she insists that "his responsibility for this fact is inherited from his responsibility for the decision not to look at the passage" (2019: 101).

It seems to me that proponents of the fine-grained analysis could say something similar about Frankfurt cases like Revenge. Jones presumably could have tried harder not to decide to kill Smith, for example, by actively resisting the forces that led him to decide at t to kill Smith. Perhaps, as I suggested earlier, he could have reminded himself of Smith's loving family or even looked at a picture of them, an action he knows would make it less likely that he would act on his unaided desire to kill Smith. But Jones didn't do that and, as a result, decided on his own to kill Smith. So, we can say that Jones is indirectly responsible for deciding on his own to kill Smith, in virtue of being directly responsible for not looking at the picture, just as Sartorio's thief is indirectly responsible for the fact that his desire to steal was causally efficacious in virtue of being directly responsible for not looking at the passage on Kantian philosophy.
On this version of the fine-grained analysis, we can grant, at least for the sake of argument, that whatever moral responsibility Jones bears for deciding on his own to kill Smith is inherited from his responsibility for not actively resisting the resistible forces that actually led him to decide as he did. However, a proponent of (this version of) the analysis would insist that Jones’s responsibility for not actively resisting the resistible forces that led him to decide as he did is grounded in part in the fact that Jones could have resisted those forces (though, of course, if he had, other forces would still have moved him to decide to kill Smith). So, what Jones is directly blameworthy for, on this version of the analysis, is his failure to try harder to avoid making such a bad decision. He is only indirectly blameworthy for deciding on his own.
4 Frankfurt Cases

Thus far my discussion of Frankfurt cases has focused almost exclusively on Revenge, which, as I mentioned, is a relatively rudimentary case (as such cases go). But, as I'll now argue, the analysis is no less plausible when applied to other, more complex Frankfurt cases.1
4.1 Prior Signs

I begin with Frankfurt's original case. The example, as Frankfurt presents it, is rather abstract. So, to make it more concrete, I'll augment it with details from Revenge.

Original Revenge: Black . . . wants Jones to [decide to kill and to kill Smith]. Black is prepared to go to considerable lengths to get his way, but he prefers to avoid showing his hand unnecessarily. So he waits until Jones
1 Several authors have argued that divine foreknowledge provides the perfect Frankfurt case, one in which there are no flickers of freedom (see, e.g., Hunt [1996, 1999, 2002, 2003] and Fischer [2022]). Their argument, in a nutshell, is this: If God foreknows that Jones is going to decide (on his own at t) to kill Smith, then Jones couldn’t have done otherwise than decide (on his own at t) to kill Smith. But God’s foreknowledge doesn’t cause Jones’s bad behavior, nor does it provide Jones with a valid excuse for that behavior. Hence, Jones is blameworthy for deciding (on his own at t) to kill Smith, even though he couldn’t have done otherwise. There are two main difficulties with this argument. First, it’s controversial whether foreknowledge is incompatible with the freedom to do otherwise. I won’t discuss that issue here, though, for two reasons. First, doing so would require a book of its own, and second, I have nothing to add to what others have said about the issue. (For a recent discussion of it that I find particularly attractive, see Swenson [2016].) Second, if divine foreknowledge is incompatible with the freedom to do otherwise, this is probably because it requires determinism (see Todd [n.d.]) and determinism is incompatible with the freedom to do otherwise. In that case, my response to the foreknowledge argument would be identical to what I say below, toward the end of §4.1, about the deterministic version of Frankfurt’s original case.
Moral Responsibility and the Flicker of Freedom. Justin A. Capes, Oxford University Press. © Oxford University Press 2023. DOI: 10.1093/oso/9780197697962.003.0004
is about to make up his mind what to do, and he does nothing unless it is clear to him (Black is an excellent judge of such things) that Jones is going to decide to do something other than what he wants him to do. If it does become clear that Jones is going to decide to do something else, Black takes effective steps to ensure that Jones decides to do, and that he does do, what he wants him to do. Whatever Jones's initial preferences and inclinations, then, Black will have his way. . . . Now suppose that Black never has to show his hand because Jones, for reasons of his own [specifically, because he wants to get revenge against Smith], decides to perform and does perform the very action Black wants him to perform. (1969: 835–836)
The fact that Black “waits until Jones is about to make up his mind what to do, and . . . does nothing unless it is clear to him . . . that Jones is going to decide to do something other than what he wants him to do” suggests that Black is prepared to intervene before Jones can make an alternative decision. In this respect, the case differs from Revenge. In Revenge, Jones could have decided at t not to kill Smith, though if he had, the neural control device would have kicked in and compelled him to change his mind. But Frankfurt seemingly wants to rule out that alternative possibility. It seems he wants us to assume that Black will allow Jones to decide on his own only if he knows that Jones is going to decide on his own to kill Smith. If, however, Jones is about to decide on his own not to kill Smith, Black will intervene to prevent Jones from making that alternative decision and will somehow force Jones to decide to kill Smith. Understanding the case in that way, though, requires Black to know in advance whether Jones is going to decide on his own to kill Smith. But how could Black possibly know something like that? Frankfurt (1969: 835, n. 3) attempts to address the question by inviting us to imagine that there is a sign Black uses to determine whether he needs to intervene. Perhaps Jones’s face invariably twitches whenever he is about to decide to kill someone but never when he is about to do anything else. Because Jones’s face twitched just as he began deliberating about whether to kill Smith, Black could be confident that Jones was going to decide on his own to kill Smith and thus that he (Black) needn’t intervene. But if Jones’s face hadn’t twitched at that time, Black would have taken that as his cue to intervene and would have done so
immediately, forcing Jones to decide to kill Smith, before Jones had a chance to make a different decision.2

Introducing the prior sign solves the problem of how Black could know in advance whether he needs to intervene and force Jones's hand, but it raises another complication, which takes the form of a dilemma. Either the sign by which Black predicted that Jones was going to decide on his own to kill Smith was an infallible indicator of what Jones was about to do, or it wasn't. Both options, though, are problematic for Frankfurt's argument.3

Suppose, first, that the facial twitch by which Black predicted Jones's behavior was indeed an infallible indicator that Jones was going to decide on his own to kill Smith. If so, then Jones's decision would appear to have been causally determined; for how could the twitch be an infallible indicator of what Jones was going to decide unless it was, or was indicative of, a deterministic cause of Jones's decision, a cause which, given the laws of nature, necessitated that Jones would decide on his own to kill Smith? It seems, then, that we must also suppose that Jones's decision was deterministically caused, in which case there are two further possibilities: either deterministic causation of action is compatible with the freedom to do otherwise or it isn't.

Suppose, first, that deterministic causation of action is compatible with the freedom to do otherwise. Then the fact that the decision Jones made at t to kill Smith was deterministically caused doesn't entail that Jones couldn't have done otherwise at t than decide then to kill Smith. Black, having seen the twitch, could be confident that Jones would decide at t to kill Smith, but it doesn't follow, given our current working assumption, that Jones wasn't free to do otherwise at t. For all we have said thus far, Jones could have freely decided at t not to kill Smith.
Of course, had he done so, Black would presumably have intervened and compelled Jones to decide a moment later to kill Smith. Still, if determinism is compatible with the freedom to do otherwise, then Jones could have freely done otherwise at t than decide on his own then to kill Smith. In this version of the story, proponents of the fine-grained response would say that, because Jones lacked a fair opportunity to avoid deciding

2 See Blumenfeld (1971) and Stump (1996) for similar suggestions. What I have to say about Original Revenge applies, mutatis mutandis, to the prior sign case developed by Blumenfeld and Stump.

3 The dilemma developed in what follows is inspired by, but differs from, the Dilemma Defense discussed in §1.3.
to kill Smith simpliciter, he isn't morally responsible for so deciding. There are, however, several other things for which Jones arguably is morally responsible. He is arguably morally responsible for deciding on his own to kill Smith, for not trying harder to avoid deciding to kill Smith, and for deciding at t to kill Smith, for he could have avoided deciding on his own to kill Smith, could have tried harder to avoid deciding to kill Smith, and also could have avoided deciding at t to kill Smith.

Similar problems arise if we assume that the facial twitch by which Black predicted Jones's upcoming behavior wasn't an infallible indicator of what Jones was going to do on his own but was merely a reliable, though ultimately fallible, predictor of Jones's behavior. For if the twitch wasn't an infallible indicator of what Jones was going to do, the fact that Jones displayed the sign provided Black no guarantee that Jones was going to decide on his own at t to kill Smith. Jones could have displayed the sign yet still decided on his own at t against killing Smith. So, if the twitch wasn't an infallible indicator of what Jones was going to do, then, since Black didn't intervene, it appears that Jones could have done otherwise at t than decide to kill Smith. Of course, if Jones had done otherwise at t than decide to kill Smith, Black presumably would still have intervened and forced Jones to decide a moment later to kill Smith. Jones therefore isn't blameworthy for deciding to kill Smith simpliciter, since he lacked a fair opportunity to avoid doing so. But, again, there are several other things he did have a fair opportunity to avoid (e.g., deciding on his own to kill Smith, deciding at t, etc.) and thus for which he may very well be blameworthy.

Suppose again that the twitch was, or was indicative of, a deterministic cause of the decision Jones made at t to kill Smith. But this time let's suppose that determinism is incompatible with the freedom to do otherwise.
Then Jones couldn’t have done otherwise than decide on his own at t to kill Smith, in which case a proponent of the fine-grained response would say that Jones isn’t morally responsible for deciding to kill Smith, for deciding on his own to kill Smith, or for deciding at t to kill Smith, as he couldn’t have avoided any of those things.4
4 As I explain below, this judgment isn't ultimately based on the fact that Jones's behavior was causally determined. It's based, rather, on the fact that Jones couldn't have avoided his bad behavior and thus didn't have a fair opportunity to avoid it. In making that judgment, I'm therefore not appealing to incompatibilism.
Is this assessment of the case plausible? You might think not. After all, Jones was an otherwise ordinary agent, and he decided to kill Smith despite knowing that it was wrong to do so. How could he not be blameworthy for what he did? Well, because not doing what he did wasn't an option for him, which means he lacked a fair opportunity to avoid doing what he did. Consider Sharks again. John is an otherwise ordinary agent, and he didn't rescue the drowning child despite believing quite reasonably that he had an obligation to rescue her. As we have seen, though, John isn't blameworthy for not rescuing the child, and he isn't blameworthy for not rescuing her precisely because rescuing her wasn't an option for him. The same, I contend, is true of Jones in the deterministic version of Original Revenge, given the assumption that determinism is incompatible with the freedom to do otherwise.

What does seem clear is that Jones is a morally bad person. Anyone who decides to kill another person in cold blood despite believing that it's wrong to do so is clearly a morally flawed individual. Moral disapprobation of Jones and his behavior thus seems warranted. But it doesn't follow that Jones is blameworthy for what he did, as disapprobation isn't the same as blame. It's entirely possible to have a negative moral assessment of an individual on the basis of his bad behavior while also acknowledging that he isn't blameworthy for that behavior. We can, for example, rightly judge a person to be insensitive, careless, or malevolent based on something the person did or failed to do without deeming the person blameworthy for the bad behavior in question or for the bad character traits manifested therein. It might not be the person's fault that he is the way he is or that he behaved as badly as he did. The fact that Jones is a bad guy who behaved badly therefore doesn't eo ipso mean that he is blameworthy for his bad behavior.
Some people may find these last few claims difficult to swallow. Surely, they might say, someone who deserves moral criticism for something is also deserving of blame for that thing.5 Those who feel this way should keep in mind the following terminological point made in §1.1: a person deserves blame for what he did, in the relevant sense of "blame," if and only if the person deserves to be the target of reactive attitudes like resentment,
5 See, e.g., Adams (1985).
indignation, and guilt for what he did. But clearly a person might be a bad person, and thus might deserve certain forms of moral criticism, without deserving any of the reactive attitudes just mentioned. Again, it might not be the person's fault that he is the way he is or that he behaved as badly as he did.6 Susan Wolf's example of JoJo provides a nice illustration of the point.

JoJo is the favorite son of Jo the First, an evil and sadistic dictator of a small, undeveloped country. Because of his father's special feelings for the boy, JoJo is given a special education and is allowed to accompany his father and observe his daily routine. In light of this treatment, it is not surprising that little JoJo takes his father as a role model and develops values very much like Dad's. As an adult, he does many of the same sorts of things his father did, including sending people to prison or to death or to torture chambers on the basis of whim. (1987: 53–54)
As Wolf rightly points out, “In light of JoJo’s heritage and upbringing—both of which he was powerless to control—it is dubious at best that he should be regarded as responsible for what he does. It is unclear whether anyone with a childhood such as his could have developed into anything but the twisted and perverse sort of person that he has become” (1987: 54). But even if JoJo isn’t to blame for his bad behavior, he is still a terrible person doing terrible things. His lack of moral responsibility for his character and the behavior in which it results doesn’t change that. We can, then, agree that Jones behaves badly and that his bad behavior is indicative of a flawed character. However, if determinism is incompatible with the freedom to do otherwise, and if Jones’s behavior was completely causally determined by factors beyond Jones’s control, he couldn’t have done otherwise than decide (on his own at t) to kill Smith, in which case it’s not at all obvious that he is blameworthy for deciding (on his own at t) to kill Smith.7
6 "They don't know yet who's going to win this exchange," the Duke said. "Most of the Houses have grown fat by taking few risks. One cannot truly blame them for this; one can only despise them" (Herbert [1965: 107–108]).

7 These points apply, mutatis mutandis, to the time-travel case developed by Spencer (2013) and to the foreknowledge argument mentioned in note 1. For a similar response to Spencer's time-travel case, see McCormick (2017).
4.2 The Willing Addict

Similar difficulties beset a second attempt by Frankfurt to develop a counterexample to PAP. In his 1971 paper "Freedom of the Will and the Concept of a Person," Frankfurt discusses a case that's importantly different from cases like Revenge and its variants and which he claims is also a counterexample to the principle. It features a drug addict whose desire to use a certain drug is so powerful that the addict couldn't have resisted the desire no matter how hard he might have tried. However, the addict doesn't even try, for "he is altogether delighted with his condition. He is a willing addict, who would not have things any other way" (Frankfurt 1971: 19).8 Because the willing addict's use of the drug resulted from an irresistible desire, he couldn't have done otherwise than use the drug. However, given that the addict is a willing addict—willing in the sense that, in using the drug, he does what he really wants to do (i.e., what he wants to want to do)—Frankfurt claims that he can be morally responsible for using drugs, nonetheless. Here, then, Frankfurt thinks, we have another counterexample to PAP.

A central premise of this argument is that the willing addict is morally responsible for using the drug even though he couldn't have avoided using it. But that's hardly obvious. Why think it's true? Frankfurt doesn't say, exactly, though he does make a couple of suggestive comments about the issue (1971: 19). He points out that, in using the drug, the willing addict did exactly what he wanted to want to do and therefore can't claim to have been forced against his will to take the drug, nor can he claim to be a mere bystander, dragged along by the force of his addiction. Frankfurt also notes that, because the addict did exactly what he wanted to want to do, he wouldn't have done anything differently even if he could have done something different.
Neither point, though, whether taken separately or in conjunction, supports the conclusion that the willing addict is morally responsible for using the drug. To see this, consider Sharks yet again. John, you'll recall, isn't blameworthy in that case for not rescuing the child. Recall, too, that John wasn't forced
8 The willing addict may well be a creature of philosophical fiction. Most (perhaps even all) real addicts don't have literally irresistible desires to use drugs (Pickard [2015]). But all Frankfurt needs is a metaphysically possible case, and I grant that the case as he describes it is metaphysically possible. See also Capes (2012) for a discussion of similar cases.
against his will not to rescue the child, and he could have at least tried to rescue her. But he didn't really want to. He was, we may imagine, perfectly satisfied with the decision he made not to (try to) help the child. He therefore wouldn't have rescued the child even if he could have rescued her. In these respects, he is just like the willing addict. Yet, as even Frankfurt (1994) acknowledges, John isn't blameworthy in Sharks for not rescuing the child. The fact that, in Sharks, John couldn't have rescued the child even if he had tried supports the conclusion that, whatever else John may be blameworthy for in that case, he isn't blameworthy for not rescuing the child. Given the symmetry thesis, we should say something similar about the willing addict. The fact that he couldn't have avoided using the drug even if he had tried supports the conclusion that, whatever else he may be blameworthy for, he isn't blameworthy for using the drug.

That's not to say there is nothing morally untoward about the addict or his behavior. As I argued above, in connection with Original Revenge, a person can be open to moral criticism for his behavior even if he isn't blameworthy for it. We can thus acknowledge the addict's moral flaws without deeming him blameworthy for them or the behavior in which they inevitably result. Nor is it to say that the willing addict is completely off the hook. Just as John may be blameworthy in Sharks for not trying to save the child even though he isn't blameworthy for not saving her, so too the willing addict may be blameworthy for not trying to resist the desire that drove him to use the drug, even if he isn't blameworthy for ultimately giving in to that desire.

A better-developed argument for the conclusion that the willing addict is morally responsible for taking the drug (even though he couldn't have avoided taking it) is due to Chandra Sripada (2017).
Sripada’s argument is based on the following case:

Willing Exploiter: Will desires to view exploitive pornography, and these desires “are deeply expressive of his self.” He “has a narcissistic kind of self-love at his core. He is attracted to the idea that he is in a position of dominance over others and the exploitiveness of the pornographic material is thus exactly what he finds so deeply gratifying.” Will “thus stands strongly in favour of his desires to view exploitive images and wouldn’t change a thing.” Moreover, “the desires to view the images are sufficiently powerful in their own right that, though he doesn’t and wouldn’t ever try to resist these desires, were he to try, he would fail” (2017: 802–803).
Sripada claims that Will is morally responsible (because blameworthy) for viewing the exploitive images. He also claims, plausibly, that Will “is not relevantly different from the willing addict with respect to moral responsibility,” so that if Will is morally responsible for viewing exploitive images, then so too the addict is morally responsible for using the drug (2017: 804). It follows from these two claims that the willing addict is morally responsible for using the drug. The crucial premise here is that Will is blameworthy for viewing the exploitive images. Sripada offers two lines of support for that premise. The first is that it’s intuitively plausible. “[I]t intuitively strikes us,” Sripada says, “that [Will] is worthy of blame for viewing the exploitive images.” The second appeals to the idea that “expressing ill will in one’s action is a sufficient condition for being blameworthy for it” (2017: 803). Given that Will expressed ill will in viewing the exploitive images, it follows that he is blameworthy for viewing them. I’m not persuaded by either of these attempts to motivate the premise. I for one don’t find intuitively plausible the claim that Will is blameworthy for viewing the exploitive images. (I’m assuming, of course, that Will couldn’t have avoided viewing them and that he’s not responsible for having the desires that drove him to view them.) Nor is it plausible that acting with ill will is sufficient for being blameworthy for one’s action. I elaborate on both points in turn. I don’t deny that there might be something for which Will is blameworthy. For instance, he is perhaps blameworthy for not trying to resist his desire to view exploitive images (just as John, in Sharks, is blameworthy for not trying to save the drowning child). 
But given that he couldn’t have resisted the temptation to view such images even if he had tried his hardest to do so, it isn’t at all obvious that he is worthy of blame for viewing them. What does seem clear is that Will is a morally bad person. Anyone who has the sort of desires he has and who endorses them in the way he does is clearly a deeply flawed individual. Moral disapprobation of him and his behavior thus seems warranted. But, again, it doesn’t follow that Will is to blame for what he did, as disapprobation isn’t the same as blame. We can, then, agree that Will’s behavior is immoral and is indicative of a deeply flawed character. We can also agree that Will is blameworthy for not trying to resist the desires that led him to view exploitive images. But these obvious facts leave open whether Will deserves blame (in the reactive
attitude sense) or sanctions for viewing exploitive images, and, as I say, it seems to me that he isn’t to blame (in that sense) for viewing the images, given that he couldn’t have avoided viewing them even if he had tried his best to do so. In this respect, Will seems just like John in Sharks, who is blameworthy for not trying to rescue the child but isn’t blameworthy for not rescuing her given that he couldn’t have rescued her even if he had tried his best to do so. What, though, of the fact that Will’s behavior displays ill will and a lack of due regard for others? Isn’t that sufficient to render him blameworthy for the behavior? It isn’t. Jojo, too, you’ll note, acts with ill will. But, again, it’s hardly obvious that he is to blame for his bad actions.9 The willing addict may be blameworthy for immediately giving in to the desire to use drugs and for not trying harder to resist that desire. However, we have yet to see a compelling reason to think that he is blameworthy for eventually giving in and using drugs, and thus have yet to see a compelling reason to think that the case is a counterexample to PAP. We have, however, seen reason to think that the addict isn’t blameworthy for using. I conclude that Frankfurt’s Willing Addict doesn’t provide us with a clear counterexample to PAP.10
4.3 Blockage

Several years ago, while on his way home from the local supermarket, Martin turned right at the usual place, not noticing that, due to some construction, he couldn’t have gone in any other direction—not even back the way he came, as the construction crew had moved their heavy-duty equipment in behind him as he passed. Notice, though, that Martin didn’t turn right because of the construction. He turned right because that was the way home, the way he wanted to go, the way he would have gone even if there hadn’t been any construction blocking the other routes.11

9 Other counterexamples to the claim that acting with ill will is sufficient for blameworthiness are easy to construct. See, e.g., many of the examples of manipulation discussed by Mele (2019).
10 For a similar treatment of the willing addict, see Fara (2008).
11 This story is a slightly modified version of one told by Fischer (1994: 242, n. 22).
Because of the construction, Martin couldn’t have gone in any other direction, but he could have at least chosen to turn left or chosen instead to continue straight (though, of course, he would have been prevented by the construction from carrying out those choices). He also could have omitted to turn right, by simply remaining stationary at the intersection. Anecdotes like this therefore don’t provide us with cases in which all an agent’s alternative possibilities for action are blocked. Still, such stories are suggestive. The interesting thing about them is that while most of the agent’s (relevant) options are closed off, the circumstances responsible for that fact aren’t among the causes of the agent’s actual behavior. The agent behaves as he does for reasons of his own, reasons that have nothing whatsoever to do with the fact that other courses of action he might have considered taking are unavailable to him. This suggests that perhaps all a person’s alternative possibilities for action could be blocked, including mental options like making a different choice, without the circumstances responsible for the blockage being among the causes of the agent’s behavior. For example, perhaps there is a neural analogue of construction, something that could block all neural pathways that would be involved in an agent making a different choice, without causing the agent to make the choice that he does. If so, we could tell just the sort of story Frankfurt thinks would yield a counterexample to PAP, a story in which factors that play no role in the etiology of what a person does or decides to do nevertheless eliminate every alternative possibility for action the agent would otherwise have had, including, it would seem, those pesky flickers of freedom that seem to keep flaring back up. David Hunt (2000, 2003) has proposed a version of Revenge along these lines. 
In Hunt’s version of the story, the neural control device “blocks neural pathways” in advance but without interfering in any way with the neural events associated with the natural deliberative process that leads to Jones’s decision at t to kill Smith. The mechanism, we are told, allows that process to unfold just as it would have if the mechanism weren’t in place. All it does is block “all alternatives to the series.” However, “owing to a fantastic coincidence the pathways it blocks just happen to be all the ones that will be unactualized in any case, while the single pathway that remains unblocked is precisely the route the man’s thoughts would be following anyway (if all neural pathways were unblocked). Under these conditions,” Hunt says, Jones remains “responsible for his thoughts and actions,” since the mechanism only
blocks alternative thoughts and actions but isn’t among the causes of Jones’s decision and subsequent actions (2000: 218). Examples like this have come to be known as “blockage cases,” for obvious reasons. We can thus refer to Hunt’s version of Revenge as Revenge-B (for blockage). What to make of it? Does Revenge-B give us an example of the desired sort, one in which Jones is morally responsible for deciding to kill Smith even though he couldn’t have avoided doing so? It’s not obvious that it does. This becomes apparent when we consider the following question about the case posed by Fischer (1999: 119): did Jones “have access to a scenario in which his neural path makes contact with or ‘bumps up against’ the blockage?” There are problems either way. Suppose, first, that Jones’s deliberative process could have bumped up against the blockage. In that case, Jones could have avoided deciding at t to kill Smith: had his actual deliberative process bumped up against the blockage at t, the blockage would have forced Jones to decide to kill Smith, but it would presumably have taken some time for the blockage to put Jones back on track, so to speak, and so the decision would have come after t. Thus, if Jones did “have access to a scenario in which his neural path . . . ‘bumps up against’ the blockage,” it looks as if he could have avoided deciding at t to kill Smith after all. Note, too, that Jones could have avoided deciding on his own to kill Smith. Thus, to the extent that Jones is morally responsible for something in Revenge-B, the fine-grained analysis of Frankfurt cases suggests that while Jones isn’t morally responsible for deciding to kill Smith simpliciter, since he couldn’t have avoided doing so, he can still be morally responsible for deciding at t to kill Smith and also for deciding on his own to kill Smith. 
It could be objected that the alternative possibility just identified, in which Jones bumps up against the blockage before deciding to kill Smith, is insufficiently robust to ground Jones’s moral responsibility (see Fischer [1994: 131–159] and Hunt [2000: 212–213]).12 Perhaps, but whether that’s so depends on how we are to understand what the neural bumping involved in the alternative possibility represents. If it represents an attempt, or even the beginnings of an attempt, by Jones to pursue other options, then the alternative possibility arguably would be a robust option for Jones. If, for example, the bumping represents an attempt by Jones to (freely) continue deliberating at t, that would count as a robust alternative on any plausible account of robustness. It would certainly pass the reasonable expectations test for robustness proposed in section 3.2, for we could reasonably have expected Jones not to decide on his own at t to kill Smith and to instead attempt (or begin to attempt) at t to continue deliberating. But what if bumping into the blockage doesn’t represent (the beginnings of) an attempt by Jones to do something else at t? What if it’s just a brief neural glitch on the ineluctable path to deciding to kill Smith, an occurrence over which Jones has no control whatsoever? Then I would agree that the alternative possibility in question isn’t robust and thus isn’t the sort of option called for by PAP. But in that case, I would argue, as I did in connection with Frankfurt’s Original Revenge and his Willing Addict, that Jones isn’t morally responsible for deciding (on his own at t) to kill Smith, and this precisely because he lacked any robust alternative possibilities. He may, however, still be morally criticizable, insofar as his decision reflects poorly on his moral character, and he may be to blame for not trying harder to avoid making such a morally bad decision. But if he couldn’t have avoided deciding (on his own at t) to kill Smith, then we have reason to conclude that he isn’t morally responsible for doing so. Suppose now that Jones couldn’t have bumped up against the blockage. 

12 Hunt (2000: 208–216) discusses another case, which he calls BJS2 (BJS is short for Black, Jones, and Smith), that involves a similar alternative possibility (the possibility that Jones momentarily avoids deciding to kill Smith, only to be forced by Black’s mechanism to decide “a split second after t” to kill Smith [2000: 212]). Hunt contends that Jones is morally responsible in this case for deciding to kill Smith, but that the alternative possibility available to Jones is insufficiently robust to ground Jones’s moral responsibility. However, I disagree, both for the sorts of reasons articulated in the next few sentences of this paragraph and for the sorts of reasons set out in section 3.2.
This is how I think Hunt wants us to understand the case, for as Fischer points out, “to have access to the blockage, there would have to be an intermediate set of neural events, different from the actual neural events, that is, as it were, a ‘bridge’ between the actual neural process and the blockage. But even these intermediate events are presumed to be blocked in Hunt’s example” (1999: 119). But if the blockage is indeed so tight that it prevents even the slightest deviation from the actual path, it isn’t clear that the blockage doesn’t compel Jones to decide at t to kill Smith. One way to bring this out is to ask how we are supposed to distinguish the case as Hunt envisions it in which the blockage doesn’t compel Jones’s decision from a version in which the blockage does compel his decision. If there were room to bump up against the blockage, the answer would be straightforward, for in that case we could say that Jones’s
unaided deliberative process alone led to his decision to kill Smith, since that process didn’t interact with the blockage in any way, whereas if it had bumped up against the blockage, the blockage would then have been causally implicated in the production of the decision. But since we are now assuming that there is no room for such neural bumping, we can’t distinguish the two scenarios in that way. So, then, how are we to distinguish them? Absent a compelling answer to this question, it isn’t clear that the blockage can be excluded from the etiology of the decision Jones made at t to kill Smith. But if Jones’s decision to kill Smith was indeed compelled by the blockage, his moral responsibility for any of the resulting events or states of affairs will be in serious doubt. Can’t we, though, just stipulate that the blockage plays no causal role in the actual sequence of events? I don’t think we can. What we want to know is whether Revenge-B provides us with a conceptually coherent case in which there are circumstances that eliminate all of a person’s alternative possibilities for action but that have nothing to do with the production of the person’s behavior. Simply stipulating that the blockage plays no role in the etiology of the decision Jones made at t to kill Smith, without any explanation of how that could be, would seem to beg the question at issue, excluding by fiat, as it were, the possibility that the blockage did contribute to Jones’s decision. So, unless proponents of blockage cases can distinguish in a principled way the scenario in which the blockage doesn’t causally contribute to Jones’s decision from the one in which it does, it isn’t obvious that we have a case of the sort critics of PAP are looking for, one in which the circumstances that eliminate the agent’s alternative possibilities don’t compel, and aren’t otherwise among the causes of, the agent’s actual behavior.13
13 I thus agree with Fischer (1999: 119) when he notes that “the example is difficult to imagine (and thus properly to evaluate).” Pereboom (2001: 18) offers a similar assessment of the case, and much of what I’ve said here is indebted to his discussion of the issue. Even Hunt (2005) acknowledges the difficulty. He writes: “The central difficulty is that the conditions barring Jones’s access to alternative pathways and guaranteeing his decision to kill Smith must be distinguished, in some non–ad hoc way, from the sorts of conditions that would beg the question against incompatibilism by causally determining Jones’s decision.” And although Hunt thinks that “blockage cases do seem intuitively different at some level from cases of straightforward causal determination,” he grants that they are “probably not the magic bullet for which PAP’s critics are looking” (2005: 131–132).
4.4 Modified Blockage

There may, however, be a way of modifying cases like Hunt’s so that they avoid this last difficulty. It involves adopting the following “neuro-fictional” story developed by Alfred Mele and David Robb (1998: 104) about how a person’s brain might operate.14 Imagine there are “decision nodes” in Jones’s brain, each associated with a different decision Jones might make, and that the “lighting up” of a node represents the associated decision. Most relevant for present purposes are nodes N1 and N2. The lighting up of N1 represents a decision by Jones to kill Smith, and the lighting up of N2 represents a decision on his part not to kill Smith. When a neural process “hits” one of these decision nodes, it lights up that node, unless the process is either preempted by some other process or the decision node in question has been deactivated (or “neutralized,” as Mele and Robb put it). Now imagine there are two neural processes going on in Jones’s head, process X, which is Jones’s own indeterministic process of deliberation, and process P, a deterministic process of which Jones is completely unaware that was initiated by the neuroscientist using his neural control device. P, being a deterministic process, will inevitably hit N1 at t, whereas X, being an indeterministic process, might or might not hit N1 at t (e.g., it might hit N2 at t instead). In the event that X and P both hit N1 at t, X will light up N1 (and thus issue in the associated decision at t) and P won’t. This is because Jones’s brain is more sensitive to his own process of deliberation than to artificial processes like P. If, however, the two processes diverge at t—for example, because P hits N1 at t while X hits N2 at t—then P will light up N1 (and thus issue in the associated decision at t to kill Smith) and X won’t light up N2. This is because, by t, the neural control device will have already deactivated N2 along with every other decision node in Jones’s brain except N1. 
As it happens, both X and P hit N1 at t. Call this version of the case Revenge-MB (for modified blockage). It might not seem a lot like Revenge-B at first, as there isn’t any mention of neural pathways being blocked. But, as Derk Pereboom points out, it’s clear that the deactivation of decision nodes that occurs in the story makes it “very much
14 See also Timpe (2003). Mele and Robb’s story features a guy named Bob, who decides at t2 to steal Ann’s car, but the conceptual machinery it employs can obviously be adapted to fit the narrative about Jones and his decision to kill Smith, which is the narrative typically featured in discussions of Frankfurt cases.
like a blockage scenario” (2001: 18). To make this clearer, we can replace the idea that the neural control device deactivates other decision nodes with the idea that it blocks them, so that no deliberative process could “hit” any node other than N1. Unlike Hunt’s version of the story, though, this modified version promises to provide a way of distinguishing the scenario as it transpires, in which Jones’s unaided deliberative process leads him to decide at t to kill Smith, from the counterfactual scenario in which the decision Jones makes at t to kill Smith is caused by the neural control device. In the actual sequence of events, Jones decided on his own at t to kill Smith, and not as a result of the device, because it was X (Jones’s own, unaided process of deliberation) and not P that lit up N1 at t, whereas Jones would have decided at t to kill Smith as a result of the device if, but only if, it had been P that lit up N1 at t and not X. If Revenge-MB is coherent, we have just the sort of case we are looking for, one in which the circumstances that make it impossible for the agent to avoid behaving as he did at t aren’t among the causes of what the agent did at t. Because the neural control device blocked all decision nodes other than N1 and would have compelled Jones to decide at t to kill Smith had Jones not decided on his own at t to kill Smith, Jones had no choice but to decide at t to kill Smith. However, because Jones decided on his own at t to kill Smith, without any “help” from the neuroscientist or his neural control mechanism, the circumstances that eliminated Jones’s other options play no role whatsoever in the causal history of his decision. But is the story coherent?15 As I see it, the answer depends largely on whether trumping preemption is possible. 
Trumping preemption is a kind of causal preemption involving two or more distinct causal processes, each of which, if it reaches completion and isn’t preempted, is individually sufficient to produce a certain effect. All of these processes reach completion, but one of them preempts the others, causing the effect. When this sort of preemption occurs, the efficacious process is said to “trump” the preempted processes, which, because they are preempted, don’t cause the effect, though they would have caused that effect (at the very same time), had they not been trumped by the efficacious process.
15 For the main objections to it, see Ekstrom (2002), Ginet (2003), Ginet and Palmer (2010), Goetz (2002), Kane (2003), Pereboom (2001: 13–18), and Widerker (2000, 2003). Most of these objections deny, without much argument, the possibility of trumping preemption and won’t work if such preemption is in fact possible.
Revenge-MB is supposed to be a case of trumping preemption. Jones’s decision at t to kill Smith issues from his own indeterministic decision-making process, X. That process trumps P, the deterministic process initiated by the neuroscientist, which would have compelled Jones to decide at t to kill Smith, had it not been preempted by Jones’s own deliberative process. So, is trumping preemption possible? Beats me.16 I’m inclined to think it is, though I acknowledge that the issue is extremely delicate. Either way, though, Revenge-MB isn’t a counterexample to PAP. Suppose that trumping preemption is possible. Then stories like Revenge-MB appear to be conceptually coherent and to provide us with cases in which the agent couldn’t have done otherwise at t but in which the circumstances that make that so aren’t among the causes of the agent’s behavior. Even so, the case wouldn’t be a counterexample to PAP, for while Jones could perhaps have tried harder not to make such a bad decision, and while he could have avoided deciding on his own to kill Smith, he couldn’t have avoided deciding (at t) to kill Smith no matter how hard he might have tried. The fine-grained analysis of Frankfurt cases thus tells us that Jones isn’t morally responsible in Revenge-MB for deciding to kill Smith, nor is he responsible in that case for deciding at t to kill Smith, since he couldn’t have avoided either of those things. Jones may, however, be morally responsible for deciding on his own to kill Smith and for not trying harder to avoid deciding to kill Smith, as he could have avoided deciding on his own to kill Smith and could also have tried harder not to make such a bad decision. This assessment of the case is even more plausible if trumping preemption is impossible. 
Suppose it is impossible and that cases such as Revenge-MB that appear to demonstrate its possibility are in fact cases of simultaneous overdetermination, as is sometimes suggested. If so, then the decision Jones made at t to kill Smith was indeterministically caused by X, Jones’s own decision-making process, and also deterministically compelled by P, the process initiated by the neuroscientist.17 In that case, Jones can be morally responsible for deciding as a result of process X, insofar as he could have avoided deciding as a result of that process, but he isn’t morally responsible for deciding (at t) to kill Smith, given that his decision was compelled by the neural control mechanism in a way that left him powerless to avoid deciding (at t) to kill Smith.18

16 For defenses of trumping preemption, see Schaffer (2000) and Mele and Robb (1998, 2003). For doubts about it, see Bernstein (2015b) and Hitchcock (2011).
17 The case now resembles that told by Funkhouser (2009: 361).
4.5 Self-Imposed Blockage

Even if the fine-grained analysis succeeds against blockage cases like Revenge-B and Revenge-MB, that would be a hollow victory for proponents of the analysis if there were other blockage cases in which an agent is morally responsible for what he did at t, and for doing it on his own at t, even though the agent couldn’t have avoided performing the relevant action on his own at t. A case fitting that description would be a counterexample to PAP and obviously wouldn’t be vulnerable to the fine-grained analysis, at least not any version of it articulated thus far. Bradford Stockdale (2022) aims to produce just such a case. Because the agent featured in Stockdale’s example is himself responsible for making sure that he has no alternative at the time of action but to perform the relevant action at that time, we may refer to cases like it as “self-imposed blockage cases.” I’ll refer to the example itself as Guru. Here it is:

A very forgetful self-control guru, Gary, knows that it is his twenty-fifth wedding anniversary today. Gary lives in an indeterministic world, though this does not preclude there being instances of deterministic causation in Gary’s world. Knowing that he is always forgetting even the most important things as a result of getting lost in his work, Gary wants now, at a time t (before he begins work for the day), to ensure that he will decide at a later time, t2 (after he gets off work that day), to take his wife out for dinner. Since Gary is such an exceptionally talented self-control guru, he has discovered a way to initiate a deterministic process in his brain that will ensure that a certain decision is made at a particular time of his choosing. When Gary thinks certain thoughts in a specific sequence, he can initiate a deterministic process in his own brain that will cause him to make a decision at a particular time unless his indeterministic deliberation issues in the same decision at that same time. He then uses this technique to initiate a deterministic process (D) that will cause him to decide at time t2 to take his wife to dinner unless his indeterministic deliberation issues in a decision at t2 to take his wife out to dinner. Gary is so forgetful that, as soon as he starts working, he forgets about his intention to decide at t2 to take his wife to dinner. He also forgets that he initiated and possesses D. D is screened off from the rest of Gary’s consciousness and, as such, plays no role in Gary’s deliberation. Now, as it just so happens, prior to t2 Gary remembers that it is his anniversary and decides on his own at t2 to take his wife out for dinner as a result of his indeterministic deliberation. D plays no role in his decision, as it was screened off, and Gary completely forgot about his earlier decision to take his wife to dinner and ensure the decision was made at t2 by D. However, if Gary had not decided on his own at t2 to take his wife out to dinner, D would have resulted in the decision at t2 to take his wife out. (2022: 32–33)

18 Pace Funkhouser (2009), who claims that Jones would still be morally responsible for deciding to kill Smith. Funkhouser’s argument for this claim is a version of the irrelevance argument discussed in section 3.1.
Stockdale contends that Guru is a counterexample to the following version of PAP: “a person is basically (non-derivatively) morally responsible for what she has done at time t only if, at t, she could have done otherwise” (2022: 31). Because this version of the principle is restricted to basic (i.e., direct, nonderivative) moral responsibility, Stockdale refers to it as PAP-B (for basic). Gary, Stockdale says, is “basically morally responsible for the decision he makes at t2 to take his wife to dinner,” even though, at t2, he couldn’t have avoided making that decision then (2022: 31). What’s more, Stockdale contends that cases like Guru aren’t vulnerable to the fine-grained analysis. Either Gary decides at t2 as a result of his own indeterministic process or as a result of his own deterministic process D. At t2, those are his only options. But “whichever process results in Gary’s decision,” whether D or the indeterministic process that actually led to the decision, “[the decision] will be one [Gary] made on his own” (2022: 33–34). Gary therefore couldn’t, at t2, have done otherwise than decide on his own at t2 to take his wife to dinner. Thus, even if what Gary is most directly morally responsible for is deciding on his own at t2 to take his wife to dinner, Stockdale insists that we would still have
a counterexample to PAP-B, since, at the time of decision, Gary couldn’t have avoided making that decision on his own at t2. There are several points I want to make about this case. The first thing to note about it is that it inherits the metaphysical baggage of modified blockage cases. It requires the possibility of trumping preemption, since it assumes that Gary’s natural indeterministic process of deliberation trumps the special deterministic process D initiated by Gary earlier in the day. Thus, if trumping preemption isn’t possible, then the case, as formulated by Stockdale, isn’t possible either. Suppose, though, that trumping preemption is possible. Even so, Guru isn’t a counterexample to PAP. It’s not a case in which a person is morally responsible for what he did but in which the person lacked a fair opportunity to do otherwise. Gary may be morally responsible for deciding (on his own at t2) to take his wife to dinner, but he had a fair opportunity to avoid doing so. Gary could have refrained earlier in the day from initiating the deterministic process, and if he had, he might very well have decided at t2 not to take his wife to dinner and to continue working instead. However, he resisted the temptation to do the wrong thing. He did so by making sure that, one way or another, he would decide (on his own) at t2 to take his wife to dinner. He may deserve some credit for all this, but if he does, he deserves it in part because he could easily have given in to the temptation to do otherwise but didn’t. Although not essential to my purposes in this book, I’ll also briefly argue that Guru isn’t a counterexample to the version of PAP targeted by Stockdale (i.e., PAP-B). It wasn’t within Gary’s power, at t2, to avoid the decision he made at that time to take his wife to dinner. 
But whatever moral responsibility Gary bears for that decision derives solely from his earlier action of initiating D, an action that ensured that Gary would decide later, at t2, to take his wife to dinner. We therefore don’t have a case in which a person is basically morally responsible for what he did at a time even though, at that time, he couldn’t have done otherwise then. Stockdale insists that Gary is “basically morally responsible for the decision he makes at t2 to take his wife to dinner because the responsibility for that decision is not inherited from any of [Gary’s] earlier actions.” And he thinks that “the responsibility for that decision is not inherited from any of [Gary’s] earlier actions” because “Even though [Gary] initiated D earlier, D plays no role in his deliberation or decision” (2022: 33). I assume that when Stockdale says “D plays no role in [Gary’s] deliberation or decision” he means
that D isn’t a cause of Gary’s deliberation or decision. Granted. But it follows from this that Gary’s responsibility for his decision isn’t inherited from his earlier actions only if we add the further premise that Gary’s responsibility for his decision can be inherited from those earlier actions only if they are among the causes of that decision. However, Stockdale offers no reason to accept that further premise. In the absence of a compelling defense of it, his argument for the conclusion that Gary is “basically morally responsible for the decision he makes at t2” is incomplete at best. There is, however, something to be said for the opposite conclusion that Gary isn’t basically responsible for his t2 decision. Suppose Gary didn’t have a fair opportunity to avoid his earlier action of activating D and that he isn’t morally responsible for that earlier action (e.g., because it was the result of an irresistible desire or because Gary accidentally initiated D, without being aware that he did so). In that case, I no longer have the intuition that Gary is morally responsible for deciding (on his own at t2) to take his wife to dinner, since, in this revised version of the case, he didn’t have a fair opportunity to do otherwise. Indeed, the story is now relevantly similar to modified blockage cases like Revenge-MB, and, as I’ve argued, the agent in cases like that isn’t (directly) morally responsible for the relevant decision. Thus, unless we assume that Gary is morally responsible for his earlier action, an action that ensured he would subsequently decide (on his own at t2) to take his wife to dinner, it’s not at all clear that Gary is morally responsible for deciding (on his own at t2) to take his wife to dinner.
This suggests that, at best, Gary is only indirectly or derivatively morally responsible for so deciding, though he may, of course, be directly morally responsible for other things, like making the decision as a result of his indeterministic deliberative process, without having to be helped along by D.
4.6 Limited Blockage

The original blockage cases were supposed to be cases in which all an agent’s alternative possibilities for action are blocked off. But, as McKenna (2003) observes, one needn’t eliminate all an agent’s alternative possibilities for action to get a counterexample to PAP. All one would need is a case in which all the agent’s robust alternatives are blocked off, where a robust alternative,
you’ll recall, is one that is relevant per se to the explanation of whether or why the agent is morally responsible. If the agent in a case like this is nevertheless morally responsible for what he did, then we would have a counterexample to PAP, a case in which an agent is morally responsible for what he did but not even in part because he could have avoided doing it. McKenna (2003) was the first to try to develop a counterexample to PAP along these lines. In McKenna’s main example, which he calls Brain Malfunction, Casper must choose between doing a good thing (pressing a button and thereby saving a village from a deadly disease) and doing a bad thing (pressing a different button and thereby stealing millions of dollars from his colleagues). He chooses to do, and then does, the bad thing for reasons of his own. Unbeknownst to Casper, though, he couldn’t have chosen to do, and couldn’t have done, the good thing owing to “a small lesion on his brain that blocked the neural pathway constitutive of (or correlated with)” doing the good thing (2003: 210). Rest assured that the brain lesion had absolutely nothing to do with his doing the bad thing. He would have done the same thing, in the same way, and for the same reasons, even if the lesion hadn’t been there and thus even if it had been within his power to do the good thing. All the lesion did was to block the neural pathway associated with doing the good thing. But since Casper made no attempt to do the good thing, the lesion didn’t play a role in guiding Casper’s actual deliberative process. All it did was to ensure that Casper didn’t do the good thing. But it didn’t prevent Casper from not doing the bad thing; Casper could have refrained from pressing the button and stealing the money from his colleagues. Indeed, McKenna invites us to imagine that there were “oodles and oodles of alternatives” available to Casper.
“Casper could have sung a little ditty and done a cutesy jig like Shirley Temple, finishing off with a set of jazz hands; or begun citing nursery rhymes; or made an attempt to eat his fist; or any number of equally ludicrous and irrelevant things” (2003: 212–213). McKenna claims that Casper is blameworthy for doing the bad thing in this case, and given the details of the case, that claim certainly seems plausible. With the exception of the brain lesion, Casper is an otherwise normal, mentally healthy adult who knows that he ought not to do the bad thing and believes that he should do the good thing instead. What’s more, although Casper couldn’t have done the good thing, he at least could have refrained from doing the bad thing. When we bear all this in mind, it certainly seems that Casper is blameworthy for what he did. However, McKenna argues that
while Casper is indeed to blame for doing the bad thing, this isn’t due even in part to the fact that he could have refrained from doing it. If McKenna is right about that, then Brain Malfunction is a counterexample to PAP. The crucial premise of McKenna’s argument is the claim that simply refraining from doing the bad thing (without also doing the good thing) isn’t a robust alternative possibility for action. But why think that? McKenna notes that not all morally significant alternative possibilities are robust. To illustrate the point, he invites us to consider a case he calls Needed Medication in which Tal finds Daphne unconscious and in need of a special prescription medication, which, unbeknownst to Tal, Daphne has stored in the aspirin jar. Though Tal could accidentally retrieve the needed medication from the aspirin jar (e.g., not finding the medication in its usual place, he might accidentally stumble upon it in a frantic search for the stuff), this surely isn’t the sort of alternative possibility that can plausibly ground moral responsibility. The reason, according to McKenna, is that “It cannot be morally expected of Tal that he consider the option of fetching the [medicine] from the jar marked ‘aspirin’ ” (2003: 208). Cases like Needed Medication suggest that, for an alternative possibility to be robust, it’s not enough for it to be morally significant; it must also be an alternative that, given the agent’s “agent-relative deliberative circumstances,” it would have been reasonable for the agent to have considered as an alternative to the action he actually performed (2003: 209).
But, returning now to Brain Malfunction, McKenna contends that Casper’s not doing the bad thing (without also doing the good thing) doesn’t satisfy this requirement for robustness, for “we might simply build into the case that Casper would find this option irrelevant,” the idea being, I take it, that if Casper would find the option irrelevant, it wouldn’t have been reasonable, given his deliberative perspective, for him to have considered it as an alternative to the action he actually performed (2003: 211). It’s thus not even partly in virtue of the fact that Casper could have omitted to do the bad thing that he is blameworthy for doing it. In other words, omitting to do the bad thing (without also doing the good thing) isn’t a robust option, at least not for Casper. The difficulty with this line of argument, as Michael Robinson points out, is that whether it would be reasonable for an agent to consider something as an alternative to the action the agent performs “is a function of what [the] agent ought to find reasonable (or relevant), not what [the] agent in fact finds reasonable (or relevant)” (2014: 441). Interestingly, McKenna agrees. He says
that it would be a misreading of his position to assume that “the scope of reasonableness from an agent-relative perspective is limited to the values and standards endorsed by the agent.” On the contrary, McKenna insists that the scope of what it would be reasonable for the agent to consider is “subject to objective criteria of rationality and truth” (2003: 209). But if so, then it looks like not doing the bad thing satisfies McKenna’s proposed criterion after all, as it would be reasonable for Casper to consider simply not doing the bad thing as an alternative to doing it. That not doing the bad thing is a robust alternative is further evidenced by the reasonable expectations test for robustness proposed in section 3.2. Is there anything Casper could have done such that we could have reasonably expected him to do it instead of what he did? Of course, there is. He could have decided not to do the bad thing and could have refrained from doing it, and we could reasonably have expected him to do that instead of doing the bad thing. Casper couldn’t have done the good thing in Brain Malfunction, but he could have at least not done the bad thing, a fact which arguably helps explain why he is blameworthy for doing the bad thing. To put the point in a different way: Casper had a fair opportunity to avoid doing the bad thing; he had the powers of reflective self-control, and his situation afforded him a reasonable chance to exercise those powers to avoid doing the bad thing, a fact that seems relevant to an explanation of why Casper is blameworthy in Brain Malfunction for doing the bad thing. Brain Malfunction, then, isn’t a counterexample to PAP. Brain Malfunction isn’t the only sort of limited blockage case, though. David Widerker has proposed a variation on the limited blockage strategy. 
In one of Widerker’s stories called Z-Persons, it’s predetermined that Jones will either decide (on his own) at t to break a promise or momentarily lose consciousness at t instead, but it’s not within Jones’s power to bring about this latter alternative. This is because Jones is a Z-person. Z-people, as Widerker describes them, are people who “behave in the following way: when in a deliberative situation in which they are strongly tempted to act immorally, they invariably either succumb to the temptation and make the wrong decision, or they lose consciousness for one second, and regain it thereafter” (2006: 169). It’s thus not up to Jones whether he decides (on his own) at t to break the promise or whether he passes out at t instead. Jones therefore doesn’t have a robust alternative to deciding (on his own) at t to break the promise. Jones, of
course, is unaware of all this, and believes himself to be an ordinary person with ordinary options for action. We are to suppose, moreover, that if Jones does decide at t to break his promise, the decision will be uncaused, and that if he instead momentarily loses consciousness at t, no one will force him thereafter to decide to break the promise. (Indeed, we can suppose that after t, the chance to make the decision simply won’t arise again.) Deciding to break the promise therefore isn’t inevitable for Jones, but, again, because the only alternative possibility (Jones momentarily losing consciousness at t) isn’t one Jones himself can actualize, it seems that it isn’t a robust alternative and is thus irrelevant per se to whether or why Jones is blameworthy for what he does. In the end, Jones decides on his own at t to break his promise, despite being aware that, in doing so, he’s behaving very badly. Widerker claims that, in a case like this, it’s plausible that the agent is blameworthy for how he behaved at t even though it wasn’t an option for him to avoid behaving that way at t. Jones chose at t to do something he knew to be immoral, without taking himself to have any justification or excuse for doing so. His action therefore “expressed a lack of respect for morality on his part” (2006: 182), and, since this expression wasn’t caused by anything outside of Jones (indeed, it wasn’t caused by anything at all), Widerker finds it plausible that Jones is blameworthy for the decision, his lack of robust alternative possibilities notwithstanding. There are two difficulties with Widerker’s attempt to provide a counterexample to PAP. First, it’s far from obvious that the case, as described, is coherent. We are to imagine that the only thing that can happen if Jones retains consciousness at t is that Jones decides then to break his promise.
I can see how that might be if there is some causal process up and running that will deterministically issue in the decision unless it’s preempted by Jones’s loss of consciousness. But absent any such causal process, I see no reason to think that, if Jones retains consciousness at t, he couldn’t simply refrain from deciding at t to break his promise. It’s therefore far from clear that we have here a case in which Jones lacks a robust alternative possibility for action. Second, supposing the case is coherent, why think Jones is culpable for the decision he made at t to break his promise? Once we recognize that Jones had no choice about what happened at t, that he basically just had to wait and see whether he would decide then to break his promise or whether he would simply pass out instead, it’s by no means obvious that he is culpable
for deciding to break his promise. Indeed, it seems to me that he is not blameworthy. It may be true that Jones’s decision “expressed a lack of respect for morality on his part,” and that he made the decision without being caused to do so by anything outside of himself or, indeed, by anything at all. Note, though, that he couldn’t help expressing a lack of respect for morality. Through no fault of his own, he had no option but to behave in a way that expressed moral disrespect. I therefore don’t see how we can use the fact that his behavior expressed a lack of respect for morality as a basis for blaming him, even if his action was completely uncaused. As we have seen with other cases like Willing Exploiter, expressing ill will or a lack of respect for morality isn’t enough on its own to render an agent blameworthy in the relevant sense. Widerker claims that if we deny that Jones is blameworthy for his decision, we will “have to give up certain basic intuitions about moral blameworthiness.” In particular, he says, “we will have to give up the intuitive assumption that” if a person violates an obligation “in the absence of having an adequate justification or excuse, then he is blameworthy for so acting,” an assumption “often viewed as being explicative of the very notion of moral obligation” (2006: 183). The assumption Widerker identifies is indeed an appealing one. Fortunately, we needn’t give it up if we exonerate Jones for deciding to break his promise; for while Jones did violate, without justification, his obligation not to (decide to) break his promise, proponents of PAP will insist that he has an excuse, namely, he had no control over whether he decided to break his promise. To assume that this isn’t a good excuse for his behavior would beg the question against PAP (at least this is true in the absence of any other reason to think Jones is blameworthy).
4.7 Buffers

The basic strategy behind limited blockage cases is to find examples in which an agent’s robust alternatives are somehow blocked off in ways that don’t eliminate the agent’s moral responsibility. A similar strategy underlies what have come to be known as buffer cases. Buffer cases have the following basic structure. As in most Frankfurt cases, there is a neuroscientist waiting to
compel the agent to decide to A if the agent doesn’t decide on his own to A. And, as usual, the agent decides on his own at t to A, so that the neuroscientist never has to intervene. However, it’s stipulated that for the agent to have decided not to A, he would first have had to pause and consider (reasons for) not A-ing. Pausing to do this wouldn’t automatically result in the agent making a different decision, even in an ordinary case in which there is no neuroscientist waiting in the wings. But pausing to consider (reasons for) not A-ing is a necessary condition of the agent making a different decision. It constitutes a mental buffer of sorts between the decision the agent actually makes and any alternative decisions he might have made if he could have made them. To make an alternative decision, the agent would first have had to enter (so to speak) that mental buffer zone. But if he had entered it, by seriously contemplating the reasons not to A, the neuroscientist would have intervened and compelled him to decide to A. David Hunt (2000, 2005) and Derk Pereboom (2000, 2001, 2014) independently developed the buffer cases. Let’s consider Hunt’s (2005: 132–134) case—a variation on Revenge—first. In it, for Jones to avoid deciding to kill Smith, Jones would first have had to seriously consider the possibility of not killing Smith, which he could have done at t. (Indeed, we may suppose that that was the only thing Jones could have done at t other than decide to kill Smith.) As things play out, though, Jones never seriously considers that possibility and decides at t to kill Smith. However, if Jones had considered at t the possibility of not killing Smith, the neuroscientist would have picked up on this via his neural control device and would then have used that device to compel Jones to decide at t+1 to kill Smith. Call this Buffered Revenge.
Hunt, Pereboom, and others claim that buffer cases are counterexamples to PAP. Their claim, as applied to Buffered Revenge, is that Jones is blameworthy for deciding to kill Smith even though he couldn’t have avoided doing so. But the fine-grained analysis of Frankfurt cases suggests a different assessment of the case. It suggests that, because Jones couldn’t have avoided deciding (at some time or other) to kill Smith, he isn’t morally responsible (and thus isn’t blameworthy) for deciding to kill Smith simpliciter. Jones could, however, have avoided deciding on his own to kill Smith, he could have tried harder to avoid deciding to kill Smith, and he could have avoided deciding at t to kill Smith, all by considering then the possibility of not killing Smith. What’s more, these alternative possibilities appear to be robust, for we could reasonably have expected Jones not to decide on his own at t to
kill Smith and to instead consider then the reasons for not killing Smith. So, while Jones isn’t blameworthy for deciding to kill Smith, he is blameworthy for deciding on his own to kill Smith, for deciding at t to kill Smith, and for not trying harder to avoid deciding to kill Smith, and he is blameworthy for these things at least in part because he had a fair opportunity to avoid them.19 Pereboom’s (2000, 2001) buffer case isn’t relevantly different from Buffered Revenge. More recently, however, Pereboom (2014) has proposed the following buffer case that he claims isn’t vulnerable to the “timing objection” just articulated. Tax Cut: Jones can vote for or against a modest tax cut for those in his high-income group by pushing either the “yes” or the “no” button in the voting booth. Once he has entered the voting booth, he has exactly two minutes to vote, and a downward-to-zero ticking timer is prominently displayed. If he does not vote, he will have to pay a fine, substantial enough so that in his situation he is committed with certainty to voting (either for or against), and this is underlain by the fact that the prospect of the fine, together with background conditions, causally determines him to vote (although, to be clear, these factors do not determine how he will vote). Jones has concluded that voting for the tax cut is barely on balance morally wrong, since he believes it would not stimulate the economy appreciably, while adding wealth to the already wealthy without helping the less well off, despite how it has been advertised. He is receptive and reactive to these general sorts of moral reasons: he would vote against a substantially larger tax cut for his income group on account of reasons of this sort, and has actually done so in the past. He spends some time in the voting booth rehearsing the relevant moral and self-interested reasons. 
But what would be required for him to decide to vote against the tax cut is for him to vividly imagine that his boss would find out, whereupon due to her political leanings she would punish him by not promoting him to a better position. In this situation it is causally necessary for his not deciding to vote for the tax cut, and to vote against it instead, that he vividly imagine
her finding out and not being promoted, which can occur to him involuntarily or else voluntarily by his libertarian free will. Jones understands that imagining the punishment scenario will put him in a motivational position to vote against. But so imagining is not causally sufficient for him to decide to vote against the tax cut, for even then he could still, by his libertarian free will, either decide to vote for or against (without the intervener’s device in place). However, a neuroscientist has, unbeknownst to him, implanted a device in his brain, which, were it to sense his vividly imagining the punishment scenario, would stimulate his brain so as to causally determine the decision to vote for the tax cut. Jones’s imagination is not exercised in this way, and he decides to vote in favor while the device remains idle. (Pereboom 2014: 23)

19 Ginet (2002) was the first to suggest the timing response to buffer cases. It has also been defended by Franklin (2011) and Palmer (2011, 2013). In Capes (2016) and Capes (2022), I offer a subtly different response to the buffer cases. I still find that response attractive, but I think the timing objection articulated here is more in keeping with the symmetry argument of chapter 2.
Jones decides “at t1, a few moments before t3,” where “t3 is the last moment that Jones, by his lights, can make a decision to vote prior to the expiration of the two-minute window” (2014: 23). Pereboom claims that, in this case, “Jones is blameworthy for choosing to vote in favor of the tax cut by t3 despite the fact that for this he has no robust alternative possibility” (2014: 23). The proponent of the timing objection will, of course, disagree, insisting that, because Jones couldn’t have avoided deciding to vote for the tax cut by t3, he isn’t blameworthy for doing so, though he is blameworthy for other things, like deciding at t1 to vote for the tax cut and for not doing what he knew he needed to do to make the right decision (viz., consider his boss punishing him). Pereboom anticipates this reply and attempts to forestall it. He grants that Jones has “an alternative to deciding at t1—for example, continuing to deliberate and deciding at t2 instead.” However, he contends that “as the case is set up, Jones has no robust alternative to making his decision by t3,” a fact that, according to Pereboom, is “sufficient for Jones not being blameworthy for making his decision at t1” (2014: 25–26). I agree that “Jones has no robust alternative to making his decision by t3.” This follows from the simple fact that Jones has no alternative, robust or otherwise, to making his decision by t3. But I disagree that this fact is “sufficient for Jones not being blameworthy for making his decision at t1.” Jones is blameworthy for deciding at t1 to vote for the tax cut in part because he knew that he shouldn’t decide at t1 (or at any other time) to vote for it and that he had a fair opportunity to avoid deciding at t1 to vote for it by imagining
the punishment scenario involving his boss. Moreover, this alternative possibility passes the reasonable expectations test for robustness, for, given that “Jones understands that imagining the punishment scenario will put him in a motivational position to vote against” the tax cut, we could reasonably expect Jones not to decide at t1 to vote for the tax cut and to instead imagine at t1 the punishment scenario. These facts, in conjunction with other background assumptions (e.g., that Jones is sane), strike me as sufficient to render Jones blameworthy for deciding at t1 to vote for the tax cut.
4.8 Modified Buffers

If the fine-grained response to Frankfurt cases is correct, then buffer cases like Buffered Revenge and Tax Cut aren’t counterexamples to PAP. The agent in those cases is blameworthy for A-ing (on his own) at t and for not trying harder to avoid A-ing, since he had a fair opportunity to avoid A-ing (on his own) at t and to try harder to avoid A-ing. However, he isn’t blameworthy for A-ing simpliciter, since he couldn’t have avoided A-ing no matter how hard he might have tried. Perhaps, though, the difficulties I’ve raised for buffer cases can be circumvented. Perhaps there is a way to tweak the cases so that the fine-grained analysis no longer applies to them. McKenna (2018) has suggested a modified buffer case that, he claims, fits this bill. The problem with cases like Buffered Revenge and Tax Cut, McKenna thinks, is that the cue for the neuroscientist to intervene is a free action of the agent. (In Buffered Revenge it’s Jones considering not killing Smith, and in Tax Cut it’s Jones imagining his boss’s reaction.) The solution McKenna proposes is to keep the basic structure of buffer cases but make the cue for intervention an involuntary mental occurrence. McKenna outlines the strategy as follows: Outside the context of a Frankfurt-style example, and assuming an indeterministic context amenable to libertarian freedom, it is easy to imagine cases in which an agent’s free act is not causally determined and in which, were she to do otherwise, her doing so would require that some relevant reason or motivational state have arisen, where whether or not this occurs would also be undetermined by way of some non-voluntary
process. In general, when we reason practically or deliberate, there is some luck as to what reasons do or do not come to mind, or what motivations have what strength they do. This can depend on how much coffee one has had or how high one’s hormone levels happen to be at a certain time. There should be nothing problematic about supposing that such a process is indeterministic and that it is fully compatible with exercises of libertarian freedom. I propose that a Pereboom-style prior sign strategy [be] applied to cases such as these. (2018: 3125)
Applying this strategy to Tax Cut, McKenna offers the following variation on that case: Tax Cut 2: Jones considers voting for or against the tax cut, just as he does in Pereboom’s original case, and he [decides to vote] for the tax cut at t for the same reasons he does at time t in Pereboom’s case. Jones’s doing so just then . . . was not causally determined, since at any time antecedent to his doing so, the thought of his boss learning of his vote against might occur to him by way of a non-voluntary process, whereupon (in the absence of an intervener) he might then exercise his libertarian free will to vote against the tax cut. But as things unfold, Jones proceeds to reason and deliberate in the absence of any further motivation-boosting reasons to vote against, considering pro or con reasons weighted as they were just as in Tax Cut. When he decides on his own to vote for the tax cut, nothing interfered with his doing so. . . . Were he to have imagined the boss-scenario, an intervener then would have intervened [forcing him to decide to vote for the tax cut]. (2018: 3126)
In this version of the case, Jones couldn’t have voluntarily avoided deciding at t to vote for the tax cut, since he couldn’t have voluntarily considered the punishment scenario involving his boss. But neither was it inevitable that Jones would decide at t to vote for the tax cut, since a nonvoluntary process might have resulted instead in Jones vividly imagining at t the punishment scenario. Thus, something else could have happened at t. But whether it did isn’t something over which Jones had any control. It therefore wouldn’t have been reasonable to expect Jones not to decide (on his own) at t to vote for the tax cut. It’s therefore quite plausible that Jones didn’t have a robust alternative to deciding (on his own) at t to vote for the tax cut. So, unlike in Tax Cut,
we have in Tax Cut 2 an indeterministic Frankfurt case in which the agent lacked a robust alternative possibility to A-ing and also lacked a robust alternative to A-ing on his own at t. Do we also have a counterexample to PAP? It depends, of course, on whether Jones is blameworthy for deciding (on his own at t) to vote for the tax cut. It’s not obvious that he is, though. Note that Tax Cut 2 is structurally very similar to Widerker’s Z-Persons. In both cases, Jones had no way to avoid behaving (on his own) at t in the way he did. It was causally possible that he not behave that way at t. However, in neither case was that alternative possibility one Jones could actualize. He thus had no control in either case over whether he acted (on his own) at t in the way he did. And, as I mentioned in connection with Z-Persons, when we keep that fact firmly in mind, it seems that Jones isn’t blameworthy for behaving (on his own at t) as he did.20 McKenna claims that “two points count in favor of treating” Jones’s decision in Tax Cut 2 as sufficiently within Jones’s control for him to be morally responsible for it, even by incompatibilist standards for control. First, McKenna points out that Jones’s decision wasn’t “deterministically produced and . . . [flowed] from typical agential resources arising from the agent’s own motivation, process of deliberation and so on.” Second, McKenna claims that, outside the context of a Frankfurt example, it even remains true up until just prior to the agent’s (putatively) free act that she might have done otherwise, consistent with the past and the laws. After all, without any Frankfurt intervener present, the pertinent motivation-boosting event might have occurred, and the agent then might have exercised her libertarian free will to act otherwise. Hence, it is true to say of the agent
just prior to her acting as she did that she was able to act otherwise. (2018: 3125–3126)

20 It has been suggested to me that, in this paragraph, I have begged the question against the Frankfurt-defender. But I see things differently. Frankfurt cases like Tax Cut 2 are supposed to provide us with cases in which the agent lacked a robust alternative but is blameworthy for what he did, nonetheless. But why should we accept the claim that the agent in such cases is blameworthy? It might be said that that claim is just intuitively obvious. But that’s what I deny. I claim that, when we focus carefully on the features of the case as described, it’s not intuitively obvious that the agent is blameworthy, which means that we need further reason to suppose that he is blameworthy. My claim, then, is that the mere assertion that the agent in a case like Tax Cut 2 is blameworthy is not intuitively obvious and thus requires further defense. By itself, that claim doesn’t beg the question; it’s simply to insist that a premise in the argument against PAP needs further support. Note, moreover, that I do go on to argue below for the claim that we should judge Jones not to be blameworthy for his action in Tax Cut 2.
Given all this, there seems to be no reason to doubt that Jones would have been morally responsible for deciding (at t) to vote for the tax cut, had the intervener not been present. But in Frankfurt cases like Tax Cut 2, the intervener does nothing but monitor the agent’s mental states. It’s thus implausible to suppose that the intervener makes a difference to the agent’s moral responsibility. How, we might wonder, could the intervener affect whether Jones is morally responsible for his decision given that the intervener doesn’t intervene in the actual course of events? So, since Jones would have been blameworthy for his decision had the intervener been absent, he should be no less blameworthy for that decision when the intervener is present. Hence, McKenna concludes that Jones is blameworthy in Tax Cut 2 for his decision to vote for the tax cut even though there were no robust alternative possibilities available to him. The first thing to say about this argument is that it takes for granted the thought that because the neuroscientist doesn’t affect the actual sequence of events, his presence is therefore irrelevant to Jones’s moral responsibility. But that, as we saw in our discussion of Frankfurt’s irrelevance argument (§3.1), isn’t something that we can take for granted. Let’s set that point aside, though, for I think there is a second and even deeper problem with McKenna’s argument. The argument relies on the claim that Jones would have been blameworthy for deciding to vote for the tax cut in a version of Tax Cut 2 that doesn’t include the neuroscientist. However, I believe we have reason to reject that claim. Jones, I’ll argue, isn’t blameworthy for his decision whether the intervener is present or not. It’s true that the intervener has no impact on Jones’s moral responsibility, but that’s because Jones wouldn’t have been morally responsible even in a non-Frankfurt version of the story, one in which no intervener is present. Why not? 
Well, because Jones had no control over whether he decided at t to vote for the tax cut or whether he instead began considering the punishment scenario involving his boss. As in Widerker's Z-Persons, he just had to wait and see what happened. And this is true intervener or no intervener. In a version of Tax Cut 2 sans the neuroscientist, Jones could have avoided deciding to vote for the tax cut, if the thought of his boss punishing him had occurred to him; for if that thought had occurred to him at t, then, in the absence of the neuroscientist, it might have led him to decide at t + 1 against voting for the tax cut. But it doesn't follow from this that Jones could have avoided (i.e., had the option to avoid) deciding (on his own) at t to vote for the tax cut, and, in fact, it's not true in that case that he could have avoided deciding (on his own) at t to vote for the cut. Given Jones's psychological makeup—given, in particular, that (a) he could have avoided deciding (on his own) at t to vote for the cut only if the thought of his boss punishing him had occurred to him, and (b) it wasn't up to him whether that thought came to mind—it simply isn't true that he could have avoided deciding at t to vote for the tax cut. Jones's inability to avoid deciding then to vote for the cut is secured by the relevant facts of his psychology. All the neuroscientist does (in the original version of Tax Cut 2) is to ensure that if the pertinent thought does occur to Jones at t, Jones will still end up making the decision to vote for the tax cut a few moments later. But the neuroscientist's presence (in the original version of Tax Cut 2) is irrelevant to whether Jones had the option to avoid deciding (on his own) at t to vote for the tax cut. Once we recognize this, once we see that Jones had no real choice about what happened at t, McKenna's claim that Jones would have been to some extent culpable for his decision in a version of the case in which the neuroscientist is absent loses much of its force.

Of course, Jones might be blameworthy for other things. He might be blameworthy for not trying harder to avoid making such a bad decision. But once we bear in mind that he couldn't have avoided deciding (on his own) at t to vote for the tax cut no matter how hard he might have tried, it's just not obvious that he is morally responsible for deciding (on his own at t) to vote for the tax cut.
McKenna claims that "outside the context of a Frankfurt example, it . . . remains true up until just prior to the agent's (putatively) free act that she might have done otherwise, consistent with the past and the laws." But, as we have just seen, that claim is false (at least if it means that "up until just prior to the agent's (putatively) free act" she had the option [i.e., the ability and opportunity] to do otherwise). What is true about Jones in Tax Cut 2 is that, in the absence of the intervener, Jones would have had some control over whether he decided (after t) to vote for the tax cut, if "the pertinent motivation-boosting event" had occurred at t; for, again, if the motivation-boosting event had occurred at t, then, in the absence of a Frankfurt intervener, Jones would then have been free (after t) to make an alternative decision (after t). However, Jones had no control over whether the relevant motivation-boosting event occurred at t or whether he instead decided then to vote for the tax cut. It simply wasn't up to him which of those two events occurred at t. This is because, unlike in Tax Cut, Jones didn't have the ability to actively bring about that motivation-boosting event at t. And, again, this is true whether the intervener is present or not. All the intervener does is to make sure that Jones doesn't make an alternative decision, if the motivation-boosting event happens to occur. But Jones has the same amount of control at and prior to t over what happens at t whether the intervener is present or not.

We can thus agree with McKenna that the intervener has no impact on Jones's moral responsibility. What's not clear, I claim, is that Jones would have been morally responsible in the absence of the intervener. Indeed, given that Jones had no control over whether he decided (on his own at t) to vote for the tax cut or instead began at t to imagine the punishment scenario, it seems to me that Jones isn't blameworthy for deciding (on his own at t) to vote for the tax cut.

McKenna imagines a critic who insists "that for an action to be directly free in the libertarian [i.e., incompatibilist] sense," it must be true that the "agent's ability to act otherwise, just when she acts directly freely, must not depend upon any other indeterministic events breaking one way rather than another. The indeterminacy must be, in some sense, pure, so that it hangs only on the agent's choosing one way or the other given her precise motivational configuration" (2018: 3126). McKenna rejects this position, and rightly so. It isn't at all plausible. I bring this point up because I want to make clear that the view of McKenna's imaginary critic plays no part in my objection to McKenna's modified buffer case.
My claim isn’t that an agent is morally responsible for what he did at t only if the indeterminacy is “pure, so that it hangs only on the agent’s choosing one way or the other given her precise motivational configuration.” My claim, rather, is that a person is blameworthy for how he behaved only if the person had sufficient control over whether he behaved as he did, which, as I see it, requires that the agent had, at some point, a fair opportunity to avoid behaving as he did. Since Jones, in Tax Cut 2, lacked the requisite ability to refrain from deciding at t to vote for the tax cut, he had no control over whether he decided (on his own at t) to vote for it, and thus arguably isn’t blameworthy for deciding (on his own at t) to vote for it.
Of course, McKenna and other critics of PAP might disagree with this judgment. My point, though, is that it isn't just intuitively obvious that the judgment is mistaken. Once we bear the relevant facts of the case in mind—in particular, once we bear in mind that Jones couldn't have actively brought about the only other alternative possibility (viz., his vividly imagining at t his boss punishing him), and thus basically just had to wait and see whether he would decide at t to vote for the tax cut or instead vividly imagine his boss punishing him—it isn't obvious that he is blameworthy for deciding to vote for the tax cut, and, as I say, it seems to me that he is not blameworthy for that decision. Absent a compelling reason to think that he is blameworthy, we still lack a clear counterexample to PAP.
4.9 What Really Counts

The way the discussion of alleged counterexamples to PAP often proceeds might give the impression that the central issue dividing critics of the principle from those of us who would defend it is whether it's possible for someone to be morally responsible for his behavior without having had the option to behave differently. But, as Frankfurt correctly points out, "The critical issue concerning PAP . . . is not whether it is always possible that an agent who is morally responsible for performing a certain action might have acted differently. Rather, it is whether that possibility—even assuming that it is real—counts for anything in determining whether he is morally responsible for what he did" (2003: 340). Frankfurt insists that it doesn't, and that this can be demonstrated using examples in which the agent could have done otherwise.

Frankfurt (1969) characterized cases like (Original) Revenge as ones in which there are circumstances that make an action unavoidable for its agent but that aren't among the action's causes. In the end, however, Frankfurt (2003) contends that "the usefulness of the examples . . . does not really depend upon supposing that they describe circumstances that actually make an action altogether unavoidable while playing no role in bringing the action about." In Frankfurt's estimation, the examples "effectively undermine the appeal of PAP even if it is true that circumstances that do not bring an action about invariably leave open the possibility that the action might not be performed. What the examples are essentially intended to accomplish," he says, "is to call attention to an important conceptual distinction. They are designed to show that making an action unavoidable is not the same thing as bringing it about that the action is performed." He goes on to claim that "Appreciating this distinction tends to liberate us from the natural but nonetheless erroneous supposition that it is proper to regard people as morally responsible for what they have done only if they could have done otherwise" by making it "easy to see that what really counts [when it comes to moral responsibility for an action] is not whether [the] action was avoidable but in what way it came to be that the action was performed" (2003: 339–340).21

Frankfurt is certainly correct that "making an action unavoidable is not the same thing as bringing it about that the action is performed." It may also be true that cases like Revenge can help us appreciate the distinction.22 However, it's doubtful that "Grasping the distinction makes it easy to see that what really counts is not whether an action was avoidable but in what way it came to be that the action was performed" (2003: 340). One could acknowledge the distinction Frankfurt highlights without accepting his conclusion about "what really counts" when it comes to assessing moral responsibility. Indeed, one might come to the exact opposite conclusion. That is, one might conclude that what matters when it comes to moral responsibility isn't how the agent's action came to be performed but whether the agent could have avoided performing it. The etiology of the action might be relevant on this view, but only insofar as it bears on whether the agent could have avoided performing the action.
A hybrid view is also possible, according to which both the causal history of the action and the option to avoid performing it are independently relevant to whether the agent is morally responsible for the action. Acknowledging the distinction Frankfurt highlights might aid in identifying these competing positions, but by itself it doesn’t help us decide between them.
21 See Sartorio (2017a) for a helpful discussion and further illustration of these points. McKenna (2000: 93) suggests a similar position.
22 It's worth noting that the distinction can be illustrated using mundane examples. The other day some friends invited my wife and me over to their house, and being the convivial sort, we accepted the invitation. The invitation was undoubtedly among the circumstances that brought it about that we visited our friends, but it needn't have rendered our visit unavoidable. Similarly, my belief that ice cream is delicious may have causally contributed to my eating some last night. But it needn't have rendered my act of eating ice cream unavoidable.
The point can be illustrated by considering competing assessments of another famous example of Frankfurt's: The Unwilling Addict. This addict "hates his addiction and always struggles desperately, although to no avail, against its thrust. He tries everything that he thinks might enable him to overcome his desires for the drug. But these desires are too powerful for him to withstand, and invariably, in the end, they conquer him." What makes him an unwilling addict is that he "has conflicting first-order desires: he wants to take the drug, and he also wants to refrain from taking it . . . [but] it is the latter desire that he wants to be effective" (1971: 12). In this respect, he is importantly different from the willing addict, who isn't conflicted in this way.

Frankfurt's position is that the unwilling addict isn't morally responsible for taking the drug, and, depending on how the details of the case are spelled out, I would agree.23 But why is it that this addict isn't morally responsible for taking the drug? What's his excuse supposed to be? As you would expect, Frankfurt's answer has to do with facts about how the action came to be—specifically with the fact that the addict's behavior is caused by a desire (the desire to use) that the addict doesn't identify with or endorse and that therefore doesn't express the addict's true self.24 But you needn't agree with Frankfurt's specific suggestion about why the addict is off the hook to accept the more general idea that it has something to do with the etiology of the addict's behavior and nothing at all to do with the fact that he couldn't have done otherwise.

Defenders of PAP such as myself will, of course, disagree, rejecting both Frankfurt's specific suggestion about why the addict is off the hook, as well as the more general causal history approach of which it's an instance. We contend that the addict isn't to blame for using the drug and that this is so in part because he couldn't have avoided using it.
We have here two opposing explanations for why the unwilling addict isn’t morally responsible for using the drug. Bearing them in mind, consider the following questions. Does acknowledging the distinction between “making an action unavoidable” and “bringing it about that the action is performed” favor one of these explanations over the other? Does it make Frankfurt’s causal history explanation more appealing than the explanation suggested
23 E.g., we would need to assume that the addict couldn't have avoided, and isn't responsible for, his addiction.
24 See Frankfurt (1971) for Frankfurt's original position and Frankfurt (1988) for developments.
by PAP? Does it undermine the intuitive appeal of the claim that the addict is off the hook in part because he couldn't have done otherwise? Does it make it "easy to see that what really counts is not whether an action was avoidable but in what way it came to be that the action was performed"? The answer to all these questions, I think, is no. Indeed, bearing in mind the distinction between "making an action unavoidable" and "bringing it about that the action is performed," I find the explanation of our judgment about The Unwilling Addict suggested by PAP to be initially much more plausible than the explanation Frankfurt puts forward.

To see why, consider a second addict. He too "hates his addiction and always struggles desperately . . . against its thrust," and he too is an unwilling addict insofar as he wants his desire to refrain from taking the drug to be effective, though it isn't. However, unlike the first unwilling addict, the desire that leads this second addict to use isn't irresistible, just very difficult to resist. In this respect, he is more like real-life addicts, who often have the option to not use drugs.25

Note that the causal history of both addicts' actions (how those actions came to be) is quite similar, the main difference being the strength of the desire that motivated them. Whereas the one addict got high because of an irresistible desire to do so, the other got high because of a very strong but resistible desire to do so. So, if, as Frankfurt claims, all that really matters for moral responsibility is how an agent's action came to be performed, one might have expected that this second addict would also be off the hook for using the drug. Indeed, Frankfurt's specific suggestion about why the first unwilling addict is excused has just this implication.
The second addict didn’t identify with or endorse the desire that led him to act, and his action didn’t express his true self any more than the first unwilling addict’s action did. So, if that’s enough to get the first unwilling addict off the hook, it should be enough to get the second addict off the hook as well. Arguably, though, it isn’t. Both addicts knew (we may suppose) that they shouldn’t use the drug, but whereas the first addict couldn’t have avoided using it on the occasion in question, the second could have. In the absence of any additional exculpating information, I therefore see no reason to completely absolve the second addict of responsibility for what he did.
25 See Pickard (2015) for a discussion of addiction and the power to do otherwise.
True, it would have been extremely difficult for this second addict to resist the temptation to use. However, the fact that it would have been very difficult to avoid doing the wrong thing typically doesn't fully absolve an agent of responsibility for doing it, though it may be a mitigating factor, a consideration that reduces the amount of blame a person deserves without rendering the person completely blameless. So, even if we regard as a mitigating factor the fact that it would have been extremely difficult for this second addict to resist the temptation to use, I still see no reason to deny that he deserves at least some blame for taking the drug, given that he knew taking the drug was wrong and had it within his power at the time to avoid taking it.

So, we have two unwilling addicts. One isn't to blame for using, while the other arguably is. But why is that? Why is it that the original unwilling addict isn't morally responsible for getting high, whereas the second is? PAP again supplies us with an intuitively satisfying answer: the second addict could have refrained from getting high, whereas the first couldn't have. Note, moreover, that, at least at first glance, it's not at all clear how the general causal history approach that Frankfurt favors might answer the question, given the similar causal histories of the two actions.

This isn't to say that proponents of the causal history approach have nothing to say about the matter. They definitely do.26 It's just to point out that, contrary to what Frankfurt claims, acknowledging the fact that "making an action unavoidable is not the same thing as bringing it about that the action is performed" doesn't make it "easy to see" that "what really counts" when it comes to determining whether a person is morally responsible for what he has done is how the person's action came to be and not whether the person could have done otherwise, nor does it render PAP any less intuitively appealing.
The principle remains as attractive as ever even after we have acknowledged the distinction to which Frankfurt draws our attention.
26 See, in particular, Fischer and Ravizza (1998), McKenna (2013), and Sartorio (2016a: ch. 4).
5 Confirmation Not Counterexample

It's time to take stock. According to the principle of alternative possibilities (PAP), a person is morally responsible for what he did only if he could have avoided doing it. At first blush, this principle seems quite plausible, as it provides a principled explanation of the exculpatory force of certain widely recognized excuses. However, critics of the principle insist that its initial appeal is ultimately a result of hasty generalization. Daniel Dennett, for example, claims that PAP "is initially motivated by little more than inattentive extrapolation from familiar cases" (1984: 553). Here Dennett echoes Frankfurt (1969), who claims that the initial appeal of the principle is due to the fact that, when seeking illustrations of it, we tend to focus on too narrow a range of cases. Attention to a wider variety of examples, these critics contend, reveals clear and compelling counterexamples to the principle.

As we have seen, however, matters aren't as clear-cut as they initially seem to be. Some of the alleged counterexamples to PAP do pose a prima facie challenge to the principle, insofar as there is some intuitive pull to the claim that the agent featured in those examples is morally responsible for doing something he couldn't have avoided doing. However, upon further examination, we have reason to be suspicious of, and ultimately to reject, that intuition. For one thing, the intuition that the agent is morally responsible for what he has done even though he couldn't have avoided doing it isn't unambiguous. To see what I mean, consider Revenge one last time. The extent to which we have the intuition that Jones is blameworthy in that case for deciding to kill Smith depends, at least for many of us, on which features of the story we focus on.
When we concentrate on the features of the case highlighted by critics of PAP (e.g., the fact that the neuroscientist and his neural control device play no role in the production of Jones's decision), there is indeed some intuitive pull to the claim that Jones is blameworthy (and thus morally responsible) for deciding to kill Smith even though he couldn't have avoided
doing so. When we attend to other features of the story, however, matters are much less clear. Consider, in particular, the fact that there was a neural control device in Jones's head that inevitably would have compelled him to decide to kill Smith had he not decided on his own to kill Smith. The presence of this device makes it so that Jones couldn't have avoided deciding to kill Smith no matter how hard he might have tried to do so. One way or another, then, Jones was going to decide to kill Smith. The only question to be settled by Jones was whether he would make the decision on his own or whether the device would force him to make it.

Now, when I focus on those features of the case, it's not at all clear to me that Jones is blameworthy for deciding to kill Smith. Indeed, it seems to me that while Jones is blameworthy for deciding on his own to kill Smith, for not trying harder to avoid deciding to kill Smith, and perhaps too for deciding at t to kill Smith, he isn't blameworthy for deciding to kill Smith. Thus, while it seems clear that Jones is blameworthy for something in Revenge, my intuitions about whether he is blameworthy specifically for deciding to kill Smith are somewhat ambivalent.1,2

Of course, not everyone will share my intuitive ambivalence about the case. There are those to whom it seems rather obvious that Jones is blameworthy in Revenge for deciding to kill Smith even though he couldn't have avoided doing so. Still, the intuition described in the preceding paragraph in favor of the opposite judgment isn't idiosyncratic. If it were, the claim that
1 Or consider Revenge 2 (see §1.3). In that case, Jones could have avoided deciding to kill Smith (i.e., he had an ability and an opportunity to avoid deciding to kill Smith). However, the only way he could have avoided doing so is by refusing to comply with a terrible threat that we couldn't reasonably have expected him not to comply with. Given this, it's hardly obvious that he is blameworthy for deciding to kill Smith, though, again, he arguably is blameworthy for deciding on his own to kill Smith, for not trying harder to avoid deciding to kill Smith, and for deciding at t to kill Smith, as we could have reasonably expected him to avoid those things.
2 Michael McKenna has suggested to me that this is just an expression of my intuition that PAP is true. I'm not sure that's quite right, though. PAP is a general principle, but I haven't here expressed an intuition about anything general. What I've done, rather, is express a concrete judgment about a particular case in light of key features of that case. In any event, even if the passage in question is just an expression of my intuition that PAP is true, the basic point still stands. For many of us, our intuitions about whether Jones is blameworthy for deciding to kill Smith are ambiguous. There is some intuitive pull to the claim that Jones is blameworthy for so deciding and also some intuitive pull to the claim that he isn't. So, in the absence of any reason to prefer one of these intuitions to the other, we lack any firm basis for making a judgment about whether Jones is blameworthy for deciding to kill Smith.
Frankfurt cases are counterexamples to PAP would presumably be much less controversial than it is.

It seems, then, that we have varying intuitions about whether someone in Jones's position could be blameworthy for doing something he couldn't have avoided doing. Consequently, simply appealing to our intuitions about such cases isn't going to be enough to settle the question of whether a person can be blameworthy for behavior that was unavoidable for him. If that question can be settled, doing so will require further reflection and argument.

One of my central aims in this book has been to develop such an argument. To this end, I turned my attention to cases like Sharks that involve omissions (or inaction more generally). The agent in such cases decides not to A, doesn't try to A, and doesn't A, though he could have decided instead to A and could have tried to A. However, he couldn't have A-ed even if he had tried. Consequently, while he may be blameworthy for deciding not to A, not deciding to A, and not trying to A, he isn't blameworthy for not A-ing. The scope of the agent's moral responsibility in cases like this is thus determined in part by what the agent could and couldn't have done.

This conclusion bears on our evaluation of cases like Revenge, for there is a strong presumption in favor of a symmetrical view of moral responsibility, one according to which the determinants of moral responsibility are the same regardless of whether it's moral responsibility for omissions or moral responsibility for actions that's at issue, and we have seen no compelling reason to abandon that presumption. So, given that moral responsibility in omission cases like Sharks is determined in part by what the agent could and couldn't have done, we should conclude that moral responsibility in action cases like Revenge is likewise determined in part by what the agent could and couldn't have done.
The agent in such cases performs an action A on his own at t, though he could have avoided A-ing on his own, could have tried harder not to A, and, in some instances, could have avoided A-ing at t. However, he couldn't have avoided A-ing even if he had tried his best to avoid A-ing. Consequently, while he may be blameworthy for A-ing on his own, for not trying harder not to A, and for A-ing at t, he isn't blameworthy for A-ing.

This doesn't mean, though, that the agent in such cases deserves less blame than he would have deserved had he been blameworthy for A-ing. He may deserve just as much blame as he would have deserved had he been blameworthy for A-ing. It's just that he doesn't deserve blame for quite the same things he would have deserved blame for had he been blameworthy for
A-ing. This fact, together with the fact that there are various, closely related things for which an agent in cases like Revenge might be morally responsible, perhaps explains why it initially seems so plausible to suppose that the agent in such cases deserves blame for A-ing even though he couldn't have avoided A-ing. We are perhaps conflating a true judgment about how much blame the agent deserves with a false judgment about which items the agent deserves blame for.

What, then, are we to conclude about the Frankfurt cases? The first and most obvious conclusion is that they aren't counterexamples to PAP. To get a counterexample to the principle, we would need a case in which a person is morally responsible for what he did but not even in part because he had a fair opportunity to do otherwise, and, as we have seen, Frankfurt cases don't fit that bill.

But there is, I think, a second, somewhat less obvious conclusion to be drawn. Peter van Inwagen says that the Frankfurt cases "are of the first importance for an understanding of the relationship between free will [i.e., the freedom to do otherwise] and moral responsibility" (1997: 375). Few, I suspect, would disagree with this sentiment, given the central role those cases play in various arguments against PAP. However, if the fine-grained analysis of Frankfurt cases advanced in this book is correct, such cases are important for an additional reason, namely, they tend to confirm rather than disconfirm that principle.

Suppose the fine-grained analysis of Frankfurt cases and my defense of it are correct. If so, then the agent's moral responsibility in those cases is determined in precisely the way we would expect it to be if PAP were true. Moreover, I think we would be hard pressed to explain why the agent's moral responsibility is determined in that way without appealing to the likes of PAP.
That principle, or one like it, would seem to be the most obvious and straightforward explanation for why the agent’s moral responsibility in the sorts of cases we have considered is determined in the way it is. So, if the fine-grained analysis of Frankfurt cases and my defense of it are correct, it provides us with some (defeasible) reason to think PAP is true. Thus, whatever we may conclude about PAP in the end, it can hardly be regarded as an “inattentive extrapolation from familiar cases.” The principle can indeed be extrapolated from familiar cases, but the extrapolation of the principle from those cases is well supported by further considerations. By contrast, many of the central arguments against PAP, and against
extrapolating it from the sorts of cases thought to support the principle, are based primarily on some debatable judgments about fairly recherché thought-experiments, the metaphysical coherence of which is sometimes questionable at best. On the whole, then, when we test the principle using the method of cases, it emerges from the process looking pretty good.
References

Adams, Robert Merrihew. 1985. “Involuntary Sins.” Philosophical Review 94: 3–31.
Alvarez, Maria. 2009. “Actions, Thought-Experiments and the ‘Principle of Alternate Possibilities.’” Australasian Journal of Philosophy 87: 61–81.
Bernstein, Sara. 2015a. “The Metaphysics of Omissions.” Philosophy Compass 10: 208–218.
Bernstein, Sara. 2015b. “A Closer Look at Trumping.” Acta Analytica 30: 41–57.
Blumenfeld, David. 1971. “The Principle of Alternate Possibilities.” Journal of Philosophy 68: 339–344.
Brink, David O., and Dana K. Nelkin. 2013. “Fairness and the Architecture of Responsibility.” Oxford Studies in Agency and Responsibility 1: 284–313.
Cain, James. 2014. “A Frankfurt Example to End All Frankfurt Examples.” Philosophia 42: 83–93.
Capes, Justin. 2012. “Action, Responsibility, and the Ability to Do Otherwise.” Philosophical Studies 158: 1–15.
Capes, Justin. 2014. “The Flicker of Freedom: A Reply to Stump.” Journal of Ethics 18: 427–435.
Capes, Justin. 2016. “Blameworthiness and Buffered Alternatives.” American Philosophical Quarterly 53: 270–280.
Capes, Justin. 2019. “Strict Moral Liability.” Social Philosophy and Policy 36: 52–71.
Capes, Justin. 2022. “Against (Modified) Buffer Cases.” Philosophical Studies 179: 711–723.
Capes, Justin, and Philip Swenson. 2017. “Frankfurt Cases: The Fine-Grained Response Revisited.” Philosophical Studies 174: 967–981.
Clarke, Randolph. 1994. “Ability and Responsibility for Omissions.” Philosophical Studies 73: 195–208.
Clarke, Randolph. 2011. “Omissions, Responsibility, and Symmetry.” Philosophy and Phenomenological Research 82: 594–624.
Clarke, Randolph. 2014. Omissions: Agency, Metaphysics, and Responsibility. New York: Oxford University Press.
Cyr, Taylor. 2021. “Semicompatibilism and Moral Responsibility for Actions and Omissions: In Defense of Symmetrical Requirements.” Australasian Journal of Philosophy 99: 349–363.
Cyr, Taylor. 2022. “The Robustness Requirement on Alternative Possibilities.” Journal of Ethics 26: 481–499.
Dennett, Daniel C. 1984. “I Could Not Have Done Otherwise—So What?” Journal of Philosophy 81: 553–565.
Ekstrom, Laura Waddell. 2002. “Libertarianism and Frankfurt-Style Cases.” In The Oxford Handbook of Free Will, ed. Robert Kane, 309–322. New York: Oxford University Press.
Fara, Michael. 2008. “Masked Abilities and Compatibilism.” Mind 117: 843–865.
Fischer, John Martin. 1986. “Responsibility and Failure.” Proceedings of the Aristotelian Society 86: 251–270.
Fischer, John Martin. 1994. The Metaphysics of Free Will: An Essay on Control. Cambridge, MA: Blackwell.
Fischer, John Martin. 1999. “Recent Work on Moral Responsibility.” Ethics 110: 93–139.
Fischer, John Martin. 2006. My Way: Essays on Moral Responsibility. New York: Oxford University Press.
Fischer, John Martin. 2010. “The Frankfurt Cases: The Moral of the Stories.” Philosophical Review 119: 315–336.
Fischer, John Martin. 2017. “Responsibility and Omissions.” In The Ethics and Law of Omissions, eds. Dana Nelkin and Samuel Rickless, 148–162. New York: Oxford University Press.
Fischer, John Martin. 2022. “The Frankfurt-Style Cases: Extinguishing the Flickers of Freedom.” Inquiry: An Interdisciplinary Journal of Philosophy 65: 1185–1209.
Fischer, John Martin, and Mark Ravizza. 1991. “Responsibility and Inevitability.” Ethics 101: 258–278.
Fischer, John Martin, and Mark Ravizza. 1998. Responsibility and Control: A Theory of Moral Responsibility. New York: Cambridge University Press.
Fischer, John Martin, and Patrick Todd, eds. 2015. Freedom, Fatalism, and Foreknowledge. New York: Oxford University Press.
Frankfurt, Harry. 1969. “Alternate Possibilities and Moral Responsibility.” Journal of Philosophy 66: 829–839.
Frankfurt, Harry. 1971. “Freedom of the Will and the Concept of a Person.” Journal of Philosophy 68: 5–20.
Frankfurt, Harry. 1988. The Importance of What We Care About. New York: Cambridge University Press.
Frankfurt, Harry. 1994. “An Alleged Asymmetry between Actions and Omissions.” Ethics 104: 620–623.
Frankfurt, Harry. 2003. “Some Thoughts concerning PAP.” In Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities, eds. David Widerker and Michael McKenna, 339–348. Aldershot, UK: Ashgate.
Franklin, Christopher E. 2011. “Neo-Frankfurtians and Buffer Cases: The New Challenge to the Principle of Alternative Possibilities.” Philosophical Studies 152: 189–207.
Franklin, Christopher E. 2013. “A Theory of the Normative Force of Pleas.” Philosophical Studies 163: 479–502.
Franklin, Christopher E. 2018. A Minimal Libertarianism: Free Will and the Promise of Reduction. New York: Oxford University Press.
Funkhouser, Eric. 2009. “Frankfurt Cases and Overdetermination.” Canadian Journal of Philosophy 39: 341–369.
Gettier, Edmund L. 1963. “Is Justified True Belief Knowledge?” Analysis 23: 121–123.
Ginet, Carl. 1996. “In Defense of the Principle of Alternative Possibilities: Why I Don’t Find Frankfurt’s Argument Convincing.” Philosophical Perspectives 10: 403–417.
Ginet, Carl. 2000. “The Epistemic Requirements for Moral Responsibility.” Philosophical Perspectives 14: 267–277.
Ginet, Carl. 2002. “Review of Living without Free Will.” Journal of Ethics 6: 305–309.
Ginet, Carl. 2003. “In Defense of the Principle of Alternative Possibilities: Why I Don’t Find Frankfurt’s Argument Convincing.” In Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities, eds. David Widerker and Michael McKenna, 75–90. Aldershot, UK: Ashgate.
Ginet, Carl, and David Palmer. 2010. “On Mele and Robb’s Indeterministic Frankfurt-Style Case.” Philosophy and Phenomenological Research 80: 440–446.
Glannon, Walter. 1995. “Responsibility and the Principle of Possible Action.” Journal of Philosophy 92: 261–274.
Goetz, Stewart. 2002. “Alternative Frankfurt-Style Counterexamples to the Principle of Alternative Possibilities.” Pacific Philosophical Quarterly 83: 131–147.
Goetz, Stewart. 2005. “Frankfurt-Style Counterexamples and Begging the Question.” Midwest Studies in Philosophy 29: 83–105.
Haji, Ishtiyaque. 1992. “A Riddle Regarding Omissions.” Canadian Journal of Philosophy 22: 485–502.
Herbert, Frank. 1965. Dune. New York: Ace Books.
Hitchcock, Christopher. 2011. “Trumping and Contrastive Causation.” Synthese 181: 227–240.
Hunt, David P. 1996. “Frankfurt Counterexamples: Some Comments on the Widerker-Fischer Debate.” Faith and Philosophy 13: 395–401.
Hunt, David P. 1999. “On Augustine’s Way Out.” Faith and Philosophy 16: 1–26.
Hunt, David P. 2000. “Moral Responsibility and Unavoidable Action.” Philosophical Studies 97: 195–227.
Hunt, David P. 2002. “On a Theological Counterexample to the Principle of Alternative Possibilities.” Faith and Philosophy 19: 245–255.
Hunt, David P. 2003. “Freedom, Foreknowledge, and Frankfurt.” In Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities, eds. David Widerker and Michael McKenna, 159–183. Aldershot, UK: Ashgate.
Hunt, David P. 2005. “Moral Responsibility and Buffered Alternatives.” Midwest Studies in Philosophy 29: 126–145.
Hunt, David P., and Seth Shabo. 2013. “Frankfurt Cases and the (In)Significance of Timing.” Philosophical Studies 164: 599–622.
Kane, Robert. 1985. Free Will and Values. Albany: State University of New York Press.
Kane, Robert. 1996. The Significance of Free Will. New York: Oxford University Press.
Kane, Robert. 2003. “Responsibility, Indeterminism and Frankfurt-Style Cases: A Reply to Mele and Robb.” In Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities, eds. David Widerker and Michael McKenna, 91–106. Aldershot, UK: Ashgate.
Kearns, Stephen. 2011. “Responsibility for Necessities.” Philosophical Studies 155: 307–324.
Khoury, Andrew. 2018. “The Objects of Moral Responsibility.” Philosophical Studies 175: 1357–1381.
Larvor, Brendan. 2010. “Frankfurt Counter-Example Defused.” Analysis 70: 506–508.
Lehrer, Keith. 1968. “Cans without Ifs.” Analysis 29: 29–32.
Leon, Felipe, and Neal A. Tognazzini. 2010. “Why Frankfurt-Examples Don’t Need to Succeed to Succeed.” Philosophy and Phenomenological Research 80: 551–565.
List, Christian. 2019. Why Free Will Is Real. Cambridge, MA: Harvard University Press.
McCormick, Kelly. 2017. “A Dilemma for Morally Responsible Time Travelers.” Philosophical Studies 174: 379–389.
McIntyre, Alison G. 1994. “Compatibilists Could Have Done Otherwise: Responsibility and Negative Agency.” Philosophical Review 103: 453–488.
McKenna, Michael. 2000. “Assessing Reasons-Responsive Compatibilism: Fischer and Ravizza’s Responsibility and Control.” International Journal of Philosophical Studies 8 (1): 89–114.
McKenna, Michael. 2003. “Robustness, Control, and the Demand for Morally Significant Alternatives: Frankfurt Examples with Oodles and Oodles of Alternatives.” In Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities, eds. David Widerker and Michael McKenna, 201–218. Aldershot, UK: Ashgate.
McKenna, Michael. 2005. “Where Strawson and Frankfurt Meet.” Midwest Studies in Philosophy 29: 163–180.
McKenna, Michael. 2008. “Frankfurt’s Argument against the Principle of Alternative Possibilities: Looking beyond the Examples.” Noûs 42: 770–793.
McKenna, Michael. 2013. “Reasons-Responsiveness, Agents, and Mechanisms.” In Oxford Studies in Agency and Responsibility, Vol. 1, ed. David Shoemaker, 151–184. New York: Oxford University Press.
McKenna, Michael. 2018. “A Critical Assessment of Pereboom’s Frankfurt-Style Example.” Philosophical Studies 175: 3117–3129.
Mele, Alfred. 2003. “Agents’ Abilities.” Noûs 37: 447–470.
Mele, Alfred. 2006. Free Will and Luck. New York: Oxford University Press.
Mele, Alfred. 2019. Manipulated Agents: A Window to Moral Responsibility. New York: Oxford University Press.
Mele, Alfred, and David Robb. 1998. “Rescuing Frankfurt-Style Cases.” Philosophical Review 107: 97–112.
Mele, Alfred, and David Robb. 2003. “BBs, Magnets and Seesaws: The Metaphysics of Frankfurt-Style Cases.” In Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities, eds. David Widerker and Michael McKenna, 127–138. Aldershot, UK: Ashgate.
Metz, Joseph. 2020. “Keeping It Simple: Rethinking Abilities and Moral Responsibility.” Pacific Philosophical Quarterly 101: 651–668.
Naylor, Margery Bedford. 1984. “Frankfurt on the Principle of Alternate Possibilities.” Philosophical Studies 46: 249–258.
Nelkin, Dana Kay. 2008. “Responsibility and Rational Abilities: Defending an Asymmetrical View.” Pacific Philosophical Quarterly 89: 497–515.
Nelkin, Dana Kay. 2011. Making Sense of Freedom and Responsibility. New York: Oxford University Press.
O’Connor, Timothy. 2000. Persons and Causes. New York: Oxford University Press.
Otsuka, Michael. 1998. “Incompatibilism and the Avoidability of Blame.” Ethics 108: 685–701.
Palmer, David. 2011. “Pereboom on the Frankfurt Cases.” Philosophical Studies 153: 261–272.
Palmer, David. 2013. “The Timing Objection to the Frankfurt Cases.” Erkenntnis 78: 1011–1023.
Palmer, David. 2014. “Deterministic Frankfurt Cases.” Synthese 191: 3847–3864.
Pereboom, Derk. 2000. “Alternate Possibilities and Causal Histories.” Philosophical Perspectives 14: 119–138.
Pereboom, Derk. 2001. Living without Free Will. Cambridge: Cambridge University Press.
Pereboom, Derk. 2014. Free Will, Agency, and Meaning in Life. New York: Oxford University Press.
Pereboom, Derk. 2021. Wrongdoing and the Moral Emotions. New York: Oxford University Press.
Pickard, Hanna. 2015. “Psychopathology and the Ability to Do Otherwise.” Philosophy and Phenomenological Research 90: 135–163.
Pike, Nelson. 1965. “Divine Omniscience and Voluntary Action.” Philosophical Review 74: 27–46.
Robb, David. 2020. “Moral Responsibility and the Principle of Alternative Possibilities.” In The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), ed. E. N. Zalta, https://plato.stanford.edu/archives/fall2020/entries/alternative-possibilities/.
Robinson, Michael. 2012. “Modified Frankfurt-Type Examples and the Flickers of Freedom.” Philosophical Studies 157: 177–194.
Robinson, Michael. 2014. “The Limits of Limited Blockage Cases.” Philosophical Studies 169: 429–446.
Robinson, Michael. 2019. “Robust Flickers of Freedom.” Social Philosophy and Policy 36: 211–233.
Sartorio, Carolina. 2005. “A New Asymmetry between Actions and Omissions.” Noûs 39: 460–482.
Sartorio, Carolina. 2016a. Causation and Free Will. New York: Oxford University Press.
Sartorio, Carolina. 2016b. “PAP-Style Cases.” Journal of Philosophy 113: 533–549.
Sartorio, Carolina. 2017a. “Frankfurt-Style Examples.” In The Routledge Companion to Free Will, eds. Kevin Timpe, Meghan Griffith, and Neil Levy, 179–190. New York: Routledge.
Sartorio, Carolina. 2017b. “The Puzzle(s) of Frankfurt-Style Omission Cases.” In The Ethics and Law of Omissions, eds. Dana Nelkin and Samuel Rickless, 133–147. New York: Oxford University Press.
Sartorio, Carolina. 2019. “Flickers of Freedom and Moral Luck.” Midwest Studies in Philosophy 43: 93–105.
Schaffer, Jonathan. 2000. “Trumping Preemption.” Journal of Philosophy 97: 165–181.
Shabo, Seth. 2011. “Agency without Avoidability: Defusing a New Threat to Frankfurt’s Counterexample Strategy.” Canadian Journal of Philosophy 41: 505–522.
Shabo, Seth. 2016. “Robustness Revisited: Frankfurt Cases and the Right Kind of Power to Do Otherwise.” Acta Analytica 31: 89–106.
Shoemaker, David. 2015. Responsibility from the Margins. New York: Oxford University Press.
Silver, Kenneth. 2018. “Omissions as Events and Actions.” Journal of the American Philosophical Association 4: 33–48.
Speak, Daniel. 2002. “Fanning the Flickers of Freedom.” American Philosophical Quarterly 39: 91–105.
Spencer, Joshua. 2013. “What Time Travelers Cannot Not Do (But Are Responsible for Anyway).” Philosophical Studies 166: 149–162.
Sripada, Chandra. 2017. “Frankfurt’s Unwilling and Willing Addicts.” Mind 126: 781–815.
Steward, Helen. 2008. “Moral Responsibility and the Irrelevance of Physics: Fischer’s Semi-Compatibilism vs. Anti-Fundamentalism.” Journal of Ethics 12: 129–145.
Steward, Helen. 2009. “Fairness, Agency and the Flicker of Freedom.” Noûs 43: 64–93.
Steward, Helen. 2012a. A Metaphysics for Freedom. New York: Oxford University Press.
Steward, Helen. 2012b. “The Metaphysical Presuppositions of Moral Responsibility.” Journal of Ethics 16: 241–271.
Stockdale, Bradford. 2022. “Moral Responsibility, Alternative Possibilities, and Acting on One’s Own.” Journal of Ethics 26: 27–40.
Strawson, Peter. 1962. “Freedom and Resentment.” Proceedings of the British Academy 48: 1–25.
Stump, Eleonore. 1996. “Libertarian Freedom and the Principle of Alternative Possibilities.” In Faith, Freedom, and Rationality, eds. Daniel Howard-Snyder and Jeff Jordan, 73–88. Totowa, NJ: Rowman and Littlefield.
Stump, Eleonore. 1999. “Moral Responsibility and Alternative Possibilities: The Flicker of Freedom.” Journal of Ethics 3: 299–324.
Swenson, Philip. 2015. “A Challenge for Frankfurt-Style Compatibilists.” Philosophical Studies 172: 1279–1285.
Swenson, Philip. 2016a. “The Frankfurt Cases and Responsibility for Omissions.” Philosophical Quarterly 66: 579–595.
Swenson, Philip. 2016b. “Ability, Foreknowledge, and Explanatory Dependence.” Australasian Journal of Philosophy 94: 658–671.
Swenson, Philip. 2019. “Luckily, We Are Only Responsible for What We Could Have Avoided.” Midwest Studies in Philosophy 43: 106–118.
Timpe, Kevin. 2003. “Trumping Frankfurt: Why the Kane-Widerker Objection Is Irrelevant.” Philosophia Christi 5: 485–499.
Todd, Patrick. Forthcoming. “Foreknowledge Requires Determinism.” Philosophy and Phenomenological Research.
van Inwagen, Peter. 1978. “Ability and Responsibility.” Philosophical Review 87: 201–224.
van Inwagen, Peter. 1983. An Essay on Free Will. Oxford: Clarendon Press.
van Inwagen, Peter. 1997. “Fischer on Moral Responsibility.” Philosophical Quarterly 47: 373–381.
Vargas, Manuel. 2013. Building Better Beings: A Theory of Moral Responsibility. Oxford: Oxford University Press.
Vihvelin, Kadri. 2013. Causes, Laws, and Free Will: Why Determinism Doesn’t Matter. New York: Oxford University Press.
Wallace, R. Jay. 1994. Responsibility and the Moral Sentiments. Cambridge, MA: Harvard University Press.
Watson, Gary. 1996. “Two Faces of Responsibility.” Philosophical Topics 24: 227–248.
Whittle, Ann. 2018. “Responsibility in Context.” Erkenntnis 83: 163–183.
Widerker, David. 1995. “Libertarianism and Frankfurt’s Attack on the Principle of Alternative Possibilities.” Philosophical Review 104: 247–261.
Widerker, David. 2000. “Frankfurt’s Attack on Alternative Possibilities: A Further Look.” Philosophical Perspectives 14: 181–201.
Widerker, David. 2003. “Blameworthiness and Frankfurt’s Argument against the Principle of Alternative Possibilities.” In Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities, eds. David Widerker and Michael McKenna, 53–74. Aldershot, UK: Ashgate.
Widerker, David. 2006. “Libertarianism and the Philosophical Significance of Frankfurt Scenarios.” Journal of Philosophy 103: 163–187.
Widerker, David, and Michael McKenna, eds. 2003. Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities. Aldershot, UK: Ashgate.
Wolf, Susan. 1987. “Sanity and the Metaphysics of Responsibility.” In Responsibility, Character, and the Emotions: New Essays in Moral Psychology, ed. Ferdinand David Schoeman, 46–62. New York: Cambridge University Press.
Wolf, Susan. 1990. Freedom within Reason. New York: Oxford University Press.
Wyma, Keith. 1997. “Moral Responsibility and Leeway for Action.” American Philosophical Quarterly 34: 57–70.
Zagzebski, Linda Trinkaus. 1991. The Dilemma of Freedom and Foreknowledge. New York: Oxford University Press.
Zagzebski, Linda Trinkaus. 2000. “Does Libertarian Freedom Require Alternative Possibilities?” Philosophical Perspectives 14: 231–248.
Zimmerman, David. 1994. “Acts, Omissions and Semi-Compatibilism.” Philosophical Studies 73: 209–223.
Zimmerman, Michael J. 1988. An Essay on Moral Responsibility. Totowa, NJ: Rowman and Littlefield.
Zimmerman, Michael J. 2002. “Taking Luck Seriously.” Journal of Philosophy 99: 553–576.
Zimmerman, Michael J. 2015. “Varieties of Moral Responsibility.” In The Nature of Moral Responsibility: New Essays, eds. Randolph Clarke, Michael McKenna, and Angela M. Smith, 45–64. New York: Oxford University Press.
Index

For the benefit of digital users, indexed terms that span two pages (e.g., 52–53) may, on occasion, appear on only one of those pages.

ability, 6–7, 16–17, 19, 34, 86, 125–26, 133n.1
ability to do otherwise, 6, 125–26. See also freedom to do otherwise and opportunity to do otherwise
actual sequence, 16–17, 61, 67–68, 71, 105, 106–7, 124
Adams, Robert Merrihew, 96n.5
agency, 7, 11–12, 14–15, 19–20, 69, 71, 75
All Clear, 23, 24–25, 32–34, 35–36, 38, 42–43n.25, 43, 44, 47–49, 50, 55, 76
Alvarez, Maria, 14–15, 16, 18, 19
Arsenic, 73
basic responsibility. See direct responsibility
Bernstein, Sara, 51n.30, 108n.16
blame, the nature of, 3, 96–97, 100–1
blockage cases, 101–5
  limited, 112–18
  modified, 106–9, 111, 112
  self-imposed, 109–12
Blumenfeld, David, 94n.2
Brain Malfunction, 113–16
Brink, David, 6–7, 14n.21
Broken Phone, 80, 85, 86
buffer cases, 117–21
  modified, 121–27
Buffered Revenge, 118–19, 121
Cain, James, 16n.26
causal determinism. See determinism
Cheesesteak, 74–75
Clarke, Randolph, 4nn.5–6, 23n.3, 30n.12, 33–34, 35–36, 35n.17, 35n.18, 37–39, 40, 42, 51, 51nn.29–30, 51n.31, 63n.7
compatibilism, 7, 8n.12, 12
control, 7, 8, 13, 31–32, 41–42, 43, 44, 53–55, 77–78, 88–91, 104, 117, 122–23, 124, 125–26
  actual causal, 32, 42, 43, 53–55
  direct, 88–91
  regulative, 32, 42, 53, 54–55
  situational, 6–7, 14n.21, 16–17
could have done otherwise, 1, 4–6, 8, 13, 16–18, 19, 57, 59, 62, 95, 110, 127–28. See also ability to do otherwise
Cyr, Taylor, 25n.5, 25n.6, 64–65
Dennett, Daniel, 132
determinism, 7, 8n.12, 11–12, 17–19, 92n.1, 94, 95, 96, 97, 106, 108–11, 116, 123
deterministic causation. See determinism
dilemma defense, 17–19, 94n.3
direct responsibility, 3–4, 5n.8, 16n.26, 73–74, 88, 90–91, 110–12
Ekstrom, Laura Waddell, 107n.15
Fara, Michael, 6, 101n.10
fine-grained analysis, 19–21, 22, 23–24, 57, 62, 66–67, 68, 69, 71–72, 74, 75–76, 77, 78, 80, 81–83, 87, 90–91, 103, 108, 109, 110–11, 118–19, 121, 135
Fischer, John Martin, 5n.7, 11n.14, 12–13n.18, 19–20, 22–23, 32, 35n.18, 37n.23, 42–43n.25, 53–55, 59n.2, 62, 63, 64–65, 66, 80, 80n.12, 81, 92n.1, 101n.11, 103–4, 105n.13, 131n.26
flicker of freedom, 19–20, 66, 68, 69
foreknowledge, 11, 92n.1, 97n.7
Frankfurt, Harry, 1–2, 13, 14–15, 16, 17n.27, 19–20, 22, 24n.4, 33, 34, 35–36, 35n.17, 35n.18, 36n.19, 37–38, 42, 57–61, 82n.14, 83, 92–94, 98–99, 102, 127–31, 132
Franklin, Christopher E., 16n.26, 87n.19, 119n.19
free will, 1–2, 7n.11, 9–10, 11, 119–20, 122, 123–24, 135
freedom to do otherwise, 6, 7, 8n.12, 9–12, 92n.1, 94, 95, 96, 97, 135
Funkhouser, Eric, 108–9nn.17–18
Gettier, Edmund, 1–2
Ginet, Carl, 16n.26, 19n.29, 72–75, 107n.15, 119n.19
Glannon, Walter, 35n.18, 36n.19
Goetz, Stewart, 19n.29, 107n.15
Guru, 109–12
Haji, Ishtiyaque, 35n.18, 36n.19
Herbert, Frank, 97n.6
Hero, 37–38
Heroic Effort, 60–61, 84
Hitchcock, Christopher, 108n.16
Hunt, David, 11n.15, 22n.1, 50–51, 76n.10, 92n.1, 102–7, 118–19
incompatibilism, 7, 8n.12, 11–12, 17–18, 95n.4, 105n.13, 123, 126
Indecision, 63, 64, 89
indirect responsibility, 3–4, 5n.8, 16n.26, 73, 88, 90–91, 112
irrelevance argument, 57–61, 86n.18, 109n.18, 124
irrelevance thesis, 24–25, 57–59
JoJo, case of, 97, 101
Kane, Robert, 19n.29, 75–76, 77, 107n.15
Kearns, Stephen, 25n.5, 27n.7
Khoury, Andrew, 82n.14
Larvor, Brendan, 15n.22, 15n.24
Lehrer, Keith, 8
Leon, Felipe, 5n.7
libertarianism, 119–20, 121–22, 123–24, 126
List, Christian, 12n.16
Make it Rain, 26, 27
McCormick, Kelly, 97n.7
McIntyre, Alison, 33, 34, 35–36, 35n.18, 42
McKenna, Michael, 2n.2, 12–13n.18, 13n.19, 13n.20, 57–58, 83, 84–86, 112–15, 121–22, 123–24, 125–27, 128n.21, 131n.26, 133n.2
Mele, Alfred, 5, 5n.8, 6n.9, 7n.10, 12–13nn.17–18, 39–40, 62n.5, 101n.9, 106, 108n.16
Metz, Joseph, 7n.10
Million Dollar Button, 73
Minimal Decency, 60–61, 84
moral luck, 78–83
moral responsibility, definition of, 2
Naylor, Margery Bedford, 16n.26
Needed Medication, 114
Nelkin, Dana, 2–3n.4, 6–7, 12n.16, 14n.21
O’Connor, Timothy, 16n.26
omissions, 4, 8, 23–24, 27–29, 32–43, 50–56, 63, 88, 134
on his own, 13–15, 16–17, 18–21, 23–24, 37, 38, 55–56, 62–63, 64–65, 66, 67–68, 69–70, 71–72, 74, 75–76, 77–78, 81–82, 87, 88–90, 91, 92n.1, 93–95, 97, 103–4, 106–7, 108, 109, 110–11, 112, 115–16, 117–19, 121, 122–23, 124–25, 126, 133, 134
  definition of, 19–20, 69
opportunity, 6–7, 9, 14n.21, 45–46, 49, 67–68, 86, 87
opportunity to do otherwise, 6–7, 16–17, 19, 20, 21, 55, 65, 66, 94–96, 111, 112, 115, 118–19, 120–21, 125–26, 133n.1, 135
options, 6, 7–8, 9–10, 14n.21, 23, 28n.10, 30, 35–36, 37–38, 42, 54, 63, 64–65, 67–68, 74–75, 81, 82–83, 86n.17, 96, 102, 103–4, 107, 110–11, 114, 115–16, 117, 124–26, 127, 128, 130
Original Revenge, 92–97
Otsuka, Michael, 20n.30, 75–76
Palmer, David, 59n.2, 107n.15, 119n.19
Pereboom, Derk, 2n.3, 12n.17, 105n.13, 106–7, 107n.15, 118–20, 121–22
Pickard, Hanna, 98n.8, 130n.25
Pike, Nelson, 11n.14
praise, the nature of, 3
principle of alternative possibilities, 1–9, 14, 16–19, 20–21, 55–56, 62, 65n.8, 79, 98, 101, 102, 104, 105, 105n.13, 108, 109, 110, 111, 112–14, 115, 116, 117, 118–19, 121, 123, 127–28, 129–30, 131, 132–34, 133n.2, 135–36
  diachronic reading of, 5–6
  restricted to direct responsibility, 4, 5n.8, 16n.26, 110–11
  significance of, 9–13
  synchronic reading of, 5
principle of derivative blameworthiness, 46–50
principle of transmission of responsibility, 43–46, 47–48, 49
prior sign cases, 92–97, 121–22
Ravizza, Mark, 12–13n.18, 22–23, 23n.3, 35n.18, 37n.23, 42–43n.25, 53–55, 64–65, 131n.26
remedial obligations, 30–31
Revenge, 13, 14–15, 16, 17–21, 23–24, 35, 50, 53, 54–56, 57, 59, 59n.2, 62–63, 64, 65, 66, 71, 74, 75–76, 77–78, 81–83, 88, 89–91, 92, 93, 98, 102–3, 118, 127–28, 132–35
Revenge-B, 103, 105, 106–7, 109
Revenge-MB, 106–9, 112
Revenge 2, 16–17, 19, 20, 133n.1
Robb, David, 1n.1, 106, 108n.16
Robinson, Michael, 16n.26, 63n.6, 76n.11, 114–15
robustness, 62–66, 103–4, 112–13, 114, 115–16, 117–19, 120–21, 122–23, 124
  reasonable expectations test for, 66, 99, 103–4, 115, 120–21
Sartorio, Carolina, 2n.2, 12–13n.18, 27n.9, 36n.22, 41, 43–45, 46–47, 48, 49, 58, 58n.1, 60, 87–88, 89n.20, 90–91, 128n.21, 131n.26
Schaffer, Jonathan, 108n.16
semicompatibilism, 12
Shabo, Seth, 15n.22, 22n.1, 76n.10
Sharks, 22–25, 26–27, 28–29, 30, 31–34, 35–39, 40, 41–43, 44, 47–49, 50, 53, 54–56, 76, 77, 86, 96, 98–99, 100–1, 134
Shoemaker, David, 2n.3
Silver, Kenneth, 52n.32
Sloth, 35–39, 40, 41–43
Speak, Daniel, 16n.26
Spencer, Joshua, 97n.7
Squeaky Button, 58–60
Sripada, Chandra, 99–100
Steward, Helen, 15, 15n.22
Stockdale, Bradford, 109–12
Strawson, Peter, 13n.19
Stump, Eleonore, 12n.17, 66–72, 94n.2
Swenson, Philip, 16n.26, 40n.24, 42–43n.25, 80n.12, 83n.15, 92n.1
symmetry argument, 22–24, 119n.19
symmetry thesis, 23–24, 50–55, 99, 134
Tax Cut, 119–21, 122–23, 125–26
Tax Cut 2, 122–23, 124–27
Timpe, Kevin, 106n.14
Todd, Patrick, 11n.14, 58n.1
Too Far Away, 29–30
trumping preemption, 107–9, 111
Two Buttons, One Bomb, 45, 46, 47, 49, 55
Two Buttons, One Bomb 2, 45–46, 47, 49–50
Two Buttons, One Bomb 3, 49–50, 55
Unwilling Addict, 129–31
van Inwagen, Peter, 7n.11, 8, 16n.26, 80n.13, 135
Vargas, Manuel, 10n.13
Vihvelin, Kadri, 12n.16
Voting Booth 1, 39, 40
Voting Booth 2, 39–40
Wallace, R. Jay, 6–7, 27–28, 27n.7
Watson, Gary, 2n.3
Whittle, Ann, 27n.9
Widerker, David, 2n.2, 13n.20, 19n.29, 58n.1, 65n.8, 84–86, 107n.15, 115–16, 117
Willing Addict, 98–101, 104, 129
Willing Exploiter, 99–101, 117
Wolf, Susan, 2–3n.4, 97
Wyma, Keith, 19n.29, 20n.30
Zagzebski, Linda Trinkaus, 11n.15, 12n.17, 78, 79, 81–82
Zimmerman, David, 35n.18, 36n.19
Zimmerman, Michael J., 2n.3, 80n.12
Z-Persons, 115–17, 123, 124