Morality in Times of Naturalising the Mind
Philosophische Analyse / Philosophical Analysis
Herausgegeben von/Edited by Herbert Hochberg, Rafael Hüntelmann, Christian Kanzian, Richard Schantz, Erwin Tegtmeier
Volume / Band 59
Morality in Times of Naturalising the Mind
Edited by Christoph Lumer
ISBN 978-1-61451-799-3
e-ISBN (PDF) 978-1-61451-801-3
e-ISBN (EPUB) 978-1-61451-939-3
ISSN 2198-2066

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2014 Walter de Gruyter Inc., Boston/Berlin
Printing: CPI books GmbH, Leck
♾ Printed on acid-free paper
Printed in Germany
www.degruyter.com
Contents

Introduction
Morality in Times of Naturalising the Mind – An Overview (CHRISTOPH LUMER)

Part I: Free Will, Responsibility and the Naturalised Mind
1 Naturalizing Free Will – Empirical and Conceptual Issues (MICHAEL PAUEN)
2 Libet's Experiments and the Possibility of Free Conscious Decision (CHRISTOPH LUMER)
3 The Effectiveness of Intentions – A Critique of Wegner (CHRISTOPH LUMER)

Part II: Naturalising Ethics? – Metaethical Perspectives
4 Neuroethics and the Rationalism/Sentimentalism Divide (MASSIMO REICHLIN)
5 Experimental Ethics – A Critical Analysis (ANTONELLA CORRADINI)

Part III: Naturalised Ethics? – Empirical Perspectives
6 Moral Soulfulness & Moral Hypocrisy – Is Scientific Study of Moral Agency Relevant to Ethical Reflection? (MAUREEN SIE)

Part IV: Neuroethics – Which Values?
7 The Rationale Behind Surgery – Truth, Facts, Values (ARNALDO BENINI)

Biographical Notes on the Authors
Name Index
Introduction
Morality in Times of Naturalising the Mind – An Overview

CHRISTOPH LUMER

The scientific and philosophical attempts of recent decades to provide a neurophysiological or otherwise naturalising explanation of the mind have had an impact on ethics too, albeit with some delay. There are experiments to find the neurophysiological bases of moral judgement and action. Neurophysiological and psychological studies on the causes of actions have provoked debates about the existence of our freedom of will, of responsibility and of practical rationality altogether; and experimental as well as medical interventions on the brain have led to the emergence of a neuroethics of such interventions. The developments driven by neurophysiology go along with a general strengthening of efforts to study the empirical bases of morals and moral action – apart from the physiological, also their psychological, cognitive and evolutionary bases.

This naturalistic wave, in turn, has provoked comments on the interpretation of particular empirical findings as well as more general debates about the role and use of empirical information in ethics. Here the spectrum of positions ranges from unconditional naturalism, which sees such empirical research as the very aim of fully developed philosophy, through various intermediate metaethical conceptions, which defend the methodological autonomy of ethics but give empirical information a more or less important role in it, to apriorism, which views (normative) ethics as a purely conceptual matter and denies any relevance of empirical research for philosophy.

The present volume (apart from Benini's chapter on neuroethics) contributes to the latter, i.e. interpretative and metaethical, debates with chapters in a reflective spirit and often with a critical evaluation of major developments of the naturalistic enterprise in ethics. Apart from presenting the chapters of this volume, this introduction will provide some background information and orientation in the form of brief overviews and metaethical assessments of the main fields of the just-mentioned empirical research on morals and their bases: 1. neurophysiology and 2. psychology of action and decision, 3. moral physiology (i.e. the science of the neurophysiological basis of moral judgements and actions), and 4. moral psychology.
1. Neurophysiology of Action and Decision

Neurophysiology of action and decision explores the physiological mechanisms behind our decisions, actions and sense of agency, and behind the (auto- or hetero-) attribution of such events – in particular the time, place and interrelation of the respective neurological processes. More specific questions include: What is the role of cognitive, control and suppression processes in decision? Where and when do these processes take place – probably in the prefrontal cortex, but where exactly? What is the role of emotional mechanisms (in the basal ganglia etc.) in decision and action execution? What are the physiological mechanisms of reward? How and where are goals and subgoals processed – in the frontopolar cortex? What are the unconscious determinants of apparently free actions? What are the timing and role of conscious decision? Does it occur simultaneously with or later than the real physiological "decision"? Is it identical to or supervenient on the real "decision", or does it have only a secondary function, the real "decision" occurring earlier and unconsciously? How are the capacity to control impulsive behaviour, certain action tendencies (like sexual, aggressive or compulsive ones) and the general level of activity influenced by neuromodulators such as serotonin and dopamine, by neurotransmitters and by hormones? Are there ideomotor actions, i.e. actions that are caused by merely thinking of that action, and how do ideomotor actions function – e.g. because the mere representation of actions and their representation for executive purposes are materialised in the same place? Is awareness of action localised in the same place during intention formation as during action execution? What are good physiological predictors of actions? What are the neural correlates of self-ascriptions of actions? What is the mechanism of intention recognition in other subjects – is it mediated by mirror neurons? What are the physiological correlates of dysfunction in actions, and, conversely, what are the consequences of certain brain lesions for our decisions and actions? And so on.
Most of the respective findings are not directly relevant for ethics¹ – not even for ethics in a broad, Aristotelian sense, which includes the prudential aspects of our actions – but some definitely are; and sometimes it is difficult to predict which questions will in the end have ethically relevant answers. Additionally, some findings are indirectly relevant, e.g. if physiological observation helps answer ethically relevant but essentially psychological questions, like whether certain decisions are taken on a more emotional or a more rational basis.² (If we know in which brain areas the respective kind of processing takes place and if we have correspondingly fine-grained observations of this processing, e.g. by functional magnetic resonance imaging (fMRI), we may establish the answer to the empirical emotion–reason question physiologically.)

¹ Therefore, the present discussion of the neurophysiology of decision and action is very selective. Some overviews of this field are: first introduction: Ward 2006: chs. 8; 13; extensive overviews and some detailed discussion: Berthoz 2006; Jeannerod 1997; Passingham & Lau 2006; Spence 2009; Vartanian & Mandel 2011.

² Another example is the physiological confirmation of the psychological differentiation between actions caused by present intentions (the paradigm of intentional action) and habitualised, automatic, i.e. unconsciously initiated and performed, actions; the actions of these two classes are caused by completely different neurophysiological pathways (Neal et al. 2011: 1429).

Two complexes which are definitely relevant for ethics and have attracted more than ephemeral attention among ethicists are, first, the restriction of freedom and of our capacity of control by anomalous physiological factors like frontal lobe disorder, insufficient serotonin levels (due to disease, for example), psychopathy or autism, and, second, unconscious determinants of our normal decisions or directly of our normal actions. The first complex has a directly practical relevance in jurisprudence and has therefore received much attention in law, but not that much in philosophy, where, however, a general and also neurophysiologically informed theory of gradually restricted responsibility should be developed (Churchland 2002: 211-219). The second complex, instead, has during the last two decades received much attention in philosophy as well as among the general public in the wake of neurophysiologist Benjamin Libet's studies of unconscious determinants of intentions and actions (scientific synthesis of these studies: Libet 1985; further main elaboration: Libet 2004).

Libet claims to have found that intentions are preceded (by about 500 ms) and determined by unconscious but physiologically measurable (on the vertex and temples) electrical readiness potentials in the brain, which lead to the predetermined action if no conscious veto of the subject intervenes. Even for compatibilists, who believe that causal determinacy of decisions does not in itself exclude freedom of decision, such a finding would completely undermine our traditional theory and practice of ascribing freedom of decision and responsibility. This is so because the traditional theory and practice bet on (conscious) rational deliberation and decision, which is able to find new possibilities of action, to consider and respect relevant reasons, and to scrutinise these possibilities and their consequences critically and consciously. If, however, the main "decision" is already taken in the form of a readiness potential and the conscious intention only reflects this "decision", then the action cannot be determined by such a rational deliberation and decision and, hence, cannot be free.
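To make the timing claim concrete, here is a minimal illustrative sketch of the temporal structure Libet asserts. Only the roughly 500 ms lead of the readiness potential and the veto option are taken from the summary above; the -200 ms placement of the conscious intention is an assumed placeholder, not Libet's measurement:

```python
# Illustrative sketch of the temporal structure of Libet's claim as summarised
# above (not Libet's data analysis). Times in ms; movement onset is t = 0.
CONSCIOUS_INTENTION_T = -200                          # assumed placeholder
READINESS_POTENTIAL_T = CONSCIOUS_INTENTION_T - 500   # ~500 ms earlier (see text)

def action_occurs(conscious_veto: bool) -> bool:
    """On Libet's model the unconsciously initiated process runs on to the
    action unless a conscious veto intervenes after the intention is felt."""
    return not conscious_veto

# The worry for freedom of decision: the physiological "decision" is fixed
# about half a second before anything becomes conscious, leaving consciousness
# only the late option of vetoing.
print(READINESS_POTENTIAL_T, CONSCIOUS_INTENTION_T, action_occurs(False))
```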
Since this is a fundamental ethical question, Libet's publications have led to a huge debate with thousands of contributions; the present volume adds two more to them (chapters 1 and 2). Whereas a considerable portion of the commentators, including several (allegedly realistic) philosophers of mind and e.g. the neuroscientists Gerhard Roth and Wolf Singer, simply accept Libet's findings as one of the definite proofs that freedom of decision is an illusion, many empirical scientists have harshly criticised the methods, measurements and interpretation of Libet's studies. And philosophers of action as well as ethicists, in addition to advancing in part similar criticisms, have found conceptual faults in these studies, like confusing an urge to act with an intention to act, overlooking general distal intentions, or equating freedom with indeterminacy; probably there is no ethicist engaged in this debate who accepts Libet's conclusion (detailed discussion and references: Pauen and Lumer, chapters 1 and 2 of this volume). The upshot of this critique is that Libet has observed only urges to act (instead of intentions), which, furthermore, were in part artificially induced by the experiments themselves and whose timing is still entirely unclear, and that the assumed determining effect of the readiness potentials on action is no more than a methodical artefact.
In light of this devastating critique, which has completely demolished Libet's attack on traditional ideas of freedom of decision and action, it is surprising how much weight is still given to it. Sometimes the impression is that some of Libet's followers accept his theory because they are fascinated by a picture of consciousness in general, and of intention as well as of the self in particular, as something like a computer display or the measurement display of some other machine, whose indications are produced by the machine and which can tell a bit about what is going on inside but has no functional role in the machine's operation; let us call this the "display" theory of consciousness, intention etc.³ The opposite, personalist view, of course, would not deny that consciousness is only the tip of an iceberg of an immensely complex and mighty unconscious "underwater" structure, but it would stress that this conscious tip in large part and in many important respects effectively controls the ensemble. The critique of Libet's work, of course, does not imply that the personalist view has now been proved true and the display theory false; however, Libet's findings do not contribute anything to a proof to the contrary.

³ Some exponents of this view also call its main point the "zombie theory" (Koch & Crick 2001; Clark et al. 2013), where this label is not intended to mean, as usually, that we do not have consciousness, but only that this consciousness does not decide, whereas the unconscious machinery does.
2. General Psychology of Action

The current neuro-hype notwithstanding, general psychology of action is more directly relevant for ethics than the respective physiology, because ethics and rationality theory normally use psychological, not physiological, categories. They do this because, in the end, they have to propose directly applicable rules of action or decision, which therefore contain conditions whose fulfilment is (mostly) epistemically (directly) accessible to the subject – like one's own beliefs, desires and emotions, in contrast to neurophysiological states. In order to be able to propose good rules, ethics and rationality theory then need empirical information about the (sufficient) conditions or consequences of such epistemically directly accessible conditions. In particular, these theories need information as to whether their proposals are realisable, whether
they are not realised necessarily (such that there is no choice to behave differently and a respective proposal would be nonsensical) and whether they are sufficiently good in relation to other possible proposals (hence information about possible alternatives and their consequences is needed); all of this information contains psychological concepts in the antecedent or in the consequent condition (Lumer 2007a; 2007b: sects. 6-8). Accordingly, many parts of decision psychology are highly relevant for a (prudential) rational decision theory as well as for general ethics (Lumer 2007b). Even the debate between Kantians and Humeans – whether an a priori approach can make justifications of morals motivationally relevant and influential, or whether reliance on the subject's desires is indispensable and will shape the content of morals – is mostly and essentially a debate about a question of decision psychology.

In recent decades empirical research in this field has provided a wealth of useful information (overviews: Camerer 1995; Crozier & Ranyard 1997; Hardman 2009; Koehler & Harvey 2004; Manktelow 2012; Payne et al. 1993). One result, e.g., is that deciders are very flexible with respect to the decision criterion used at a given time; in a certain sense they decide how to decide, thereby considering in particular the precision and costs (mostly time) of the decision mode and adapting it to the importance of the current decision (Payne et al. 1993); this result could even be the blueprint for a rational decision design. However, from the studies in decision psychology, some "destructive" results have garnered the most attention from ethicists, namely findings which allegedly show that humans decide less rationally than is usually assumed. Often agents do not follow the advice of rational decision theory to maximise expected utility (much important evidence was provided by Kahneman and Tversky; conspectus: Kahneman 2011: part IV); they miscalculate probabilities; and instead of rational calculations they use rules of thumb (Gigerenzer 2010). On the one hand, this focus of philosophers' attention sometimes seems to be a masochistic delight in the destruction of a noble self-image of humankind. On the other hand, a more in-depth examination of such results could sometimes even reveal deeper forms of rationality – like e.g. second-order maximisation (i.e. optimisation of the optimising process itself) or ways of dealing with cases where statistical justifications of maximising expected utility do not hold – which have not yet been sufficiently captured by philosophical rationality theories and, therefore, have superficially been branded as irrational (e.g. Buchak 2013).
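For reference, the expected-utility criterion mentioned here can be stated compactly. This is the standard textbook formulation from rational decision theory; the notation is supplied for illustration and is not taken from the works cited:

$$ \mathrm{EU}(a) \;=\; \sum_{o \in O} p(o \mid a)\, u(o), \qquad \mathrm{EU}(a^{*}) \;=\; \max_{a \in A} \mathrm{EU}(a), $$

where $A$ is the set of options, $O$ the set of possible outcomes, $p(o \mid a)$ the agent's subjective probability of outcome $o$ given option $a$, $u$ her utility function, and $a^{*}$ the option to be chosen. The "destructive" findings just cited are cases in which observed choices systematically deviate from this maximisation rule.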
Beyond decision psychology, the psychology of action has in the last twenty years provided a number of results which might challenge the traditional conception of action, which is taken to be the basis of our practices of prudential decision, giving reasons, and civil and moral responsibility. The theory of prudential rationality and ethics have to reply at least to the following findings.

(i) Automatic actions: Some philosophers of action discussed automatic actions, in particular automatic routine actions – like eating "munchies" from a bowl in front of oneself or shifting a car's gears – several decades ago (e.g. Melden 1961: 86; 97-100; 202-203; historical overview: Pollard 2010). More recently, however, psychologists of action have found still other types of automatic actions, i.e. actions which are initiated and executed without attention, e.g. mimicking one's interlocutor or conditioned reflexes, and have shown their pervasiveness (overviews: Bargh & Barndollar 1996; Bargh & Chartrand 1999). The ethical problem with automatic actions is that they are not, or at least do not seem to be, caused by respective intentions – which, however, is required for an action in the traditional sense.

(ii) Spontaneous unconscious intention and action: Instead of being produced by an automatism, unconscious action can also be produced by a usually rather simple, spontaneous unconscious deliberation and intention, which react creatively to the current situation and to a very limited degree consider the pros and cons of at least two options – e.g., during a conversation, to sit down on a chair vs. to remain standing, or to open the window vs. not doing anything in this respect. Although there seems to be an intention in such cases, the fact that this intention is unconscious may imply that it is not subject to critical scrutiny by our reason and hence that we are not responsible for the action.

(iii) (Subliminal) priming of decisions: There is a huge mass of experiments showing that subjects who have been exposed to, i.e. primed by, certain perceptions which (unconsciously) activate related ideas are influenced in their later decisions by these ideas. E.g., after having worked on a language test which contained several words having to do with old age (or politeness or rudeness etc.), subjects behaved accordingly, e.g. they walked more slowly (Bargh et al. 1996; Bargh & Chartrand 1999: 466). In these experiments the priming was so inconspicuous that the subjects did not even detect that they had
been exposed to some accumulation of words of a certain semantic group; and similar effects occur if the priming words are presented subliminally, i.e. so briefly that they are not consciously perceivable at all. All this means that the later conscious intention has been influenced unconsciously. If we are regularly exposed to priming effects, then isn't the lion's share of our decisions unfree?

(iv) Not knowing one's intention and action: The psychologist Daniel Wegner has collected a long list of empirical findings where people feel that they are willing an act that they are not doing or, conversely, are not willing an act that they in fact are doing – e.g. alien hand syndrome (because of a neurological lesion one hand seems to act autonomously), table turning in spiritistic séances, or believing that one moves one's hand though only an optical illusion makes another person's – moving – hand seem to be one's own (Wegner 2002: chs. 1-2). Wegner infers from this that we have no direct knowledge of our will's causing our actions and that our respective beliefs are cognitive constructs on the basis of the empirical information at hand (ibid. 67-69). He goes on to claim – and proposes a respective model – that our will (or its physical basis) only provides information about, but does not cause, our actions, which instead are prepared and brought about by unconscious processes (the illusion-of-conscious-and-empirical-will thesis) (ibid. 68; 96; 146; 342).

All of these findings stress the role of the unconscious in the production of action and have contributed to the view which is called "the new unconscious" (Hassin et al. 2005) – in contrast to the "old", Freudian, motivated unconscious – i.e. a cognitive unconscious (sometimes similar to the "unseen" and complex processing of a computer) that can account for many "higher" mental processes. Although these findings have been regarded as confirmations of the display theory, and though they are, of course, challenges for a traditional conception of action and responsibility, which have to be discussed carefully, in the end a careful and more targeted, differentiated discussion might also merely result in some revision of the traditional picture and not lead to a plain corroboration of the display theory.

Ad i: Automatic actions: The traditional view of actions is intentional-causalist: actions, by definition, are caused (in the right way) by respective conscious intentions (or their physiological basis). This, however, does not imply that the causally effective intention is a singular proximal intention; it may be, e.g., a general or distal intention. At least part of
the automatic actions, in particular habitualised routine actions, go back to intentions formed some – or even long – time ago and thereby fulfil the definitional conditions of an action (Lumer, forthcoming). Others instead may not fulfil the conditions but, precisely for that reason, are no longer to be considered actions – without any need to change the definition of 'action'.

Ad ii: Spontaneous unconscious intention and action: Spontaneous unconscious actions could be a limiting case of actions. On the one hand, they are caused by (something like?) a deliberation and intention; on the other, because this deliberation and intention are unconscious, they did not pass the more thorough critical check of an attentive consciousness, and hence the resulting behaviour may be something we are less responsible for and possibly not an action. (If unconscious deliberation reaches a critical point, revealing a critical feature of an action, this often leads to attracting conscious attention; frequently, however, conscious consideration is required in the first place for detecting problematic points, so that unconscious deliberation cannot have a sufficiently deep critical function.)

Ad iii: (Subliminal) priming of decisions: Subliminal influences on an intention do not question the status of the intention and action as such, but they will make them less rational. Psychoanalysis already revealed (other types of) unconscious influences on our decisions. The critical moral of this insight was that a reflective person should know about and study such possible unconscious influences in order to raise her level of rationality. An analogous lesson should be drawn for subliminal priming as well.

Ad iv: Not knowing one's intention and action: The traditional, intentional-causalist conception of action does not require that agents later remember their (comprehensive) intention or have direct, first-hand knowledge of their action.⁴ Therefore, the seemingly conflicting findings collected by Wegner regarding the lack of such knowledge do not contribute anything to refuting the traditional conception. Wegner's further, much stronger illusion-of-conscious-and-empirical-will thesis, instead, is in conflict with the very idea of the intentional-causalist conception of action. However, Wegner has absolutely no proof of this thesis – apart from a reference to Libet's findings, which have been discussed above – and there is evidence to the contrary, which sustains the intentional-causalist view (Pauen and Lumer (chs. 1 and 3), this volume).

⁴ Anscombe and some of her followers, however, postulated such a direct knowledge about one's action – in particular, knowledge not mediated by observation – as a definitional characteristic of action (Anscombe 1957: §§6; 8; 16; 28). But this conception is intended to be an alternative to the traditional, intentional-causalist theory of action. The findings collected by Wegner help to refute this opponent of the traditional theory.
3. Moral Physiology

At present moral physiology is a rapidly evolving field of research. In this introduction only a brief once-over with some comments can be given, to convey an idea of the studies done in this field and of their ethical relevance (more detailed overviews from an ethical point of view: Levy 2009; Polonioli 2009; Reichlin, this volume: ch. 4, sect. 1). Probably the best known results of moral physiology, which have evoked much philosophical discussion, regard the neurophysiological counterparts of moral reasoning and judgement, in particular about moral dilemmas such as the Trolley Problem (Greene et al. 2001; 2004; summary: Greene 2005). While their brains were being scanned with fMRI, subjects had to decide what to do in hypothetical situations like these:

Bystander: "A runaway trolley is headed for five people who will be killed if it proceeds on its present course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks where it will kill one person instead of five. Should you turn the trolley in order to save five people at the expense of one? Most people say yes." (Greene et al. 2004: 389; nearly identical: Greene et al. 2001: 2105.)

Footbridge: "As before, a trolley threatens to kill five people. You are standing next to a large stranger on a footbridge spanning the tracks, in-between the oncoming trolley and the hapless five. This time, the only way to save them is to push this stranger off the bridge and onto the tracks below. He will die if you do this, but his body will stop the trolley from reaching the others. Should you save the five others by pushing this stranger to his death? Most people say no." (Greene et al. 2004: 389; nearly identical: Greene et al. 2001: 2105.)
Mikhail’s figures about these scenarios are: In the Bystander dilemma 90% say they would rescue the five persons, in the Footbridge dilemma only 10% do so (Mikhail 2007: 149). This
difference is astonishing because from a consequentialist point of view both situations are prima facie equal: five persons are saved at the cost of one. Greene explained the difference by an evolutionarily developed, emotionally felt inhibition against causing serious bodily harm to another subject in a personal situation (physical contact, short distance, face to face etc.), as in the Footbridge dilemma, whereas impersonally caused harm, as in the Bystander dilemma, does not elicit the inhibiting emotion. The latter type of harming was not yet possible when such emotions evolutionarily developed in our ancestors and, therefore, was not included in the naturally rejected forms of social behaviour, so that impersonal harming can be decided cognitively and rationally (Greene et al. 2004: 389-390; Greene et al. 2001; Greene 2005: 59). Greene's "dual-process theory" adds that besides these emotional responses there are rational deliberations, which can and sometimes do outweigh the emotions; however, rational considerations need more time than the spontaneous emotional reactions (Greene 2007). Greene supports his explanation with fMRI data: the brain regions associated with cognitive control (anterior cingulate cortex and dorsolateral prefrontal cortex) were more active in subjects when they considered the Bystander dilemma, whereas the brain regions associated with emotion and social cognition (medial prefrontal cortex, superior temporal sulcus, posterior cingulate cortex, temporal poles, amygdala) were more active when subjects considered the Footbridge dilemma. Furthermore, the minority who in the Footbridge case decided in a "utilitarian" way (sacrificing the fat man to save five) also had the emotional activation, but it was counteracted by an additional, higher cognitive activation; this conflict also led to longer response times in these persons (Greene et al. 2001: 2106-2107; 2004: 390-391). In addition, Greene has taken the fact that in the Footbridge dilemma patients with lesions of the ventromedial prefrontal cortex (VMPFC), who lack emotional inhibitions against antisocial and irrationally short-sighted behaviour, endorse the "utilitarian" judgement and decide faster than normal subjects (Ciaramelli et al. 2007; Koenigs et al. 2007) as a further confirmation of his explanation (Greene 2007).

Now some ethicists have tried to make a normative ethical point of these findings in moral physiology. Peter Singer uses these results to argue against intuitionist approaches in general and against
intuitionist objections to utilitarianism in particular. According to Greene's explanation, the majority's contrasting intuitions about cases of saving five by sacrificing one in the Bystander and in the Footbridge dilemma reduce to the difference of when the respective way of killing was invented (before or after the evolutionary introduction of emotional barriers against killing fellow humans), which, of course, is morally irrelevant (Singer 2005: 247-248). The longer reflection time of the utilitarian minority in the Footbridge dilemma, as well as the participation of more cognitive brain areas in their decisions, shows that the utilitarian decision is more rational and, hence, ethically to be preferred (ibid. 349-351). Greene associates himself with Singer's argument and adds to it that deontologism, which sustains the majority view in the Footbridge dilemma (i.e. it forbids pushing the fat man from the bridge), is actually based on intuitive, emotional decisions, which are only later rationalised (in the Freudian sense) by the deontological ethical theory; this theory is only a confabulation of reasons for an arational, emotional decision, based on historically arbitrary developments. Consequentialism, however, is not driven by emotion (or at least not by the sort of "alarm bell" emotion that drives deontologism); it is inherently cognitive and rational – it systematically considers all values and flexibly weighs them – although it does have some affective component (Greene 2008: 39; 41; 57; 59-65). Given these origins, for Greene it is clear which ethical theory is preferable (ibid. 76).

This remarkable straight march from moral physiology to normative ethics is too bold to remain uncriticised (Reichlin and Corradini, this volume (chs. 4 and 5); some intuitionist critique: e.g. Levy 2006; Sinnott-Armstrong 2008: ch. 2.1-2.2). Whereas the physiological part has been widely accepted, the psychological theory already contains the following problems (among others). 1. Even if one accepts the main idea of Greene's explanation, it remains unclear how and when rational, cognitive considerations can trump emotional reactions. 2. The difference between personal and impersonal killing seems to be really important, but not sufficient for explaining the subjects' responses. Mikhail, e.g., has tested a series of further variations of the Trolley situation; one of them is "Drop Man", which is very similar to the Footbridge dilemma; however, in Drop Man the large stranger is standing on a trapdoor, which you can open by remote control, thus making him fall onto the tracks etc. as before. Although this is now an impersonal killing,
consent to the rescue measure (killing one to save five) increases from 10% (Footbridge) to only 37% (Drop Man) (Mikhail 2007: 149), thus remaining still far below the 90% consent in the initial impersonal killing dilemma (Bystander). The responses to Mikhail's other scenarios show that rejection switches to approval only bit by bit, depending on various conditions, which may be differently important for different people. Even if every subject had only one central reason for his decision, this reason cannot be interpersonally identical; there must be several of them. It is more likely, however, that most subjects reacted to several reasons, to which they gave interpersonally different importance. Many explanations of the moral judgements in the Trolley scenarios, including Greene's, are therefore false, because they are neither able to capture the gradual switch of the judgements, with intermediate percentages of consent between the two extreme scenarios (Bystander and Footbridge), nor do they explain the minority judgements. 3. In addition, the consequentialist judgements have not been explained either. Greene seems to suggest that there is exactly one "rational" way to decide morally, namely the utilitarian: once people are able to suppress the emotionally induced decision tendencies and start to deliberate really cognitively, they arrive at utilitarian judgements. Given the many contrasts between normative ethicists, not only between utilitarians and deontologists, this is rather unlikely. A comprehensive moral psychology should explain how the various types of moral judgement come about. 4. Another problem of the experimental results and the psychological explanation is that the type of emotion(s) has not been surveyed and remains entirely unclear. In the Footbridge scenario subjects may be worried about possible penal consequences (if the trolley unpredictably comes to a stop before the footbridge, pushing down the large stranger appears to be plain manslaughter); subjects may have a bad conscience because their personal morals prohibit the action they are thinking about; they may feel an emotional inhibition against doing something dreadful; they may feel pity for their imagined victim; etc. In particular, the whole discussion mostly ignores the difference between moral emotions like guilt, indignation or gratification, which are caused by moral judgements, and, e.g., prosocial emotions (emotions near to and at the basis of morals) like sympathy or respect for persons, creatures and valuable things. 5. Moral emotions, as just said, are caused by specific moral judgements;
hence they cannot explain these judgements; the explanation goes just the other way round. However, (allegedly intuitive) moral judgements and principles are ontogenetically acquired and strongly influenced by culture as well as, to an interpersonally quite different degree, by personal rational considerations and by prosocial emotions and motives (i.e. emotions and motives near to morals) (Lumer 2002: 182-186; for intercultural and socio-economic differences in moral judgements: Haidt et al. 1993). There may also be an anthropological, e.g. emotional, basis of morality; but this basis has to be identified, and it certainly does not lead directly to predefined moral criteria or even singular judgements, but only via long cognitive processes, which have to be investigated in much more detail. In any case, moral emotions are no evidence of a fixed natural mechanism; they may even be the result of rational reflection about moral principles, which have then been adopted and now cause the respective emotions. Therefore, Greene has no strong argument for generally discounting ethicists' (deontological or even consequentialist) reflections as (Freudian) rationalisations (cf. Greene 2008: 68-69). 6. The "utilitarian" decisions of VMPFC patients cannot simply be the consequence of a lack of (social) emotions such that rationality alone determines their judgements. In the Ultimatum Game (explained below, in sect. 4), where, among other things, indignation and personally costly punishment are tested, VMPFC patients seek (revenge-driven) retaliation more than normal subjects do (Moll & Oliveira-Souza 2007).

The ethical part of Greene's (and Singer's) argument for utilitarianism and against deontologism has, of course, also been criticised. 1. Greene is careful enough not to simply deduce utilitarianism from empirical findings, because this would violate Hume's Law. Instead, he uses a strong normative, metaethical premise, namely that a cognitive, systematic and universal moral which considers all values and weighs them flexibly is better than a moral which is limited in these respects. The use of this normative premise, however, makes his argument weaker than it may first appear. Its probative force depends on this premise, which now has to be justified – something that Greene does not do. What is more, all empirical findings, physiological or psychological, have no probative force at all in his argument; they are completely irrelevant to the ethical argument, because its other necessary premises are analytical judgements about the definitional qualities of utilitarian
morals (like summing up all individual utilities), in particular about its criteria for moral valuation and obligation (cf. Corradini, this volume (ch. 5)). Hence what initially seemed to be a justification of moral principles on the basis of empirical findings turns out to be an analytical argument, which has nothing to do with these findings. There is not even an attempt to overcome Hume's Law; and this, in a certain sense, is good news. 2. Utilitarianism is not the only moral system which satisfies Greene's adequacy condition; many other welfare ethics do so as well, e.g. prioritarianism, moderate welfare egalitarianism, leximin and Rawls' principles of justice. Greene does not show why exactly utilitarianism should be the right, rational ethical system. 3. Above, Greene's model has already been criticised to the effect that it does not explain when and why some emotional process determines the moral judgement and action, and when and why rational moral judgements gain predominance. The critique just raised adds a new aspect to this problem: it remains unclear why certain "rational" moral principles are considered to be just and are personally adopted, and how they can acquire motivational force. Of course, this question is also about the psychic basis of rational morals, which remains unanalysed. If utilitarianism (or some other welfare ethics) relies on emotions (like sympathy or respect for persons etc.), we have to study which emotion and how, as well as how this emotion, or the moral principles justified by it, "translates" into motivation. Here central parts of a moral psychology are entirely missing. 4. Greene's appraisal of the VMPFC patients' moral judgements is a bit surprising. Usually this lesion is considered to be devastating, in particular because these patients no longer feel emotional warnings of risky consequences and, therefore, are no longer able to control spontaneous impulses, which leads to irrational or antisocial behaviour (Damasio 1994: chs. 8-9 (= pp. 165-222)). However, once VMPFC patients' moral judgements coincide with a utilitarian view, Greene considers the absence of the emotional brake and the patients' decisions to be particularly rational. One problem with this view is – if we accept Greene's psychological theory for a moment – that the presence of the emotional inhibition against personal killing is assessed as a kind of harmful instinct, whereas one should perhaps, on the contrary, regret the absence of a natural inhibition against impersonal killing.

Another question about moral judgements which has been ardently discussed on the basis of physiological data is whether
moral judgements are intrinsically motivational, i.e. whether (a certain form of) ethical internalism is true. Some philosophers have argued that patients with lesions of the VMPFC (Roskies 2003: 55-58) or psychopaths (with various brain damages (Kiehl 2008)) (Deigh 1996) make more or less normal moral judgements but are not motivated to act on them, so that ethical internalism is empirically false. While philosophers have accepted the physiological part of this argument, the philosophical interpretation remains controversial. Some have doubted and others reaffirmed that the patients' judgements were really moral judgements (cf. the contributions of Kennett & Fine, Roskies and Smith in: Sinnott-Armstrong 2008; Nichols 2002; Cholbi 2006). Another critical point in this debate is the interpretation and significance of "ethical internalism". First, most forms of ethical internalism can be rescued from falsification by weakening the respective hypothesis, e.g. to a 99% statistical correlation. Second, taking the internalist claim to be an empirical hypothesis ('moral judgement actually leads to the respective motivation') and then attacking it is probably a straw-man fallacy; not even Kant held such a hypothesis. Some, in a broad sense, normative interpretation of internalism probably makes much more sense. Bernard Williams, e.g., took the connection of moral demands to one's motives to be a condition of their authority (Williams 1979); another normative reading of internalism is to consider it an adequacy condition for a valid justification of morals: if some "justification" of a moral system (under certain conditions, of course) does not lead to a respective motivation, then it is not a good justification. In any case, the physiological information plays only a minor role in this philosophical discussion; it is sufficient to know that there are some forms of brain damage which leave apparent moral judgements intact but impair moral motivation.

Some other topics of moral physiology, apart from general contributions to the brain mapping of mental activities related to morals (e.g. Moll et al. 2002a; 2002b; Heekeren et al. 2003), have been moral action and moral emotion. Moll and colleagues (2006), e.g., studied the brain activities during charitable donation with the help of fMRI and found that the mesolimbic reward system is engaged by donations in the same way as when monetary rewards are obtained; in addition, orbitofrontal areas, which also play key roles in more primitive mechanisms of social attachment and aversion, specifically mediate decisions to donate or to oppose societal causes; and more
anterior sectors of the prefrontal cortex, which are associated with the control of impulsive behaviour and the pursuit of (long-term) goals, are distinctively recruited when altruistic choices prevail over selfish material interests, thus materialising a principled moral decision. When studying brain activities during Ultimatum Games, Sanfey and colleagues (2003) confirmed the role of emotions in costly moral punishing behaviour. Several fMRI studies corroborate the long-suspected vicinity of amoral disgust and indignation: they have partially overlapping neural substrates (e.g. Moll et al. 2005). Finally, Rizzolatti's discovery of mirror neurons and the explanation of their functioning illuminate the physiological basis of sympathetic feelings and actions. Mirror neurons are so called because they have a double function: on the one hand, they are activated when we act or have certain feelings and express them externally (physiognomically, vocally or gesturally); on the other, the same neurons are activated when we perceive others who behave alike, i.e. when they move or express their feelings in that way. In the second, passive case, the perception of another person's behaviour or emotional expression, via the mirror neurons, causes a mostly invisible micro-repetition of this behaviour or expression, which generates a memory-based activation of the practical sense of the movement or of the emotional feelings, thereby leading to an empathetic, i.e. felt, understanding of the other person's intention or emotion. Under certain conditions, the latter form of empathy often leads to compassion or sympathy, which, finally, may motivate benevolent action (overview: Rizzolatti & Sinigaglia 2008: in particular ch. 7).

While Rizzolatti investigates mainly the cognitive side of empathy, Tania Singer and colleagues study more the emotional and motivational consequences of (cognitive) empathy. They have found, e.g., that in empathic pain (for others who receive electric shocks) the usual pain centres are activated, but not those sensory fields which in normal corporal pain identify the bodily origin. Furthermore, for feeling empathic pain it is not necessary to see, e.g., the other's face; if there are other evidences of pain, mere imagination is sufficient to elicit empathic pain. Hence for evoking empathic emotion it can be sufficient to have some sort of information about the other's well-being; it is more important to capture its significance for the other person (Singer et al. 2004). With respect to empathy-driven altruistic helping, physiological data have confirmed what moral psychologists had found before (e.g. Coke et
al. 1978), namely that stronger empathic pain (as well as similar prior personal experience) increases willingness to engage in costly helping (Hein et al. 2011).

Moral physiology has been discussed here in somewhat more detail because the general neuro-surge of the last twenty years has so far had its strongest impact within practical philosophy in metaethics, in particular in the discussion about the foundations and the justification of morals. The explanatory models of moral judgement or action presented hitherto, including the most famous, i.e. Greene's model, are much too simplistic and therefore easy to falsify; this will probably change in the future with more targeted studies and more precise methods of inquiry. But so far moral psychology has provided much more fine-grained explanations than moral physiology. It is really astonishing that moral physiologists mostly ignore the psychological results. Moreover, the ethical importance of the physiological findings sketched here is very limited. The apparent immediate relevance of Greene's model for the decision between deontological and consequentialist morals de facto did not obtain. Empirical information about psychopaths or persons with acquired sociopathy (VMPFC patients) is ethically important; but the psychological information about them (about their exact mental capacities and disabilities) is ethically more relevant than the physiological explanation. Similar assessments hold for Moll and colleagues' findings about the neural bases of moral decisions and emotions (psychological decision models already told us, e.g., that moral considerations make up one group of aspects in general multi-attribute decisions) and for the physiological explanation of empathy (that empathy exists and can cause sympathy and then benevolent motivation has, of course, long been investigated in psychology). There are justifiable doubts that the direct relevance of neurophysiological findings for ethics will increase with advanced research.

Reasons similar to those mentioned above in the discussion of the physiology of action account for the lesser importance of neurophysiology for moral physiology as well. The main concern of ethics, as a piece of practical philosophy, is to answer the question 'What shall I / we do from a moral point of view?' and thereby to influence our decisions in a free way and in a moral direction. This is possible only by submitting "material" – considerations, reasons – which can affect our deliberation in a non-coercive way because they fit the kind of mental processes and
variables present in deliberation. Now, deliberative decisions are taken via mental attitudes like desires and beliefs. Hence, to influence decisions in a free, non-manipulative way (and in a moral direction) we need to know the way of functioning, or the psychology, of moral decision, in particular the possibilities and limits of influencing it by information, enlightenment and rational reflection – ethicists are limited to these measures; they are not neurologists who want to repair or remodel brain structures – and how and which information under which conditions changes decisions and ways of deciding. Ethicists need this type of psychological knowledge to obtain an overview of the various ways of judging and deciding, to be able to reckon with the inalterabilities of our ways of deciding, and in order to be able to develop and propose the morally best among those ways of deciding which are reachable by providing information and arguments. Hence the directly needed knowledge is psychological; it is about, and (at least primarily) uses, the categories of what is subjectively accessible.

However, there is a role for moral physiology in ethics as well, but it is a secondary, ancillary role. In order to go beyond the recognition of behavioural relations and to reveal the phenomenal psychic processes, psychology is dependent on introspective reports (in a very broad sense). However, these reports cannot be quantitatively precise; in addition, aimed introspection interferes with the processes to be observed. If, one day in the future, we have rather precise general mappings of mental onto physiological processes and have still much more detailed physiological in-vivo observation techniques at our disposal, then physiological data may help provide much more precise psychological analyses – e.g. of how intense some feeling was, and how and how strongly it influenced some decision. A second role of moral physiology is explanatory: moral physiology, one day, will explain the moral-psychological laws. Of course, our mental experience is only the surface of the workings of a mighty unconscious machinery (which does not exclude that main decisions and settings of the future course take place on this level), and the leaps between successive phenomenal experiences, in the end, can be explained only physiologically. However, this kind of physiological knowledge will help us to understand the mental processes, e.g. during deliberation, whereas probably only psychological knowledge can be used to design morally good and cognitively accessible ways of deciding.
4. Moral Psychology

The main objects of inquiry in moral psychology are moral actions and decisions, moral motives, moral emotions and moral judgements – where, however, "moral" is sometimes (apart from "moral judgement") meant in a broad sense that includes actions which conform to morality, and also includes decisions and emotions which systematically lead to actions conforming to morality but which are not guided by moral principles. Moral motives or emotions in the narrow sense are motives and emotions respectively caused by moral judgements;⁵ moral decisions and actions in the narrow sense in turn are (mainly) caused by moral motives or emotions in the narrow sense or by moral judgements. (Humeans, of course, deny that moral judgements can, as the main cause, effect actions. But this is an empirical hypothesis, not an analytical stipulation.) Moral psychologists have always hypothesised that one or the other of these phenomena is prior with respect to the others, in the sense of determining the others' content. Rationalists, for example, take moral judgements to be prior to the other phenomena; in Hume's psychology sympathy is the leading element, in Schopenhauer's it is compassion. Presently we are witnessing an emotivist surge, according to which moral emotions (in the broad sense) determine the content of moral judgements and motivation (see below). In order to make this hypothesis comprehensible, some current studies of the single objects of moral psychology have to be considered.

⁵ For some other proposals for defining 'moral emotion' see e.g. Prinz & Nichols 2010: 119-120. The definition used here is a narrower version of their second definition.

Let us start with moral decision and action. Many ethicists presuppose that moral judgement (more or less) determines moral action, so that it would be sufficient to elicit the right moral judgement to make people act in the morally right way. Empirical evidence, however, shows that moral judgement and action are quite independent (Nunner-Winkler 1999). The reason for this is that the decision psychology of moral actions does not differ from that of other actions: it is a pondering of the pros and cons of various options in the mould of rational decision theory, where moral considerations make up only one of the relevant aspects and have to be "represented" by respective motives or desires (Lynch 1978; Heckhausen 1989: 301-302; see the formal sketch below). Nor have psychologists found traces of a bipartite decision system, as hypothesised by Kant (e.g. 1977: BA 36-37 / 1903, IV: 412-413), i.e. where apart from this decision-theoretic, instrumentalist decision mode there also exists a second decision mode determined by the laws of reason.⁶ Altogether, however, the psychology of moral and immoral decision is somewhat neglected in current research.

⁶ Discussion of several Kantian decision psychologies: Lumer 2002/2003.
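The multi-attribute decision model just mentioned can be given a minimal formal sketch. The following weighted-sum formulation is a generic illustration from decision theory, not a reconstruction of Lynch's or Heckhausen's specific models; all symbols are supplied here:

$$ V(a) \;=\; \sum_{i=1}^{n} w_i\, v_i(a), $$

where the $v_i(a)$ are the agent's evaluations of option $a$ under the various relevant aspects – moral considerations entering as only one (or a few) of these aspects, "represented" by corresponding desires – and the $w_i$ are their motivational weights. On this picture a moral judgement influences action only via the weight of its associated motive, which is one way of seeing why moral judgement and moral action can come apart.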
Motives for acting morally can be differentiated into several main groups. Apart from 1. motives which coincidentally conform to moral requirements (e.g. good pay for a humanitarian job), there are 2. motives of rational cooperation, i.e. desires to improve social reactions to one's own actions (in particular avoiding punishment and receiving reward or mutual cooperation) or to obtain advantages which can only, or better, be reached by cooperation; 3. self-transcendent motives to further and care for some object (person, collective, place, artefact, institution, ideal etc.) different from oneself but to which one feels attached – as in love or affection, creative expansion by means of one's works, or collectivism and pride in one's community and culture; 4. (general) prosocial motives, which aim at other beings' well-being or flourishing without presupposing an already existing personal relationship (in particular sympathy or compassion and respect for persons, other living beings or things felt to be valuable in themselves); and 5. moral motives (in the narrow sense), i.e. motives which have their origin in a moral judgement (cf. Lumer 2002: 169-182). Different approaches to justifying morals have been based on different groups of these motives. Game-theoretical foundations of ethics and contractualism of the Hobbesian line are based on motives of rational cooperation, an ethics of caring makes recourse to self-transcendent motives, certain forms of moral sentimentalism and Schopenhauer's theory rest upon general prosocial motives, and moral rationalism presupposes moral motives. Correspondingly, representatives of these approaches have been interested in quite different studies of motives for acting morally.

Some foci of psychological research on motives for acting morally have been the following. Contracting for mutual advantage is the paradigm of rational cooperation. It works well when these contracts are warranted by external instances with
sanctioning power. If, however, such an external authority is not available, a sort of homo oeconomicus rationality, which seeks cleverly to maximise the satisfaction of selfish preferences, recommends cheating so as to get the advantages of cooperation without paying its price; if this is anticipated by both partners then, under certain fairly general conditions, rational cooperative agreements become impossible – so says rational game theory. Psychological evidence, however, does not confirm this prediction. People do not behave like homines oeconomici. For one thing, their moral motives make them more honest than a homo oeconomicus; for another, retaliatory emotions make them punish cheaters, which has an additional deterrent effect; in addition, fair players to a certain degree recognise other fair players and limit cooperation to them, and because cooperators in a selectively cooperating environment are more successful than non-cooperators, such cooperative behaviour has been favoured by evolution (Frank 1988; Kiesler et al. 1996; Mansbridge 1990; Parks & Vu 1994). This combination of cooperatively procuring service and punishing non-cooperation, called "strong reciprocity", is pervasive in social life (Gintis et al. 2005). Similar results have been obtained by exploring cooperative behaviour in Ultimatum Games: the first player can divide a given amount of goods, usually money, between herself and a second player as she pleases. However, then comes the second player's turn: if he accepts the division, both players receive the goods as assigned by the first player; if he does not accept the distribution, both get nothing. If the second player were a homo oeconomicus, he would accept any distribution proposed by the first player which gives him more than zero percent, because even one percent is better for him than nothing. However, this is not what has been observed. For one thing, second players usually accept only offers which at least approach the equal distribution of 50% to 50%; i.e. they really pay for punishing an unfair first player, e.g. by rejecting a 20% offer; they do this out of indignation and driven by a revenge motive. For another, first players mostly do not make very low offers in the first place, because of their fairness ideals or because they fear their proposal will be rejected (Fehr & Gächter 2001; Henrich et al. 2004). So there must be non-selfish motives.
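The Ultimatum Game logic just described can be made concrete with a minimal sketch. The two responder strategies below are illustrative stylisations of the behaviour reported above, and the 30% rejection threshold is an assumed value, not a figure from the cited studies:

```python
from typing import Callable

# One Ultimatum Game round (illustrative sketch, not a model from the cited
# studies): the proposer divides a stake; the responder accepts (both get
# their shares) or rejects (both get nothing).
def play_round(stake: float, offer: float,
               accepts: Callable[[float, float], bool]) -> tuple[float, float]:
    """Return (proposer_payoff, responder_payoff) for one round."""
    if accepts(offer, stake):
        return stake - offer, offer
    return 0.0, 0.0

# Homo oeconomicus: accepts any positive offer, since something beats nothing.
def homo_oeconomicus(offer: float, stake: float) -> bool:
    return offer > 0

# Empirically more realistic responder: rejects offers felt to be unfair,
# paying a personal cost to punish the proposer (threshold assumed here).
def indignant_responder(offer: float, stake: float, threshold: float = 0.3) -> bool:
    return offer >= threshold * stake

print(play_round(100.0, 20.0, homo_oeconomicus))     # (80.0, 20.0): accepted
print(play_round(100.0, 20.0, indignant_responder))  # (0.0, 0.0): costly punishment
```

The deterrent effect mentioned above shows up here: a proposer who anticipates the second kind of responder does better by offering a fair share in the first place.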
Morality in Times of Naturalising the Mind – An Overview
25
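To make the contrast concrete, the divergent predictions can be captured in a few lines of code. The following Python sketch is merely illustrative and is not part of the studies cited; the 30% rejection threshold and the payoff numbers are assumptions standing in for the empirically observed tendency to reject offers far below the equal split.

    # Illustrative sketch: a homo oeconomicus responder vs. an inequity-averse
    # responder in the Ultimatum Game. Threshold and numbers are hypothetical.

    def homo_oeconomicus_accepts(offer, pie):
        """A pure payoff maximiser accepts any positive offer: something beats nothing."""
        return offer > 0

    def inequity_averse_accepts(offer, pie, threshold=0.3):
        """Rejects offers below `threshold` of the pie, i.e. pays to punish unfairness."""
        return offer >= threshold * pie

    def play(pie, offer, accepts):
        """Return (proposer, responder) payoffs; a rejection leaves both with nothing."""
        return (pie - offer, offer) if accepts(offer, pie) else (0.0, 0.0)

    pie = 10.0
    for offer in (0.5, 2.0, 4.5):  # 5%, 20% and 45% of the pie
        print(offer,
              play(pie, offer, homo_oeconomicus_accepts),
              play(pie, offer, inequity_averse_accepts))

Running the sketch shows the point of the cited experiments: a 20% offer is accepted by the payoff maximiser but leaves both players with nothing against the inequity-averse responder, so a first player who anticipates such responders does best by offering a substantial share.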
Self-transcendent motives often are altruistic and mostly are important supporters of acting morally. However, many of them – though not all; think e.g. of a person whose life project is to care for the needy – are bound to definite individual persons, small groups or limited projects and hence are not universalistic, so that they might not define what is moral; for this reason they have not found much interest among present-day ethicists. Among prosocial motives, empathy-driven benevolence – unlike respect for persons and things – has been the object of much psychological research. In particular, the question whether this kind of benevolence is really altruistic or only egoism in disguise – e.g. I help you because I want to terminate my distress at seeing you suffer – has been studied thoroughly; in a series of ingenious experiments Daniel Batson has excluded at least the most common selfish explanations for the majority of subjects (e.g. Batson & Oleson 1991; overview: Stich et al. 2010). The motivational mechanism in these cases is that empathic cognition generates sympathetic emotion, which in turn induces a (motivating) intrinsic (i.e. non-instrumental) emotion-bound desire for the other’s improved well-being (on the various forms and the development of empathy: Hoffman 2000; for the general mechanism of emotions inducing new intrinsic desires see Lumer 2012). Of course, this does not exclude that, additionally, one hedonistically – and hence in the end selfishly – tries to optimise one’s sympathetic feelings (i.e. minimise pity and maximise shared joy) by helping others. After all, it feels better not to live among miserable people.

Analogous double mechanisms of (i) emotions inducing new, emotion-dependent intrinsic desires besides (ii) hedonistically striving to optimise one’s emotions seem to exist for many moral motives in the narrow sense, like conscientious motives, revenge motives or indignation motives. Guilt or bad conscience, for example, first and foremost is an emotion – or better: it can be identical to two different emotions, first, a deconcretised fear of punishment or of losing affection and, second, the more mature version, a decline of self-esteem after a negative moral self-evaluation. (A good or quiet conscience analogously consists of, first, peace of mind and comfort in not having to fear any punishment and, second, positive self-esteem: positive moral self-evaluation, moral satisfaction with and pride in oneself.) Anticipatory (i.e. before acting) negative moral self-esteem, on the one hand, can induce an intrinsic desire to be morally good, and, on the other, it can remind us of the fact that executing the considered action would lead to a still worse
self-esteem, which is hedonically bad; of course, one can have the latter thought even without an already reduced self-esteem. Posterior negative moral self-esteem, on the one hand, induces intrinsic desires for redemption or self-punishment, and, on the other, it can provoke various hedonistic desires and intentions, e.g. to avoid the respective type of action in the future for hedonistic reasons or to improve one’s self-esteem (and get rid of present guilt feelings) by doing particularly good actions; and again (apart from getting rid of present guilt) one can also form these desires and intentions independently of a present low self-esteem (Lumer 2002: 180-181; cf. also Prinz & Nichols 2010: 137-139). So moral motives in the narrow sense and prosocial motives work via respective emotions, which induce new intrinsic motives or which are the aim of hedonic desires.

Respect for persons and valuable objects has not been the object of much attention in psychological research, whereas empathic emotions or vicarious affects have been extensively studied (see e.g. Batson & Oleson 1991; Coke et al. 1978; Hoffman 2000); some results have already been reported above. What has been somewhat neglected, though, is the fact that, apart from negative, unpleasant sympathy – pity, compassion or commiseration –, there is also positive, pleasant sympathy with another sentient being’s positive well-being, although positive sympathy is weaker than negative. An important feature of prosocial emotions is that in the main they do not depend on moral judgements (there is, however, a modulatory effect of moral judgements on them; moral condemnation of a suffering person can e.g. reduce or block pity). Therefore, they may be apt for justifying moral judgements – e.g. in such a way that the degree of a certain form of universalistic sympathy, or of the underlying well-being, defines moral value. – Moral emotions in the narrow sense can be divided into four groups: 1. self-blame emotions, e.g. guilt, low moral self-esteem, shame; 2. self-praise emotions, such as moral pride, moral satisfaction with oneself and positive moral self-esteem; 3. other-blame emotions, like indignation, outrage, loathing, disgust or contempt; 4. other-praise emotions, such as moral admiration or appreciation (Prinz (2007: 68-86) and Prinz & Nichols (2010: 122) make a similar distinction but leave out the praise emotions). Most of these emotions have been investigated in social psychology; here is not the place to report the respective
details. A general question regards the origin of singular episodes of moral emotion. An answer to this question has already been given above in the definition of ‘moral emotion in the narrow sense’, namely that they originate from a moral judgement (which may be unconscious), with the consequence that the theory of moral emotions in the narrow sense refers back to a theory of moral judgements. (An alternative hypothesis to this cognitivist view assumes that moral emotions are caused directly, e.g. by perception or imagination, without intermediate cognitive judgements; however, given the sophistication and cultural diversity of moral emotional reactions, such direct causation is hardly plausible. Some moral psychologists think that the CAD theory answers this objection. The CAD theory holds that there are three main areas of moral concern: 1. community, which regards violations of communal codes including hierarchy, 2. autonomy, having to do with violations of individual rights, and 3. divinity, regarding violations of purity/sanctity; and these three areas are aligned with three corresponding emotions: contempt, moral anger and disgust (Rozin et al. 1999; Shweder et al. 1997). However, the objection just mentioned applies to this explanation too: communal codes, ideas of individual rights and ideas of divinity are so sophisticated and interculturally different that their non-cognitive functioning is highly implausible. In addition, the suggestion that the emotional background and the vicinity to amoral emotions (Rozin et al. 2009) indicate a natural origin of the triggering conditions of these emotions fails for the same reason, namely the cultural diversity of the respective norms.)

If moral emotions in the narrow sense rest on moral judgements, where do moral judgements come from? One tradition in philosophy, represented e.g. by G. E. Moore, William Ross or, in recent times, by Robert Audi, Michael Huemer and in a way also by John Rawls, sees (basic) moral judgements as philosophically unexplainable intuitions; and some intuitionists, tending towards moral objectivism, consider intuitions to be something like perceptions of objective moral truths. This position, however, is (at least) psychologically unsatisfactory: even if intuitions were philosophically or cognitively impenetrable, they should be explained psychologically, at least in order to recognise whether and how they really represent objective moral facts – as is done in perceptual psychology and physiology with respect to the empirical reality of our perceptions. Therefore, Sinnott-Armstrong and his coauthors
(2010) try to explain what, from the subject’s perspective, is an unexplainable popping-up of an intuition: they explain it as the result of an unconscious application of a moral heuristic (in Gigerenzer’s sense). One group of heuristics consists of moral rules; another very important heuristic is the affect heuristic: ‘If thinking about an act makes you feel bad, then it is morally wrong.’ (ibid. 260). Though this explanation goes a step beyond mere intuitionism, it is still unsatisfactory because it leaves open where those moral rules or the moral emotion come from.

A developmental-psychological tradition of explaining moral judgements has developed following Piaget (Piaget 1965; Kohlberg 1981; 1984; Turiel 1983; 1998; Nunner-Winkler 2011; Kagan 2008). A general characteristic of these theories is that they explain the ontogenetic development of our moral judgements as a progression through several “logical” stages, driven by evolving general higher modes of cognition which are applied to moral questions, e.g. the passage from concrete to more abstract and general thinking, the development of the competence to understand other persons’ mental states or of the competence to understand the reasons behind social rules. In times of naturalising the moral mind these approaches have been criticised, first, for ignoring the intuitive, automatic formation of moral judgements in favour of assuming conscious reflection and, second, for ignoring the primary role of affects in producing moral judgements, betting instead on the cognitive application of moral principles (e.g. Haidt 2001; Hauser 2008: 21-25; 38-39; 137). These criticisms, though, are somewhat superficial and often attack a straw man. Of course, moral judgements often pop up as intuitions and are accompanied by affects; but intuitions have their origins, which may be rather cognitive instead of affective, and we have seen that moral emotions refer back to moral judgements. This reply to the critique does not mean that the theories in the developmental-psychological tradition are correct. Actually, they have several defects, like (often) giving insufficient weight to prosocial motives or disregarding other sources of morality like seeking advantage in cooperation, proposing unclear stage differentiations and providing very gappy explanations. However, since adult morality is also due to cognitive development – it is no accident that the moral standards of people with lower as compared to higher socioeconomic status and education are, on average, much more conventional and rigid (Haidt et al. 1993: 619; 624) – and since
cognitivist developmental-psychological theories are much more sophisticated in integrating various sources of morality (cognitive development, prosocial and moral motives, contractarian “logic” …) and in explaining the single steps of moral development than fashionable present-day physiological and emotivist theories of moral judgement, the potential of those theories, and the explanatory force of rational development as one source of moral development, is currently grossly underrated.

Contemporary emotivist theories of moral judgement (e.g. Haidt 2001; 2012; Haidt & Bjorklund 2008; Haidt et al. 1993; Nichols 2004; Prinz 2007; in part: Greene 2005; 2007; 2008; Greene et al. 2001; 2004) assume that moral judgements always or mostly have an emotional genesis: they are arrived at as a consequence of moral emotions (in Haidt these are emotionally felt moral intuitions, including moral emotions). Of course, moral emotions have a specific range of eliciting conditions; but, as these theories hold, the fulfilment of these conditions leads directly to the moral emotion, which then gives rise to the moral judgement. (In cognitivist theories of moral emotions, instead, the cognitive – but not necessarily conscious – moral judgement is the eliciting condition.) Since subjects have only little access to the emotion-generating process, any eventual justification of a moral judgement occurs later; mostly it is a rationalisation and often a mere confabulation. Haidt also allows a very limited influence of reflection on moral intuitions, and Nichols and Prinz allow for a second mechanism of reasoning-generated moral judgements; but with respect to the emotional generation of moral judgements their models tend to be rather nativist. Haidt e.g. assumes six modules for the main themes of morality (2012) and endorses the CAD theory (Rozin et al. 1999; 2009), and Nichols and Prinz tend in this direction too (Prinz & Nichols 2010: 140-141).

Hauser (2008), by contrast, has developed a non-emotivist theory of moral judgement, which could, however, be used by emotivists as well: the universal moral grammar theory. (Another universal moral grammar theory has been provided by Mikhail (2007; 2011).) According to this theory, we possess innate and not verbally known moral principles (prohibitions of killing, injuring, cheating, stealing, breaking promises, adultery and the like (Hauser 2008: 54)), i.e. the universal moral grammar, which during socialisation is automatically adapted to the domestic culture and morality, in particular by permitting exceptions to originally unrestricted prohibitions. A difference with respect to the emotivist models is that, according to Hauser’s theory, after analysing the situation, in particular the agent’s intentions, this system immediately provides the moral judgement, and only later is an emotion added. Some problems of this theory are these: The principles are vague, not universal and only deontological in nature (moral valuation is missing). The theory does not explain how subjects can develop individual morals. It leaves out the rational designing of morality and the role of prosocial motives. Finally, Hauser does not really try to prove the theory.
The emotivist theories have been criticised, in particular by ethicists (e.g. Fine 2006; Levy 2009: 6-7; Corradini and Reichlin, this volume (chs. 4 and 5)): The models mostly do not distinguish between prosocial and moral emotions and may fit prosocial emotions better than moral ones. The nativist tendency does not capture the cultural or even individual formation of moral norms, nor the cultural, individual and ontogenetic diversity and specificity of their contents (different people are e.g. indignant about quite different things). The criteria for our moral judgements, and many singular cases, are too complex to be processed by automatic mechanisms, so that the models cannot explain many moral judgements (Haidt’s model e.g. does not explain the minority views in the Bystander and Footbridge scenarios; Greene’s model explains only the views in these two extreme scenarios but not the answers in the intermediate Trolley scenarios). The theories mention only the topics but say next to nothing about the exact contents of the (emotional) morality. Insofar as the models admit some influence of moral reflection on moral judgements, these influences are not well integrated into the main model. Fast intuitions can be the result of prior learning or of unconscious inferential processes, and they can have the status of hypotheses, which are then subjected to critical scrutiny as to whether they can be justified – as, say, a mathematician deals with an intuition about the solution of a mathematical problem; hence the existence of such intuitions does not say anything about the truth of generalised intuitionism. Sticking to judgements which one cannot justify may be the consequence of having acquired the respective criteria from authorities, so that one may surmise the justifiability of these criteria without knowing a real justification (higher education tends to lead to querying authority-based principles and hence to a reduced acceptance of them). One may even have forgotten a justification and remember only that there was one. In Haidt’s tricky cases (eating one’s dead pet dog, incest between siblings who use contraceptives, masturbating with a dead
chicken and later eating it, etc. (Haidt et al. 1993; Haidt 2001: 814-817)) there may be sensible rules in the background whose application just in this particular situation does not make sense, so that many subjects, who have only a vague idea of the reasons behind these rules, become unsure. As a consequence, the reactions in such tricky cases do not tell us that much about the normal cases of moral judgement. Altogether, the models are not based on empirical studies of conscious reflection prior to moral judgement, and they are too simplistic to explain the many sources of and influences on morality.

Summing up the discussion just sketched, one can note that all these psychological models of moral judgement are one-sided in one way or another, always neglecting some sources and mechanisms. Hence we need a more integrative model, which may work like this: A child’s original adoption of moral standards could be heteronomous via, first, seeking social gratification (avoiding punishment for immoral behaviour and pursuing reward for good behaviour) and, second, belief in authority, i.e. the belief that the standards introduced by the socialising agents will be good and important also for the child himself. A strong force for changing the standards once adopted is then cognitive progress. This entails, first, gradually understanding more complex, more general and more abstract moral standards as well as their justifications (which may first be taught to the child and adolescent but may also be acquired autonomously, by one’s own reflection), second, seeking coherence, i.e. trying to arrive at fewer but more comprehensive standards which capture the earlier, more concrete standards, as well as trying to eliminate contradictions emerging in this process, and, third, as a consequence of a more critical attitude towards authority, asking for primary justifications of the moral standards adopted so far and eventually discarding them if no satisfying justification is obtained. Cognitive progress itself is neutral with respect to the content of morality. Such content may now be introduced during ontogenesis via autonomous and universal sources of morality which do not themselves depend on already adopted moral criteria: namely prosocial motives (sympathy and respect) and rational cooperation. Finally, the moral criteria adopted in this way may lead to instances of moral judgement mainly via the usual cognitive processes of judgement formation, which also permit unconscious processing followed by intuitions, emotional emphasising of important features,
influencing “correct” cognitive processing by primes, etc. (Lumer 2002: 182-186).

All in all, we have seen that moral psychology – in principle, because of the type of knowledge it is trying to obtain, which, among other things, speaks of the tokens that make up our deliberation – adds much more of the empirical information we need in normative ethics and metaethics than moral physiology does. Moral psychologists have already provided many interesting results (for some more specific praise: Stich et al. 2010: 202), which deserve more attention in ethics and in moral physiology. Along these lines, moral physiology has a mostly ancillary function – Cushman et al. (2010: 47) claim e.g. that it was mainly neuroscientific findings which revealed the participation of emotions in moral reasoning, admitting, though, that there is corresponding psychological evidence as well. So far physiological research has fulfilled this ancillary function only to a rather limited degree, and this is also due to a pervasive disregard of moral-psychological findings by moral physiologists.
5. Overview of the Present Volume

The chapters of this volume contribute to various parts of the fields of research just outlined. The aim of Michael Pauen’s “Naturalizing Free Will – Empirical and Conceptual Issues” is to show that naturalistic empirical research on consciousness and agency does not undermine our self-understanding as self-conscious and responsible agents but leads to an improved understanding of these qualities. Free will is his example for proving this claim. He provides a compatibilist conception of free will as self-determination by one’s own preferences, which the agent could in turn effectively reject by a decision; and he criticises incompatibilist conceptions as having fewer advantages and as leading to requirements like ultimate control, which are self-contradictory and hence unrealisable even in an indeterministic world. The second part of Pauen’s chapter defends the reality of free will against Libet’s and Wegner’s physiological and psychological objections. One main argument against Libet’s interpretation of his results is e.g. that the decisive intention in Libet’s experiments is already formed when the subjects accept the experimenter’s specific instructions, so that a study of the readiness potentials and urges to
act prior to the single actions cannot reveal anything about the freedom of the relevant intention. Pauen’s critique of Wegner’s theory of the “illusion of conscious will” includes reports of some experiments which show that intentions do cause the respective actions.

Christoph Lumer’s “Libet’s Experiments and the Possibility of Free Conscious Decision” and the related “The Effectiveness of Intentions – A Critique of Wegner” are detailed critiques of these two main physiological and psychological attacks on the possibility of free will. In the Libet chapter, after showing that the truth of Libet’s interpretation of his experiments would indeed imply that even compatibilist freedom of the will, as well as actions (in the action-philosophical sense), would not exist, Lumer compiles a wealth of criticisms of this interpretation, which e.g. question the temporal order of the readiness potential and the urge to act, deny that Libet ever studied intentions (because an urge to act is not an intention), and stress the decisiveness of the prior intention. The final section broadens the defence of the existence of free will against a much more general idea of Libet’s mind-time theory, according to which conscious experiences are always only the end of an amplifying process; and, positively, it sketches a theory of the functional role of consciousness in intention formation. Lumer’s “The Effectiveness of Intentions – A Critique of Wegner” defends the intentional-causalist conception of action (in an action, an intention causes the respective behaviour in the right way) against Wegner’s illusion-of-conscious-will thesis. While Lumer accepts Wegner’s main idea, i.e. that our posterior knowledge about our intentions and their causing our actions rests on the constructive processing of the available empirical evidence, and takes it to be compatible with intentional causalism, he criticises those less central parts of Wegner’s model which are indeed incompatible with intentional causalism and shows them to be unfounded.

Massimo Reichlin’s chapter “Neuroethics and the Rationalism/Sentimentalism Divide” criticises Haidt’s and Greene’s emotivist theories of moral judgement and sketches an alternative model of moral judgement, which combines sentimentalism and cognitivism. Some of Reichlin’s criticisms of the emotivist theories are that these models disregard personal identity and the ontogenesis of moral judgements as well as their practical function, and that they overlook the reflective part in the formation of moral judgements: moral
judgements are the result of a reflective dealing with spontaneous moral emotions. These criticisms are then turned into positive hypotheses which delineate a model of moral judgement: Emotions, in particular sympathy, are necessary conditions for authentic moral judgements but are potentially in conflict with personal interests, so that reflection has to choose between the various options; the resulting decision then is motivated by emotion but justified by reflection. Metaethically categorised, because of the universality of the respective emotions, this leads to a sentimentalistically enriched cognitivism without moral realism.

Antonella Corradini’s “Experimental Ethics – A Critical Analysis”, first, defends the justificatory capacity of traditional moral philosophy in general, and of an intuitionist approach in ethics in particular, against attacks by some experimental philosophers who, with the help of moral psychology, try to show that these intuitions are not reliable. Corradini responds in three ways: with a counterattack on the conclusiveness of the experimental results (e.g. it is not necessarily the moral part of the cognitive process that is responsible for the variations of moral intuitions); by referring to more sophisticated intuitionist approaches (those of John Rawls and Richard Hare), which can deal with divergent intuitions; and with a fairly general methodological objection, namely that, according to Hume’s Law, empirical findings cannot undermine the ethical quality of moral judgements. Second, Corradini works out methodological difficulties of the neurophysiological explanations of moral judgements, e.g. whether they really explain moral beliefs and not perhaps amoral repugnance. In particular she criticises Greene’s empirically based attack on deontology: From a deontological perspective, emotions are only contingent concomitants; therefore, their presence does not say anything about the validity of deontology.

Maureen Sie, in her chapter “Moral Soulfulness and Moral Hypocrisy – Is Scientific Study of Moral Agency Relevant to Ethical Reflection?”, argues for the general claim that ethics depends on empirical investigation of moral agency in order to be able to suggest realistic moral aims. Mainly, however, she rejects two strong, scientifically nurtured attacks on the moral nature of our apparently moral actions and argues for a revision of the traditional picture of moral agency. First, many findings of moral psychology seem to show that the moral reasons we provide for our actions are
only confabulations, so that these actions are not really moral. Sie replies that even if the reasons given later were not conscious during the decision, they usually played a role in it; they work unconsciously, as perception does in routine movements. Second, experiments by Batson and colleagues seem to show that the motive behind moral action is not the desire to be moral but the desire to appear so. Sie replies with the critique that the examined actions were not really morally obligatory and with a reinterpretation: We learn the contents of morals by taking part in a moral practice; the desire to appear moral is part of our disposition to adopt the morality of our environment.

Arnaldo Benini’s chapter “The Rationale Behind Surgery – Truth, Facts, Values” is a contribution to neuroethics, i.e. the applied ethics of interventions in the neurological realm, from a neurosurgeon’s point of view. The question Benini wants to answer is which information, values and criteria should enter into or determine decisions about medical treatment. As an example he discusses several forms of brain tumour, their consequences and risks as well as the possibilities and chances of surgical removal. Several consequences have to be weighed in the respective decisions: life expectancy, risks, pain, functional impairment etc. accompanying non-intervention on the one hand, and the range and probability of life prolongation, risks, impairment of mental functions, pain, fear and nuisance etc. caused by the various forms of medical treatment on the other. Benini makes the case for full autonomy of patients with respect to the valuation and weighing of these consequences and hence for letting them decide on the basis of the best information provided by qualified physicians.
REFERENCES Anscombe, G[ertrude] E[lizabeth] M[argaret] (1957): Intention. Oxford: Blackwell. Bargh, John A.; Kimberly Barndollar (1996): Automaticity in Action. The Unconscious as Repository of Chronic Goals and Motives. In: Peter M. Gollwitzer; John A. Bargh (eds.): The Psychology of Action. Linking Cognition and Motivation to Behavior. New York; London: Guilford Press. 457-481. Bargh, John A.; Tanya L. Chartrand (1999): The Unbearable Automaticity of Being. In: American Psychologist 54. 462-479.
Bargh, John A.; M. Chen; L. Burrows (1996): Automaticity of social behavior. Direct effects of trait construct and stereotype activation on action. In: Journal of Personality and Social Psychology 71. 230-244.
Batson, C. Daniel; K. C. Oleson (1991): Current status of the empathy-altruism hypothesis. In: Prosocial Behavior 12. 62-85.
Berthoz, Alain (2006): Emotion and Reason. The Cognitive Science of Decision Making. Translated by Giselle Weiss. Oxford [etc.]: Oxford U.P.
Buchak, Lara (2013): Risk and Rationality. Oxford: Oxford U.P.
Camerer, Colin [F.] (1995): Individual Decision Making. In: John H. Kagel; Alvin E. Roth (eds.): The Handbook of Experimental Economics. Princeton, NJ: Princeton U.P. 587-703.
Cholbi, M. (2006): Belief Attribution and the Falsification of Motive Internalism. In: Philosophical Psychology 19. 607-616.
Churchland, Patricia Smith (2002): Brain-Wise. Studies in Neurophilosophy. Cambridge, MA; London: MIT Press.
Ciaramelli, Elisa; Michela Muccioli; Elisabetta Làdavas; Giuseppe di Pellegrino (2007): Selective deficit in personal moral judgment following damage to ventromedial prefrontal cortex. In: SCAN 2. 84-92.
Clark, Andy; Julian Kiverstein; Tillmann Vierkant (eds.) (2013): Decomposing the Will. Oxford: Oxford U.P.
Coke, Jay S.; C. Daniel Batson; Katharine McDavis (1978): Empathic Mediation of Helping. A Two-Stage Model. In: Journal of Personality and Social Psychology 36. 752-766.
Crozier, Ray [W.]; Rob Ranyard (1997): Cognitive process models and explanations of decision making. In: Rob Ranyard; W. Ray Crozier; Ola Svenson (eds.): Decision Making. Cognitive Models and Explanations. Oxford: Routledge. 3-20.
Cushman, Fiery; Liane Young; Joshua D. Greene (2010): Multi-System Moral Psychology. In: John M. Doris; The Moral Psychology Research Group: The Moral Psychology Handbook. Oxford: Oxford U.P. 47-71.
Damasio, Antonio R. (1994): Descartes’ Error. Emotion, Reason, and the Human Brain. New York: G. P. Putnam’s Sons.
Deigh, John (1996): Empathy and Universalizability. In: Larry May; Marilyn Friedman; Andy Clark (eds.): Mind and Morals. Essays on Cognitive Science and Ethics. Cambridge, MA; London: MIT Pr.; Bradford Book. 199-219. – Reprinted in: John Deigh: The Sources of Moral Agency. Essays in Moral Psychology and Freudian Theory. Cambridge: Cambridge U.P. 1996. 160-180.
Fehr, Ernst; Simon Gächter (2001): Fairness and Retaliation. In: L. Gérard-Varet; Serge-Christophe Kolm; Jean Mercier Ythier (eds.): The Economics of Reciprocity, Giving and Altruism. Basingstoke: Macmillan. 153-173.
Fine, Cordelia (2006): Is the emotional dog wagging its rational tail, or chasing it? Reason in moral judgment. In: Philosophical Explorations 9. 83-98.
Frank, Robert H. (1988): Passions within Reason. The Strategic Role of the Emotions. New York; London: Norton.
Gigerenzer, Gerd (2010): Rationality for Mortals. How People Cope with Uncertainty. 2nd ed. New York; Oxford: Oxford U.P.
Gintis, Herbert; Samuel Bowles; Robert [T.] Boyd; Ernst Fehr (eds.) (2005): Moral Sentiments and Material Interests. The Foundations of Cooperation in Economic Life. Cambridge, MA; London: MIT Press.
Greene, Joshua D. (2005): Emotion and cognition in moral judgement. Evidence from neuroimaging. In: Jean-Pierre Changeux; Antonio R. Damasio; Wolf Singer; Y. Christen (eds.): Neurobiology of Human Values. Berlin; Heidelberg: Springer. 57-66.
Greene, Joshua D. (2007): Why are VMPFC patients more utilitarian? A dual-process theory of moral judgment explains. In: Trends in Cognitive Sciences (TICS) 11, No. 8. 322-323.
Greene, Joshua D. (2008): The Secret Joke of Kant’s Soul. In: Walter Sinnott-Armstrong (ed.): Moral Psychology. Vol. 3: The Neuroscience of Morality. Emotion, Brain Disorders, and Development. Cambridge, MA; London: MIT Press. 35-79.
Greene, Joshua D.; Leigh E. Nystrom; Andrew D. Engell; John M. Darley; Jonathan D. Cohen (2004): The Neural Bases of Cognitive Conflict and Control in Moral Judgment. In: Neuron 44. 389-400.
Greene, Joshua D.; R. Brian Sommerville; Leigh E. Nystrom; John M. Darley; Jonathan D. Cohen (2001): An fMRI Investigation of Emotional Engagement in Moral Judgment. In: Science 293, 14 September 2001. 2105-2108.
Haidt, Jonathan (2001): The Emotional Dog and Its Rational Tail. A Social Intuitionist Approach to Moral Judgment. In: Psychological Review 108. 814-834.
Haidt, Jonathan (2012): The Righteous Mind. Why Good People are Divided by Politics and Religion. London: Allen Lane (Penguin).
Haidt, Jonathan; F. Bjorklund (2008): Social intuitionists answer six questions about moral psychology. In: Walter Sinnott-Armstrong (ed.): Moral Psychology. Vol. 2: The Cognitive Science of Morality. Intuition and Diversity. Cambridge, MA; London: MIT Press. 181-217.
Haidt, Jonathan; Silvia Helena Koller; Maria G. Dias (1993): Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog? In: Journal of Personality and Social Psychology 65. 613-628.
Hardman, David (2009): Judgment and Decision Making. Psychological Perspectives. Malden, MA: Wiley-Blackwell.
Hassin, Ran R.; James S. Uleman; John A. Bargh (eds.) (2005): The New Unconscious. New York [etc.]: Oxford U.P.
Hauser, Marc D. (2008): Moral Minds. How Nature Designed Our Universal Sense of Right and Wrong. 3rd ed. London: Abacus.
Heckhausen, Heinz (1989): Motivation und Handeln. 2nd, completely revised ed. Berlin [etc.]: Springer.
Heekeren, Hauke R.; Isabell Wartenburger; Helge Schmidt; Hans-Peter Schwintowski; Arno Villringer (2003): An fMRI study of simple ethical decision-making. In: Cognitive Neuroscience and Neuropsychology 14, No. 9, 1 July 2003. 1215-1219.
Hein, Grit; Claus Lamm; Christian Brodbeck; Tania Singer (2011): Skin conductance response to the pain of others predicts later costly helping. In: PLoS One 6,8. e22759.
Henrich, Joseph; Robert Boyd; Samuel Bowles; Colin Camerer; Ernst Fehr; Herbert Gintis (eds.) (2004): Foundations of Human Sociality. Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies. New York [etc.]: Oxford U.P.
Hoffman, Martin L. (2000): Empathy and Moral Development. Implications for Caring and Justice. Cambridge: Cambridge U.P.
Jeannerod, Marc (1997): The Cognitive Neuroscience of Action. Oxford: Blackwell.
Kagan, Jerome (2008): Morality and Its Development. In: Walter Sinnott-Armstrong (ed.): Moral Psychology. Volume 3: The Neuroscience of Morality. Emotion, Brain Disorders, and Development. Cambridge, MA; London: MIT Press. 297-312.
Kahneman, Daniel (2011): Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kant, Immanuel (1977/1903): Grundlegung zur Metaphysik der Sitten. (¹1785; ²1786.) (Groundwork for the Metaphysics of Morals.) In: Idem: Werkausgabe. Ed. by Wilhelm Weischedel. Vol. VII. Frankfurt am Main: Suhrkamp ²1977. 7-102. – Or in: Idem: Kants Werke. Akademie-Textausgabe. Vol. 4. Berlin: de Gruyter 1903. 385-464.
Kiehl, Kent A. (2008): Without Morals. The Cognitive Neuroscience of Criminal Psychopaths. In: Walter Sinnott-Armstrong (ed.): Moral Psychology. Volume 3: The Neuroscience of Morality. Emotion, Brain Disorders, and Development. Cambridge, MA; London: MIT Press. 119-149.
Kiesler, S.; L. Sproull; K. Waters (1996): A Prisoner’s Dilemma Experiment on Cooperation with People and Human-Like Computers. In: Journal of Personality and Social Psychology 70. 47-65.
Koch, Christof; Francis Crick (2001): The zombie within. In: Nature 411,6840. 893.
Koehler, Derek J.; Nigel Harvey (eds.) (2004): Blackwell Handbook of Judgment and Decision Making. Oxford: Blackwell.
Koenigs, Michael; Liane Young; Ralph Adolphs; Daniel Tranel; Fiery Cushman; Marc Hauser; Antonio Damasio (2007): Damage to the prefrontal cortex increases utilitarian moral judgements. In: Nature, Letters 446. 908-911.
Kohlberg, Lawrence (1981): Essays on Moral Development. Vol. I: The Philosophy of Moral Development. San Francisco, CA: Harper & Row.
Kohlberg, Lawrence (1984): Essays on Moral Development. Vol. II: The Psychology of Moral Development. San Francisco, CA: Harper & Row.
Levy, Neil (2006): Cognitive Scientific Challenges to Morality. In: Philosophical Psychology 19. 567-587.
Levy, Neil (2009): Empirically Informed Moral Theory. A Sketch of the Landscape. In: Ethical Theory and Moral Practice 12. 3-8.
Libet, Benjamin (1985): Unconscious cerebral initiative and the role of conscious will in voluntary action. In: Behavioral and Brain Sciences 8. 529-566.
Libet, Benjamin (2004): Mind Time. The Temporal Factor in Consciousness. Cambridge, MA; London: Harvard U.P.
Lumer, Christoph (2002): Motive zu moralischem Handeln. In: Analyse & Kritik 24. 163-188.
Lumer, Christoph (2002/2003): Kantischer Externalismus und Motive zu moralischem Handeln. In: Conceptus 35. 263-286.
Lumer, Christoph (2007a): The Action-Theoretic Basis of Practical Philosophy. In: Christoph Lumer; Sandro Nannini (eds.): Intentionality, Deliberation and Autonomy. The Action-Theoretic Basis of Practical Philosophy. Aldershot: Ashgate. 1-13.
Lumer, Christoph (2007b): An Empirical Theory of Practical Reasons and its Use for Practical Philosophy. In: Christoph Lumer; Sandro Nannini (eds.): Intentionality, Deliberation and Autonomy. The Action-Theoretic Basis of Practical Philosophy. Aldershot: Ashgate. 157-186.
Lumer, Christoph (2012): Emotional Decisions. The Induction-of-Intrinsic-Desires Theory. In: Alessandro Innocenti; Angela Sirigu (eds.): Neuroscience and the Economics of Decision Making. Abingdon, UK; New York: Routledge. 109-124.
Lumer, Christoph (forthcoming): Reasons and Conscious Control in Automatic Actions.
Lynch, John G. jr.; Jerry L. Cohen (1978): The Use of Subjective Expected Utility Theory as an Aid to Understanding Variables That Influence Helping Behavior. In: Journal of Personality and Social Psychology 36. 1138-1151.
Manktelow, Ken (2012): Thinking and Reasoning. An Introduction to the Psychology of Reason, Judgment and Decision Making. Hove, East Sussex; New York: Psychology Press.
Mansbridge, Jane J. (ed.) (1990): Beyond Self-Interest. Chicago; London: University of Chicago Press.
Melden, Abraham I. (1961): Free Action. London; New York: Routledge and Kegan Paul; Humanities Press.
Mele, Alfred R. (2009): Effective Intentions. The Power of Conscious Will. Oxford: Oxford U.P.
Mikhail, John (2007): Universal Moral Grammar. Theory, Evidence and the Future. In: Trends in Cognitive Sciences 11. 143-152.
Mikhail, John (2011): Elements of Moral Cognition. Rawls’ Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment. Cambridge: Cambridge U.P.
Moll, Jorge; Frank Krueger; Roland Zahn; Matteo Pardini; Ricardo de Oliveira-Souza; Jordan Grafman (2006): Human fronto-mesolimbic networks guide decisions about charitable donation. In: Proceedings of the National Academy of Sciences of the United States of America (PNAS) 103, no. 42, Oct 9 (2006). 15623-15628.
Moll, Jorge; Ricardo de Oliveira-Souza (2007): Moral judgments, emotions and the utilitarian brain. In: Trends in Cognitive Sciences 11. 319-321.
Moll, Jorge; Ricardo de Oliveira-Souza; Ivanei E. Bramati; Jordan Grafman (2002a): Functional Networks in Emotional Moral and Nonmoral Social Judgments. In: NeuroImage 16. 696-703.
Moll, Jorge; Ricardo de Oliveira-Souza; Paul J. Eslinger; Ivanei E. Bramati; Janaína Mourao-Miranda; Pedro Angelo Andreiuolo; Luiz Pessoa (2002b): The Neural Correlates of Moral Sensitivity. A Functional Magnetic Resonance Imaging Investigation of Basic and Moral Emotions. In: The Journal of Neuroscience 22, 7, 1st April 2002. 2730-2736.
Moll, Jorge; Ricardo de Oliveira-Souza; Fernanda Tovar Moll; Fatima Azevedo Ignacio; Ivanei E. Bramati; Egas M. Caparelli-Daquer; Paul J. Eslinger (2005): The Moral Affiliations of Disgust. A Functional MRI Study. In: Cognitive and Behavioral Neurology 18,1, March. 68-78.
Neal, David T.; Wendy Wood; Mengju Wu; David Kurlander (2011): The pull of the past. When do habits persist despite conflict with motives? In: Personality and Social Psychology Bulletin 37. 1428-1437.
Nichols, Shaun (2002): How Psychopaths Threaten Moral Rationalism. Or, Is it Irrational to Be Amoral? In: The Monist 85. 285-304.
Nichols, Shaun (2004): Sentimental Rules. On the Natural Foundations of Moral Judgment. Oxford [etc.]: Oxford U.P.
Nunner-Winkler, Gertrud (1999): Moralische Motivation und moralische Identität. Zur Kluft zwischen Urteil und Handeln. In: Detlef Garz; Fritz Oser; Wolfgang Althof (eds.): Moralisches Urteil und Handeln. Unter Mitarbeit von Friedhelm Ackermann. Frankfurt, Main: Suhrkamp. 314-339.
Nunner-Winkler, Gertrud (2011): The development of moral understanding and moral motivation. In: Franz E. Weinert; Wolfgang Schneider (eds.): Individual development from 3 to 12. Findings from the Munich longitudinal study. 2nd ed. New York: Cambridge U.P. 253-290.
Parks, C. D.; A. D. Vu (1994): Social Dilemma Behavior of Individuals from Highly Individualist and Collectivist Cultures. In: Journal of Conflict Resolution 38. 708-718.
Passingham, Richard E.; Hakwan C. Lau (2006): Free Choice and the Human Brain. In: Susan Pockett; William P. Banks; Shaun Gallagher (eds.): Does Consciousness Cause Behavior? Cambridge, MA: MIT Press. 53-72.
Payne, John W.; James R. Bettman; Eric J. Johnson (1993): The adaptive decision maker. Cambridge: Cambridge U.P.
Piaget, Jean (1965): The moral judgement of the child. (Le jugement moral chez l’enfant. 1932.) Transl. by M. Gabain. New York: Free Press.
Pollard, Bill (2010): Habitual Actions. In: Timothy O’Connor; Constantine Sandis (eds.): A Companion to the Philosophy of Action. Chichester: Wiley-Blackwell. 74-81.
Polonioli, Andrea (2009): Recent Trends in Neuroethics. A Selected Bibliography. In: Etica & Politica / Ethics & Politics 9,2. 68-87.
Prinz, Jesse J. (2007): The Emotional Construction of Morals. Oxford: Oxford U.P.
Prinz, Jesse J.; Shaun Nichols (2010): Moral Emotions. In: John M. Doris; The Moral Psychology Research Group: The Moral Psychology Handbook. Oxford: Oxford U.P. 111-146.
Rizzolatti, Giacomo; Corrado Sinigaglia (2008): Mirrors in the Brain. How Our Minds Share Actions and Emotions. (So quel che fai. 2006.) New York: Oxford U.P.
Roskies, Adina (2003): Are ethical judgments intrinsically motivational? Lessons from acquired ‘sociopathy’. In: Philosophical Psychology 16. 51-66.
Rozin, Paul; Jonathan Haidt; Katrina Fincher (2009): From oral to moral. Is moral disgust an elaboration of a food rejection system? In: Science 323. 1179-1180.
Rozin, Paul; Laura Lowery; Sumio Imada; Jonathan Haidt (1999): The CAD Triad Hypothesis. A Mapping Between Three Moral Emotions (Contempt, Anger, Disgust) and Three Moral Codes (Community, Autonomy, Divinity). In: Journal of Personality and Social Psychology 76. 574-586.
Sanfey, Alan G.; James K. Rilling; Jessica A. Aronson; Leigh E. Nystrom; Jonathan D. Cohen (2003): The Neural Basis of Economic Decision-Making in the Ultimatum Game. In: Science 300. 1755-1758.
Shweder, Richard A.; Nancy C. Much; Manamohan Mahapatra; Lawrence Park (1997): The “Big Three” of Morality (Autonomy, Community, Divinity) and The “Big Three” Explanations of Suffering. In: Allan Brandt; Paul Rozin (eds.): Morality and Health. New York: Routledge. 119-169.
Singer, Peter (2005): Ethics and Intuitions. In: Journal of Ethics 9. 331-352.
Singer, Tania; Ben Seymour; John O’Doherty; H. Kaube; Raymond J. Dolan; Chris D. Frith (2004): Empathy for pain involves the affective but not sensory component of pain. In: Science 303. 1157-1162.
Sinnott-Armstrong, Walter (ed.) (2008): Moral Psychology. Volume 3: The Neuroscience of Morality. Emotion, Brain Disorders, and Development. Cambridge, MA; London: MIT Press.
Sinnott-Armstrong, Walter; Liane Young; Fiery Cushman (2010): Moral Intuitions. In: John M. Doris; The Moral Psychology Research Group: The Moral Psychology Handbook. Oxford: Oxford U.P. 246-272.
Spence, Sean A. (2009): The Actor’s Brain. Exploring the cognitive neuroscience of free will. Oxford: Oxford U.P.
Stich, Stephen; John M. Doris; Erica Roedder (2010): Altruism. In: John M. Doris; The Moral Psychology Research Group: The Moral Psychology Handbook. Oxford: Oxford U.P. 147-205.
Turiel, Elliot (1983): The development of social knowledge, morality and convention. Cambridge: Cambridge U.P.
Turiel, Elliot (1998): The development of morality. In: W. Damon (Series ed.); N. Eisenberg (Vol. ed.): Handbook of child psychology. Vol. 3: Social, emotional, and personality development. 5th ed. New York: Wiley. 863-932.
Vartanian, Oshin; David R. Mandel (eds.) (2011): Neuroscience of Decision Making. New York; Hove: Psychology Press.
Ward, Jamie (2006): The Student’s Guide to Cognitive Neuroscience. Hove; New York: Psychology Press.
Wegner, Daniel M. (2002): The Illusion of Conscious Will. Cambridge, MA; London: MIT Press.
Williams, Bernard (1979): Internal and External Reasons. In: Ross Harrison (ed.): Rational Action. Studies in philosophy and social science. Cambridge; London: Cambridge U.P. 17-28.
PART I Free Will, Responsibility and the Naturalised Mind
Naturalizing Free Will – Empirical and Conceptual Issues MICHAEL PAUEN Abstract: It is often assumed that naturalization jeopardizes some of the most essential human abilities and maybe even our entire self-understanding as conscious, self-conscious, and responsible agents. Here it is argued that these worries are unfounded: Rather than threatening higher level abilities, naturalism, if successful, leads to an improved understanding of these properties. Free will provides a particularly good example. First, it is shown that free will, understood as self-determination, is compatible with determinism. Second, it turns out that, contrary to certain initial intuitions, dualism raises almost the same problems as physicalism does. Third, empirical results, particularly the well-known Libet experiments, do not provide decisive evidence against the existence of free will – some more recent experiments even seem to support free will. It is therefore concluded that naturalization poses no threat to free will; similar conclusions can be drawn for higher level human abilities in general.
One of the main tenets of scientific explanation is reduction: Scientists try to explain as many phenomena as possible with as few basic principles as possible, or to reduce as many phenomena as possible to a few basic principles. If successful, this strategy would give us as much control and understanding as possible. It is not just more economical if we can make do with only a few principles; it would also improve our understanding, because it tells us how things are connected, and it would enhance control, because we would come to know as much as possible about how things depend on each other.
Taken by itself, this endeavor doesn’t look very problematic: Explaining something doesn’t jeopardize its existence – quite the contrary. If we know how something comes about then we have even less reason to put its existence into question. Still, reduction has a bad reputation these days. One of the main reasons is that reductionists tend to deny the existence of those phenomena that are not amenable to the naturalistic explanations at hand – behaviorism
is one notorious example, eliminative materialism is an even more problematic one. Behaviorism and eliminative materialism are now somewhat out of fashion. But reductionism seems to raise its head when it comes to other issues. One of them is the problem of free will. Many philosophers doubt that free will can be naturalized. There seems to be a very strong intuition that a decision-making process that can be reduced to biological processes in the brain is determined by those biological processes and the related laws – and not by the agent him or herself. This would seem even more so if the relevant laws are deterministic. Given that, in a deterministic world, any action can be reduced to events that occurred long before the agent’s birth, it seems obvious that the agent loses control under these conditions. In what follows, I will try to show that this intuition is false. My claim is that freedom, even in the strongest sense of the word, is jeopardized neither by a complete naturalistic account of decision making nor by determinism. So freedom can be naturalized – at least in principle. This latter caveat is necessary because it may well be that our ability to act freely, even if it’s not incompatible with naturalism per se, is put into question by specific empirical results. I will therefore discuss some of the more recent experiments. My claim will be that these experiments do not challenge the existence of free actions and that it is even unlikely that there are or will be such experiments, on a reasonable analysis of freedom.
1. Naturalization

But before we start to discuss the problem of free will, let’s first try to clarify what “naturalization” means. I take it that there is no universally accepted definition of the term and it may well be impossible to come up with such a definition. As far as I can see, the most reasonable thing to do in this situation is to lay open how one uses the term. As far as complex human abilities are concerned, naturalization, as I will use this expression, stands for an act of translation that includes at least three different levels: a conceptual level, a behavioral level, and a level of implementation. On the conceptual level, we start with an analysis of higher level terms denoting complex human abilities like “consciousness”, “free will” or “self-consciousness”. This analysis should capture our pre-scientific
understanding of the term in question as well as possible. So we have to determine what the criteria are that someone has to meet in order to count as “conscious”, “free” or “self-conscious”. Based on these criteria, the relevant behavioral abilities are identified on the second level: So what should a person be able to do in order to count as, say, “self-conscious”? One answer might be that one should be able to recognize oneself as oneself. Finally, on the third level, the psychological and biological implementation is at issue: So what are the psychological and neurobiological mechanisms underlying acts of self-consciousness? Perspective taking and executive functions might be among the relevant psychological mechanisms, while neural activities in the temporoparietal junction and the prefrontal cortex seem to provide the neural basis.
According to this idea, naturalizing would reveal the biological mechanisms underlying some of the most distinctive human abilities. Identifying these mechanisms might improve our understanding of how we can foster the development of these abilities and how disorders can be corrected. It may even be that understanding these mechanisms tells us something about the higher order abilities themselves. In any case, it is difficult to see how naturalization, understood in this way, would jeopardize the relevant higher order ability. If the process of naturalization is successful, then (a) the conceptual analysis of the higher level term will really capture the meaning of this term, (b) the behavioral analysis will identify the relevant abilities and (c), on the level of implementation, the neurobiological and psychological mechanisms that realize the property in question will be identified. If, on the other hand, the analysis goes astray and we succeed only in determining some more primitive mechanisms that are not able to realize the complex process we refer to with the higher level term, then naturalization fails. In neither case can we say that the higher level property is put into question by the process of naturalization.
It may of course turn out, during the process of naturalization, that humans lack certain abilities that are necessary in order to realize the higher level property in question. So we may find out that we don’t have an ability that we thought we had. But this doesn’t mean that naturalization deprived us of an ability we used to have and that we might even continue to have if we hadn’t tried to
naturalize it. It just means that we never had the property in question.
2. Naturalizing Freedom

So what does all this mean when we try to naturalize freedom? It means, first, that we have to capture our pre-scientific understanding of freedom by means of a conceptual analysis; second, that we have to look for the behavioral abilities that are required in order to instantiate the property in question; and finally, on the level of implementation, we have to determine the psychological and neurobiological mechanisms underlying these abilities.
But why would one think that naturalization jeopardizes freedom, to begin with? There are several reasons why philosophers have thought or still think that this is so. The most important reason concerns the conceptual level: Many philosophers think that the absence of determinism is a conceptual implication of our pre-scientific understanding of freedom. On this assumption, the conflict is obvious enough: If it turns out that decision making is a natural ability that can be reduced to physical processes in the brain and the body, and if those physical processes are governed by deterministic laws, then there can be no freedom in this world. Thus, naturalizing the mental, including decision-making processes, would result in denying freedom. Second, it is argued that, independently of the previous problem, several behavioral and neurophysiological experiments have shown that humans don’t have the abilities that are required for acting freely. This is true particularly of Benjamin Libet’s famous experiments, according to which the real decision is made by subconscious brain processes rather than by conscious acts of will.
3. Freedom and Determinism – the Conceptual Objection I will discuss these objections one after the other. So let’s first talk about the conceptual objection. Advocates of this objection hold that freedom and determinism are incompatible (Kane 1989; Keil 2007; Seebaß 1993; Van Inwagen 1982; Widerker 2006). This claim is probably the most vigorously debated one in the entire discussion on
free will. For obvious reasons, I will only be able to scratch the surface of this discussion. Although this claim has a high prima facie plausibility, I will argue that we should reject it. But even if we don’t, it will turn out that naturalization doesn’t play a decisive role here: If you think that freedom and determinism are incompatible then you might also have similar troubles in a world in which supernatural entities are involved in decision-making processes.

3.1 Conceptual Requirements

The crucial question, then, is: What are the conceptual requirements of freedom? Philosophers and non-philosophers alike disagree as to what the positive features of freedom are. Fortunately, however, they disagree much less when it comes to negative features, that is, features that free actions certainly do not have. Two of these features and the related intuitions are of specific importance.
First, we would never say that an action is free if the agent acted under compulsion. So freedom and compulsion are certainly incompatible. It would obviously be unjust to hold somebody responsible when she acted under compulsion, that is, when she was forced to do what she did. So the first criterion is that, in order to be free, an action must not be brought about by external force. Positively speaking, the requirement is that an action has to be autonomous, where being autonomous just means that the action is not determined by forces external to the agent. Let’s call this the principle of autonomy. Although many philosophers would deny that this criterion is sufficient, almost nobody would deny that it’s necessary.
Second, we would also deny that an action is free if it occurs by pure chance. The obvious difference between a free action and a chance event is that the former, unlike the latter, is brought about by an agent. It would follow that agency is the second criterion. Note that this criterion is indispensable also because we hold agents responsible for their actions if these are free. But how could we hold someone responsible for a chance event, given that chance events are, by definition, not under anyone’s control?
3.2 Self-Determination

The easiest way to account for these two principles is to translate freedom into self-determination. Saying that an action is self-determined just means that it is not determined by external forces. So self-determination does justice to the principle of autonomy. Likewise, saying that an action is self-determined means that it is not a chance event. Chance events are not determined by whatever agents there might be. Rather, self-determination implies that the action in question is determined by the agent him- or herself, so it also does justice to the principle of agency. It would follow that "acting freely" can be translated into "being self-determined". So imagine a person who has a strong and standing belief that stealing is reprehensible. If the person pays for the goods in her basket and does so because she has this belief, then this would give us a reason to say that her action was self-determined.

3.3 The Self

Taken by itself, this does not answer the question whether or not freedom and determination are compatible. In order to make progress, we have to say a few words about the "self" that is supposed to determine his or her own action. Of course there is a vigorous debate on the problem of the self, and many philosophers have denied that the self exists (Dennett 1991; Metzinger 2003; Minsky 1988). In the present context, however, we can sidestep these discussions because what is needed here is a fairly unambitious notion of the self, namely that of an agent who is able to determine her own actions. So what we need are just those features of an agent that are relevant for acting and decision making. For obvious reasons, the agent's own preferences, her beliefs and desires, play a decisive role among these features. In fact, talking about an agent who is able to determine her own actions merely implies that the agent has certain beliefs and desires and that it is these beliefs and desires that determine her action. Of course, not every belief and not every desire that an agent happens to have can count as her own belief and desire such that it can motivate a self-determined action. Imagine an addict who has an insurmountable desire to take a drug. We would not say that the addict acts in a self-determined manner, or freely, when he gives in and takes the drug. The most obvious way to
account for this intuition would be to say that the desire to take the drug was not his own. This, however, requires a systematic distinction between preferences that can be attributed to an agent him- or herself and those that cannot. There are several ways to make this distinction. As far as I can see, the most reasonable one is to require that the preference be under the agent's control. This, in turn, can be cashed out as meaning that the agent is able to make an effective decision against the preference in question. So let's apply this criterion to the two examples above. In the case of the drug addict, the preference in question is certainly not under the agent's control. If you are really addicted, then making a decision to stop taking your drug will most certainly not be effective – that's why this desire doesn't count as your own preference, and acting on this desire not as a self-determined action. On the other hand, it may well be that you could give up paying for the goods in your basket, were somebody to convince you that it is justified to steal, say, for political reasons. Given that your belief that stealing is reprehensible and the related desire are under your control, acting according to them would count as a self-determined action – even if you will actually never give up the belief and the desire in question. You could, however: that's decisive.

3.4 Determinism

Now, what does this mean for the compatibility of freedom and determinism? Again, it is obvious that we can only scratch the surface of a discussion that has been going on for centuries. I do think, however, that one can at least get a feeling of why it can make sense to say that there may be freedom even in a determined world. First, remember the above example of a person who has a firm and well-reflected belief that stealing is reprehensible, and imagine that the person pays because she has this very belief. It would seem, then, that she is self-determined and free according to the standards above. If you think that determinism interferes with the ability to act freely, then removing determination or breaking the deterministic causal chain should enhance freedom and self-determination. But this is clearly not the case, no matter where we break the causal chain. So imagine that the person's beliefs and desires would no longer determine her decision. Would this give us more freedom?
Clearly not! Apart from weakening the connection between the agent and her action, it would just raise the possibility of an action that goes against the agent's beliefs and desires. So the agent might find herself stealing, although she is deeply convinced that stealing is reprehensible. So cutting the causal chain at this point clearly doesn't give us more freedom. But maybe we have interrupted the causal chain just at the wrong place. So what if there is a moment of indeterminism during the process of decision making, such that there is one moment in which it is completely open whether the person will pay or steal? This is an intuition that plays an important role for proponents of incompatibilism (Chisholm 1982; Ginet 1966), and it has a high intuitive plausibility. Still, I don't think that it does the trick. In order to see this, imagine that you make an important decision, say between accepting a job offer or turning it down. And imagine, just for reasons of simplicity, that you start reflecting about the reasons for the job and then continue to think about the reasons that speak against it, such that the moment of indetermination is right in between these two phases. The problem with this suggestion is that it deprives the considerations in the first half of the decision making process of any impact on the final decision. If there is really a "reset" in the middle of the process, then whatever happened before will have no effect on the outcome. So if you first thought about the reasons for the job offer and only after the reset about the reasons against it, then you will most probably make a decision against the job offer, no matter how strong the reasons for accepting it might have been. This doesn't sound like a rational process of decision making and, what is more, it is certainly not a self-determined process, given that the reasons for the job offer that the agent thought about in the first half of the process were her reasons. Finally, one might think that cutting the causal chain right before the agent's birth might be the best way to cash out the intuition that freedom requires indetermination. This would stop the dependence on events before one's birth while leaving intact the connection between one's preferences and actions (compare Van Inwagen 1983). Again, this suggestion sounds very plausible at first sight, but things look different as soon as the details come into focus, even if
this case is much trickier than the previous ones. In any case, it will turn out that cutting the causal chain at this point doesn't enhance the degree of freedom either. This is not to say that events before the agent's birth have no impact on the agent's freedom – of course they do! On the other hand, events before the agent's birth can also enhance or even enable freedom. But all this can be determined if we look at the agent's actual situation, that is, if we look at whether her action is determined by her personal preferences or by external factors. The length of the causal chain behind each of these factors is obviously irrelevant. Imagine there is some event that interferes with the agent's ability to determine her own action: It is highly unlikely that we would change our assessment if we were to learn that the causal chain leading to this event started during the agent's lifetime – the only exception being the case in which it was the agent himself who brought about the event. Conversely, imagine that there is some event which enhances the agent's freedom – would we change our view if it turned out that the causal chain leading to this event started before the agent's lifetime? Hardly. If these assessments are correct, then the sheer length of the causal chain is obviously irrelevant. What counts is, first, whether the event in question interferes with the agent's ability to act in a self-determined manner or enhances it. Second, it is important whether or not the event depends upon the agent. Even in the absence of self-determination, an agent may be responsible for an action, provided that he is responsible for the lack of self-determination. So why wouldn't it make sense to require that the agent be responsible for all factors that determine the decision in question, including their entire causal history, such that all the relevant causal chains begin with the agent? Let's call this "ultimate responsibility" (Kane 1989; Strawson 1989; 1998). This may also be what people have in mind when they subscribe to Kant's idea of "starting a causal chain". It seems clear that there is no ultimate responsibility and no possibility for an agent to start a causal chain in a deterministic world. So wouldn't this be a case where naturalization jeopardizes freedom? It may seem so. But again, things change if we have a closer look at the details. The reason is that ultimate responsibility is not a sensible idea to begin with. So it doesn't exist, no matter whether or not our world is deterministic.
In order to see this, imagine that we live in an indeterministic world and that all the causal chains relevant for a specific action of mine start right when I am about to perform the action. So wouldn't this give me the possibility to act with ultimate responsibility and to start a causal chain? It wouldn't. Let's assume that starting the causal chain at t1 means causing an event E1 at this time which, for obvious reasons, should qualify as an action and therefore has to depend on the agent's intentions. But if it does, then the causal chain wouldn't start at t1; rather, it would start at t0 at the latest (provided that the intentions' coming into existence is itself the beginning of a causal chain), that is, when the agent's intentions are formed. Alternatively, the causal chain might start at t1, but then the event does not depend on the agent's intentions and thus does not qualify as an action, let alone a free action, which would require, in addition, that the intentions depend on the agent's preferences. Note that we are talking about a non-determined world, so determinism can't be the culprit. The problem is that the entire idea of starting a causal chain doesn't make sense to begin with. If it's really the agent who starts an action, then the action can't be the beginning of a causal chain. Or, if it really does start a causal chain, then the action can't depend on the agent's intentions and therefore doesn't qualify as her action. Note that this is true no matter whether we are talking about a determined or an undetermined world. This illustrates, once again, the above claim, namely that neither physicalism nor determinism is responsible for the notorious problems of free will. Even if it is certainly true that, psychologically speaking, dualist intuitions are among the main motives of incompatibilism, Cartesian dualism does not provide any serious advantage over physicalism. Apart from the fact that the incompatibility of freedom and determinism is at least questionable, problems with determinism can occur in a dualist universe with Cartesian souls as well as in a purely physicalistic world: God or an eternal fate might have determined whatever happens in a dualistic universe. If we set the problem of determinism aside, then it is even harder to see why physicalism should be at odds with free will. Imagine that a person meets the criteria of your favorite account of free will, say because she acts in a self-determined way or because her decisions turn out to be undetermined even by her own beliefs and desires, as agent
causationists (Chisholm 1982) have it. And let's say also that you expected this to be so due to some supernatural properties, say because the person's conscious states are realized by some non-physical stuff. But now it turns out that your expectation was wrong: Everything is purely physical. Would that give you a reason to revise your previous assessment? Clearly not. Provided that the person really meets the standards, she will continue to do so, even after it has turned out that her mental states are physical states. Obviously, our ignorance regarding the person's physical makeup is not among the criteria for freedom. So, to sum up: I have argued that freedom can be translated into self-determination, and self-determination means that an action is determined by the agent's "personal preferences". Personal preferences are those beliefs and desires that are under the agent's control, which, in turn, means that the agent can make an effective decision against the preference in question. While these criteria don't imply determinism, they are compatible with it. As we have seen, indetermination does not enhance freedom. If it does anything, it leads to a loss of control. Still, some amount of indetermination might be acceptable, provided that it does not pass a certain threshold, because otherwise it would undermine the agent's control. But indetermination certainly does not lead to a more demanding idea of freedom. Furthermore, I have argued that, even if incompatibilism might be inspired by dualist intuitions, the difference between physicalism and dualism is almost irrelevant for the problem of freedom, seen from a systematic point of view. This is particularly so if we talk about the realization of those properties that are required for an action to be free. If an action has the required properties, it is free, no matter whether or not they are purely physically realized.
4. Empirical Results

It would follow, then, that it is not very likely that naturalization in general will force us to reject freedom. However, it may well be that certain empirical results do, and quite a number of scientists are convinced that such results already exist. Most importantly, the experiments of Benjamin Libet have led many to the conclusion that
there is no freedom in our world. Daniel Wegner has even argued that there is no conscious will at all, no matter whether or not it is free. Still, there are a number of other experiments that seem to support the idea that freedom does exist. Libet's experiments have already been discussed sufficiently in the literature, so I will mention a few points only. As is fairly well known, Libet asked his subjects to perform a simple voluntary action, namely to flex the fingers or wrist of their right hand, and to keep in mind when they decided to do so. At the same time, Libet recorded the onset of the symmetrical readiness potential, an activity in the brain that is known to be associated with voluntary action. Libet found that the conscious act of will occurred about 350 ms after the onset of the readiness potential (Libet 1985; 2004; Libet et al. 1983). These results have been taken to show that voluntary actions are determined by subpersonal brain processes only, while conscious acts of will are mere byproducts that have no effect whatsoever on our actions. According to Libet, the only way for an agent to exert some sort of conscious control over her action is to stop it by a "veto", even if the action has already been initiated. However, due to certain methodological problems in the related experiments, this claim has been met with severe skepticism. Meanwhile, many critical points have also been raised regarding Libet's main claims (Gomes 1999; 2002; Herrmann et al. 2008; Miller & Trevena 2002; Pauen 2004a; 2004b). One problem is that Libet's experimental subjects had no choice between different options, as seems mandatory for a "real" decision. An alternative interpretation, which has been confirmed by experimental results, holds that the only "real" decision occurs when subjects make up their mind whether or not to participate in the experiment. Deciding to participate means deciding to repeat the requested action 40 times. So once they had made up their mind, no other decision was necessary – they just did what they had decided to do when agreeing to participate. According to Keller and Heckhausen (1990), what Libet actually measured is not a conscious act of will; rather, it is just the preparation of an unconscious movement of the kind we perform involuntarily every now and then. It is just that Libet's instruction directed his subjects' attention to the normally subconscious preparation of the movement, which they then (mis)interpreted as their conscious act of will.
In addition, the lack of alternatives raises questions regarding the role of the symmetrical readiness potential. According to the standard interpretation, the symmetrical readiness potential determines the subsequent behavior. But given that the behavior was determined by the instruction, this assumption could not be tested in Libet's experiments. It has, however, been tested subsequently by Herrmann et al. (2008). As was to be expected from the original Kornhuber and Deecke experiments on the readiness potential (Kornhuber & Deecke 1965), Herrmann et al. demonstrated that subjects are able to perform alternative movements even after the onset of the symmetrical readiness potential. It would follow that the onset of the readiness potential leaves room for alternatives. In a Libet-style experiment by Haggard and Eimer (1999), however, subjects did have a choice between two alternatives, i.e. pressing a button either with the left or the right hand. In addition, the authors measured the onset of the lateralized readiness potential, which is more specific than the symmetrical readiness potential investigated by Libet. According to Haggard and Eimer, their experiment shows that it is the lateralized rather than the symmetrical readiness potential that determines what the subject is going to do; and it even seems to determine which hand the subject will move. Again, this interpretation can be challenged for various reasons. One important issue is that the experiment fails to distinguish two important stages of the process of decision making: first, the decision about what to do; second, the decision about when to do it. It may well be that the decision about what to do preceded the decision about when to do it in the Haggard and Eimer experiments, and that the former occurred well before the onset of the lateralized readiness potential. In addition, it is unclear whether the experiment really provides support for the idea that the action is determined by the lateralized readiness potential. The reason for these doubts is that in two out of eight subjects in one condition, the conscious act of will occurred before the onset of the lateralized readiness potential. This is difficult to interpret on the assumption that the lateralized readiness potential causes the conscious act of will, since this would imply that the effect precedes the cause. Daniel Wegner (2002; 2003) has made an even more radical claim than Libet or Haggard and Eimer. Wegner does not
only deny that there is freedom of the will; he even holds that there is no conscious will which may have an effect on our actions and decisions. Wegner concedes that it may seem to us that our intentions are effective, but according to him this is an illusion. Our experience of conscious will and authorship is the mere product of a process of self-ascription that occurs after the action is completed. What is causally effective are subconscious brain processes which are not under the agent's conscious control. Wegner refers to a number of studies that seem to support this claim. Among them are Libet's experiments, but we have already seen that they can't be taken to show that human actions are determined by subconscious brain processes rather than by conscious acts of will. In addition, Wegner refers to some of his own studies which, according to him, show that our feeling of agency is not reliable: We sometimes see ourselves as authors of actions that we did not initiate, and we sometimes fail to see ourselves as authors of actions that we did initiate. But why should this mean that conscious will is an illusion? This would follow only on the condition that conscious will requires that we are never wrong about it. But this is a fairly unrealistic and obviously too demanding condition. There are very few things, if any, between heaven and earth about which we are never wrong. And mental states, including perception, cognition and emotion, are certainly not among them. Still, nobody denies that, errors notwithstanding, perception, cognition and emotion exist. And I can see no reason why we should accept much more demanding requirements for volition. I take it, then, that neither Wegner nor Libet can show that free will, let alone conscious will, is an illusion. In fact, there are several empirical studies showing that conscious intention does have an effect. One of them has been published by Haggard et al. (2002), who try to show that Wegner's idea of a post hoc ascription cannot be true. If it were true, then we should not be able to distinguish between intentions that are effective and those that are not. What counts, according to Wegner, is not the real causal process but the external criteria that are assessed after the completion of the action. Consequently, there should be no difference in our attitude towards two intentions, one of them having an effect on a subsequent action, the other one not – provided that the relevant external circumstances are identical. But Haggard et al. can show that this is not true, because there is an "intentional binding effect" that occurs only if an intention is in fact
effective. Intentional binding means that we underestimate the time between an action and a subsequent event, e.g. a sound that has been caused by the action – provided that the action, in turn, has been caused by a related intention. If, by contrast, there is such an intention, the action is completed and the subsequent event occurs, but the action has not been caused by the intention, then we will overestimate the time between the action and its effect – contrary to what Wegner's theory would predict. Finally, an experiment by Haynes et al. (2007) has shown that we can predict what a person will do on the basis of brain imaging data that obviously reflect conscious intentions. Haynes's subjects had to decide whether to add or subtract numbers and were asked to perform the computation in question a few seconds after making this decision. It turned out that it was possible to predict the subject's decision based on the imaging data. The most obvious way to explain these data is to assume that they reflect brain activity underlying the conscious intention to perform the computation in question. If this is true, then Haynes's experiment would corroborate the idea that conscious intention is effective.
5. Conclusion

I have tried to show that there is no necessary conflict between naturalization on the one hand and certain essential human abilities, particularly free will, on the other. This is so because, first, free will is compatible with determination. So even if our world turns out to be deterministic, this would not amount to a refutation of free will. Second, I tried to show that there is only a weak dependence between free will and the ontological makeup of our world. Almost the same problems would occur no matter whether or not our world contains some supernatural stuff in addition to physical matter. This makes it unlikely that naturalization will refute the existence of free will. Still, many authors believe that certain empirical results, particularly the experiments of Benjamin Libet, show that free will is an illusion. My third point was, therefore, that neither Libet's own experiments nor the most important follow-up studies show that there is no free will. Quite the contrary: there are several
experiments that support at least the idea that conscious intention is effective. Of course, it may well be that future experiments will show that there is no free will. But given the data that have been produced so far, this does not seem very likely. In any case, the above considerations should have shown that free will can be naturalized.
REFERENCES

Chisholm, R. M. (1982): Human Freedom and the Self. In: G. Watson (ed.): Free Will. Oxford: Oxford University Press. 24-35.
Dennett, D. C. (1991): Consciousness Explained. Boston; New York; Toronto: Backbay Books.
Ginet, C. (1966): Might We Have no Choice? In: K. Lehrer (ed.): Freedom and Determinism. New York: Random House. 87-104.
Gomes, G. (1999): Volition and the Readiness Potential. In: Journal of Consciousness Studies 6. 59-76.
Gomes, G. (2002): The Interpretation of Libet's Results on the Timing of Conscious Events. A Commentary. In: Consciousness and Cognition 11. 221-230.
Haggard, P.; S. Clark; J. Kalogeras (2002): Voluntary action and conscious awareness. In: Nature Neuroscience 5(4). 382-385. doi: 10.1038/nn827.
Haggard, P.; M. Eimer (1999): On the Relation Between Brain Potentials and the Awareness of Voluntary Movements. In: Experimental Brain Research 126. 128-133.
Haynes, J.-D.; K. Sakai; G. Rees; S. Gilbert; C. Frith; R. E. Passingham (2007): Reading Hidden Intentions in the Human Brain. In: Current Biology 17. 1-6.
Herrmann, C. S.; M. Pauen; B. K. Min; N. A. Busch; J. Rieger (2008): Analysis of a choice-reaction task yields a new interpretation of Libet's experiments. In: International Journal of Psychophysiology 67,2. 151-157.
Kane, R. (1989): Two Kinds of Incompatibilism. In: Philosophy and Phenomenological Research 50. 219-254.
Keil, G. (2007): Willensfreiheit. Berlin: De Gruyter.
Keller, I.; H. Heckhausen (1990): Readiness Potentials Preceding Spontaneous Motor Acts. Voluntary vs. Involuntary Control. In: Electroencephalography and Clinical Neurophysiology 76. 351-361.
Kornhuber, H. H.; L. Deecke (1965): Hirnpotentialänderungen bei Willkürbewegungen und passiven Bewegungen des Menschen. Bereitschaftspotential und reafferente Potentiale. In: Pflügers Archiv 284. 1-17.
Libet, B. (1985): Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action. In: The Behavioral and Brain Sciences 8. 529-539.
Libet, B. (2004): Mind Time. The Temporal Factor in Consciousness. Cambridge MA: Harvard University Press.
Libet, B.; C. A. Gleason; E. W. Wright; D. K. Pearl (1983): Time of Conscious Intention to Act in Relation to Onset of Cerebral Activities (Readiness-Potential). The Unconscious Initiation of a Freely Voluntary Act. In: Brain 106. 623-642.
Metzinger, T. (2003): Being No One. The Self-Model Theory of Subjectivity. Cambridge: MIT Press.
Miller, J.; J. A. Trevena (2002): Cortical Movement Preparation and Conscious Decisions. Averaging Artifacts and Timing Biases. In: Consciousness and Cognition 11. 308-313.
Minsky, M. (1988): The Society of Mind. New York: Touchstone Books.
Pauen, M. (2004a): Freiheit. Eine Minimalkonzeption. In: F. Hermanni; P. Koslowski (eds.): Der freie und der unfreie Wille. München: Fink. 79-112.
Pauen, M. (2004b): Illusion Freiheit? Mögliche und unmögliche Konsequenzen der Hirnforschung. Frankfurt am Main: S. Fischer.
Seebaß, G. (1993): Freiheit und Determinismus. In: Zeitschrift für philosophische Forschung 47. 1-22; 223-245.
Strawson, G. (1989): Consciousness, Free Will, and the Unimportance of Determinism. In: Inquiry 32. 3-27.
Strawson, G. (1998): Free Will. In: E. Craig (ed.): Routledge Encyclopedia of Philosophy. Oxford: Routledge.
Van Inwagen, P. (1982): The Incompatibility of Free Will and Determinism. In: G. Watson (ed.): Free Will. Oxford; New York: Oxford University Press. 46-58.
Van Inwagen, P. (1983): An Essay on Free Will. Oxford: Clarendon Press.
Wegner, D. M. (2002): The Illusion of Conscious Will. Cambridge MA: MIT Press.
Wegner, D. M. (2003): The Mind's Best Trick. How we Experience Conscious Will. In: Trends in Cognitive Sciences 7,2. 65-69.
Widerker, D. (2006): Libertarianism and the Philosophical Significance of Frankfurt Scenarios. In: The Journal of Philosophy 103(4). 163-187.
Libet's Experiments and the Possibility of Free Conscious Decision

CHRISTOPH LUMER

Abstract: (2) In a famous series of experiments, Libet has proved, many believe, that the way for human action is physiologically paved already before the conscious intention is formed. (3) A causal interpretation of these experiments along Libet's lines implies that even a compatibilist freedom of action and decision, as well as actions in a narrow sense, do not exist. (4) A compilation of several critiques of the experiments' interpretation, however, questions the most important parts of these interpretations, e.g. the temporal order and the nature of the conscious intention. (5) In addition, a more sophisticated picture of the working of intentions makes clear that in Libet's experiments there were, in most cases, no proximal intentions to flex one's finger, but that these actions are intentional in virtue of the distal general intention to follow the experimenter's requests. (6) Finally, an elaboration of the role of consciousness and deliberation in decisions shows how the latter intentions can be free despite being based on unconscious processes.
1. Introduction

Traditional conceptions of action, intentionality, reason, freedom and responsibility have come under attack as a consequence of (more or less) recent findings in the behavioural, cognitive and neurosciences. The most fundamental challenge in this respect is still posed by Benjamin Libet's experiments – and their successors – on spontaneous actions, which seem to show, and according to many have shown, that intentions do not play a decisive role in the production of actions. Though very much has been written about this challenge, there is still a remarkable divide between those who think that Libet's findings have finally proven the obsoleteness and vacuity of those traditional conceptions of action, freedom etc. and those who think that they have proven nothing in this respect. The aims of this chapter are threefold. The first is to systematise the many criticisms of Libet's experiments as well as his
interpretations of them, and to filter out the remaining challenges to modernised, but still rather traditional, action-theoretical conceptions. The second is to provide answers to these challenges by introducing adaptations of some of these conceptions to present-day empirical findings. And the third is to sketch a general theory of the role of consciousness and deliberation in intention formation, which fully restores freedom of action and decision.
2. Libet's Experiments

Libet conducted a series of experiments which purportedly show that the formation of a conscious intention may follow the proper physiological preparation for action. (Libet's original experiments and his theory are reported in Libet et al. 1982; 1983a; 1983b; they are recapitulated in full detail in Libet 1985: 529-539. The latter paper was published together with 24 critical comments from peers (ibid. 539-558) and a reply by Libet (ibid. 558-564).) As physiological indicators show, the action is already in the offing in such a way that in normal cases it will be executed; and only afterwards a conscious intention is formed, which is just the effect of a physiological preparation for action. At least this is Libet's interpretation. What, somewhat more precisely, are Libet's findings? Voluntary actions are preceded by readiness potentials in the brain: Prior to the action, the negative electric tension in the motor area and on the vertex of the brain rises continuously; and after the beginning of the action – i.e. the beginning of the innervation of the muscles (measured by electromyogram) – this negative potential declines sharply back to the baseline. Readiness potentials can mostly be discriminated from the noise of electrical potentials in the brain only by adding up some dozens of single curves; Libet, in each case, added 40 curves (Libet 1985: 530; 535). These readiness potentials are usually interpreted as a preparation for action. The aim of Libet's experiments was to compare the beginning of the readiness potentials with the time of forming the intention. In his main experiment, the spontaneous move plus timing experiment, the subjects were instructed to quickly move a finger or hand at an arbitrary moment after the beginning of the experiment, whenever they wanted to do so, without preplanning these movements (ibid. 530). In addition, they had to observe when the intention or urge to
move developed, by remembering and later indicating the concurrent position of a rotating light spot (i.e. the clock) (ibid. 532). The subjects reported that prior to each action an urge to act developed, which arose spontaneously and out of nothing (ibid.). Readiness potentials began to develop at -550 ms (i.e. 550 ms before the beginning of the action, measured in terms of the innervation of the muscles). The urge to act, i.e. the "conscious volition" or "will", abbreviated by Libet as "W", instead took place only at -200 ms, i.e. much later (by 350 ms) than the beginning of the readiness potentials (ibid. 529; 532). This means Libet found the chronological order represented in figure 1.
Fig. 1. Libet's main experiment

physiological events:  r ←⎯ 350 ms ⎯→ · ←⎯ 200 ms ⎯→ a
mental events:                        W
clock events:                         c
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ t

r = onset of the readiness potential
a = action
W = willing, act of will or urge to move
c = position of the clock at the moment of the read time

The upper level of this diagram represents the physiological events: the readiness potential and the action (the interval between r and a is 550 ms); the intermediate level represents the mental events, in particular the willing; the lower level represents the events of the clock. The interval between r and a is measured physiologically; the time of c is calculated from the content of c itself.
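A side note on the 40-curve averaging mentioned above: the need to add up trials can be made quantitative with a standard signal-averaging identity. The following back-of-the-envelope calculation is mine, not Libet's, and assumes independent background noise of equal strength across trials:

\[
\widehat{RP}(t) \;=\; \frac{1}{N}\sum_{k=1}^{N} x_k(t),
\qquad
\sigma_{\mathrm{avg}} \;=\; \frac{\sigma}{\sqrt{N}},
\qquad
\sqrt{40} \approx 6.3,
\]

where x_k(t) is the EEG curve of the k-th trial, aligned to movement onset, and σ is the standard deviation of the noise. With Libet's N = 40 trials, the noise amplitude in the average is thus reduced roughly sixfold, which is why readiness potentials are usually invisible in single trials.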
Libet interprets this result as follows: The actions are initiated unconsciously, and the experience of a conscious intention is only a secondary result of the preparation for action already taking place (ibid. 536). This experiment has been replicated several times with similar results (Keller & Heckhausen 1990; Haggard & Eimer 1999; Trevena & Miller 2002). In another experiment conducted by Libet, the veto experiment, the subjects were instructed to make up their mind to carry out the action at a certain time but, when the urge to act developed, to decide against this action by a conscious veto (Libet 1985: 529). Under these circumstances, the subjects could indeed prevent the action's execution. Readiness potentials at first developed normally
but then, 150-250 ms prior to the intended action, they decreased to the baseline (ibid. 538). Haggard & Eimer (1999), apart from replicating Libet's main findings, have undertaken a variation of Libet's experiments. In their (second) experiment, the subjects could choose not only the time but also which hand to move; meanwhile, the subjects' readiness potentials were measured (by EEG) above the motor fields. Again, the subjects had to report the time of their intention or urge to move. The results were roughly the same as those obtained by Libet. In addition, the lateralised readiness potentials occurred on the side appertaining to (i.e. contralateral to) the hand which later executed the movement, and again well before the beginning of the urge to move (ibid. 128; 130 f.). This means that in the lateralised readiness potentials, apart from the timing, even the content of the action, right vs. left hand, seems to be determined already. Soon et al. (2008) repeated Haggard & Eimer's experiment (spontaneous moves with free choice of hand and time) with important modifications: Instead of recording the brain processes with the help of EEG, they used fMRI with fine-grained voxels of (3 mm)³; and instead of searching for increased activities in rather big brain areas, they analysed the recorded material for activity patterns, altered with respect to the baseline, that might permit predicting which hand was finally moved. Since fMRI has good spatial but poor temporal resolution of only 500 ms, with about a 2 sec delay, the clock for measuring the time of the respective intentions could consist of letters on a screen that changed every 500 ms. The earliest predictive pattern, with a predictive accuracy of around 60% (Soon et al. 2008: 544, fig. 2; Haynes 2011: 93), was found in the frontopolar cortex (Brodmann Area 10) already 9 sec (7 sec plus 2 sec reaction time of fMRI) before the conscious intention (Soon et al. 2008: 544; Haynes 2011: 89).
3. Causal Interpretation of Libet's Experiments

The basic traditional idea about actions is that by actions our self, i.e. the phenomenally experienced kernel of our personality, controls some parts of our bodily and mental behaviour and thereby, via anticipated causal chains, also controls further events in the outer world as well as our future experiences. And the dominant
traditional operationalising theory of this idea is intentional causalism: Intentions – or "volitions" in older diction – are the hinge between the inner self and controlled behaviour. On the one hand, they consciously represent some behaviour (and its consequences) and, if it is within the range of our action capacity, they cause this behaviour via a correspondence-providing action generating mechanism. The set of options a for which it holds that, if the subject intends to execute the option a, then option a is realised (by the action generating mechanism), makes up the subject's freedom of action; the larger the set, the more extended is our freedom of action. (This idea is present in many philosophers from Aristotle through Augustine, Ockham, Aquinas, Descartes, Locke, Leibniz, Hume and Kant to contemporary theorists like Fred Adams, Richard Brandt, Bratman, Davidson, Goldman and Mele – to name only a few. The conditional conception of 'freedom of action' can already be found in Locke, Leibniz and Hume.) On the other hand, for being ours and of value for us, intentions have to come into being in a certain way, which makes up free will or freedom of decision. The currently most broadly accepted compatibilist conception of freedom of decision is a rational or reasons approach: A free decision, which establishes the intention, reflects the various options, considers their relevant consequences, evaluates them according to the subject's preferences and integrates all these considerations into a comprehensive valuation, by which the action to be done is chosen. There is much room to consider more or fewer options and more or fewer consequences; the lower limit (= minimal deliberation) is two options, doing a or nothing, with only one relevant consequence of a. A rational decision, among other things, adjusts these quantities (of considered options and their consequences), i.e. the extent of the deliberation, to the decision's importance. For being expressions of our inner self, the deliberation and its result, i.e. the intention, have to be (mostly) conscious. (Elaborations of these ideas and references can be found in Lumer 2013 (for the "hinge structure" of intentions), Lumer 2005 (for the content of intentions and decisions) and Lumer 2002 (for a rationalist and autonomy conception of free decision).) This traditional conception of action is incompatible with physicalism and eliminativism. However, it is compatible with the presently most common philosophical approaches to the mind-body problem in general and to mental causation in particular: identity
theory, (a broadly conceived) functionalism, emergentism, interactionism and epiphenomenalism – though in the case of epiphenomenalism the intention can only epiphenomenally "cause" the behaviour, i.e. actually the intention's physiological basis causes the behaviour, but the conditional 'if the agent had not had the intention the action would not have been executed' holds. To distinguish this epiphenomenalism in philosophy of mind from other forms of epiphenomenalism, I will also call it "mental epiphenomenalism". The following discussion of the challenges of Libet's results for the traditional conception of action presupposes that one of the mind-body theories compatible with it is true; here there is no need to determine which one. They all assume that the mental has a physiological basis. The causation of action according to the traditional, intentional-causalist conception of action can then be schematised as in figure 2. Figure 2 represents only the epiphenomenalist version. Similar figures (which might be called "figure 2.a", "figure 2.b" etc. respectively) could be drawn to represent the point of view of the other theories mentioned. The only changes would be that the causal relation between the physiological basis and the intention (i.e. "P(i) ↑ i" of figure 2) would have to be replaced by the identity relation (in the fictitious figure 2.a) or the functional relation (in the fictitious figure 2.b) etc.; the same holds for the relation between the deliberation and its physiological basis. In the following figures, I will continue to draw the mind-body relations only in an epiphenomenalistic way. But these figures should be understood as representing the point of view of the other theories too.

Fig. 2. Causation of actions, according to intentional causalism (epiphenomenalistically conceived)

mental events:      d          i
                    ↑          ↑
physical events:  P(d) ⎯→ P(i) ⎯→ a
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ t

↑ or → = causation
d = (perhaps minimal) deliberation
P(d) = physiological basis of the deliberation
i = forming an intention
P(i) = physiological underpinning of the intention
a = action
The problems raised by Libet's experiments remain the same, independently of which particular theory of mental causation is accepted. Libet's experiments and his interpretation of them, however, suggest a different, though not entirely clear, causal order. There are two main interpretations.

Causal interpretation 1: confirming intention: The first and seemingly straightforward interpretation of Libet's findings says that the readiness potential first causes the intention's physical basis (and thereby the intention itself), which then causes the action in the usually assumed way (cf. figure 3). Thus, the intention's physical basis (P(i)) may or (more probably) may not be identical to some advanced stage of the readiness potential. According to this interpretation, the impending action is already preselected with the occurrence of the readiness potential, and the way to action seems to be (nearly perfectly) paved (Libet 1985: 536); nonetheless, the forming of (the physiological basis of) the intention is a necessary step in the action's causal history. The role of the intention in this case would probably be to confirm the unconsciously preselected action (ibid. 538). Libet considers confirming intention to be one possible interpretation of what he has found – besides the following interpretations (Libet 2004: 142; 145). – Even though in this interpretation the intention is still necessary for acting, it no longer functions as a hinge between the inner self and the behaviour, because the intention is not the result of a deliberation which reflects and brings to bear the agent's concerns; freedom of decision is lost – at least if one does not assume the readiness potentials to be the result of some conscious deliberation. – Although the confirming intention interpretation is formally compatible with Libet's findings, it is not very plausible. First, the same functional result would be obtained by the possibility of consciously vetoing the preselected action (Libet 1985: 538) – which is described below in interpretation 2.b. Vetoing would be more economical, though, which makes the veto theory (i.e. interpretation 2.b) more likely than the confirming intention interpretation. Second, without the observational task, agents probably would not even consciously feel the urge to act (Keller & Heckhausen 1990: 351-354) – which makes it unlikely that such an intention would be a necessary step to action.
Fig. 3. Interpretation 1 of Libet's experiments: confirming intention

mental events:             i
                           ↑
physical events:  r ⎯→ P(i) ⎯→ a
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ t

↑ or → = causation
r = onset of the readiness potential
i = forming an intention
P(i) = physiological underpinning of the intention
a = action
Causal interpretation 2.a: physical epiphenomenalism: Another causal interpretation of Libet's observations is more likely: physical epiphenomenalism. With the occurrence of the readiness potential, the way to action is paved so neatly that the intention has no essential function in causing the action. The intention is only a secondary result of the preparation for action already taking place. It is a by-product, an epiphenomenon of the essential cause of the action, i.e. the readiness potential (cf. figure 4); its function may be to inform us about the impending action. Since on this interpretation even the intention's physiological basis is an epiphenomenon of the real cause of the action (i.e. the readiness potential), this is a different kind of epiphenomenalism than the mental (philosophical) epiphenomenalism described above, i.e. the general theory of mind according to which mental events are causally infertile. The present form of epiphenomenalism may be called "physical epiphenomenalism". If mental epiphenomenalism turns out to be the true general theory of mental causation, physical epiphenomenalism would add a further epiphenomenal relation to the already existing one, thus leading to some sort of double epiphenomenalism; the intention would only be the (mental) epiphenomenon of the (physical) epiphenomenon (i.e. the physiological basis of the intention) of the real cause r of the action. (In figure 4, "P(i) ↑ i" represents mental epiphenomenalism, whereas "r → P(i)" represents physical epiphenomenalism.) Mental and physical epiphenomenalism are conceptually independent of each other. Libet's experiments may prove physical epiphenomenalism to be true (this remains to be discussed, however); but they say nothing about mental epiphenomenalism; their results are compatible with each of the mind-body theories taken into consideration above, from identity theory to mental epiphenomenalism. (Libet thinks the results of his experiments contribute to the philosophical, metaphysical
discussion of the nature of mind, in particular that they falsify several metaphysical theories of mind, e.g. the identity theory (e.g. Libet 2004: 4-6; 11; 86-87; 158-159; 162-164; 167; 182; 184); but the respective claims rest on confusions, including a confusion of mental and physical epiphenomenalism.) – From the standpoint of intentional causalism, physical epiphenomenalism is still less attractive than the confirming intention interpretation because, without the deliberative origin of the intention, not only is freedom of decision missing but the causally interpreted freedom of action is missing as well; neither the intention nor its physiological basis plays a causal role in bringing about the action.

Fig. 4. Interpretation 2.a of Libet's experiments: physical epiphenomenalism

mental events:              i
                            ↑
physical events:  P(i) ←⎯ r ⎯→ a
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ t

arrows = causation; other symbols as in fig. 3
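The two epiphenomenalisms just distinguished can also be contrasted in a compact causal notation; the following schema is my summary of figures 2 and 4, not the author's (with "→" for "causes" and "↛" for "does not cause"):

\[
\text{mental epiphenomenalism:} \quad P(i) \to a, \quad P(i) \to i, \quad i \nrightarrow a;
\]
\[
\text{physical epiphenomenalism:} \quad r \to a, \quad r \to P(i), \quad P(i) \nrightarrow a.
\]

Under double epiphenomenalism both schemata hold at once: the intention i is the mental epiphenomenon of P(i), which is itself merely a physical epiphenomenon of the real cause r.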
Causal interpretation 2.b: veto theory: A further interpretation of Libet's experiments is the "veto theory". The main part of this interpretation is identical to the physical epiphenomenalist interpretation. However, there is an amendment saying that, possibly, there is a further, this time negative, intention, a veto, which can stop the preparation of the action (cf. figure 5). Libet backs this interpretation with his veto experiment. – The veto theory seems to be Libet's own causal interpretation of his experiments on spontaneous actions. – Whether the possibility of a veto, from the standpoint of the traditional conception of action, would be an improvement as compared to physical epiphenomenalism depends on the origin of such a veto, i.e. on whether it is deliberative or not. This has to be seen in the following. But in any case, even with the veto interpretation, the system of intentions (positive intention or veto) is deprived of its proposal function, and freedom of action is reduced to two options, accepting or vetoing intentions which are unconsciously caused.
Fig. 5. Interpretation 2.b of Libet's experiments: veto theory

mental events:          i1           [i2]
                        ↑             ↑
physical events:      P(i1)        [P(i2)]
                        ↑             ↓
                   r ⎯⎯⎯⎯⎯⎯⎯[||]⎯⎯→ a
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ t

arrows = causation
r = onset of the readiness potential
i1 = forming of the first, positive intention
i2 = forming of the second, negative intention (the veto)
P(i) = physiological underpinning of intention i
a = action
|| = interruption of a causal process
bracketed signs = possible, not necessarily actual process
Many incompatibilists think that, because Libet's experiments show the action to be determined already before our conscious willing, they prove that freedom of decision does not exist. This thought, however, first, presupposes incompatibilism – which might be false – and, second and above all, in this general respect the experiments do not show anything radically new beyond what was already known from psychology and physiology about prior determinants of our behaviour. On the other hand, several compatibilists argue that, since compatibilism does not exclude determinacy and predictability of free decisions – on the contrary, most forms of compatibilism even require them – Libet's experiments do not say anything about the existence of free will and responsibility (e.g. Roskies 2011: 15). However, this reaction overlooks the explosiveness of Libet's findings under the interpretations just given. Of course, in a compatibilist framework, predictability and determinacy of intentions or actions per se are no threat to free will; but certain forms of predictability and determinacy do constitute such threats; and the above causal interpretations of Libet's results are among them – as has already been suggested. According to the confirming intention interpretation (interpretation 1, fig. 3) and according to physical epiphenomenalism (interpretation 2.a, fig. 4), freedom of decision does not exist because the action is determined by the onset of the readiness potential, which also determines the intention, and because the onset of a readiness potential is not apt to procure freedom of decision. The readiness
potential is not apt to do that since, first, it is not a conscious state and hence cannot express the inner self and its concerns, and, second, it does not even represent anything like a rational unconscious decision, which integrates the agent's various concerns into its verdict. The (pre-)motor areas and the vertex regions, i.e. the seats of the observed readiness potentials, are not connected to all the other areas containing the already highly processed information necessary for making a rational decision; they are merely executive areas. The (dorsal) prefrontal cortex, instead, is such a highly connected area and is, therefore, considered the most plausible candidate for the physiological place of intention formation (for references see Passingham & Lau 2006: 61-64). Furthermore, according to physical epiphenomenalism and the veto theory (interpretations 2.a and 2.b, figs. 4 and 5), not even freedom of action exists, because the intention (or its physiological basis) does not cause or influence and hence does not control the behaviour. And the, perhaps formally existing, freedom of action in the confirming-intention interpretation (interpretation 1) is void because the intention is not free. Finally, the veto (in interpretation 2.b, fig. 5) could at most provide a negative freedom of action. This freedom of action would be reduced to two options, either letting the behaviour (a) already in the offing pass or vetoing it (¬a); and the vetoing instance would not have any influence on designing possible options. Whether or not this negative freedom of action were filled with some freedom of decision would depend on its deliberative basis, of which we have no trace so far. Libet, in later publications, simply assumes that vetoes, though perhaps being based on unconscious processes, are not specified by these processes and are hence free (in a presumed incompatibilist sense) (Libet 2004: 146-147). However, apart from being without any foundation and in contrast with his general theory of mind, this assumed spontaneity of the veto would not be sufficient for a compatibilist freedom of decision because it lacks a deliberative basis. The preliminary question, however, is: Are these interpretations, in particular the veto interpretation, true? Are they sustained by the data?
4. Critique of Libet's Experiments and Their Interpretation

There are many criticisms of Libet's experiments and their interpretation. In the following, a systematic overview of the most important of them will be given, and some new ones will be added.

4.1. The Time of W – Still Unclear

Even after many years of discussion, the chronological order in Libet's experiments is still unclear. (The literature on the timing in Libet's experiments on spontaneous movements and on his time-on theory is immense. A good collection which deals with many aspects is Consciousness and Cognition (2002); the contributions by Bolbecker et al., Breitmeyer, Gomes, Joordens et al., Pockett et al., Trevena & Miller, and van de Grind are particularly interesting.) Originally, Libet simply equated the read position of the clock with the time of the intention or urge W; this is represented above, in figure 1: "W" is exactly above "c", which means the time of W is identical to the time when the clock was in the read-off position c (T(W) = T(c)). But this assumption is too simple and ignores four interfering time intervals and hence four sources of error (although only three of them are de facto relevant). Actually, the following intervals, represented in figure 6, have to be taken into account for calculating the time of W from the time read by the subjects (cf. also Dennett 2003: 231-236); each interval is discussed in turn below.

Fig. 6. Calculating the time of W

W ←d1→ [Φ(W)] ←d2→ ·
                   ↓
c ←d3→ PE(c) ←d4→ Φ(c)
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ t

W = willing, act of will or urge to move
Φ(W) = awareness of the willing (i.e. recognising that one has a volition)
c = position of the clock at the moment of the read time
PE(c) = perception of the clock position
Φ(c) = awareness of the clock position

T(W) = T(c) – d1 – d2 + d3, with T(x) being the time of event x.
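To see the direction of each correction before the intervals are discussed in detail, here is a purely illustrative computation; the interval values are hypothetical placeholders of my own, not measured quantities:

\[
T(W) \;=\; T(c) - d_1 - d_2 + d_3 \;=\; -200\,\mathrm{ms} - 50\,\mathrm{ms} - 30\,\mathrm{ms} + 80\,\mathrm{ms} \;=\; -200\,\mathrm{ms}.
\]

The intervals d1 and d2 shift the estimated time of W to earlier than the read clock time, while d3 shifts it to later; depending on the actual sizes of the intervals, the corrections may largely cancel (as in this made-up example) or substantially displace Libet's measured value of -200 ms.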
d1: The first interval Libet neglected is the time between the (conscious) volition or urge W and the awareness of this volition
Φ(W). Some hold that this interval exists because the urge to move (= W) is different from the realization that one has this urge (= Φ(W)) and because this realization takes some time (Breitmeyer in: Libet 1985: 540; Underwood & Niemi in: Libet 1985: 554; Roskies 2011: 20-21). Usually we are not aware of our conscious intentions; we have them but do not realise that we have them; awareness of the intention is an anomalous hyper-intention (ibid.). In order to calculate the time of the volition, this interval has to be subtracted from the read position of the clock. – Now, an intention to move is indeed different from the awareness of this intention; both events are conscious mental states, but the first has the movement (or its valuation) as its content, whereas the second has the mental state of having formed this intention as its content; and the latter, introspective cognition takes time. However, e.g. in reaction time experiments, where subjects have to react as fast as possible to a certain signal, subjects react to the recognition of that signal and not to the meta-cognition of having recognised that signal; this meta-cognition is not requisite for the reaction and would unnecessarily defer it. This might hold for W as well. Roskies has claimed that there is a difference between perception, where the direct reaction is possible, and executive states, where instead the meta-consciousness is required first (Roskies 2011: 21). I see three reasons why this might be so. First, in deciding and in other cognitive operations the mind is actively engaged with another topic, e.g. inquiring which action is best, whereas in perception we can passively but attentively wait for the signal to appear. Second, 'to be the best action' is an abstract concept (in comparison e.g. to 'black dot on the screen'); the activation of such concepts takes place in brain areas which are not directly connected to the motor system; and this may be different from perceptual pattern recognition. Third, fast reactions depend on a learned pattern recognition; since the content of an intention is e.g. 'a is the best action', where we first have to find out for which a this holds, learning a pattern recognition is impossible here. Be that as it may, there seems to be a difference between reactions to intentions and reactions to perceptual stimuli. This difference comes into play in our case if W is really an intention; then d1 > 0. If, however, W is an urge to move – and this is the topic of a discussion we will soon address – then being aware of W (i.e. recognising: 'I feel an urge to move') is not necessary for voluntarily reacting to W; "Φ(W)" should then be deleted from figure 6,
and hence d1=0, because an urge to move, felt in the finger, is like an exogenous inner perception; we can directly react to the urge (recognising e.g. ‘there is the urge’) instead.

d2: The second interval to be considered is the interval between the moment of the awareness of W, i.e. Φ(W) (or, in case Φ(W) is not necessary for reacting to W, between W itself), and the perception of the clock PE(c) (Wasserman in: Libet 1985: 557; Roskies 2011: 20). This time is necessary for changing the object of our awareness (not the object we look at), even if one has stared at the clock the whole time (Underwood & Niemi in: Libet 1985: 555). For calculating the time of the volition from the observed time, this interval, too, has to be subtracted from the read time. (Libet incorrectly holds this interval to be equal to zero (Libet 1985: 560).)

d3: The third interval to be considered is the time between the clock position c and the perception of this clock position PE(c). In order to calculate the time of the volition, this interval has to be added to the read time. Originally Libet ignored this interval; in more recent writings he takes it into consideration and equates it with 50 ms (Libet 2004: 128). He justifies this with an experiment in which a weak skin stimulus was delivered (at random times) to the hand and the subjects had to note and report the clock time of the skin sensation. The reported sensation times showed a (mean) difference of about -50 ms (i.e. slightly earlier) from the actual stimulus times. Libet then applied this difference to the results of his voluntary movement experiments, i.e. as a readjustment he added the inverse of these -50 ms to the measured time of W, obtaining -200 ms + 50 ms = -150 ms as his final timing of W (ibid.). However, this correction is flawed. In the skin stimulus timing experiment the time of an external perceptible event has to be determined, and in order to be compared with the clock position it has to cause a signal in the perceptive cells, which is transferred to the brain and then processed to become conscious. In the spontaneous movement experiment, instead, the event W to be timed is internal, i.e. already conscious; the long perception process does not exist or is already over. In timing the skin stimulus (in a way similar to the timing of W), the perception time of the skin stimulus is initially equated with the perceived position of the clock, where the latter perception process (from the stimulus on the retina to the conscious picture) may take roughly the same time as the stimulus perception process; actually, according to Libet’s measurements, the difference is
-50 ms. In timing W, however, the whole interval of the perception process of the clock reading has to be added to the read time, because when the subject becomes conscious of the clock position the indicated time is already delayed by this perception interval. The difference between the two measurements is illustrated in figure 7. The first and third line together represent the timing of W, whereas the second and third line together represent the timing of the skin stimulus s. The formula for calculating the time of the skin stimulus T(s) (“T(x)” means: the time of x) from the read time ‘c’ (= T(c)) is: T(s) = T(c) + [T(PE(c)) – T(c)] – [T(Φ(s)) – T(s)]; whereas the formula for calculating the time of W (leaving out the other intervals discussed here) is: T(W) = T(c) + [T(PE(c)) – T(c)]. Libet, instead, incorrectly used the first formula for calculating the time of W.

Libet’s general theory of consciousness says that roughly 500 ms are needed for events to become conscious (Libet 2004: 101-102). If this were true, the interval for perceiving the clock position (T(PE(c)) – T(c), i.e. d3) would last 500 ms. Given that the (mean) read clock time of W was -200 ms, this alone (without considering d1, d2 and d4) would lead to timing W at +300 ms, i.e. after the beginning of the movement. But, given that after W a veto could still prevent the movement (and that backward causation is impossible), we run into a contradiction here. The whole theory of timing W and of conscious events seems to be fundamentally flawed.

Fig. 7. Timing of W compared to timing of a skin stimulus s

 W ←50 ms→ s ←450 ms?→ Φ(s)
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ t
 c ←500 ms?→ PE(c)

W = willing, urge to move
s = skin stimulus
Φ(s) = awareness of the skin stimulus
c = position of the clock at the moment of the read time
PE(c) = perception of the clock position
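To make the difference between the two corrections tangible, the following minimal Python sketch plugs in the numbers from figure 7. The 500 ms and 450 ms values are the assumptions marked with “?” there – only their -50 ms difference and the -200 ms mean reading are Libet’s data – so the printed results are illustrative, not measurements.

```python
# Minimal sketch of the two correction formulas discussed above.
# The interval values are the assumptions marked with "?" in figure 7;
# only their -50 ms difference and the -200 ms mean reading are Libet's data.

READ_TIME = -200.0   # mean read clock time of W in ms
D3 = 500.0           # assumed clock-perception interval T(PE(c)) - T(c)
SKIN_DELAY = 450.0   # assumed awareness delay T(Phi(s)) - T(s) of a skin stimulus

def t_skin_stimulus(t_c):
    """T(s) = T(c) + [T(PE(c)) - T(c)] - [T(Phi(s)) - T(s)]:
    for an external event the two perception delays largely cancel."""
    return t_c + D3 - SKIN_DELAY

def t_w(t_c):
    """T(W) = T(c) + [T(PE(c)) - T(c)]: W is already conscious, so there is
    no second perception delay that could cancel the clock delay d3."""
    return t_c + D3

print(t_skin_stimulus(READ_TIME))  # -150.0: Libet's corrected value, valid
                                   # only for external stimuli like s
print(t_w(READ_TIME))              # 300.0: if the 500 ms delay held, W would
                                   # fall after the movement -- the
                                   # contradiction noted in the text
```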
d4: The fourth interval lies between the perception of the clock PE(c) and the conscious awareness of the clock’s position Φ(c); this time is necessary for recognising the exact position of the clock (Wasserman in: Libet 1985: 557). But Libet has correctly argued that this interval is irrelevant because in it the informational content (of
the clock’s position) does not change, even if this information is consciously available only later on (Libet 1985: 560).

Summarising, we have a chronological order as shown in figure 8. The time of the volition W can then be calculated from the read time T(c) via the following corrections: T(W) = T(c) – d1 – d2 + d3 (cf. figure 8.1). For Libet’s original simple equation of the read time and the time of W to be correct, the following must hold: d1 + d2 = d3. If d1 + d2 were shorter than d3, W would be later than originally assumed by Libet, i.e. later than -200 ms. But if d1 + d2 were longer than d3, W would be earlier than originally assumed by Libet, i.e. earlier than -200 ms (cf. figure 8.1). Finally, if d1 + d2 were much longer than d3, W could even occur before the onset of the readiness potential (cf. figure 8.2). This means that, since Libet’s experiments do not include measurements of d1, d2 and d3, they do not prove that the unconscious preparation for the action precedes W.

Fig. 8.1. Experiment 1, reinterpretation 1 (W after the onset of the readiness potential)

 r ←⎯⎯⎯ 550 ms ⎯⎯⎯→ a
       W ←d1→ Φ(W) ←d2→ ↓
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ t
 r ← 350 ms → c ←d3→ PE(c) ←d4→ Φ(c)
Fig. 8.2. Experiment 1, reinterpretation 2 (W before the onset of the readiness potential)

      r ←⎯⎯⎯ 550 ms ⎯⎯⎯→ a
 W ←d1→ Φ(W) ←⎯⎯ d2 ⎯⎯→ ↓
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯→ t
      r ← 350 ms → c ←d3→ PE(c) ←d4→ Φ(c)

r = onset of the readiness potential
a = action
W = willing, act of will or urge to move
Φ(W) = awareness of the willing (i.e. recognising to have a volition)
c = position of the clock at the moment of the read time
PE(c) = perception of the clock position
Φ(c) = awareness of the clock position
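Since d1, d2 and d3 were not measured, the following small Python sketch merely illustrates, with freely invented interval values, how their balance shifts T(W) – including the scenario of figure 8.2, where W precedes the onset of the readiness potential.

```python
# Illustration of T(W) = T(c) - d1 - d2 + d3 (figures 8.1/8.2).
# The d1/d2/d3 values below are invented for illustration; Libet's
# experiments measured none of them.

RP_ONSET = -550.0    # onset of the readiness potential in ms (figure 8)
READ_TIME = -200.0   # mean read clock time of W in ms

def t_w(d1, d2, d3):
    return READ_TIME - d1 - d2 + d3

scenarios = [
    (25, 25, 50),    # d1+d2 = d3: Libet's implicit assumption, T(W) = -200
    (0, 0, 50),      # d1+d2 < d3: W later than -200 ms
    (150, 150, 50),  # d1+d2 > d3: W earlier than -200 ms
    (300, 150, 50),  # d1+d2 much larger: W before the RP onset (figure 8.2)
]
for d1, d2, d3 in scenarios:
    t = t_w(d1, d2, d3)
    note = " (before the onset of the readiness potential!)" if t < RP_ONSET else ""
    print(f"d1={d1}, d2={d2}, d3={d3}: T(W) = {t:.0f} ms{note}")
```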
Although Libet vouches for the accuracy and reliability of his subjects’ readings of the time of W, reporting that the standard deviations of these times for each subject’s 40 trials were close to 20 ms despite the interpersonally different means of W (Libet
2004: 128), critics have questioned this reliability, basing their doubts on replications of the experiments for which more detailed data have been published. Since reading the position of a rapidly revolving spot at a given time is a difficult task, as is relating such an event to the onset of one’s conscious event W (Mele 2011: 29; Dennett 2003: 234-235), there is considerable variability in the reported times of W. Haggard and Eimer, e.g., undertook a median split of early and late reported times of W for each subject and then calculated the means of these two groups. In the best case the means of the early and the late times were -231 ms and -80 ms respectively (Δ = 151 ms); in the worst case they were -940 ms and -4 ms respectively (Δ = 936 ms) (Haggard & Eimer 1999: 132; also referred to by Mele 2011: 29). Pockett & Purdy report similar difficulties for their own measurements (cf. their detailed data: Pockett & Purdy 2011: 40-43), which make them doubt that it is possible to measure the time of an urge to move accurately (ibid. 38-39).

Another concern about the timing of W is that the observation task affects the occurrence and time of W itself. Part of the instruction is: observe a rapidly rotating light spot; execute a finger movement; observe your preceding urge or intention to move; then notice and remember the clock’s position. These processes interact and prolong each other (Stamm in: Libet 1985: 554), so that – regardless of the measurement problems discussed so far – even a correctly measured and calculated time of W cannot represent the usual time of W (without the observational tasks).

The preceding criticisms have at least heavily undermined Libet’s hypothesis that the onset of the readiness potential precedes W. However, the time advances considered here amount to at most about 1 second. If, on the other hand, the results of Soon et al. (2008) and Haynes (2011) about physiologically detectable “decisions” already at -7 sec, together with interpretations analogous to those of Libet’s experiments, turn out to be true, questioning the timing helps little to eliminate the threat to free will. But let us consider further problems of Libet’s experiments.[5]
[5] Although the experiments of Soon and Haynes and their co-workers may indeed resolve the timing problem of W in Libet’s experiments in a very impressive way, they share all the other difficulties of Libet’s experiments (see below, subsections 4.2-4.5).
4.2. The Interpretation of W – An Urge but not an Intention

A very critical point of Libet’s experiments is the interpretation of W. During the experiments the subjects were asked to report “the time of appearance and conscious awareness of ‘wanting’ to perform a given self-initiated movement. The experience was also described as an ‘urge’ or ‘intention’ or ‘decision’ to move, though subjects usually settle for the words ‘wanting’ or ‘urge’” (Libet et al. 1983b: 627). In his later descriptions, however, Libet first describes W neutrally, as an “urge, desire, or decision to perform each [...] act”, “‘urge’ or desire” or “urge or intention to move”, “urge to move”, “urge or decision to move” (Libet 1985: 530; 532; 539), but later, and in his final conclusion, he interprets the mental event in question as an “intention” (ibid. 529, abstract; 532; 538; 539), “wanting” (ibid. 529, abstract; 532; 533; 534; 535; 539), “will” (ibid. 529, abstract) or “deciding” (ibid. 532) to act or to move, and his central shorthand for it is “W” = wanting to move (ibid. 529, abstract; 532; 533). (Analogous transformations e.g.: Libet 1999: 49.)

Now an urge to do a and an intention to do a are quite different things. An intention to do a is a mental state which is central (i.e. without any inner localisation), executive (i.e. by forming an intention to do a one has made up one’s mind to execute a, and under certain conditions the intention causes the execution of a or a respective attempt), and which reflects the agent’s desires, hopefully all desires and in a balanced way. Intentions have the hinge function mentioned above (sect. 3) by stemming from and representing the agent’s desires, on the one hand, and reaching into the outer world by causing the represented behaviour a, on the other. There are proximal intentions to do a right now; there are distal intentions to do a at a given time or under certain conditions in the future; and there are further logical forms of intentions, like conditional or general intentions. An urge to do a, on the other hand, can be felt in the respective effector organs, and often it represents only one (or a few) of the agent’s desires in a one-sided way and is not yet executive – think of an urge to shout at your boss. Urges are oriented towards proximity and have no complex logical forms. We can take up an urge to do a in an intention and thus lend it executive power; but we can also refrain from doing so and reject the urge intentionally.
In order to render Libet’s experiments “revolutionary” in action theory and to sustain the concerns, explained above, about the inexistence of free will, the (allegedly) powerless event W has to be an intention and not only an urge to do a – because intentions are subject to requirements of freedom of decision but urges are not. Now W seems to be only an urge and not a (proximal) intention. Libet’s subjects describe it as such; and when imitating Libet’s experimental setting, what I felt was an urge to press, sensed in my hand – hence not an intention but an urge.[6] Pockett & Purdy have confirmed the difference experimentally. Libet’s experiment was repeated in an “urge setting”, but with a reformulated question: subjects were asked when they felt the “urge” to press the key; the words “wanting” or “decision” were not mentioned. In a “decision setting”, subjects had to press a right or left key depending on whether the sum of two figures displayed on a screen (in the centre of the Libet clock) was odd or even. Since a different pair of numbers was presented for each trial, subjects were forced to form a present intention – and not to rely on some distal intention. In this setting subjects were asked when they had “decided” to press which key. (Pockett & Purdy 2011: 39.) The urge setting roughly replicated what Libet had found. In the decision setting, however, readiness potentials began much later than in the urge setting and were much weaker – sometimes even missing; the individual means of the reported decision times were very close to (immediately after) the means of the onset of the readiness potentials. (Ibid. 39-43.) This means that while the decision setting roughly confirms a temporal order of physiological processes that one would expect for proximal intention formation, the urge (i.e. Libet) setting was markedly different, thereby disconfirming the hypothesis of the presence of proximal intentions in the Libet setting.

[6] Mele has stressed on several occasions that Libet’s W is an urge and not an intention to act (Mele 2007: 259-260; 2009: 50-51).

4.3. The Prior Physiological “Determination” of the Action – Not Determining

A necessary premise in Libet’s questioning of free decision (by physical epiphenomenalism or by confirming intention) is that the readiness potentials – or, more generally, circumscribed predictive
physiological events – determine the later conscious decision. In order to be really critical for free will these physiological events (apart from not originating from conscious deliberation) have to determine the decision, if one leaves aside veto cases, by nearly 100%; and if one excludes external interruptions and limited ability, they must also determine the action by nearly 100%. This implies that the predictive accuracy of those physiological events with respect to the conscious decision and action, after excluding the just mentioned exceptions, should be close to 100%. This holds because with a lower accuracy the physiological indicators could represent action tendencies or information about relevant aspects of the action, whereas the real decision on them is taken later and possibly freely. (But keep in mind that even 100% predictability per se, in most compatibilist conceptions of free will, is not an obstacle to free will. A 100% predictable conscious decision is free if it is the result of a good conscious deliberation; and if this decision controls the action, this action is free as well. What is detrimental to free will is only that an unconscious “decision” is taken which determines the conscious decision without ever passing through an effective conscious deliberation, i.e. that the real “decision” is taken unconsciously beforehand and perhaps on an insufficient informational basis.)

Now, we do not know the predictive accuracy of the readiness potentials at all for Libet’s experiments. (Backward) recording of the EEGs was triggered only by the movement (more precisely, by a positive EMG in the muscle), which means that readiness potentials that did not lead to an action were not registered at all (Libet 2004: 141 f.). Furthermore, finger flexings which were not preceded by a readiness potential could not be detected because the graphs of the readiness potentials were obtained by adding the records of 40 trials (Libet 1985: 530; 535). Hence the seemingly determining effect of the readiness potentials is only a methodological artefact; possible evidence which could disprove the determining power was simply ignored. And there seems to be such disproving evidence. In an experiment conducted by Herrmann et al. (2008) subjects had to press a left or right button according to an indication given at short notice; in this setting the readiness potentials appeared even before the indication and hence before the action could have been determined. So the readiness potentials in this case may be part of a general preparation or expectation but do not determine the final action. Pockett & Purdy have provided further disproving evidence. In their
replication of Libet’s experiment, with 390 trials per subject, they identified about 12% of these trials with little EEG noise in which, nonetheless, no readiness potential is visible; even adding the graphs of these 12% (about 45 trials) did not reveal the typical ramp of a readiness potential, suggesting that at least in these cases there was no readiness potential preceding the urge (Pockett & Purdy 2011: 35-36). (Their hypothesis about this is that readiness potentials arise only with attention to the finger movements, and that in the cases without readiness potentials there was simply not enough attention to the movement (ibid. 36).) Soon et al., on the other hand, have assessed the predictive accuracy of the fMRI-identified activation patterns preceding the conscious right-left choice by several seconds. It is nearly 60% in the most predictive area (59.1% at -8 sec and 59.4% at -2 sec in the lateral frontopolar cortex (Soon et al. 2008: 544, read from fig. 2)). Though this is significant at the P = 0.05 level, it is not much above chance level, which is 50% in this case, and far from the nearly 100% required for a determining event. This degree of accuracy leaves ample room for other factors influencing the decision and for a decisive role of the conscious intention itself.

4.4. The Action Type ‘Flexing a Finger’ – Not Revealing for “Real” Decisions

The paradigmatic action in Libet’s and Haggard & Eimer’s experiments and in the respective replications was to flex a finger or hand, where this flexing had no further relevant effect (apart from its scientific evaluation). The open choice concerned only the time of this flexing and additionally – in the Haggard & Eimer experiment – the side. (Libet 1985: 530; Haggard & Eimer 1999: 129.) Now, these are very particular and insignificant actions. The fears regarding the defunctionalisation of intentions and the loss of free will are much less dramatic if restricted to these and similar actions – because of their insignificance, because of the minimal freedom of action, and because they already relied on a (possibly rational and free) general distal intention with a vast freedom of action, namely the intention to comply with the experimenter’s requests. Although Libet initially admitted that his experiments do not exclude conscious initiation of voluntary actions based on conscious deliberation (Libet et al. 1983b: 641), he later tended to generalise his findings to a very
spectacular result, affirming that the defunctionalisation of intentions holds for all actions with proximal intentions (Libet 1985: 532; 1999: 54). Libet does not provide any supporting argument for this generalisation.

The kind of flexion studied is not an appropriate type of action for experimentally disproving the existence of free will. This holds because freedom of decision – at least according to the prevalent compatibilist conception of freedom – depends on considering and weighing reasons for the identified options. However, in the experiments’ paradigm cases there are only very few options, which are unimportant in themselves and without relevant differences between them. So there are no reasons to choose between them (or at least it is hard to identify one); and the respective “decision” is not free even for philosophical reasons. This “decision” is instead open to being “taken” randomly (from reason’s perspective) as a consequence of fluctuations in our nervous system. Therefore, the experiments do not rule out that in situations where there are known reasons to choose one way or the other we consider these reasons and decide on this basis without irrational predetermination – as intentional causalism and the compatibilist rationalist model of free decision say (figure 2). (Cf. Roskies 2011: 17-18.)

4.5. The Theory – No Explanation of Decisions

Libet has become famous for his experimental results on spontaneous actions. In addition, he has developed a general theory of consciousness. However, this approach is essentially incomplete when it comes to explaining spontaneous or deliberate actions. It does not say whether the real “decision” is taken by forming the readiness potentials; it does not explain what the function of the readiness potentials is, or where the readiness potentials – and thereby a perhaps previously taken, real “decision” – originate from; and, finally, it does not say where and in which way intentions are formed. Even present-day (2013) neuropsychology seems to be rather far from providing a comprehensive and confirmed theory of these processes, though, of course, there are many partial (and conflicting) hypotheses. Pockett & Purdy, e.g., point out that ramp-like potentials have also been found in very different brain areas which are not connected to voluntary movements; the general function of ramp-
like potentials seems to be related to expectation and anticipation. Pockett & Purdy surmise that this could also be the function of the readiness potentials – and not to prepare or even decide on a movement. (Pockett & Purdy 2011: 37; 40.) In addition, there are many hypotheses on the physiological localisation of intentions or decisions; and the strongest among these, sustained by much experimental evidence, point to the dorsal or lateral prefrontal cortex (list of references: Passingham & Lau 2006: 61) – and not to the motor areas or the vertex.

Complaining about the lack of a theory when faced with strong empirical results may seem misplaced. But what has caused a furore in the case of Libet’s experiments was more their far-reaching interpretation than the empirical results themselves. And if this interpretation includes a hypothesis on the defunctionalisation of conscious intentions then, to be able to accept this hypothesis, we need an explanation of where and how the real decisions are formed, because in the end the real decisions may turn out to be identical to forming an intention after all. This holds not only for Libet’s theoretical hypotheses but also for other approaches which deny or reduce the function of conscious intentions.

A general problem for all these approaches is the informed decision challenge: Many of our actions react to rather complex situations with a wide variety of options – e.g. how to reply to a question, how to respond to a plea for help or to a boss’s request to do some unwelcome or barely legal task, or what to buy during one’s weekly supermarket shopping; and often our actions take the initiative independently of the present situation for realising long-term or short-term aims, e.g. buying an expensive consumer product, preparing a new project, newly arranging or rearranging part of one’s home. This is simply an empirical feature of our behaviour. Nowadays, naturalist free will deniers mostly consider conscious intentions a physical epiphenomenon. However, even if we concede for a moment that all the just described actions are initiated unconsciously and not as the effect of conscious intentions, there has to be a kind of “decision” and a brain location where the “decision” takes place, where a great deal of complex information is processed and where, finally, one of the many possible controllable behaviours (or a sequence thereof) is selected. In addition, for arriving at solutions as smart as those we seem to produce by conscious deliberation, the unconscious processing has to make use (at least) of the same
information as the conscious deliberation does. If we can discard as evolutionarily implausible the possibility that the two information processing systems work in parallel, then they must be closely connected, with intentional causalism (including physically non-epiphenomenal mind-body relations as they are conceived in the classical theories – identity theory, mental epiphenomenalism etc.) being one possibility for this kind of connection (i.e. the conscious deliberation, or its physiological underpinning, really makes the contribution to the effective decision that it seems to make). In order to rule out this possibility and defend a physical epiphenomenalist conception of conscious decision, with unconscious “decisions” as the real determiners of our actions, the naturalist free will denier has to provide a lot of empirical data and theory – which is lacking to date. Below I want to show that the classical, intentional-causalist connection is indeed the most plausible theory.

In any case, Libet’s explanations do not master the informed decision challenge. First, the location of “decisions” cannot be that of the readiness potentials, i.e. the motor fields and the vertex of the brain, respectively, because, for one thing, the motor fields do not have sufficient associative connections to the regions containing the respective information; hence they cannot integrate this information into a complex decision. For another thing, the single parts of the motor fields are associated with specific motor organs or movements and, therefore, cannot “decide” which of the many possible movements shall be executed. So the “decision” has to be taken somewhere else, probably in the prefrontal cortex (cf. e.g. Passingham & Lau 2006; Goldberg 2009). Second, if real “decisions” are taken in the frontal cortex, the physiological basis of our intentions and mental decisions could be there too; and this reopens the possibility of a significant or even decisive role of conscious decisions and intentions for our actions. Third, given the enormous number of options and the huge mass of possible consequences, as well as their values, which seem to be reflected in our complex actions, and given that the wealth of respective information is distributed over various parts of the brain, an integration of all this information into one complex synthesising decision very probably requires a global workplace. Since such a global workplace seems to be realisable best via consciousness, it is rather likely that complex decisions are made
consciously. (Automatic actions, of course, are subject to quite different conditions and often are initiated unconsciously.)
5. The Power of Intentions – A Revision of Simplistic Versions of Intentional Causalism

One result of the previous discussion was that W is only an urge to move and not an intention. However, if there were no intention at all, this would make the situation even worse than Libet depicted it. So, do the actions in Libet’s experiments rely on some intention, and if so, where is it? There are two main hypotheses on this question, both making recourse to intentions.

H1: General distal intention: In Libet’s spontaneous movement experiments there are no proximal intentions; there is only a general distal intention to comply with the experimenter’s requirements; the single acts are then executed automatically (Keller & Heckhausen 1990; Mele mentions this as a “possibility”, though he favours H2: Mele 2007: 269; 2009: 62).

H2: Individual proximal intentions after W: The urge to move is taken only as an invitation to act; a proximal intention is formed after it; and this proximal intention accepts (or does not accept) this invitation and then activates the motor process. (Mele 2007: 264-265; 268-269; 2009: 57-58; 61-62.)

The main trouble with H2, however, is that there is no empirical evidence for the existence of the claimed proximal intentions. Empirical evidence supports another picture: Sometimes proximal intentions will be formed, but probably only sometimes; and if there is such an intention, it is formed only as a response to perceiving the urge to move. In normal cases of spontaneous finger flexing, we neither feel an urge to move nor form a proximal intention. So mostly H1 is true in Libet’s experiments, and perhaps sometimes H2. Keller & Heckhausen confirm this picture by an experiment in which the subjects again had to flex their fingers spontaneously, as in Libet’s experiment. However, instead of additionally having to observe wantings or urges, they were distracted from the finger movements by a backward counting task. When they had actually flexed their finger they were asked whether they had just moved and, if so, about the respective intention. In most cases, the subjects could not even recall the movement itself, still less an intention, though the readiness potentials were qualitatively the same but
weaker than in Libet’s experiments. (Keller & Heckhausen 1990: 351-353.) This makes it plausible that only Libet’s observation and timing task induces the perception of an urge to move, which normally is not felt and which may be misinterpreted as a present intention. As noticed by Breitmeyer (in: Libet 1985: 539), and as I could observe in myself, if one omits the time reporting task there is no further conscious formation of a proximal intention after having distally planned to spontaneously move one’s finger. The resulting actions then feel much more like a tic, i.e. uncontrolled movements, even without a respective urge.

H2 seems to be inspired by a too simplistic common-sense picture of actions, according to which actions are always triggered by individual proximal intentions; and intentional causalism may seem to imply this as well. However, we have to distinguish between an intention causing and an intention triggering the respective action. All that intentional causalism requires is that the intention cause the action in a match-ensuring way. And this is also possible, at least in principle, if distal, individual or general intentions cause the actions by structuring some controlling elements of the executive system, whereby the actions are triggered by the agent’s perceiving a stimulus which indicates that the situation for execution has arrived. Now this possibility is an actuality for most of our actions, in particular for very many routines of everyday life like cooking or driving or the finger movements in typing. Intuitively this may be hard to accept. However, some experiments show that certain actions which rely on distal intentions cannot have been triggered by proximal intentions. An example familiar from common knowledge is the sprint start: Sprinters perceive the starting shot and begin to run as early as 50-100 ms after the stimulus (i.e. the starting shot), probably long before being aware of the stimulus, which may occur only 300-500 ms after the stimulus (Libet 1985: 559). (Nonetheless sprinters report that they first heard the shot and then started to run. The explanation for this is that the sprinters became aware of their starting to run, too, only after having started to run.) In this case the agent has formed a distal individual intention to start running after the shot; however, the reaction time is so short that the sprinter did not consciously perceive the signal before starting to run; even less could he form the proximal conscious decision to start to sprint. This is possible because there are secondary, unconscious paths of
processing perceptual information which permit much faster reactions.[7] However, one may doubt whether sprinters actually perceive the signal so late. There is another example, this time from the laboratory, which excludes this possibility, namely the Fehrer-Raab effect. First, a very short subliminal test stimulus is presented to the subjects, e.g. a black disk on a computer screen. Then, with a delay of 10-80 ms after the onset of the test stimulus, a masking stimulus, e.g. a black ring, is presented for a much longer time, e.g. for 126 ms. The delay between the onsets of these two stimuli is called “stimulus onset asynchrony”. The chronological order and the durations are such that the subjects will not be aware of the test stimulus; afterwards they report having seen only the masking stimulus, i.e. the ring. Subjects have been instructed, and hence have formed the general intention, to press a key as fast as they can after perceiving “the” stimulus. In the test experiment the reaction times were about 160-165 ms after the onset of the – not consciously perceived – test stimulus, independently of the stimulus onset asynchrony. The reaction times were the same in a control experiment with only one stimulus. This means that the reaction times in the test experiment are the same as if the subjects had reacted only to the subliminal stimulus that was not consciously visible to them. (Neumann & Prinz 1987: 201-202 f.) Hence there was no possibility of forming a present intention. Starting to sprint and the Fehrer-Raab effect show that we can “program” ourselves, so to speak, with the help of conditional and individual or general distal intentions, to automatically execute the intention later on. This makes intentions much more powerful than the simplistic picture of individual proximal intentions would permit, because thereby we can react much faster and save the very limited resource of conscious awareness; and it extends the range of our intentions and intentionality considerably. Thus the intentional-causalist concept of action does not have to be changed: The intention causes the act (in a match-ensuring way), but in a cleverer way than is assumed by common sense, namely by programming ourselves to execute the intention automatically later on.
[7] That such unconscious paths are effective is proved by phenomena like blindsight. For a general discussion of the effectiveness of unconscious information processing see e.g. Evans 2010; Libet 2004: 90-99.
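To make the inference from the Fehrer-Raab data concrete, here is a small Python sketch. The comparison of the two candidate trigger events is my own illustration, not Neumann & Prinz’s analysis; only the 160-165 ms reaction times and the 10-80 ms asynchronies come from the reported experiments.

```python
# Toy model of the Fehrer-Raab inference (illustrative only). Just the
# ~160 ms reaction times and the 10-80 ms stimulus onset asynchronies
# (SOAs) are taken from the reported data.

MOTOR_LATENCY = 160.0  # ms from the triggering event to the key press

def predicted_rt(soa, trigger):
    """Reaction time measured from the onset of the subliminal test stimulus.
    trigger='test': the unseen test stimulus starts the response.
    trigger='mask': the consciously seen mask starts the response."""
    return MOTOR_LATENCY if trigger == "test" else soa + MOTOR_LATENCY

for soa in (10, 40, 80):
    print(f"SOA {soa:2d} ms: test-triggered {predicted_rt(soa, 'test'):.0f} ms, "
          f"mask-triggered {predicted_rt(soa, 'mask'):.0f} ms")

# Observed reaction times were ~160-165 ms at every SOA, matching only the
# 'test' column: subjects reacted to a stimulus they never consciously saw,
# so no present (proximal) intention could have triggered the key press.
```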
This possibility of automaticity also explains the General Distal Intention hypothesis (H1) about Libet’s experiments. The general distal intention to follow the experimenter’s instruction to repeatedly and spontaneously flex one’s finger was sufficient for performing the single flexions automatically, without forming individual proximal intentions for each trial.

However, there may be a problem in this explanation. Apart from the various differentiations of types of intentions (individual vs. general, proximal vs. distal etc.), we have to distinguish between fine and coarse intentions. Fine intentions describe the intended action as exactly – i.e. fixing the main parameters of the action – as is necessary for that action to be identified and executed by the executive system without conscious specification of further parameters. Coarse intentions, on the other hand, do not specify some parameters of the action sufficiently for it to be identifiable by the executive system. E.g. the time of the action may remain undetermined, but likewise the type of movement, e.g. when I intend to buy a certain book but have not yet established whether I will buy it in a bookstore, online or by phone etc. Goal intentions are usually coarse-grained; implementation intentions often are fine-grained. Now in Libet’s experiments the subjects had to move their hands or fingers whenever they felt an urge to do so. This could mean that the time of action in the general distal intentions was not yet specified, so that these intentions were only coarse intentions with the parameter “time of execution” left open, which then had to be determined by individual proximal intentions. But in Libet’s setting the time is so arbitrary that no kind of rationally justified decision can be taken in this respect. The parameter left open at the time of the prior intention is so irrelevant that its specification by means of a (conscious) intention can be renounced. This arbitrariness opens two possibilities. The first is: E1: The subject forms the intention to flex her finger when she feels an urge to do so; this is a general distal fine intention, which uses the urge as the go signal. (E1.1: A variant of this possibility is that the subject initially, after the instruction, forms only a coarse intention, which is specified to an individual proximal fine intention after the first urges; however, after a few trials, when the subject has learned about the urge and the course of events, she forms the general distal and fine intention to flex after an urge.) The other possibility is: E2: The subject, already in the general distal intention, leaves the timing to the
executive system, i.e. to unconscious happenings, which here function as a sort of random device (cf. Pockett 2006: 16). If this possibility exists, as seems to be the case, then the executive system is able to carry out this general distal intention without receiving a further parameter specification, which means that the distal intention is already a fine intention. Perhaps the first possibility (E1) is more likely in situations with the additional task of observing the beginning of an urge to move and the instruction to flex when one feels this urge – as in Libet’s experiment –, whereas the second possibility (E2) is more likely with a task to flex spontaneously without observation instructions – as in experiment 1 of Keller & Heckhausen (1990: 351-355). That in such situations our executive system is able to determine the time of action – which, from the perspective of the intentional system, works like a random device – is a further very powerful extension of the possibilities of our intentional system, because it relieves us from the labour of conscious decision in cases where the decision would be (rather) arbitrary and superfluous anyway.

The second experiment of Haggard & Eimer leaves open a further parameter: for each trial, the subjects had to choose (rather) freely which hand to move (Haggard & Eimer 1999: 129). This task, however, was specified by the instruction to produce roughly equal numbers of left and right hand movements over the entire block and to avoid using obvious patterns such as left, right, left, right (ibid.). As already mentioned above, in qualitative terms the results of this experiment were the same as those of Libet’s experiments (ibid. 130 f.). Where are the fine intentions in Haggard & Eimer’s experiment? In this case it seems to be less plausible that, in addition to the time of action, the choice of the hand could also be delegated to the executive system, which again would work like a random device. Nonetheless, some observations of my own trials confirm this interpretation. It is true that these observations are not representative and their interpretation is somewhat speculative. But on the whole they represent at least a possible interpretation, which is not ruled out by the experiments in question:

1. Choice of the hand by the executive system: Without the task of observing the time of the preparations for the movements, these movements again felt like a tic, i.e. an uncontrolled spasm not preceded by a singular present intention. This means that in this case there is no proximal intention but at best an urge to act. And it seems
as if the general prior intention could even delegate the choice of the hand to the executive system – in addition to the timing – thus revealing itself as a general fine intention.

2. Causing an urge to move by directing the attention: With the observational task, I realised that I directed my attention to one finger long before the movement and before a respective urge. Then an urge arose in the respective finger; subsequently this finger flexed. It looks as if directing the attention was equal to, or caused, the urge and probably, before that, caused the readiness potential as the physiological basis of this urge. (Often the urge to move the finger was felt for a rather short and constant interval after the singular intention; and this interval might be roughly equal to the interval between the onset of the readiness potential and feeling the urge to move.) However, focusing my attention is not identical to forming an individual distal and fine intention. So I did not form an individual (minimally) distal fine intention. The fine intention in this case was like that in explanation E1: a general distal (at the beginning of the trials) and fine intention to flex after feeling an urge.

3. Additional individual intention for respecting the pattern instruction: The instruction to avoid obvious patterns sometimes led to considered choices of the hand to use. (‘Several times consecutively I have used the right finger; now it’s time for a change.’; ‘In order not to approach the pattern rrrlll I must take the right hand now.’ etc.) These choices, again, took place long before observing the urge to move; they led to focusing my attention on the chosen finger, where the urge to move developed (again, probably as the conscious companion of the later phase of the readiness potential), which finally led to the flexing. This means that in these cases there is first an individual distal and fine intention to flex a specific finger after an urge to do so. While the side is chosen deliberately, the timing is again left to the executive system.[8]

So, only in the third scenario are there individual (minimally) distal fine intentions. In the first and second scenarios, instead, there are general distal fine intentions which leave the timing and the choice of the side to the executive system, although in the second scenario the executive system’s “random” choice of the side is again determined by focusing one’s attention.
[8] Pauen proposes this explanation too (Pauen 2014).
All this means that in Libet’s as well as in Haggard & Eimer’s experiments the actions are caused by the respective intentions via the match-ensuring mechanism of the executive system. The only peculiarities are that these intentions were mostly general, distal and fine intentions, and that they additionally left some parameter specification (timing and choice of the side) to the executive system. Because these peculiarities are not included in the folk psychology of action, several researchers may have been misled into searching for and misidentifying (individual proximal and fine) intentions where, in fact, they do not exist.
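The division of labour just described – a general distal fine intention that “programs” the executive system and delegates the open parameters to it – can be condensed into a small toy model. The following Python sketch is my own schematic illustration, not a formalism from the literature; all names and numbers in it are invented.

```python
# Schematic toy model (illustrative only) of a general distal fine intention
# delegating its open parameters -- the timing and, in Haggard & Eimer's
# setting, the side -- to the executive system. All names are invented.
import random

def run_trials(n_trials, side_left_open):
    """One general distal fine intention ('flex when an urge arises') is
    formed once; the executive system then fills the delegated parameters
    'randomly' from the intention's perspective, without any individual
    proximal intention per trial."""
    flexions = []
    t = 0.0
    for _ in range(n_trials):
        t += random.uniform(2.0, 10.0)  # executive timing: urge or fluctuation
        side = random.choice(["left", "right"]) if side_left_open else "right"
        flexions.append((round(t, 1), side))
    return flexions

print(run_trials(5, side_left_open=False))  # Libet setting: only timing delegated
print(run_trials(5, side_left_open=True))   # Haggard & Eimer: timing and side delegated
```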
6. Free Decisions – Despite Unconscious Preparation

The explanation just given reduces the processes in Libet’s experiment to complete normalcy in terms of intentional causalism – in addition providing some instructive technical amendments to this theory. The whole burden of sustaining the intentional-causalist explanation now rests on the general distal (and fine) intentions, which, in order to answer Libet’s challenges, have to be free and conscious. Libet does not discuss the freedom of these intentions, but he has developed a general theory of consciousness, the time-on theory (Libet 2004: chs. 2-3; in particular 101-102), whose truth, unfortunately, would again imply the unfreedom of all of our decisions. The time-on theory says:

T1: “Conscious and unconscious mental functions differ most importantly in the presence of awareness for the former and the absence of awareness in the latter” (Libet 2004: 101), which implies that the real information processing is done unconsciously; consciousness is only an addition.

T2: To produce conscious experience, appropriate brain activities must proceed for a minimum duration of about 500 ms. An unconscious function might be transformed into a conscious one simply by increasing the duration of the appropriate brain activities.

Libet formulated hypothesis T2 first for sensory experience only (ibid. 101-102), but then he extended it to all instances of awareness (ibid. 89; 198; 199-200).

The exact version of Libet’s time-on theory is not very plausible in the light of the results of Libet’s own experiments on the conscious perception of direct electrical stimulation of the brain: The function summarising the stimulus train durations shows that
stronger (than threshold) stimuli need less time for reaching awareness and that there is a clear (roughly inversely proportional) functional relation between stimulus strength and duration until awareness (ibid. 41, fig. 2.2.B). So a more plausible interpretation of these brain stimulation experiments seems to be that what is decisive is stimulus strength and duration together, which lead to an increase of some signal strength or electrical potential etc. until an awareness threshold is exceeded. For the following this correction is irrelevant; what is important is the general idea of consciousness as a process in which the signal strength of an already present piece of information is amplified in some time-consuming way until the awareness threshold is exceeded. I call this general theory of the physiological production of consciousness the crescendo theory, thereby underlining the idea of amplifying the strength of a signal with an already present informational content. The crescendo theory is a generalisation of the time-on theory which captures its philosophical gist.

If the crescendo theory of consciousness were true for conscious intentions, and if the unconscious signal whose amplification leads to the conscious intention were not already the result of a conscious deliberation, there would be no freedom of decision (according to the rationalist conception of freedom of decision sketched in sect. 3). This holds because the conscious decision would not be an expression of our inner self and an integration of the agent’s preferences. The intention would not even be free if the signal at its basis were the result of a kind of unconscious deliberation, because the active participation and control of the inner self would still be missing. However, we can already see from these descriptions that it is not the delay in the process of the intention’s becoming conscious that is per se detrimental to freedom, but the lack of a conscious deliberation. And this could be a way out for freedom. With the crescendo theory and without a respective deliberation, a (possible) veto would not be free either, and it could not confer some freedom upon the complex of conscious intention and possible veto. Libet assumes that the conscious veto might not require preceding unconscious processes (2004: 146). This, of course, contradicts the crescendo theory; and since Libet does not offer good reasons for this exception I will ignore it. It would not help anyway in a compatibilist framework, though perhaps it could in an incompatibilist picture.
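The quantitative core of this reading – awareness as a threshold crossed by a signal that accumulates at a rate proportional to stimulus strength – can be sketched in a few lines. The threshold and strength values below are arbitrary illustrative units, not Libet’s data; only the inverse proportionality and the roughly 500 ms figure for liminal stimuli are taken from the text.

```python
# Illustrative sketch of the crescendo reading: a signal accumulates at a
# rate proportional to stimulus strength until an awareness threshold is
# crossed, so time-to-awareness is inversely proportional to strength.
# Threshold and strengths are arbitrary units chosen for illustration.

THRESHOLD = 100.0

def time_to_awareness(strength):
    """Accumulated signal = strength * t; awareness once it reaches
    THRESHOLD, hence t = THRESHOLD / strength."""
    return THRESHOLD / strength

for strength in (0.2, 0.4, 0.8):
    print(f"strength {strength}: awareness after "
          f"{time_to_awareness(strength):.0f} ms")
# A just-liminal stimulus (0.2) needs 500 ms -- the figure Libet cites --
# while stronger stimuli reach awareness proportionally sooner.
```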
Is the crescendo theory – or, more precisely, that part of the crescendo theory which speaks of conscious intentions – true? First, for intentions it does not seem very likely that the unconscious decision is made rather fast and that the main part of the information processing is dedicated to amplifying the decision’s content. If there is such a process of an increase of the electrical potential in the case of intention forming – and not only in perception processes – then, given the architecture of our brain and the role of intentions, it seems more likely that the increase of the electrical potential is due to processing various incoming signals and elaborating their content, thereby making “inferences” and the like, so that the decision results only at the end.[9]

[9] A further observation supports this conjecture. The ramp-like 500 ms readiness potentials are present before conscious as well as unconscious actions; hence they cannot serve to make unconscious information conscious. Only the quantitative increase of the readiness potentials seems to make the difference for conscious awareness. (Keller & Heckhausen 1990: 351; 354-356.)

Second, the general crescendo theory is a generalisation of results from experiments with sensory perception, and especially its extension to endogenous mental events is problematic. Philosophers distinguish two directions of fit of mental events: the content of perceptions should fit the world, whereas the content of intentions is meant to make the world fit it. In the case of perception the sensory stimuli are already there, and often for a longer time. There is a signal with a given informational content, which has to be filtered (e.g. in terms of relevance), processed and perhaps brought to consciousness in an elaborated form fitting the external facts. The crescendo theory seems to capture exactly this process. For intentions, however, the situation is different. In the studies discussed here, the really “fitting-making” part of the process, i.e. the executive part, which translates the intention’s content into efferent signals, movements and, finally, effects in the external world, is more or less neglected. What is studied instead is the formation of the intention itself, i.e. the constitution of the starting point of the process, the determination of the design to which the world should fit. There is no plausible explanation why this determination should have a crescendo form; there is no unconscious ideal decision or the like which has to be depicted or represented in the conscious decision. Instead a new, endogenous decision has to be
“constructed” by choosing among possible options according to a valuation of these options.

If the intention-regarding part of the crescendo theory is not very plausible, what sense can we make of the various empirical results obtained so far, and in particular, what is the role of consciousness in intention forming? Is there room for free intentions? When answering these questions we have to bear in mind the following, somewhat generalised, empirical findings of Libet. Often intentions or action plans to do some action a “pop up” in our minds; and these plans are already so elaborated or adapted to the situation that they certainly are the result of sophisticated unconscious processing, but without relying on a preceding conscious deliberation; and very often they are executed rather immediately, though we can veto them, i.e. form an effective negative intention about them. The problems with this constellation were (cf. sect. 3) that freedom of action seems to be reduced to two possibilities (doing or not doing a), and that the intention or plan does not result from a conscious deliberation and hence is not free; and this verdict may even be extended to the veto. However, I think we can amend this picture by further empirical considerations (not contained in Libet’s theory) which restore complete freedom of action and (compatibilist) freedom of decision.

1. Action plan as proposal: The appearing action plan probably is not yet an intention (just as urges are not yet intentions); it is a plan, a proposal or, in epistemological terms, a hypothesis, which has to be examined and is turned into an intention only after critical scrutiny.

2. Immediate knowledge of the proposal’s sense: When such a proposal appears, the agent usually knows its sense – if it has one – immediately, i.e. that with some probability it has a certain positively valued consequence. Accordingly she can consider or immediately discard the action proposal.

3. Universal search for validating an optimality judgement by deliberation: The fact that this proposal is rendered conscious has precisely the function of enabling critical scrutiny. Of course, some unconscious critical scrutiny probably begins already while the proposal is still being unconsciously assembled (Dennett 2003: 237); but it will be rather limited. Consciousness, according to a by now widely accepted hypothesis, is the general workplace of the mind, which makes information that otherwise would be encapsulated in one
module only universally available (e.g. Baars 1997). And this opens the possibility that the action proposal evokes associative reactions in all interesting parts of the brain, where these reactions may be positive or negative or, what is more important, provide more specific information about the action under consideration.10 That such a universal information search is necessary for possible intentions, instead of simply making some algorithmic and locally limited steps (like adding 2+2), has to do with the specific content of possibly rational decisions.11 The result of a rational deliberation can be condensed in an optimality judgement: ‘Action a is the best among the available options’, where the ‘degree of desirability of an action a’ is defined in terms of the intrinsic desirability of a’s consequences. Such an optimality judgement contains three (somewhat hidden) kinds of generalisations, which cannot positively be demonstrated to be true. (1) The chosen action is better than all its (relevant) alternatives. (2) For each option considered all the relevant consequences have been taken into account. (3) With respect to the uncertain assumptions about the actions’ consequences, the agent does not dispose of any information which implies a better justified cognition about the same topic. (This negative existential proposition expresses an epistemological requirement for uncertain cognitions, namely that where our database implies contradictory propositions about a topic we should doxastically adopt the one which is better justified on that database.) Since these three generalisations prevent a positive, e.g. deductive proof of the optimality judgement, the open associative search for relevant information regarding these generalisations – i.e. the search for perhaps better alternatives, for (further) relevant consequences, for stronger information regarding the relevant consequences – in all interesting parts of the brain is the best individually available and 10
11
Passingham and Lau see the peculiarity of the prefrontal cortex – which is often considered to be the place of intention formation – in combining two features, i.e. being part of the global workplace (it is e.g. the only region that receives inputs from all posterior regions of association cortex) and being the central executive. And because decisions require the integration of all relevant information, which requires the global workplace, they think at least nontrivial intentions are formed there. (Passingham & Lau 2006: 65-68; further literature can be found there too.) For the following empirical sketch of deliberation see: Lumer 2005: 241254.
98
CHRISTOPH LUMER
fast substitute for a positive proof, which can improve and validate optimality judgements.12 4. Flexible extents of deliberations; the simple cases: Now, the loss resulting from not finding the really best option, or the possible improvement engendered by finding a better option can be dramatic or minor; this depends in part on the importance of the decision to be taken. And because improving an optimality judgement costs time and effort there is a wide array of more or less extended deliberations or considerations of possible actions, where the actually invested effort often (and rationally so) reflects the importance of the decision. Therefore, in the most simple case, when an action proposal “pops” into consciousness this initiates the associative search for relevant information. If no possible negative consequence is found, the mere proposal, on the basis of knowing its positive sense, is transformed into an intention and then executed. The next, a bit more complex, cases are that a further positive consequence is found and, again, the intention and execution follow quite immediately or that a serious negative consequence appears and the proposal is blocked. So the latter kind of “veto” has the form of not proceeding to an intention. 5. More extensive cases: active conscious deliberation: Still more complex cases include a third possibility, apart from execution or vetoing, which was not visible in Libet’s experiments, namely to open an active conscious deliberation. This possibility is seized e.g. if the process of open association produces a negative and a positive consequence or only a mildly negative consequence of the action under consideration or if it produces a possibly better alternative. The conscious deliberation then consists of extending the search for consequences or extending the search for better alternatives and evaluating the options found in a more explicit and formal way: How good or bad are the single consequences of an option? What is their 12
12 Dennett has made important contributions to explaining the possibility of free decision in the face of Libet's results, which share many features of the present proposal (Dennett 2003: 236-242). The present proposal, however, goes beyond Dennett's explanation e.g. in the following respects. 1. It includes a clear account of the cognitive as well as the freedom- and autonomy-providing function of consciousness. 2. This is based on an explanation of why good decisions need a universal workspace. 3. The proposal provides an explanation of the variability of deliberations, 4. which also permits minimal conscious deliberations for spontaneous actions.
total value? How does this total value compare to that of a given alternative? Of course, deliberation can be extended to any degree of complexity. Complex deliberation can itself be intentionally driven. A big part of complex deliberation consists in letting one's free associations work to produce further consequences, better options or corrective knowledge regarding the basis of a prediction; intentional deliberation can sustain this process by purposefully imagining the question or relevant concepts. The ideas which eventually emerge are, of course, the fruit of extensive unconscious processing. But they are always only suggestions, again brought to consciousness to be subjected to conscious critical scrutiny with the help of criteria for their truth – e.g. propositions about consequences should be implied by information about the circumstances, the envisaged action and empirical laws – and to initiate a free associative search for possible objections. These objections and corrections may also include parts of the decision criteria themselves (Lumer 2009: 241-427; 521-529).
6. Acquiring intentions by deliberation without a decisional act: The transition from an idea of an action to the respective intention does not seem to require an explicit mental act of approving an intention. It seems that, when at the end of the deliberation the necessary information has been collected and approved, the last step to the intention is taken in the form of simply acquiring a dispositional intention, so that at a certain point the agent has the background knowledge to have the intention without mentally representing it. This explains why agents can form an intention after deliberation without an explicit act of decision. It also makes the possibility of an individual proximal intention after W in Libet's experiment on spontaneous acts (H2) somewhat more probable.
This outline of the process of intention formation leaves ample room for freedom of decision and action, although most of its conscious steps are based on massive unconscious information processing. So this is a way out of the impasse preordained by the – erroneous – idea that an unconsciously predetermined intention is necessarily unfree. What is decisive for freedom of decision is (1) that the unconsciously generated ideas are subsequently subject to conscious critical scrutiny, (2) that the whole process of intention formation takes the form of a conscious deliberation in search of the best action, i.e. where the pros and cons of the various options are considered and evaluated, and (3) that the agent consciously gives
weight to her concerns. (That an intention is formed as the result of a deliberation in which the single steps are conscious but always preceded by unconscious processing is in itself not detrimental to freedom of decision.) A conscious deliberation takes place even in the simplest forms of decisions, where an action proposal is accompanied by knowledge of the sense of this action, by the negative knowledge that – despite a respective search – no negative consequences have been found, and where at least one alternative, doing nothing, is considered. The complexity of the deliberation can then be increased.
Consciousness has three important roles in these processes. It helps to recruit relevant information – new hypotheses about consequences, possible alternatives, other critical aspects etc. – by exposing ideas and questions to the general workspace. It scrutinises hypotheses (of all kinds) by checking them against primary and secondary truth criteria – e.g. whether a hypothesis is implied by certain premises. And it is the way to bring in the subject's concerns – conscious ideas under certain conditions are expressions of the kernel of the self.
In conclusion, it can be said, negatively, that although Libet's experiments and theory are thought-provoking, his experiments on spontaneous moves reveal next to nothing about intention formation and little about the processes leading to such moves in the various settings; they leave open too many possible causal interpretations. In particular, in themselves they prove nothing regarding the existence or inexistence of a (compatibilistically conceived) free will. Positively, however, amending intentional causalism with empirically discovered possibilities – like distal fine-grained implementation intentions and executive systems which decide at random on irrelevant leeway in decision-making – considerably extends the realm of behaviour that can be explained as intentional. And the outline in the last section provides a new explanation of the effectiveness and possibly free character of decisions and intentions, based, among other things, on various roles of consciousness: comprehensive criticism, universal information retrieval, complex serial algorithmic processing and participation of the self in choosing the personally best action.
ACKNOWLEDGEMENTS: I would like to thank Patrick Haggard, Marc Jeannerod (†) and Hugh McCann for very helpful discussions of an earlier version of this paper.
REFERENCES
Baars, Bernard J. (1997): In the theatre of consciousness. The workspace of the mind. New York: Oxford U.P.
Consciousness and Cognition (2002): Vol. 11, issue 2 (= pp. 141-375).
Dennett, Daniel C. (2003): Freedom Evolves. London: Penguin.
Evans, Jonathan St. B. T. (2010): Thinking Twice. Two minds in one brain. Oxford: Oxford U.P.
Goldberg, Elkhonon (2009): The New Executive Brain. Frontal Lobes in a Complex World. [Revised and Expanded Edition.] Oxford; New York: Oxford U.P.
Haggard, Patrick; Martin Eimer (1999): On the relation between brain potentials and the awareness of voluntary movements. In: Exp. Brain Research 126. 128-133.
Haynes, John-Dylan (2011): Beyond Libet. Long-term Prediction of Free Choices from Neuroimaging Signals. In: Walter Sinnott-Armstrong; Lynn Nadel (eds.): Conscious Will and Responsibility. Oxford; New York: Oxford U.P. 85-96.
Herrmann, Christoph S.; Michael Pauen; Byoung-Kyong Min; Niko A. Busch; Joachim W. Rieger (2008): Analysis of a choice-reaction task yields a new interpretation of Libet's experiments. In: International Journal of Psychophysiology 67. 151-157.
Keller, I.; H[einz] Heckhausen (1990): Readiness potentials preceding spontaneous motor acts. Voluntary vs. involuntary control. In: Electroencephalography and Clinical Neurophysiology 76. 351-361.
Libet, Benjamin (1985): Unconscious cerebral initiative and the role of conscious will in voluntary action. In: Behavioral and Brain Sciences 8. 529-566.
Libet, Benjamin (1989): Conscious Subjective Experience vs. Unconscious Mental Functions. A Theory of the Cerebral Processes Involved. In: Rodney M. J. Cotterill (ed.): Models of Brain Function. Cambridge [etc.]: Cambridge U.P. 35-49.
Libet, Benjamin (1993): The neural time factor in conscious and unconscious events. In: Experimental and theoretical studies of consciousness 174. 123-146. (Wiley, Chichester: Ciba Foundation Symposium.)
Libet, Benjamin (1999): Do We Have Free Will? In: Benjamin Libet; Anthony Freeman; J. K. B. Sutherland (eds.): The Volitional Brain. Towards a Neuroscience of Free Will. Thorverton: Imprint Academic. 47-57. (Original publication: Journal of Consciousness Studies 6, no. 8/9 (1999). 47-57.)
Libet, Benjamin (2004): Mind Time. The Temporal Factor in Consciousness. Cambridge, MA; London: Harvard U.P.
Libet, Benjamin; Anthony Freeman; J. K. B. Sutherland (eds.) (1999): The Volitional Brain. Towards a Neuroscience of Free Will. Bowling Green: Philosophy Documentation Center. (Originally published in: Journal of Consciousness Studies 6, no. 8/9 (1999).)
Libet, Benjamin; C. A. Gleason; E. W. Wright; D. K. Pearl (1983b): Time of conscious intention to act in relation to onset of cerebral activities (readiness-potentials). The unconscious initiation of a freely voluntary act. In: Brain 106. 623-642.
Libet, Benjamin; E. W. Wright; C. A. Gleason (1982): Readiness-potentials preceding unrestricted 'spontaneous' vs. pre-planned voluntary acts. In: Electroencephalogr. Clin. Neurophysiology 54. 322-335.
Libet, Benjamin; E. W. Wright jr.; C. A. Gleason (1983a): Preparation- or intention-to-act, in relation to pre-event potentials recorded at the vertex. In: Electroencephalogr. Clin. Neurophysiology 56. 367-372.
Lumer, Christoph (2002): Entscheidungsfreiheit. In: Wolfram Hogrebe (ed.): Grenzen und Grenzüberschreitungen. XIX. Deutscher Kongreß für Philosophie, 23.-27. September 2002 in Bonn. Sektionsbeiträge. Bonn: Sinclair Press. 197-207.
Lumer, Christoph (2005): Intentions Are Optimality Beliefs – but Optimizing what? In: Erkenntnis 62. 235-262.
Lumer, Christoph (2009): Rationaler Altruismus. Eine prudentielle Theorie der Rationalität und des Altruismus. 2nd, supplemented ed. Paderborn: mentis.
Lumer, Christoph (2013): The Volitive and the Executive Function of Intentions. In: Philosophical Studies 166. 511-527.
Mele, Alfred R. (2007): Free Will. Action Theory Meets Neuroscience. In: Christoph Lumer; Sandro Nannini (eds.): Intentionality, Deliberation and Autonomy. The Action-Theoretic Basis of Practical Philosophy. Aldershot: Ashgate. 257-272.
Mele, Alfred R. (2009): Effective Intentions. The Power of Conscious Will. Oxford: Oxford U.P.
Mele, Alfred R. (2011): Libet on Free Will. Readiness Potentials, Decisions and Awareness. In: Walter Sinnott-Armstrong; Lynn Nadel (eds.): Conscious Will and Responsibility. Oxford; New York: Oxford U.P. 23-33.
Neumann, Odmar; Wolfgang Prinz (1987): Kognitive Antezedentien von Willkürhandlungen. In: Heinz Heckhausen; Peter M. Gollwitzer; Franz E. Weinert (eds.): Jenseits des Rubikon. Der Wille in den Humanwissenschaften. Berlin [etc.]: Springer. 195-215.
Passingham, Richard E.; Hakwan C. Lau (2006): Free Choice and the Human Brain. In: Susan Pockett; William P. Banks; Shaun Gallagher (eds.): Does Consciousness Cause Behavior? Cambridge, MA: MIT Press. 53-72.
Pauen, Michael (2014): Naturalizing Free Will. Empirical and Conceptual Issues. In this volume.
Pockett, Susan (2006): The Neuroscience of Movement. In: Susan Pockett; William P. Banks; Shaun Gallagher (eds.): Does Consciousness Cause Behavior? Cambridge, MA: MIT Press. 9-24.
Pockett, Susan; Suzanne C. Purdy (2011): Are Voluntary Movements Initiated Preconsciously? The Relationship between Readiness Potentials, Urges and Decisions. In: Walter Sinnott-Armstrong; Lynn Nadel (eds.): Conscious Will and Responsibility. Oxford; New York: Oxford U.P. 34-46.
Roskies, Adina L. (2011): Why Libet's Studies Don't Pose a Threat to Free Will. In: Walter Sinnott-Armstrong; Lynn Nadel (eds.): Conscious Will and Responsibility. Oxford; New York: Oxford U.P. 11-22.
Soon, Chun Siong; Marcel Brass; Hans-Jochen Heinze; John-Dylan Haynes (2008): Unconscious determinants of free decisions in the human brain. In: Nature Neuroscience 11. 543-545.
Trevena, Judy Arnel; Jeff Miller (2002): Cortical Movement Preparation before and after a Conscious Decision to Move. In: Consciousness and Cognition 11. 162-190.
Wegner, Daniel M. (2002): The Illusion of Conscious Will. Cambridge, MA; London: MIT Press.
The Effectiveness of Intentions – A Critique of Wegner CHRISTOPH LUMER Abstract: In this chapter a general and empirically substantiated challenge to the traditional, intentional-causalist conception of action is discussed, namely that the conscious will is, allegedly, illusory, which implies that intentions do not cause actions. This challenge has been advanced by Daniel Wegner as an implication of his model of the experience of conscious will. After showing that attempts to directly falsify Wegner’s illusion thesis have failed and that a real falsification will not be easily available, the challenge is answered here by criticising Wegner’s model: those parts of the model which should sustain the illusion thesis are not substantiated. The rest of the model, however, should enrich our self-reflexive dealing with our desires, intentions and actions.
1. A Challenge to the Intentional-causalist Conception of Action and the Aim of this Chapter
The aim of this chapter is to defend the traditional, intentional-causalist conception of action against a challenge raised by recent neuropsychological theories, in particular by Daniel Wegner's theory. The traditional conception of action is intentional-causalist: An action consists of a behaviour which is caused (in a non-deviant way) by a respective intention, where this intention itself is actually or possibly the result of a deliberation which aims at fulfilling our desires.1 This conception of action expresses what is valuable in
1 Proponents of an intentional-causalist conception of action, who have held that actions are caused by intentions or volitions, are e.g. Aristotle, Augustine, Ockham, Thomas Aquinas, Descartes, Locke, Leibniz, Hume, Kant, or contemporary theorists like Fred Adams, Richard Brandt, Bratman, Davidson, Goldman and Mele. The second idea, i.e. that intentions are based on actual or possible deliberation, which represents the higher faculties of humans, is developed e.g. in Aristotle, Aquinas, Leibniz or Kant; for a present-day elaboration (including references to the classics) see: Lumer 2005; 2013.
actions and makes up the foundations of practical rationality, freedom of decision and freedom of action as well as of responsibility. The value consists in the fact that a mental structure, which we may call the “ego”, i.e. that part of our mind which is consciously accessible, with which we identify and which we consider to be the kernel of our self, controls our behaviour in a rational way and, via the consequences of our behaviour, also some segment of the outer and inner world. Parts of the ego are, among others, our desires, our knowledge about options and consequences and the deliberation mechanism, which tries to determine the option that best fulfils our desires and, accordingly, establishes an intention. The intention then is the hinge between deliberation and execution: it is actually or possibly the result of a deliberation, and, if everything runs smoothly, it causes the intended behaviour (Lumer 2013). Please note, an ego conceived in this way is not a homunculus but a mental structure in which certain processes occur; intentions are one group of results of such processes. The ego does not act like an agent but it does, among other things, generate our intentions. The American psychologist Daniel Wegner has developed a theory of the “experience of conscious will”, i.e. a theory of how we come to believe that we act, with which he has defended the strong claim that the conscious will is an illusion. This theory as well as the claim have found wide diffusion and often acceptance among psychologists, neuroscientists, the general public and, though perhaps to a somewhat lesser extent, philosophers. Together with the work of Benjamin Libet it is probably the currently most influential attack on the traditional concept of action. This chapter discusses the main and direct way in which this theory challenges the traditional conception, namely the theoretical model of the experience of conscious will (as well as its substantiation), by which Wegner defends his claim of the illusion of the conscious will. This claim itself is a very radical attack on the intentional-causalist concept of action, which questions the causal efficacy of intentions (or their physiological underpinnings) altogether. If it were true, the basis of our ideas of practical
rationality, responsibility and freedom would be entirely undermined. The following discussion tries to show that that part of the model of the experience of conscious will which should sustain the illusion thesis is entirely unfounded. Wegner’s theory and, even more, the empirical evidence he adduces contain still another challenge to the intentional-causalist concept of action: Wegner presents an impressive subset of examples which seem to show that there are actions, even lots of them, without underlying intention – which, of course, contradicts the idea that actions are caused by intentions –: actions of schizophrenics, hypnotic behaviour, actions directed by subliminal priming, simply unconscious situation-specific actions, dynamically unconscious actions (e.g. Freudian slips), automatic routine actions, ideomotor behaviour ((micro-)movements caused by merely thinking of this movement) etc. Many of these kinds of examples also make the rounds in various publications by other critics of the traditional conception of action as evidence for a scientific, reductionist naturalism. They are really challenging but must be discussed group by group. Unfortunately, there is not enough space to do this here.2
2. Prelude – The Concepts of 'Conscious Will' and 'Empirical Will'
Daniel Wegner has challenged the traditional picture of human agency, coming to a conclusion which makes up the title of his best known book: "The illusion of conscious will." The rich empirical material sustaining this thesis consists of a wealth of examples where (i) people feel that they are (or have been) willing and executing an act that they are not (or have not been) doing or, conversely, (ii) are not willing an act that they in fact are doing or where (iii) they report about intentions for really executed actions though in fact they cannot have had these intentions, i.e. they confabulate intentions. Some examples are: (i) after strong accusations by others, someone comes to believe that he has committed a fault (Wegner 2002: 10 f.); a person intentionally "moves" her phantom
2 However, I am preparing critical discussions of some of these examples, e.g.: Lumer, forthcoming.
limbs (ibid. 40); someone has the impression that another person's hand movements projected to a place where one expects to have one's own hand are one's own movements (ibid. 41-43); (ii) a person experiences the alien hand syndrome, i.e. a neuropsychological disorder in which a person experiences one hand as operating with a mind of its own (ibid. 4-6); a hypnotised subject is acting under the influence of hypnosis, thereby feeling externally controlled (ibid. 271-315); people very often unconsciously imitate other persons (ibid. 128-130); in spiritistic séances people provoke many kinds of "magic happenings" without the feeling of doing them (ibid. 101-120); Wegner presents a long list of other forms of action projections, where people attribute their own actions or voluntarily produced events to external sources (ibid. 187-270); (iii) after the execution of posthypnotic suggestions the former hypnotised subjects often invent intentions for their deeds (ibid. 149-151); Wegner describes many other kinds of confabulations (ibid. 171-186).
The phenomena just cited are examples of the empirical basis of Wegner's theory of the "illusion of conscious will". The central conceptual part of this theory is the distinction between two meanings of "will": Wegner defines the 'empirical will' as "the causality of the person's conscious thoughts as established by a scientific analysis of their covariation with the person's behavior" (Wegner 2002: 14). So, the 'empirical will' captures real intentions which cause the respective actions. Wegner's definition of 'conscious will' instead is taken from David Hume: The conscious will is "the internal impression we feel and are conscious of, when we knowingly give rise to any new motion of our body, or new perception of our mind"3 (ibid. 3; italics deleted by me, C.L.); Wegner further elucidates this: "The [conscious, C.L.] will is not some cause or force or motor in a person but rather is the personal conscious feeling of such causing, forcing, or motoring" (ibid.). Hence, the conscious will, in Wegner's terminology, is a sort of felt belief that one is acting. – Wegner's theory is mainly about the conscious will.
Wegner's definition of 'empirical will' comes close to but does not exactly capture a usual meaning of "will". An ontologically more correct definition would begin like this: 'the empirical will is the
3 Hume 1978: 399 (= II.3.1, para. 2). In Hume this is the definiens for 'will' simpliciter.
person's conscious thought about some proper (future) behaviour, where the thought causes this behaviour …' Empirical will defined in this way is the same as an intention. "Conscious will", however, is a misnomer – pace Hume –; the definiens does not come close to e.g. any of the 21 meanings of the noun "will" listed in "Webster's Third New International Dictionary of the English Language Unabridged" (Babcock Gove 1993: 2617). A better short name for what Wegner (and Hume) define would be "control experience" or "control belief". To use a misnomer is a peripheral error in itself. If, however, the misnamed entity is confused with the object usually designated with that name, this can cause serious problems, and in particular fallacies of equivocation. Unfortunately, this is what happens repeatedly in Wegner's book, where the main fallacy of equivocation is not directly stated but at least insinuated to the broad public: (i) Conscious will (i.e. control experience) is a construction or fabrication, hence [why?] (ii) an illusion (Wegner 2002: 2); (iii) as a consequence, the will (i.e. intentions, hence mental states that cause respective actions, or the faculty to have such intentions) does not exist; therefore: (iv) "we develop the sense that the intentions have causal force even though they are actually just previews of what we may do" (ibid. 96). (ii) does not follow from (i): a mental construction is an illusion only if its content is false. Furthermore, the step from (ii) to (iii) entails the just explained fallacy of equivocation. In (iv), finally, only a further explanation is provided. A less dramatic but this time explicit fallacy of equivocation is e.g. this: "When we apply mental explanations to our own behavior-causation mechanism, we fall prey to the impression that our conscious will causes our actions" (ibid. 26). I have some doubts that anybody is so confused as to consider her control experience (= "conscious will"), i.e. her (felt) belief that her intentions cause her behaviour, to be (what the belief's content itself denies) the cause of her behaviour.4 Wegner's sentence only makes sense (which, of course, does not imply that it is true) if by "conscious will" he this time means the will, i.e. the intention itself and not the control
4 A bit more slowly: Wegner supposes that people have this impression: 1. They have a conscious will, i.e. they believe that their intentions cause their behaviour. 2. In addition, they believe (have the impression) that belief 1 causes their behaviour – of course in contradiction to belief 1.
experience. With this interpretation ("we fall prey to the impression that our will / intention causes our actions") the sentence is not nonsensical, but now it implies or at least implicates the very strong – and perhaps false – thesis that the will, i.e. the intention, does not cause our actions.
3. Wegner's Theory of the Experience and Illusion of Conscious Will
The linguistic analysis just provided was already a look ahead. What does Wegner's theory say? Its main topic is to explain the above-listed dissociations and confabulations, following the basic idea that the "conscious will" (i.e. the control experience) is not an immediate experience of the ongoing causal processes but a cognitive construct, the result of an inferential reasoning about this causal process on the basis of the (mostly experiential) material at hand (e.g. Wegner 2002: 65 f.). I think this basic idea is absolutely right (some less basic criticisms: Bayne 2006: 170-175; Haggard et al. 2002). Already Hume wrote that we cannot perceive causality but only sequences of events; we construct our causality assumptions on the basis of this information. Normally we are quite good at self-attributing intentions, the causal relations and, thereby, actions. If, however, the information basis is missing, or if the available information is false, or if we are systematically led astray, then the conscious will is illusory: it contains false information. (Cf. also Dennett 2003: 243-244.) Wegner has presented a serious analysis of these processes and provided many important insights and material. So far, however, the idea is neither spectacular nor in conflict with the traditional concepts of action, intention, free will and responsibility, because the essential propositions of the traditional picture do not speak of our control experience (Wegner's "conscious will") but of our control itself (Wegner's "empirical will"), specifically that our intentions in fact rather reliably (via an action-generating mechanism) cause the respective behaviour and then further anticipated consequences.5 If
5 The traditional view of actions supposes only that intentions cause the respective behaviour and makes no particular assumptions about an agent's knowledge about this causal process. Though the great majority of action theorists shares this view, there are some philosophers, e.g. Anscombe, Davis, Ginet, Runggaldier and Searle, who detach from the traditional view and take an immediate (though sometimes false) knowledge of our actions, hence a knowledge that is not based on sensory experience, to be a characteristic feature of human action (and in part they even refuse the causalist assumption of the intention causing the behaviour) (Anscombe 1957: §§6; 8; 16; 28; Davis 1979: 15-16; 61-62; Ginet 1990: 13; 15; 20; 28; Runggaldier 1996: 88; 90; Searle 1983: 87-93). However, this is only one of several minority views about the defining features of actions. And it is quite obviously false: there are unconscious and automatic actions of which we are not even aware during their performance; in addition, we have to learn which type of behaviour is under our intentional control and which is not; finally, there is the whole body of evidence submitted by Wegner for the inferential nature of our control beliefs.
the agents' beliefs about such singular causal relations are inferential, and if they are sometimes false, this, of course, implies neither that such causal relations themselves do not exist nor that the general control hypothesis, i.e. that our intentions in most cases rather reliably cause the respective behaviour, is false. Now, however, Wegner implicitly also holds the following much stronger thesis, which may be called the "illusion of (empirical) will thesis": Acts of willing, i.e. intentions, do not cause the respective action. I have written that he holds the illusion of empirical will thesis only implicitly because he never states it in a concise form, nor does he really argue for it, and in at least one passage he even affirms something to the contrary.6 However, he
6 "It is possible that both [conscious and unconscious, C.L.] kinds of representation of action might contribute to the causation of an action, and in either event we would say that real mental causation had taken place." (Wegner 2002: 161) Frankly, I am somewhat perplexed about this passage, which contradicts many other passages in Wegner's book. In any case its tendency towards the illusion of empirical will thesis is stronger than its tendency towards granting mental causation. – In a later paper, he even dissociates explicitly from the illusion of empirical will thesis ("Does all this mean that conscious thought does not cause action? It does not mean this at all.") (Wegner 2003: 68) and speaks more cautiously of "the possibility that conscious will is an illusion" (ibid. 65; my emphasis, C.L.); but he does not explain the strong contrast to his "Illusion of Conscious Will" book, and he again proposes the book's central model of the relevant causal relationships, which characterises the relation between "thought" (intention) and action as "apparent causal path" [Wegner's emphasis] as opposed to the "actual causal path" (ibid. 66). – One interpretation of these strong contradictions is that Wegner, when pressed later, had to admit that he has no evidence for his spectacular theses, which, however, are much more interesting and sell so well.
seems to take the illusion of empirical will thesis for granted, as is evident from the following quotations. “We come to think of these prior thoughts as intentions, and we develop the sense that the intentions have causal force even though they are actually just previews of what we may do.” (Wegner 2002: 96) “We perceive minds by using the idea of an agent to guide our perception. In the case of human agency, we typically do this by assuming that there is an agent that pursues goals and that the agent is conscious of the goals and will find it useful to achieve them. All this is a fabrication, of course, a way of making sense of behavior.” (Ibid. 146) “Our sense of being a conscious agent who does things comes at a cost of being technically wrong all the time. The feeling of doing is how it seems, not what it is – but that is as it should be. All is well because the illusion makes us human.” (Ibid. 342)
Then he adds a quotation from Einstein which concludes with: “So would a Being, endowed with higher insight and more perfect intelligence, watching man and his doings, smile about man’s illusion that he was acting according to his own free will.” (Ibid. 342)
Finally, already the title of the book, "The Illusion of Conscious Will" – beyond its explicit meaning –, also implies the stronger illusion of the empirical will thesis. This holds because in order for the conscious will, i.e. the control belief, to be an illusion in addition to being a construction, this belief, rather generally, must have a false content. This content, however, is that the intention causes the action; that this content is illusory is exactly what the illusion of empirical will thesis says. Wegner elaborates his basic idea in the form of a theoretical model: "[1] Unconscious mental processes give rise to [2] conscious thought about the action (e.g., intention, belief), and [3] other unconscious mental processes give rise to [4] the voluntary action. There may or may not be links between these underlying unconscious systems (as designated by the bi-directional unconscious potential path). […] It is the perception of the apparent path that gives rise to the experience of will: When we think that our conscious intention has caused the voluntary action that we find ourselves doing, we feel a sense of will." (Wegner 2002: 68)
This is illustrated by a figure, whose essence is reproduced here in figure 1 (with a different graphic styling).
Fig. 1. Wegner's model of the experience of conscious will (adapted from: Wegner 2002: 68). [Diagram, rendered in words: along a left-to-right time axis, the physiological underpinning of the thought P(t) on the level of physical events precedes and gives rise to the thought t on the level of mental events; the unconscious cause of action UC(a) causes the action a; P(t) and UC(a) are connected by a bi-directional potential causal path; t apparently causes a, and the perception of this apparent path makes up, or contributes to, the experience of will E(w). Legend: → = causation; ⇒ = apparent causation; 1. P(t) = physiological underpinning of the thought; 2. t = thought [intention, belief]; 3. UC(a) = unconscious cause of action; 4. a = action; E(w) = experience of will.]
Wegner speaks of a "thought" instead of an "intention", among other things, because he advocates the ideomotor theory of action, which says that simply thinking of a movement (without intending it) leads to the respective movement (and does so to a maximum degree if a simultaneous antagonist representation does not prevent this) (Wegner 2002: 121; 120-130). There are at least two real mechanisms which can be captured by this description: first, that after having formed a respective intention the mere thought of an action can trigger this action; and, second, that the mere representation of a movement (without any accompanying intention), via the common usage of the motor area for representational and for executive processes, can induce respective muscle tensions and micro-movements. However, the latter usually is neither experienced nor taken to be an action (normally, the movement is not perceived at all); it is not the mechanism of an (intentional) action in the common sense. But we can leave this question open here, keeping in mind that the "thought" is meant to comprise intentions and, hence, that the model also captures actions in the narrow sense. – One peculiarity of Wegner's scheme is that the unconscious cause of the thought (or the thought's "physiological underpinning", as I have dubbed it) precedes the thought itself. According to supervenience or
identity theories of the mental this is impossible. However, this is a minor point; I will tacitly correct it in what follows. – According to Wegner, there are three possible relations between the unconscious cause of thought (the thought's physiological underpinning) P(t) and the unconscious cause of action UC(a), which are represented by the bi-directional causation arrow in figure 1:
i. (P(t) → UC(a)) The thought's physiological basis causes the cause of action. This is the causal way assumed in the traditional picture of action. If Wegner wants to sustain the strong thesis about the illusion, i.e. the inefficacy also of the intention, the empirical will (illusion of empirical will thesis), he cannot hold this interpretation.
ii. (UC(a) → P(t)) The cause of action also causes the thought's physiological basis. This is the interesting new hypothesis: the thought or intention is only an epiphenomenon of the independent (and thoughtless) preparation for action.
iii. (UC(a) and P(t) causally unconnected) The action's cause and the thought's physiological basis are not causally connected. This scenario, however, is rather unlikely and implausible: The thought's content is the action after all; that such a thought occurred prior to the action itself without being causally connected to it in some way would be such an unlikely coincidence that we can exclude this case here. –
The only interesting hypothesis which is coherent with Wegner's claims of the illusion of the will thus is the assumption of causal path ii (though we have to keep in mind that he explicitly affirms all three ways). If we apply these small corrections – the physiological underpinning of the thought and the thought itself occurring at the same time, and the unconscious cause of action causing the physiological underpinning of the thought – we get the model represented in figure 2.
Fig. 2. Wegner's model of the experience of conscious will, corrected as explained in the text. [Diagram, rendered in words: on the level of physical events, the unconscious cause of action UC(a) causes both the physiological underpinning of the thought P(t) and the action a; P(t) gives rise to the simultaneous thought t on the level of mental events; t apparently causes a, and the perception of this apparent path makes up the experience of will E(w). In the original figure, the part representing the apparent causation and the experience of will – t, a and E(w) – is enclosed in a polygon.]
This is a rather gloomy picture of human action because it does not leave any substantial role in the production of action to the ego.
The ego could be present in Wegner's "thought" or its physical underpinning; but according to Wegner's model, this thought does not cause and control the action – in contrast to what the intentional-causalist concept of action says. Some philosophers have accepted this challenge and tried to disprove Wegner's model directly by providing empirical evidence of direct effects of intentions on our behaviour. Mele (2009: 135-136) e.g. refers to Gollwitzer's experiments, which show that if people have already formed a goal intention, for example to do some physical exercise next week, and additionally form an implementation intention, which fixes the exact details of what to do, this increases compliance with the goal intention considerably (meta-analysis of 94 experiments: Gollwitzer & Sheeran 2006). Pauen (2014 (= this volume, chapter 1)) refers to effects found by Haggard et al. (2002) and Haynes et al. (2007). One could even try to prove the effectiveness of intentions in a still more direct way: The experimenter proposes a certain kind of action, e.g. to sign a certain kind of contract or to donate some money; then he asks the subjects whether they intend to accept the proposal; immediately afterwards he presents the contract or the collecting tin without further ado. Probably the rate of subjects, among those who have just declared that they have the respective intention, who finally act as proposed will be close to 100%, whereas among those who have declared they do not intend to comply it will be near 0%, thereby proving the effectiveness of intentions. However, the problem with this kind of confutation of Wegner's model is that Wegner could reply to all these examples by saying that it is true that they show an empirical correlation between intention and action, but they do not prove that the intention was the cause of the action; and he could reaffirm that the real causes were some unconscious processes which produced the intention's physiological underpinning as well as the action. Really refuting this rebuttal is difficult for at least two reasons. First, Wegner does not specify what the unconscious cause of action is, so his unconscious cause of action thesis is only a cheap existential claim: unspecific, easy to affirm and hard to falsify. Second, for falsifying such an unspecified claim one probably needs a detailed micro-physiological reconstruction of the real path of causation; and since present neurophysiology is not even able to locate the region of the physiological correlates of intentions with
some certainty,7 such a reconstruction may still require decades. Until then, the strongest possible critique of Wegner's model is probably to criticise its justification, i.e. to show that the model is not substantiated. The following section tries to provide such a critique of Wegner's justification.
4. The Illusion Theses' Relying on Libet
Wegner's (slightly corrected) model reproduces exactly what is inherent in Benjamin Libet's interpretation of his experiments on the unconscious preparation for action (and provides an amendment to it). Libet claims to have found that conscious (pre-actional) intentions to move one's hand or finger are preceded (by about 500 ms) by electric readiness potentials under the vertex (and the temples), which lead to the execution of the respective action if this execution is not stopped by a conscious veto by the agent. Libet interprets his empirical reconstruction as showing that the "decision" to act is already taken unconsciously, namely inherent in the readiness potentials, before the conscious intention is formed.8 Libet's model can be summarised graphically as in figure 3 (taken from Lumer 2014: fig. 4 (= chapter 2 above)).
7 The neurophysiologist Susan Pockett e.g. writes in 2006 that "the initiation of movements has not yet been the specific subject of very much neuroscientific investigation." Continuing and resuming some research, she concludes: "Presumably, then, if the initiation of movements can be said to have a specific neural correlate at all, it must reside in one or more of the DLPFC, pre-SMA, SMA proper, basal ganglia, or primary sensorimotor cortex. There is a great deal of parallel processing in this region and the exact temporal order of activation of these areas when a movement starts is still controversial, but it is a reasonable assumption that activity flows from the prefrontal region (DLPFC) in a generally caudal direction to finish in the primary motor area. In between, it reverberates around at least five separate cortico-basal ganglia loops, all of which are probably active in parallel […]" (Pockett 2006: 14-15) This résumé sounds more like a sketch at the beginning of the research than at its end.
8 Libet 1985: 529-539. – I discuss Libet's theory in this volume in chapter 2 (= Lumer 2014); chapter 1 (= Pauen 2014) contains a further discussion.
Fig. 3. Principal interpretation of Libet's main experiment: physical epiphenomenalism. [Diagram, rendered in words: on the level of physical events, the onset of the readiness potential (or other trigger of action) r causes both the physiological underpinning of the intention P(i) and the action a; P(i) gives rise to the forming of the intention i on the level of mental events; neither i nor P(i) causes a. Legend: → = causation; i = forming of the intention; P(i) = physiological underpinning of the intention; a = action; r = onset of the readiness potential or other trigger of action.]
That Wegner’s model is substantially the same (apart from his amendment) as Libet’s can easily be seen by comparing the two figures, i.e. the graphical representation of Wegner’s (slightly corrected) model (see figure 2) and the “physical epiphenomenalist” interpretation of Libet’s experiment (see figure 3). The only new piece in figure 2 as compared to figure 3 is the addition regarding the explanation of the apparent mental causation and the experience of conscious will, i.e. the control experience; the part of the figure representing this addition is within the polygon (though, of course, “a” and “i”, the correspondent of Wegner’s “t”, are already also parts of Libet’s model). Another difference is that in Wegner’s model Libet’s “readiness potential” is replaced by the more open formula “unconscious cause of action”. Wegner elaborates his addition (i.e. the part of figure 2 within the polygon) to Libet’s physical epiphenomenalism by providing a fairly general psychological theory of human acquisition of causal knowledge (Wegner 2002: 68-95). This theory of the feeling and belief of conscious control is interesting and, I think, mostly correct. Like the above discussed basic idea of this theory, it is completely consistent with the traditional picture of human action, intention, freedom and responsibility. The only part of Wegner’s model which challenges this traditional picture is not the addition but the piece it shares with Libet’s model, i.e. what I have called “physical epiphenomenalism”: The unconscious cause of action (UC(a)) is the common cause of action a and of the physiological basis (P(t)) of thought (t) or intention, where the latter or its physiological basis is not a cause of
action a. I have dubbed this "physical epiphenomenalism" because, according to this view, already the physical basis P(t) of the thought or intention is only an epiphenomenon of the real "decision" taken by the unconscious cause of action UC(a) and does not causally influence the course of action. In addition to physical epiphenomenalism, mental epiphenomenalism may also hold, i.e. the state of affairs that the intention's physical underpinning P(t) causes the intention or thought t but without this thought having any causal influence (as shown in figures 2 and 3). Whether or not mental epiphenomenalism is true is a vividly debated question but entirely independent of Wegner's and Libet's material and theory and, therefore, can be left open here. If mental epiphenomenalism were false and instead e.g. the identity theory were true, the arrow between "P(t)" and "t" in figure 2 and the arrow between "P(i)" and "i" in figure 3 would have to be replaced by equals signs – which would leave the physical epiphenomenalist relation between "UC(a)", "P(t)" and "a" in figure 2 as well as the respective relation between "r", "P(i)" and "a" in figure 3 unaltered. Physical epiphenomenalism would be very problematic for the initially sketched intentional-causalist conception of action and, as a consequence, even for (rationalist) compatibilist conceptions of responsibility and of freedom of decision if the unconscious cause of action were not itself caused by a sort of conscious deliberative process – which, however, Libet and Wegner implicitly take to be excluded – because it would exactly preclude a rational basis of our decisions and intentions.
Elsewhere (Lumer 2014: sect. 4 (= chapter 2, above)) I have extensively criticised Libet's justification of his theory, in particular his physical epiphenomenalism. Some major objections are: 1. Because of the many experimental complications it is still grossly unclear whether the intention i really follows or perhaps even precedes the readiness potentials and, therefore, cannot or can be the decisive cause. 2. We can say with reasonable certainty that what Libet declares to be an intention, "i" in figure 3, is not an intention but an urge to move, i.e. an occurrent desire to move which often is also felt in the respective limb as a sort of unrest. An urge to act can provoke a decision or the forming of an intention (for or against the action) but it is not an intention. Hence it is not clear where the real intention is. 3. Libet has not proved at all that the observed type of readiness potential, apart from the possibility of a veto, (quasi)
determines the action. The respective appearance is only a methodological artefact because Libet did not record readiness potentials after which no hand movement occurred. Experiments conducted by other scientists have shown that such readiness potentials are neither necessary nor sufficient for the respective movement. 4. Flexing one's finger or wrist in itself is a completely irrelevant action; and the leeway in decision making left in Libet's experiments – flexing one's finger now or somewhat later – does not contain any value differences that would make a deliberation and decision possible and worthwhile. Therefore, things may be quite different with really important actions, where a deliberation whose result is not determined by any readiness potential may occur. 5. Libet does not provide any theory about how complex decisions, which consider and integrate much information, can be taken. Architecturally, the vertex of the brain and the (pre-)motor cortex are not the right areas for providing this integration of information. Probably many areas of the cortex provide some of the necessary information, which has to be integrated in an area interconnected with many of them. This, however, fits a rather traditional picture of intention formation. Apart from this criticism of Libet's justification of physical epiphenomenalism, physical epiphenomenalism itself can be criticised as sketched in Lumer 2014 (sect. 6), e.g. by observing that physical epiphenomenalism cannot explain the finality, situational appropriateness and biographical continuity of complex behaviours.
Because Wegner's model does not refer to intentions/urges or to readiness potentials, it could, in theory, have resolved some of the problems of Libet's model – e.g. problems 2, 3, 4 and 5 of the just provided list. However, Wegner's model does not help to solve them, already because it does not specify what the "unconscious causes of action" are. Nonetheless, Wegner could at least have provided a new justification of physical epiphenomenalism with the help of his immensely rich material. However, he does not even do this. The new evidence he supplies relates to his theory of control experience (the polygon part of figure 2), not to physical epiphenomenalism; the only evidence he procures for the latter part of his model is his reference to – Libet (Wegner 2002: 49-55). Perhaps Wegner's reasoning is: There are lots of errors in our control experience, which show that the control experience is not a direct emanation of action control, i.e. of our intentions causing the action; therefore, this
direct control of our actions by our intentions does not exist. This, however, would be fallacious: If the control experience is not a direct emanation of action control, this does not imply that there is no action control, i.e. that our intentions do not cause the respective actions. And if some – a not negligible share – of the agent's beliefs about his action control are false, neither does this imply that in these particular cases there was no intention to cause the action, and still less that actions in general are not caused by intentions. Finally, even if a part of the control beliefs is false this does not imply that the vast majority of them is false.
Another origin of Wegner's physical epiphenomenalism may be his picture and refutation of folk psychology. He writes that the experience of conscious will attributes magical power to the self because it does not have access to the myriads of neural, cognitive or biological causes underlying our behaviour; therefore, we believe that our conscious thoughts, our volitions control our actions (Wegner 2008: 234). "The magic of self […] doesn't go away when you know how it works. It still feels as though you are doing things, freely willing them […]" (ibid. 236). Indeed, we cannot perceive the intervening processes between intending and acting. But, first, this does not mean that people believe that there are no such intervening processes; even educated laymen by now have a neurophysiological idea of such processes and do not believe in Cartesian dualism. Second and above all, the existence of such intervening processes, or the fact that we do not perceive or know them, by no means contradicts the claim that the intention or its physiological basis causes the action – which, however, Wegner seems to believe –; indirect causation is something we cognise all the time. If someone, for example, presses the button of his TV remote control to switch on the TV, pressing the button is the cause of the appearance of pictures on the screen; and to initiate this causal process is exactly what the agent intends, though most of us have no precise idea of the causal path between the two events. As user interfaces are designed to cause complex effects by simple and accessible causes without having to worry about the intervening machinery underneath, so our action-generating mechanisms make it possible to cause actions simply by intending them (cf. Dennett 2003: 248); not magic but excellent functioning, which empowers our ego. These criticisms refute the justification of Wegner's empirical will thesis, but they cannot really disprove the thesis itself. However,
if we disregard the justification, the intentional-causalist conception of action is by far the simpler theory, because it does without the "unconscious cause of action"; and it explains more and better, namely by referring to the deliberation, how actions can adapt so well to the situation and bring about positive effects. In such a situation the intentional-causalist conception of action, according to adequacy criteria from philosophy of science, is to be preferred over Wegner's model and in particular over his illusion of empirical will thesis.
5. Practical Consequences of the Constructivism of Our Control Experience
The upshot of this critique is that the evidence and arguments submitted by Wegner by no means prove the illusion of empirical will thesis, i.e. that intentions do not cause and control the respective behaviour. In this respect, his theory adds nothing to Libet's physical epiphenomenalism; the book is a big ignoratio elenchi, i.e. the reasons given entirely miss the claim to be proved. So, this part of Wegner's theory is no real challenge to the traditional picture of action, intention, freedom and responsibility.
However, what Wegner really substantiates is his theory of control experience, which among other things says that this control experience is inferential (a cognitive "construction"), that many actions and the explanatory reasons for them remain unconscious, hence unknown to the agent, and that the reasons by which she later explains or justifies her action can be false, ill-remembered, confabulations or rationalisations. This is a problem in our culture, where giving and reflecting on reasons is an important part of our social exchange and of self-reflection. But it is not really a completely new problem; only some aspects of Wegner's material are new, and we have learned to cope with this problem. First, most of our beliefs about our comprehensive intentions are probably correct. Wegner only reports the interesting but extreme cases, where our control beliefs go astray or are not there in the first place. Second, no judge and no jury simply accept a defendant's or witness's explanatory reasons, mostly, of course, because these reasons are suspected to be presented strategically, but also because it is known that the persons' self-images are far from reliable. Judges
and juries (as well as psychologists and many laymen) know that the provided explanatory reasons have to be interpreted; and a whole industry of psychological and psychiatric expertise has developed to do exactly this. These experts will take Wegner's new results about the generation of control experience into consideration; this may change their inferences somewhat but will probably be far from revolutionising them. Third, reflective persons need to know their own comprehensive intentions and real explanatory reasons in order to be able to understand themselves, to consider, reflect on and perhaps criticise their intentions, and to change their motives, goals or decision strategies. Wegner's theory implies that it is much more difficult to obtain the respective self-knowledge than it appears. Another theoretical and practical consequence, then, is that a certain degree of theoretical knowledge about such inferential construction processes is helpful if not indispensable for obtaining this self-knowledge; otherwise our self-"knowledge" remains naive. This requires that a somewhat theoretical engagement with oneself, informed about the pitfalls of illusory beliefs, be part of an enlightened personality. However, this was already one of the lessons of psychoanalysis and of the psychotherapeutic movement among intellectuals. Wegner's theory does not make it necessary to change these insights about self-reflection in principle, but it does add some important empirical knowledge about our psychic mechanisms to them.
6. Conclusion
All in all, Wegner has provided a rich theory of the sources and mechanisms of our control beliefs. Even though certain parts of this theory have been criticised elsewhere, it contains much valuable material for a definitive theory on this matter. Wegner, however, then goes on to use this theory to justify his spectacular illusion of the conscious will thesis and, implicitly, also the illusion of empirical will thesis, which have attracted so much attention among the general public. These theses challenge the traditional, intentional-causalist concept of action because they imply that intentions (or their physical underpinnings) do not cause actions. However, the theses cannot withstand critical scrutiny; Wegner has provided nothing tenable to sustain that part of his model which
implies the two theses. (The only substantiation offered for these theses, namely Libet’s theory of the unconscious preparation of intentions, is itself deeply flawed.) This refutation of the challenge is good news because it leaves the intentional-causalist concept of action valid and with it the traditionally conceived and enormously valuable basis of practical rationality, of freedom of decision and of attributing responsibility.
REFERENCES
Anscombe, G[ertrude] E[lizabeth] M[argaret] (1957): Intention. Oxford: Blackwell.
Babcock Gove, Philip (1993) (ed.): Webster's Third New International Dictionary of the English Language Unabridged. Springfield, MA: Merriam-Webster.
Bayne, Tim (2006): Phenomenology and the Feeling of Doing. Wegner on the Conscious Will. In: Susan Pockett; William P. Banks; Shaun Gallagher (eds.): Does Consciousness Cause Behavior? Cambridge, MA: MIT Press. 169-185.
Davis, Lawrence H. (1979): Theory of Action. Englewood Cliffs, NJ: Prentice-Hall.
Dennett, Daniel C. (2003): Freedom Evolves. London: Penguin.
Ginet, Carl (1990): On Action. Cambridge [etc.]: Cambridge U.P.
Gollwitzer, Peter; P. Sheeran (2006): Implementation Intentions and Goal Achievement. A Meta-Analysis of Effects and Processes. In: Advances in Experimental Social Psychology 38. 69-119.
Haggard, Patrick; Sam Clark; Jeri Kalogeras (2002): Voluntary action and conscious awareness. In: Nature Neuroscience 5. 382-385.
Haynes, John-Dylan; Katsuyuki Sakai; Geraint Rees; Sam Gilbert; Chris Frith; Richard E. Passingham (2007): Reading Hidden Intentions in the Human Brain. In: Current Biology 17. 323-328.
Hume, David (1978): A Treatise of Human Nature. Being an Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects. Ed. [...] L. A. Selby-Bigge. Second edition [...] by P. H. Nidditch. Oxford: Clarendon.
Libet, Benjamin (1985): Unconscious cerebral initiative and the role of conscious will in voluntary action. In: Behavioral and Brain Sciences 8. 529-566.
Lumer, Christoph (2005): Intentions Are Optimality Beliefs – but Optimizing what? In: Erkenntnis 62. 235-262.
Lumer, Christoph (2013): The Volitive and the Executive Function of Intentions. In: Philosophical Studies 166. 511-527.
Lumer, Christoph (2014): Libet’s Experiments and the Possibility of Free Conscious Decision. In this volume.
Lumer, Christoph (forthcoming): Reasons and Conscious Control in Automatic Actions.
Mele, Alfred R. (2009): Effective Intentions. The Power of Conscious Will. Oxford: Oxford U.P.
Pauen, Michael (2014): Naturalizing Free Will. Empirical and Conceptual Issues. In this volume.
Pockett, Susan (2006): The Neuroscience of Movement. In: Susan Pockett; William P. Banks; Shaun Gallagher (eds.): Does Consciousness Cause Behavior? Cambridge, MA: MIT Press. 9-24.
Runggaldier, Edmund (1996): Was sind Handlungen? Eine philosophische Auseinandersetzung mit dem Naturalismus. Stuttgart; Berlin; Köln: Kohlhammer.
Searle, John R. (1983): Intentionality. Cambridge: Cambridge U.P.
Wegner, Daniel M. (2002): The Illusion of Conscious Will. Cambridge, MA; London: MIT Press.
Wegner, Daniel M. (2003): The mind’s best trick. How we experience conscious will. In: Trends in Cognitive Sciences 7. 65-69.
Wegner, Daniel M. (2008): Self Is Magic. In: John Baer; James C. Kaufman; Roy F. Baumeister (eds.): Are We Free? Psychology and Free Will. Oxford: Oxford U.P. 226-247.
PART II Naturalising Ethics? Metaethical Perspectives
Neuroethics and the Rationalism/Sentimentalism Divide

MASSIMO REICHLIN

Abstract: Recent work in neuroethics views moral judgment as elicited not by the reflective process of weighing normative reasons, but by the automatic and unconscious activation of emotive responses of approval and disapproval. According to this ‘neurosentimentalism’, moral properties are nothing but flashes of approval and disapproval, wired into our brains by evolution through the exaptation of very ancient mechanisms, such as those connected to oral distaste. This picture misses the basic facts of moral experience: it cannot be reduced to the generation of swift judgments in response to dilemmatic situations, but involves a larger web of choices, extending in time, stemming from, and contributing to, an idea of oneself and of a good life. Typical moral decisions involve the endorsement of certain sentiments rather than their mere expression; this process transforms them into reasons and generates moral motivation.
1. Basic features of neurosentimentalism

Research in neuroethics was stimulated by evidence of the relative roles of different areas of the brain in generating moral responses; this evidence was obtained through the study of alterations of behavioural patterns in patients with focal brain damage. In particular, patients with selective damage to the ventromedial area of the prefrontal cortex (VMPC) perform very badly in decisions and behaviour involving human relationships, although they retain full knowledge of moral rules and social conventions (Damasio 1994; Damasio et al. 1990). These patients do not care about what they know they ought to do, or about the consequences of their actions, though they know they should: lacking any motivation and constraint from the social and moral sentiments, they exhibit a sort of ‘acquired psychopathy’, developing outrageous and antisocial behaviour (Damasio et al. 1990). This evidence led researchers such as Damasio to criticize
traditional views describing moral judgment as a process of conscious reasoning from explicit principles. More recently, the hypothesis of a foundational role for emotions in moral judgment emerging from these studies was tested by neuroimaging techniques, measuring the neural activations of subjects who were asked to respond to situations presenting ethical dilemmas such as the now famous ‘trolley problem’ (Foot 1967; Thomson 1976; Greene et al. 2001). In cases such as these, most people intuitively declare that, in order to save the lives of two or more people, it is permissible to do something—e.g. steering the trolley to another track—that has the consequence of bringing about the death of one person, while it is impermissible to directly kill one person—e.g. pushing a fat man from a footbridge onto the track—in order to save the others. Neuroscientific investigation explained the neurobiological bases of these intuitive judgments by stressing the role played in them by the emotive areas of the brain, which are much more activated in the second than in the first kind of case. Although the situations are identical in consequentialist terms, i.e. both involve one person dead and two or more spared, most people conclude that they have a moral duty to minimize the bad effects in cases of little emotional involvement and a moral duty not to do so in cases of relevant emotional involvement. In one famous experiment involving 60 practical dilemmas, which were divided into moral and non-moral categories (Greene et al. 2001), the moral ‘non-personal’ ones—i.e. those emotionally non-salient—showed patterns of neural activation much more similar to those in the non-moral ones than to those in the personal moral ones; that is, the areas associated with the emotions—such as the medial frontal gyrus, the posterior cingulate gyrus and the angular gyrus—were very much involved in the personal moral dilemmas, while both the non-moral and the moral ‘non-personal’ dilemmas elicited greater activity in the areas associated with higher-order cognition—such as the dorsolateral prefrontal cortex (see also Greene et al. 2004). Moreover, as predicted, the responses of those who endorsed the maximization of utility in ‘personal’, emotionally laden cases were significantly slower than those of the people who did not; the explanation being that, in these cases, the rational, ‘utilitarian’ response had to beat the interference of a countervailing emotional response.
Further research provided evidence as to the causal role played by emotional processes in eliciting moral beliefs. Koenigs et al. (2007) studied moral judgment in individuals with focal damage to the ventromedial prefrontal cortex (VMPC), finding that they exhibit an abnormally high rate of endorsement of utilitarian responses in the emotionally salient, personal moral scenarios, relative to controls. It is important to stress that these patients’ capacities for general intelligence, logical reasoning, and even declarative knowledge of social and moral norms are preserved (Saver & Damasio 1991); in fact, they show normal patterns of judgment as far as non-moral or impersonal moral scenarios are concerned. The lesson is that rational, utilitarian judgments are ordinarily inhibited by certain fixed patterns of automatic emotional response. Patients whose brain injury weakens or annihilates the emotional response, however, rely on the maximization of aggregate welfare, as suggested by the ‘rational’, calculative areas of the brain. Not having to overcome the negative emotion generated by the direct infliction of harm on others, VMPC patients treat the high-conflict moral dilemmas just like all other cases (Ciaramelli et al. 2007; Greene et al. 2008). This shows that emotions play a causal role in the generation of moral judgments, and cannot be regarded as mere consequences of the judgments themselves. Moreover, the emotions seem decisive not only in order to put into effect already possessed moral knowledge, but also in order to acquire it in the first place: in fact, VMPC subjects who were injured early in their development did not even possess the declarative knowledge of moral norms (Anderson et al. 1999). These findings suggested new interpretations of our moral nature, all sharing a critical stance towards the traditional rationalistic view of morality, and an insistence on the role of emotions and automatic processes. The basic idea, developed in particular in the ‘social-intuitionist model of moral judgment’ (Haidt 2001; Greene & Haidt 2002; Greene et al. 2004; Haidt & Joseph 2004; Haidt & Graham 2007), is that a large part of our moral behaviour is shaped by the existence of a coherent set of emotional responses that are somehow wired into our brains and automatically triggered in the presence of certain conditions. In most situations, it is not the case that we reflect rationally on the pros and cons, on the normative reasons in favour of one or the other moral choice and course of action: we simply respond in a predetermined
fashion, appealing to the deposit of intuitive reactions that we developed in the course of evolution. Greene and Haidt reconstructed morality in a dual-process framework, in which emotional/intuitive processes are clearly distinguished from rational ones: the former are quick, effortless, automatic and not accessible to consciousness, the latter are slow, effortful and partly accessible to consciousness. Moral judgments are triggered by emotions spontaneously generated by morally loaded situations, just as reactions of disgust are generated by olfactory sensations (Schnall et al. 2008; Chapman et al. 2009); higher order cognitive processes, such as computation of the cost-benefit ratio, may intervene, when time for more reflexive operations is allowed, mainly with a view to supporting already reached conclusions. As Haidt put it, “Moral reasoning is usually an ex post facto process used to influence the intuitions (and hence the judgments) of other people” (Haidt 2001: 814); it is generally activated by a social demand for a verbal justification and very rarely has the form of a private reflection. For example, it is not that we believe that life begins at conception and therefore we oppose abortion; rather, we have a gut feeling that abortion is bad, and, when asked to justify it, we form the belief that life begins at conception. Most of the time this reasoning is not used to question our own attitudes and beliefs; it can play a causal role only with regard to others’ attitudes and, even then, not because of the force of its arguments, but because of its efficacy in triggering new intuitions. In this picture, conscious moral reasoning is in fact largely dispensable; automatic, intuitive responses are dominant, for these processes are loaded with a motivational force that the deliberate ones lack: the affective system “came first in phylogeny, it emerges first in ontogeny, it is triggered more quickly in real-time judgments, and it is more powerful and irrevocable when the two systems yield conflicting judgments” (Haidt 2001: 819).¹ In other words, this view interprets our moral nature as based on a moral sense; ethics is a matter of feeling, “an innate preparedness to feel flashes of approval or disapproval toward certain patterns of events involving other human beings” (Haidt & Joseph, 2004: 56), in much the same sense in which David Hume said that “To have the sense of virtue, is nothing but to feel a satisfaction of a particular kind from the contemplation of a character. […] We do not infer a character to be virtuous, because it pleases: But in feeling that it pleases after such a particular manner, we in effect feel that it is virtuous” (Hume, 2007: 303).

¹ It must be stressed that the distinction between so-called System 1 and System 2 is not, strictly speaking, a distinction between the emotive and the cognitive, but between two different modes of cognitive function; emotions and intuitions are modes of automatic cognition, and the contrast is with explicit, conscious processes of reasoning. See Kahneman, 2003 and Moll et al., 2005.
This position is nativist, in that it assumes that, although in need of input and shaping by particular cultures, the basic intuitions that shape our moral landscape are not the result of a process of learning in childhood, but are built into our minds by evolution. The basic moral intuitions are a kind of ‘social receptors’ that shape our moral sense just as a few receptors in our skin give us the sense of touch, and those in our tongue the great variety of tasting experiences. This view is also meta-ethically non-cognitivist, in that it relies on the analogy between perceiving a moral quality and perceiving ‘secondary qualities’, such as smells and colours: we believe that our moral judgments are driven by our moral reasoning which tracks moral reality. Actually, however, our reasoning just rationalizes our judgments after the fact, and the objects of those judgments are not real properties in the world, but inherently subjective ways of feeling about certain facts of the world, whose universality depends on their adaptive utility and may change in different circumstances. In other words, this view leads to a new kind of meta-ethical emotivism that may be called ‘neurosentimentalism’: according to this view, moral properties are nothing but flashes of approval and (especially) disapproval, wired into our brains by evolution, possibly through the exaptation of very ancient evolutionary mechanisms, such as those connected to oral distaste. Moral properties are projections of our sentiments, artificially constructed to help human social life. This is meant to overthrow traditional views of moral judgment and decision-making, insofar as these views see morality as the weighing of normative reasons for or against certain judgments, or the reasoned application of certain consciously believed principles. The Kantian idea of moral agency as practical reason is rejected, in favour of a conception in which the conscious appeal to reason plays a much more limited role, if any.
2. Moral reflection and moral agency

This picture, I believe, misses the basic facts of moral experience: its main fault being a reductive account of moral experience, according to which morality essentially consists in the generation of moral judgments, which can be conceived of as quick responses to dilemmatic situations. I do not mean to deny that these sorts of judgments are part of the moral life; however, I submit that they cannot be conceived as the whole, or even as a central part, of morality. Morality has in fact to do with the formation of character, i.e. with the development of an idea of the good life and with the settled disposition to act from certain principles. It is not a matter of single decisions or judgments, but of forming a personal identity; and this cannot be effected by reaching quick decisions on dilemmatic cases, but inherently has to do with a process extending over time: “A moral agent needs to be able to conceive of herself as a temporally extended entity as a necessary condition for moral reflection and decision-making. Yet the recent work in cognitive neuroscience, especially on patients with impaired ventromedial functioning, concentrates on synchronic judgments of rightness or wrongness of hypothetical actions, which do not require this type of intertemporal perspective on action” (Gerrans & Kennett 2010: 588). A stable feature of the examples and dilemmatic cases used in psychological research on morality is that they concern third persons: research subjects are very often asked to express their judgment on some other person’s action, without knowing much about the person, her views and life-projects. And even when the examples refer to first-person moral decisions, they centre on single dilemmatic cases in which you have to choose without knowing how you came to find yourself involved in the situation and where no previous facts about your personal story and moral identity are allowed to influence your judgments. Authentic moral judgments, however, presuppose moral agency, which in turn involves a diachronic sense of oneself, that is, the memory of one’s past and the anticipation of one’s possible future. This implies the adoption of some stable moral identity, that is, the definition of some ideal of a good life and the stable acceptance of certain principles that shape our responses in action and obviously affect our judgments and decisions in token cases.
The basic fact of morality is that it is normative. This means that moral judgments have, or claim to have, authority over us. And they can have authority because they give us reasons for action. Now, it is clear that intuitive flashes of feeling, in a sense, are reasons; but they are reasons only in the causal sense that they are effective on us. They cause our behaviour, rather than justify it. On the contrary, the relationship with our reasons for action presupposed in our standard conception of morality is normative in character: moral reasons are not just effective on us, without our knowing it, rather they can be contemplated beforehand, they can be weighed, and accepted or rejected on reflection. As noted by Kennett and Fine, we are not only reason trackers, that is, people who register reasons and act on them, but also reason responders, that is, people who are capable of responding to reasons as reasons: “genuine moral judgments must be made by moral agents and […] moral agents must, as a matter of conceptual necessity, be reason responders and not merely reason trackers” (Kennett & Fine 2009: 85). In other words, morality has an inherently reflexive nature; or better, morality is only possible, in the form in which we actually know it, because human nature is in itself reflective (Frankfurt 1971; Korsgaard 1996). Being reflective means that we have the capacity to think about ourselves, partly detaching from our motives in order to review them, to contrast different reasons for action and to choose the best one on which to act. This implies the cognitive abilities to conceive of ourselves as entities persisting over time, to anticipate different futures, to plan future action and to commit ourselves to acting in some planned manner (Bratman 2000). Moral reflection, planning and choice, therefore, “engages our sense of self and our capacity to see ourselves, and others, and the world in which we find ourselves, diachronically” (Gerrans & Kennett 2010: 607). It is by reflecting on our reasons, and by choosing to act on intentions that express our acceptance of some stable principles, that we progressively develop a moral character and we make clear to ourselves and to others what our practical identity is. Having a moral character means having stable reasons for action, which are normative for us, even though they can be occasionally overridden by contextually more relevant ones. Now this is not inconsistent with the idea that emotions do and should play a very prominent role in the process. While some
rationalist thinkers have thought that emotions and sentiments play a merely accessory and substantially negligible role, it is clear that it is not so: emotions and sentiments are central even in a view of morality emphasizing the reflexivity of consciousness. Let’s see why this is so. To form a moral belief and an intention to act is to review and weigh various considerations counting in favour of one or the other judgment and intention: the space of morality, therefore, is essentially a space of reasons. But the central questions to ask are the following: What are these reasons made of? How are they constructed? Where do they spring from? The rationalistic tradition emphasizes their springing from the ‘nature of things’, or from the ‘relations between objects’, and even from the ‘intrinsic fittingness’ of certain actions. Perhaps there is some meaning that can be attached to these phrases; however, it cannot be denied that most of our first and most important sources of reasons are not generated by the simple fact that we apprehend other persons as being people like us, or that we acknowledge the relationships that link them to us. Rather, they are generated by the fact that, in normal humans, apprehending other persons as being like us or acknowledging certain human relationships are emotionally laden processes. To see that another one is a person like me just is to feel some empathy for her, to have a disposition to pity her suffering, and perhaps also to rejoice in her happiness. To acknowledge the ties of human relationships just is to feel some degree of solidarity with others stemming from our common humanity. Thus, it is not merely the ‘nature of things’, or some empirical facts concerning other persons, that generates moral reasons and may give rise to obligations towards them; our sentimental nature is in fact essential to grounding our sense of duty. Having the ‘right’ emotions is thus the ordinary condition for entering the space of reasons and morality; which is of course quite different from saying that to accept certain moral judgments is nothing but to feel such emotions. Neurosentimentalism, like other forms of emotivism, would have morality reduced to the mere expression of emotions and sentiments; this expressivist attitude fails to do justice to the role of moral agency and of reflexivity in ordinary moral experience. An adequate appreciation of the role of reflexivity would rather view moral decisions as the endorsement of certain emotions or sentiments, or to put it better, the generation of normative reasons for action, and of intentions to act, through the
reflective judgment on the appropriateness of our emotions and sentiments. To make a moral judgment is not just to utter some emotive response to a situation, but to endorse the kind of response that better fits our view of ourselves, and our normative practical identity. If this is so, it can be argued that, although they possess theoretical knowledge of moral rules, VMPC patients do not really make authentic moral judgments because they lack the basic conditions of moral agency and do not possess a practical identity, which implies the capacity to respond to reasons and to anticipate and take into account the consequences of our choices for ourselves and for others (Kennett & Fine 2009; Gerrans & Kennett 2010). What is peculiar about human morality is just the fact that the reflexivity of consciousness has the capacity to transform an automatic impulse, or a passive event of our psychological makeup, into an intention to act; the reflective process operates this transformation by conferring the authority of reflection on a motive. The neuroimaging findings are themselves consistent with this view. In fact, even in the context of quick responses to dilemmatic situations, the innate, automatic responses generated by ‘system 1’ may sometimes be corrected in the light of ‘system 2’. We can speculate that those whose responses differ from the standard ones have so deeply endorsed certain values that they can in some way overwrite and override the automatic responses. There is evidence that this can be explained as the unconscious inhibiting influence exerted by prior reasoning on the activation of intuitive responses that are in contrast with reflectively endorsed values (Kennett & Fine 2009: 91-93). It must be acknowledged, then, that effortful, longer processes of reasoning can in fact affect the quick responses of ‘system 1’, so that there is much more interplay between the two systems than the social intuitionist model is willing to allow.
3. Emotions as conditions and motives

Emotions and sentiments do offer an essential part of the materials out of which moral reasons are generated. But they play a much deeper role as well, and one connected with the very existence of morality. In fact, patients with selective brain damage who lack the standard activation of the emotive areas of the brain are unable to reach ordinary moral judgments, that is, judgments in which the
interests and feelings of others are appropriately considered; even though they do not lack the intellectual competence concerning moral rules, they are unable to tune in on the wavelength of morality. This suggests that the emotions are conditions of possibility of authentic moral judgments, since it is their task to ‘activate’ the moral faculty. This is perfectly in line with the idea of human reflexivity as central to the generation of normativity; in fact, reflexivity itself is not the work of solitary beings, but is inherently tied to the social character of human beings. We would not reflect on our reasons for doing this or that if we had no conscience, that is, if we were not aware of others’ expectations of us. Other human beings claim some kind of recognition from us, some respect or consideration of their interests, and we need others’ help and consideration as well, in order to further our interests: this is why we develop the capacity to reflect on our choices, and to strike a balance between different considerations. We can say, therefore, that some kind of empathy, that is, a capacity to feel the needs of others, to put oneself in their shoes and to consider things also from their perspective, is a condition of possibility of moral judgments. In other words, it can be perfectly true that morality presupposes a moral sense, in the precise sense of a disposition to feel approval and disapproval that evolved from much more basic mechanisms, perhaps such as those connected to oral distaste. However, this can by no means be taken to imply that all that there is to morality is the unreflective expression of such emotions; to show the evolutionary route by which a process evolved is definitely not to have reduced that process to its original determinants. It is one thing to say that we would not have morality unless we had certain basic dispositions to feel some emotions and sentiments; this is borne out by the fact that individuals who lack the anatomical structures necessary for those emotions do not ‘have morality’ in the way we do. It is definitely another thing to say that the morality we have is but the work of those basic emotions and sentiments. This cannot be true, for it fails to acknowledge the fact that we deliberate on our choices, that we very often discuss, with ourselves and with others, what to do, and that most of our decisions are not analogous to those quick replies to dilemmatic situations that were investigated in neuroethical research. Rather, they are much longer and more winding routes in which several normative reasons are considered, and some are rejected, some endorsed. We very often
review our emotional reactions by considering things in a different way and from different perspectives; to reflect on our reasons for action is just to discard or correct some of our initial impressions, bringing into play a larger web of considerations. And of course emotions and sentiments do play a decisive role in motivating agents as well. One standard philosophical question concerning morality is whether moral judgments have motivating power in themselves, or if they get it from some external source. Hume famously provided an internal explanation of moral motivation, and proceeded to show that, because of their inherent motivating power, moral distinctions cannot be the work of reason; in fact, reason is motivationally inert, it “is and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them” (Hume 2007: 266). As a consequence, those who defend some kind of rationalism or realism in ethics often accept motivational externalism, according to which moral judgments are cognitive propositions with no inherent motivational power: motivation must be supplemented by contingent affective responses in order for our moral judgments to guide our behaviour (Brink 1986 and 1989; Roskies 2003 and 2008). However, once we reconstruct the process of moral decision-making as one of forming an intention by endorsing certain emotions or sentiments as the ‘correct ones’, or those giving rise to the stronger reasons, we can accept a different form of internalism, in which both reason and sentiments cooperate to produce motivation. Let’s imagine that I have to decide whether to help a friend in need; this is somehow inconvenient for me, so that my disposition to feel some empathy for him, and to put myself in his shoes, has to ‘fight’ against a contrary disposition, advising me to refrain from the endeavour. In order to make up my mind, I embark on a reflective process, letting various considerations come to my mind and show their weight, at the end of which I decide to endorse my initial disposition to help as ‘the right thing to do’, or ‘the stronger reason’. I therefore form an intention to help my friend; at this point, my motivation to act on this intention is supported both by the empathy for the other’s difficulty and by the authority of reflection. I do in fact help him both because I feel his need, imagine myself in the situation and understand how much he desires my help and because I have come to the decision that it is the right thing to do, perhaps even that it is my duty to help him. The sentimental element and the
reflective/rational one are in fact blended into my intention to act, which was formed by endorsing that very sentiment. Of course, hard-line rationalists would claim that the determination of reason should be the only motive to our action, that we should act only ‘from duty’. However, we can recall that even Kant viewed respect for the moral law as a peculiar sentiment, stemming from reason itself, that motivates human action. In any case, it seems to me clear that, while sentiments and emotions have an obvious motivating power that can act independently of any further reflection and reasoning, to become convinced that acting on the reason offered by some sentiment is the best thing to do does add much to that motivating power. An action that is motivated by such a reflective endorsement is a fully justified one, a fully authoritative one; it is one in which we have the deepest and most serious reasons to engage, and which we can more confidently be trusted to accomplish, because it is an authentic expression of our moral agency and identity. On the contrary, an action stemming directly from our emotions is subject to being later disavowed, as soon as the emotions change, for it is not an authentic expression of our moral agency.
4. Sentimentalism, rationalism and moral realism

Neurosentimentalism views neuroethical findings as justifying a non-cognitivistic meta-ethical view: moral judgments lack any truth-value; they merely express the speakers’ emotions, and perhaps have some pragmatic influence on the attitudes of others. However, on the basis of the above considerations the most appropriate meta-ethical view seems to be one acknowledging both the role of emotions in starting up moral experience and the key contribution of reason to implementing it. Should this view be considered sentimentalist or rationalist? Neither, of course, if both terms are taken in their strictest meaning; both, if we allow some laxity in their definition. We may distinguish at least two versions of sentimentalism: the first contends not only that moral distinctions are the work of a moral faculty or sense, but also that reason has no power of generating, or of correcting, the emotions or sentiments which moral judgments express. In other words, a sentiment can be corrected only by another sentiment. This is the view that we originally find, for
example, in Hutcheson’s Illustrations on the Moral Sense, according to which there are no normative reasons: “all exciting reasons presuppose instincts and affections; and the justifying presuppose a moral sense” (Hutcheson 1991: 308). Neither the exciting nor the justifying ones are normative reasons, in the sense of being considerations that can be weighed, and accepted or discarded: both are simply effective on us, being the effect of a disposition to approve and disapprove, different from reason, implanted in us by our Creator. For Hutcheson, the moral quality is neither the external motion and its tendencies known by our senses, nor the apprehension of the affections of the agent inferred by reason: it is only the “perception of approbation or disapprobation arising in the observer, according as the affections of the agent are apprehended kind in their just degree, or deficient, or malicious” (Hutcheson 1991: 319). Should this perception be disordered, Hutcheson says that reason can do nothing but suggest former approbations and represent the general sense of mankind; it has no direct power of correcting deficient perceptions. Shaftesbury had offered a very different account of sentimentalism: in his view, moral distinctions are reflective affections, that is, affections concerning affections of pity, kindness, gratitude and their opposites, brought to the mind by reflection: the work of the moral sense is to give rise to “another kind of affection towards those very affections themselves, which have been already felt and have now become the subject of a new liking or dislike” (Shaftesbury 1999: 172). In this case, however, the moral sense does not constitute or construct the moral facts: it just gives access to the objective moral properties of actions and characters. This means that, should our moral sense be corrupted, it is not impossible for reason to correct its deliverances, as stressed by Shaftesbury himself: “And thus we find how far worth and virtue depend on a knowledge of right and wrong and on a use of reason sufficient to secure a right application of the affections, that nothing horrid or unnatural, nothing unexemplary, nothing destructive of that natural affection by which the species or society is upheld, may on any account or through any principle or notion of honour or religion be at any time affected or prosecuted as a good and proper object of esteem” (Shaftesbury 1999: 175).
On this account, it is not the case that passions or sentiments rule, and reason plays but an instrumental role; even though morality presupposes a moral sense, reason does have a controlling power and must secure the “right application” of the affections.² The most plausible meta-ethical view seems to be the one according to which the social feelings are preconditions of moral cognition, but reason has the role of correcting the emotional responses, and of giving rise to new reasons by stimulating the adoption of a larger and more comprehensive viewpoint on practical matters. In this picture, we may also say that reason tracks moral reality, or better, that it tracks those reasons for acting that are objectively there, in that they can be captured and accepted by anyone confronted with the situation. It is not the case that there are moral facts ‘out there’, just as there are natural facts out there to be perceived; however, it is true that there are moral reasons ‘out there’, that is, there are relations, considerations, and other facts about the world and about other people, which are mirrored in our sentiments and emotions and which everyone who reflects on the situation may apprehend and accept as her reasons to act. In this sense, morality is in itself cognitive, and moral propositions can be viewed as beliefs, endowed with a truth-value.³
² It is a difficult historical question to say what David Hume’s—the most influential sentimentalist in history—exact views on this issue were; for a balanced analysis, see Cohon 2008.
³ I have tried to defend this kind of moderate rationalism from recent sentimentalist critiques in Reichlin 2012.

5. Conclusion

In conclusion, unless moral experience is construed, in a very revisionist and deflationary fashion, as involving the mere synchronic generation of moral outputs when confronted with situations eliciting moral projections, neurobiological findings on moral judgments do not vindicate moral non-cognitivism or expressivism. The meta-ethical reading of the data that seems to fit best with the phenomenology of moral experience is one acknowledging the basic roles of both the automatic, unconscious emotions of approval and disapproval, and the controlled, conscious processes of reflective endorsement. Both moderate rationalism and
sophisticated sentimentalism, which stress the interplay between the two systems in generating moral judgments, seem therefore to be plausible views.
REFERENCES

Anderson, Steven W.; Antoine Bechara; Hanna Damasio; Daniel Tranel; Antonio R. Damasio (1999): Impairment of Social and Moral Behavior Related to Early Damage in Human Prefrontal Cortex. In: Nature Neuroscience 2. 1032-37.
Bratman, Michael (2000): Reflection, Planning, and Temporally Extended Agency. In: Philosophical Review 109. 35-61.
Brink, David O. (1986): Externalist Moral Realism. In: N. Gillespie (ed.): Moral Realism. Proceedings of the 1985 Spindel Conference. In: The Southern Journal of Philosophy 24, suppl. 23-41.
Brink, David O. (1989): Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Chapman, Hanah A.; D. A. Kim; Joshua M. Susskind; Adam K. Anderson (2009): In Bad Taste. Evidence for the Oral Origins of Moral Disgust. In: Science 323. 1222-26.
Ciaramelli, Elisa; Michela Muccioli; Elisabetta Làdavas; Giuseppe di Pellegrino (2007): Selective Deficit in Personal Moral Judgment Following Damage to Ventromedial Prefrontal Cortex. In: Social Cognitive and Affective Neuroscience 2. 84-92.
Cohon, Rachel (2008): Hume’s Morality. Feeling and Fabricating. Oxford: Oxford University Press.
Damasio, Antonio R.; Daniel Tranel; Hanna Damasio (1990): Individuals with Sociopathic Behavior Caused by Frontal Damage Fail to Respond Autonomically to Social Stimuli. In: Behavioural Brain Research 41. 81-94.
Damasio, Antonio R. (1994): Descartes’ Error. Emotion, Reason, and the Human Brain. New York: Putnam.
Damasio, Hanna; Thomas Grabowski; Randall Frank; Albert M. Galaburda; Antonio R. Damasio (1994): The Return of Phineas Gage. Clues About the Brain from the Skull of a Famous Patient. In: Science 264. 1102-5.
Foot, Philippa (1967): The Problem of Abortion and the Doctrine of Double Effect. In: Oxford Review 5. 5-15.
Frankfurt, Harry (1971): Freedom of the Will and the Concept of a Person. In: Journal of Philosophy 68. 5-20.
Gerrans, Philip; Jeanette Kennett (2010): Neurosentimentalism and Moral Agency. In: Mind 119. 585-614.
Greene, Joshua; R. Brian Sommerville; Leigh E. Nystrom; John M. Darley; Jonathan D. Cohen (2001): An fMRI Investigation of Emotional Engagement in Moral Judgment. In: Science 293. 2105-8.
Greene, Joshua; Jonathan Haidt (2002): How (and Where) Does Moral Judgment Work? In: Trends in Cognitive Sciences 6. 517-23.
Greene, Joshua; Leigh E. Nystrom; Andrew D. Engel; John M. Darley; Jonathan D. Cohen (2004): The Neural Basis of Cognitive Conflict and Control in Moral Judgment. In: Neuron 44. 389-400.
Greene, Joshua; Sylvia A. Morelli; Kelly Lowenberg; Leigh E. Nystrom; Jonathan D. Cohen (2008): Cognitive Load Selectively Interferes with Utilitarian Moral Judgment. In: Cognition 107. 1144-54.
Haidt, Jonathan (2001): The Emotional Dog and its Rational Tail. A Social Intuitionist Approach to Moral Judgment. In: Psychological Review 108. 814-34.
Haidt, Jonathan; Craig Joseph (2004): Intuitive Ethics. How Innately Prepared Intuitions Generate Culturally Variable Virtues. In: Daedalus 133. 55-66.
Haidt, Jonathan; Jesse Graham (2007): When Morality Opposes Justice. Conservatives Have Moral Intuitions that Liberals May Not Recognize. In: Social Justice Research 20. 98-116.
Hume, David (2007): A Treatise of Human Nature. A Critical Edition. Ed. by David F. Norton and Mary Norton. Oxford: Clarendon Press. Vol. 1.
Hutcheson, Francis (1991): Illustrations on the Moral Sense. In: Daiches D. Raphael (ed.): British Moralists 1650-1800. Indianapolis: Hackett Publishing Company, vol. 1.
Kahneman, Daniel (2003): A Perspective on Judgment and Choice. Mapping Bounded Rationality. In: American Psychologist 58. 697-720.
Kennett, Jeanette; Cordelia Fine (2009): Will the Real Moral Judgment Please Stand Up? The Implications of Social Intuitionist Models of Cognition for Meta-Ethics and Moral Psychology. In: Ethical Theory and Moral Practice 12. 77-96.
Koenigs, Michael; Liane Young; Ralph Adolphs; Daniel Tranel; Fiery Cushman; Marc Hauser; Antonio R. Damasio (2007): Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgements. In: Nature 446. 908-11.
Korsgaard, Christine (1996): The Sources of Normativity. Cambridge: Cambridge University Press.
Moll, Jorge; Roland Zahn; Ricardo de Oliveira-Souza; Frank Krueger; Jordan Grafman (2005): The Neural Basis of Human Moral Cognition. In: Nature Reviews Neuroscience 6. 799-809.
Reichlin, Massimo (2012): The Neosentimentalist Argument Against Moral Rationalism. Some Critical Observations. In: Phenomenology and Mind 3. 163-175.
Roskies, Adina (2003): Are Ethical Judgments Intrinsically Motivational? Lessons from “Acquired Sociopathy”. In: Philosophical Psychology 16. 51-66.
Roskies, Adina (2008): Internalism and the Evidence from Pathology. In: Walter Sinnott-Armstrong (ed.): Moral Psychology. Volume 3: The
Neuroscience of Morality. Emotion, Brain Disorders, and Development. Cambridge, MA: MIT Press. 191-206.
Saver, Jeffrey L.; Antonio Damasio (1991): Preserved Access and Processing of Social Knowledge in a Patient with Acquired Sociopathy due to Ventromedial Frontal Damage. In: Neuropsychologia 29. 1241-1249.
Schnall, Simone; Jonathan Haidt; Gerald L. Clore; Alexander H. Jordan (2008): Disgust as Embodied Moral Judgment. In: Personality and Social Psychology Bulletin 34. 1096-1109.
Shaftesbury, Earl of (1999): Inquiry Concerning Virtue and Merit. In: Idem: Characteristics of Men, Manners, Opinions, Times. Ed. by L. E. Klein. Cambridge: Cambridge University Press. 163-230.
Thomson, Judith J. (1976): Killing, Letting Die, and the Trolley Problem. In: The Monist 59. 204-217.
Experimental Ethics – A Critical Analysis

ANTONELLA CORRADINI

Abstract: According to experimental philosophers, experiments conducted within the psychological sciences and the neurosciences can show that moral intuitions are incapable of thorough justification. Thus, as a substitute for reliable philosophical justifications, psychological or neuropsychological explanations should be taken into consideration to provide guidance about our conduct. – In my essay I shall argue against both claims. First, I will defend the justificatory capacity of moral philosophy and maintain that empirical evidence cannot undermine moral judgements. Secondly, I will point to some methodological difficulties in psychological and neuroscientific explanations of moral judgements. Finally, I will show that Greene’s (2008) argument from morally irrelevant factors fails to prove that moral implications can be drawn from scientific theories about moral psychology.
1. Experimental ethics as a kind of experimental philosophy

Experimental ethics is a discipline belonging to the wider domain of experimental philosophy. A few words need to be said about this new field of intellectual inquiry. In a recent article, Shaun Nichols, one of the initiators of the experimental philosophy movement, contrasts experimental with traditional philosophy. The latter’s problems, he maintains, are notoriously resilient; not by chance, in fact, are they traceable back to the earliest days of philosophy. But no less obsolete are the techniques that have been employed for their solution. As Nichols puts it, “The central technique is careful and sustained thought, sharpened by dialogue with fellow philosophers” (2011: 1401). But, if the conceptual, a priori methods of traditional philosophy fail to solve the problems that philosophy faces, nothing prevents us from looking for new methods, capable of achieving advancements in philosophy. First, we should not forget that philosophical problems such as free will, morality and consciousness are rooted in common sense and can therefore be appreciated without
presupposing any specific training. Moreover, to the extent that they are common-sense problems, it is appropriate to investigate their psychological origins through the experimental methods of the social sciences, such as surveys on philosophical matters. Brain imaging also has to be counted among the techniques used in experimental philosophy, along with surveys. As the literature shows, however, criticism of traditional philosophy is not just directed at its being an “armchair philosophy”, but addresses, above all, the question of the reliability of the philosophers’ intuitions. Conflicts of intuitions that traditional philosophy is unable to solve and incompatibility between philosophers’ and ordinary people’s intuitions induce experimental philosophers to privilege both ordinary people’s intuitions and the experimental methods through which these can be ascertained (see also Knobe & Nichols 2008: Ch. 1). Thus, my first step into experimental ethics will consist in spelling out what experimental ethics’ criticism of moral philosophers’ intuitions looks like.¹

¹ On the general debate about the reliability of philosophical intuitions see Weinberg (2007), Grundmann (2010), Hoffmann (2010), Horvath (2010), May (2010), Pinillos et al. (2011), Seeger (2010), Shieber (2010), Sosa (2010), Tobia et al. (2013).
2. On the unreliability of moral intuitions

2.1 From the philosophical point of view

In Appiah’s opinion (2008: 77), traditional ethics is affected by an “intuition problem”. Before confronting this problem, however, it is appropriate to ask ourselves what moral philosophers mean by moral intuition. In the history of moral philosophy we can find two main meanings of the concept. According to the first meaning, intuitions are spontaneous, prereflective moral judgements about particular cases, formulated by well-educated people who also act in a morally right way. This is approximately what Sidgwick and Ross maintain about intuitions. According to the second meaning, instead, intuitions are rather opinions of ordinary people who are not particularly competent about moral cases. This is roughly the way in which Utilitarians like Bentham think of intuitions. The difference between the two meanings reflects
a difference regarding the degree of epistemic reliability that is attributed to intuitions, that is to say whether they do or do not represent a kind of moral knowledge. In both cases, however, experimental ethicists maintain that the above-mentioned “intuition problem” arises because there is no justificatory procedure to tell us when a moral intuition is reliable. As an example of the second meaning of “moral intuition”, let us take R.M. Hare’s theory of two-level moral thinking (1981). The first intuitive level concerns general prima facie principles, transmitted to us by education and tradition. They are at the basis of the moral intuitions we appeal to in our ordinary conduct. Moral intuitions, however, are not a kind of moral knowledge. According to Hare they are epistemically reliable only to the extent that second-level moral thinking confers plausibility on them. And this is the level of critical thinking, typical of an enlightened and fully informed agent, whose task is to check the validity of the first-level principles in individual cases, particularly when they clash with each other. In order to be accepted by a rational agent, a received opinion has to measure up to the scrutiny of critical thinking. If it is not successful, it will not help to appeal to moral intuitions that owe their apparent strength only to their familiarity. Hare’s theory could seem to be a good candidate for solving the “intuition problem”, but, according to Appiah, it does not succeed in doing so. Why not? Because the principles of critical thinking are not always preferable to those of common sense. A case related to Hare’s theory is that of sadistic preferences, which should be fulfilled if they maximized general utility (1981: 8.6). Counterintuitive results thus make a case against Hare’s utilitarianism. But, if this is true, we would have to suppose that the supporters of the first position about the meaning of moral intuition are right in maintaining that moral intuitions do have a certain degree of epistemic reliability. As an example of this conception let us take John Rawls’ theory of justice. In Rawls’ view the principles of justice are not self-evident; in fact, the conditions imposed on the original position display the epistemological status of reasonable conventions; they are the starting point of the justification procedure regarding the principles of justice, but they do not exhaust it. Another step is needed to complete this procedure, one which relates the principles of justice to the so-called “considered judgements”. These express our sense of justice in the best way, as they are the
judgements we can fully abide by in the absence of perturbing elements, such as the influence of emotions or of our own personal interests. Considered judgements are the benchmark against which the principles of justice have to be evaluated. However, such a fundamentality is only prima facie, since we cannot rule out that our considered judgements will be modified in the light of the principles of justice we adhere to. According to Rawls, justification is thus a matter of convergence and reciprocal adaptation between the abstract general principles of justice and considered judgements. If both kinds of elements cohere, the methodical goal of justification has been reached. Rawls refers to this procedure as reflective equilibrium: “It is an equilibrium because at last our principles and judgements coincide; and it is reflective since we know to what principles our judgements conform and the premises of their derivation” (1971: 20). Is reflective equilibrium able to succeed where Hare’s two-level moral thinking has failed; that is, is reflective equilibrium able to solve the “intuition problem”? Appiah’s answer is in the negative, again. His main criticism of reflective equilibrium is addressed to the indeterminacy of the procedure. In particular, it cannot be applied in order to assess a conflict between intuitions if the ethical theory is not already established on the basis of criteria independent of intuitions. As the criteria’s independence is cast into doubt, we are left again without any guide as to which of the conflicting intuitions is the right one (2008: 78-80).² The final conclusion drawn by Appiah is that justification procedures in philosophy seem unable to ensure the reliability of moral intuitions. Appiah’s conclusion can first be challenged by observing that he is quite strict with the justification procedures of moral philosophy. As we have just seen, the intuition problem arises because sometimes we cannot establish which intuitions are reliable. But this seldom happens in Hare’s utilitarianism; and as to Rawls, the criticism of the dependence of ethical theory on intuitions seems to me simply wrong, if we consider the whole framework of his theory of justice, including the contractualist argument (on this point see Corradini 1999).
² For a criticism of Rawls’ method of reflective equilibrium see also Singer (2005).
It seems that Appiah requires that moral philosophy attain certainty about the intuitions’ rightness. But why demand this of moral philosophy, when the justification of scientific theories does not aim at certainty and instead recognizes the falsifiable and provisional character of scientific knowledge? Appiah seems to have in mind an old-fashioned model of moral philosophy, according to which moral principles and intuitions should be certain and indefeasible. But, instead, it is reasonable not to require so much and to put moral philosophy and philosophy of science at least on a par as far as their respective justification claims are concerned.

2.2 From the psychological point of view

But other bad news hangs over moral intuitions. As Appiah puts it, “a wave of empirically based research into human decision making has depicted many of those intuitions to be – in ways that seem universal, and perhaps incorrigible – unreliable and incoherent” (2008: 82). What Appiah refers to are the so-called “framing effects”, discovered in a famous experiment by psychologists Kahneman & Tversky (1981). As the authors argue, human rational decision making is actually not very rational, inasmuch as people’s choices often depend on how the options are framed. In the example presented in the 1981 essay, a country is preparing for the outbreak of an unusual Asiatic disease that, without any public intervention, would kill 600 people. Two different programmes, A and B, to fight the disease were proposed to group 1. If programme A is adopted, 200 people are saved; if programme B is adopted, there is a one-third chance of saving 600 people and a two-thirds chance of saving none. 72 percent of the experimental subjects chose A and 28 percent chose B, thus showing a strong aversion to risk. Now, group 2 was given a different description of the dilemma: if programme C is adopted, 400 people die; if programme D is adopted, there is a one-third chance that nobody dies and a two-thirds chance that 600 will die. In this case, only 22 percent of the experimental subjects chose C, although A and C and B and D are equivalent options, which are just described in different manners.
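The equivalence can be made explicit with a line of arithmetic (a worked restatement of the four programmes, not part of Kahneman & Tversky’s own presentation; E_A, …, E_D stand for the expected number of lives saved under each programme):

\[
\begin{aligned}
E_A &= 200, & E_B &= \tfrac{1}{3}\cdot 600 + \tfrac{2}{3}\cdot 0 = 200,\\
E_C &= 600 - 400 = 200, & E_D &= \tfrac{1}{3}\cdot 600 + \tfrac{2}{3}\cdot 0 = 200.
\end{aligned}
\]

A and C are in fact the same certain outcome, and B and D the same lottery; only the gain frame (“saved”) versus the loss frame (“die”) differs. The reversal from 72 percent choosing A to 22 percent choosing C thus reflects risk aversion under the gain frame turning into risk seeking under the loss frame, not a change in the options themselves.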
The same framing effects that lead rational decision making to failure also seem to find application in the moral domain. This implies that the intuitions considered to be right by moral philosophers are, at least in part, influenced by factors which, per se, are not relevant to the moral judgement. As to the moral field, let us examine Wheatley & Haidt’s 2005 experiment. The experimental subjects were given a posthypnotic suggestion to feel disgust when they heard an emotionally neutral word such as “take”. Once presented with two versions of the same scenario, identical except for the wording, they reacted differently, depending on whether one of the two contained the cue word. In the version in which a corrupt congressman “takes” a bribe, the experimental subjects judged the immoral conduct more negatively than in the other version, in which the congressman “is bribed” by the tobacco lobby. Appiah’s conclusion is that people’s moral judgement can be shaped by a little hypnotic priming (2008: 87). What is the lesson that we should draw from these and similar experiments? An experimental philosopher will interpret them as Appiah proposes: the presence in us of psychological mechanisms which in part influence our actual choices, the universality and cross-cultural nature of these mechanisms, the good explanations that psychologists offer for them (prospect theory, for example, by Kahneman & Tversky), all these elements induce experimental philosophers to trust psychological results more than philosophical justifications, to claim that psychological explanations undermine moral judgements, that moral intuitions are deeply unreliable and, as a consequence, not capable of guiding our conduct. In my eyes, however, a non-experimental, traditional philosopher is fully entitled to react differently to the above experiments. My first remark is that Appiah has not shown anywhere that psychological explanations undermine specifically moral judgements. Both in Kahneman & Tversky’s and in Wheatley & Haidt’s experiments, subjects judge wrongly because they judge differently scenarios which are identical as to their relevant characteristics. Thus subjects judge irrationally, not immorally, although rationality rules are just as normative as morality rules. Only if, in Wheatley & Haidt’s experiment, the active form of the verb “take” had a moral relevance as compared to the passive form of the verb “to be bribed” could we maintain that the two scenarios are morally not equivalent. But, then, the experimental subjects would deliver the right moral judgement, even though they had been given a posthypnotic suggestion.
That said, is it plausible to maintain that psychological explanations undermine normative judgements? I believe that it is neither our moral nor our rational judgements that are undermined – but rather the application conditions of these judgements. Let me explain this point. A normative system consists not only of prescriptive elements but also of descriptive ones, in particular a specification of the conditions under which a prescription is formulated. In other words, obligations are mostly conditional obligations, taking the form O(a, b), where b is the condition of application of the obligation. Let us imagine that a certain prescription is repeatedly not fulfilled by the people who should fulfil it. Is it reasonable to say that the non-fulfilment of an obligation implies its falsification? The answer is negative, because Hume’s law makes the falsification of an obligation logically impossible. However, Hume’s law allows that the condition under which the obligation arises can be falsified. On the one hand, the distinction between an obligation and its application conditions accounts for the principle of the real possibility of the obligation’s fulfilment (ought implies can); on the other, it shows that psychological explanations can in no way undermine any justified normative judgements.
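The point can be put schematically. The following lines are only an illustrative sketch: they render the conditional obligation O(a, b) above in a dyadic notation O(a | b), one standard way of writing conditional obligations, not necessarily the author’s own formalism:

\[
\begin{aligned}
&O(a \mid b) &&\text{$a$ is obligatory, given that the condition $b$ obtains;}\\
&b \wedge \neg a &&\text{the obligation is violated, but not thereby falsified;}\\
&\neg b &&\text{the descriptive condition fails, so $O(a \mid b)$ simply does not apply.}
\end{aligned}
\]

On this reading, what empirical research can falsify is only the descriptive condition b – for instance, the ‘can’ presupposed by ‘ought implies can’ – never the prescription itself.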
philosophy proposes, such as the experimental methods of social science (see Nichols 2011 cited above). But social science usually uses statistical methods, statistical methods are based on mathematics, and mathematics belongs to the general framework of logic … So, I do not think that we can really get rid of logic. At any rate, here we have an attempt by Appiah (2008: 25 ff) to logically confute Hume's law. The structure of his argumentation is based on the disjunctive syllogism:

Premise 1: A ∨ OB (whereby A is descriptive)
Premise 2: non-A
———————————————————
Conclusion: OB

Now, Appiah assumes that every sentence is either moral or not moral. We will then have two hypotheses:

Hypothesis 1: A ∨ OB is not a moral but a descriptive sentence. Then the disjunctive syllogism is a violation of Hume's law.

Hypothesis 2: A ∨ OB is not a descriptive but a prescriptive (deontic) sentence. Then an alternative violation of Hume's law can be constructed, based on the rule of ∨-introduction:

A
————
A ∨ OB

However, some deontic logicians, such as von Kutschera (1977), Galvan (1991) and Schurz (1997), have pointed out that the assumption that every sentence is either moral or not moral is not valid: in addition to deontic and descriptive sentences there are also mixed sentences, and A ∨ OB is precisely such a mixed sentence. But Hume's law only forbids drawing deontic consequences from purely descriptive premises; in Hypothesis 1 the premise A ∨ OB is not purely descriptive, and in Hypothesis 2 the conclusion A ∨ OB is not purely deontic. Thus, neither of the two hypotheses represents a violation of Hume's law. By the way, a much simpler alleged violation of Hume's law on the basis of mixed sentences is obtained by the application of Modus Ponens: A → OB, A ⊢ OB.
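The dialectic can also be set out schematically. The following is only a minimal sketch in standard deontic notation; the three-way classification of sentences into purely descriptive, purely deontic and mixed follows von Kutschera, Galvan and Schurz as cited above:

% Appiah's two candidate counterexamples:
%   (i) disjunctive syllogism      (ii) disjunction introduction
\[
\text{(i)}\ \frac{A \lor OB \qquad \neg A}{OB}
\qquad\qquad
\text{(ii)}\ \frac{A}{A \lor OB}
\]
% In (i) the premise A \lor OB, and in (ii) the conclusion A \lor OB,
% is a mixed sentence. Hume's law, as proved in the deontic systems
% listed above, only excludes derivations of the form
\[
A_1, \dots, A_n \;\vdash\; OB
\qquad \text{with } A_1, \dots, A_n \text{ purely descriptive}
\]
% (setting aside trivial cases in which OB is itself a logical truth).
% Since neither (i) nor (ii) -- nor the Modus Ponens variant
% A \rightarrow OB,\ A \vdash OB, whose conditional premise is mixed --
% derives a purely deontic conclusion from purely descriptive premises,
% none of them touches the law.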
3. The role of explanation in experimental ethics

3.1 Aspects of explanation in experimental ethics

Once it is established that moral intuitions are unreliable, both from the philosophical and the psychological point of view, we are left with empirical, in particular psychological and neuroscientific, explanations of moral issues. Therefore, let us ask ourselves what these explanations look like in the moral domain according to experimental philosophy. The first question to ask is what the explananda of these explanations are. Once again, we come across intuitions, but they are no longer the unreliable, philosophically badly justified and empirically often falsified moral intuitions we have spoken about so far. Rather, they correspond to how people actually think (or feel) about moral issues, which is in principle (but not always) quite different from how traditional philosophers think about them. It is the experimental investigation of psychological processes which reveals what ordinary people really think (Knobe & Nichols 2008: Ch. 1). Thus "moral intuition" here means moral common sense, moral folk-psychological opinion, and the explananda of the psychological explanations consist of the beliefs and feelings people have about certain ethically relevant issues. As to explanation itself, experimental philosophers conceive of it as the identification of the mechanisms which lie at the basis of people's moral intuitions. These mechanisms have the characteristic of being unconscious, and this makes it intelligible why people sometimes claim to know that a certain practice is wrong (e.g. incest) without knowing why it is so (Haidt, 2001: 814). Mechanisms are unobservable to those who harbour them, not because they are too tiny, like sub-atomic particles, or because they are abstract – which is not the case, since they have neural correlates – but because they are said to be inaccessible to the subject, due to the debatable assumption that introspection fails to grasp the causes of our thoughts, actions and behaviours.3

3 It is quite surprising how prone both experimental philosophers and psychologists are to accept Nisbett & Wilson's (1977) verdict that the cognitive processes that cause people's behaviours are not accessible to consciousness. This thesis – never properly justified – leads to the sceptical consequence that justifications of behaviour based on reasons are nothing but pseudo-objective, illusory, post hoc constructions. On this see Haidt 2001: 822 ff. and Greene 2008: 35-6. For a critical comment on this topic see Lo Dico, submitted: Ch. 3.
Besides their shared feature of being unconscious, mechanisms differ depending on the preferred theory. Those who adopt a "social intuitionist approach to moral judgement" will support the fundamental role of emotions as causal mechanisms explaining moral intuitions (Haidt 2001). Other experimental philosophers like Joshua Greene will instead pursue "a middle course between the traditional rationalism and more recent emotivism that have dominated moral psychology" (2001: 2107). Last but not least, while admitting that both reasoning and emotion play some role in our moral behaviour, Marc Hauser declares that neither can fully explain the process leading up to moral judgement. To this end, it is necessary to advocate "an organ of the mind that carries a universal grammar of action" (2006: 14). It is then no surprise that explanations of moral topics are often empirically underdetermined. Empirical underdetermination occurs when two theoretically non-equivalent explanations are empirically equivalent, that is to say, they have the same empirical content. But if empirical underdetermination is a problem for the philosophy of science (see Kosso 1995: Ch. 5), it is an even thornier one for experimental ethics, in particular in those cases in which mechanisms misleadingly influence our choices, and being aware of these mechanisms helps us to overrule them (Appiah 2008: 99). Another source of methodological trouble, moreover, is the possible lack of relevance of the explanans to the explanandum. I shall expand on this point by taking Greene's experiments on the trolley and footbridge dilemmas as examples. Greene and his colleagues' experiments start from the hypothesis that some moral dilemmas, that is to say the moral-personal ones, engage emotional processes to a larger extent than others, that is to say the moral-impersonal ones, and that these differences in emotional engagement affect people's judgement. From this hypothesis follows the prediction that brain areas associated with emotions are more active during the contemplation of the footbridge dilemma, which is an instance of a moral-personal dilemma (2001: 2106). This prediction has been confirmed by experimental results.
Now, my question is: how do we know that what is explained through emotional influence is a moral belief? By moral belief I mean a belief according to which a certain state of affairs represents the fulfilment of others' good or the avoidance of others' harm. In fact, the cerebral areas associated with emotions (and activated during the performance of the task in the footbridge scenario) are correlated with a variety of phenomena which do not have anything in common with moral beliefs, such as fear, disgust, repugnance, and further emotional factors. Hauser (2006), for example, concedes that none of the imaging studies carried out so far "pinpoints a uniquely dedicated moral organ, a circuitry that is selectively triggered by conflicting moral duties but no other". However, he then goes on to add that "the lack of evidence for a system that selectively processes moral content is not evidence against such selectivity" (2006: 241). This principle is no doubt correct and conforms to the logic of confirmation, but we can reply that neither is it evidence that such selectivity exists. We are therefore entitled to ask: how can we exclude that in the footbridge scenario people's reaction against direct killing is due not to moral worries but, for example, to a personal, amoral repugnance at being involved in the violent scene of a killing? My point is confirmed by people's reactions to the variant of the loop scenario. From the moral point of view this scenario is not relevantly different from the footbridge one, but, unlike that case, people judge the death of the heavy stranger as admissible, presumably because he is already standing or lying on the loop and they are not directly involved in actively killing him. The moral that I aim to draw from these considerations is that the neuroscientific explanation of moral beliefs can possibly be spurious, since the explanans might not meet the criterion of relevance with regard to the explanandum. But we could go further in our criticism of neuroscientific explanations and ask ourselves whether these can be defined as explanations at all. Indeed, it is often assumed that an explanation sheds light on the cause(s) of the phenomena to be explained. In the neuroscientific field, however, researchers can only establish correlations between neurobiological and psychological measures. Now, a correlation of 1.0 seems extremely unrealistic, but even if we got it in an ideal circumstance, a high correlation would not yet correspond to a causal relation. A further element is needed, which is that the correlation is interpreted on the basis of independent
evidence as expressing a causal relation. Now, since we can maintain that emotional factors explain moral beliefs only if we identify emotions as the causes of these beliefs, we can reach the conclusion that perhaps neuroscientific explanations are not explanations in the full meaning of the word.
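That even a perfect correlation need not express a causal relation can be illustrated with a textbook common-cause structure. The following toy model is only an illustration of the logical point and makes no claim about the actual neurobiology:

% Let Z, e_X, e_Y be independent standard normal variables and set
%   X = Z + e_X   (say, an emotional-activation measure),
%   Y = Z + e_Y   (say, a moral-judgement measure).
% Neither X causes Y nor Y causes X; both depend on the common cause Z. Yet
\[
\rho_{XY} \;=\; \frac{\operatorname{Cov}(X,Y)}{\sigma_X\,\sigma_Y}
\;=\; \frac{\operatorname{Var}(Z)}{\sqrt{2}\cdot\sqrt{2}} \;=\; \frac{1}{2},
\]
% and if the noise terms are shrunk, X = Z + \varepsilon e_X etc., the
% correlation approaches 1.0 while the causal structure stays exactly
% the same. Hence the need for independent evidence before reading
% causation off a correlation coefficient.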
3.2 Explanation as a substitute for philosophical justification

Lacking reliable moral justifications and needing guidance about our conduct, we could be tempted to attribute the practical functions fulfilled by moral philosophy to psychological or neuroscientific explanations. Greene (2001), while admitting that the conclusion of his experiment is descriptive rather than prescriptive, at the end of his essay asks this question: "How will a better understanding of the mechanisms that give rise to our moral judgements alter our attitudes toward the moral judgements we make?" (2001: 2107). This still harmless-looking question is explored further in Greene (2003), where he declares: "Whereas I am sceptical of attempts to derive moral principles from scientific facts, I agree with the proponents of naturalized ethics that scientific facts can have profound moral implications, and that moral philosophers have paid too little attention to relevant work in the natural sciences" (2003: 847). Greene (2008), finally, produces an argument on whose basis it should be possible to pass from the "is" of empirical evidence to the "ought" of moral judgement without violating Hume's law. In particular, the conclusion of the argument would show that consequentialist morality should be preferred to deontological morality, thus helping philosophers as well as ordinary people to correctly confront the dilemmas they face in everyday life. Greene's argument runs as follows. As several experiments show, emotional attitudes usually correspond to judgements of a deontological kind, whereas more cognitive attitudes correspond to judgements of a utilitarian kind (Greene et al. 2001, 2004; Small & Loewenstein 2005; Haidt et al. 1993, and others cited below). The best way to explain the correspondence between emotional attitudes and deontological judgements is to hypothesize that emotions are the real causes of our deontological moral judgements and that deontological theories are instead mere ex post rationalizations. Greene argues for his hypothesis by refuting
a possible counter-explanation coming from the deontological side. According to Greene's construal, deontologists could try to defend their viewpoint by maintaining that moral emotional dispositions track an independent, rationally discoverable moral truth that is not based on emotion. But this hypothesis, in Greene's view, is implausible, since deontological moral intuitions4 "reflect the influence of morally irrelevant factors and are therefore unlikely to track the moral truth" (2008: 69-70). The "argument from morally irrelevant factors", as Berker (2009) names it, intends to bring discredit upon deontology by starting from descriptive scientific theories about moral psychology, that is to say, it aims at crossing the Rubicon of the is/ought dichotomy. Nevertheless, it does so not through a deductive but through an inductive procedure, which keeps the argument formally in order. As Greene puts it, "… we have inferred on the basis of the available evidence that the phenomenon of rationalist deontological philosophy is best explained as a rationalization of evolved emotional intuition" (2008: 72). Greene's argument needs to be examined against a wider background that I shall now try to interpret and comment on. The first assumption the author starts with is that philosophical definitions of deontology and consequentialism are quite futile, as philosophers do not necessarily know what deontology and consequentialism really are. Greene's aim, in fact, is to argue in favour of the thesis that deontology and consequentialism refer to psychological natural kinds, whose investigation pertains to science rather than to philosophy. This approach allows Greene to maintain that science tells us what deontology essentially is, i.e. a kind of rationalization of moral judgements driven by emotions. In my opinion, however, the parallel between natural kinds as thought of in the physical sciences and natural kinds in deontological morality does not hold. The natural kind of water, in fact, refers to its hidden essence, which gives rise to its phenomenal properties, such as liquidity. But the natural kind of deontology does not give rise to any manifest properties, since typical properties of deontological morality, like reasoning about rights and duties, do not really exist, as they are illusory. As Greene says, the essence of deontology consists in its being an illusion!
4 To my understanding, Greene holds "moral intuition" to be a synonym of "emotional attitude, disposition or inclination".
Further, Greene understands the high correlation coefficients existing between deontological judgements and emotional dispositions as corresponding to a causal relation. As already noted in 3.1, however, independent evidence is required in order to interpret the correlation between two states of affairs as a causal relation. The author, unfortunately, has so far not presented any. Moreover, to speak of the causation of moral judgements makes sense only if mental states are conceived as empirical states of affairs. But deontologists will probably not agree with this. Rather, they are likely to point out that a moral belief has a mental content which is both intentional and abstract, and thus hardly the object of any causal action. Moreover, the causal relation cannot be at the origin of the normative character of the belief's content. This is why deontologists usually assign a high value to argumentation conducted in the light of reasons (see also Dean 2010; Kahane & Shackel 2010). On this basis, a deontologist can propose a different understanding of the correlation between moral judgements and emotional dispositions than Greene's. Since emotions are not the causes of moral judgements, nothing prevents us from conceiving them as mere contingent concomitant factors, which co-occur with deontological judgements but are not necessarily morally valenced (see 3.1). Greene hypothesizes that the correspondence between our emotional dispositions and deontological judgements has its origins in our evolutionary history and that the circumstances of that history are non-moral. A deontologist will be perfectly happy with this hypothesis, because to him it is wholly irrelevant whether or not the emotions that contingently accompany his moral judgements depend on morally irrelevant factors. In his (2008) paper Greene repeatedly affirms that his claims on deontology and consequentialism are based on empirical evidence. This also holds for the thesis that deontological emotional dispositions derive from morally irrelevant factors. I do not think that things are this way, and I shall argue my criticism by taking as an example the hypothesis about retributive punishment formulated on pages 70-71 of Greene's essay. It is science, if anything, that has to tell us that retributive attitudes are a by-product of biological evolution: that is, an element that does not contribute to adaptation
(Carlsmith et al. 2002; de Quervain 2004). This is likely to be an empirical fact. Instead, the thesis that attitudes which are evolutionary by-products are morally irrelevant factors is not an empirical fact, but a moral judgement. As such, it is derived neither from emotional attitudes nor from science. It is a result, pace experimental ethicist Greene, at which one arrives exclusively from the armchair. As a matter of logical consistency, the complementary moral judgement – according to which attitudes that enhance fitness are morally relevant – cannot escape a similar conclusion either.5 If this criticism is correct, Greene's non-deductive6 argument has crossed no is/ought Rubicon, nor has it shown how to draw moral implications from psychological or neuroscientific facts (for an analogous criticism of Greene's argument see Berker 2009).7
5 "Our most basic moral dispositions are evolutionary adaptations that arose in response to the demands and opportunities created by social life" (Greene 2008: 60).

6 Actually, Greene's argument seems closer to an "inference to the best explanation" than to an inductive process. See Lipton 2004.

7 For further discussion of this topic see Sauer 2012 and Kumar & Campbell 2012.

4. Conclusion

Throughout this essay I have tried to show how often experimental ethicists have been ungenerous to moral philosophy, or too hasty in favourably interpreting empirical evidence, or too wobbly in their methodological assumptions. Some of them tend systematically to overstretch inductive procedures, supporting general claims about human nature on the basis of ecologically invalid and bizarre experimental settings. There is no doubt that some of these experiments have yielded important results by pointing to flaws in human rationality as well as in human morality. However, this does not amount to proving that human beings do not follow rational and/or moral preferential orders at all, that is to say, systems of beliefs which are consistent and closed with respect to the relation of logical consequence. It is wholly realistic to assume that whenever a human being happens to spot an inconsistency in her belief system, in normal circumstances she is willing and able to revise it (Kutschera 1999). The contradictory thesis owes its
plausibility to the naturalistic context out of which it has arisen. It seems that the naturalistic viewpoint pursues the aim of “depersonalizing” the human being by depicting her as determined in her thoughts and choices by hidden, unconscious and impersonal factors. In the initial pages of Greene (2008) the author declares that he would like to draw on his fellow experimental ethicists’ insights “in the service of a bit of philosophical psychoanalysis” (36). His endeavour encourages me to devote myself to a bit of “experimental ethical psychoanalysis” and to conclude this essay by asking my readers the following question about experimental philosophers: How deeply are they afraid of human subjectivity and its typical characteristics, such as consciousness, intentionality, mentality, agency, and free will? The debate is open.
REFERENCES

Appiah, Kwame A. (2008): Experiments in Ethics. Cambridge, MA; London: Harvard University Press.
Berker, Selim (2009): The Normative Insignificance of Neuroscience. In: Philosophy & Public Affairs 37. 293-329.
Carlsmith, Kevin M. et al. (2002): Why Do We Punish? Deterrence and Just Deserts as Motives for Punishment. In: Journal of Personality and Social Psychology 83. 284-99.
Corradini, Antonella (1999): Logische Formen der Begründung und Kritik der sittlichen Urteile. In: K. Feiereis (ed.): Wahrheit und Sittlichkeit. Erfurt: Benno Verlag. 61-79.
Dean, Richard (2010): Does Neuroscience Undermine Deontological Theory? In: Neuroethics 3. 43-60.
de Quervain, Dominique J.-F. et al. (2004): The Neural Basis of Altruistic Punishment. In: Science 305. 1254-8.
Galvan, Sergio (1991): Logiche intensionali. Sistemi proposizionali di logica modale, deontica, epistemica. Milano: Franco Angeli.
Greene, Joshua D. (2003): From Neural "Is" to Moral "Ought". What Are the Moral Implications of Neuroscientific Moral Psychology? In: Nature Reviews Neuroscience 4. 847-50.
Greene, Joshua D. (2008): The Secret Joke of Kant's Soul. In: Walter Sinnott-Armstrong (ed.): Moral Psychology Vol. 3. Cambridge, MA; London: MIT Press. 35-79.
Greene, Joshua D. et al. (2001): An fMRI Investigation of Emotional Engagement in Moral Judgment. In: Science 293. 2105-8.
Greene, Joshua D. et al. (2004): The Neural Bases of Cognitive Conflict and Control in Moral Judgement. In: Neuron 44. 389-400.
Grundmann, Thomas (2010): Some Hope for Intuitions. A Reply to Weinberg. In: Philosophical Psychology 23. 481-509.
Haidt, Jonathan (2001): The Emotional Dog and Its Rational Tail. A Social Intuitionist Approach to Moral Judgement. In: Psychological Review 108. 814-34.
Haidt, Jonathan et al. (1993): Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog? In: Journal of Personality and Social Psychology 65. 613-28.
Hare, Richard M. (1981): Moral Thinking. Its Levels, Method and Point. Oxford: Clarendon Press.
Hauser, Marc D. (2006): Moral Minds. How Nature Designed Our Universal Sense of Right and Wrong. London: Abacus.
Hoffmann, Frank (2010): Intuitions, Concepts, and Imagination. In: Philosophical Psychology 23. 529-46.
Horvath, Joachim (2010): How (not) to React to Experimental Philosophy. In: Philosophical Psychology 23. 447-80.
Kahane, Guy; Nicholas Shackel (2010): Methodological Issues in the Neuroscience of Moral Judgement. In: Mind and Language 25. 561-82.
Kahneman, Daniel; Amos Tversky (1981): The Framing of Decisions and the Psychology of Choice. In: Science, New Series, 211. 453-8.
Knobe, Joshua; Shaun Nichols (eds.) (2008): Experimental Philosophy. Oxford: Oxford University Press.
Kosso, Peter (1995): Reading the Book of Nature. Cambridge: Cambridge University Press.
Kumar, Victor; Richmond Campbell (2012): On the Normative Significance of Experimental Moral Psychology. In: Philosophical Psychology 25. 311-330.
Kutschera, Franz von (1977): Das Humesche Gesetz. In: Grazer philosophische Studien 4. 1-14.
Kutschera, Franz von (1999): Grundlagen der Ethik. Berlin; New York: Walter de Gruyter.
Lipton, Peter (2004): Inference to the Best Explanation. London; New York: Routledge.
Lo Dico, Giuseppe (submitted): Mentalism and Anti-mentalism in Psychology. An Epistemological Analysis and an Empirical Research. Book manuscript submitted for publication.
May, Joshua (2010): Experimental Philosophy. In: Philosophical Psychology 23. 711-715.
Nichols, Shaun (2011): Experimental Philosophy and the Problem of Free Will. In: Science 331. 1401-3.
Nisbett, Richard E.; Timothy D. Wilson (1977): Telling More Than We Can Know. Verbal Reports on Mental Processes. In: Psychological Review 84. 231-59.
Pinillos, Angel N. et al. (2011): Philosophy's New Challenge. Experiments and Intentional Action. In: Mind and Language 26. 115-39.
Rawls, John (1971): A Theory of Justice. Oxford: Oxford University Press.
Sauer, Hanno (2012): Morally Irrelevant Factors. What's Left of the Dual-Process Model of Moral Cognition? In: Philosophical Psychology 25. 783-811.
Schurz, Gerhard (1997): The Is-Ought Problem. An Investigation in Philosophical Logic. Dordrecht; Boston; London: Kluwer Academic Publishers.
Seeger, Max (2010): Experimental Philosophy and the Twin Earth Intuition. In: Grazer Philosophische Studien 80. 237-44.
Shieber, Joseph (2010): On the Nature of Thought Experiments and a Core Motivation of Experimental Philosophy. In: Philosophical Psychology 23. 547-64.
Singer, Peter (2005): Ethics and Intuitions. In: The Journal of Ethics 9. 331-52.
Small, Deborah A.; George Loewenstein (2005): The Devil You Know. The Effects of Identifiability on Punitiveness. In: Journal of Behavioral Decision Making 18. 311-18.
Sosa, Ernest (2010): Intuitions and Meaning Divergence. In: Philosophical Psychology 23. 419-426.
Tobia, Kevin et al. (2013): Moral Intuitions: Are Philosophers Experts? In: Philosophical Psychology 26. 629-638.
Weinberg, Jonathan (2007): How to Challenge Intuitions Empirically Without Risking Skepticism. In: Midwest Studies in Philosophy 31. 318-43.
Wheatley, Thalia; Jonathan Haidt (2005): Hypnotic Disgust Makes Moral Judgements More Severe. In: Psychological Science 16. 780-4.
PART III Naturalised Ethics? Empirical Perspectives
Moral Soulfulness & Moral Hypocrisy – Is Scientific Study of Moral Agency Relevant to Ethical Reflection?1

MAUREEN SIE

"Oh Hello Mr Soul, I dropped by to pick up a reason…" Neil Young, Buffalo Springfield

Abstract: In this paper I argue that the scientific investigation of moral agency is relevant to ethical reflection (hence, also to moral philosophy) and that it does not warrant scepticism with regard to our nature as moral agents. I discuss an intriguing series of experiments by C. Daniel Batson et al. that purportedly show us all to be moral hypocrites who do not truly care about morality, but act on the basis of the wish to merely appear moral. I argue that this conclusion is ultimately based on an overly simplistic picture of (moral) agency, i.e., of the relation between our (moral) reasons and our actions. I consider a more sophisticated picture by drawing an analogy with a story by Oliver Sacks about a woman who loses her proprioception. I show that such a picture opens up several distinct interpretations of the Batson findings and their normative significance. I conclude that, without additional arguments, the Batson experiments and other related research do not warrant scepticism with regard to our nature as moral agents. Rather, this research suggests the opposite: that moral considerations thoroughly influence us, even in ways we are not fully aware of.
1 This paper is based on a Dutch paper (Sie 2010), but thoroughly rewritten. The last two sections have a different content than the original Dutch paper.

1. Introduction

In contemporary science the traditional opposition between mind and body has been complemented by a new, equally important one: that between our rational, reasoning, abstract and slow-thinking self and our passionate, biased, fast and efficiently reacting self. The latter is also known as the "adaptive unconscious" (Fine 2006b; Wilson 2002), the "New Unconscious" (Hassin et al. 2005) or "the smart unconscious" (Dijksterhuis 2007). The paradigm of the adaptive
unconscious brings together a large diversity of developments in the behavioural, cognitive and neurosciences (hereafter: BCN-sciences). Common to these very diverse developments is the view that unconscious processes have a far-reaching influence on the way we behave and act (and think). 'Unconscious processes' refers to processes that function without us being aware of them. When asked for the reasons for our judgments, choices and actions, the answers we provide are sometimes shown to be mistaken or not the full story. Apparently our answers are not direct and infallible introspective reports (Sie 2009). The importance of this paradigm for our ideas of morality, free will and responsibility is the subject of many scientific and philosophical controversies. And justifiably so. The implicit dualism between 'reflective' and 'unconscious bodily' functioning is as yet not well thought out. Why, for example, would we locate our 'real self' exclusively on the reflective level? And why would we regard the influence of unconscious processes as a threat, as possibly undermining 'our self' and our status as moral creatures? Who doubts the fact that we often react automatically to things that happen to us?2 And, closely related to this, are the conclusions "the" scientists (and some enthusiastic philosophers) draw on the basis of their research not overly radical? Character and virtue are outdated concepts: it is rather local and contingent circumstances that explain why individuals act in different ways (Doris 2002). Conscious free will does not exist: our brain causes us to act as we do, long before conscious reflection enters the picture (Wegner 2002). Moral responsibility is a superseded concept: we do not act for reasons, but concoct them after the act. We act and judge on the basis of immediate and direct intuitions, gut feelings, and take stock of the available justifying reasons only to justify our behaviour to others. We are rationalising and confabulating creatures. Self-knowledge? More often than not, we have no idea what we do and why we do it. Self-control on the basis of our values, if such a thing exists, is an extremely slow and laborious process, unfit for the hectic and hasty nature of daily life.

2 We also find the distinctions sketched above in so-called dual-process theories in social psychology, where both aspects are seen as equally involved in our daily functioning. See for example Chaiken and Trope 2002.
Such claims and conclusions seem premature and are contradicted or softened by other findings from the same sciences: psychology, social science, the cognitive and neurosciences. Nevertheless, the controversies raised by the new paradigm are refreshing, as is the change of focus in ethics that it brings with it. Moral psychology and experimental ethics – fields that are explicitly related to the above-sketched developments – provide us with a large number of entertaining experiments with fascinating results. For example, what to think of the body of research findings associated with Jonathan Haidt and his colleagues, such as the fact that our moral judgments are more severe against the background of a penetrating smell (Schnall et al. 2008a)? What to think of the fact that only two of the forty subjects, when asked, think that this smell has any influence on their judgment (Schnall et al. 2008b)? To mention just one of the dozens of findings that point in the same direction: our moral judgments (more precisely put: our moral gut-reactions) are sensitive to circumstances that have nothing to do with the case on which we are asked to pass judgment. They are circumstances that we would judge to be 'morally irrelevant.'3

3 For if someone told us that our judgment is demonstrably influenced by the smell in the background or the mess on the desk, we would reconsider our judgment. A smell in the background or a messy desk do not constitute reasons! We must keep in mind that this type of research is not about the content of our judgment – whether we regard something as morally wrong or right – but about the severity with which we judge. Participants are asked to judge on a scale of 1 to 7 how morally responsible they consider a certain action or choice in a particular case. See Sie 2009 for a more detailed description of this kind of moral psychological research.

In addition, the moral philosopher herself has become the subject-matter of scientific investigation. If we want to determine what the relation between deliberation and moral action is, what better place to start than to study the people who are moral philosophers? After all, they pre-eminently reflect on what is morally good or bad, on what we should do and in what circumstances we should do it. Who else, if not the moral philosopher, engages in careful reasoning and nuanced deliberation all the time? If reflection, self-knowledge and deliberation are edifying, then the people who have made it their job to engage in them must excel at it. However, research by the American philosopher Eric Schwitzgebel contradicts this expectation. His research shows that
contemporary specialised books on moral philosophy – books that are mainly used by professors and advanced students – are, in his words, about "50% more likely" to be missing from university libraries than equivalent specialist books used by philosophers other than moral philosophers, i.e., non-ethics books (Schwitzgebel 2009: 716). The difference between books on moral philosophy and comparable non-ethics books that have 'not yet' been returned to the library is – to put it mildly – not shocking if one takes a look at the actual numbers: 1.6% morally reprehensibly 'disappeared' books in other areas of philosophy compared to 1.9% in ethics, as a fraction of all available books, and 10% versus 12%, respectively, as a fraction of all books that have been borrowed. While the difference is, as researchers call this, "statistically significant," many less scientifically inclined people might protest that it is still very small. However, we should not forget that the hypothesis under test here was that moral philosophers should excel, morally speaking, because of the intellectual nature of their profession and their particular research subject. That hypothesis surely is falsified by Schwitzgebel's research, even in the opinion of those who are unimpressed by statistical significance; at least, when we accept the idea that our treatment of library books is indicative of our moral behaviour more generally.
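How a difference can be "statistically significant" and yet very small is easy to see with the standard two-proportion z-test. The counts below are hypothetical – the sample sizes behind Schwitzgebel's percentages are not reported here – so the calculation merely illustrates how significance scales with sample size; it is not a reconstruction of his analysis:

% Missing rates among borrowed books: \hat p_1 = 0.12 (ethics),
% \hat p_2 = 0.10 (non-ethics), pooled \bar p = 0.11.
\[
z \;=\; \frac{\hat p_1 - \hat p_2}
{\sqrt{\bar p\,(1-\bar p)\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}
\]
% With hypothetical n_1 = n_2 = 1000 borrowed books per group, the
% two-point gap is not significant:
%   z = 0.02 / sqrt(0.11 * 0.89 * 0.002)  ~ 1.43   (p ~ 0.15);
% with n_1 = n_2 = 5000 per group it is:
%   z = 0.02 / sqrt(0.11 * 0.89 * 0.0004) ~ 3.20   (p ~ 0.001);
% yet the effect itself (12% versus 10%) remains exactly as small.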
Studies like the above are part of a broader and more serious trend: a trend to distrust moral philosophy, moral philosophers, ethical reflection, and our capacity for moral judgment, motivation and action. This trend is embedded in a broader scepticism regarding the role of our reasoning, rational, abstract and slow-thinking self in comparison with our active, passionate, biased, quick and efficiently reacting self. The idea under suspicion is the idea that our moral nature is connected with our individual rational and reflective capabilities and, correspondingly, the idea that the moral quality of our actions is determined by the extent to which our moral principles, deliberations and values are expressed in these actions. If we are the adaptive unconscious creatures the BCN-findings indicate, seeing ourselves as moral creatures – who structure their lives on the basis of deliberation about what is good and bad – might hardly seem realistic. In this paper I argue against this conclusion. To be sure, I am neither the first nor alone in doing so. There are many diverse philosophical views on morality, and many of these do not put our individual rational and reflective capabilities centre stage. In the neo-Humean tradition, for example, the role of our moral sentiments and emotions is emphasised. Other philosophers stress the importance of Aristotelian virtues, the idea of habituation and the enormous possibilities of these concepts (virtue and habituation) for closing the gap between our reason and our often unconscious and automatic interaction with the world (thereby preventing the assumed tension described above). In this paper I choose a slightly different perspective, although it contains elements of the above-listed views. The reason for this is that I believe the findings in moral psychology and experimental ethics to be very interesting and informative, and also believe that in order to accommodate them we need to adjust our traditional perspective on the phenomenon of human agency and morality. That is, we need to adopt a perspective that abandons the crucial importance of individual deliberation.4 I also believe that once we take this perspective, we immediately see why the recent developments introduced above do not invite scepticism about our moral nature, even though they certainly invite critical self-reflection and scrutiny. Hence, rather than critically discussing this adaptive unconscious research or addressing the hasty conclusions sometimes drawn from it head-on, I take a more speculative approach in this paper. I focus on the elements of this body of research that seem most interesting to me: the complicated relation between individual reflection, reasons (including moral ones) and action that it discloses. I elaborate on how to understand this relation in a way that accommodates many of the contemporary findings that invite moral scepticism. Finally, in the last section I explain why the framework developed in the second section is a reason for thinking that moral scepticism is the opposite of what the findings show. Hence, in the second section I develop my framework with the help of an analogy suggested in a story by the 'neuro-anthropologist' Oliver Sacks, in order to clarify the relation between reflection, reasons and action.5 In this section I sketch how we can
close the gap between reason and our generally unconscious and automatic interactions with the world. In the third section I discuss one of the many fascinating findings in moral psychology, namely those that derive from research on the phenomenon of moral hypocrisy. In the fourth and fifth sections I show how we could interpret the research findings if the analogy drawn in the second section is adequate. I argue that if one looks at it from this angle, it does not warrant general distrust and scepticism regarding our moral nature. In the final section it should also become clear why the title of this paper is "moral soulfulness." The research in moral psychology and experimental ethics,6 so I argue, primarily teaches us that the desire to act within the boundaries of morality and to stay in tune with the moral community is an important source of motivation: so important that it might distort our self-image and obstruct the acquisition of self-knowledge. It is precisely because of that, I argue, that moral psychology and experimental ethics are important for ethical reflection.

4 Exactly to what extent this perspective differs from 'the traditional view' is an issue I do not discuss in this paper.

5 Sacks calls himself a neuro-anthropologist because of his unique work on Tourette syndrome. According to him, he needs this label because the changeover from the 19th to the 20th century has left us with a "psychology without a body" and a "neurology without a soul". Tourette syndrome, as he nicely shows, can only be understood if you consider the person as a whole – body, mind and soul.

6 By the very broad label 'experimental ethics' I refer to that body of research that investigates moral agency with the help of surveys or other experimental methods, such as those used by the aforementioned Eric Schwitzgebel.
2. Exchanging reasons

The story by Sacks that I want to draw attention to is about Christina, a woman who loses her body, the sense of her body (Sacks 1985). She can still see her body, the parts that are within her sight, but she no longer embodies it. An awful and very rare neurological disease causes Christina to lose what is sometimes called our 'sixth sense', our proprioception. Thanks to proprioception, our body is not something that we have, but something that we are; a soulful, animated body that allows us to move around in the world. Christina has a body and a 'mind' (a first-person perspective and mental characteristics), but they are no longer a unity. She is no longer a unity. Christina also has willpower: through her vision she regains 'control' over her body to some extent, steering it by using her eyes to locate the parts of her
body. However, when the light goes off at night she collapses like a marionette. Christina's case makes clear that to move around effortlessly and as a matter of course we need something in addition to our 'senses,'7 willpower and cognitive capacities. That we need this, and what it is exactly, becomes clear only when we lose it or witness someone losing it. The analogy that I want to explore in this paper is the idea of a similar proprioceptive mechanism that enables us to move smoothly, often without deliberate conscious effort, in the moral domain. Moral psychology and experimental ethics can be understood as shedding light on the existence of such proprioceptive mechanisms and illuminating the details of how they function. I call the mechanisms studied by moral psychology and experimental ethics 'proprioceptive' because they operate without us being aware of them, and because that might be exactly what enables us to function smoothly in the moral domain — that is, when nothing goes wrong. The findings from moral psychology and experimental ethics are often fascinating and entertaining because they bring to light these mechanisms by 'letting them go wrong.' Let us take a look at the above-mentioned research by Schnall and colleagues, especially her co-author Jonathan Haidt, who is primarily associated with this body of research. Their findings can be interpreted as showing that we are not sure what moves us and why, even when our task is relatively simple and nothing is at stake, for example when we are asked to judge how wrong a certain action or choice, described to us in a vignette, is (on a scale from 1 to 9). Two hypotheses drive this kind of research. First, the idea that such moral judgments are often made on the basis of immediate gut feelings. Second, the idea that these gut feelings can be manipulated by circumstances that have nothing to do with the content of the case we are asked to judge, but which influence our feelings of 'disgust' and 'pureness.'8 These hypotheses have been confirmed by a variety of experiments, all showing how conditions and circumstances that we would not consider morally relevant nevertheless do influence the severity of our judgments.
7 'Senses' is put in quotation marks because proprioception, too, is often called a 'sense.'

8 One could argue that feelings of purity and disgust do have something to do with the content of our moral judgments, but this objection misses the point of why this research is troubling. The point is that if we were informed of the manipulated circumstances, we would dismiss them as 'irrelevant'. See footnote 3 above.
On top of this, these findings do not stand alone. They fit well into a broader picture that shows our choice behaviour, and our behaviour in general, to be highly susceptible to what are called 'framing' influences. If you present people with cases that have identical outcomes, their choices might nevertheless vary depending on whether the case is stated (framed) in terms of risks (of a negative outcome) or chances (of a positive outcome). Strongly simplified: whether a doctor tells you a specific medicine is 'an effective cure in 20% of the cases' or is 'ineffective as a cure in 80% of the cases' has a huge impact on the choice we make, even though the communicated efficacy of the medicine is the same.9 One way to understand the upshot of these findings is that we do not know exactly how and on what basis we judge, hence in what ways we are susceptible to a variety of manipulative measures. However, not only are we susceptible to manipulation, we also seem to be quite unaware of this fact. When asked why we have made certain decisions or passed certain judgments, we readily provide reasons to justify and explain them, even if our judgments and choices are demonstrably caused or strongly influenced by the manipulated circumstances. It is for this reason that some scientists speak of our tendency to 'confabulate,' the tendency to make up reasons afterwards (Gazzaniga & LeDoux 1978). The Dutch cognitive scientist Victor Lamme goes as far as to claim that we are nothing but 'rattleboxes' (Lamme 2010: 282). We do not act, decide or judge on the basis of the reasons we explicate; we just make them up afterwards. And there are findings that seem to corroborate these strong claims. Murphy and colleagues, for example, showed that the refutation of arguments that would ground a certain judgment of moral wrongness in the vast majority of cases does not lead to a revision of that judgment (Murphy et al. 2000). This phenomenon, elaborated on in close relation with the Haidt et al. findings, is known as moral dumbfounding: we pass moral judgments and hold on to them, even if we find ourselves 'out of reasons' to do so (if we are dumbfounded with regard to our reasons).
9 There is a lot of discussion about what this kind of research exactly shows about our so-called 'rationality' (is this a cognitive defect, or are these actually important heuristic aids?); see, for example, Tversky & Kahneman 1981 and Thaler & Sunstein 2008. For a discussion of this question that focuses on the moral domain see Gigerenzer et al. 1989 and Gigerenzer 2008.
Whatever one's opinion about this body of research, it indicates that many processes that we do not fully grasp, and that make us vulnerable to manipulation, operate behind the scenes. What worries many is that we lack transparency even when our judgments concern moral cases – judgements, moreover, made in a laboratory setting, with little of importance at stake and ample room to reflect. What does that suggest about our judgments in everyday situations, where a lot is at stake and there is little to no room for reflection? Some believe, most famously Haidt himself, that individual reflection plays no substantial role in our moral judgments, that is, that it does not precede or cause these judgments. As a consequence one might start to wonder whether the idea that we generally tend to act for reasons is mistaken as well. Especially if one combines these findings from social and moral psychology, as the Dutch cognitive scientist Lamme is inclined to do, with the neuroscientific findings suggesting that brain activity predicting our choices precedes our awareness of those choices (Lamme 2010; Libet et al. 1983; Soon et al. 2008). For the purposes of this paper let us leave aside the many methodological and conceptual worries with regard to the psychological and neuroscientific body of research, and take a closer look at the relation between our explicated reasons and our choices, judgments and actions (hereinafter taken together as 'actions').10 That is, let us raise the question whether, if the findings are correct, we might conclude that reasons (including moral reasons) play no substantial role whatsoever in our daily intercourse with each other, ourselves and our environment. Let me explain what counts against such a conclusion by drawing the analogy with Christina. Before her disease, Christina would have explained a lateral movement in terms of, for example, the cat that crossed her path (or that she thought crossed her path). Now that she has lost her proprioception, she has a more extensive explanation available of how exactly to orchestrate such a lateral movement.
10 For an excellent critique of the interpretation of the Libet experiment see Daniel Dennett's discussion in chapter 8 of Dennett 2003; other critiques can be found, for example, in Bennett et al. 2007 and in the work of the editor of this volume (Lumer, this volume, ch. 2). For criticism of the Haidt studies see, e.g., Fine 2006a; Narvaez 2008; Jacobson 2008.
We could say that because of her condition Christina understands the shortcoming of an explanation of her behaviour in terms of a simple temporal causal process: "a cat in the world" (or something that looks like it) causes an "internal representation of a cat," which "results in a lateral movement." In an important sense we can apply such a description to Christina's movements only after she started to suffer from her nasty neurological condition; before it, she could just react to a cat without perceiving it consciously and without controlling her movements self-consciously. But does this imply that before she fell ill she was mistaken in claiming that she jumped laterally because a cat crossed her path? That claim does not seem to make sense. Even when she intuitively and immediately jumped as an unconscious and automatic reaction to the cat, her explanation that she jumped aside because she saw a cat remains adequate. We continuously see, and think that we see, things even though they escape our explicit attention. "Why were you honking your horn?" "I thought I saw a cat jumping in front of the car." "Why didn't you call me?" "I thought you were coming home tomorrow." We also give these answers when we only realise after the fact that we thought we saw a cat or that we assumed that you would be coming home tomorrow. We seldom retrieve from memory what went through our heads at the moment of the action-to-be-explained when answering questions about it. This is just the very common phenomenon philosophers identify with the term 'rationalisation' (Davidson 1980). Something similar might be the case when we are looking for reasons to explain or justify more complex forms of behaviour. The fact that we often do this post hoc does not mean that the reasons we provide have nothing to do with the actions explained by them. It seems silly to infer from the fact that reasons did not literally cross our minds prior to our actions that they did not play any role whatsoever in bringing about those actions. Few philosophers, if any at all, would believe a picture of the relation between thinking and acting that requires every action to be preceded by a conscious mental event, nor is there reason to accept such a picture. It is understandable, though, why there is something attractive about that picture. I will come back to this in the last section of this paper. Exchanging reasons – asking for them and providing them when asked – is an integral and crucial part of our dealings with one another. If someone asks us to take care of his children, invites us to a party or promises to be at the movie theatre at eight, we assume
that there is a reliable relation between the reasons exchanged and the subsequent behaviour. Grosso modo. If the babysitter arrives and the father opens the door with the announcement "Sorry, I've fallen ill", everyone understands that the babysitter has no job for the evening. A babysitter is required when the parents are absent; going out for a night is a reason for parents to be absent, being ill is a reason to stay at home, and so on. When the considerations are of a moral nature (and the agent in question is a reliable and mentally healthy one), the relation with our actions is considered to be even closer. You can forget that you asked a babysitter over, but you cannot forget promises and moral values. When you break a promise or violate one of your moral values, the answer (even if meant seriously and genuinely) that you 'simply forgot' is, to put it mildly, unsatisfactory. The research in moral psychology and experimental ethics shows that the relation between our reasons and our actions, no matter how reliable, does not get its reliability from processes comparable to those involved in the lateral movements of Christina in her serious condition. When we act for reasons it is not necessarily the case that first there is a reason of which we are fully aware and that this subsequently results in bodily movements. Just like the reasons "I saw a cat" or "I thought I saw a cat", other reasons, too, are often reconstructions or interpretations of what happened at the moment that we act (Sie 2009). Most of us, most of the time, do something similar when we cite moral principles or reasons to explain our behaviour: we reconstruct the most obvious answers to justificatory questions about our behaviour on the basis of the situation in which we find ourselves (including our thoughts and emotions). Usually we can trust that the reasons we cite are more or less adequate, because we have years of practical experience. Through experience we have learned what the correct answers are for the situations in which we find ourselves, what the margin of error is, and how we should correct for it. In other words, what the correct mix of excuses, explanation and counterattack is. If someone blames us for forgetting his birthday we will excuse ourselves, come up with an explanation and, if our friend is not satisfied yet, snappingly remark that he himself is not the most thoughtful person
either. In daily life we continuously exchange such moral sentiments, explanations and justifications.11 It is of course possible that some of our reactions rest on quicksand. It is possible that we have grown tired of the friend whose birthday we forgot and are not yet consciously aware of this. In that case the fact that we forgot his birthday is not quite as ‘accidental’ as we suggest in our reaction to him. It is also possible that our forgetting of his birthday is the first sign of memory loss that is the forewarning of a serious brain disease. In this case too, our direct interaction with him as sketched above is confabulated; it is a rationalisation of behaviour that has demonstrably different causes than the ones we cite. But this possibility does not preclude that the reasons we cite usually very efficiently correspond with our actions. The analogy sketched in this section is meant to suggest two things. The first suggestion is that there might be ample room for the possibility that the reasons we exchange on an everyday basis do not give us the full story, because there are many processes that operate ‘behind the scene,’ i.e., processes we are not aware of when everything goes well. We become aware of these processes only when they go awry, for example, due to neurological diseases (as in the case of Christina) or when hypotheses about these underlying processes enable us to manipulate them (as in the case of the moral psychology experiments). The second suggestion is that our daily practices leave ample room for mistakes with regard to the reasons we exchange. With this in mind let us take a closer look at one of the many recent and fascinating experimental studies, the study on the phenomenon of moral hypocrisy.
11 It is difficult, if not impossible, to withhold reactions to one another as morally responsible individuals, as Peter F. Strawson pointed out in his influential paper "Freedom and Resentment", in his attempt to shift the focus of discussions on moral responsibility away from the metaphysical discussions on free will and determinism (Strawson 1962).

3. Moral Hypocrisy versus Moral Integrity

Imagine yourself taking part in an experiment on the relation between task-performance and reward. The experimenters tell you there are two tasks: a boring and tedious task with no reward attached (NEG), and an easy task that earns you a raffle ticket when
performed adequately (POS). The raffle ticket gives you a chance of earning 30 dollars at the end of the experiment and on top of your payment for participation. Besides the main aim of the experiment (investigating the relation between task-performance and reward), so they tell you, the experiment also tests an additional hypothesis related to the division of tasks. Because of that you are asked to divide the two tasks (POS and NEG) between yourself and another participant. The other participant will not know that another person decided on how the tasks are to be divided, you will not meet this participant, and you are allowed to make your choice alone and behind closed doors (hereinafter: in private). What will you do and how will you evaluate your choice with hindsight? Research by Daniel Batson et. al. shows that 16 out of 20 persons will take the POS task (80%) and rate their choice as ‘not very moral’ (Batson et al. 1999; Batson 2008). That is, on a scale from 1-9 they rate themselves with a 4.4 (1 and 9 represent my choice was ‘not morally right’ and ‘morally right’). The small group of participants who took the NEG task rate themselves with a 8.4. Apparently, the vast majority of us do not act as we judge morally best. However, what interests the researchers is not the fact that we do not act in a morally exemplary way by our own standards. The researchers are interested in the hypothesis that what explains our actions and choices in the moral domain, is moral hypocrisy. They believe that it is ‘wanting to appear moral’ (moral hypocrisy) that explains our choices in the experimental setting, not wanting to be ‘truly moral.’ And the latter they understand narrowly as ‘acting in accordance with a moral principle one accepts.’12 In order to enable an experimental investigation of their hypothesis they (1) make explicit a principle they found to be effective in prior runs of the experiment. They tell the participants that in a former run of this experiment people indicated that they thought ‘it most fair to give each an equal chance on the positive task’ (herineafter we refer to this as ‘the EC principle’), and that therefore they are given a coin packed in a sealed plastic wrapper. This coin, so the participants are told, can be used to divide the tasks 12
12 They refer, more adequately, to a ‘behavioural standard.’ Since they are clearly interested in moral behaviour, I use ‘moral principle’ and ‘behavioural standard’ interchangeably for ease of reference. Clearly, ‘moral principle’ should not be understood in any strict philosophical sense.
This coin, so the participants are told, can be used to divide the tasks in a fair way. The coin adds (2) an easy opportunity (elbowroom) to cheat. Again the participants are left alone to make their choice in private. So what did the participants do? 50% of them use the coin, the other 50% do not. Interestingly, in both cases 90% of the people end up assigning the POS task to themselves. Moreover, those who end up with the POS task after flipping the coin rate the moral nature of their choice with a 7.11; those who took it without using the coin rate it with a 3.56. Since we know coins do not cheat, we can infer that some of the participants made use of the available elbowroom, that is, they assigned the POS task to themselves regardless of the outcome of the coin flip.
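To see how many participants must have done so, a small back-of-the-envelope calculation helps; this reconstruction is added here for illustration only and is not a computation Batson et al. themselves report. Assume the coin is fair, that honest users follow the toss, and that those who override it always take the positive task. If x is the fraction of coin users who override the toss, the observed 90% of coin users ending up with the POS task fixes x:
\[
0.5\,(1 - x) + x = 0.9
\quad\Longrightarrow\quad
0.5\,x = 0.4
\quad\Longrightarrow\quad
x = 0.8 .
\]
On these assumptions, roughly four out of five coin flippers disregarded the toss, which matches the proportion mentioned below.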
The results of this experiment remain more or less the same in slightly different set-ups. When serious negative consequences are attached to inadequate task-performance, e.g. the administration of electric shocks, people feel less inclined to use the coin (30%) but assign themselves the positive task even more often (100%). When the other participant is informed about how the task-assignment is determined, more people use the coin (80%), but the division of tasks remains the same. Hence, as the researchers conclude, if we can appear to be moral without paying the actual price of acting in accordance with the moral principle made salient to us (i.e., the EC-principle), many of us will do so. That is, 4 out of every 5 participants will assign the POS task to themselves without using the coin, even though they acknowledge that the most moral choice would be to give each an equal chance to obtain the positive task. And of those who do decide to use the coin, 4 out of every 5 will also take the POS task, regardless of the outcome of the coin. It is clear that of those who use the coin, at least some deceive themselves. After all, they take the positive task regardless of the outcome of the toss, and they rate this action as ‘more moral’ than those who take the positive task without making use of the coin. Batson et al. wonder how exactly these participants are deceiving themselves. Do they fool themselves into believing their choice is the moral one by fiddling and not paying attention to the coin too closely? Or does the deceit involve a failure to compare their own choices with the behavioural standard (the EC-principle) they explicitly adopt? To examine the first possibility they repeated the experiment with clearly labelled coins, Pos-to-Other on one side and Pos-to-Self on the other. This took away the chance to fool oneself by not paying attention too closely, e.g. by fooling oneself about which side of the coin came up or about which side was attached to which task. The labelling did not change the results (Batson et al. 2002: study 2; Batson 2008). To check whether the second kind of self-deception is involved, Batson et al. repeated the experiment under conditions of heightened self-awareness. In the seventies, experiments on cheating had disclosed that heightened self-awareness reduces cheating behaviour and that self-awareness can easily be manipulated by the presence of a mirror (Wicklund 1975; Diener 1976). When the participants in the Batson experiment made their choice with a mirror hanging on one of the walls, the (cheating) effect that the coin strongly favoured those using it disappeared completely. Of those who did not use the coin, the number that took the POS task dropped significantly (to 62%). The researchers conclude that this shows that the second form of self-deception explains the phenomenon of moral hypocrisy. We are capable of living with a discrepancy between our behaviour and the moral standard as long as we can get away with it. And we can get away with it by not comparing our cheating with the EC-principle. When you believe that acting morally consists in conforming your behaviour to what you consider the morally right thing to do on grounds of individual deliberation, then Batson et al.’s work is disturbing. It shows that most people do not genuinely act morally if they can get by with “appearing to act morally”. When they have the choice between “appearing to act morally” without having to bear the costs of “actually acting on the basis of EC,” they go for the first option.13 However, calling this phenomenon ‘moral hypocrisy’ stays within the traditional view that only conscious acts on the basis of affirmed moral principles qualify as truly moral. This is, first of all, grist to the mill of those who believe that we are nothing more than rattleboxes, acting on the basis of a plethora of motives and influences and only afterwards looking for considerations that justify our actions and are accepted as reasons by our fellow men. That, in turn, fuels the idea that morality hardly plays a substantial role in our daily lives. Secondly, it allows no room for
13 In the paper “Moral Hypocrisy and Acting for Reasons” (Sie, work in progress) I analyse this experiment and its philosophical implications more extensively.
the possibility that the relation between our reasons and actions is complex and does not derive its reliability from individual deliberation or affirmation of principles, reasons, or standards prior to the action. As the analogy elaborated in the previous section suggests, it might be the case that proprioceptive mechanisms mediate between our reasons and our actions without us being aware of it. The experiments in moral psychology, like the awkward neurological disease of Christina, might be understood as bringing these mechanisms to our attention by letting them go awry. If this is the case, as I argue next, they might establish the opposite of what the sceptical interpretation suggests.
4. Moral Hypocrisy?
If Batson et al. are right, ‘appearing to be moral’ is very important to us. Does that mean that we are all ‘moral hypocrites’? Does it mean that we never truly act morally, but only observe moral principles (behavioural standards) when doing so is required to appear to be moral? Not necessarily. This conclusion follows only if one holds, with Batson et al., that an agent only truly acts morally when she or he acts on an individually affirmed salient moral principle. However, as argued in the second section of this chapter, we can question whether the picture of human agency and morality that requires such a prior affirmation of principles is a plausible one. Much of what we do is done without prior deliberation. We learn how to act, in what circumstances and on the basis of what reasons, by participating in our reason-exchanging practices. As a consequence, the details of how exactly we are able to act in morally adequate ways might become clear to us only upon losing the ability to do so, as in the case of Christina. If that is the case, the ‘wish to appear moral’ brought to the fore by the Batson findings might function as a proprioceptive mechanism that enables us to fit in smoothly with the wider moral community, even though we often lack the time to thoroughly reflect on what to do prior to our actions.14
14 Note that the previous paragraph uses the concept ‘moral’ to refer to the behavioural standards that regulate our interactions, a concept clearly related to, but not identical with, the concept philosophers use as a normative evaluative label. Also see note 12 above. As we use the label, someone might act in accordance with ‘moral’ principles that we reject as utterly immoral. It goes beyond the scope of this paper to address the obviously very important, but also extremely complex, issue of how moral psychology’s use of the label ‘moral’ (and that of other scientific disciplines investigating moral agency) relates to this more substantive use. Let it be noted that the ‘desire to be moral’ as discussed in the Batson experiment and in this paper may quite possibly lead to ‘immoral’ actions, i.e., when the explicated behavioural standards of one’s society are immoral. However, that does not mean that this very same desire to be moral has no important role to play in enabling people to act in accordance with moral behavioural standards. It is the latter point that this paper focusses on.
In such a picture we are constantly ‘tuned in’ to what is expected of us and able to act in accordance with it automatically. Salient moral principles function as the roadmaps on the basis of which we determine which way to take, so to speak. If this roadmap analogy makes sense, the discovery that the wish to appear moral plays a crucial role in our actions does not equal the discovery that moral principles play no role in them. On the contrary: these moral principles outline the contours of what we will and will not do, i.e., the contours of what we think we should and should not do. This is the case even though the principles in and of themselves, like a roadmap, will not tell us where exactly we will go. Likewise, explicating such a principle, even when our commitment to it is not what fully explains our action, is not nonsensical. Remember our analogy: although Christina, as a consequence of her nasty condition, acquires insights into the mechanisms that enable her to jump laterally when a cat crosses her path, this does not render her former replies to questions about why she suddenly jumped laterally nonsensical. In both cases, the answers turn out to be incomplete but not wrong. If the wish to appear moral operates in a proprioceptive way, the Batson findings show us that this desire might well be what enables us to act in accordance with salient moral principles on an everyday basis. To be sure, even on this interpretation the Batson findings are disconcerting. After all, there is no reason to be proud of the fact that elbowroom to cheat and conditions of low self-awareness undermine our ability to act on a moral principle we subscribe to.15 However, before we draw any
15 Batson et al. also found that when additional altruistic motives are induced by asking people to imagine how they would feel upon receiving the negative task, this enables them to act in accordance with the EC-principle (Batson 2008, 61-65).
definite conclusions from that fact, we should bear in mind that this subscription itself might be susceptible to error. According to the view we are exploring in this chapter, we do not know exactly why we act as we do and for what reasons. Therefore, it might also be the case that although we think that the EC-principle adequately captures our behavioural standard in the experimental situation, in fact it does not. Let me recapitulate the end of the second section of this paper. If we learn how to act by practicing, by acting in morally adequate ways and learning which reasons are accepted and which are not, it might very well be the case that (a) there are operative mechanisms of which we are not aware (such as the desire to appear to be moral). This, in turn, (b) leaves ample room for mistakes with regard to the correct explanation of our behaviour. After all, when it is not, or only seldom, the case that we act on the basis of reasons that we explicate prior to our actions and on the basis of which we consciously control our actions, this leaves ample room for influences on and causes of our actions of which we are unaware. Let us take a closer look at this possibility with respect to the experimental situation. What is it exactly that most participants in the Batson experiment do ‘wrong’? They (i) assign themselves a positive task (acting in their own best interest), (ii) without the risk of others knowing that they did (anonymous condition), and (iii) without doing great harm to others, i.e., without inflicting a harm that could have been prevented without harming themselves, or one that others would not have risked suffering otherwise. How wrong is that? Is striking a healthy balance between our own interests and our relation to others and the larger community not a task we are constantly faced with? Is it not our task to take good care of ourselves and those close to us (within certain boundaries, of course)? Hence, could we not question (1) the EC-principle (a normative question) and, closely related, (2) the notion that the EC-principle is the behavioural standard we in fact apply in situations such as the experimental one described above (a descriptive question)? According to the EC-principle, “we should always give everyone an equal chance of a positive outcome when put in a position to do so.” However, it seems obvious that we do not always give everyone equal chances in our daily lives, nor do we always think we should do so. For example, when we are the first to see something valuable lying in the street and pick it up, or when we happen to pick the right
supermarket line (the one that does not close due to a malfunctioning cash register), we do not seem to feel in any way morally obligated, or even inclined, to share our good fortune with the people around us. Clearly, it is just a matter of good luck that those things happen to us, but that does not mean that we have moral reasons to share that piece of luck with others. Sure, other people might be a bit annoyed at ending up in the wrong line themselves, or at missing out on the valuable object they see being picked up by you, but they will not feel morally indignant or resentful towards you. They will not consider a principle like the EC-principle to be applicable in this situation. So what is the difference between those situations and the particular experimental one that is central to this chapter? Why not regard the opportunity to divide the tasks as an instance of us having good luck while other participants do not? Let us see whether we can adjust our example in such a manner that the EC-principle presents itself as a likely candidate for what should be done. Suppose we adapt our example of finding some valuable object in the street and imagine ourselves now only inches away from another person, for example, because we both knelt down to help an old lady pick up groceries that fell out of her bag. In such a scenario, it might feel a bit awkward not to recognise this other person’s equal claim on the beautiful object. Why would that be? In this adapted scenario we are both part of the same situation, both literally in the same position. Consequently, how we respond to the valuable object immediately also communicates our attitude to this other person, someone we literally face. That is, when we take possession of the object without acknowledging her or him as equally ‘deserving’ it, we seem to ignore her or him. Would we not feel similarly ignored when the other person happened to have her or his hands on the valuable object first? In such a face-to-face situation, using a coin to decide who gets the valuable object becomes a reasonable option, in any case much more reasonable than in the previous versions of our example. The situation of the Batson experiment looks a bit like such a face-to-face situation. After all, you are told that you both share the same position in the same experimental situation, the only difference being that you happen to be so lucky as to be the one asked to allocate the dissimilar tasks. Hence, when asked what would be the most moral thing to do, a principle such as EC might spring to mind.
However, the experimental situation is not a face-to-face situation. In the experimental situation you take advantage of your position without ever meeting the other person, and without the other person ever knowing of your existence and your role in the experiment. Hence, if we affirm the EC-principle, might we not just be mistaken in thinking that the principle applies to the experimental situation? After all, you are not causing the anonymous other any harm beyond what could only have been avoided by suffering it yourself. That does not seem terribly wrong. In this respect the experimental situation is much more similar to the example of finding a beautiful object or picking the shortest line in the supermarket. At the same time it is easy to see that one might think it would be wrong to take the positive task, because the situation resembles a face-to-face scenario. Hence, we have two interpretations of the Batson findings which suggest that the label ‘moral hypocrisy’ might be a bit misleading when taken to suggest that we never act on the basis of moral reasons or principles. According to the first interpretation, the Batson findings show that we are mistaken about the exact mechanisms involved when we act in moral ways, i.e., that the desire to appear to be moral plays an important role in our everyday lives. To be sure, that finding is interesting and might be disconcerting to some (particularly those who have a high opinion of the moral nature of our everyday motivations). However, it in no way suggests that we never truly act in moral ways or that moral principles do not play a substantial role in our everyday practices. Rather, the Batson findings show that moral principles are so important to us that they might even lead us to fool ourselves when given the elbowroom to do so. According to the second interpretation, we might be mistaken about the applicability of the EC-principle. On closer inspection there is room for doubt whether we should make use of the coin, or whether we are terribly wrong not to assign the POS task to the other person. If we are correct in that respect, the Batson findings, again, do not establish that we never truly act morally. First of all, the decision of the majority of participants to take the POS task themselves is not that wrong, and we can wonder whether the EC-principle captures a behavioural standard that we should apply in this situation. Secondly, the fact that people are prepared to fool themselves because they think EC is the principle that should regulate their behaviour in this situation rather shows that they care
too much about appearing to be moral. Moreover, the experiment could just as well be taken to show that in certain conditions the desire to appear moral enables us to rise above what might reasonably be expected of us, morally speaking. To be sure, both interpretations of the Batson findings sketched in this section are speculative. However, the main point pursued here is that these interpretations come as no surprise if the analogy elaborated in the second section of this paper is correct. According to this analogy, we are not aware of what enables us to move without effort and in an immediate and unconscious manner, much as in the case of individual bodily movements. As a result, the Batson findings might disclose (a) the efficacy of a proprioceptive desire to appear moral and/or (b) that we are mistaken about the principle that guides us in experimental situations like the one investigated. These interpretations are not mutually exclusive and probably also not exhaustive. More research is needed to clarify exactly what these findings imply. For the moment it suffices to point out that the correct interpretation of the Batson findings, or of other experimental results, relies heavily on one’s philosophical picture of moral agency. An elaborate account of the role of the desire to appear to be moral that casts this desire in a partly constructive role is beyond the aim of the present paper. However, I hope to have made clear that positing a simple contrast between the efficacy of this desire, labelled ‘moral hypocrisy’ by Batson et al., and ‘true moral behaviour’ might be too hasty. One of the views worth exploring is that the desire to appear moral is actually crucial to our ability to act and respond in morally adequate ways (Sie 2009; Sie, work in progress), especially once one realises that the Batson findings show that the desire is also operative when no other people are present (does that not begin to sound like having a conscience?). The suggestion of the researchers is that their experiment debunks a certain picture of ourselves as moral creatures, but perhaps what they establish is rather that we misperceive the workings of everyday moral agency. Before concluding, let me return to the traditional focus on our individual deliberative capacities as relevant to the moral domain, and examine how that focus relates to the view sketched in this paper.
5. Moral Soulfulness
Although the view roughly sketched in this paper emphasises the social aspects of moral agency and the possibility and potential of a more positive interpretation of the findings of moral psychology, it can accommodate the common and traditional focus of moral philosophy on our individual capacity for deliberation and reflection on what to do and for what reasons. First of all, it is especially when we no longer know what to do, face a difficult moral situation or dilemma, or when other people criticise, blame or resent our actions, that deliberation and reflection are required. Effort, concentration and conscious control are called for when we want to change our behaviour and correct ways of acting that come naturally to us. Perhaps it is for this reason that we tend to identify our ‘true self’ with our thinking, deliberating self. Although most of the time we move effortlessly and as a matter of course in the moral space, we can obviously also withdraw from this fluent interaction. Such withdrawal requires an effort – if we manage to succeed at all. As a result, it might feel much more like steering and controlling than like the tension-free participation in our everyday interactions. Moreover, it might feel much more like an individual steering, and consequently like that which distinguishes us from the broader, thoughtless ‘going with the flow’ of other interactions. Also, in order to decide whether such withdrawal is required we need to deliberate, think and reflect on what is justified, on what basis, and so on. Hence, it is far from strange that our individual capacity to deliberate is assumed to play such a central role where morality is concerned.16 However, that does not mean that our ability to act in morally adequate ways amounts to nothing more than our ability to morally evaluate ourselves and to figure out what to do when facing a dilemma or moral problem. Just as the ability to drive well entails not only that we are able to recognise dangerous situations and to get our car back on the road if we go into a skid, but also the very mundane ability to get home safely on an everyday basis, so moral agency involves more than handling the difficult cases. A more positive interpretation of the experimental findings discussed in this paper would be that the desire to appear moral derives from our sincere wish to be part of a larger community, unless the costs are too high. Perhaps we usually assume that what
16 And for the reasons set out in notes 12 and 14 above and note 17 below, this might also be considered desirable.
we do more or less corresponds with the morally permissible. Of course we know that every now and then we make mistakes, commit tiny moral errors, but in general we perform pretty well, knowing what we can and cannot afford without becoming a ‘moral’ eccentric. We effortlessly play along with the rest, often without the need for much deliberation on what to do prior to our acting. When someone asks us why we did what we did, we answer in all honesty. “Why did you toss the coin?” “It seemed to me like the fairest way to divide the tasks!” Sometimes we give this answer even if the principle played no actual role in our behaviour. This does not show that we are not truly moral or that moral principles and reasons play no role in our life. It just shows that we should not oversimplify this role by capturing it in straightforward temporal-causal processes. Just as Christina’s case does not show us that perception plays no role in navigating the world, the Batson findings do not show that moral principles or moral reasons play no role. If the analogy I sketched in this paper is helpful, it is not just understandable that we are often mistaken about what motivates us, but also that we might be mistaken about the moral principles that regulate our interactions. We might discover that the principles we claim to adhere to do not correspond with what is actually important to us, i.e., with what actually regulates our behaviour. If the analogy makes sense, we learn how to act in morally adequate ways in pretty much the same way as we learn how to move our bodies, i.e. by trial and error: by being corrected when we make mistakes, by being instructed how to do certain things and seeing examples of how it is done around us, and by getting explanations of why certain movements or exercises work and others do not. That is to say, by living in a community that requires us to anticipate what is expected of us, by regularly inviting reactions when we transgress these expectations, by being expected to justify and explain ourselves when that happens, and by being expected to give the correct answers when we are asked to explain our deviant behaviour. Just as proprioception enables us to move our bodies as our own and to learn to do so by trial and error, the susceptibility to other people’s moral evaluation of our actions enables us to learn to act in moral ways in a largely automatic and immediate fashion. It is easy to imagine that the desire to appear moral has a crucial role to play in this process: it makes us behave as we are supposed to behave, morally speaking, or rather as we believe ourselves to be supposed to
behave. And it does so by keeping an eye focussed on salient moral principles and making sure we adjust our behaviour in accordance with them, at least to the degree necessary in order not to appear morally indifferent or even immoral, not by thorough ethical deliberation prior to each and every action we perform. The Batson findings are fascinating and partly unsettling because they disclose the downside of such a desire and mode of operating, namely the risk of self-deception.17 And they show this risk to be present even when the stakes are very low and the situation is quite simple and straightforward. The phenomenon of confabulation, to which the research briefly introduced in the first section draws attention, is one of the ways in which such self-deception can surface. When asked for reasons we provide those that make sense of our behaviour, morally speaking (but very likely also in other domains).18 And we do so even when it has been ascertained that these reasons have less explanatory power in the cases examined than other features of the situation. That, however, does not imply that the reasons we exchange in everyday settings bear no relation to the ways in which we act. If that were the case we would not be able to make sense of the generally quite reliable relation between those reasons and our actions. As argued in the previous section, this reliable relation is not undermined by the mechanisms and processes uncovered by scientific investigation to be at play underneath. However, such findings might offer new ways of understanding what can go wrong in the moral domain. Besides the agent’s emotional or cognitive failures, one of the things that might provoke moral wrongdoing could be the loss of the desire to appear moral, e.g., because one no longer feels a part of the moral community.19 Such a loss of
17 Another downside is the risk that the desire makes one act in accordance with accepted and explicated behavioural standards that are not moral or are even immoral. See notes 12, 14 and 16 above.
18 Not only do we make up reasons for judgments that are supposed to be guided by moral considerations (as in the Haidt experiments), but also for choices that concern trivial things (panties), our health, our financial future, and so on. This might indicate that besides the wish to appear moral we also wish to appear, for example, rational (prudent).
19 That it is such individual cognitive or emotional failings that explain immoral actions is suggested by the more traditional meta-ethical positions referred to in the introduction of this paper. Certain moral failures can be explained by appeal to such emotional and cognitive impairments. However, there might be others that cannot so readily be explained in these terms.
‘identification’ might, in turn, be caused, for example, by rejection by and/or alienation from the community in which one lives, but it could also have a pathological source (analogous to the case of Christina). Such a pathological loss of the proprioceptive desire to appear moral might be partly remedied. We might be able to stay on, or return to, the ‘right track’ with the help of our cognitive and/or emotional capacities (as Christina did with the help of her eyesight and focus), but our interactions will lose the ease and adequacy that previously characterised them. So where do these speculations leave us with regard to the role of experimental ethics and social and moral psychology in moral reflection? If the desire to appear moral plays the important role that the Batson findings suggest, and if the picture of moral agency suggested here on the basis of our analogy is correct, then self-deception is a constant danger, especially in the moral domain. In that case we need experimental ethics and social and moral psychology to show us who we are, what we are capable of and under what conditions, undistorted by our flattering self-perceptions. Such information is indispensable for discussions about who we want to be and what would be required to live up to that ideal.
6. Conclusion
In this paper I have explored the idea of the existence of a proprioceptive mechanism in the moral domain. Using as an illustration a story by Sacks about a woman affected by a rare neurological disease, I suggested that the relation between reasons and actions leaves ample room for a diversity of underlying motives and other influences of which we are not aware. According to that picture, the fact that our actions are often not preceded by individual reflection does not entail that the reasons we exchange daily to explain and justify our actions bear little or no relation to their actual motivational origin or causes. I have shown how this picture might accommodate recent findings in the behavioural, cognitive and neurosciences, especially those that show that unconscious processes have a far-reaching influence on the way we behave and act (and
think), as well as closely related findings in experimental ethics and moral psychology. More specifically, I have discussed a series of experiments performed by Batson and colleagues on moral hypocrisy. I argued against an interpretation of these findings as adding to the growing body of research that suggests that we are not the moral beings we perceive ourselves to be. Instead, I suggested two alternative interpretations of the Batson findings: (1) that the desire to ‘appear to be moral’ disclosed by the findings operates as a kind of proprioceptive mechanism that might occasionally lead us astray, and/or (2) that the EC-principle that we (are made to) identify with is not the moral principle that actually guides us in the experimental situation. Both suggestions derive from taking the elaborated analogy seriously, and both provide ample reason to resist a sceptical interpretation of our nature as moral agents. If the analogy elaborated in the first sections, and the interpretations of the Batson findings that it suggests, are convincing, then, as I argued in the last section, we have all the more reason to take experimental ethics and moral psychology seriously. They show us that we might lack self-knowledge (the analogy) and, moreover, that we are prone to deceiving ourselves in the moral domain (the Batson findings). Hence, experimental ethics and moral psychology present us with a much-needed mirror. This mirror reflects a moral nature that is less attractive than our explicit moral principles and values suggest, but not one that comes anywhere near undermining the need for ethical reflection and philosophy.
ACKNOWLEDGEMENTS: I thank Wiljan van den Berge and Nicole van Voorst Vader-Bours for preparing the English translation of the Dutch paper that was the basis of this chapter, the editor Christoph Lumer for careful reading and thoughtful comments, and the Dutch Organization of Scientific Research (NWO), which financed the research project of which this paper is one of the results.
REFERENCES
Batson, C. (2008): Moral masquerades. Experimental exploration of the nature of moral motivation. In: Phenomenology and the Cognitive Sciences 7 (1). 51-66. doi: 10.1007/s11097-007-9058-y.
Batson, C. Daniel; Elizabeth R. Thompson; Hubert Chen (2002): Moral hypocrisy. Addressing some alternatives. In: Journal of Personality and Social Psychology 83 (2). 330-339. doi: 10.1037/0022-3514.83.2.330.
Batson, C. Daniel; E. R. Thompson; G. Seuferling; H. Whitney; J. A. Strongman (1999): Moral hypocrisy. Appearing moral to oneself without being so. In: Journal of Personality and Social Psychology 77 (3). 525-537.
Bennett, Maxwell; Daniel Dennett; Peter Hacker; John Searle (2007): Neuroscience and Philosophy. Brain, Mind, and Language. New York: Columbia University Press.
Chaiken, S.; Y. Trope (2002): Dual-process theories in social psychology. New York: Guilford.
Davidson, Donald (1980): Essays on actions and events. Oxford; New York: Clarendon Press; Oxford University Press.
Dennett, Daniel (2003): Freedom Evolves. London: Penguin.
Diener, Edward; Mark Wallbom (1976): Effects of self-awareness on antinormative behavior. In: Journal of Research in Personality 10 (1). 107-111. doi: 10.1016/0092-6566(76)90088-X.
Dijksterhuis, A. (2007): Het slimme onbewuste. Amsterdam: Bert Bakker.
Doris, John M. (2002): Lack of character. Personality and moral behavior. Cambridge; New York: Cambridge University Press.
Fine, Cordelia (2006a): Is the emotional dog wagging its rational tail or chasing it? In: Philosophical Explorations 9 (1). 83-98.
Fine, Cordelia (2006b): A mind of its own. How your brain distorts and deceives. 1st ed. New York: W.W. Norton & Co.
Gazzaniga, Michael S.; Joseph E. LeDoux (1978): The integrated mind. New York: Plenum.
Gigerenzer, Gerd (2008): Moral Intuition = Fast and Frugal Heuristics. In: Walter Sinnott-Armstrong (ed.): Moral Psychology. Volume 2: The Cognitive Science of Morality. Intuition and Diversity. Cambridge, MA: MIT Press. 1-26.
Gigerenzer, Gerd et al. (1989): The Empire of chance. How probability changed science and everyday life. Cambridge; New York: Cambridge University Press.
Hassin, Ran R.; James S. Uleman; John A. Bargh (eds.) (2005): The new unconscious. Oxford; New York: Oxford University Press.
Jacobson, Daniel (2008): Does Social Intuitionism Flatter Morality or Challenge it? In: Walter Sinnott-Armstrong (ed.): Moral Psychology. Volume 2: The Cognitive Science of Morality. Intuition and Diversity. Cambridge, MA: MIT Press. 219-232.
Lamme, V. (2010): De vrije wil bestaat niet. Over wie er echt de baas is in het brein. Amsterdam: Bert Bakker.
Libet, Benjamin; Curtis A. Gleason; Elwood W. Wright; Dennis K. Pearl (1983): Time of conscious intention to act in relation to onset of cerebral activities (readiness-potentials). The unconscious initiation of a freely voluntary act. In: Brain 106. 623-642.
Murphy, S.; J. Haidt; F. Björklund (2000): Moral Dumbfounding. When Intuition Finds No Reason. University of Virginia.
Narvaez, Darcia (2008): The Social Intuitionist Model: Some Counter-Intuitions. In: Walter Sinnott-Armstrong (ed.): Moral Psychology. Volume 2: The Cognitive Science of Morality. Intuition and Diversity. Cambridge, MA: MIT Press. 233-240.
Sacks, Oliver (1985): The man who mistook his wife for a hat. London: Gerald Duckworth.
Schnall, Simone; Jennifer Benton; Sophie Harvey (2008a): With a Clean Conscience. Cleanliness Reduces the Severity of Moral Judgments. In: Psychological Science 19. 1219-1222.
Schnall, Simone; Jonathan Haidt; Gerald L. Clore; Alexander H. Jordan (2008b): Disgust as Embodied Moral Judgment. In: Personality and Social Psychology Bulletin 37. 1096-1109.
Schwitzgebel, Eric (2009): Do ethicists steal more books? In: Philosophical Psychology 22. 711-725.
Sie, Maureen (2009): Moral Agency, Conscious Control, and Deliberative Awareness. In: Inquiry 52. 516-531.
Sie, Maureen (2010): Wat bezielt ons? Over de relevantie van de morele psychologie voor ethische reflectie. In: filosofie & praktijk 32. 19-36.
Sie, Maureen (work in progress): Moral Hypocrisy and Acting for Reasons.
Soon, Chun Siong; Marcel Brass; Hans-Jochen Heinze; John-Dylan Haynes (2008): Unconscious determinants of free decisions in the human brain. In: Nature Neuroscience 11. 543-545. doi: http://www.nature.com/neuro/journal/v11/n5/suppinfo/nn.2112_S1.html.
Strawson, Peter (1962): Freedom and Resentment. In: Proceedings of the British Academy 48. 1-25. - Reprinted in: Gary Watson (ed.): Free Will. Oxford [etc.]: Oxford U.P. 1982. 59-81.
Thaler, R. H.; C. R. Sunstein (2008): Nudge. Improving Decisions About Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
Tversky, Amos; Daniel Kahneman (1981): The Framing of Decisions and the Psychology of Choice. In: Science 211. 453-458.
Wegner, Daniel M. (2002): The illusion of conscious will. Cambridge, MA: MIT Press.
Wicklund, Robert A. (1975): Objective Self-Awareness. In: Advances in Experimental Social Psychology 8. 233-275.
Wilson, Timothy D. (2002): Strangers to ourselves. Discovering the adaptive unconscious. Cambridge, MA: Belknap Press of Harvard University Press.
PART IV
Neuroethics – Which Values?
The Rationale Behind Surgery – Truth, Facts, Values
ARNALDO BENINI
Abstract: Ideally the doctor should be able to identify the singularity of every case of sickness, in order to choose a treatment related to its particularities. This lies beyond human capacities, so risks and misfortunes unavoidably belong to medical practice. The ethical aspect of surgical treatment requires deep thinking. It is the patient, not the surgeon, who has to face the consequences of the surgeon’s doing. Which truth, which facts and which values should be evaluated before a surgical intervention can be decided on and recommended? Every medical and surgical field has its own ethical problems. The paper deals mainly with the special aspects of the treatment of tumours of the organ which creates consciousness and mind: the brain.
In 2003 William Safire defined neuroethics as: “The field of philosophy that discusses the rights and wrongs of the treatment […] of the human brain.”1 Two years later Michael Gazzaniga proposed the current meaning of neuroethics: “It is – or should be – an effort to come up with a brain-based philosophy of life. [It is] the examination of how we want to deal with the social issues of disease, normality, mortality, lifestyle, and the philosophy of living informed by our understanding and underlying brain mechanism. […]”2 I will try to give some examples of how neuroethics can and should be “implemented” in medical and surgical practice. I am grateful for the opportunity to reflect on and to speak about the ethical background of my professional activity as a neurosurgeon over forty-three years. I have recalled to mind and reconsidered cases of patients and operations in which the ethical aspect of treatment required deep thinking in order to avoid ethical mistakes. Such mistakes are not eo ipso a crime, but they can amount to medical inaccuracy.
1 W. Safire, The Risk That Failed, “New York Times”, July 10, 2003.
2 M. S. Gazzaniga, The Ethical Brain, Dana Press, New York, Washington 2005, p. XV.
The key point of medical ethics is to consider the patient’s health as the very pinnacle of medical practice. This should be obvious; it is indeed not accidental that one third of Hippocrates’ oath admonishes doctors not to request too high a price for taking care of a sick person. Niels Bohr said that prediction is difficult, especially about the future. This is particularly true in the field of medicine, and more than ever in surgery. Surgery performed in identical ways on apparently identical lesions can result for one patient in a complete or quite satisfactory recovery, while in another the condition remains unchanged or gets worse. Sometimes the cause of a bad outcome can be found ex post in aspects of the lesion which were overlooked or misunderstood and which can be treated by a second intervention. Grave scientific, technical, human and ethical problems have to be faced when the cause of the failure of surgery (but also of any treatment) cannot be found. This is indeed the worst case. What can and should be done? This is the event, the incident, the accident, which obliges the surgeon to consider carefully the rationale behind every surgery. Which truth, which facts and which values should be evaluated before a surgical intervention can be decided on and recommended?
1. Semantic preliminary
A semantic preliminary can help towards a better understanding of the topic.3 The Italian word malattia corresponds to the English sickness and comprises two completely different conditions. Unlike Italian, English makes a fundamental distinction between illness and disease. Illness is the sufferers’ belief that something is wrong with them. They feel ill even though no evidence of disease can be detected. Many people who end up presenting themselves to a doctor have no identifiable organic disease. There is apparently nothing physically wrong with them, even if they claim, and in many cases genuinely believe, that they are unwell. They really do feel ill, and their ability to lead a normal life may be significantly impaired. The
3 P. Martin, The Sickening Mind: Brain, Behaviour, Immunity and Disease, Harper Collins, London 1997, p. 46 ff.
majority of those who are suffering from vague, undiagnosed illness are not malingering. It is no wonder that all treatment is frequently ineffective and that the patients keep returning to the doctor over and over again, distressed and dissatisfied. Every elusive disease leads to a waste of time and money. Health and illness lie along a continuum, and often the dividing line between the two is arbitrary. It is not the surgeon’s duty to make the correct diagnosis of these sufferings. His fundamental task is to recognize the truth that no surgical treatment is needed, even if the patient desires an operation. This duty can indeed be difficult, and it requires a lot of experience and strong discipline. A disease, by contrast, is a definable medical disorder that can be identified according to agreed criteria of evidence-based diagnostics. In some cases the patient might not feel ill, for instance in cases of early-stage cancer, coronary heart disease or small brain tumours. An example particularly familiar to me from forty-three years of performing spine surgery: two people have almost identical, severely degenerated lumbar spines, both with impressive alterations of bone and joint structures. One patient suffers from disabling back pain; the other feels quite well except for some complaints in his lower back after physical exertion.
2. Sickness as natural processes
Every sickness is a unique event. Theoretically and ideally, the doctor should be able to identify the singularity of every single case of sickness, in order to choose a treatment strictly related to its particularities. This lies beyond human capacities, so that risks and misfortunes are unavoidable aspects of medical practice. A sickness is not a major upset of nature’s laws. Sickness is a consequence of a normal natural process with adverse effects on our health. The humbling truth sent out by sickness is that our bodies are products of evolutionary processes that did not take us into any particular consideration. Medicine and surgery try to modify, to slow down or to block biophysical events which are the normality of nature. The difficulty in forecasting the outcome of medical or surgical treatments has its roots there. The flow of natural events is unpredictable; their explanation and interpretation are equally dubious and uncertain.
The truth to be kept in mind is that this level of knowledge of every single case is inaccessible: therefore not only surgical but also medical treatments, even the simplest, carry risks. Not all doctors find it indispensable to inform patients about this aspect of every kind of treatment. In every field of medical practice, it is much easier to explain what seems to have happened in the diseased body than to forecast the course a lesion may take. When something goes wrong in the human body, it informs consciousness in rather strange, one might even think inhuman, ways. Death itself, an outcome that suggests that we are closer to insects than angels, is its most extreme statement. Pain, itching, nausea, shortness of breath, paralysis, blindness, deafness, weakness or disappearance of memory, confusion and so on compress the mind to a narrow and impoverished present. The inhumanity of diseases, and of the primary experiences associated with them, is not surprising, since there is nothing specifically human about much of what goes on in the human body. With sickness the body moves into the foreground. The inhuman depths beneath its human surface assert themselves. After diagnosis, the insistent discord in the ensemble of ordinary experiences still declaims that the body, our ownmost territory, is a foreign land with laws and customs hidden from its owner.
3. Rationale of surgery
The difficulty in working out the rationale of surgery is one of the difficulties of being human. The matter concerns the difficulty of operating with rational caution in a field in which the patient, and not the surgeon, has to face and, if things go wrong, to bear the consequences of the surgeon’s doing. The surgeon is confronted with various dilemmas: (1) Is surgery the only way to treat the patient’s sickness with likely efficacy? If this should be the case, it is clear that surgery has to be recommended. It must be evident that the patient’s actual condition carries a risk higher than the risk of an operation.
(2) If surgery is held to be unavoidable and the patient agrees to be operated upon, the question is: what kind of intervention is the most appropriate for the particular kind of lesion, considering the patient’s condition? This choice can be the most intricate in the whole therapeutic plan. The technique must be the most selective among the known procedures, and it has to be well known and already verified according to international guidelines evaluating safety and outcome. As a rule, such an evaluation is based on the comparison of results obtained by different surgical techniques, or by comparing surgical to non-surgical treatments. The surgeon has to be familiar with the most appropriate procedure, in order to do his best and to be able to cope with unexpected events. What if the surgeon is not familiar enough with the most suitable procedure? The right decision is to transfer the patient to the appropriate surgeon. In cases of extreme urgency, the patient and his or her close relatives should be informed that, due to the urgency, surgery cannot be performed under the best conceivable conditions, and that the surgical risk may therefore be higher. This dialogue can be a delicate matter; medical ethics holds this practice to be unavoidable. (3) In most cases, the surgeon can foresee the best achievable outcome. And the best achievable outcome is not always what the patient and his relatives expect. Sometimes a tumour cannot be totally removed, or chronic pain cannot be totally cured.
4. Rationale of brain surgery
Every medical and surgical field has its own ethical problems. Let me take the example of a frequent neurosurgical task: the treatment of tumours of the organ which creates consciousness and mind, and therefore also the ability to perform surgery – the brain. Strictly speaking, all kinds of brain surgery have to be considered an intervention on the organ of the patient’s Self. Grosso modo, two kinds of tumours can grow in the brain:
1. The benign meningioma, one of the most benign neoplasms in the body. It grows slowly from the meninges into the cerebral mass; it is solid, encapsulated, and does not infiltrate the brain. The disease can cause severe headache, mental confusion or dizziness, or can erupt with epileptic seizures. Due to the slow tumour growth, complaints and signs can be very mild and intermittent. The tumour grows inexorably, so that the operation is mandatory, with the exception of elderly people, in whom a successful operation would not significantly prolong life expectancy or improve quality of life. In case of complete removal of the tumour without significant injury to the cerebral substance, the patients are cured, even if some of them must take a couple of pills every day in order to avoid epileptic seizures. Recurrences are rare. 2. Gliomas are the other kind of intracranial tumours. They are intrinsic neoplasms of the brain, growing from the cells (glia cells) interposed between the neurons in the cerebral cortex. Even if the cytological patterns are basically benign, Grade I and Grade II gliomas can devastate a wide portion of brain tissue. In favourable cases, the central tumour mass can be removed by surgery. A sharp boundary between tumour and normal tissue does not exist, because the tumour infiltrates the rest of the brain. Trying to remove the tumour totally can be a shot in the dark. The rationale in the treatment of this kind of brain pathology is to perform an operation without provoking any further deficit, mainly of language, movement and consciousness, knowing that the tumour will unavoidably grow again. Surgery should not worsen the physical and mental condition caused by the lesion itself. It can be unwise to undertake such an operation in elderly patients. Modern technology enables surgeons to perform the intervention under continuous magnetic resonance imaging (MRI) monitoring; the operating theatre is itself a special MRI device. The surgeon can in this way avoid destroying key functional regions of the brain. Sometimes more pathologic tissue can be removed than with conventional techniques. Life expectancy can in this way be prolonged by some months. Due to the benignity of the tumour tissue, irradiation and chemotherapy are ineffective. The rationale in these cases requires an exact diagnosis of the visible extension and of the cytological grade of the tumour. In uncertain cases, a biopsy before or during surgery is indispensable. The patient and his
relatives have to be informed about the general risks, the possibility of worsening, and the average life expectancy. 3. Grade III and IV gliomas are rapidly growing intrinsic brain tumours. Grade IV is one of the most malignant forms of tumour in the human body. Whatever part of the neoplasm is visible in brain imaging (CT or MRI) is just the central part of the tumour mass, which infiltrates not only the whole hemisphere, but after a short time also the other hemisphere. The glioblastoma, i.e. the Grade IV glioma, seems to be a general, gradual malignant alteration of the whole brain. Healing is unachievable, even with early treatment. After surgical removal of the central part and subsequent radio- and chemotherapy, the average life expectancy is 8 to 14 months. Without surgery, with radio- and chemotherapy alone, the average remaining life is 6 months at the most. What should be done here? The rational procedure depends exclusively on the patient’s attitude towards life and on his immediate needs. It is surprising how many patients in normal mental condition refuse any therapy because they want to spend the rest of their life without the trouble of brain surgery and its risks, perfectly aware of the unavoidable, sometimes rapid worsening of their condition. This decision has to be rigorously respected by the relatives and by the physician, who in due course has to care for the patients and to help them to die in the most humane way. On one occasion a 55-year-old patient begged me to do everything possible to prolong his life by four or six months so that he could arrange his will. The moral duty of a surgeon consists in informing the patients and relatives about the kind and extent of the tumour, about its infiltrative nature, and about the limits of the surgical and subsequent treatments. Talking to patients and relatives about life and death is an extremely delicate duty for the surgeon; it requires certainty of diagnosis, a lot of empathy, and experience. The surgeon should not feel offended if the patient wishes a second or a third opinion. A decision for surgery and subsequent treatments, as well as one for pure survival for as long as nature permits, is perfectly rational. In both eventualities the physician should be ready to help the patient to die with dignity. A positive aspect of such kinds of brain tumours is that in the last weeks the brain can be so damaged that the patient lives in a timeless state, unaware of being at his or her end. He or she passes away unconscious.
5. Truth, facts and values in the rationale of surgery
What is the truth in the rationale behind surgery of which the surgeon must be aware?
(A) Exact knowledge of the lesion which is to be treated.
(B) Certainty that surgery is the only or the best way to treat the lesion.
(C) Certainty that the surgical risks are lower than the risk of an untreated condition.
(D) Certainty of being able to employ the most suitable technique.
And what are the facts?
(A) There is no surgery without risks. The patient needs to be informed about the surgical risks.
(B) The lesion can be treated, but not always really cured. This depends on the characteristics and on the extent of the lesion. The patient and his relatives must know the average prognosis in case of palliative treatment or treatments, and in case of surgery and subsequent procedures.4
And finally, which values should a surgeon adhere to? In surgery with a relatively high intrinsic risk, the only and leading value is to try to achieve what the patient holds to be best for him. Just that, nothing more.
4 M.C. Schmidt, Griff nach dem Ich? Ethische Kriterien für die medizinische Interventionen in das menschliche Gehirn, De Gruyter, Berlin, New York 2008, pp. 71 ff.
Biographical Notes on the Authors
Arnaldo Benini has been Professor of Neurosurgery and Clinical Neurology at Zurich University since 1979. Born in Ravenna, Italy, in 1938, a graduate in Medicine of Florence University (1964) and Zurich University (1975), he specialized in Neurosurgery at Zurich University with Professor Hugo Krayenbühl. From 1965 to 2003 he was a neurosurgeon in St. Gallen and Zurich. He has published on neurosurgery and neurology (188 publications), on the physiology of pain in Descartes, on Domenico Cotugno, Vittorio Putti, on euthanasia and physicians’ conscience, on the living will, and on Thomas Mann and Jakob Wassermann. His books include What am I. The Brain in Search of Itself (2009, in Italian) and Imperfect Conscience. Neurosciences and the Meaning of Life (2012, in Italian). He contributes to the Sunday supplement of the daily newspaper Sole 24 Ore on matters of philosophy and science.
Antonella Corradini is Professor of Philosophy of the Human Sciences at the Faculty of Psychology of the Catholic University of Milan, Italy, specializing in the philosophy of mind, philosophical psychology, and metaethics. Her latest publications include: “Mirror neurons and their function in cognitively conceived empathy” (with A. Antonietti, in: A. Antonietti and A. Corradini (eds.) The role of mirror neurons in intentionality understanding. From empirical evidence to theoretical interpretation, Special Issue of Consciousness and Cognition, 2013); “Emergent Dualism: Why and How” (in: P. Wallusch and H. Watzka (eds.) Verkörpert existieren. Ein Beitrag zur Metaphysik menschlicher Personen aus dualistischer Perspektive, 2014 (in press)); “Quantum Physics and the Fundamentality of the Mental” (in: A. Corradini and U. Meixner (eds.) Quantum Physics Meets the Philosophy of Mind, 2014 (in press)).
Christoph Lumer is Professor of Moral Philosophy at the University of Siena (Italy). His main fields of research are normative ethics, metaethics (e.g. the justification of morals), some applied ethics (environmental ethics, future generations …), the theory of action and
moral psychology, theory of prudential rationality and desirability, and argumentation theory. He has published the monographs Rational Altruism. A Prudential Theory of Rationality and Altruism (2nd ed. 2009, in German), The Greenhouse. A Welfare Assessment and Some Morals (2002), and Practical Theory of Argumentation (1990, in German), as well as more than 100 articles. Further information, including access to many of Lumer's publications: www.lumer.info.

Michael Pauen is Professor of Philosophy at the Humboldt-Universität zu Berlin and academic director of the Berlin School of Mind and Brain. His research focuses on the philosophy of mind and on the relation between philosophy and neuroscience. Recent papers: "The Second-Person Perspective" (Inquiry, 2012); "Materialism, Metaphysics, and the Intuition of Distinctness" (Journal of Consciousness Studies, 2011); "How Privileged is First Person Privileged Access?" (American Philosophical Quarterly, 2010); "Does Free Will Arise Freely" (Scientific American, 2003).

Massimo Reichlin is Professor of Moral Philosophy at the Università San Raffaele, Milan (Italy). His fields of research are normative ethics, bioethics, neuroethics, and the history of moral philosophy. He has published books on euthanasia (L'etica e la buona morte, 2002), aspects of Kantian ethics in contemporary discussion (Fini in sé. La teoria morale di Alan Donagan, 2003), abortion (Aborto. La morale oltre il diritto, 2007), philosophical theories of life ethics (Etica della vita. Nuovi paradigmi morali, 2008), the relationship between ethics and neuroscience (Etica e neuroscienze, 2012), and the history of utilitarianism (L'utilitarismo, 2013).

Maureen Sie is Professor at the Institute of Philosophy, Leiden University, and Associate Professor of Meta-ethics and Moral Psychology at the Erasmus University, Rotterdam, the Netherlands. From 2009 to 2014 she led a small research group, enabled by a prestigious personal grant (Dutch Organization of Scientific Research), exploring the implications of findings in the behavioral, cognitive, and neurosciences for our concept of moral agency. Her publications include "The real neuroscientific challenge to free will" (Trends in Cognitive Sciences 12/1 (2008), pp. 3-4, co-authored with Arno Wouters). She published Justifying Blame. Why Free Will
matters and why it does not (2005), and co-edited, with Derk Pereboom, Basic Desert, Reactive Attitudes and Free Will, a Special Issue of Philosophical Explorations 16(2) (2013).
Name Index

Adams, F., 67n2, 105n1 Adolphs, R., 38, 142 Anderson, A. K., 141 Anderson, St. W., 129, 141 Andreiuolo, P. A., 40 Anscombe, G. E. M., 11n4, 35, 110-111n5, 123 Antonietti, A., 203 Appiah, K. A., 146-150, 152, 154, 160 Aquinas, Th., see: Thomas Aquinas Aristotle of Stagira, 5, 67n2, 105n1, 169 Aronson, J. A., 41 Ashley-Cooper, A., see: Shaftesbury, Third Earl of Audi, R., 27 Augustine of Hippo, 67n2, 105n1 Azevedo, I. F., 40 Baars, B. J., 97, 101 Babcock Gove, Ph., 109, 123 Bargh, J. A., 9, 35, 36, 37, 191 Barndollar, K., 9, 35 Batson, C. D., 25, 26, 35, 36, 165, 177-185, 187-189, 190, 191 Bayne, T., 110, 123 Bechara, A., 141 Benini, A., 3, 35, 203 Bennett, M., 173n10, 191 Bentham, J., 146 Benton, J., 192 Berge, W. van den, 190 Berker, S., 157, 159, 160 Berthoz, A., 5n1, 36 Bettman, J. R., 41 Björklund, F., 29, 37, 192
Bohr, N., 196 Bolbecker, A. R., 74n4 Bowles, S., 37, 38 Boyd, R. T., 37, 38 Bramati, I. E., 40 Brandt, R. H., 67n2, 105n1 Brass, M., 103, 192 Bratman, M., 67n2, 105n1, 133, 141 Breitmeyer, B. G., 74n4, 75, 88 Brink, D. O., 137, 141 Brodbeck, Ch., 138 Buchak, L., 9, 36 Burrows, L., 36 Busch, N. A., 60, 101 Camerer, C. F., 8, 36, 38 Campbell, R., 159n7, 161 Caparelli-Daquer, E. M., 40 Carlsmith, K. M., 159, 160 Chaiken, S., 166n2, 191 Chapman, H. A., 130, 141 Chartrand, T. L., 9, 35 Chen, H., 191 Chen, M., 36 Chisholm, R. M., 52, 55, 60 Cholbi, M., 18, 36 Churchland, P. Smith, 5, 36 Ciaramelli, E., 13, 36, 129, 141 Clark, A., 9n3, 36 Clark, S., 60, 123 Clore, G. L., 143, 192 Cohen, J. D., 37, 41, 141, 142 Cohen, J. L., 39 Cohon, R., 140n2, 141 Corradini, A., 14, 17, 30, 34, 148, 160, 204
Cotugno, D., 203 Craig, J., 142 Crick, F., 7n3, 38 Crozier, R. W., 8, 36 Cushman, F., 32, 36, 38, 42, 142 Damasio, A. R., 17, 36, 37, 38, 127, 129, 141, 142, 143 Damasio, H., 141 Darley, J. M., 37, 141, 142 Davidson, D. D., 67n2, 105n1, 174, 191 Davis, L. H., 110-111n5, 123 Dean, R., 158, 160 Deecke, L., 57, 60 Deigh, J., 18, 36 Dennett, D. C., 50, 60, 74, 79, 96, 98n12, 101, 110, 119, 123, 173n10, 191 de Oliveira-Souza, R., see: Oliveira-Souza, R. de de Quervein, D. J.-F., see: Quervein, D. J.-F. de Descartes, R., 36, 54, 67n2, 105n1, 120, 141, 203 Dias, M. G., 37 Diener, E., 179, 191 Dijksterhuis, A., 165, 191 di Pellegrino, G., see: Pellegrino, G. di Dolan, R. J., 41 Doris, J. M., 36, 41, 42, 166, 191 Eimer, M., 57, 60, 65, 66, 79, 83, 91, 93, 101 Einstein, A., 112 Engell, A. D., 37, 142 Eslinger, P. J., 40 Evans, J. St. B. T., 89n7, 101 Fehr, E., 24, 36, 37, 38 Fehrer, E., 89
Fincher, K., 41 Fine, C., 18, 30, 36, 133, 135, 142, 165, 173n10, 191 Foot, Ph., 128, 141 Frank, R., 141 Frank, R. H., 24, 37 Frankfurt, H., 61, 133, 141 Freeman, A., 101, 102 Freud, S., 10, 14, 16, 107 Frith, Ch. D., 41, 60, 123 Gächter, S., 24, 36 Galaburda, A. M., 141 Galvan, S., 152, 160 Gazzaniga, M. S., 172, 191, 195, 195n2 Gerrans, Ph., 132, 133, 135, 141 Gigerenzer, G., 8, 28, 37, 172n9, 191 Gilbert, S., 60, 123 Ginet, C., 52, 60, 110-111n5, 123 Gintis, H., 24, 37, 38 Gleason, C. A., 61, 102, 191 Goldberg, E., 86, 101 Goldman, A., 67n2, 105n1 Gollwitzer, P., 115, 123 Gomes, G., 56, 60, 74 Grabowski, Th., 141 Grafman, J., 40, 142 Graham, J., 129, 142 Greene, J. D., 12-17, 20, 29-30, 33, 34, 36, 37, 128, 129-130, 141, 142, 145, 154, 154n3, 156-159, 159n5, 159n6, 160, 161 Grind, W. van de, 74n4 Grundmann, Th., 146n1, 161 Hacker, P., 191 Haidt, J., 16, 28, 29-31, 33, 37, 41, 129, 130-131, 142, 143, 150, 153-154, 154n3, 156, 161, 162,
167, 171, 172, 173, 173n10, 188n18, 192 Haggard, P., 57, 58, 62, 65, 66, 79, 83, 91, 93, 100, 101, 110, 115, 123 Hardman, D., 8, 37 Hare, R. M., 34, 147, 148, 161 Harvey, N., 8, 38 Harvey, S., 192 Hassin, R. R., 10, 37, 165, 191 Hauser, M. D., 28, 29-30n11, 37, 38, 142, 151, 154, 155, 161 Haynes, J.-D., 59, 60, 66, 79, 79n5, 101, 103, 115, 123, 192 Heckhausen, H., 23, 38, 56, 60, 65, 69, 87, 88, 91, 95n9, 101 Heekeren, H. R., 18, 38 Hein, G., 20, 38 Heinze, H.-J., 103, 192 Henrich, J., 24, 38 Herrmann, Ch. S., 56, 57, 60, 82, 101 Hobbes, Th., 23 Hoffman, M. L., 25n7, 26, 38 Hoffmann, F., 146n1, 161 Horvath, J., 148n1, 161 Huemer, M., 27 Hume, D., 8, 16, 17, 22, 34, 67n2, 105n1, 108, 108n3, 109, 110, 123, 131, 137, 140n2, 141, 142, 151-152, 156, 161, 169 Hutcheson, F., 139, 142 Imada, S., 41 Jacobson, D., 173n10, 191 Jeannerod, M., 5n1, 38, 100 Johnson, E. J., 41 Joordens, St., 74n4 Jordan, A. H., 143, 192 Kagan, J., 28, 38
Kahane, G., 158, 161 Kahneman, D., 8, 38, 130n1, 142, 149, 150, 161, 172n9, 192 Kalogeras, J., 60, 123 Kane, R., 48, 53, 60 Kant, I., 8, 18, 23, 23n6, 37, 38, 39, 53, 67n2, 105n1, 131, 138, 160, 204 Kaube, H., 41 Keil, G., 48, 60 Keller, I., 56, 60, 65, 69, 87, 88, 91, 95n9, 101 Kennett, J., 18, 132, 133, 135, 141, 142 Kiehl, K. A., 18, 38 Kiesler, S., 24, 38 Kim, D. A., 141 Kiverstein, J., 36 Knobe, J., 146, 153, 161 Koch, Ch., 7, 38 Koehler, D. J., 8, 38 Koenigs, M., 13, 38, 129, 142 Kohlberg, L., 28, 39 Koller, S. H., 37 Kornhuber, H. H., 57, 60 Korsgaard, Ch., 133, 142 Kosso, P., 154, 161 Krayenbühl, H., 203 Krueger, F., 40, 142 Kumar, V., 159n7, 161 Kurlander, D., 40 Kutschera, F. von, 152, 159, 161 Làdavas, E., 36, 141 Lamm, C., 38 Lamme, V., 172, 173, 191 Lau, H. C., 5n1, 41, 73, 85, 86, 97n10, 102 LeDoux, J. E., 172, 191 Leibniz, G. W., 67n2, 105n1 Levy, N., 12, 14, 30, 39
Libet, B., 5-7, 12, 32, 33, 39, 45, 48, 55, 56-57, 58, 59, 60, 61, 63-96, 98, 98n12, 99, 100, 101-102, 103, 106, 116-119, 121, 123, 124, 173, 173n10, 191 Lipton, P., 159n6, 161 Locke, J., 67n2, 105n1 Lo Dico, G., 154n3, 161 Loewenstein, G., 156, 162 Lowenberg, K., 142 Lowery, L., 41 Lumer, Ch., 6, 8, 11, 12, 16, 23, 23n6, 25n7, 26, 32, 33, 39, 67n3, 97n11, 99, 102, 106, 106n1, 107n2, 116, 116n8, 118, 119, 123, 124, 173n10, 190, 203-204 Lynch, J. G. jr., 23, 39 Mahapatra, M., 41 Mandel, D. R., 5, 42 Manktelow, K., 8, 39 Mann, Th., 203 Mansbridge, J. J., 24, 39 Martin, P., 196n3 May, J., 146n1 McCann, H., 100 McDavis, K., 36 Melden, A. I., 9, 39 Mele, A. R., 39, 67n2, 79, 81n6, 87, 102, 105n1, 115, 124 Metzinger, Th., 50, 61 Mikhail, J., 12, 14, 15, 29n11, 40 Miller, J., 56, 61, 65, 74n4, 103 Min, B. K., 60, 101 Minsky, M., 50, 61 Moll, J., 16, 18, 19, 20, 40, 130n1, 142 Moore, G. E., 27 Morelli, S. A., 142 Mourao-Miranda, J., 40 Muccioli, M., 36, 141 Much, N. C., 41
Murphy, S., 172, 192 Narvaez, D., 173n10, 192 Neal, D. T., 5n2, 40 Neumann, O., 89, 102 Nichols, Sh., 18, 22n5, 26, 26n10, 29, 40, 41, 145, 146, 152, 153, 161 Niemi, P., 75, 76 Nisbett, R. E., 153, 153n3, 161 Nunner-Winkler, G., 22, 28, 40 Nystrom, L. E., 37, 41, 141, 142 Ockham, W. of, see: William of Ockham O’Doherty, J., 41 Oleson, K. C., 25, 26, 36 Oliveira-Souza, R. de, 16, 40, 142 Pardini, M., 40 Park, L., 41 Parks, C. D., 24, 40 Passingham, R. E., 5n1, 41, 60, 73, 85, 86, 97n10, 102, 123 Pauen, M., 6, 12, 32-33, 56, 60, 61, 92n8, 101, 102, 115, 116n8, 124, 204 Payne, J. W., 8, 41 Pearl, D. K., 61, 102, 191 Pellegrino, G. di, 36, 141 Pessoa, L., 40 Piaget, J., 28, 41 Pinillos, A. N., 146n1, 162 Pockett, S., 74n4, 79, 81, 82-83, 84-85, 91, 103, 116n7, 124 Pollard, B., 9, 41 Polonioli, A., 12, 41 Prinz, J. J., 22n5, 26, 26n10, 29, 41 Prinz, W., 89, 102 Purdy, S. C., 79, 81, 82-83, 84-85, 102 Putti, V., 203
Quervein, D. J.-F. de, 159, 160 Raab, D., 89 Rawls, J., 17, 27, 34, 40, 147-148, 148n2, 162 Rees, G., 60, 123 Reichlin, M., 12, 14, 30, 33-34, 140n3, 142, 204 Rieger, J. W., 60, 101 Rilling, J. K., 41 Rizzolatti, G., 19, 41 Roedder, E., 42 Roskies, A., 18, 41, 72, 75, 76, 84, 103, 137, 142 Ross, W., 27, 146 Roth, G., 6 Rozin, P., 27, 29, 41 Runggaldier, E., 110-111n5, 124 Sacks, O., 165, 169, 169n5, 170, 189, 192 Safire, W., 195, 195n1 Sakai, K., 60, 123 Sanfey, A. G., 19, 41 Sauer, H., 159n7, 162 Saver, J. L., 129, 143 Schmidt, H., 38 Schmidt, M. C., 202 Schnall, S., 130, 143, 167, 171, 192 Schopenhauer, A., 22, 23 Schurz, G., 152, 162 Schwintowski, H.-P., 38 Schwitzgebel, E., 168, 170n6, 192 Searle, J. R., 110-111n5, 124, 191 Seebaß, G., 48, 61 Seeger, M., 146n1, 162 Seuferling, G., 191 Seymour, B., 41 Shackel, N., 158, 161 Shaftesbury, Third Earl of (= Anthony Ashley-Cooper), 139, 143
Sheeran, P., 115, 123 Shieber, J., 146n1, 162 Shweder, R. A., 27, 41 Sidgwick, H., 146 Sie, M., 34-35, 165n1, 166, 167n3, 175, 179n13, 185, 192, 204-205 Singer, P., 13-14, 16, 41, 148n2, 162 Singer, T., 19, 38, 41 Singer, W., 6 Sinigaglia, C., 19, 41 Sinnott-Armstrong, W., 14, 18, 28, 41, 42 Small, D. A., 156, 162 Sommerville, R. B., 37, 141 Soon, Ch. S., 66, 79, 79n5, 83, 103, 173, 192 Sosa, E., 146n1, 162 Spence, S. A., 5n1, 42 Sproull, L., 38 Stamm, J. S., 79 Stich, St., 25, 32, 42 Strawson, G., 53, 61 Strawson, P. F., 176n11, 192 Strongman, J. A., 191 Sunstein, C. R., 172n9, 192 Susskind, J. M., 141 Sutherland, J. K. B., 101, 102 Thaler, R. H., 172n9, 192 Thomas Aquinas, 67n2, 105n1 Thompson, E. R., 191 Thomson, J. J., 128, 143 Tobia, K., 146n1, 162 Tovar Moll, F., 40 Tranel, D., 38, 141, 142 Trevena, J. A., 56, 61, 65, 74n4, 103 Trope, Y., 166n2, 191 Turiel, E., 28, 42 Tversky, A., 8, 149, 150, 161, 172n9, 192
Uleman, J. S., 37, 191 Underwood, G., 75, 76 van de Grind, W., see: Grind, W. van de Van Inwagen, P., 48, 52n1, 61 Vartanian, O., 5n1, 42 Vierkant, T., 36 Villringer, A., 38 Voorst Vader-Bours, N. van, 190 Vu, A. D., 24, 40 Wallbom, M., 191 Ward, J., 5n1, 42 Wartenburger, I., 38 Wasserman, G. S., 76, 77 Wassermann, J., 203 Waters, K., 38 Webster, N., 109, 123
Wegner, D. M., 10, 11-12, 11n4, 32, 33, 42, 56, 57-59, 61, 103, 105-124, 166, 192 Weinberg, J., 146n1, 161, 162 Wheatley, Th., 150, 162 Whitney, H., 191 Wicklund, R. A., 179, 192 Widerker, D., 48, 61 William of Ockham, 67n2 Williams, B., 18, 42 Wilson, T. D., 153, 153n3, 161, 165, 192 Wood, W., 40 Wright, E. W., 61, 102, 191 Wu, M., 40 Young, L., 36, 38, 42, 142 Young, N., 165 Zahn, R., 40, 142