Political Self-Deception 9781108529242, 9781108423724


English Pages [274] Year 2018


Table of contents :
Cover
Half-title
Title page
Copyright information
Dedication
Epigraph
Contents
Acknowledgments
Introduction
Why Political Self-Deception?
A Clear Notion of SD
Can Moral Responsibility Be Ascribed to Self-Deceivers?
SD in Politics
SD in Context
Part I The Philosophy of Self-Deception
Chapter 1 Investigating Self-Deception
1 Preliminary
2 A Critical Analysis of the Philosophy of SD
3 The Invisible Hand Model
4 Concluding Remarks
Chapter 2 The Attribution of Responsibility to Self-Deceivers
1 The Moral Question and Responsibility Attribution
2 Bypassing Control
3 SD Prevention
4 Concluding Remarks
Part II Self-Deception in Politics
Chapter 3 The Self-Deception of Political Leaders, Officials, and Governments
1 Preliminary
2 Objections to Political SD
3 The Make-Believe Effect of Leaders' SD and the Realist Objection
4 A Typology of Political Self-Deception
5 Collective SD: How It Works
6 What Is to Be Done?
7 Concluding Remarks
Chapter 4 Kennedy and Cuba
1 How Could I Have Been So Stupid?
2 Collective SD
3 Consent and Criticism
4 Sour Grapes
5 The Missile Crisis
6 A Success Story
7 The Secret Deal and the President's Lie
Chapter 5 Johnson and the Gulf of Tonkin Resolution
1 Introduction
2 Plotting the Escalation (1963-1964)
3 August 1964, the Gulf of Tonkin
4 The Response and the Interpretations
5 Concluding Remarks
Chapter 6 Bush and the Weapons of Mass Destruction
1 Smoking Guns, Mushroom Clouds, and Smokescreens
2 The Gap between Selling and Background Reasons
3 The Real Motive(s)
4 The Public Rationale for War
5 Aluminum Tubes, Yellowcake, and Curveball
6 The People's SD
Conclusion
Explanatory and Normative Reasons in Favor of Self-Deception
References
Index

POLITICAL SELF-DECEPTION

Self-deception, that is, the distortion of reality against the available evidence and according to one’s wishes, represents a distinctive component in the wide realm of political deception. It has received relatively little attention but is well worth examining for its explanatory and normative dimensions. In this book Elisabetta Galeotti shows how self-deception can explain political occurrences where public deception intertwines with political failure – from bad decisions based on false beliefs, through the self-serving nature of those beliefs, to the deception of the public as a by-product of a leader’s self-deception. Her discussion uses close analysis of three well-known case studies: John F. Kennedy and the Cuba Crisis, Lyndon B. Johnson and the Gulf of Tonkin Resolution, and George W. Bush and the Weapons of Mass Destruction. Her book will appeal to a range of readers in political philosophy, political theory, and international relations. Anna Elisabetta Galeotti teaches political philosophy at the University of Piemonte Orientale. Her publications include Toleration as Recognition (Cambridge, ).

POLITICAL SELF-DECEPTION ANNA ELISABETTA GALEOTTI University of Piemonte Orientale

University Printing House, Cambridge, United Kingdom
One Liberty Plaza, New York, USA
Williamstown Road, Port Melbourne, Australia
Splendor Forum, Jasola District Centre, New Delhi, India
Anson Road, Singapore

Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/

© Anna Elisabetta Galeotti

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published. Printed and bound in Great Britain by Clays Ltd, Elcograf S.p.A.

A catalogue record for this publication is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Galeotti, Anna E., author. Political self-deception / Anna Elisabetta Galeotti, Universita del Piemonte Orientale. New York: Cambridge University Press. Includes bibliographical references. Political science–Philosophy. | Self-deception–Philosophy. | Self-deception–Political aspects.
LC record available at https://lccn.loc.gov/

Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To Chiara, Enrico, and Federica

It is sometimes essential for a state, even for a democratic state, to undertake clandestine operations, as I learned in OSS during the Second World War. But when such operations are undertaken, it is important never to forget that the relationship between an intelligence agency and its instruments tends to be a corrupting one. Arthur M. Schlesinger Jr., A Thousand Days: J. F. Kennedy at the White House


Acknowledgments

This book is the outcome of a project that started more than ten years ago, in my summer browsing at the University Library of Cambridge University. During this time I had the opportunity to spend one academic year at Harvard University as a fellow of the Edmond J. Safra Center for Ethics, a semester at the department of philosophy of Boston College as a visiting scholar, a semester as a fellow at the Italian Academy of Columbia University, one summer as a visitor at the Institute for Advanced Study in Princeton, and many summers as a visiting scholar at Cambridge University. I am indebted to all of these institutions for providing me with ideal contexts to carry out this work, and I am grateful to my department, the humanities department of the University of Piemonte Orientale, for granting me leaves of absence for my research. In all of these places, I have profited from discussion with many colleagues and from their insights. I must first thank Dennis Thompson and Arthur Applbaum, not only for their comments but also for their encouragement at an early stage of my work, and with them the whole group of fellows of the – year at the Safra Center. I also want to thank Akeel Bilgrami and Jon Elster, who discussed my paper at the Italian Academy, together with Achille Varzi and all the fellows of Spring ; and I owe a special thank you to William Caferro, who read and commented on a chapter of my work in depth. I have presented different parts of this book in various seminars and conferences over the years: I would like to thank Andrew Aisenberg, Dion Scott-Kakures, Sue James, Carla Bagnoli, Patrizia Petrini, Laurent Jaffro, Emanuela Ceva, Ian Carter, Valeria Ottonelli, Michele Bocchiola, Tito Magri, Mario De Caro, Gianfranco Pellegrino, and Sebastiano Maffettone. Michael Walzer has not only believed in this project from the start and been supportive through the years but also has read and commented on a substantial part of the book.
Salvatore Veca discussed my ideas at different stages of the work and invariably gave me useful hints. Finally, I am very grateful to the two anonymous reviewers for Cambridge University Press, whose painstaking criticisms and thoughtful comments have greatly improved the final version. Chiara Testino, Enrico Biale, and Federica Liveriero have followed all the phases of my writing, discussed my project on a daily basis, borne with my ups and downs, and helped me with teaching, contributing to making our common work fun. To them this book is dedicated.

Introduction

Why Political Self-Deception? The aim of this work is to contribute to the study and, possibly, the treatment of political deception. More precisely, it intends to focus on a specific mode of political deception that is usually disregarded in political analysis, namely, self-deception (SD). Although it is a contested concept within philosophy and psychology, let us say, as a preliminary definition, that SD is the distortion of reality against the available evidence and according to one’s wishes. The motivated distortion of data produced by SD obviously has significant consequences on the decision-making processes of political leaders, politicians, and government officials. Political decisions and policies induced by SD then lead to the deception of the public. Political deception is generally acknowledged as a relevant, if problematic, issue for democratic politics. The deception of the public is constantly denounced by media and by the press. It is mostly considered as intentionally achieved by governments and politicians, either by active lying – that is, by using false statements to mislead – or by intentional omission – that is, by withholding relevant information in order to 



When I talk of political deception, I refer to the public being deceived about something politically relevant. Deception is a success concept in that it means people have been made or have come to believe something that is false. This outcome is compatible with different modes of inducing the deception: (a) by intentional lying or intentional misleading; (b) by misperceptions and errors; (c) by SD; or (d) by having been unintentionally misled as the consequence of someone else’s mistakes or SD. See, for example, J. E. Mahon, “The Definition of Lying and Deception,” in The Stanford Encyclopedia of Philosophy (Spring ), Edward N. Zalta (ed.), http://plato.stanford.edu/archives/spr/entries/lying-definition/. The problem of political deception is widely dealt with in the literature. See, for example, N. Chomsky, Necessary Illusions: Thought Control in Democratic Societies, Pluto Press, London, ; P. Rynard and D. P. Shugarman, eds., Cruelty and Deception: The Controversy over Dirty Hands in Politics, Broadview Press, Peterborough, ON, ; L. Cliffe, M. Ramsay, and D. Bartlett, eds., The Politics of Lying, Macmillan, London, .





mislead, or by secrecy, propaganda, spin, and “bullshit,” where the nontruth-oriented character of statements is quite explicit. Both explanatory accounts and normative assessments of this widespread presence of deception have been proposed. They range from the realist position, holding that deception, secrecy, and manipulation are intrinsic to politics, to the “dirty hands” position, justifying certain political lies under well-defined circumstances, to the deontological stand denouncing political deception as a serious pathology of democratic systems. Some recent works have more specifically focused on unpacking political deception, drawing distinctions among different kinds of lying – from above and from below; addressing either international or national audiences; for self-serving and for strategic reasons. These works aim at understanding political deception without viewing it through the lens of moral outrage, yet, despite their more analytical and dispassionate approach, none entertains the possibility that political deception might partly be induced unintentionally by SD. Alternatively, it is sometimes conceded that the deception of the public is the by-product of government officials’ (honest) mistakes. In other words, political theory so far has considered political deception as induced either by lies, manipulation, and willful misinformation, or as the unintended consequence of illusions and misperceptions. The former calls for moral outrage and public exposure, the latter for cognitive analysis. But what if the false belief was candidly believed and, at the same time, the epistemic process of belief-formation was in the grip of an emotionally loaded desire switching on cognitive biases? Neither moral outrage nor purely cognitive analysis is of much help in this case, although political theory has basically oscillated from the first to the second. 
This work aims at sidestepping the fork between pure dishonesty and cynicism, on the one hand, and honest mistakes, on the other. It aims to show that, more often than not, the misperception of reality, under various sources of psychological and emotional pressure, is driven by the desire to    



See H. Frankfurt, On Bullshit, Princeton University Press, Princeton, . For example, M. Edelman, The Politics of Misinformation, Cambridge University Press, Cambridge, . For example, M. Walzer “Political Action: The Problem of Dirty Hands,” Philosophy and Public Affairs, , , pp. –. See M. Ramsay “Justification for Lying in Politics” (pp. –); “Democratic Dirty Hands” (pp. –); “Explanations: The Political Context” (pp. –) in L. Cliffe, M. Ramsay, and D. Bartlett, eds. The Politics of Lying. M. Jay, The Virtue of Mendacity: On Lying in Politics, University of Virginia Press, Richmond, ; David Runciman, Political Hypocrisy: Three Masks of Power from Hobbes to Orwell, Princeton University Press, Princeton, ; John Mearsheimer, Why Leaders Lie: The Truth about Lying in International Politics, Oxford University Press, Oxford, .

believe what one wishes to be the case, even if a dispassionate review of available data would lead any impartial observer to the opposite conclusion. Such a phenomenon is quite common, and evidence from experimental psychology seems to confirm that the common experience of seeing someone believing something against the evidence, but in accordance with his desires, has a scientific foundation. Thus, there is no reason to believe that politics may be spared; moreover, SD is actually the best way to deceive others, for the more sincerely convinced a politician is, the more persuasive she appears to the public. Lastly, attributing the whole of political deception either to lying or to mistakes runs the risk of missing something important for explanatory as well as normative purposes. I shall argue that SD represents a distinctive component in the wide realm of political deception, a component much unexplored, and yet worth examining in depth for its explanatory and normative bearings. Many students of politics have hinted that SD rather than straightforward deception might have been the case in the conduct and decision of political leaders; Hannah Arendt figures prominently among them in her renowned comment on The Pentagon Papers. Few, however, have thoroughly pursued the hypothesis of political SD and analyzed it properly. Moreover, the casual references to SD rely on the commonsense idea of SD, although its meaning is highly controversial and much debated in conceptual analysis. Therefore, SD often provides the umbrella for lumping together a variety of unjustified beliefs, such as myths and ideology. Obviously, SD cannot play any significant role in politics if its nature and meaning are not conceptually clear and distinct from other kinds of unwarranted beliefs and convictions. There are both explanatory and normative reasons for considering the role of political SD seriously. 
The proper analysis of SD not only adds a missing piece to our knowledge of political deception but also provides a vantage point from which to explain political occurrences where public deception intertwines with political failure. It is often the case that the deception of the public goes hand in hand with faulty decisions. The 

 

See, for example, J. Lockard and D. Paulhus, eds., SD: An Adaptive Mechanism, Prentice Hall, Englewood Cliffs, ; V. S. Ramachandran, “The Evolutionary Biology of SD, Laughter, Dreaming and Depression: Some Clues from Anosognosia,” Medical Hypotheses,  (): –. Hannah Arendt, “Lying in Politics: Reflections on the Pentagon Papers,” in Crises of the Republic, Harcourt Brace Jovanovich, San Diego, . Recent exceptions are Stephen Holmes in The Matador’s Cape: America’s Reckless Response to Terror, Cambridge University Press, Cambridge, , and Robert Jervis, “Understanding Beliefs and Threat Inflation,” in A. Trevor Thrall and J. K. Cramer, eds., American Foreign Policy and the Politics of Fear: Threat Inflation since /, Routledge, London, , pp. –.



explanation of this fact divides current political studies into two groups. A first group discounts the deception as an epiphenomenon of a slippery slope of mistakes and unintended consequences, thus exonerating politicians from moral responsibility. This view finds support in the strand of political psychology according to which misperceptions are simply the outcome of “cold biases.” When applied to actual political failures, this view turns out to be too one-sided, and burdened with a clear exculpatory undertone, for the motivated and self-serving nature of such mistakes is left unaccounted for, and the associated responsibility is diluted into mere epistemic inaccuracy. A second group focuses solely on the deceptive intent of politicians. But then, this conspiratorial position must also explain why such scheming liars plan so poorly, ending up exposed. Actually, there is no apparent causal connection between the government officials’ deception of the public and the subsequent failure of the policy related to the public lie, whereas SD enables the analyst to account for (a) why the decision was bad, given that it was grounded on self-deceptive, hence false, beliefs; (b) why the beliefs were not just false but self-serving, as the result of the motivated processing of data; and (c) why the people were deceived, as the by-product of the leaders’ SD. In addition to providing an explanation of the conjunction of bad planning, driven by self-serving motives, and public deception, the focus on SD will also imply an accurate analysis of the circumstances that set the process in motion. This is the junction where normative reasons for considering SD become apparent. Mistakes and lies are detected in hindsight, and lies are usually detected only if unsuccessful.
By contrast, a proper understanding of how SD works in crucial decision making opens up the possibility of identifying favorable circumstances for its taking place ex ante; hence, the possibility of devising preventive measures. SD is based on the cognitive distortion of data under the influence of some wish and not on bad will; yet SD is not the usual shortcut to believing what one wishes, unconstrained by evidence and epistemic processing. Only under special circumstances does motivation take the lead and bias the process of belief formation so as to produce the deceptive belief against the available evidence. Whereas bad will and mistakes can be neither predicted nor prevented, the circumstances conducive to SD can be detected. In principle, then, preventive strategies can be worked out at both the individual and the institutional level. As we shall see, SD is not under the control of the agent, but if it is acknowledged as an actual risk in policy-making, and if favorable circumstances are understood, we can at least count on a good starting point for working on prophylactic measures: the assumption that no one likes to have duped him- or herself. In general, agents are not aware of entering a SD process, and external observers are the ones who usually detect others’ SD. Putting together (a) the typical circumstances in which SD may take place and (b) the ability of external observers to identify other people’s SD, a strategy of precommitment can be devised. Precommitment is a precautionary strategy, aimed at creating constraints to prevent people from falling prey to SD. If, ex hypothesi, independent observers have the authority to act as overseers of certain governmental decisions when the typical circumstances for SD are present, decision makers can become aware of the risk of certain chains of reasoning being biased, and might be offered the opportunity to reconsider their decisions. I am well aware of the many pragmatic difficulties in translating this idea into a viable institutional option. But the difficulties are no reason to dismiss the idea of preventive measures altogether.

A Clear Notion of SD In order to appreciate the political role of SD, a preliminary analytical and critical understanding of its nature and working is required. As I noted earlier, the occasional hints at SD in political studies are too vague to be really useful, and the phenomenon is not really distinguished from a vast array of unwarranted beliefs and convictions that play an important part in political decisions and policies. Some of them are actually mistakes produced by cold biases, that is, by systematic cognitive distortions of our reasoning well described by cognitive psychologists. Others are motivated instead, induced by the effect on cognition of motivational states such as wishful thinking, illusions, and positive thinking. Then there are ideological convictions and political myths, projecting their fixed lens on data processing, that may or may not either cause or turn into self-deceptive beliefs. Political illusions, ideologies, and political myths, although not necessarily negative in decision making, are all contiguous with SD insofar as they imply a prejudiced consideration of data and induce unwarranted or not fully warranted beliefs. Only self-deceptive beliefs are, however, false by definition, being counterevidential, prompted by an emotional reaction to data that contradicts one’s desires. If this is the specific nature of SD, as I shall show in the first part of this work, then self-deceptive beliefs are distinctly dangerous, for no false belief can ground a wise decision. It is therefore crucial to single out SD among the vast array of unjustified beliefs, motivated and unmotivated alike.



SD has been an issue in philosophy, especially in the Anglo-American tradition, for the past forty years, and has more lately become a subject for experimental psychology and neuroscience. In the wide discussion on SD, there is no agreement on a standard notion or on its explanation, so much so that some scholars doubt that anything like SD may genuinely be the case. Skepticism about the phenomenon is voiced both by philosophers and by cognitive psychologists, albeit for different reasons. Philosophical skepticism is linked to the traditional view of SD as lying to oneself, which paves the way to paradoxes, for how can one be both the perpetrator and the victim of a lie? As a reaction to this paradoxical view, some scholars maintain that what has been described as SD is actually only a pretense to keep up one’s image in front of other people. But given that there is no need to pattern SD on the other-deception model, or to end up in paradoxes, as the current discussion has shown, the philosophical skeptic has lost her footing for doubting SD as a genuine phenomenon. On the psychological front, skepticism about SD is voiced on the grounds that a more economical explanation of biased beliefs is available. According to this perspective, the presumption of any motivational influence on cognition is unnecessary, given that cold biases can do the job more directly. This reasoning, however, clashes with many studies in experimental psychology showing that motivation often affects our perception, information gathering, and information processing in many different experimental settings. Given the rich phenomenology of SD and the growing experimental data on motivational interference on cognition, the reasons for doubting SD as a distinct phenomenon are very thin indeed. Even conceding that SD is a genuine phenomenon, however, a plausible and persuasive account of SD is necessary before it is exported into politics.
In this respect, philosophical discussion can serve three purposes in the economy of the present work. The first concerns conceptual clarity. This work’s general aim is to single out the distinctive role of SD in the realm of political deception. If I employ an ambiguous notion, mainly relying on common sense, my argument is weakened thereby and exposed to all kinds of objections prompted by a more analytical and critical understanding of the phenomenon. There is no ready-to-use notion of SD; thus, the discussion and clarification of a viable conception 

M. Haight, A Study of SD, Harvester Press, Brighton, ; R. Audi, “SD, Action and the Will,” Erkenntnis, , : –; E. Funkhouser, “Do the Self-Deceived Get What They Want?,” Pacific Philosophical Quarterly, , : –.

of SD is not only intrinsically important but also constitutes a preliminary component of my project on political SD. The second purpose that a critical analysis of SD is meant to serve relates to the issue of whether agents can be held responsible for their SD. I take it that a common reason for downplaying the role of SD in social and political life is a concern about responsibility. It may seem that introducing SD into the realm of political deception wholly or partially relieves politicians, government officials, and leaders who deceive the public of responsibility. If the deception of the public is explained by SD, instead of by a lie or by an intentional misleading by omission, then it is unclear whether moral blame and political outrage apply properly and fully in such a case, for the self-deceiver was neither aware of deceiving others as a consequence of his SD nor intending to deceive anyone else. The attribution of moral responsibility to self-deceivers is controversial: at most, it would seem that the responsibility would be reduced to negligence. But, if there were no possibility of holding people responsible for their SD, then SD would end up conflated with mistakes in the political domain. The fact that some mistakes are motivated does not significantly change the consideration that political analysts bestow on them. For my argument to stand, the issue of responsibility is thus crucial. This question, however, cannot be settled independently of the account of SD adopted: whether SD is seen as an intentional doing of the agent, albeit without full awareness and consciousness, or as a false belief causally induced by a motivational state that triggers cognitive biases, has an obviously different bearing on the issue of responsibility. Finally, the third purpose of philosophical and psychological analysis is to single out circumstances conducive to SD.
As mentioned earlier, from a normative viewpoint, the advantage of SD lies in the hypothetical possibility of its prevention, and the latter, in turn, depends on the possibility of detecting when SD is likely to take place, which implies understanding its typical circumstances. The philosophy of SD has mainly disregarded the issue of circumstances, focusing instead on the definition of a puzzling concept and of its necessary and sufficient conditions. In other words, philosophical analysis has been concerned with defining criteria for identifying SD as a case of motivated irrationality. An account of SD that dispenses with its circumstances, however, is incomplete, for it cannot properly explain how SD strikes only selectively. Thus, my aim is to expand the scope of philosophical analysis so as to reach the circumstances, hence preparing the ground for prophylactic measures.



The first chapter of this book will be concerned with a critical analysis of the philosophical debate on SD. Among the many controversial issues in this debate, I shall consider the one concerning whether SD is viewed as an intentional – although possibly unconscious – doing of the subject, or as a causal happening to the subject. The first, intentional view dominated the discussion until the nineties, whereas the second, causal one is prevalent now. This issue has significant implications for the possibility of linking responsibility to SD; hence, settling the question of intentionality is crucial for the possibility of applying SD to politics. I shall therefore provide a critical presentation of the discussion on SD, both in its historical development and at its present stage, from the standpoint of the intentionality/causality debate. Intentionalists have so far failed to show how and why any agent would bring herself to believe what is knowingly false but corresponds to a wish of hers in a nonparadoxical way. On the other side, the supporters of the causal/motivational view claim to have provided a simple, nonparadoxical, unified account of SD by showing that SD is the effect of a motivational state that causally triggers cognitive biases directly producing the false belief. Although I share the causalists’ criticisms, I nevertheless think that the intentional account embodies important intuitions that appear to be lost in the rival view. Briefly, the intentional account has had no difficulty in rendering SD as a specific form of motivated irrationality, well set apart from wishful thinking, positive thinking, or faith.
Under the intentional description, the self-deceiver is not simply someone who believes what she wishes, but someone who has brought herself to believe that P, which she (unconsciously) knows to be false, for the true Non-P goes against her wish. This paradoxical description captures a specific aspect of SD in the range of motivated irrationality, namely, the role played by the appraisal of the contrary evidence in allowing SD to start. This aspect is either overlooked or much diluted in the causal account, and the result is a loss of the specificity of SD, which is

Anna Elisabetta Galeotti, “SD: Intentional Plan or Mental Event?,” Humana Mente, , , pp. –. In the present debate, see, for example, among intentionalists, J. L. Bermudez, “SD, Intentions and Contradictory Beliefs,” Analysis, , , pp. –. The causal view is best represented by Alfred Mele, SD Unmasked, Princeton University Press, Princeton, . The issue of specificity is well addressed in the SD definition by D. Davidson, “Deception and Division,” in E. LePore and B. McLaughlin, eds., Actions and Events: Perspectives on the Philosophy of Donald Davidson, Basil Blackwell, Oxford, , pp. –. Davidson’s definition, however, gives rise to paradoxes, which need to be explained away by the problematic mind-partition. See the next chapter.

not well set apart from wishful thinking, on the one hand, or delusion, on the other. Similarly, there seems to be no problem for the intentional account in explaining why people confronting negative evidence do not always end up being self-deceived, given that SD is a doing of the subject. If, by contrast, SD is caused by a motivational state taking a causal grip on mental tropisms, why does SD work selectively instead of being the normal response to wish-frustration? In sum, the causal account, for all its simplicity, does not provide convincing responses to the specificity and selectivity problems of SD. Therefore, in the critical reconstruction of the debate, I shall argue for a serious revision of the causal account enabling it to respond to these two major weaknesses, while subscribing to the nonparadoxical view of SD. I shall then propose my solution to SD’s puzzles in the form of a view of the phenomenon that is unintentional as to the outcome but acknowledges the intentionality of the process. In the realm of social explanations, the explanatory model accounting for unintended consequences of intentional doings is the “invisible hand”; accordingly, I shall argue for an invisible hand account of SD, which will enable me to respond to the specificity issue while keeping a nonparadoxical view of the phenomenon. Similarly, the selectivity issue will be dealt with within the invisible hand model, but, in this case, the analysis shall be supplemented by a reflection on the circumstances conducive to SD. This will lead me to unpack the motivational state at the origin of the SD process. Finally, the analysis of the motivational state will help me sort out the issue of a unitary explanation of “straight” and “twisted” cases in the framework of the invisible hand model.
The straight cases are those where there is a match between the wish and the false belief resulting from SD, as in the case of a husband who stubbornly refuses to believe his wife unfaithful despite evidence that would induce a nonmotivated observer to conclude the opposite. The twisted cases are those where the false belief runs counter to the agent’s desire, as in the case of a jealous husband who becomes convinced of his wife’s infidelity despite the lack of evidence. The intentionalists tend to deny that twisted cases are cases of SD. By contrast, the causalists claim to have provided a unitary explanation of straight and twisted SD, which would constitute a clear advantage over alternative explanations. They hold that in both cases the self-deceptive belief is similarly produced by the  

See W. J. Talbott, “Intentional SD in a Single, Coherent Self,” Philosophy and Phenomenological Research, , , pp. –, and Bermúdez, “SD, Intentions and Contradictory Beliefs.” See Mele, SD Unmasked, pp. ff.



Political Self-Deception

causal working of a motivational state on cognitive biases, and whether the result is favorable or unfavorable to the subject’s wish is irrelevant from the point of view of its production. In this perspective, the soothing effect of the false belief, typical of the more common straight cases, has nothing to do with the subject’s desire to believe what she wishes but is simply a causal result, contingently going in one or another direction. This explanation, if correct, may undermine my account. I shall argue, however, that both cases are better explained within the invisible hand model as the outcomes of different strategies of reasoning, both affected by cognitive biases, chosen by subjects differently influenced by desires and emotions. A proper explanation of twisted cases as cases of SD is especially relevant in the political realm, for in this domain twisted cases often occur as the outcome of a worst-case-scenario mode of reasoning. In situations of great uncertainty and risk, it is common to consider the worst case as a precautionary move for working out potential responses. Such a move would not be intrinsically epistemically faulty, but it is usually exposed to the probability-neglect bias, so that a quite improbable event, which represents the most feared outcome for the decision-maker, comes to be considered an actual probability. Given that the potential responses are necessarily quite drastic and harmful, the self-deceptive belief that worst-case scenarios are real is very dangerous. Think, for example, of the reasoning behind assessments of Iran’s potential nuclear capacity. Lacking reliable and accurate information, and given the associated risk, many politicians and analysts thought it safer to treat that capacity as a real possibility, and in fact came to believe that it was one, instead of carefully weighing the actual probability of such an event. 
In the grip of such a belief, a preventive attack on Iran might look justified in view of its future nuclear potential. If, however, such a belief were a case of twisted SD, the justification of a preventive attack would fall apart, and the harmful and far-reaching consequences would be the responsibility of the self-deceived decision makers. In sum, the rather technical analysis of the three issues of specificity, selectivity, and the unitary explanation of both straight and twisted SD is relevant for this book’s purpose of applying SD to politics. A specific notion of SD is in fact required to detect its specimens precisely in the fog of political deception, by setting it apart from illusions, political myths, ideological assumptions, and cold mistakes. The solution to the selectivity issue, furthermore, will help to define the circumstances in which SD is likely to take place, and will make it possible to foresee SD and, hopefully, to devise preventive measures. Finally, the explanation of twisted cases, along with the more common straight cases, helps
in understanding a relatively unusual form of political SD linked to a worst-case scenario, which is definitely very dangerous in the context of international relations. All three issues are aptly addressed within the invisible hand model, for their solution requires that self-deceivers be conceived as intentionally entering the reasoning that will induce SD as an unintended outcome, via the causal effect of cognitive biases.

Can Moral Responsibility Be Ascribed to Self-Deceivers?

Once I have outlined the invisible hand model, I shall turn to the moral dimension of SD (Chapter 2). For the purpose of this work, the moral implications of SD represent a crucial element for assessing its manifestations in social interaction and in politics. In the literature on SD, its moral dimension has been addressed in moral psychology, but it seems mostly to have withered away in recent studies. This shift of interest is mainly connected with the prevalent causal account of SD. When the intentional view of SD, under the description of lying to oneself, dominated the debate, an intrinsic component of that account was the consideration of SD as a moral failure for which responsibility, and the related blame, could swiftly be ascribed to the self-deceiver. The now prevalent causal account instead leaves little room for the moral dimension of SD, although a few recent works have claimed that responsibility can properly be attributed within the causal view as well. This point, however, is highly controversial. In this respect, the invisible hand account enjoys a crucial advantage over rival views, given that, in my account, SD is not simply a causal happening, a mental event, but rather something that is done by the agent, although not in full awareness. Hence, the agent is active in the process, although not lucid and perfectly rational as assumed by the rational deliberation model. On this basis, I can draw an argument showing that SD cannot exempt any agent from being held responsible for her false belief. Responsibility is traditionally attributed if the agent has control over her actions and attitudes, but the control condition is not met in the case of SD. I address this issue with the help of recent views on responsibility that

 

See Mele, SD Unmasked, p. , and A. Barnes, Seeing through SD, Cambridge University Press, Cambridge, , p. . Neil Levy, “SD and Moral Responsibility,” Ratio, , , pp. –.




have challenged the traditional view of control as the key attribution factor. In line with this perspective, I provide an account of how self-deceivers can be held responsible for their state even if they did not achieve it intentionally and knowingly. In my reasoning, the attribution of responsibility is linked to the presence of agency in performing the act, which implies the possibility of future prevention by learning the lesson ex post and coming to see SD as bad and wrong. Even if SD escapes the agent’s direct control, there are suggestions within the moral tradition for dealing with desirable or undesirable states that cannot be attained directly by the agent. The two main indirect moral strategies are character-building and precommitment. The latter is especially promising for SD and requires creating a number of constraints at time t¹, in a condition of cognitive lucidity, that will prevent the agent at time t², under emotional pressure, from falling prey to SD. For example, an agent may entrust a friend with the authority to act as a referee when typical circumstances for SD arise. Precommitment can, in principle, be worked out at the institutional level as well, and may perhaps provide a preventive measure countering political SD.

SD in Politics

Once equipped with a clear notion of SD and with the arguments about its moral implications, the incurrence of responsibility, and possible prevention, I am prepared to move into the realm of politics and to argue for the distinctive role played by SD in the wide area of political deception. Here a first step is to set SD apart from lies and straight deception, on the one hand, and from simple mistakes, on the other, as well as from the vast array of unjustified beliefs and convictions orienting political decisions. Actually, in the dynamics of policy making, SD is usually intertwined with them all, and this fact makes it all the more difficult to analyze the process properly, and all the more necessary to use a well-defined notion of SD. My intent is neither to substitute SD for straightforward deception, nor to consider every mistake a case of SD, nor to conflate SD with ideological convictions. Granted that all these phenomena are present in policy-making, my aim is to single out the distinctive role of SD and show how SD either may

G. Sher, Who Knew? Responsibility without Consciousness, Oxford University Press, New York, . The champion of character-building through discipline is Aristotle in his Nicomachean Ethics, often revisited, for example, by G. Ainslie, Breakdown of Will, Cambridge University Press, Cambridge, . Precommitment is instead analyzed and proposed as a strategy to deal with human irrational lapses by Jon Elster in Ulysses and the Sirens, Cambridge University Press, Cambridge, .




support or be supported by straightforward deception. The focus of the second part of my work will therefore be on analyzing this intertwining between deception and SD: I shall try to disentangle the two and to show both the explanatory and the normative advantages of acknowledging the specific, albeit limited, role of SD. In Chapter 3, I argue in favor of taking SD seriously in politics. Several considerations suggest that SD is not only present but also plays an important role in the realm of democratic politics. Among them, SD may present both an explanatory and a normative advantage for making sense of certain political decisions that bring about bad outcomes. I am thinking here especially of foreign policy decisions over military interventions that have ended in failure. Very often, in such cases, the public, and possibly part of the political establishment, such as Parliament or Congress, have been deceived concerning relevant facts grounding the decision, and the public deception is usually attributed to intentional misleading. Yet the lying thesis has no explanatory force concerning the failure, which must have been caused by some mistake in addition to the public deception. If, however, the deception of the public is a by-product of leaders’ and officials’ SD, then, by the same token, SD also explains why the decision has produced a failure: policy-making based on distorted data can hardly be successful. In addition, SD does not constitute an excuse for leaders and officials, for, as we have seen, self-deceivers are justly held morally as well as politically responsible for their SD; furthermore, political SD is always intertwined with straightforward deception. In fact, the typology of political SD that concludes the chapter is based on the varying link that SD establishes with deception of the public. 
Three main types can be singled out, according to whether (a) the deception of the public is a by-product of SD; (b) SD is ancillary to public deception; and (c) SD provides the justification for political lies. A final caveat: this book is mainly concerned with applying SD to decision-making processes, especially in the foreign policy area, where the effects of decisions affected by SD are dramatic and far-reaching. There is, however, another important area of SD application in democratic politics, for the SD of leaders, cabinets, and government officials is mirrored by SD of the people. That people can distort reality in order to find it acceptable is a phenomenon often acknowledged in social studies. The reconstruction of the people’s SD is crucial for the normative theory of democracy. That people can be self-deceived about the political reality, the available options, and the government’s conduct poses a challenge to the basic tenet of normative democratic theory, namely, that citizens




should be in control of the democratic process. The implications of SD for the democratic process will, however, not be pursued in this book, notwithstanding its importance. It may be the subject of another project, but this work is confined to analyzing the working of SD in crucial instances of governmental decision making and to drawing the normative implications related both to responsibility ascription and to devising prophylactic measures.

SD in Context

The remaining chapters of the book will be devoted to illustrating the three main types of political SD by analyzing a number of cases of American foreign policy as examples of SD in the context of decision making. Why have I chosen American foreign policy as a suitable context for illustrating the working of SD? Generally speaking, SD is likely to affect decision making when there are momentous decisions to be made under pressure of time, given the conflicting expectations of parties, constituencies, and national and international audiences, and in view of the risks in terms of reelection and public image and the perceived threat to national security. Such circumstances are likely to put a special emotional pressure on decision makers, and hence to affect their reasoning and processing of data, which are often caught in the grip of personal wishes. Those circumstances are typically present in international crises, when decisions concern military interventions, reprisals for attacks, responses to terrorist actions, and the like: in a word, crucial decisions with far-reaching effects on nationals and foreign citizens, as well as on the leader’s career and public stature. SD may also occur in decision making concerning domestic policies, but similar circumstances of time pressure and a clear perception of the decision as momentous and exceptional must obtain for SD to be likely to start. In this respect, foreign policy is a privileged context in which to look for SD episodes. Given the international role played by the United States following World War II, first in the bipolar system and then in the globalized world after the fall of the Berlin Wall, it will not be surprising that some of the many momentous and controversial US decisions on military intervention turn out to be infiltrated by SD. 
A caveat is in order here on the nature of my argument, which has analytical ambitions, but is purely speculative in terms of its application to actual cases. My reconstruction is conjectural, based on known records and on speculative reasoning, not on new historical findings about what took




place. In any case, I am eager to stress that imputing SD to decision makers is no more speculative than imputing deceptive intent or epistemic confusion, for all three imputations concern mental states for which there is no direct evidence. In Chapter 4, I shall consider the Kennedy policy on Cuba, and more specifically the Bay of Pigs fiasco and the successful solution of the Cuban Missile Crisis in the following year. The Bay of Pigs episode illustrates the first type of political SD, where the SD of the CIA agents induced the deception of the whole Kennedy cabinet as a by-product – at least up to a point, after which it was supplemented by the SD of the Kennedy team itself. In the solution of the Cuban Missile Crisis, instead, we have an example of the third type of political SD. Here, SD provided the cover-up for the public lie about the secret deal between Bobby Kennedy and the Soviet Ambassador, trading the Cuban missiles for the Turkish ones. In that case, the conviction that the public lie was indeed justified by reasons of national security represents the content of the SD, allowing the Kennedy brothers to realign a lie, a moral wrong, with their self-image as righteous, courageous leaders. Chapter 5 will provide an illustration of the second type of political SD, in which SD is ancillary to straightforward deception. The example is centered on the Gulf of Tonkin Incident in the course of the Vietnam War. The incident, which later records made clear never took place, was used as a pretext to pass the Congressional Resolution of August 1964, leaving it to the president’s discretion to do whatever he thought necessary for resolving the conflict. McNamara knew that it was a pretext; nevertheless, he believed the incident real, despite the evidence to the contrary. The Tonkin incident was not the real reason for the escalation, but it was not, for all that, a fabrication, as the most accurate and technical accounts have subsequently argued. 
It was an episode of SD supporting the lie that the escalation was required precisely as a reprisal for the North Vietnamese attack. In Chapter 6, I take up the case of the weapons of mass destruction (WMD) invoked as the rationale for the controversial 2003 Iraq invasion by the Bush administration. I shall argue that there is an analogy between the Gulf of Tonkin Incident and the WMD: both served as pretexts for a controversial military attack, but, as in the case of Vietnam, the distance between the actual reasons for the intervention and those used to sell it does not imply that the latter were simply a fabrication. I examine whether the SD thesis fares any better in explanatory terms than its competitors, namely, the straight lying view and the honest mistake view. And I conclude, via a




speculative line of reasoning, that the administration (wrongly) believed in the existence of WMD despite the unfavorable and contradictory evidence up until that point. I should like to stress that the SD thesis, although certainly controversial, has an important consequence for future occurrences. If the role of SD had been acknowledged in the Vietnam case, the analogy between the two cases might have been grasped, and a more vigilant attitude might have been adopted if not by the president’s cabinet and advisers, then by Congress and the media at large. Had the Tonkin Incident been interpreted as an instance of SD instead of as a cunning lie by McNamara, preventive measures might perhaps have been devised with a view to averting any repetition of similar motivated mistakes. My conclusions will be devoted precisely to the exploration of those preventive measures, drawing on past experience.

Part I

The Philosophy of Self-Deception

Chapter 1

Investigating Self-Deception

Forse era ver, ma non però credibile/ a chi del senso suo fosse signore Ma parea facilmente a lui possibile/che era perduto in via più grave errore Quel che l’uom vede Amor gli fa invisibile/e l’invisibile fa vedere Amore Questo creduto fu che ‘l miser suole/ dar facile credenza a quel che vuole

Ludovico Ariosto, Orlando Furioso, , 

1 Preliminary

Generally speaking, self-deception (SD) is believing that P, under the influence of the desire that P be the case, when the available evidence would lead an unmotivated rational cognizer to conclude that ~P. Typical examples of SD are believing one’s marriage to be fine when one has seen enough evidence of marital infidelity, or considering oneself healthy when alarming symptoms should make one conclude the opposite. It is a form of motivated irrationality, displayed by usually rational subjects, as a rule capable of responding to evidence adequately and of forming and holding beliefs appropriately. After Sartre’s theorizing of mauvaise foi (1943), a wide and complex discussion has developed on this topic, ranging from philosophical analysis to cognitive science and evolutionary psychology as well. What is the nature of SD? If such a thing does indeed exist, how do we account for it in a nonparadoxical way? How do we evaluate it, both from the viewpoint 

“This may have been true, but scarcely plausible to anyone in his right mind; to him it seems quite possible, however, lost as he was in a far deeper delusion. What a man sees, Love can make it invisible – and what invisible, that can Love make him see. This, then, was believed, for a poor wretch will readily believe whatever suits him” (English trans. by G. Waldman, Oxford University Press, Oxford , p. ).





Political Self-Deception

of rationality and from that of morality? These are all matters of ongoing controversy. Taking up this discussion, I shall defend a nonskeptical account of SD as a genuinely puzzling but nonparadoxical phenomenon, which is the unintended outcome of intentional steps taken by the agent. I shall contend that SD is not the result of an intentional plan carried out by an agent under the spell of a rebellious wish, but the consequence of many intentional steps culminating in an unintended outcome. The resulting self-deceptive belief is unintended yet functional to reducing the subject’s anxiety and worries about P, at least temporarily. I shall argue that SD’s functionality is shared both by the more common straight cases of SD (when the motivating wish and the product of SD are identical), and by twisted cases of SD (when the self-deceptive belief runs opposite to the motivating wish). In my account, SD is brought about in a way similar to social phenomena that serve some collective purpose, and are the product of human action but not of human design, such as money, language, and many social conventions. For such kinds of phenomena, invisible hand explanations have been proposed which take into account the intentional-action input, the composition mechanism, and the unintended outcome. Accordingly, I shall propose an invisible hand explanation to account for the production of SD by an intentional process directed elsewhere, leading to an unintentional outcome, yet in some respect functional to the subject’s wish. The functional role of SD, however, needs to be critically analyzed, for (a) it is only temporary, and (b) it is such only with reference to the subject’s immediate emotional states. Sooner or later, the subject will have to come to terms with the unwelcome reality; meanwhile, if the self-deceptive belief leads to action, being as a rule false, it may harm both the subject and other people. 
Thus, a short-term emotional functionality is matched by overall negative consequences, both for the subject’s doxastic states and the actions following from her irrational beliefs. If SD is an unintentional outcome and not under the direct control of the subject, can the subject nevertheless be held responsible for it and blamed for her doxastic 



Straight and twisted cases of SD are specifically discussed by A. Mele, “Twisted SD,” Philosophical Psychology, , : –; A. Lazar, “SD and the Desire to Believe,” The Behavioral and Brain Sciences, , : –; A. Lazar, “Deceiving Oneself or Self-Deceived? On the Formation of Beliefs ‘Under the Influence’,” Mind, , : –; and C. Michel and A. Newen, “SD as Pseudo-Rational Regulation of Belief,” Consciousness and Cognition, , : –. This idea, which comes from Adam Smith and was picked up by Carl Menger, is especially examined by Friedrich von Hayek. See, for example, his Individualism and Economic Order, Routledge, London, .




confusion? My response is that if SD is a by-product of mental activities otherwise directed, then the subject’s responsibility is likewise indirect: since SD is not simply a happening but also a doing of the subject, the agent is not free from responsibility; but because SD is not under the agent’s control, the responsibility for past SD implies indirect strategies to avoid falling prey to future SD. In the next section, I shall critically review the main positions on SD, starting from the skeptical challenge of those who question the very possibility of SD. Then, I shall focus on the two main views of SD: the intentional and the causal-motivationist view. The first and more traditional view considers SD as a doing of the subject, who literally lies to himself, as in the other-deception model. The second and now prevalent account views SD as a causal event produced by a motivational state. The causal model has done away with the risk of paradoxes, which infected the intentional account; nevertheless, it exhibits some other difficulties, which were absent in the intentional model. In the third section of this chapter, I shall advance my alternative view of SD as produced by an invisible hand mechanism. My proposal subscribes to the nonparadoxical account of the causal model, but also overcomes its difficulties by preserving some important intuitions of the intentionalists. I should like to stress that the following, quite technical, discussion in analytical philosophy is relevant for the project of applying SD to politics. Not only is a clear and plausible conception necessary, but not just any conception makes sense in political analysis. For example, if SD is only pretense, as the skeptic maintains, then there is no genuine SD and no point in exporting it to politics. 
Consider now the intentional view: if SD is the intentional product of a subject who wants to believe what he wishes, then, from the viewpoint of political analysis, it is conflated with lying, and it does not matter that the first victim of the lie is the subject himself. Lastly, if SD is considered a causal product of cognitive biases, then it is conflated with honest mistakes, even if the mistake is in this case motivated. How can we make sense, for example, of those who do not want to believe that the 9/11 attack on the Twin Towers was carried out by Al-Qaeda terrorists, despite the overwhelming evidence, and who are convinced that it was in fact planned by American and Israeli intelligence to put the blame on the Muslims? If their conviction is just a pretense, then there is no point in further discussion. If we hold that their false belief is causally produced by biases, then it is politically similar to a mistake. If instead we see it as the outcome of an intentional strategy, under the influence of the desire to believe in the innocence of Muslims, then




the self-deceptive belief is not politically different from lying, despite the complex psychology setting SD in motion. In either case, reference to SD turns out to be politically redundant. For SD to play a political role, we need an account making sense of the (usually self-serving) motivation, of the cognitive confusion, of the intentional effort to do away with unwelcome evidence, and of the unintentional result of believing something patently false. The invisible hand model explains how the false conspiratorial belief is held candidly, yet through an intentional process under the influence of the wish, for which the self-deceiver can be held responsible. In this way, the self-deceptive belief in the American–Israeli conspiracy is neither mirrored by a symmetrical conspiratorial belief on the analyst’s part, nor downplayed as an honest mistake, but appraised in its distinctiveness.

2 A Critical Analysis of the Philosophy of SD

Skepticism about the very existence of SD as a genuine phenomenon is shared by some philosophers and some experimental psychologists, though for different reasons. Within philosophical analysis, there is a well-defined skeptical position equating SD with mere pretense and deception of others. This position is prompted by the definition of SD as literally “lying to oneself.” As lying to oneself, SD looks impossible insofar as it harbors two contradictions: (a) the self-deceived subject believes P and ~P at the same time; and (b) she has made herself believe something that she knows is false; in so doing, she has acted both as perpetrator and victim of the lie. To state it differently, if patterned after the other-deception model, SD implies two different paradoxes: the static or doxastic paradox of holding two contradictory beliefs at the same time, and the dynamic paradox of making oneself believe what one knows is not true. Reflecting on these two contradictory conditions, the skeptic concludes that SD is not literally possible, at least in a nondissociated self. In the skeptic’s view, what is called SD is actually only pretense and deception of others. Given that SD 



R. Demos, “Lying to Oneself,” Journal of Philosophy, , : –. These two impossibility conditions are clarified by B. McLaughlin, “On the Very Possibility of SD,” in R. T. Ames and W. Dissanayake, eds., Self and Deception, State University of New York Press, Albany, , pp. –. The label of the static and dynamic paradox comes from A. Mele, “Real SD,” Behavioral and Brain Sciences, , : –. M. R. Haight, A Study of SD; David Kipp, “On SD,” Philosophical Quarterly, , : –; K. J. Gergen, “The Ethnopsychology of SD,” pp. –; M. R. Haight, “Tales from a Black Box,” pp. – in M. Martin, ed., SD and Self-Understanding; David Kipp, “SD, Inauthenticity and Weakness of the Will,” ivi, pp. –; Tamar Szabó Gendler, “SD as Pretense,” Philosophical Perspectives, , : –.




is by definition always attributed by external observers or by a later self, what external observers actually see is a contradiction between an expressed belief of the subject and the data in front of her, a contradiction leading to the inference of SD. The contradiction, according to the skeptic, is, however, only apparent: most likely, the subject has merely expressed a false belief in order to mislead others about herself, to cover up some of her unpleasant attitudes and conduct, and to present a better self-image to other people. Unfortunately, her pretense comes into conflict with other behavioral traits or other expressed beliefs because, as is well known, liars do not always have good memories, nor do they always succeed in being perfectly consistent in their pretense and deception. I think the skeptic does not consider the possibility that the inconsistencies exhibited by the self-deceiver, in his doxastic states and in his related conduct, may be philosophically objectionable but nonetheless quite real. The skeptic’s view is prompted by the description of SD as “lying to oneself,” and by the consequent paradoxical nature of such an act. But if there were a nonparadoxical account of SD, then the philosophical skepticism about SD would consequently fall. Within cognitive psychology, the skeptical challenge is grounded in the denial of the interference of motivation with cognition. If the deceptive belief is produced by cold biases instead of wishes and emotions, then SD simply disappears and leaves room only for cognitive mistakes. Some psychologists explain what looks like SD as the effect of cold biases and cognitive illusions, which tend to be blindly ego-centered. This antimotivationist position is argued on the grounds of explanatory parsimony, for it does away with the complication of explaining beliefs by reference to the motivational system via some mysterious mechanism. 
The parsimonious position is, however, contested by many other psychologists who hold that there is enough evidence to support the thesis of



 

The view of SD as pretense does not always imply a skeptical conclusion. There is also a view of SD as pretense that considers SD a genuine phenomenon. See R. Audi, “SD, Action and Will,” Erkenntnis, , : –; G. Rey, “Toward a Computational Account of Akrasia and SD,” in B. McLaughlin and A. O. Rorty, eds., Perspectives on SD, University of California Press, Berkeley, , pp. –; Funkhouser, “Do the Self-Deceived Get What They Want?,” –; T. Szabó Gendler, “SD as Pretense.” This point is well stressed by D. S. Neil Van Leeuwen, “The Product of SD,” Erkenntnis, , : –. See, for example, Thomas Gilovich, How We Know What Isn’t So, The Free Press, New York, , and M. Piattelli Palmarini, Inevitable Illusions, John Wiley & Sons, New York, .



Political Self-Deception

motivational interference with cognition. In addition, the supporters of the cold-bias view must explain the ego-centeredness of biases, whereas the motivational interference can precisely account for the general human tendency to overestimate one's success, capabilities, and prospects, and to undervalue failures and negative traits, for which there is abundant empirical evidence. Evidence in favor of the interference of wishes with belief-formation comes, for example, from the voice-recognition experiment by R. Gur and H. Sackeim. They found that errors in recognizing one's own voice as opposed to others' met the criteria for ascribing SD, as a motivated false belief brought about to escape embarrassment. Many other works in cognitive psychology appear to confirm the influence of motivation on cognition and to falsify the purely cold-bias view. In sum, the cognitive skepticism about SD as a genuine phenomenon is only a hypothesis, basically justified by explanatory parsimony; it leaves the self-centered nature of cognitive biases unaccounted for and, moreover, it is not confirmed by experimental psychology. Finally, a third skeptical argument comes from perspectives in neuroscience holding that the brain is made up of a variety of relatively independent modules, lacking any unitary center. There is nothing mysterious or puzzling in there being two contrasting beliefs if they are stored in two independent brain modules with different functions pursuing different goals; it is only on the basis of the illusion of a unitary mind that holding two opposite beliefs

  





See Z. Kunda, "Motivated Inference: Self-Serving Generation and Evaluation of Causal Theories," Journal of Personality and Social Psychology, , : –, and A. Kruglanski, "Motivated Social Cognition: Principles of the Interface," in E. T. Higgins and A. Kruglanski, eds., Social Psychology: Handbook of Basic Principles, Guilford Press, New York , pp. –.
Kunda, "Motivated Inference," p. .
See R. Trivers, Deceit and SD: Fooling Yourself the Better to Fool Others, Penguin, London .
R. C. Gur and H. A. Sackeim, "SD: A Concept in Search of a Phenomenon," Journal of Personality and Social Psychology, , : –; R. C. Gur and H. A. Sackeim, "SD, Self-Confrontation, and Consciousness," in G. E. Schwartz and D. Shapiro, eds., Consciousness and Self-Regulation: Advances in Research, vol. , Plenum Press, New York , pp. –; see also their response to criticisms in H. A. Sackeim and R. C. Gur, "Voice Recognition and the Ontological Status of SD," Journal of Personality and Social Psychology, , : –. The criticism of the voice experiment is put forward by W. Douglas and K. Gibbins, "Inadequacy of Voice Recognition as a Demonstration of SD," Journal of Personality and Social Psychology, , : –.
See, among others, D. Wentura and W. Greve, "Who Wants to Be Erudite? Everyone. Evidence for Automatic Adaptations of Trait Definition," Social Cognition, , : –; M. G. Haselton and D. Nettle, "The Paranoid Optimist: An Integrative Evolutionary Model of Cognitive Biases," Personality and Social Psychology Review, , : –; Michel and Newen, "SD as Pseudo-Rational Regulation of Beliefs," –; A. Mata, M. Ferreira, and S. Sherman, "Flexibility in Motivated Reasoning: Strategic Shift of Reasoning Modes in Covariation Judgment," Social Cognition, , : –.

Investigating Self-Deception



seems contradictory and puzzling. On this view, SD is simply the (illusory) consequence of a more fundamental illusion: the existence of the Cartesian ego. However, the view of the brain as made up of independent modules by no means entails this conclusion. Actually, such a view may help with the explanation of SD, as shown by the findings on anosognosia in patients with damage to either the right or the left hemisphere. In sum, the three different skeptical challenges to SD either are controversial theses unsupported by conclusive empirical experiments or derive from a problematic definition of SD as lying to oneself. Conversely, giving up SD implies giving up a whole body of phenomena, amply described and reported in literary works and novels, and deeply entrenched in our common experience. Straitjacketing them all into the deflationary view of pretense or cognitive distortion does not seem to do justice to the familiar experience of believing something in the teeth of evidence under the influence of a strong wish. Even if one cannot take the phenomenology of SD at face value, one cannot discard it without good reasons, and, as far as we have seen, there are no good reasons to discard SD as a genuine phenomenon besides its controversial description as lying to oneself.

..

The traditional intentional option, generally speaking, defines SD as a state where the subject believes both that P and that ~P, and the belief that P is false and corresponds to the subject's wish. This doxastic state is brought about intentionally by the subject lying to herself in order to reconcile her belief with her desire. From the intentional view the following general definition of SD can be drawn:

• S has a wish that P
• S knows that ~P
• S brings herself to believe that P
• S believes both P and ~P.

In order to illustrate the intentional view, consider the Bush administration's belief in the presence of weapons of mass destruction (WMD) in Iraq.

See R. Kurzban, Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind, Princeton University Press, Princeton .
See Ramachandran, "The Evolutionary Biology of SD, Laughter, Dreaming and Depression," –.
This definition does not correspond to any provided by representatives of the intentional view; I have drawn it from the variety of more specific positions in the debate. Note that in the debate on SD, knowing and knowledge are not used in the technical sense of epistemology, as believing something that is certainly true (as in Gettier's problem), but in the more commonsensical way of having appraised information warranting a certain belief.




If one interprets that belief as a case of SD and views SD according to the intentional model, the following obtains: the administration wished that WMD were actually hidden in Iraq, but, knowing that there were none, brought itself to falsely believe in their presence, ending up believing both that there were and that there were not WMD in Iraq. This view is clearly exposed to both the static and the dynamic paradox; in order to show that SD is possible under this reading, both paradoxes must be shown to be only apparent. If one takes lying to oneself literally, the presence of two contradictory beliefs is an implicit necessary condition for deception of the self by the self. As a deceiver, the self believes that ~P, and as a victim of the deception he is made to believe that P. The belief that ~P is held to be necessary because it represents the very reason why the subject engages in SD, namely to evade the unwelcome truth. How can the belief that P coexist with the belief that ~P in the same self? Prima facie it seems impossible, yet there are at least three different positions that try to dispel the paradox while accounting for the possibility of holding two contradictory beliefs at the same time. A first possibility is the recourse to mind partition. Mind partition is best represented in the works of Donald Davidson and David Pears, and explains the seemingly paradoxical nature of SD as the product of two subagents (Pears) or of two subsystems (Davidson). The literal conception of lying to oneself is thus reconciled with the resolution of the double paradox of SD, for no such logical problems arise if there are two subagents, or even two intentional subsystems, reproducing the other-deception model. This solution has been widely criticized on various grounds. On the one hand, mind partition does not sit well with the phenomenology of the self-deceiver, characterized by a specific doxastic instability.
On the other hand, it presents the homuncularist problem: in



In a similar fashion, Paul Helm (Belief Policies, Cambridge University Press, Cambridge ) sees accounts of SD as divided between "one-mind" and "two-mind" modes, pp. –.
See D. Pears, Motivated Irrationality, Oxford University Press, Oxford ; Id., "The Goals and Strategies of SD," in J. Elster, ed., The Multiple Self, Cambridge University Press, Cambridge , pp. –; Davidson, "Deception and Division"; Id., "Paradoxes of Irrationality," in R. Wollheim and J. Hopkins, eds., Philosophical Essays on Freud, Cambridge University Press, Cambridge , pp. –.
Notably, Mark Johnston's critique of homuncularism ("SD and the Nature of the Mind," in A. Rorty and B. P. McLaughlin, eds., Perspectives on Self-Deception, University of California Press, Berkeley , pp. –), to which Pears responded in "Self-Deceptive Belief-Formation," Synthese, , : –.
A different kind of criticism of Davidson and Pears is raised by Dion Scott-Kakures, "SD and Internal Irrationality," Philosophy and Phenomenological Research, , : –. Scott-Kakures there criticizes the contention that SD is a typical instance of internal irrationality, arguing that if one favors a one-belief account the problem vanishes.




this account, there are not just two brain modules, each harboring one of the two contradictory beliefs; there are actually two agents. One of the two, the unconscious, is scheming and acting at the expense of the conscious victim, who is left in charge of action. The problem with homuncularism is that it is not clear, on the one hand, why the liar is so keen to dupe the other part and, on the other, why the victim is so blind to the scheming activity leading to its own deception. Certainly, SD requires a certain amount of nontransparency and nonreflexivity of the subject and of his mental processes and states. However, in order to account for such a lack of awareness, we do not need a homuncular notion of the unconscious. The second way out of the static paradox is entrusted to a logical consideration. Under a two-belief account, SD can be described either as "it is the case that S believes P and it is not the case that S believes P" or as "S believes P and ~P." Only the first implies a logical contradiction, being a contradictory attribution of beliefs, while the second, as an attribution of contradictory beliefs, implies only a confused mind. In this way, the paradox is bypassed by using the third person in the ascription of SD. But if SD can be ascribed without incurring logical contradiction, how a person can hold two contradictory beliefs still needs explaining. While there are some explanations for the coexistence of contradictory beliefs, such as their being stored in different brain modules or not both being in the focus of attention at the same time, it is more difficult to explain how the false belief is the direct consequence of the intentional evasion of the first, true one. For example, it is hard to account for the false belief in the existence of WMD in Iraq as intentionally produced to escape the true belief in their absence.
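The contrast between the two descriptions can be set out in simple doxastic notation (my shorthand, not the author's: B_S abbreviates "S believes that"):

```latex
% Contradictory attribution of belief: a genuine logical contradiction.
\[ B_S(P) \land \lnot B_S(P) \]
% Attribution of contradictory beliefs: logically consistent,
% though it describes a confused mind.
\[ B_S(P) \land B_S(\lnot P) \]
```

Only the first formula is self-contradictory; the second merely ascribes to S two beliefs whose contents contradict each other.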
From a political perspective, this psychological puzzle is of little relevance, while the attribution of an intention to deceive would conflate SD with straight deception. Alternatively, the paradox is bypassed by stating that the subject does not hold two contradictory beliefs at the same time, for one of the two is less than a proper belief, whether an avowal of belief, a thought, or a half-belief. At first sight, this option looks more like a linguistic trick or

Mark Johnston describes this process very effectively in "SD and the Nature of the Mind."
J. Foss, "Rethinking SD," American Philosophical Quarterly, , : –, p. .
H. Fingarette, SD, Routledge & Kegan Paul, London , was the first to introduce the difference between believing and the avowal of beliefs. On a similar line, see also A. O. Rorty, "Belief and SD," Inquiry, , : –; J. T. Saunders, "The Paradox of SD," Philosophy and Phenomenological Research, , : –; Audi, "SD, Action and the Will," –; Id., "SD vs. Self-Caused Deception: A Comment on Mele," in Behavioral and Brain Sciences, ,




a rhetorical way out than a serious solution. We must set apart, though, the view stating that the less-than-full belief is the true ~P from the view affirming that it is the false P that falls short of being a belief, being rather only an avowal of belief (which is closer to the pretense view espoused by skeptics). The latter view holds that the agent knows and believes the truth of ~P but, under the influence of the wish that P, comes to sincerely avow that P, without actually believing it true. In this way, the paradox is avoided, but it is not clear what is gained instead: what does it mean that the subject knows the truth but sincerely avows its opposite without believing it? What is a "sincere" avowal that P, which is not believed? In sum, the three strategies for explaining away the static paradox exhibit several problems and jointly provide reasons in favor of skepticism about SD. Yet is the two-belief view really necessary for SD to be the case? The supporters of the causal model maintain that it is not. Before examining the one-belief option put forward by causalists, let us briefly consider how intentionalists have faced the dynamic paradox. In this respect, the traditional view captures the intuition that the self-deceiver seems to display intellectual dishonesty in her conviction that P is the case despite contrary evidence. "Dishonesty" leads to the idea of an intentional doing by the subject, matching beliefs with desires instead of being rationally responsive to evidence. In turn, this fits with the conception of SD as lying to oneself, paving the way to the dynamic paradox beside the static one. But bringing oneself to believe what one knows is not true makes SD a self-defeating strategy.
For the intentional account to be true, the agent cannot bring herself to believe that P, against the evidence, in a straightforward way, simply because she wants P to be true: not only can we not believe by fiat, but we also cannot believe by fiat what we know is not true. SD cannot be a direct and self-transparent strategy; hence, if SD is to be intentional, it has to be indirect and/or nontransparent. Indirectness has been proposed as exploiting time and bad memory, in such a way that S at t¹ can plan to make herself believe that P at time t², which





: . It must be stressed that Audi sees the self-deceptive statement as less than a belief, while the truth is actually believed by the subject. His account in this way comes closer to the reductionist view that SD is akin to pretense. Others distinguish between beliefs and thoughts or beliefs and knowledge. The distinction between believing and knowing is made, for example, by Kent Bach, "An Analysis of SD," Philosophy and Phenomenological Research, , : –; Stanley Paluch, "SD," Inquiry, , : –.
See the dismissive comments by Kipp, "On SD"; M. Johnston, "SD and the Nature of Mind"; A. Mele, Irrationality: An Essay on Akrasia, SD, and Self-Control, Oxford University Press, Oxford .
See S. Borge, "The Myth of SD," The Southern Journal of Philosophy, , : –.




now she knows is false. But even if it is conceivable to manipulate one's beliefs willfully, and cunningly create a false belief ad hoc, this is hardly a case of SD, because the resulting false belief that P will in fact be brought about in the usual rational way. At time t² the subject will be justified in believing that P, though P is false. Alternatively, nontransparency implies some reference to the unconscious. Leaving aside the Freudian unconscious, many scholars make use of a nontechnical notion of the unconscious, such as nonawareness, the intrinsic opacity of cognitive operations, mental tropisms, and so on. Such accounts, however, fail to acknowledge the underlying methodological issue of SD ascription. SD is never and cannot be self-ascribed in the present tense, because that would indeed be paradoxical. Therefore SD ascription is always made from outside, by another agent or by a later self, without the possibility of being confirmed by the self-deceiver in the present tense. This very fact suffices to cast some doubt on the interpretation of SD as the subject's strategy. It is indeed an external observer, or a later self, who sees all the bits of practical reasoning in place: motivating wish, ends, and means; and who concludes that it is an intentional, although somehow unconscious, plan. Yet the presence of a purpose and a motive, supposedly evident to everyone, does not yet justify the inference of a strategy unconsciously devised by S. After all, the natural and social worlds display a variety of seemingly purposive phenomena which are, in fact, unintended consequences of blind processes or of elsewhere-directed actions; and the explanation of such phenomena dispenses with the teleological scheme typical of intentional strategies. Given the ascription issue, and the availability of alternative

 



The example is that of Clara, who wants to forget about a meeting fixed in two months' time and writes it down in her diary on the wrong day. Given her poor memory, she is confident that in two months she will believe her own writing and forget the original date. A version of this example is advanced and discussed by D. Davidson in "Deception and Division." The same example is then discussed by A. Mele, Irrationality: An Essay on Akrasia, SD and Self-Control, pp. –; McLaughlin, "On the Very Possibility of SD"; J. L. Bermudez, "SD, Intention and Contradictory Beliefs," Analysis, , : –. That self-induced deception is not real SD is argued by McLaughlin, "On the Very Possibility of SD," while it is defended by Bermudez in "SD, Intention and Contradictory Beliefs."
S. Gardner, Irrationality and the Philosophy of Psychoanalysis, Cambridge University Press, Cambridge ; Talbott, "Intentional SD in a Single, Coherent Self," –; A. O. Rorty, "The Deceptive Self: Liars, Layers, and Lairs," in B. McLaughlin and A. O. Rorty, eds., Perspectives on SD, pp. –; A. Barnes, Seeing through SD, Cambridge University Press, Cambridge .
The problematic ascription condition for SD is relatively overlooked in the literature, but see, for example, E. A. Johnson, "Real Ascriptions of SD are Fallible Moral Judgments," Behavioral and Brain Sciences, , : .




explanations making sense of the purposefulness of SD, the imputation of intentionality does not look justified. In general, even the most persuasive versions of the intentional account, such as Fingarette's, are unclear on two specific points: (a) does the presence of a seemingly purposive phenomenon suffice to justify the presupposition of an intentional plan and of a teleological strategy to carry it out? (b) What should the content of the intention be in order to count as a self-deceptive intention? Concerning the latter, almost everyone excludes that it is the intention of deceiving oneself, which would be puzzling indeed. But is it the intention to believe that P, which is knowingly false, or is it the intention to reduce one's anxiety, improve one's image, and so on? The latter is definitely present, and legitimately so, but the self-deceptive outcome, the soothing false belief, can hardly be seen as the direct result of that intention working in its usual way. (Hence the problem of explaining how that intention can work behind the back of the subject, so to speak, and the question whether this nontransparent work can nonetheless be called "intentional.") By contrast, the former, i.e. the intention to manipulate one's cognitive processes in order to believe what one wishes in the teeth of evidence, cannot be the case, for (a) it brings back the dynamic paradox all over again and (b) it is simply imputed by the observer illegitimately, without sufficient warrant, by applying the teleological scheme from outside and by ascribing the apparent purpose of the false belief to the agent as her purpose. Even if the false belief is shown to be practically rational according to Bayesian rationality, this is not sufficient to prove the intentional nature of SD. There is a point in favor of the intentional view, however, as first pointed out by W. G.
Talbott. His defense of the intentional account is focused on the selectivity of SD, which he contends the causal view leaves unexplained. He argues that if it were the case that a wish causally triggered a biasing process ending up in a false belief, as anti-intentionalists maintain, there would be no limit to perceptual distortion for the immediate goal of maximizing pleasure and minimizing pain, with serious problems for the agent’s long-run interests. If SD were causally produced by a wish to reduce one’s anxiety, by believing that everything is fine, then why is it that most of the time mental processes do not take the first shortcut to pain minimization? If SD were the outcome of mental tropism for anxiety reduction, there would be no possibility of a different response all the  

H. Fingarette, "SD Needs No Explaining," Philosophical Quarterly, , : –.
Talbott, "Intentional SD in a Single, Coherent Self." Ibid.




time reality frustrates one's wishes. This is why Talbott holds that we need an intentional account of SD, one which makes sense of its limited scope within a fairly circumscribed area of individual life. Yet Talbott's position is unconvincing, given all the difficulties of intentional accounts listed above; certainly it cannot be a transparent choice of the subject whether to take the shortcut to comforting false beliefs or to respond rationally to the evidence. Causal explanations so far have no convincing answer to the selectivity issue, but their supposed deficiencies cannot alone prove that SD is an intentional strategy performed by a Bayesian agent, so long as it is not shown that the agent's goal is his intention and that it is intentionally carried out rather than brought about behind the agent's back.

..

The other option for overcoming the conceptual difficulty of SD is to reject the lying-to-oneself model altogether, and to view SD simply as a motivated belief contrary to the available evidence. On this model, the self-deceiver does not hold two contradictory beliefs at the same time. He holds only the false one, which is the product of his wish causally switching on cognitive biases. In this way, both paradoxes are conveniently bypassed, for the causal model claims that (a) there is only one belief and (b) the process is brought about by the causal switching on of cognitive biases by the motivational state. Mele, probably the most eminent representative of the causal view, offers the following list of jointly sufficient conditions for the production of self-deceptive beliefs, namely:

1. the belief that p which S acquires is false;
2. S treats data relevant, or at least seemingly relevant, to the truth value of p in a motivationally biased way;
3. this biased treatment is a nondeviant cause of S's acquiring the belief that p;
4. the body of data possessed by S at the time provides greater warrant for non-p than for p.

As an illustration, let us apply this model to the belief in the presence of WMD before the  invasion. The four sufficient conditions by Mele apply to the case as follows. The belief in their existence was false and was produced by a biased consideration of the evidence, due to the wish that they existed. The wish causally triggered cognitive biases bringing about   

A similar position is expressed by J. L. Bermudez in “SD, Intention and Contradictory Beliefs.” This condition is supposed to rule out that the deception is produced in someone other than the subject. Mele, “Real SD,” – and SD Unmasked, pp. –.




the false belief, while the available evidence concerning the presence of WMD weighed more on the negative than on the positive side. Compared with the same example interpreted according to the intentional model, here the motivation concerning the presence of WMD triggers cognitive biases, which induce faulty reasoning ending in the false belief in their existence despite contrary evidence. Condition () is to my mind ambiguous, for it does not specify (a) whether the evidence available to S is an internal or external condition, and (b) whether it is the contrary evidence that sets the biasing process in motion. If possession of the evidence means simply that it is available, then () is an external condition independent of the biasing process ending in the belief that P. If P is false, then we have a case of SD; if it is only unwarranted, then we have a case of wishful thinking. If, as Mele stated, wishful thinking is believing beyond the evidence and SD is believing against the evidence, then it is the external condition () which contingently makes P either SD or wishful thinking. In order to preserve the specificity of SD, condition () should not only be internal, that is, processed by the subject, but should also be what primes the biasing. If SD is not just wishful thinking gone wrong, then the appraisal of the contrary evidence and the consequent priming of the biased treatment of data (and in that order) are to my mind crucial to account for SD as a specific instance of motivated irrationality. Moreover, the distinctiveness of SD further requires that the condition "S has a wish that P" be added to the list. Without this condition, SD cannot be distinguished from cases of stubborn belief, as noted by Kevin Lynch. What actually distinguishes stubborn beliefs from self-deceptive ones is the motivation, which in SD is thematic with reference to the belief, but not in stubborn-belief cases.
As to the nonparadoxical production of SD, the causal model views SD as a purely causal product whose operating cause is a motivational state. Different supporters of the causal-motivationist account explain the motivational state triggering the biasing process in terms of emotions, desires, or both. Mele sees the production of self-deceptive beliefs in continuity

 

This seems to be the position held by Mele, at least in his  article. He seems to subscribe to the view that the specificity of SD as a faulty mental process is linked to the intentional view of the phenomenon. See "Real Self Deception," p. .
K. Lynch, "SD and Stubborn Beliefs," Erkenntnis, , : –.
Ariela Lazar holds that emotions are the causal component of the biasing, Mele favors desires for straight cases and emotions for twisted cases, and Scott-Kakures refers to different kinds of emotions. See Lazar, "SD and the Desire to Believe," –, and "Deceiving Oneself or Self-Deceived?," p. ; also see Mele, SD Unmasked; D. Scott-Kakures, "Motivated Believing: Wishful and Unwelcome," Nous, , : –.




with the production of normal beliefs, insofar as both are affected by the work of cognitive biases and heuristics. His account relies on the lay hypothesis-testing theory, as outlined in cognitive psychology. The pragmatic model of hypothesis testing describes people's everyday reasoning. Briefly, the model says that (a) our belief-formation is generally oriented by the pragmatic need to minimize costly errors, relative to the resources required for acquiring and processing information; and (b) individuals have different acceptance/rejection thresholds of confidence relative to the belief that p, depending on the cost to the individual of a false acceptance or, conversely, of a false rejection. Taking up this model, Mele says that motivations interfere precisely by defining the focal error to be avoided, which then triggers the biasing, either lowering the acceptance threshold for believing P true or raising the rejection threshold for disbelieving that P. This manipulation results in a corresponding relaxation of the accuracy of data processing and evaluation, bringing the subject to falsely believe (or retain the belief) that P. Take the example of Joan's husband, who has started coming home later and later, spends a long time on the telephone after dinner, and has had an uncommon number of conferences and conventions on recent weekends. Joan possesses evidence that would make an unmotivated cognizer conclude that the husband is likely having an affair (condition  of Mele's sufficient conditions for SD); for Joan, however, the focal error to avoid is falsely believing that her husband is having an affair, for that belief would uselessly disrupt her ménage and family life.
Consequently, her acceptance threshold for the belief that her husband is having an affair is so heightened that, short of surprising her husband in bed with another woman, she will discount the suspicious evidence, explaining it away (condition ), and, as a result of her biased data processing, she will go on believing that her marriage is fine (condition ), which is false (condition ). In this way, not only does no static paradox need to be overcome, since the subject does not entertain two contrary beliefs, but it is also unnecessary to imagine a person engaged in cunning manipulation of her mental states aimed at fooling herself. SD is indeed one species



On the general pragmatic model of hypothesis testing, see J. Klayman and Young-Won Ha, "Confirmation, Disconfirmation and Information in Hypothesis Testing," Psychological Review, , : –; James Friedrich, "Primary Error Detection and Minimization (PEDMIN) Strategies in Social Cognition: A Reinterpretation of Confirmation Bias Phenomena," Psychological Review, , : –; Y. Trope and A. Liberman, "Social Hypothesis Testing: Cognitive and Motivational Mechanisms," in E. T. Higgins and A. Kruglanski, eds., Social Psychology: Handbook of Basic Principles, Guilford Press, New York, pp. –.




of motivated irrationality, which exploits the normal everyday process of hypothesis testing and the normal cognitive biases affecting all human cognition. Much as this account is refreshing for its simplicity and nonpuzzling nature, there is a lot that it leaves unexplained. All in all, relying on the general lay model of hypothesis testing can backfire on Mele's account. For the model, in providing a general explanation of everyday reasoning, says that human cognition is always pragmatically rather than epistemically oriented, and that it is likewise open to pervasive biases and systematic mistakes. It is not sufficient to posit a wish as the cause of the biasing ending in SD, as Mele does, because the hypothesis-testing model is always guided by wishes and needs as opposed to purely epistemic norms. The self-serving nature of the wish is neither sufficient, being common to all kinds of motivated irrationality, nor even necessary for the production of a deceptive belief. How can we specifically detect SD against such a cognitive background? In Mele's theory, the specificity of SD rests only on the fact that the available evidence runs opposite to what the subject has falsely come to believe. In this case, however, SD seems entrusted to a purely contingent and external condition and has nothing to do with any specific process in belief-formation. Moreover, Mele's sufficient conditions cannot set SD apart from stubborn beliefs. The specificity issue is not simply intriguing for philosophical analysis but also affects the role that SD may play in political analysis. In order to understand faulty decision making properly, the gray area of illusions, myths, ideological mindsets, and SD, amply acknowledged by political scientists and historians, must be unpacked. Only a specific conception of SD can capture the distinctive role that self-deceptive beliefs play in bad planning, for they are not simply induced by desires: they are false and prompted by an emotional reaction against negative evidence. As we shall see in the analysis of cases, SD is actually always mixed with illusions and ideological convictions; hence the review of data is hardly dispassionate. Thus, in order to detect SD precisely, we need something more specific than motivations and biasing, which are always present. Consider, for example, the conviction that an escalation of the American involvement in Vietnam in the course of  was the solution to the clear lack of success of the counterinsurgency in the South. In the light of the subsequent failure of the American escalation, this conviction might have been a case of wishful thinking, that is, of believing according to one's wish to win and beyond the available evidence. Yet, very likely, it was a case of SD, given that it was precisely the gut reaction to the lack of success at the time;
Only a specific conception of SD can render the distinctive role that self-deceptive beliefs play in bad planning, for they are not simply induced by desires: they are false and prompted by an emotional reaction against negative evidence. As we shall see in the analysis of cases, SD is actually always mixed with illusions and ideological convictions; hence, the review of data is hardly dispassionate. Thus, in order to detect SD precisely, we need something more specific than motivations and biasing, which are always present. Consider, for example, the conviction that an escalation of the American involvement in Vietnam in the course of  was the solution to the clear lack of success of the counterinsurgency in the South. In the light of the subsequent failure of the American escalation, this conviction might have been a case of wishful thinking, that is, of believing according to one’s wish to win and beyond the available evidence. Yet, very likely, it was instead a case of SD, given that it was precisely the gut reaction to the lack of success at the time;

Investigating Self-Deception



more specifically, the idea of escalation represented an evasion of the unsuccessful counterinsurgency campaign in South Vietnam. In such a case, the desire to win, confronted with the negative evidence of the situation in South Vietnam, engendered the false contrary belief that bringing the conflict to the North would resolve it. The subsequent alleged incident in the Tonkin Gulf represents an even clearer example of SD. In that case, when doubts about whether the incident had taken place started to surface, the wish for a dramatic incident justifying the escalation to Congress led the biased search for confirmation. In both occurrences, the negative evidence relative to the wish primed a biased search, leading to the confirmation of the wish. In order to pinpoint SD precisely, Mele’s four conditions are not sufficient, because the links among the wish, the negative evidence, and the biasing are missing. In addition to the specificity issue, the causal model runs into the selectivity issue. If, despite the pervasiveness of biases and motivational interference, we are on the whole usually responsive to evidence and come to hold beliefs that are mostly true, then we cannot explain the selectivity of SD via our general cognitive vulnerability. Mele has proposed a solution based on the following example. Gideon is a CIA agent accused of treason. Both his staff and his parents share the desire that he be innocent, but, when confronted with the body of evidence, his staff comes to be convinced that he is guilty; his parents instead retain the desired belief that he is innocent despite the counter-evidence. Only his parents are then victims of SD, while his staff responds to the evidence adequately. The difference, in Mele’s explanation, lies in the fact that the cost of falsely believing Gideon innocent is higher for the intelligence agents than for his parents.
For the staff, the desire for his innocence is trumped by the desire not to be betrayed, but not for his parents. But what is it that makes the desire that P sometimes immune to potentially conflicting truth-oriented desires? We must have a theory that specifies which desires in S’s motivational set may become operative for biasing, and in which situations. In Gideon’s case, it is clear that the same desire is embedded in two different utility functions, one for the parents and one for the staff: this fact may explain the different intensity of the wish for the two sets of agents, but it hardly explains how the biasing process is causally activated by strong preferences. Usually a preference ranking gives agents reasons in favor of certain choices and certain actions, but clearly the causal account cannot resort to the idea of having reason to deceive oneself, which was instead central for the intentional account. If SD is entered into causally, as the effect of the triggering of biases by a certain motivation, the intensity of a desire cannot




be the causal explanation of its triggering biases. Granted that Gideon’s parents have not only a strong preference for Gideon’s being innocent, but one that is clearly ranked over all others, which is not the case for the staff, still there should be something bridging that strong preference and the triggering of the biasing process. To put it briefly, the solution of the selectivity issue under a causal account cannot simply be entrusted to the lay hypothesis testing model without presuming agency at a certain juncture. The relevance of selectivity for political application is easy to see: it explains why and when SD takes place instead of the normal truth-responsive process of belief formation. If we can account for SD’s selectivity, we can detect the circumstances favorable for SD to take place. Much as specificity allows for detecting SD, selectivity allows for foresight and provides a basis for preventive measures. In order to face both issues of specificity and selectivity, I propose to look into aspects that have so far been left unclear or outside the focus of the causal model. I am not disputing that the product of SD is unintended, but that does not imply that it is wholly causally produced as a mental event. Mele does not deny that the self-deceiver also acts intentionally in the process of deceptive belief-formation, but such intentional doing is pushed aside in his account and has no effect on the causal explanation of the self-deceptive belief. I think that this is a mistake that exposes the supporters of the causal account to the same confusion displayed by intentionalists between intentionality of process and intentionality of outcome. Actually, for the outcome to be unintended, the process need not be wholly unintended and causal. I argue that a proper consideration of the intentional dimension of SD helps resolve the two issues of specificity and selectivity. A final critical remark concerns the causal model.
Causalists claim to have provided a simple and unified account of both straight and twisted cases of SD. The most common, straight SD produces a “positive belief” corresponding to the subject’s desire; in that case the desire that P causally induces S to search for confirming evidence for (false) P and to discount ~P, making use of the hypothesis testing model and exploiting the usual biases. An example of straight SD in social and political reality is the widely held belief that immigrants take away jobs from nationals, which has been amply proven false by all records and data analysts, but which fits the dislike for immigrants and for a multicultural society. Twisted SD instead

To my knowledge, this distinction has been observed only by Levy in “SD and Moral Responsibility,” p. .




produces a belief adverse to S’s desires, such as falsely believing one’s partner unfaithful or a nuclear attack by an enemy state likely. In such cases, a strong emotion, a fear that P be the case (marital infidelity or nuclear attack), causes an analogous biased search (falsely) confirming that P, which happens to be contrary to S’s wish. The SD model is the same for positive and soothing false beliefs as well as for negative, adverse, and similarly false ones. The advantage of having a unitary account, besides explanatory simplicity, lies in doing away with the seeming purposefulness of self-deceptive beliefs, which has been a pillar of the intentional model. If the same explanation applies to favorable and to adverse beliefs, the purposive nature of SD is shown to be only illusory, for the object of the desire has no role in producing the false belief. On closer inspection, though, the unitary account is less unitary than claimed. The fact that the same biased path goes in opposite directions needs explaining. Different explanations have been suggested, imputing the difference either to a switch in the causal component of the motivational set from desire to emotion, or from emotions like love to emotions like jealousy, or to different dispositional traits of subjects. These switches in the causal component of the model, however, must be properly explained, on the one hand, and, on the other, will alter the original simplicity. If the explanation refers to the different causal roles of emotions and desires, respectively, then the motivational state needs to be carefully unpacked. Similarly unsatisfactory is the switch from one type of emotion to another, for these switches should be explained in the first place. The causal shifts are neither properly analyzed nor justified, and thus suggest an ad hoc accommodation, on the whole making the account less unitary than claimed. Summing up, the causal story of SD seems more promising than the intentional one.
It is logically impossible to conceive of SD as an intentional plan of the subject; only via mind partition can such a view be defended, but then one stumbles on the problems of double agency and homuncularism. To be sure: it is not impossible to conceive of a confused subject holding contradictory beliefs. Even in this case, though, the two beliefs cannot be held in the focus of attention at the same time, and one of them must be buried somewhere or never attended to. As to the formation of the deceptive belief, again, we are either stuck with mind-partition, or else weaker notions of the unconscious cannot persuasively account for SD as an intentional strategy of the subject. Despite these serious

Scott-Kakures, “Motivated Believing,” p. . This is Mele’s preferred solution to the twisted cases; see SD Unmasked, pp. ff.




difficulties of the intentional view, the intentionalist grasps something important, namely that SD involves a doing of the subject and not simply a mental event. The view of SD as a mental event is actually responsible for giving rise both to the specificity and to the selectivity issue. If SD is viewed as caused by a motivational state triggering normal cognitive biasing, ending up in a false belief, it may be a common and indistinguishable case of cognition interfered with by interests, expectations, and favored results, where the evidence happens to run contrary to the motivation. If instead the triggering is the causal response to contrary evidence, then why is SD not the general shortcut to minimize discomfort whenever evidence runs against our wishes? In order to find a solution to both issues, SD should not be seen simply as a mental event in which the subject is a passive victim. Actually, it is not the case that the supporters of the causal account deny that the subject also acts in the process of SD belief-formation. Yet they stress that the subject’s doing (a) is not aimed at deceiving oneself; (b) hence it is peripheral and contingent on the product of SD; and (c) lastly, the causal nature of the process disposes of the seeming purposefulness of the deceptive belief, as amply illustrated by twisted cases of SD. In other words, they discount any role of the intentional doing in bringing about SD, for in the causal model intentionality is irrelevant to the deceptive outcome. My view is that the first claim is uncontroversially right, but the other two are wrong and contribute to the one-sidedness of the causal model.
If we want to account for both the specificity and the selectivity of SD – which, it must be stressed, do not represent obstacles for the intentional model – and if we want as well to preserve the nonmysterious and nonpuzzling explanation of the causal model, then we need a different account capable of retaining the positive aspects of both and disposing of the difficulties of each; and it must be a specific account, not simply a middle ground between the two or a loosening of one of them. In what follows, I argue that the model of the invisible hand is the ideal candidate.



The Invisible Hand Model

And so he was able to trace, in these faults which she found in him, a proof at least of her interest, perhaps even of her love; and in fact, she gave him so little, now of the last, that he was obliged to regard as proofs of her interest in him the various things which, every now and then, she forbade him to do.

Marcel Proust, Swann in Love




.. The argument for my alternative model of SD starts by considering the propositional nature of the SD doxastic process, which is discounted by a purely causal story. Recent work by C. Michel and A. Newen, backed by the experiments of D. Wentura and W. Greve, focuses on this feature as crucial for defining SD. The experiment involves a test ostensibly meant to measure how knowledgeable the participants are about history, but actually aimed at analyzing their reactions to failure. The findings unambiguously show that participants adapt trait-definitions for self-immunization purposes. Subjects who ex ante had thought of themselves as cultivated and specifically expert in history, and who had failed the history test in the context of the experiment, found ways to preserve their self-image intact. They immediately processed the negative result by adapting the “criterial evidence” required to define someone “cultivated” so as to escape the implication of not being really expert. Either by redefining the previous belief that “knowledge of history is a necessary component of a cultivated person,” or by discounting the value of the test for real historical knowledge, subjects managed to defend the belief that they were cultivated despite the test’s contrary evidence. That such stories were self-deceptive is proved also by the fact that the participants, who were tested as normally rational and evidence-sensitive in general, applied standards of evaluation and reasoning to themselves different from those they applied in general and specifically to other people. This experiment shows two important features of SD, which run contrary to the purely causal view: (a) It shows that the product of SD is more than the false belief that P; it is actually centered on the false belief that P, but the latter comes with a whole host of arguments, reasons, and evidence in its support.
The SD reasoning is definitely faulty: it is precisely aimed at justifying the denial of the inference from the negative evidence, which would falsify the belief that P; nevertheless, it does not proceed randomly or



 

Actually, in “When Are We Self-Deceived?” (Humana Mente, , : –), Mele acknowledges the propositional nature of self-deceptive beliefs and belief-formation, and even adds this condition to the list of sufficient conditions for SD. But this switch makes it hard to understand the co-presence of the propositional nature and pure causality in belief-formation. C. Michel and A. Newen, “Self Deception as Pseudo-Rational Regulation of Belief.” D. Wentura and W. Greve, “Who Wants to Be Erudite? Everyone! Evidence for Automatic Adaptation of Trait Definitions,” and Id., “Evidence for Self-Defensive Processes by Using a Sentence Priming Task,” Self and Identity, , : –.




incoherently. The thinking mode is biased, no doubt, yet it responds to common constraints for reasoning in so far as it provides a justification both for discounting the criterial evidence and for retaining the false belief. In other words, those self-deceptive stories do not look like the causal effect of biases operating behind the subjects’ back, but instead like the result of an intentional effort aimed not at deceiving oneself, but at finding a way out of self-embarrassment. In addition, in the context of the experiment, it emerges that while true beliefs often require little argument (“I am a cultivated person, that’s why I scored very high in the test”), self-deceptive ones needs much more complex arguments and a certain sophistication at finding fault with the criterial evidence. (b) Moreover, the experiment shows that the explicit knowledge of the negative result does not compel the self-deceiver to hold two contradictory beliefs. In this experiment, the subjects knew – not just perceived or suspected – the result of the test, which provided the criterial evidence from which to infer either the belief “I am competent” or “I am ignorant.” The participants who failed the text precisely blocked the inference from the belief “I failed the test” to the belief “I am ignorant.” In this case, the negative evidence is actually encoded in the belief that Q (I failed the test), but this belief does not directly contradict P (“I am cultivated”), for ~P is only implicated by Q, and the implication is blocked. This experiment provides a good illustration that SD can take place by blocking the representation of the negative evidence short of the belief that ~P. In this way, the self-deceptive belief need not contradict the true ~P, for the negative evidence is blocked before the belief that ~P is formed. 
Cases like the history test experiment are widespread in political reality: when a policy ends in failure, decision makers know that the policy failed but tend to readapt the criterial evidence and escape the conclusion that “I am responsible for the failure.” Excuses vary: the intelligence was incorrect, some officers betrayed the decision maker’s trust, the failure was only apparent. Once all excuses have been exhausted, the decision maker may appeal to his good intentions to keep up his cherished image of





Michel and Newen conclude that SD displays dual rationality and that what constitutes self-deceptive reports is a quasi-rationality working in an automatic, prereflexive, hence nontransparent, mode for the subject. Another experiment in motivated reasoning has confirmed that when participants want to get a certain result which is not delivered by the simple default strategy of reasoning, they become more sophisticated in their reasoning, which is twisted at certain junctures to get to the desired result. See Mata, Ferreira, and Sherman, “Flexibility in Motivated Reasoning,” –.




good leader, and to reject moral responsibility for the harm done. This was precisely what Tony Blair did after it became clear that the rationale for the Iraq invasion was untrue and the harmful consequences were undeniable. From this experiment, we gather that subjects definitely do not intend to fool themselves, but a careful consideration of the self-deceptive product – comprising both the belief that P and its surrounding arguments – seems to imply that there is a lot that subjects do, and do knowingly, and up to a point openly and legitimately, which guides their reasoning towards the false belief that P. If we are to make sense of the intentional doing displayed in the unfolding of SD, we need to think neither that the false belief was intended nor that the lying agent was an unconscious mind inaccessible to the conscious ego. Nor should we discount the intentional doing. In my view, the best solution must account for: (a) the intentional steps; (b) the causal biasing; and (c) the unintentional deceptive belief that results from the process. The model of the invisible hand is precisely meant to compose all three elements in a single explanation. The invisible hand explanation of SD does away with the paradoxical idea of lying to oneself, yet it accounts for the agency dimension in the production of SD without recourse to a deceptive plan, which would not sit comfortably with the impossibility of ascription in the present tense. In general, the invisible hand model provides a neat account of the coexistence of intentional actions and unintentional outcomes, for it explains a certain beneficial effect, such as the institution of money, as the result of human action but not of human design.
The outcome is unintended, but it results from the composition of intentional actions directed elsewhere, plus some filtering mechanism. This model fits the production of self-deceptive beliefs: the agent does not aim at deceiving herself, but without her doing, albeit directed elsewhere, there would be no SD. In this respect, SD implies agency, admittedly a confused and emotionally impaired agency, but not simply a passive victim. As previously stated, the supporters of the causal explanation do not deny that the subject may act in the course of the formation of the self-deceptive belief, but the intentional doing of the subject ultimately plays no explanatory role, since the result of SD – the false belief that P – is exhaustively

M. Forrester, “SD and Valuing Truth,” American Philosophical Quarterly, , : –. A detailed discussion of invisible hand explanations is found in R. Nozick, “On Austrian Methodology,” Synthese, , , pp. –.




explained by the causal biasing of the data. I argue instead that the intentional doing plays a causal role in producing the outcome, albeit an indirect and nonexhaustive one. Moreover, the invisible hand model also explains the apparent intentionality of the SD product. The latter, although it is not a goal of the self-deceiver, is not simply a contingent outcome. When the agent’s biased reasoning reaches the favored conclusion, the agent stops her search, feeling relieved and satisfied (at least for the time being) in the false belief that P. Hence, it is not contingent that the false belief is favorable: until a favorable explanation of the contrary evidence is found, the subject goes on with her biased search. Arriving at the false belief that P is the effect of biasing, but stopping at P is the intentional doing of the subject. This explanation seems to find an obstacle in the cases of twisted SD, where the subject comes up with the adverse belief. I shall show that twisted cases also serve a goal of the subject, though one definitely not brought about with that intention. Finally, the invisible hand model can also capture the specificity and selectivity of SD, which are lost in a purely causal deflationary account. .. In the SD process, what the subject does is a response to the appraisal of some evidence ~E contrary to the belief that P. In order to preserve the specificity of SD, the consideration of the appraisal of negative evidence is crucial, for it is precisely what activates the motivational state, arousing fear and anxiety about the truth of P; in turn, the motivational state, that is, the emotionally laden wish that P, sets in motion the process of thinking over P, where biases easily strike, thus ending up (and not contingently) in the false belief that P.
In other words, not only must Mele’s condition () be internal, but it must also be the priming that starts the reasoning process which, in turn, is affected by biases under the influence of the wish that P. Moreover, the deceptive belief that P must correspond to the wish that P, in order to set SD apart from stubborn beliefs, whose content is independent of the content of the wish. In order to keep the specificity of SD within the realm of motivated irrationality, the motivational state setting the reasoning in motion should be (a) content-specific and (b) aroused by the contrary evidence ~E. There is no risk that some kind of encoding of the negative evidence might bring back the two-belief view and the related static paradox. There are various levels of stimulus exposure, which correspond to different steps in the representation of the stimulus. SD can occur at each level of stimulus representation after the first, by blocking the corresponding




stimulus exposure. The process is blocked if “the lower level analysis detects content which is predictive of the unwelcomeness of further processing.” Depending on the level of blocking, different self-deceptive beliefs obtain. The role of the contrary evidence explains not only the arousal of the motivational state and the subsequent biased thinking, but also the relatively complex reasoning employed to do away with ~E and support P instead. After the appraisal of ~E, what the subject does may be done in a preattentive mode, and may not require full awareness, as in many mental processes, but it is her doing, and can be unproblematically self-ascribed. The wish that P is true is legitimately there, can be acknowledged by S in the present tense, and need not be a causal trigger, but simply the origin of the self-deceptive process. In order to be unintentionally distorted by biases, our cognitive processes must be normally activated. Having the wish that P and having met with ~E, however, is not sufficient to set SD in motion, as the example of Gideon has shown. Before the process starts, the subject actually believes P to be true. Gideon’s parents believe their son loyal, and Joan her husband faithful. The belief that P is emotionally turned on by the encounter with the negative evidence. It is consequently transformed into the anxious wish to go on believing that P (because P is true). Usually, one does not think about one’s health when one is well, or about one’s spouse’s fidelity if the relationship goes smoothly. The desires to be in good health and to have a happy marriage represent priorities for most people, but before the threatening evidence is perceived, the subject takes them for granted.
The perception of a threat to the belief that P is therefore a necessary condition for transforming P from an important but taken-for-granted belief into an emotionally overloaded wish, precisely because the very possibility of believing that P is true is undermined and ~E has aroused anxiety and fear concerning P. Many scholars of SD have taken this condition as equivalent to “S believes that ~P,” but this is not the case: what S knows, and often only in a pre-attentive mode, is the negative evidence ~E, while feeling the related emotional overload. It is important to remark that the negative

A. Greenwald, “Self-Knowledge and SD,” in Lockard and Paulhus, eds., SD: An Adaptive Mechanism (pp. –), p. . See C. Michel and A. Newen, “Self Deception as Pseudo-Rational Regulation of Belief”; P. Pedrini, L’autoinganno: Che cos’è e come funziona (SD: What It Is and How It Works), Laterza, Bari , pp. –.




data, although strongly favoring ~P, are nevertheless inconclusive. The typical self-deceiver has available evidence sufficient for a hypothetical rational observer to think that ~P is reasonably, but not conclusively, the case. Hence, at the start of the SD process, the subject not only does not form the belief that ~P, but also exploits precisely the little latitude left open by ~E to think the matter over, thereafter setting the biasing in motion and eventually coming to hold the self-deceptive belief that P. Exploiting the interstitial room left by ~E, and exploring the possibility that, after all, P may still turn out to be the case, is not irrational per se, though it is definitely motivated and anxiously undertaken. Yet this is not to say that the subject has (possibly nonconscious) reason to engage in SD, as some authors hold. Generally speaking, when confronting negative data, what agents would have most reason to do is to counteract the threat so as to try to bring about the desired state of the world. But, actually, this is often precluded in SD circumstances, or it is too costly. The agent can in fact take some action concerning the negative data (she can kill the husband), but what, generally speaking, she cannot do is bring about the status quo ante, in which the husband was simply faithful. Even if in some cases she can counteract ~E, the agent, rightly or wrongly, perceives either that the situation is beyond repair or that the costs of action are too high, so that taking action is not an option for her. What she can do instead of acting is to ruminate and think the matter over, pondering whether, after all, P may still be true. Such reflection is intentional and need not even be unconscious. Yet this does not mean that S intends either to deceive herself or, for that matter, to trade the desire that P be the case for the desire to believe that P independently of its being true. What the subject wants is to go on believing that P, because P is true.
The understanding of SD as intentionally entered into, under an emotionally loaded desire, must be distinguished from the conception of SD as an intentional strategy of a subject wishing to believe P, no matter what. Let me summarize the steps conducive to SD, which I take to be the circumstances or contextual conditions that make the SD process very likely:
1. P fulfills a crucial desire-element for S’s well-being and prospects;
2. some negative evidence ~E potentially threatening the truth of P becomes known to S;

On this point, see, for example, Béla Szabados, “The Self, Its Passions and SD,” in M. W. Martin, ed., SD and Self-Understanding.




3. the knowledge of ~E makes P the object of S’s emotionally loaded wish that P;
4. S is not, or perceives not to be, in a position to undo the threat;
5. S starts thinking the matter over to see whether P can still be true.

In this way, SD is specific, for the biasing ending up in the false belief that P is produced not just by a motivational state that happens to run against the available evidence, but rather by the perception of ~E, which transforms the belief that P into the emotionally overloaded wish that P. SD is thus set apart both from (a) mere wishful believing gone badly and from (b) stubborn beliefs, for the operative motivation is the wish that P, thematically corresponding to the SD outcome. Moreover, SD operates selectively, for it is not a general causal mechanism countering any unwelcome data: it is activated only under specified circumstances, namely, when the negative evidence threatens a belief important for S’s well-being which S cannot counteract (or believes he cannot, or discounts that he can), or finds too costly to counteract. Only under those circumstances is the likely response of S thinking rather than acting, and it is a thinking entered into intentionally but, because of the emotional overload, easy prey to biasing. We shall see that the circumstance of the negative evidence being available and appraised at the time of the decision is crucial for detecting SD in politics and setting it apart from wishful thinking and from the whole array of myths, unwarranted beliefs, and unexamined convictions. Given the usual uncertainty of political reality and the difficulty of a clear review of the data, we can impute SD only when the amount of negative evidence available to the decision maker is sufficient to make any rational cognizer conclude that the policy is based on false premises and doomed from the start. This will circumscribe the role of SD, but also make it more distinctive than mere illusions or false hopes. In order to understand how motivation switches on the SD process, the motivational state must be unpacked, and the first issue to clarify is the nature of the operative wish driving the reasoning towards SD.
I contend that such a wish is that P be true and, only derivatively, that it be believed true. This point is indeed controversial. Some students of SD, notably supporters of the intentional-nontransparent view, argue that the crucial wish driving SD is rather the wish to believe that P. This view seems

This description applies to SD both as retaining and as forming the false belief that P. This view is shared by Talbott, “Intentional SD in a Single, Coherent Self,” J. L. Bermudez, “SD, Intentional and Contradictory Beliefs,” and D. Nelkin, “SD, Motivation and the Desire to Believe.”




supported by the fact that, when the process starts, the subject wants to go on believing that P as before, yet she does not want her belief to be false, for that would be puzzling indeed. Any example of political SD easily shows the wish-to-believe view to be unsound. Politicians fool themselves about states of the world, while they are unconcerned about their internal doxastic states. The wish driving the Bush administration into the false belief in the existence of WMD was not a wish to believe in the existence of WMD in Iraq, but a wish that WMD exist in Iraq. In fact, the wish-to-believe view would imply that there has been an unconscious trade-off between the wish that P be the case and the wish to believe P no matter what. The political example makes crystal clear that such a trade would not make sense even considering the faulty reasoning of self-deceivers. Yet there is no need to posit such an unconscious exchange, for the process can well be driven by the conscious wish that P be the case, without unnecessarily obscure trades. While it is true that the self-deceiver ends up with precisely a belief devoid of its truth-value, that was neither the object of her desire nor her unconscious aim. The supposed exchange between states of the world and beliefs, which, if transparent, would make the intentional pursuit of SD a “crazy choice” indeed, is yet again a consequence of conceiving the intentionality of the process as one with the intentional design of SD. But the invisible hand model disposes of such confusion. It is thus plausible to assume that the operative wish is that P be true, given that this wish can be openly acknowledged by S, while the wish to believe P no matter what, in order to make sense at all, must be unconscious. In sum, the emotionally loaded wish that P is the origin of the process which eventually and unintentionally brings about or reconfirms the belief that P devoid of its truth-value.
How does the wish that P induce the twisting of thoughts into the false belief that P? Three options have been advanced: (a) the wish works exactly like any other desire (short of the confusion between reality and beliefs), providing reasons for action to the subject, who then devises an intentional strategy aimed at securing the goal of believing that P; (b) the wish to believe that P figures in the subject's preference ranking as second best after the wish that P be true; and, given the unavailability of the latter,

 

Neil Van Leeuwen ("The Product of SD") thinks that in a taxonomy of the various forms of SD there is one, called "willful SD," in which the target of the wish is the belief. By contrast, an argument leading to my conclusion is advanced by P. Pedrini, "What Does the Self-Deceiver Want?" Humana Mente, pp. –. It is A. Lazar who remarks that SD under the intentional model would correspond to a crazy choice. S. Gardner, Irrationality and the Philosophy of Psychoanalysis.

Investigating Self-Deception



the subject proceeds to an intentional but nontransparent biasing in order to secure this second best; and (c) the wish causally triggers the biasing, ending up with the belief that P. None of these seems to me correct. First, the wish cannot work like a normal strong desire providing reasons for action aimed at states of the world, precisely because changing the state of the world is beyond the agent's options. Second, the puzzling idea of the trade-off between the wish that P and the wish to believe that P as the reason for an intentional plan to secure the false belief that P is unnecessary: the false belief that P, the outcome of SD, can well be accounted for as unintended. Third, in order to avoid the puzzle of the intentional yet unconscious trade-off between the two wishes, it is likewise unnecessary to think of the wish as a mere causal trigger for biasing. In the process of reflection, the wish intervenes when interpretative choices are to be made, much in the same way as a theoretical hypothesis intervenes in scientific research, orienting the analysis in a certain direction, raising certain questions and discarding others, searching to the left and not to the right. In principle this intervention is intentional, even though mostly unaware, and up to a point even legitimate. As we have seen, in daily reasoning subjects tend to be guided less by epistemic norms than by heuristics that work pragmatically quite well. In the wish's influence on cognition there is neither a self-deceptive intent nor, necessarily, a self-deceptive event yet. The wish works as a pretheoretical and extraepistemic pragmatic selector; and the fact that in this case the selector is "motivated" is not a distinctive element either, given that the intuitions orienting scientific research are very often motivated as well. Yet, in the production of the self-deceptive belief, the wish has to be emotionally overloaded.
The emotional characterization is in turn explained by the second feature of the SD wish, namely its crucial connection to S's well-being and identity, in conjunction with the circumstance of negative evidence. If the wish is not so important to the subject, negative evidence is unlikely to arouse emotions of fear and anxiety. But if there is no negative evidence, the crucial wish lies dormant or may lead to wishful believing. In sum, the motivational state is aroused by the negative evidence and is composed of the wish that P be the case, which is important to the subject's well-being, and of the emotions aroused by the threat concerning the truth of P. In wishful thinking, S's wish, awoken

 Talbott, “Intentional SD in a Single Coherent Self.” Mele, SD Unmasked. M. Sultana, SD and Akrasia, Analecta Gregoriana, Editrice Pontificia Gregoriana .




by the right opportunity, directly induces the wishful belief simply by avoiding an accurate consideration of the odds. Think, for example, of someone who, having bought a lottery ticket, becomes confident of winning the prize. Moving to politics, think of the Bush administration's confidence that the invasion of Iraq would be a swift success, indeed a matter of only a few days: this illusion was not prompted by negative evidence but, rather, directly induced by their wish, independently of any consideration of the evidence. In SD, S is not simply motivated to believe that P, but is motivated to believe that P under the threat of contrary evidence, which makes his wish that P emotionally loaded with fear and anxiety. In wishful thinking, by contrast, it is sufficient to have a wish that P and an opportunity to believe that P. In SD, the motivational state is brought about by ~E, which threatens the pre-existing crucial belief that P; and it is composed of (a) the wish to go on believing that P, based on P being true; (b) the fear about P; and (c) the anxiety to reconfirm that P is true. Moreover, the process of reconfirming the belief that P in SD is more complex and sophisticated, for the negative evidence must be neutralized by arguments and reasoning. It is the emotional overload that, plausibly in a causal way, manipulates attention and relaxes the usual standards of reasoning; it makes the subject block and discard certain data, and focus on and stress others, so as to arrive at an explanation which reconciles ~E and P, despite being clearly faulty and below the subject's usual standards in other cases. This reconstruction fits the cases of SD most commonly discussed in philosophy, for example, marital infidelity, fatal diseases, children's well-being, and threats to self-image.
In the social and political realm, however, the reconstruction of SD is much more complex, because the context from which SD stems is complicated and includes wrong assumptions, mixed intentions, ideology, and blurred data. For SD to be detected amid such complexity, in any case, the following elements should be present: motivation, for example, the conviction that a policy is both necessary and feasible and the desire to carry it out successfully; the anxiety and fear aroused by negative evidence about its feasibility; and a search aimed at reconfirming the feasibility of the policy rather than at diagnosing the negative data. As an illustration of this real-world complexity, I shall mention the example of the alleged uranium deal between Iraq and Niger, also known as the "yellowcake" affair. In the process of

See Chapter .




selling the Iraq invasion to the American people and to the world, a piece of intelligence about yellowcake uranium sold by Niger to Iraq reached the Bush cabinet. CIA officers were doubtful about the authenticity of the information and cautious about its possible use. Richard Cheney, who was definitely very keen to find support for the WMD thesis, nevertheless ordered an investigation. To this end, retired ambassador Joseph Wilson was sent to Niger to corroborate the intelligence. Wilson came back convinced that the yellowcake was a fabrication; but, given the nondiagnostic nature of the investigation, his view was ignored, no report was requested, and eventually Wilson was discredited and his wife, an undercover CIA agent, was exposed. Thus motivation, negative evidence, and a biased search for confirmation jointly led to the false confirmation that WMD were present in Iraq. Let us now turn to explaining in detail how SD works selectively. As noted earlier, the steps conducive to SD imply that SD is entered by the subject only under specified circumstances, and not whenever reality frustrates one of his wishes. Recall Mele's example of Gideon, the CIA agent accused of treason. His parents continue to believe that Gideon is innocent, despite contrary data, while his colleagues do not. The difference lies not only in the crucial importance of the belief in their son's innocence for the parents' own well-being but also in their emotional arousal and in the circumstance that they are powerless to counter the threat represented by the negative evidence. Fulfillment of their wish for their son's innocence can neither be pursued through a plan of action nor helped by accurate background knowledge, for action is precisely foreclosed by the circumstances. The fact that the parents' wish is beyond their control has the effect of sinking the costs of inaccuracy in acquiring and processing data relevant to the truth of P.
The sinking of the costs of inaccuracy, together with the anxious wish for their son's innocence, is precisely what triggers the process toward SD. The different responses of the CIA staff and of the parents depend on the different costs of inaccuracy in processing data in either case. Notwithstanding their desire for Gideon's innocence, his colleagues have different priorities and a strong incentive to uncover the truth of the matter, for if Gideon is a traitor, then their work is jeopardized and the whole unit is in danger. For the staff, a properly diagnostic attitude will mend the problem by uncovering a mole, while Gideon's parents have nothing to gain from discovering that their son is a traitor.




This point is finely accounted for by Robert Jervis in a well-known study in political psychology. He argues that the level of accuracy in gathering and processing information correlates with real-life incentives and prices for one's welfare. Incentives, costs, and expectations are the reasons for either accuracy or sloppiness: the higher the cost of inaccuracy and the more important the connected expectations, the more likely is the adoption of a vigilant attitude toward gathering, processing, and reviewing information, and vice versa. The cost of inaccuracy for Gideon's staff is to miss a traitor, and this leads them to a vigilant attitude. For his parents, by contrast, being vigilant cannot change Gideon's behavior, which is what they care about most. For them, there is nothing to lose in being inaccurate. In a similar fashion, Mele resorts to the notion of the focal error to be avoided, which provides an explanation of selective vigilance close to Jervis's. When conduct guided by rational beliefs cannot bring about the desired result, the focal error to avoid is not a dangerous outcome, like the car crash that it is rational to expect in case one's belief about the car's noise is wishfully incorrect. When subjects are powerless concerning the threat to their well-being – or perceive themselves to be powerless – the focal error is instead exposing oneself to a painful truth unnecessarily, for changing the state of the world is beyond one's reach. Compared to Mele's explanation, Jervis adds a further noncognitive component to the understanding of selective vigilance and its lack, namely the level of anxiety and stress concerning evidence processing. Low and high anxiety would typically induce less accuracy than a medium level of stress.
But while low anxiety often leads the agent to rely on routines and traditional patterns of processing, high anxiety and stress tend to engender "defensive avoidance," that is, a blocking of the negative information and reliance on a false soothing belief, which is precisely what SD is all about. Combining these two considerations, about the sinking of the costs of inaccuracy and high anxiety, we have an explanation of why Gideon's parents do not have a diagnostic attitude aimed at settling the truth of the matter but, rather, explain away or discount the threatening evidence, defending their belief in Gideon's innocence by all means. The staff, by contrast, is not under high anxiety about Gideon's innocence, and can counter the threat implicit in the negative evidence, namely that Gideon is a mole, by adopting a selectively vigilant attitude in order to avoid the focal error of

R. Jervis, Perception and Misperception in International Politics, Princeton University Press, Princeton .




believing Gideon innocent in case he were betraying them all. The focal error is thus fixed by three features of S's motivational set, namely, (a) the appraisal of the negative evidence threatening the truth that P; (b) the importance of the wish that P for one's well-being; and (c) the costs of inaccuracy relative to the truth that P. Under those circumstances, for his parents the focal error of unnecessarily believing Gideon guilty disproportionately heightens the threshold of evidence required for disbelieving that he is innocent. The lack of selective vigilance in gathering and processing information is permitted by the sinking of the costs of inaccuracy and the corresponding heightening of the emotional stakes in disbelieving what the subject most anxiously wishes. It is not a coincidence that typical examples of SD have to do with what I would call "mortal questions," that is, matters which bear a fundamental and constitutive relationship to the self and are at the same time beyond the subject's control: marital infidelity, fatal illness, children's addiction or criminal behavior. The first circumstance, involving fundamental desires, is responsible for arousing high anxiety, while the second makes the costs of inaccuracy sink. The subject, perceiving the drama of the situation, relaxes her usual standards of data processing and lets the emotional overload activate the biasing. Other SD cases look less tragic; as in the history-test experiment, we often redescribe unwelcome truths in order to realign negative facts about ourselves – failures and misconduct of various kinds – with the positive self-image that we cherish.
In the reduction of the cognitive dissonance between negative evidence and our cherished self-image, the costs of inaccuracy are low: the failures have already taken place, and a diagnostic self-reflection would only make people feel depressed, guilty, and powerless, while a deceptive positive image can foster a more energetic or adaptive response. Finally, SD can take place when the costs of inaccuracy may be significant in general but either are not



The expression comes from Thomas Nagel's Mortal Questions (Cambridge University Press, Cambridge ). However, I would stress that the momentous nature of such questions derives from the relationship the subject sees between them and herself more than from the essential features of certain problems. Although most examples of SD are indeed of such a momentous nature, not every scholar shares the view that SD is involved with mortal questions. See, for example, A. O. Rorty, "User-Friendly SD," in R. T. Ames and W. Dissanayake, eds., Self and Deception, pp. –, in which she puts forward a naturalistic explanation of SD as a functional device for coping with complex natural and social environments. That the desire originating SD must be "anxious" is stated by Pears, Motivated Irrationality, and Johnston, "SD and the Nature of the Mind"; it is denied by Mele, SD Unmasked, and discussed by Michel and Newen in "Self-Deception as a Pseudo Rational Regulation of Belief," where they conclude that it is not necessary.




perceived as such, or are discounted, or will not affect the agent. Hence, from the agent's subjective point of view, the costs of inaccuracy are likewise negligible. For example, in political decision making, when the agenda is dictated by an international crisis, the collective search for a solution contributes to a diminished sense of responsibility. The diffusion of responsibility and the emotional pressure for a solution fulfilling the leader's preference jointly contribute to discounting the costs of inaccuracy, not only in team members but also in the president, despite his high stakes in the decision, for he may wrongly take the collective consensus as a measure of the truth of the matter. This explanation of selectivity implies that an agent is active in the process, though confused and anxious. Only an agent appraises the threat and can sense, perceive, or suspect the costs of inaccuracy relative to the focal error. In the causal model, it is difficult to grasp when and how considerations of the costs of accuracy or inaccuracy relative to the focal error to be avoided can find room in the subject's doxastic processes, given that the biasing is triggered causally by motivation. In conclusion, my proposed solution to the specificity and selectivity issues implies that: (a) SD is performed by an agent, (b) who reacts emotionally to negative evidence with reference to her wish that P, and (c) does so when inaccuracy costs are low or discounted. Consequently, her cognitive standards are lowered, her reasoning is faulty, and she unintentionally reconfirms her belief that P.

So far it would seem that my invisible hand account crucially depends on the product of SD being a belief corresponding to the wish that P, a belief that can benefit the subject, in the short term at least, by relaxing her anxieties. How, then, can twisted cases be fitted into this account?
Generally speaking, invisible hand explanations have been used in social theory also to account for perverse effects in the composition of social actions oriented elsewhere. In this respect, what qualifies invisible hand explanations is the fact that the outcome, no matter whether favorable or adverse, is (a) unintended and (b) produced by intentional actions plus the filtering or composing mechanism, (c) so that, though caused, the result is intelligible. However, I contend that even the outcome of twisted SD serves a purpose of the subject, insofar as it relieves the anxiety of uncertainty and provides (incorrect) grounds for action.

See I. Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascoes, Houghton Mifflin, Boston . For a more detailed illustration of this phenomenon, see ch. §




Mele cites the example of a jealous husband who, suffering much from his jealousy and in the absence of critical evidence, comes nonetheless to believe his wife unfaithful. Such a belief is hardly soothing and runs contrary to his desire for a faithful spouse: how, then, can one imagine such an adverse and false belief being the result of the subject's intentional doing? Mele's causal account holds that in such cases the strong emotion of jealousy causes the desire for a faithful spouse to be trumped by the operative desire not to be fooled, which in turn triggers the biased processing of data, ending up in the false twisted belief. Dana Nelkin has criticized Mele's view of twisted cases because, on his account, the operative desire turns out to be content-unrestricted. In the jealous husband example, the original desire was P (faithful wife), but P is then overwhelmed by emotions that make another desire operative, Q (not being fooled), which produces the biasing, ending up in the false belief ~P (wife unfaithful). As a result, there is no match between the operative desire Q and the deceptive belief ~P. It is thus unclear whether twisted cases are cases of SD after all. The criticism is well taken, but Nelkin's response less so: her solution is that the operative desire is the "desire to believe P (wife faithful)" and, in twisted cases, "the desire to believe ~P (wife unfaithful)." As argued earlier, I reject the idea that the operative desire is "to believe that P" no matter what. In this case, the desire to believe ~P makes even less sense: why should anyone, starting with the wish that P, twist P into the desire to believe its opposite? Nelkin's proposal may resolve the issue of content-unrestrictedness, but it renders twisted SD utterly unintelligible. Yet there is a simpler and more straightforward solution: the twisting of the operative desire that P into the belief ~P is due to a special strategy of thinking affected by a special bias.
In addition to content-unrestrictedness, Mele's view exhibits other problems. His unitary model, despite its claim to simplicity, implies two unjustified shifts in twisted cases compared to straight ones: (a) the emotional component takes the lead and causes the biasing that ends up in the false adverse belief; and (b) the operative desire shifts from P (faithful partner) to Q (not being fooled). Both shifts are unexplained, and taken together they explain too much. If one holds that there is a shift in the operative desire from P to Q, then there is no need to suppose that the causal job is done by emotions such as fear or jealousy, for the desire that

If desires are content-unrestricted, then we may be confronting a case of stubborn rather than self-deceptive beliefs, for the distinctiveness of SD vis-à-vis stubborn beliefs lies precisely in the thematically specific content of the motivating desire. See Lynch, "SD and Stubborn Beliefs."




Q is acting precisely like the desire that P in straight cases – minus the issue of the content-unrestrictedness of the operative desire. If instead the causal job is done by fear and jealousy, there is no need to shift the operative desire. There is, however, a need to explain why S and S¹, say Swann and Othello, similarly suspecting that their love is under threat, become prey to biases, triggered by the desire that the beloved be faithful and by the fear that she is not, and yet end up holding opposite, and equally false, beliefs. Swann deceives himself into believing that Odette is faithful, while Othello wrongly becomes convinced that Desdemona is guilty. The difference cannot lie in the fact that suspicion is warranted in Swann's case and unwarranted in Othello's, for that looks like an external condition, apparently independent of the subject's motivational and epistemic setting and extrinsic to his modes of reasoning. Here is my alternative proposal. I hold that the necessary conditions for SD to start are the same in both SD types, that is, for the cuckold as well as for the jealous husband. Common to both is the perception – or misperception – of negative evidence concerning the crucial belief that P. In both cases there follows the emotional arousal that makes P the focus of the anxious wish that P, and likewise common is the unavailability of a course of action for relieving their anxiety by bringing about P. In other words, for both husbands the cost of inaccuracy in acquiring and processing data relevant to the knowledge that P is low or irrelevant. Given these conditions, both husbands, anxiously suspecting that P is under threat, start thinking the matter over, and their motivated reasoning becomes affected by cognitive biases.
The process of SD formation starts intentionally, as a response to the appraisal of contrary evidence; then the thinking is causally manipulated by biases, ending up in the self-deceptive belief that P for the cuckold husband and that ~P for the jealous one. In both cases the operative desire matches the self-deceptive belief, positively in the straight case and negatively in the twisted case. There is no need to shift the operative desire from P to Q (Mele) in order to explain the biased data processing that ends up in ~P. Similarly, there is no need to suppose that the operative desire is the desire to believe that P or ~P just to avoid the problem of content-unrestrictedness (Nelkin); finally, there is no need to posit a shift from desire to emotion as the trigger of the biases, for in either case the desire that P is emotionally overloaded by anxiety, fear, and jealousy. How, then, to explain the opposite outcomes? On my view, the two husbands, while thinking the matter over, activate two different reasoning strategies, in turn affected by different kinds of biases leading to opposite results. The cuckold husband tries to confirm P directly, by discounting ~E, or the




implication from ~E, and by selectively searching for positive evidence in favor of P. When the favorable belief that P is produced by his confirmation strategy, the cuckold stops his search and his anxiety is relieved, at least for the time being; as said, P is favorable not just accidentally, for only when the desired belief is confirmed will the cuckold husband stop his search and reasoning. The jealous husband is instead pushed by his anxiety to imagine what he most fears, in order to prepare himself for the worst possible case. Whether because of his character or of special circumstances, the subject conjures up a worst-case scenario in which ~P is the case. In principle, such a strategy is not faulty: it is justified by the logic of insurance, that is, by a strongly risk-averse approach of the kind: "let us assume the worst, and work out a maximin response to such a scenario in terms of attitudes, beliefs, and actions." The problem with this strategy, however, is that worst-case scenarios easily become affected by a specific bias – probability neglect – which twists the reasoning in a faulty way. By definition, worst-case scenarios are improbable events, representing only one extreme of a whole range of possibilities; yet it seems that once the scenario has been constructed, subjects tend to forget the hypothetical nature of the picture. Once probability neglect affects a worst-case scenario, only the certainty of its opposite can convince the subject that P is the case. Yet certainty is not easily available to epistemic agents like us. Twisted SD thus displays the same intentionality of process, leading, through the causal biasing, to the nonintentionality of the outcome. I would add that, much as the confirming strategy is the common mode of lay hypothesis testing, and is pragmatically reasonable (although epistemically questionable) until it is twisted against ~E, similar considerations can be made for the counterfactual mode of reasoning implied by the worst-case scenario.
This mode may be pragmatically useful in many respects, and it becomes biased only when coupled with probability neglect. In both types of SD, normal modes of reasoning are distorted by the anxious fear concerning P, leading either to defensive avoidance of the negative evidence, ending up in the false belief that P, or to disregard of the lack of evidence for ~P, producing the opposite and adverse, but still false, belief that ~P. If the subject of straight SD gets a vicarious satisfaction of his desire that P in the false corresponding belief, twisted SD certainly cannot serve the practical goal of dispelling the subject's worries, for the adverse belief feeds those worries instead. Yet, under conditions of high anxiety some agents may display a preference for certainty over uncertainty, even regarding unwelcome truths. Becoming convinced of the worst will make them miserable, but it will also relieve




them from the pain of uncertainty, engendering a (deceptive) feeling of greater control. Twisted SD is relatively common in international relations, when a government feels under threat and adopts a precautionary attitude, elaborating a worst-case scenario. Given the pressure of the situation, it is quite possible that such reasoning becomes affected by probability neglect and that the worst case comes, deceptively, to be believed true rather than being treated as a quite unlikely supposition. In such cases, the adverse belief serves the purpose of providing a definite exit from the confusion and anxiety of an international crisis; but, the belief being false, it leads to misconceived responses and brings about unjustified harmful consequences.

Concluding Remarks

In conclusion, I have argued that the invisible hand account of SD is capable of providing: (a) a plausible account of SD formation that does not lose its specificity; (b) an explanation of the selectivity issue; and (c) an account of the two SD types. A further advantage of my account is that the apparent purposefulness of the SD outcome is made sense of, despite its being unintentionally brought about. At the end of my reasoning, however, I am not in a position to provide a final list of necessary and sufficient conditions for SD to be the case. That is because, on the one hand, the phenomenology of SD is not only varied but also fuzzy at its boundaries. It is always possible to find examples in this fuzzy area which won't be included in the definition, and it is controversial whether such cases falsify the definition or simply do not belong to SD. On the other hand, no list of conditions is ever sufficient for the social scientist who looks for a falsification method to set apart SD, deception, and mistakes. How can a social scientist distinguish a false belief that seems produced by a motivational state from a straight lie or a contingent mistake? Given these considerations, I shall instead list three minimal necessary conditions that any kind of SD must meet, and that enable the observer to identify SD cases with a high degree of probability, even if lies or mistakes cannot conclusively be ruled out. They are:

a. Contextual condition. S appraises contrary evidence, which threatens the truth of P.
b. Motivational condition. S anxiously wishes that P be true and feels powerless to counter the threat.
c. Cognitive condition. S reasons over P and falls prey to cognitive biases bringing about the counterevidential belief that P.




In the political domain, straightforward lies usually meet the motivational condition, for they are usually meant to serve some of the liar's interests; but lies are not usually prompted by negative evidence, nor are they usually based on faulty reasoning. Hence, as a rule, lies do not fulfill conditions (a) and (c). Political mistakes, by contrast, need not serve any interest nor be produced by any special contextual condition, except purely contingently. Hence, mistakes meet the cognitive condition but, as a rule, neither the contextual nor the motivational condition. When the student of politics meets with a false belief for which a motivation can clearly be imputed and which is also evidently counterevidential, and hence plainly faulty, then SD is very likely the case.

 

The Attribution of Responsibility to Self-Deceivers

The Moral Question and Responsibility Attribution

The connotation of SD as morally dubious has been at the forefront of traditional views of SD, starting with the well-known sermon of Joseph Butler upon Self-Deceit (), and it was stressed by novelists and shared in early philosophical discussions in the s. Under the description of lying to oneself, SD was unsurprisingly considered a moral failure and, moreover, one for which moral responsibility could be ascribed to the self-deceiver without much ado. The moral dimension of SD seems mostly to have faded away in recent studies. Only the issue of responsibility has recently been revived, in connection with the prevalent causal accounts that seemingly make the attribution of responsibility more problematic. Two distinct but intertwined questions can be asked with reference to the relation between SD and morality. The first concerns the assessment of SD as either good or bad; the second concerns whether agents can be held responsible for being self-deceived. The two questions are closely related, but I will not be concerned with the moral appraisal of SD here. I shall simply take the view that being self-deceived negatively affects the agent's well-being in the long run, as well as her self-respect and self-esteem. Hence,

 

J. Butler [], Upon Self-Deceit, in W. E. Gladstone, ed., Works, vol. , Oxford: Clarendon , pp. –. The moral issue of SD is instead raised by works in applied ethics, for obvious reasons. Unfortunately, however, these works usually assume views of SD without proper discussion, and without developing connections between the theoretical analysis and the moral dimension. In this respect, for my present argument, they are of little use. See, for example, A. E. Tenbrunsel and D. M. Messick, "Ethical Fading: The Role of SD in Unethical Behavior," Social Justice Research, (), : –; I. Deweese-Boyd, "Taking Care: SD, Culpability and Control," Theorema, , : –. J. Kirsch, "What's So Great about Reality?," Canadian Journal of Philosophy, , , pp. –, is an example of considering the moral connotation of SD independently of the question of responsibility attribution.



The Attribution of Responsibility to Self-Deceivers



although I do not share the view that SD signals a morally flawed character, I think that there are good reasons, both prudential and moral, to try to avoid being self-deceived. The issue of the attribution of responsibility for SD is, however, wider than the moral consideration of SD, for even if SD were viewed as morally neutral or even beneficial for the self, it might still be the case that the false belief grounded action detrimental to other people. This is especially true in the social and political domain. Think, for example, of a government that becomes self-deceptively convinced that the country is under an imminent nuclear threat from a terrorist group. Whether or not self-deceptive beliefs are morally reprehensible, in this case the false belief grounds the decision for a preventive attack on the country wrongly presumed to harbor the terrorist group, with harmful consequences. In political instances, SD brings about moral wrong. If, however, SD were something for which no proper responsibility could be attributed, then it would turn out to be irrelevant in political analysis. In that case, being self-deceived could actually be conflated with being mistaken. The fact that some mistakes are motivated does not significantly change the consideration that political analysis should give them. Politicians are accountable for their mistakes, yet they are not morally responsible for them. The issue of responsibility is therefore paramount when considering the possibility of categorizing SD as a distinct political category.

The attribution of moral responsibility to self-deceivers is a problematic issue. All accounts of SD acknowledge that the faulty belief-formation process is not under the direct and conscious control of the agent.
And, according to a common and traditional view of responsibility, it is questionable whether agents are properly responsible for actions out of their control. In fact, scholars favoring an intentional, albeit partly unconscious, account usually have no problem attributing responsibility to the agent for his irrationality, although the not-wholly-conscious nature of SD and the emotional pressure under which the agent labors may call for extenuating circumstances in the assignment of blame. But no



The example is not simply a case of philosophical fiction, given that Stephen Holmes, in The Matador's Cape, argues that it is precisely what happened in the aftermath of the 9/11 terrorist attacks: both the government and the American people, as an effect of a self-deceptive process, became convinced of being under an imminent nuclear threat, and that conviction was the grounds for the interventions in Afghanistan and Iraq. See J. M. Fischer and M. Ravizza, Responsibility and Control: A Theory of Moral Responsibility, Cambridge: Cambridge University Press, ; G. Sher, "Out of Control," Ethics, , : –; N. Levy, "Restoring Control: Comments on Sher," Philosophia, , : –.




intentionalist actually doubts that in the SD process there is an agent to whom responsibility can properly be assigned. By contrast, there might be a problem with the causal account, which would appear to free the agent from moral responsibility, for it is not clear that any agent is involved in SD, hence that there is anyone to whom responsibility can attach. This, however, is not the prevalent conclusion of the supporters of the causal account. These supporters try to solve the problem by finding a juncture, before or after the causal triggering of biases, at which the agent can be said to have control over her volitional or epistemic states. According to Mele, for example, agents do not control their cognitive process once the triggering mechanism has set the biasing process in motion, but before that moment they should be capable of exercising control over their desires and emotions, for which any rational and moral agent is reason-responsive and accordingly a proper subject of praise or blame. The self-deceiver can then be held responsible for having let her unscrutinized motivational state take the lead in the cognitive process. Neil Levy, by contrast, denies that moral responsibility can rightly be attributed to self-deceivers if SD is conceived as a mere causal event. In order for an agent to be an appropriate target of responsibility attribution, a control requirement must be satisfied in the first place, namely, a degree of actual or counterfactual control over one's action. But do we have control over our SD? We certainly do not have control over the belief-formation process, but possibly we may gain control over the outcome.
We can imagine that, quite independently of the process of belief-formation, we are under a general duty to scrutinize our beliefs, as proposed by Clifford's "The Ethics of Belief." Taking Clifford's proposal seriously would mean that before subscribing to any belief we should first examine its justification critically and carefully, regarding both the sustaining evidence and the inferential reasoning. Even if we have arrived at the belief that P through a distorted cognitive process, before accepting the belief that P into our belief set we have the epistemic duty to re-examine it by an independent critical revision. Levy, however, acknowledges that such a duty is generally impossible to discharge in normal life. Only under special circumstances may this duty hold, and it

Mele, SD Unmasked, p. ; a similar argument is also to be found in A. Barnes, Seeing through SD, p. . Fischer and Ravizza, Responsibility and Control. W. K. Clifford, "The Ethics of Belief," in L. Stephen and F. Pollock, eds., Lectures and Essays, London: Macmillan, .




is precisely in cases of beliefs that concern matters relevant to moral consideration and/or beliefs about which we have doubts. It is easy for Levy to show, though, that under a deflationary account of SD, such as Mele's, neither condition applies to the self-deceptive P. The first condition faces an obvious identification problem: in principle, most beliefs may have implications that are morally relevant, although few have intrinsic moral relevance. As seen in the earlier example, the belief that an enemy state is about to attack our country is not morally relevant per se; yet if it leads to a preventive attack on that country, it has far-reaching moral consequences. In general, the moral relevance of a belief depends on whether it plays a role in moral deliberation. However, even if the belief is not used in moral reasoning when it is formed, it may be used in the future, blurring the distinction between beliefs that are relevant and those that are irrelevant to moral considerations. Concerning the second condition, self-deceivers may not experience any doubt, for, under Mele's account, they have come to hold the false belief as the direct effect of a wish causing biases. In sum, if the agent is the victim of a causal process taking place behind his back, it is unlikely that any room is left either for the critical assessment of the belief as morally relevant or for doubts about its sustaining evidence, since both operations would require an agent capable of detaching himself from his SD. Thus, Levy concludes that there is no room for responsibility ascription under a purely causal account of SD. However, he acknowledges that there may be different explanations of SD according to which the resulting belief is produced unintentionally, albeit as a consequence of "cognitively evasive intentional activity." Under this nonparadoxical but more complex view of SD, which is actually closer to mine, the two conditions may well be met.
In sum, his argument against the possibility of attributing responsibility for SD specifically targets the deflationary and purely causal account.

Levy's argument has been resisted by students of SD, and other solutions have been explored; giving up the possibility of ascribing responsibility is not done lightheartedly. It is one thing to acknowledge that not all SD is culpable and quite another to hold that all SD is excusable. Deweese-Boyd, for example, rejects Levy's line of reasoning and proposes to reexamine the possibility of locating control at the beginning of the process, yet in a slightly different way. Deweese-Boyd points out that in belief-formation, data processing should be guided by selective vigilance

Levy, “SD and Moral Responsibility,” p. .




concerning the threshold of evidence deemed necessary to believe or disbelieve that P. Such a threshold depends on the focal error that the subject especially wants to avoid. In SD, the biasing process starts when the focal error is fixed on the basis of a wish, which consequently distorts the threshold of evidence necessary to believe that P. According to Deweese-Boyd, when motivated agents single out the focal error to avoid, they are actually reason-responsive and meet the control condition for imputing responsibility. But if the motivational state is the general cause of the inaccuracy in data processing, it is hard to see how the selection of the focal error that manipulates the threshold of evidence could be immune from its causal influence. If it were, it would follow that agents intentionally chose to let their wish fix the focal error, and therefore chose cognitive inaccuracy over vigilance, undermining the causal account as a whole. Mele's account, however, excludes the notion that cognitive inaccuracy is entered into intentionally by agents, entrusting the production of the self-deceptive belief instead to subintentional causal mechanisms, which can hardly be presented as loci of reason-responsiveness. I hold that the search for the (vanishing) moment of the process at which reason-responsiveness, hence control, can be imputed is doomed within a deflationary and purely causal account. In general, the deflationary account portrays the agent in the SD process as a passive victim of biases, and it is not even clear whether such an agent is able to appraise the negative evidence. The solution to the responsibility issue advanced by Mele implies that, a moment before entering the machine mode, the agent could be reason-responsive for the wish and emotions triggering cognitive distortion. The problem with this implication is twofold.
On the one hand, the motivational state triggers cognitive biases precisely insofar as it escapes the agent's control; on the other, there is nothing wrong with the agent's wishes and emotions before they trigger cognitive biases. It is perfectly legitimate to wish that one's partner be faithful, one's son well-behaved, and one's health fine; these are just some of the typical

Deweese-Boyd, "Taking Care," –. Mele, SD Unmasked. Within the causalist-motivationist account, see also D. Nelkin, "Responsibility and SD: A Framework," Humana Mente, , : –. Her quite complex proposal actually does not significantly change Mele's view of relocating control in the motivational set; Nelkin argues only that her view of the operative wish as the "wish to believe that P" makes it easier to track back the locus of reason-responsiveness. In addition, she suggests a review of studies on responsibility for inattentive or impetuous action.




wishes at the origin of SD. Being anxious and worried when appraising unfavorable evidence concerning those wishes is, similarly, not irrational. What is irrational is the distortion of the cognitive process, but this distortion, because it is caused by the motivational state, is beyond the agent's control. In other words, the agent can be reason-responsive for his desire, but he cannot be reason-responsive for the fact that a perfectly legitimate desire triggers cognitive biases. Therefore, the agent may be able to give reasons for his desire but is nevertheless unable to provide reasons for the causal effect of his desire on cognition. The solution presented by Deweese-Boyd implies an agent capable of discriminating between accuracy and inaccuracy in data processing, who opts for the latter. This solution is, however, self-defeating, for if the agent were responsible for choosing inaccuracy over accuracy, then SD would fall back into the intentional model, and the whole point of the causal model would vanish. In sum, the attempt to reinstate control at some juncture of the process so as to allow the ascription of responsibility is highly problematic. According to Levy, who expounds a less deflationary account similar to mine, intentional steps by the agent end up in an unintentional false belief. Even if Levy's account met both conditions for the duty of belief scrutiny, his approach to responsibility ascription for SD does not convince me. The self-deceiver is both confused and under emotional pressure; the requirement of control, at whatever moment of SD, seems to me to conflict with the agent's effective powerlessness over the process as a whole. The agent under SD is generally acknowledged to be far from the ideal situation of rational deliberation in which the control condition properly applies.
Nevertheless, she is asked to exercise control over the cognitive impact of emotions and anxiety from which, in the ideal situation of rational deliberation, she is supposed to be free. The idea of reinstating control by means of a duty to scrutinize one's beliefs, as applied to SD, is similarly implausible, even if Levy's two conditions are in principle met. To be sure, the self-deceiver often lingers in her thought and ruminates over the evidence, yet in a biased way of which she is not aware and which she does not control; thinking through one's belief in that situation is quite different from rationally scrutinizing it. Generally speaking, how the opacity typical of self-deceptive belief-formation can be dispelled and give way to lucid critical thinking about that very belief needs explaining. But granted that self-deceivers have little or no control over their SD, is control a necessary condition for ascribing responsibility to self-deceivers?




2 Bypassing Control

The control view has been much debated in recent studies on responsibility, and interesting alternatives are on offer. Obviously, no one doubts that actions and attitudes stemming from rational choice, after proper deliberation, are uncontroversial objects of moral responsibility. Yet our intuitive judgments of responsibility, ingrained in social life and practices, extend far beyond actions and attitudes rationally chosen after proper deliberation. It is also widely acknowledged that many wrongdoings stem from inattention, carelessness, and impulsiveness rather than from vicious and malevolent intentions deliberately pursued in evil strategies. Such considerations have led many scholars to revise the traditional view of responsibility, questioning the centrality of the control condition. If control is no longer the central condition, then choice (and choice-controlled action) is also unnecessary for responsibility attribution, as long as the action or attitude under scrutiny can be viewed as a manifestation of "the true self" or, alternatively, of "judgment-sensitivity," "character-expressiveness," or "judgment-dependence" – just to name a few of the options substituting for control. Both the control condition and the centrality of choice have been trenchantly criticized by George Sher, whose alternative approach seems particularly apt. Control and choice are crucial to what Sher names the "searchlight view" of responsibility, according to which "an agent's responsibility extends only as far as his awareness of what he is doing." Such a view is captured by "the metaphor of conscience as a kind of searchlight." A major problem with this predominant view of responsibility, according to Sher, is the conflation of two distinct and incompatible perspectives on action, namely, the agent's viewpoint at the time of her action, and the ex-post detached position of observers (or of a later self) of the action. The first





See, for example, K. Jenny, "Vices of Inattention," Journal of Applied Philosophy, (), : –; E. O'Hagan, "Self-Knowledge and Moral Stupidity," Ratio, , : –. The control condition affects the conditions for responsible conduct, namely voluntariness and the appropriate epistemic condition. From Aristotle on, actions performed under coercion or in a condition of ignorance are excused, totally or partially, and responsibility accordingly is lifted. In my discussion of this theme, the focus is especially on the epistemic condition, for in SD the agent is both unaware of what he is doing and epistemically confused. Susan Wolf, Freedom within Reason, Oxford: Oxford University Press, ; T. Scanlon, What We Owe to Each Other, Cambridge, MA: Harvard University Press, ; Sher, "Out of Control," –; A. Smith, "Control, Responsibility and Moral Assessment," Philosophical Studies, , : –. Sher, Who Knew?, p. . Sher, ibid., p. .




is internal and forward-looking, the second external and retrospective; only the second is the appropriate perspective for responsibility attribution. It is usually other people (or a later self), from outside, after the action's performance and after its consequences have affected others, who demand reasons and justifications from the agent for her action. Features and considerations of which the agent was not aware at the time of acting do not exempt her from demands for reasons. The backward-looking perspective proposed by Sher does not imply that the agent's perspective at the time of action is irrelevant but, rather, that it is not the only element to consider in the process of responsibility attribution. The adoption of the ex-post perspective for responsibility ascription seems to me promising and consistent with the practices of the moral community, and the complex critical argument developed by Sher in opposition to the searchlight view seems similarly convincing. The difficult part of Sher's reasoning concerns his positive alternative to the searchlight view, for he must tailor a conception in which the scope of responsibility is enlarged, but not to the point that agents end up being held responsible even for heart attacks. Thus he must link the outside judgment of an action as bearing responsibility with appropriate internal conditions granting its origination as an action of the agent. External and internal conditions for an action to bear responsibility are then specified: the agent "is responsible for his act's wrongness or foolishness if and only if: (1) he is aware that the act is wrong or foolish when he performs it, or else (2) he is unaware that the act is wrong or foolish despite having evidence for its wrongness or foolishness, his failure to recognize which (a) falls below some applicable standard, and (b) is caused by the interaction of some combination of his constitutive attitudes, dispositions and traits."

Sher's definition encapsulates the idea that, in the case of agent unawareness, responsibility for wrongdoings or foolish acts cannot be withheld if (i) the agent had the evidence necessary for understanding the foolishness or wrongness of his act at the time of acting and (ii) nevertheless failed to grasp it, (iii) thereby falling below standards. Together, (i), (ii), and (iii) represent the outside perspective of someone asking the agent ex post: "How could you forget our meeting?" If the agent can answer this question by saying, "I was kidnapped and imprisoned in a basement for three hours" or "I passed out

Sher, ibid., p. , my italics. I am only referring to Sher’s treatment of responsive attitudes for wrongdoings and foolishness, not for praiseworthy action, given that the two are considered separately and not exactly mirroring each other, and, in any case, praise is beside the point of SD.




for three hours" or "My wife inadvertently locked me in the bathroom," then his apparent misdeed is definitely excused, for in all three cases the failure to turn up at the meeting was not an omission by the agent but an event caused by external factors. No excuses are acceptable, by contrast, if (iv) the agent's failure depends on his constitutive traits, attitudes, and dispositions. This last condition, as specified in (b), addresses the origination problem of the act, namely, the requirement that the act be performed by the agent and not caused by some contingent or extrinsic factor, such as coercion, brainwashing, reflexive behavior, or external forces. To put it another way: whenever the agent fails to grasp the evidence for seeing his action as wrong or foolish, and such failure can be attributed to him and not to some external or contingent impairment, then he can and ought to be held responsible for his action.

According to this description, however, it is not obvious that agents can be held responsible for their SD. SD is not caused by a vice of character; it is instead the quite common product of a motivated form of belief-formation that strikes any normal cognizer under certain circumstances. It is thus not clear that disjunct (b) is met in most SD instances. However, although SD is not the product of constitutive traits of the agent's character, there is no doubt that it originates in agents, in particular situations and circumstances. It is not that the circumstances directly cause SD, bypassing agency; rather, attributing responsibility for SD to character-expressiveness misconstrues how the faulty belief is formed.
Under the invisible hand account, the subject is neither a passive automaton nor the victim of causal mechanisms, yet she is in a condition of both opacity and epistemic confusion: SD is a typical instantiation of faulty mental activities whose faultiness escapes the subject's awareness, despite the available evidence to the contrary. It actually fits features (i), (ii), and (iii). In other words, it would seem a typical instantiation of those acts performed "in the dark" that are nevertheless proper objects of the responsibility assignments for which Sher has proposed his revised view. Since SD is not traceable to character-expressiveness, however, and hence does not fit feature (iv), it seems that either responsibility for SD is not attributable or Sher's characterological condition must fall. I favor the latter option, and not only because I am reluctant to lift responsibility from the self-deceiver's shoulders. The reason why Sher has posited character-expressiveness lies in the need to link the outside perspective with the internal one, that is, to solve the origination issue. But the fact that something originates in me, instead of happening to me, does not require that my constitutive character (or true self) be the originator,




especially in cases of foolish or careless actions. The latter are not necessarily the product of flawed characters; given the intrinsic vulnerability of human rationality and agency, all of us occasionally act absentmindedly, foolishly, or unthinkingly, especially when pressed by time, anxiety, or other emotions impairing cognitive lucidity. All of us forget something now and then, and although some people may be more forgetful than others, that fact does not alter the ascription of responsibility for any act of inattention. If John forgets his train ticket for whatever reason (usual absentmindedness, haste, or an untimely call while he was packing his bag), he knows that he cannot be excused for traveling without a ticket simply because he did not do it on purpose. This fact may affect the degree of blameworthiness, but it does not relieve him of his responsibility. But John's acknowledgment of his foolishness, and his consequent acceptance of responsibility, does not depend on his forgetfulness being expressive of his character. Whether or not it is his habit to be absentminded, he is in any case responsible because he and nobody else omitted to put the ticket in the bag, and the omission was properly his. I am not claiming that forgetfulness is never a sign of a flawed character; I am claiming that responsibility for one's forgetfulness does not derive from its being the expression of a flawed character. Philosophers analyzing responsibility are obviously concerned that it be the agent, and not an external force, a blind mechanism, or an evil genie, who produces the outcome, albeit inadvertently or absentmindedly; if the subject is the agent and not the victim of the action, then responsive attitudes properly apply. Yet, in order to assign responsibility for a misdeed or a misjudgment, it is sufficient that there be an agent, and not a victim or an automaton.
For a misdeed to be a misdeed of the agent, it need not have been done autonomously or as an expression of her character. The inadvertent wrongdoer may not be at the top of her agential authority, yet she is an agent for all that: the imperfection of human rationality and morality pertains to human agency, and lapses of either are the agent's, expressing general human vulnerability more than imperfections of character. In sum, the origination condition requires agency, as opposed to mechanisms, automatons, and passive victims, but it does not require reference to the true self. I would stick to a conception that entrusts the origination issue to the simple presence of agency in the action, a presence that, in the absence of coercion, brainwashing, external circumstances, and physical or mental disabilities, must be presumed. In other words, I adopt a conception of responsibility in which ascription does not depend on an ex-ante control condition but, rather, on the fact that an action, although




performed without full awareness, is nevertheless performed by an agent capable of responding to the question "How could you do it?" Responsibility is ascribed because of the presence of agency in the act, a presence proven by the reason-responsiveness of the agent ex post. To be sure, a victim of external circumstances may also be able to give an explanation of her behavior, but reason-responsiveness is more than providing an explanation, for it implies the ability to engage in an exchange of reasons with others and to be reactive to their resentment and criticism. This ultimately leads the agent to learn a lesson. Although coercion and manipulation do not erase agency in general, the coerced action cannot be said to be authored by the agent. If instead the past conduct is acknowledged as hers and as faulty, then it was performed by a moral agent, below standards and yet able to produce a self-judgment ex post. It is not the case, though, that reason-responsiveness can retrospectively reinstate the agent's control over her past conduct. This would be the case only if the control condition were conceived of as counterfactual. But I think that Sher is right in saying that control cannot be counterfactually imputed, for a failure of control is a nonevent unrelated to the subject. It seems indisputable that at the time of acting the subject's agential capabilities were substandard, so to speak. Yet if ex post the agent were not reason-responsive, this would suggest either that agency was absent in that instance or that the subject has a more serious agential deficit. Ex-post reason-responsiveness is required to guarantee that the person has agential capability in general, and that her conduct or act was produced by a local failure of her agential capabilities, one ascribable to her and not to external factors. In this case, SD is included in the range of (mental) acts for which responsibility is properly ascribed.
In turn, ex-post reason-responsiveness is what allows the agent to learn the lesson, and to be more careful, prudent, and considerate in similar future occurrences. Learning a lesson is not the reason why we ascribe responsibility for past actions, but it is the sign that the conduct was performed by an agent, one who is able to reflect on her deeds or misdeeds ex post and possibly avoid similar mistakes in the future. Suppose that, instead, the objectionable behavior turns out ex post to be nonimputable to the agent because it was caused by illness or by external impediments. In such a case, there is no lesson for the agent to learn through ex-post reflection on the behavior. If Anna did not show up at the meeting because she fainted or because she was stuck in the elevator, there is little she can learn by retrospective reflection in order to avoid similar occurrences in the future.

Sher, ibid., p. .




Responsibility can be attributed for out-of-control acts, yet in such cases, given the practice of exchanging reasons among moral agents, if the act was the agent's and responsibility can be attributed to her, then she is expected to learn the lesson for the future. It is not by chance that relapses into misbehavior are usually regarded as more blameworthy than a one-time misdeed. If there is no expectation of learning the lesson, that is because the behavior is considered not as performed by the agent but as caused by some external factor. In this respect, the possibility of learning a lesson ex post is the signal of agency. In other words, we can ascribe responsibility for conduct performed by agents who are capable of learning control. In sum, the internal condition is met if the action for which responsibility is to be attributed is an action of the agent, and the agency condition is in turn specified, negatively, by the absence of coercion, brainwashing, manipulation, and external causes, and, positively, by the fact that the agent is ex post reason-responsive. Therefore, the agent can learn a lesson and acquire control over future conduct.

How does SD fit into this view of responsibility attribution? As said, responsibility can be attributed when someone nonconsciously performs a wrong or foolish act under the following conditions: (a) the agent has evidence that the act is wrong or foolish at the time; (b) she fails to recognize that evidence; (c) the failure falls below some applicable standard; and (d) the faulty act is imputable to the agent. In turn, (d) implies that no completely incapacitating conditions are present and that the agent is ex post reason-responsive, that is, she can understand her fault and learn the lesson. These conditions apply perfectly to my invisible hand account of SD: the agent is unaware of the faultiness of what she is doing and fails to consider the available evidence properly.
As a result, her mental activities are below standard, and yet they are performed by an agent, albeit an opaque and confused one. The agent is under emotional pressure, to be sure, but that circumstance is not so incapacitating as to erase agency. On exiting SD, the agent usually recognizes the faultiness of her previous condition and feels regret and shame at having fooled herself. The ex-post reason-responsiveness of the former self-deceiver is thus present, and she can properly be held responsible for her self-deceptive belief.

The internal condition is (b) of Sher's definition at p. . It states that someone is responsible for his foolish or wrong acts, in case he is unaware despite having evidence for their wrongness or foolishness, iff his failure to recognize the foolishness or wrongness (a) falls below some applicable standard, and (b) is caused by the interaction of some combination of his constitutive attitudes, dispositions, and traits.



Political Self-Deception

It is, however, less clear how much ex post reason-responsiveness can help the agent avoid future episodes of SD. When the favorable conditions for SD obtain, the subject may recognize them and may try to resist the impact of emotions and desires, preventing the triggering of biases. In principle, on a second occasion, having learned how SD strikes, a disenchanted subject stands a chance of resisting the influence of her desires and the pressure of her emotions. This hypothetical option may satisfy the philosopher focused on the conditions under which (a) responsibility for SD can be attributed and (b) future SD can in principle be avoided. But for someone interested in applied ethics, in the actual impact of SD on the social and political realm, this conclusion is far from satisfactory, for the possibility of avoiding future SD even in a disenchanted subject meets many obstacles along the way. Desires and emotions are resisted by internal argument, but the latter is in turn exposed, at different junctures, to motivational influence and to the triggering of biases. Moreover, when we move beyond the realm of personal life and interpersonal morality, the responsibility for SD does not simply concern its foolishness and does not just affect the agent's self-respect. At stake is not simply the lapse in one's agential authority and autonomy, but much weightier moral and political considerations, those having to do with harming and wronging other people. To put it briefly, entrusting the prevention of SD to a fortified agent with a more disciplined character is clearly insufficient if the point is to avoid the bad consequences and harmful effects of SD pragmatically. In that case, we cannot rest content with assigning stronger moral condemnation and culpability: we should, rather, seek preventive measures.



3 SD Prevention

If we move from the realm of moral theory to the social and political domain, the issue of responsibility ascription must expand to that of SD prevention. With reference to prevention, SD in fact presents an advantage over the two phenomena with which it is often equated: straight lying, on the one hand, and honest mistakes, on the other. Deception and mistakes, being ubiquitous, are normally detected only in hindsight and can hardly be predicted and prevented. So far, no general circumstances for telling a lie or for making a mistake have been identified, and I think this is not by chance. By contrast, SD is in principle open to prevention, for SD is triggered in specific circumstances, as I have argued in the previous chapter: namely, when certain motivations meet with contrary evidence and the costs of inaccuracy are low and can be ignored or discounted.

Prevention depends, first, on the possibility of detecting such favorable circumstances for SD to occur. It may be that under these circumstances people with certain characters fall prey to SD more easily, but, from the viewpoint of social and political analysis, the crucial feature for predicting, and possibly preventing, SD is constituted by the favorable circumstances. Consider, moreover, that in the political domain most cases of SD worthy of analysis are actually collective products, as shown by the well-known studies on groupthink; hence character considerations become irrelevant, and circumstances paramount. Even if the agent acknowledges the foolishness and the potential harm of SD, she may yet be unable to resist SD when the pertinent circumstances obtain, for direct control over one's SD, although not impossible in principle, is difficult and rare in practice.

A good starting point is, however, the acknowledgment of the problem and of the harmful impact of SD on decision making. When a politician's SD is uncovered, she may feel less guilty but definitely more ashamed than if she had lied. For she is responsible not only for the ominous consequences of her false belief, but also for having duped herself, and duping oneself is a terrible source of embarrassment for a politician. The embarrassment and shame provide a strong motivation to avoid SD in the future, even stronger than the motivation to be honest: if she lies, she may hope to get away with it, while in the case of SD escape is hard, for the deceptive beliefs directly affect and usually doom decision making.
Moreover, while lying does not compromise the cognitive capacity of the liar and, in the political realm, is sometimes claimed to be justified under hard circumstances, SD always backfires on the self-deceiver, who can hardly hope to get away with it, given its ominous consequences. It is thus very important that the role of SD in politics be acknowledged, as a first, preliminary reason to engage in its prevention. Even though, as previously said, SD is not under the direct control of the agent, direct control is not the only form of control we have at our disposal in regulating our actions and beliefs. Moral psychology has singled out at least two forms of indirect control designed precisely to bypass potential weakness of the will: character-building and precommitment. Character-building is the strategy devised by Aristotle in the third book of the Nicomachean Ethics. According to him, moral action is the default choice

Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascoes.




of the virtuous person, and virtues are the right dispositions to act morally. Virtues follow from the right character, in turn molded out of good habits and discipline. The latter are necessary to prevent agents from falling prey to weakness of the will. From Aristotle, George Ainslie picked up the idea of discipline as a way to win the conflict between short-term and long-term interests, wherein morality can be seen as a long-term project imposing itself on immediate rewards. Ainslie's account helps explain why the character-building project is of limited efficacy as far as SD is concerned. In his view, self-discipline is acquired in order to pace immediate rewards to our long-term projects. With reference to knowledge, the default motivation is to believe what we wish according to our desires, and the secondary motivation, regulated by self-discipline, is truth-seeking and evidence-constrained knowledge. Lapses of self-discipline are, however, to be expected under certain circumstances (abundance of rewards and/or the crumbling of future plans). If this account is accurate, the self-deceiver is not someone who lacks self-discipline and character, for were that the case, the subject would be delusional rather than occasionally self-deceived. The self-deceiver is rather someone experiencing a lapse of self-discipline under special circumstances, lapses that must actually be expected from beings like us. This is not to say that agents cannot improve their self-discipline and fortify their prudential motivations; yet character-building alone cannot be depended on to prevent SD from taking place.

Alternatively, for a more efficacious prevention of SD, the resort to precommitment is in order. Precommitment is the strategy symbolized by the story of Ulysses and the sirens. Ulysses wants to listen to the sweet singing of the sirens, but knows that no man can resist that sound without jumping overboard and dying in the sea.
Thus, the clever Ulysses fills his crew's ears with wax and orders them to tie him tight to the ship's mast: when the temptation comes, his sailors are unable to hear the sound, and Ulysses is prevented from following the urge to jump overboard by being tied up. What Ulysses did was to create a constraint on his options at time t¹, in a condition of cognitive lucidity, so as to avoid, at time t², under emotional pressure, falling prey to a temptation that he knows is difficult to resist. Precommitment is thus the rational strategy for controlling one's lapses of discipline and rationality in difficult situations.

Ainslie, Breakdown of Will. See Elster, Ulysses and the Sirens, and, more recently, Id., Ulysses Unbound, Cambridge University Press, Cambridge, .

The Attribution of Responsibility to Self-Deceivers



Yet philosophers regard this solution as less than desirable, as an admission of defeat and weakness, for the relevant behavior at time t² is not autonomously chosen, nor does it follow from rational choice, but is forced by external constraints which bind the individual's freedom. Even if the constraints have been autonomously chosen by the agent at time t¹, at the time of action the agent is bound, so that his behavior is not properly his action. Along this line of reasoning, precommitment is redescribed as an intentional manipulation of one's autonomy, resulting in a rationally desired outcome but also in a decrease of overall autonomy. A proper discussion of precommitment as a general rational and moral strategy is, however, beyond the point here. The point is, rather, that precommitment looks like a suitable candidate, no matter how defective in terms of ideal rationality and morality.

In the case of SD, precommitment may work if the prospective self-deceiver entrusts herself to a referee. I am here suggesting a reversal of what usually happens in SD cases. As widely acknowledged, the self-deceiver often discusses her deceptive hypothesis with friends for reassurance and confirmation; usually the friends detect the SD but do not feel like waking her abruptly, especially seeing her powerlessness. So they listen to her attentively and implicitly assent, without realizing that such assent will constitute an important piece of evidence for the subject in favor of her false belief and will reinforce her deception. This feature is all the more evident in cases of political SD which, as mentioned, is often produced and/or sustained by the group around the leader, as we shall see more fully in the next chapter. The precommitment strategy should instead transform the collusive community into one acting as a referee, to help the subject away from the SD shortcut.
If the agent explicitly confers on her special friend(s) the authority of referee(s) over her potential SD, the friend(s) are then entitled to intervene by alerting the agent that her treatment of the evidence is biased and her conclusion faulty. Obviously, she may not listen, but if she has previously authorized the referee, she has bound herself to a position in which she cannot simply dismiss her friend's advice. The authorization is thus important in more than one respect. In the first place, the friends of the prospective self-deceiver should avoid the self-appointed role of guardians, with its implicit self-righteousness and paternalism; in the second place, the agent ought to take responsibility for their intervention in order to subscribe fully to her (pre)commitment against SD. Lastly, only someone with the explicit authority to speak her mind clearly and truthfully can




expect to be taken seriously by the authorizing agent, despite the difficult circumstances and the emotional pressure in favor of the soothing deceptive belief. Conversely, precisely because SD can be avoided indirectly, through the assistance of a friend acting as a referee for one's beliefs, the agent can take credit for SD prevention only given an explicit authorizing agreement, made ex ante, under conditions of cognitive lucidity. Once such an agreement is in place, the agent becomes fully responsible, hence more blameworthy if she dismisses the referee's advice, whereas if she takes the advice she will have full credit for SD avoidance. The remedy for SD is reached by a detour, yet this binding neither diminishes the subject's liberty nor decreases her autonomy, for, on the one hand, avoiding SD means enhancing her autonomy as a rational cognizer, and, on the other, it is after all still up to the agent to follow the friends' advice at time t².

I think that the idea of precommitment can find an institutional application if we imagine an independent check on certain kinds of decision making. How to put this idea into practice is complicated, but that is not a reason to dismiss it. I shall come back to this point in the next chapter.

4 Concluding Remarks

The starting point of this chapter was the fact that it is unclear whether responsibility can be ascribed to self-deceivers; yet the possibility of responsibility ascription is crucial for applying SD to politics, hence the question must be thoroughly examined. The issue seems particularly problematic for unintentional accounts: Can someone in the grip of emotion be held responsible for the biasing of her thoughts, which took place causally and behind her back? This question has led me to examine first how nonintentionalist scholars of SD have faced this problem. Following the traditional view of responsibility informed by the control condition, they have mostly attempted to single out a juncture in the process of self-deceptive belief formation at which control may plausibly be imputed, hence responsibility attributed. I have argued that these solutions are unsatisfactory, for in addition to being highly implausible in point of fact, they are conceptually muddled, since the agent is at the same time portrayed as a passive victim of biases and as a resolute self-knower and master of his passions and desires. My analysis has then turned to different views of responsibility dispensing with the control condition. Via a critical examination of Sher's theory, I have provided an account of how self-deceivers can be held responsible for their state, even if they did not bring it about intentionally and knowingly.




In my perspective, the attribution of responsibility is linked to the possibility of future prevention, by learning the lesson and coming to see SD as bad and wrong. Having reasons recommending the avoidance of SD, however, cannot be sufficient for its future prevention, given the nature of SD. Because its social and political effects may be harmful and far-reaching, effective measures of prevention need to be studied and put in place. Within the moral tradition, we can find some suggestions for dealing with desirable/undesirable states that cannot be reached directly. One suggestion comes from Aristotle, who takes akrasia very seriously and sees its solution in the process of character-building, by means of good education and healthy habits. Yet it is not clear how the prospective self-deceiver can fortify herself and acquire the good habit of evidence-processing if what Ainslie says concerning self-rewarding fantasy is true, namely that it arises when long-term projects crumble.

The second suggestion, precommitment, seems more promising. This strategy requires some moral learning as well: the agent, reflecting on her previous SD, should come to the conclusion that SD is, all things considered, detrimental to herself and others, hence should be avoided. But then, in order to carry out her resolution, the agent should invest some friend with the authority of a referee when the typical circumstances for SD arise. The referee should represent the external reasonable point of view concerning the negative evidence and highlight the motivated biases in the agent's belief. Obviously, such a precommitment against SD does not represent a physical constraint like the one binding Ulysses, who was literally tied up. The agent can reject the referee's advice. Nevertheless, if the referee has been explicitly authorized, the natural tendency to defend one's unwarranted belief will be reduced by the very act of authorization, freely subscribed to by the agent in a condition of cognitive lucidity.
Moreover, that very act is also what allows the agent to take credit for the avoidance of SD. And, if the strategy is successful, one can imagine that good habits of evidence processing will be reinforced and the character fortified as well.

Devising the precommitment strategy in the political realm is definitely a complex matter, yet there are some points in this discussion of responsibility worth stressing for the consideration of political SD. First, SD does not lift responsibility from politicians and government officials. In this respect, imputing SD instead of simple deception is not a way to exonerate political leaders from moral responsibility. Second, if political SD is acknowledged as playing a (negative) role in decision making, then politicians may be motivated to avoid the embarrassment of duping




themselves, and to choose poorly as a consequence. This motivation does not per se guarantee that SD will actually be avoided, but it may make the idea of some institutional prevention more acceptable. Third, SD is in principle open to prophylactic measures by means of some institutional form of precommitment, no matter how complex the institutional design may be.

 

Part II
Self-Deception in Politics

 

Chapter 3
The Self-Deception of Political Leaders, Officials, and Governments

It is sometimes essential for a state, even for a democratic state, to undertake clandestine operations, as I learned in the OSS during the Second World War. But when such operations are undertaken, it is important never to forget that the relationship between an intelligence agency and its instruments tends to be a corrupting one.
Arthur M. Schlesinger Jr., A Thousand Days: John F. Kennedy in the White House

1 Preliminary

In the previous part of this book, I explored the notion of SD. After a critical review of the relevant literature, I proposed a view of SD as a genuinely puzzling but not paradoxical phenomenon, to be understood as the unintended outcome of intentional steps by the agent. More precisely, according to my invisible hand model, SD is the emotionally loaded response of a subject confronting threatening evidence relative to some crucial wish that P. The threat is such that it cannot be met, or cannot be undone at a reasonable cost. Unable to counteract the threat, the subject lingers in thought and easily falls prey to cognitive biases in the processing of evidence. As a result of her biased thinking, the subject unintentionally comes to believe that P, which is false.

Besides its explanatory advantages over the standard causal model, my view is consistent with: (a) the consideration of SD as basically detrimental to the subject's well-being and potentially leading to harmful actions, hence morally problematic; and (b) the attribution of responsibility for being self-deceived. Even if the subject was not aware of deceiving herself, the process was her doing, though carried out in an emotionally confused state. The fact that the outcome was not intended is no excuse for lifting responsibility from her shoulders, just as it is not in cases of inattention, impetuousness, and absent-mindedness. SD is performed by an agent, though admittedly a confused one, who is reason-responsive and ex post




capable of learning the lesson. The nature of SD is such that learning the lesson is not sufficient to avoid future episodes directly. Yet indirect moral strategies enable agents to make the lesson effective via character-building, precommitment, or both. I have thus outlined a notion which accounts for the puzzling aspects of the phenomenon without the risk of paradox, is consistent with the ascription of responsibility, and opens up the possibility of prophylactic measures. In sum, I have reached a notion that, in principle, is applicable to politics.

In this chapter, I shall argue for a distinct role for SD in politics. To be sure, I do not intend to replace straight deception with SD; citizens' deception is often brought about by lies and willful misinformation, omissions, manipulation, and spin by government officials and politicians, as well as by the epistemic mistakes of leaders and their counselors. Within the wide and gray area of political deception, produced in a variety of ways ranging from lies to mistakes, I contend that SD is distinctly placed. We have seen that SD is a widespread and common phenomenon in daily life and experience; hence there is no reason to think that politics is spared. Yet the point I want to make is more specific and concerns the influence that SD might have on certain kinds of decision-making processes. I argue that certain momentous decisions, taken under pressure from various sources, constitute favorable circumstances for SD to take place; if SD affects the beliefs grounding the decisions, the latter are misconceived and very likely to end in failure. At the same time, the SD of presidents, cabinets, leaders, and politicians induces the deception of the people. Consequently, the people's deception is often coupled with governmental failures. In this case, if the people's deception is attributed to straight lying, the subsequent failure of the policy must be explained by something else.
By contrast, if the people's deception is attributed to leaders' SD, then the failure is explained by the same token as well. No plan can be successful if it is grounded on false beliefs. True, the false beliefs may be produced by merely cold mistakes, but if that were the case, the motivated appearance of those mistakes, which look anything but cold and casual, needs to be accounted for or else imputed to simple coincidence. Yet it is a coincidence repeated too often to be acceptable without further argument.

In sum, I hold that episodes of genuine political SD can be detected within political deception if the three minimal necessary conditions for SD characterize the decision making as follows:

a. Contextual condition. The appropriate context is a decision-making process involving momentous decisions to be taken under pressures of various kinds, in which the available data appear unfavorable to the ends and aims most wished for by the decision makers, and in which the costs of inaccuracy are, rightly or wrongly, perceived as low, hence discounted. In such a context, the reasoning is likely to be distorted toward the desired conclusion.

b. Cognitive condition. The resulting decision is grounded on false beliefs, formed by disregarding contrary evidence available at the time. Consequently, the plan is doomed from the start.

c. Motivational condition. The cognitive failings correspond to a motivation of the decision makers, in the absence of which such failings would reasonably have been avoided.

These three conditions are meant to set SD apart from both straight deception and cold mistakes. If the government simply lies about why a certain decision is necessary, the related plan need not be flawed or based on a misinterpretation of data. If, on the contrary, the administration is just mistaken, there should be no obvious motivation to pursue the misconceived plan. But when mistakes and motivation are both present, and when the decision-making context is one of high pressure and high anxiety, the presumption of SD is a reasonable hypothesis to be confirmed or disconfirmed by a proper analysis of the episode. In the following pages, I shall present the special features of leaders' and governments' SD compared with personal SD, which will provide specific guidelines for the analysis of SD instances within decision making.

Political SD does not just concern politicians and affect the decision-making process of governments and cabinets. It can actually affect the people in their coming to terms with governments and political reality, and in their projecting an identity and character as a people.
This phenomenon has actually been acknowledged and analyzed under the notion of ideology, or false consciousness, as in the Marxist tradition, and under the notion of the reduction of cognitive dissonance, as in social psychology. Putting the two notions together, a more recent analytical approach has developed the notion of “adaptive preference” as a vicarious alternative to social change perceived as impossible. The adaptive preference for the status quo corresponds to a faulty perception of reality in line with a cherished image of oneself or of one's group or nation. By means of SD, misguided governmental conduct often comes to be justified and accepted. The consideration of people's SD clearly raises problems for normative democratic theory which are, in many respects, more challenging and more complex than the issues discussed in the literature on




voters' ignorance, rational or otherwise, on deliberation-induced polarization, and on other pathologies of democratic rationality and public opinion. Indeed, the consideration of people's SD seems to run afoul of the normative theory of democracy that grounds democratic legitimacy in the autonomous claims and preferences of citizens, by undermining even those innovative procedures – like deliberative decision making – which aim at enlightening the perspectives of citizens. In this respect, the scrutiny of the beliefs, convictions, views, and expectations of the people, using SD as an analytical category, opens up a whole host of considerations and issues concerning the assumed epistemic independence and autonomy of citizens. In this work, however, I will confine myself to unpacking the SD of politicians and leaders as well as its effects and consequences for the consideration of political deception in general.

Before starting, I would like to draw attention to some common features and problems that one encounters in moving from the theory of SD to its practice in political reality. Some methodological caveats need to be pointed out, and empirical complications must be addressed beforehand. The first caveat concerns the kind of reconstruction I shall make of these cases. I want to clarify that the analysis is carried out as a philosophical argument, as a conjectural hypothesis, which draws its plausibility from the research and insight of various works in various disciplines, ranging from social and political psychology to political science to history. It is not a new historical interpretation of what happened based on new documents and primary sources. In fact, I am relying on existing studies, and I am not, generally speaking, going to dispute their views. I shall, rather, read between the lines of established work and see whether misconceived decisions and actions provide grounds to be interpreted as possible instances of SD.
Many of the works I have been looking at suggest that something like SD must have been the case, under labels such as “political illusion,” “willful misreading of evidence,” “undisputed though false assumptions,” and the like, in order to explain gross miscalculation and irrational planning by usually competent and rational agents. My attempt will be precisely to step in where such suggestions are put forward and to make sense of them by means of my account of SD. To be sure, my interpretation is a purely speculative hypothesis, as I will never be in a position to prove that SD was the case; doxastic states are by definition not directly accessible, all the more so for people who are no longer with us. But I would like to stress that both straightforward deception and cold mistakes raise similar evidential problems. True, we are more familiar with deception and




mistakes, whose explanations seem more direct, but imputing SD to decision makers is no more speculative than imputing deceptive intent or cold mistakes. In fact, there are specific reasons to adopt the SD hypothesis. Briefly, in some instances lying does not make sense, as it comes at the expense of the liar's supposed interest. By the same token, the presumption of cold mistakes overlooks the emotional pressure of some strong wish for P to be the case, which is always the bottom line of such pieces of irrationality. My analysis intends to check whether the three minimal conditions for SD were actually present. If the conditions are met and SD applies to the case, conduct that otherwise appears incoherent, or bluntly and persistently mistaken, will be understood. This will be less than a conclusive argument but, despite being basically speculative, it will be more than a general speculation about politicians duping themselves.

The second caveat concerns the selection of the cases I will be discussing. They are all cases from international politics, taken from the Cold War and from today's war against terrorism: Cuba and the Kennedys, episodes from the Vietnam War, and the invasion of Iraq. In this selection there is no implication that SD pertains exclusively or especially to international rather than domestic politics. But there are two reasons for picking these examples, one more intrinsic to the nature of SD, and the other pertaining to matters of presentation. The first reason concerns the contextual condition: SD typically takes place when dramatic threats arise, when something so important and apparently beyond rational control happens as to trigger great anxiety. International policy, and more specifically international crises, are actually considered by politicians and citizens alike as much less under national control than domestic policies, and as much more momentous.
Thus, one fundamental condition for SD processes and episodes to take place is clearly present. The second reason is that international policy concerns people all over the world, across borders and national interests. The impact of the Cuban Missile Crisis, the Vietnam War, and the invasion of Iraq has been far-reaching and has engendered polarized positions and discussions everywhere. Moreover, the aftermaths of these cases had various consequences for people's attitudes toward governmental action. In this respect, examining these cases will allow me to point out how governmental failures have often engendered either a justificatory attitude among the people or forms of denial, as recorded by polls and historical research.

Once we move into the reality of international policy, a third caveat must be added: SD episodes do not occur in isolation. Deception, strategic secrecy, data opacity, and misunderstanding are all mixed up with wishful




thinking and SD. Yet I think that, within the gray area where truth is intentionally kept from the general public, some genuine examples of SD can be singled out. Not only can the three basic conditions sort SD out from straight deception and mistakes, but the provision of a typology of political SD also represents a helpful tool for carving out its distinct room and role amid empirical complexity. Before getting into the empirical illustration, I shall provide a general typology of political SD based on the different link that each type establishes with deceiving others.

If SD, as I contend, is actually always intertwined with deceiving others, one may doubt its relevance in the overall reconstruction of political reality. If SD occurs in the general fog of deception and secrecy, what does its presence prove? I think it can prove, on the one hand, that deception and secrecy cannot stand on their own and need the support of comfortable false beliefs, and, on the other, that they systematically lead to motivated misperception and miscalculation. Deception and secrecy often backfire into SD. If, however, the reality of international policy shows that SD is the collateral damage of deception and secrecy, one may want to conclude that the emphasis on SD is misplaced, that the problem is always and basically deception, while SD is merely epiphenomenal. But, before granting such a conclusion, consider the following. Moral arguments against political deception are countless, balanced only by opposite moral arguments justifying “dirty hands” in exceptional cases. All in all, they do not seem to provide a sufficient motivation against lying in politics. An argument showing that political deception is likely to end up in SD, which is beyond the conscious control of the liar and always has far-reaching counterproductive consequences, may prove more convincing.
Think of the prospective Machiavellian politician moved only by strategically astute schemes: he can contemplate failure by sheer bad luck, and take his risks, but I doubt that he can welcome the likely prospect of ending up a confused and irrational agent. If SD can be avoided by an indirect strategy, it may turn out that an important component of such a strategy is precisely refraining from lying in the first place.

2 Objections to Political SD

Before granting that SD is a common phenomenon among politicians and political leaders as well, some preliminary questions should be answered. As we have seen in the previous chapters, SD, although epistemically puzzling, may serve the individual’s short-term interests. Even
twisted cases share the purposive feature of SD, in their own twisted way. Most instances of SD exhibit an immediately beneficial, although short-lived, effect on the individual, easing her anxiety and worries about unfavorable states of affairs, among which emotional states and self-conceptions are also to be counted. That is why some thinkers see SD as practically rational, or adaptive, at least in the short term. When the subject comes across evidence supporting an unfavorable hypothesis ~P, and the subject’s power to alter ~P is impaired or believed to be impaired, then SD, consisting in the belief that P, induces a sense of relief in the subject, at the price of doxastic confusion. Such a sense of relief is short-lived and fragile, always jeopardized by the negative evidence, which will eventually impose itself unless the subject falls altogether into self-delusion. In this respect, a problem seems to arise with reference to political SD. For this beneficial role appears to be lost in political life. Considering the public life of political leaders and officials, the very possibility of SD seems to be ruled out by the very circumstances of political survival. The argument is spelled out by Robert Jervis: A statesman would like his enemies to be weak – or better yet, for them not to be enemies at all – but if he believes this and it is not true, he will pay a high price.

Obviously, leaders and officials, being human, will have their share of SD in their personal lives, concerning their health and their families’ well-being. Nevertheless, it would seem that, precisely as political actors, they are prevented from taking such a comfortable escape from reality. Or, to put it differently, it would seem that the reality of the political profession selects out people who are self-deceived about their real supporters and friends, their real chances to win, the really important issues to raise, and so on. Politics does not seem to be a domain where actors who tell themselves comforting stories about their standing, their success, and their popularity can prosper. Politicians cannot be self-deceived about the reality of political competition: who their friends are, who their enemies are, and where one stands in the power struggle are all pieces of fundamental information to take into account in a political career. They may put on overconfidence; they can pretend to be certain of their victory and final success, as well as of the weakness of their competitors, but these are all instances of 

Jervis, Perception and Misperception in International Politics, p. .
political propaganda, of what Harry Frankfurt would call “bullshit” in political communication, where neither the actor nor the public cares about the truth of the message. By contrast, the ideal candidate needs to be well informed about his chances in the political competition. Information accuracy and inaccuracy are, according to Robert Jervis, explained by the real-life incentives and costs that each carries for one’s welfare. For the politician, the perceptions concerning friends and party loyalty and the information relative to his chances must be correct in order to win and to remain a successful political actor. SD must then be excluded from this area. Political competition to get into and stay in power, however, is quite a different matter from decision making and policy planning: those are the areas in which the politicians who are actually in power are daily engaged. And examples of decision making based on insufficient or deceptive information, shaped by illusions of invulnerability and denial of obstacles, by failure to reconsider assumptions, and by the discounting of negative information are countless, and a major subject of political science and history. Hence, if the conditions for acquiring political leadership select SD away, the real job of those holding office and making political decisions leaves ample room for SD. The proper explanation of how the very same person who avoids SD in one aspect of her political career can fall prey to it in another refers to the selectivity issue, and is also linked to the response to another objection to political SD. Some political thinkers, like Jervis, discount the role of motivation in political misperceptions and misconceptions, while others take the interference of wishes and emotions in cognitive distortions for granted. Some clues for settling this alternative can be found by digging into the issue of selectivity concerning information accuracy. 
Jervis stresses incentives, costs, and expectations as the reasons for either accuracy or sloppiness: the higher the cost of inaccuracy and the more important the expectations connected to it, the more likely is the adoption of a vigilant attitude toward gathering, processing, and reviewing information, and vice versa. In this way, Jervis thinks he has disposed of the influence of wishes on cognition for good. But this is not the case: costs and incentives simply constitute constraints on cognitive processes, which may otherwise be influenced by desires, especially in cases of high anxiety. With low inaccuracy costs and high anxiety, there is a very high probability that the resulting belief is shaped by expectations rather than by correct information. Expectations, pace Jervis, however, are not just probable outcomes, according to the theory of subjective probability, but 

Frankfurt, On Bullshit, p. .
also desired results. They are actually believed probable insofar as they correspond to some wish, which, in turn, influences a biased processing of the available data. In the case of the Bay of Pigs plan, for example, the expectation of success was not simply based on a cognitive inaccuracy in probability distribution, for the inaccuracy was motivated by the desire for a successful operation. Having been (falsely) assured that the political costs were low, Kennedy and his team did not adopt a vigilant attitude toward the list of negative data speaking against any chance of success and let themselves be influenced by the desired outcome of the operation. Moreover, highly emotional conditions may induce agents to adopt “defensive avoidance,” that is, shutting off negative evidence, as Jervis acknowledges. Then, granted low or discounted inaccuracy costs, if an emotionally loaded expectation induces someone to discard negative evidence and to form the opposite belief, we are actually confronting a case of SD. In sum, costs and incentives are not the whole story; the level of anxiety and stress surrounding evidence processing must be considered too. Cognitive distortion as a reaction to high anxiety and stress is acknowledged not only by those who admit the interference of motivation and emotion on cognition but also by someone who does not, like Jervis: Outside the laboratory, people sometimes believe any slight indication of an impending disaster, and sometimes refuse to accept overwhelming evidence. But most of these findings can be accounted for by a twofold proposition that accords with common sense and is directly relevant to foreign policy perceptions: if there is nothing a person can do to avoid the pain that accompanies the stimulus, his perceptual threshold for the stimulus will be raised (defense). If, on the other hand, he can avoid the pain by recognizing the stimulus and taking correcting action, his threshold will be lowered (vigilance).

What Jervis is describing is actually SD as the typical response to threats that the agent perceives to be beyond his control. In this way, these two objections find an intertwined answer: SD is not the only and ready response when either a threat or negative information reaches the subject. It is a selective response, occurring just in case the costs of inaccuracy are perceived as low and/or the threat is perceived to be so far beyond control that SD is worthwhile as an at least temporarily soothing thought. This explains why the very same person can be vigilant in one circumstance and fall prey to SD in another. In political decision making, SD cases and cold misperception cases share the faulty cognitive condition. If, however, (a) there is a clear 

See Jervis, Perception and Misperception in International Politics, p. .
motivation sustaining the misperception, in the form of a desired outcome (motivational condition), and (b) the decision-making process is suffused with high anxiety and the costs of inaccuracy are, rightly or wrongly, perceived as low (contextual condition), then SD is very likely the case. A final note on the costs of inaccuracy. It would seem that all political decisions imply high costs of inaccuracy, given that failure backfires on the reputation and career of the politician. But, in fact, what counts is the perception of the costs rather than their actual weight. For example, covert operations under the deniability clause induce the belief that the costs of inaccuracy can be discounted, for in any case they would not be imputed to the administration. In general, discounting the costs of inaccuracy is due to the (possibly self-deceptive) belief that they will not befall the decision makers. Thus, while in personal-life SD the costs of inaccuracy are usually sunk by the self-deceivers’ sense of powerlessness, in politics they are instead discounted by the leaders’ belief in their invulnerability to the consequences of their decisions.
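The two-part diagnostic just stated lends itself to a compact restatement. The following sketch is purely an illustrative formalization of the rule, not part of the author’s text; the type and function names are my own labels:

```python
from dataclasses import dataclass

@dataclass
class Misperception:
    """A faulty belief held against the available evidence (illustrative)."""
    desired_outcome_sustains_belief: bool  # (a) motivational condition
    high_anxiety: bool                     # (b) contextual condition, part 1
    low_perceived_inaccuracy_costs: bool   # (b) contextual condition, part 2

def sd_very_likely(m: Misperception) -> bool:
    """SD is 'very likely the case' only when the motivational and the
    contextual conditions hold together; a misperception lacking the
    motivational condition is better classed as a cold mistake."""
    motivational = m.desired_outcome_sustains_belief
    contextual = m.high_anxiety and m.low_perceived_inaccuracy_costs
    return motivational and contextual

# A Bay of Pigs-style case: desired success, anxious planning, discounted costs.
print(sd_very_likely(Misperception(True, True, True)))   # True
# A cold misperception: no desired outcome sustaining the belief.
print(sd_very_likely(Misperception(False, True, True)))  # False
```

The conjunction matters: either condition alone yields only suspicion of SD, which matches the text’s claim that lying plus misperception grounds suspicion, and the added contextual condition makes SD highly probable.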

3 The Make-Believe Effect of Leaders’ SD and the Realist Objection

There is one feature that distinguishes leaders’ and officials’ SD from SD in personal life. In the latter domain, SD has the effect of making the subject believe something that is untrue but soothing, easing her mind. This effect can work at times as a defense mechanism against difficult truths, which the subject has no power to change, and at other times as a positive projection of the self, pushing the subject forward through difficult experiences and frustrations. In either case, SD provides the subject with false beliefs about herself or self-related states of affairs: but such deception is usually confined to the self. Sometimes, even in personal life, SD works as a more efficient way to deceive others: spouses, friends, and kin. But I would say that this is not the main role of SD for individuals, as the wide and rich phenomenology provided by literature shows. Political SD, by contrast, always has the effect of making others believe something untrue via the false belief that leaders, officials, and governments incorrectly come to hold. This make-believe feature of political SD sets it apart from personal cases when it comes to its evaluation. The moral evaluation of SD is a matter of controversy, for some thinkers point out the positive dimension of SD as a defense mechanism for the self while
others stress the defective nature of SD both in terms of rationality and in terms of moral character. When we move to the political domain, and more precisely to the conduct of leaders, however, the effect of SD as the make-believe of untrue statements and states of affairs necessarily changes the terms of its evaluation. For, even if, ex hypothesi, SD were judged a functional response in general, in political life the judgment needs to be reversed in view of its harmful effects. No matter how good, in functional terms, SD can be for a political leader’s self-image, self-esteem, and confidence, it is definitely bad for the people, as recipients of his decisions, policies, and attitudes based on a defective assessment of reality and on false beliefs, which deprive citizens of the power of judging political actions truthfully. In the political domain, leaders’ SD is transformed into other-deception: like lies, it produces the effect of making the people believe something that is false and that it is in the interest of leaders to have believed. Political leaders should then answer not only for having fooled themselves but also for having fooled the people as a consequence, and for having made wrong decisions grounded in self-deceptive beliefs. This is a matter not just of moral blame but of political responsibility, one for which democratic leaders are rightly held accountable. But if this is the case, if what is distinctive of leaders’ SD is the make-believe effect above and beyond the soothing or enhancing effect on the leader himself, then one may wonder whether the recourse to SD is not a roundabout and somewhat baroque way of explaining political deception, while lies seem a much simpler and more direct account. 
This is the third objection to political SD; it is what I would call the “realist objection”: if we look at the false statements and bogus information coming out of official sources in politics, we can easily find explanations for them in terms of self-serving lies and self-interested desires to manipulate public opinion, dispensing with all the puzzles and complications typically surrounding SD. Even nonrealists may prefer straightforward deception, for, in their view, SD may blur politicians’ responsibility when perfectly sensible explanations for morally reproachable conduct are available.

To the realist objection, I can offer at least two lines of argument in favor of considering leaders and officials as possible subjects of SD in their political decisions. According to the first, the straightforward-deception theory does not work well precisely on the explanatory side. It may represent a simpler explanation than SD, but, in fact, it is one that leaves unaccounted for much of what is going on in international crises or in strategic war decisions. The point is that often the motivated beliefs and
judgments passed from one Cabinet member to another hardly serve the interest of the deceiver. As a matter of fact, they often turn out to produce disastrous outcomes and political fiascoes. Thus, why the supposed liar deceived others to reach such self-defeating results is mystifying. Many political and social theorists supplement deception with “bad thinking” in order to explain behavior that would otherwise make the liar utterly irrational and incompetent. Cases and types of “bad thinking” are recurrent, and scholars have theories and labels for them, such as “mirror imaging,” that is, falsely projecting one’s own mindset onto enemies, and, more generally, “political illusions and myths,” implying reference to unexamined and unscrutinized beliefs for which contrary evidence was in fact available. These scholars have not gone deeper into the analysis of such bad thinking, but they all mention the presence of emotionally loaded desires, expectations, and goals which, in their own phrasing, induce a “willful” misreading or ignoring of evidence, or a dispensing with evidence gathering and processing. Reading through historical reports of such events, two things seem clear: (a) lying alone could not have done all the mischief, and (b) the mistakes and misperceptions were not blind but always oriented so as to confirm desirable goals and expectations. The second line of argument is that there is no better natural-born liar than the self-deceived person. I actually do not share the view of a certain strand of evolutionary psychology according to which SD is an evolutionary mechanism to evade the lie detector possessed by humans. But even if SD is generally a defensive mechanism, it is a fact that someone who is genuinely convinced of his false statements will appear more convincing to others. In the realm of politics, this fact opens up the possibility for 





As an example, see the questions raised by many participants in the panel with reference to the irrational behavior of top CIA official Richard Bissell, who is generally held responsible for deceiving Kennedy and his cabinet about the insurgency forces in Cuba and the chances of success of the invasion, in Politics of Illusion: The Bay of Pigs Reexamined, J. Blight and P. Kornbluh eds., Lynne Rienner, Boulder, CO, , p. .

See, among others, J. Orman, Presidential Secrecy and Deception: Beyond the Power to Persuade, Greenwood Press, Westport, CT, ; J. Snyder, Myths of Empire: Domestic Politics and International Ambition, Cornell University Press, Ithaca, NY, ; Blight and Kornbluh eds., Politics of Illusion; Edelman, The Politics of Misinformation; D. Munton and D. Welch, The Cuban Missile Crisis: A Concise History, Oxford University Press, Oxford, .

See Robert Trivers, Social Evolution, Benjamin/Cummings, Menlo Park, CA, ; Daniel Goleman, Vital Lies, Simple Truths; Shelley E. Taylor, Positive Illusions, Basic Books, New York, ; George Vaillant, The Wisdom of the Ego, Harvard University Press, Cambridge, MA, ; Robert Solomon, “Self, Deception, and Self-Deception in Philosophy,” in Self and Deception, ed. by R. T. Ames and W. Dissanayake; Steven Pinker, The Blank Slate, Viking, New York, ; Z. Moomal and S. P. Henzi, “The Evolutionary Psychology of Deception and SD,” South African Journal of Philosophy, , : .

a specific type of SD, where the make-believe effect is not simply the by-product of leaders’ SD but also provides the motivation by which SD is set in motion. This type of SD is sometimes hinted at by political analysts who comment, in passing, that leaders are often “taken in by their own lies” or “end up believing their own fabrications.” If we are to make sense of these rather casual intuitions, we must consider that the best way to persuade the people that something is required by necessity is to convince oneself that it is indeed the case. But given that no one can believe by fiat, simply because it is convenient, we must suppose that the work is done either by SD or by wishful thinking, the two well-known mechanisms of irrational belief formation under the influence of a desire. If we acknowledge that SD may play such an ancillary role to deception, then the decisive alternative is not whether presidents are liars or self-deceived; rather, we have to consider that, in order to be good liars, sometimes they had better be self-deceived. Lying and SD share the motivational condition, but lying requires neither the contextual nor the cognitive condition, which are instead necessary for SD to be the case. When lying is accompanied by misperceptions, there are good reasons to suspect SD; and if the contextual condition is in place as well, SD is highly probable. There is a further reason for taking SD seriously, which concerns what to do about it. As I have noted earlier, the treatment of lies and that of SD are not the same. Lies can only be exposed ex post, and liars held accountable for them. SD can possibly be detected ex ante, and hopefully some preventive measures might be devised. I have already argued that SD can be avoided only via indirect strategies of character building and precommitment. The latter may be worked out at the institutional level. 
Hence, equating SD to lies on the basis of their common make-believe effect may turn out to be a big mistake for a normative theory of democracy. A democratic concern for the treatment of political deception cannot dismiss leaders’ and officials’ SD, given that the risks of missing the target would otherwise be high indeed.

4 A Typology of Political Self-Deception

If the make-believe effect represents the distinctive character of political self-deception, different ways of deceiving the people set apart different types of SD by political leaders. Focusing on the link between SD and other-deception in the political domain, three SD types can be singled out:

 • • •

Political Self-Deception Other-deception is a by-product of SD SD is ancillary to other-deception SD provides the justification for explicit other-deception.

Whereas the third type is self-explanatory, as SD’s role there is the rationalization of a lie, the first two types need explanation. Given that in both cases other-deception is a consequence of SD, how can we set apart occurrences where SD is a means to other-deception from those where other-deception is merely a by-product of SD? And how can SD be a means to other-deception at all, given the non-intentional account of SD I favor? The answer lies in the different circumstances leading to the two SD types, which sort out the different types of wishes of the self-deceiver. I begin with the first type, which is quite standard in political as well as in personal life, insofar as its main point is soothing leaders’ and officers’ anxiety and worries about negative evidence. The agent A falls prey to SD because he or she has a crucial wish that P, and P’s being the case is threatened by contrary evidence, which the agent ignores or explains away so as to keep on holding P in good faith, so to speak. As a consequence, the false but genuinely held belief that P also has a deceptive effect on others, the same effect as lying. Yet such a make-believe effect is not a wish of A, who rather wants to keep on believing that P despite the contrary evidence. The difference between politics and personal life concerns, first, the nature of the wish, which in politics is linked not to personal well-being but to strategic interests and long-cherished political goals. These goals shape the politician’s or officer’s self-perception as a confident, capable, and successful leader who can stand up to his competitors. In this respect, even if the goals are politically strategic, they are always intertwined with self-serving motives. The difference also concerns the effect of the deceptive belief: feeding on such self-related convictions has dangerous consequences in politics, which are absent in personal life. 
As much as in personal life, this case of SD, seen from the outside, exposes the subject’s irrational grounds for holding P against the evidence. But, given that the false belief here concerns the whole political community, it is likely that there is no outside viewpoint, so that such irrationality usually disappears behind the make-believe effect on others, who come to share the belief that P as a consequence. Two subtypes of this kind of political SD can be singled out, according to whether SD concerns beliefs related to future or to past actions, that is, guidelines to follow or fiascoes to cover. Considering SD related to future actions, the false belief that P usually supports misconceived plans and decisions, which end up very badly. This
being the case, one must admit that the original proponent of P could not simply have been lying to others, since lying in that case would have been not self-serving but self-destructive and self-defeating: he or she must have been genuinely self-deceived about P, or else totally stupid and incompetent. Think of the Bay of Pigs fiasco. The administration “had to do something about Cuba.” Given this crucial wish for some kind of action, acutely felt by the two main CIA officers, Bissell and Dulles, and despite the many drawbacks of the plan, the CIA willfully ignored Kennedy’s explicit prohibition of any military backup for the invasion. In turn, the administration blindly overlooked all the evident obstacles to the operation’s success. As a result, they all went ahead on the counterevidential belief that, no matter what, one way or the other, they had to win. I shall later examine the process leading to this belief in detail. For now, I only highlight that the two officers had to be self-deceived about the chances of success of the operation, because, while their motivation was self-serving, it was against their strict self-interest to deceive the Cabinet intentionally on this point. Their careers and self-image were linked to the success of the operation, so much so that they ended up losing their positions over its failure. Either they were utterly irrational and incompetent, which does not seem to be the case, or they were willfully misreading the evidence of the president’s explicit word. Their SD, then, had the secondary consequence of exerting a make-believe effect on the Cabinet, whose members were so impressed and taken in by Bissell and Dulles’s competence, conviction, and self-assurance at Cabinet meetings as to silence their own previous doubts. 
The make-believe effect, in this case, was genuinely a by-product, because Bissell and Dulles’s SD was set in motion by the desire to go ahead with their plan and not by the desire to be convincing, which instead characterizes the second type of political SD. The second subtype concerns the cover-up of political fiascoes. In such instances, leaders’ SD is basically meant to reduce cognitive dissonance once a political decision or policy has turned out badly. Whereas in the previous case SD constitutes the ex ante grounds for (bad) decision making, in the case of the reduction of cognitive dissonance SD comes in ex post. The theory of cognitive dissonance is well known in social psychology and, subsequently, in social and political science as well. The theory tries to explain the double need of humans to (a) justify 

The theory was first worked out by Leon Festinger, A Theory of Cognitive Dissonance, Stanford University Press, Stanford, , then revisited by many. See, for instance, Arthur Cohen, Explorations in Cognitive Dissonance, Wiley, New York, . Among philosophers, it has been picked up by Jon Elster and used to ground his theory of SD; see Sour Grapes: Studies in the Subversion of Rationality, Cambridge University Press, Cambridge, , and more recently Alchemies of the Mind, Cambridge University Press, . See Jervis, Perception and Misperception in International Politics, pp.  ff.

their doings and (b) consistently account for their attitudes and behavior. Dissonance is produced by a gap between one’s aims and the impossibility of attaining them, on the one hand, and between the ex ante deliberation and the ex post evaluation of one’s chosen option, on the other. The ex post feelings of defeat and of having been wrong cause frustration and unhappiness. Given that the frustration cannot be cured by changing the world according to one’s wishes, bringing about one’s favored outcome, or reversing the choice and its effects, psychological well-being can be vicariously reinstated by means of an epistemically ungrounded perceptual change, which allows the actor to understate the unsuccessful pursuit and to discard the neglected alternatives as not really available or not worth pursuing. Thus, the unpalatable reality turns out to be justified under the circumstances, and not a bad outcome after all. Here again, a nonepistemic motivation (self-justification and frustration reduction) appears to set in motion a faulty chain of thought, ending in a doxastic shift against the available evidence. Jon Elster presented Aesop’s fable of the fox and the sour grapes as the archetypal example of the reduction of cognitive dissonance via SD. He then proceeds to analyze other historical and social cases along the same lines. The sour-grapes model seems to explain, among other occurrences, workers’ false consciousness about their position in capitalist production, as captured in the Marxian theory of ideology. Elster makes a convincing case that this is an instance of SD motivated by the need to reduce cognitive dissonance. Those who are suspicious of the role of motivation in the production of perceptual mistakes tend to disjoin dissonance reduction from the interference of any wish or desire on cognition. So they say that the need for consistency can bring about a reevaluation of the alternatives, as they were ex ante, which induces a corresponding change of desire, ex post. Dissonance would thus be reduced without recourse to any motivational interference, but by purely cognitive mechanisms bringing about irrational thinking. Yet, once again, the purely cognitive argument does not succeed in being convincing. The new deceptive belief of the fox is hardly explained by a drive to be consistent, for in this, as in many other cases, the point is not inconsistency but rather dissonance between desired goals and failed outcomes; the point seems instead to be to accommodate a failure,
readjusting criterial evidence to self-esteem and self-perception. It is this motivation that induces the perceptual shift, in the absence of any change in the evidence, producing a belief in line with one’s desires by means of a quite sophisticated argument. Coming to political cases, when a political decision turns out badly, it is clear that the leaders and officials involved in the decision need to come to terms with their failure. One common reaction, instead of a proper admission of one’s bad judgment, is in fact a change of priorities ex post, which involves a redescription of the ex ante situation in the face of contrary evidence, a redescription supporting the view that the goal was actually different, or that the missed result was not really worthwhile. In this way, the mistake is denied, and the blow to self-esteem blunted, by means of the reduction of the cognitive dissonance between the desired goal and the actual failure. Recall, for example, the reasons provided ex ante for the very controversial intervention in Iraq by the Bush administration in 2003. The crucial reason advocated by Colin Powell in front of the UN Security Council was the present and real threat and danger posed by Saddam’s regime because of his possession of weapons of mass destruction (WMD). There were certainly other reasons for the attack, which, however, did not figure as decisively in the prewar campaign. After the intervention, it became clear that the crucial “selling” reason was groundless. So the reason of bringing democracy to Iraq, at the time presented as merely supplementary, was officially reinstated as the fundamental justification for the intervention. In this way, a failure was covered up by an ex post redescription of the ex ante priorities, so that the dissonance between prewar and postwar justifications was reduced. The important normative implication here is that SD, as a form of cognitive dissonance reduction, sustained what I take to be the original SD about WMD. 
Instead of an honest acknowledgment and public apology, opacity is added, which does not help in assessing the intervention and its aftermath. In conclusion, even in this case – the sour-grapes type of SD – the effect is not just to ease leaders’ frustration and reinstate their self-esteem: here as well, the effects are more far-reaching. They not only include the indirect deception of the people but can also have dramatic consequences for future policies. If wrong judgments, bad decisions, and incorrect conclusions are not acknowledged as such, but are redescribed so as to appear justified, they are likely to recur. Thus, while the fox may be better off with the belief that the unreachable grapes are sour, the belief that the war was justified in any case can have dangerous consequences, well beyond the government and the country that waged the war.



Political Self-Deception

The second type of SD, in which SD is ancillary to other-deception, is specific to the political domain and is also the most difficult to disentangle and prove. This type of SD is in fact usually intertwined with other types, and also with ideological convictions, unexamined assumptions, straightforward lies, and cold mistakes, so as to make the process very muddy. Yet I think it important to single out the distinctive SD component, for it has the perverse force of holding together the whole of a misconceived plan. In this case, the wish that sets the self-deceptive process in motion is precisely that of convincing others, Parliament, the people, and international audiences, that P is the case. Let us try to unpack this SD type: the political agent has an independent belief about P which, in turn, may be the result of ideological assumptions, myths, contested theories, cold mistakes, or a different piece of SD. The grounds for P, being controversial and divisive, or else based on classified data, are such that they cannot make P universally believed. In order to be shared, the belief that P should be based on incontestable grounds. Thus, the politician has a wish for a different piece of evidence for P, one which can be disclosed and can effectively convince other officers, politicians, and the population at large that P is the case. If they all come to believe that P, say that a certain threat to national security constitutes an immediate danger of enemy attack, then they will likewise be convinced that the decision of a preemptive strike is indeed necessary. This is the typical situation of selling war to the people: decisions to wage war are painful, controversial, and costly for any country, and the government needs to be backed by popular support, well beyond the divide between the ruling majority and the opposition.
The president and his cabinet may have come to hold that the threat to national security is indeed serious and imminent, and that an air strike on the enemy is indeed necessary: but they have to persuade others and the people that this is the case. The president's belief that the air strike is necessary rests on reasons which are either unfit for public discourse (for example, pressure from important interest groups), or inconclusive (ideology), or classified (provided by unorthodox intelligence operations), or a mixture of these. Hence, the president and his advisers need an independent reason R which conclusively supports P and, as a consequence, the air strike, even for someone who does not share the president's convictions and perspectives and does not know the undisclosed reasons for such a decision. The received view holds that R is the pretext for war, fabricated by propaganda and spin. But there might be an alternative explanation, which in any case does

The Self-Deception of Leaders, Officials and Governments



not exonerate the government from presenting a rationale which is indeed a pretext. Nevertheless, the rationale for selling the war need not be fabricated, since SD may come in handy, providing a less costly solution. Thus, the government has a wish that R be true, because R will convince people that the threat is serious and that the air strike is indeed necessary, no matter how costly. And the wish that R be true is also emotionally loaded, given time pressure and the other sorts of pressure attending the planning of the attack. These elements constitute the background conditions for the SD process to start. Evidence for R may be, as usual in politics and especially in international politics, confused and controversial, even though, seen from an impartial viewpoint, negative evidence outweighs positive information. Under such circumstances, political leaders and officers can obviously solve the problem by lying and fabricating fake evidence for R. Lying would be the straightforward means to the desired end, that is, persuading people that the air strike is necessary. Lying, however, comes with a cost, for the government is sincerely convinced that the threat is serious and a military response required; hence, like all true believers, the president and his advisers would like people to be likewise convinced. Tricking people with some false motive for attack will taint the president's sincerity and produce psychological discomfort, besides making exposure easier. In this case, if the government lies, the lie will then need to be justified as necessary and noble, indeed required for the commonwealth's sake. If the government chooses lying, this fact may create the circumstances for the third type of political SD, which I shall take up in a moment. However, lying is not necessary if the president and his advisers enter SD about the truth of R.
Obviously, there is no open and deliberate choice between the two alternatives, given that SD is not planned; but if the circumstances for SD are favorable, then the self-deceptive route may simply be taken, unintentionally producing the belief that R. In this case, lying becomes superfluous. The SD process may work as follows: the independent and sincere belief that P, that an imminent threat is the case, and its consequence, that a preemptive strike is necessary, helps and feeds the self-deceptive process toward R. The government's wish for R to be true shapes the search in favor of confirmatory evidence and blocks or explains away negative evidence. The costs of inaccuracy in considering the data for R can be discounted thanks to the firm and independent belief that P, that the threat is serious. If P is true, as it is for the cabinet, then R must be true a fortiori. Therefore, the positive evidence for R, slim as it may be, looks sufficient, because the truth of R has an independent "test" in the conviction that P is true. As a result, here we have a clear case of




circularity: R should ground P, that is, the belief that the threat is serious, which was the original problem to solve. Given, however, that the truth of P is already established in the cabinet, although not in such a way that it can be universally shared, P, the explanandum, becomes part of its own solution, inducing the government's officers to considerably lower the threshold of positive evidence required for believing R. Once the evidence is found, the president can sincerely present R and make the case for P, dismissing all negative evidence against R and stretching all the positive bits, because he has independent grounds for believing R, namely, his own belief that P. In this way, the proof that P is true (R) is grounded in the president's (independent) belief that P is true. An example of this type of political SD is the issue of WMD leading to the invasion of Iraq in 2003. At the time, Saddam's possession of WMD was presented as the crucial and final reason for the preemptive military intervention. As will be shown more comprehensively in Chapter 6, the available evidence for the existence of WMD was slim and, though confused, more negative than positive. Despite the usual uncertainty in international affairs, the existence of WMD clearly lacked grounds. Instead, it was presented as a sure fact proving the need for intervention. In fact, the Bush administration and Blair's cabinet had independent convictions about the necessity of war, convictions which were certainly controversial, to say the least, as grounds for invasion, but which also made it easy for them to believe the available evidence for WMD sufficient. The proper argument on the issue of WMD will be spelled out in the last chapter; for now I only want to point out how the second type of political SD works in the service of other-deception.
Finally, the third type of political SD does not replace other-deception, which is openly there, but works in its support, providing a useful self-serving justification for the public lie. Having told a lie, which is a moral wrong, leaders and officials come to believe, in order to keep a straight face and a positive self-image, that the lie was indeed necessary, required by exceptional circumstances, a real "noble lie" in Plato's tradition. It is thus a lie that does not challenge their self-conception as just and righteous. No one denies that exceptional circumstances may justify exceptions to the duty of honesty, but it is precisely the appraisal of the circumstances as exceptional that is often the content of the deceptive belief. And it is easy to see that if a statesman and his staff are really convinced that a special lie is necessary for the good and security of the nation, and possibly also of humanity, as happened with the Kennedys in the Cuban Missile Crisis, then the lie as a moral wrong




disappears, and the dissonance between deceiving, which is wrong, and being good, honest, and well-meaning is canceled by a redescription of the lie as "doing what is right in hard circumstances." In sum, no matter which type of leaders' SD we are dealing with, whether ancillary to deception or not, political SD always has the make-believe effect and hence is always intertwined with deception. I do not make the opposite claim that political deception is always intertwined with SD, leaving open the case of straightforward lies and of liars who do not use the comforting blanket of SD, not even to justify themselves. I suspect that the political liar as assumed in the realist tradition, the Machiavellian hero, so to speak, flourishes in societies where the moral values of truth and honesty are not generally considered so prominent. If the social context is not deeply imbued with the crucial importance of truth and transparency, there is less need to turn to SD as an aid for better lying or for not feeling guilty. Comfort is rather found in the common opinion which underplays the moral wrongness of lying, and which has occasionally been worked out into proper theories, such as the medieval doctrine of double truth. In contemporary democracies, however, any politician must at least pay lip service to truth and honesty, and in these circumstances SD comes in handy. Thus, my thesis is that even though not all political deception is SD, all SD implies people's deception too, and often SD is either an efficient substitute for or a support to other-deception. If this is true, as I argue, then in order to face the normative challenge posed by deception to democratic theory, we cannot dispense with considering SD and the special treatment it requires.

5 Collective SD: How It Works

Before turning to the normative discussion of what is to be done about SD, a crucial aspect of how political SD works must be considered. Contrary to what happens in cases of personal SD, political SD is almost

See Eric Alterman, When Presidents Lie: A History of Official Deception and Its Consequences, Viking, New York, ch. "JFK and the Cuban Missile Crisis." Alterman tells the story of Kennedy's lie about the secret deal with the Soviet premier Khrushchev. Contrary to most interpretations, Alterman's reconstruction supports the view that the lie was not required by the circumstances, hence not justified. On his account, admitting the deal would not only have been possible but would also have avoided many subsequent mistakes based on the official story. Kennedy's lie produced the illusory belief that toughness in international crises was the winning strategy, while later mistakes proved that belief dangerously false.




always a collective product. No president, no matter how powerful according to the country's constitution, makes his or her decisions alone and in isolation. The deliberative process takes place in teams, and there are then external checks on the decision, such as a parliamentary vote, or other more or less formal controls, such as a UN Security Council deliberation, the consent of NATO or other alliance members, or even just internal support by interest groups and citizens and external support by crucial allies. This fact is crucial for understanding how political SD is produced and for working out institutional devices against it. We can imagine an inner circle around the political authority, working within other, wider circles, each contributing its part, with a different weight, to the definition of the problem and to the selection of the action to take. If the contextual condition is favorable, collective SD is likely to be produced, since the decision making is always intertwined with a mix of political and personal interests, thus fulfilling the motivational condition. There is nothing like an ideal problem-solving setting where each participant is motivated only to find the best possible outcome. The contextual condition is usually represented by a real or imaginary threat to normal day-to-day politics, a threat that calls for momentous decisions. The decision comes from the president's or premier's team and is usually phrased taking into account the constraints posed by the external circles: domestic politics, powerful interest groups, parliament, electorates, allies, and so on. Given the contextual and the motivational conditions, let us see how collective reasoning is liable to be distorted so as to reach the leader's preferred conclusion. In order to understand the production of collective SD, the notion of groupthink can be helpful.
Groupthink is a category put forward by Irving Janis in researching the causes of major political fiascoes such as the Bay of Pigs and Watergate. By groupthink, Janis means mechanisms of group psychology that lead to systematic misperception and miscalculation, hence to ill-conceived plans. As will become apparent, groupthink is only a different label for describing a process of collective SD. Groups per se are not the problem: in a sense, if a decision process takes place in a group, there should in theory be more opportunity to check information, personal biases, and idiosyncrasies, and to test hypotheses via



Janis, Groupthink.




criticisms and diversity of opinions. In this light, the group might represent a barrier against SD. The group, however, can harbor pathologies that lead to the systematic circumvention of all potential checks and internal criticisms. Whether the group prevents or favors SD depends on (a) the way its members have been selected and (b) its internal organization. According to Janis, the distortion is due to the desire to keep the group a comfortable and cozy place where the individual member is recognized and willing to maintain group cohesion and harmony. From this fact, he derives two tendencies: a first tendency to please the leader, hence to reinforce him in his opinions and attitudes, and a second tendency to conformity, which makes it extremely uncomfortable for anyone in the group to voice a critical view of whatever has gained consensus. To my mind, though, the desire to keep the group harmonious and the two implied psychological tendencies derive from the structural organization of the group as the leader's team, where competence is intertwined with personal relationships and with the group's explicit or implicit power structure. From this perspective, the wish to please the leader is something more than a psychological dynamic: positions within the group and career prospects are seen by each member as dependent on the leader's favor. Contrary to what happens in deliberative groups, in which each member is equal to all others and each participant pools her competence, intelligence, and situated perspective into the collective problem-solving, enhancing the chances of a good outcome, here the power structure and the dependency on the leader greatly facilitate conformism, factual inaccuracy according to the leader's perceived wish, and the emergence of false beliefs.
When a political leader is facing an international crisis perceived as a threat to national security, her wish is to overcome the danger and to make it a personal success. On the one hand, the fulfillment of her wish is



See H. Landemore, Democratic Reason: Politics, Collective Intelligence and the Rule of the Many, Princeton University Press, Princeton. I shall use the expression "power structure" instead of "hierarchy" because the president's inner circles are not always formally organized as a Cabinet or Council. That was particularly the case with the "ExCom" group picked by John Kennedy to face the Cuban Missile Crisis, so much so that the loose atmosphere within the group was disconcerting for senior officers such as Acheson. See L. Freedman, Kennedy's Wars: Berlin, Cuba, Laos and Vietnam, Oxford University Press, Oxford, p. . See L. Hong and S. Page, "Groups of Diverse Problem Solvers Can Outperform Groups of High-Ability Problem Solvers," Proceedings of the National Academy of Sciences; Landemore, Democratic Reason.




constrained by all external influences, among which domestic considerations are the most powerful; on the other, she depends on her experts and advisers to appraise and assess the situation for a successful response. What the leader communicates to her team is often something different from a problem-solving attitude calling for a vigilant appraisal of the facts of the matter and for a prudent calculation of alternative options according to a cost-benefit analysis. She communicates a wish to solve the crisis so as to please her sponsors and most faithful political allies, to stand up to the opposition's criticisms, and, by the same token, to emerge as the brave and successful leader. These constraints often pull in opposite directions, making a reasonable solution, grounded on rational beliefs, all the more difficult. Reconstructing the group decision-making process in abstract terms, we can see how the initial conditions for SD are met.

• The group, as a collective subject S, has the wish that P (that the crisis is successfully solved in line with the administration's sponsors and friends); the wish that P is self-serving and emotionally loaded, because the political career of the leader and of the whole group depends on P.
• The plan Q fits with the wish that P. But in order to make a case for Q, some factual conditions must preliminarily be met.
• S, or some subset of S, has actually met with evidence contrary to those conditions, and has appraised that this evidence is likely to reduce the chances of Q's success. Say that, in the general uncertainty of international politics, the only certainty is usually negative evidence, insofar as some data provide clear grounds against Q, making the attainment of Q, hence of P, unlikely to succeed.
• In such a situation, given the wish to please, some advisers may well start an individual self-deceptive process by discarding the negative data and looking instead for (biased) evidence supporting Q as a solution. The emerging proposal in favor of Q is then self-deceptive, based on a motivated interpretation of data against the available evidence. Alternatively, the proponent need not be self-deceived about Q at this juncture: he may well be aware of the actual lack of warrant for Q but, being eager to please the leader, be willing to push it anyway.
• Once Q has emerged in the group, the collective SD process can start. The proposal is likely to be well received by the leader, and this very fact will convince the most eager to please to follow. Such initial credit within the group is often taken as independent positive evidence for Q by the leader, by the proponent, and by other members of the team. The deceptive conviction that Q is good is thus reinforced by the emerging consent, which is instead induced by the tendency to please.
• When the consensus of the majority is reached, the conformity tendency kicks in and takes on board the most recalcitrant group members.

It is important to stress a feature of collective SD: each individual's assent is taken by all the others as evidence in favor of the proposal. Thus, even if the original proponent did not really believe in Q's chances of success but simply wanted to please, he might eventually be taken in by the lack of dissent and criticism, which provides a source of apparently independent confirmation. The different steps to SD, which I have described in Chapter 1, are taken by different persons in the group. At the end, each is convinced by the conviction of the others and by their motivation, the whole process remains opaque, and the deceptive belief is collectively shared. Yet, considering the high stakes involved in decisions such as military intervention, why is a non-vigilant attitude in data processing selected? We have seen that the selectivity of SD is explained by reference to the costs of accuracy/inaccuracy. The costs of inaccuracy for a misconceived intervention plan are indeed high; this consideration would seem to exclude lack of vigilance, hence it would seem that the cognitive condition for SD is barren in international crises. The cognitive condition, however, is fulfilled thanks to the perverse effect of collective reasoning, which induces each team member to weigh the stakes of the collective decision less than his or her own personal stakes in the group's power structure.
Elaborating on the Bay of Pigs fiasco, Janis highlights two collateral factors enabling the conformity tendency: "no one in the group accepted complete responsibility" and "collective learning was inhibited because subordinates were at personal risk if they told the truth." The diminished-responsibility component lowers the cost of inaccuracy for each member, thus creating the crucial circumstance for SD to kick in. At the same time, the eagerness to please the leader is matched by the fear of contradicting a leader who can fire the naysayer.

See Janis, Crucial Decisions, The Free Press, New York, p. . The risk of dissenting from the leader, and from the whole team, found expression in what happened to Undersecretary of State Chester Bowles in the Bay of Pigs fiasco. While Secretary of State Dean Rusk entertained the same doubts but did not want to play the "naysayer," Bowles voiced his doubts, which eventually cost him his career despite his being right about the operation's result. The lesson that Kennedy's team learned from Bowles's firing was that loyalty is the highest value and skepticism is a very imprudent strategy. See Lloyd C. Gardner, Pay Any Price, Ivan R. Dee, Chicago, p. .

Together, these factors provide a strong motivation for each member to believe that Q is feasible. Each member is then placed in favorable circumstances for his or her own SD to start: each has an eager desire to believe that Q will be successful, despite contrary data, given the perception of low personal costs of inaccuracy. The feeling of limited responsibility among group members and the fear of voicing criticism suggest that, contrary to what Janis seems to think, groupthink is not so much induced by affective elements, such as the coziness of the group, as by its power structure. If one feels oneself a mere node in a broader power web, one tends to feel less responsible for the group decision, which is not perceived as a rational collective choice that each member equally contributes to reaching. As we have seen previously, feeling powerless about producing a certain effect is precisely one of the conditions for SD to take place. If someone feels powerless to change reality, she may change her perception of reality so as to falsely realign her beliefs with her wishes. I am not arguing that each team member falls prey to SD and comes to hold Q genuinely, though deceptively. I am, rather, pointing out that each of them is placed in circumstances favorable for SD to take place. The actual collective production of SD may be sustained either by hypocrisy or by the individual SD of each member, who is trapped in a power structure, under pressure to provide a solution on which his or her career may depend, and powerless as to an alternative solution contradicting Q. Yet, if the diffusion of responsibility within the group concurs to lower the costs of inaccuracy for each team member, this does not apply to the president's responsibility. How, then, can the president discount the costs of inaccuracy?

First, if the decision under discussion is covered by the deniability clause, the president believes that the consequences of a failure will not fall on her head, and hence has a reason to discount inaccuracy costs. Second, the president easily takes the consensus on P shared by the whole group as proof of the solid grounds for Q, which, in the second type of political SD analysed earlier, has confirmation in the independent belief that P. In such cases, her vigilance is dulled by the feeling of assurance about the grounds of Q. Third, in cases of covering up a failure or justifying a lie, the president and her cabinet have nothing to gain from accuracy, for they simply have to accommodate an unwelcome reality beyond the possibility of change.




Coming back to groupthink, even the potential critic, instead of reversing the collective SD, can become an ingredient of its collective production. Consider the team member who actually harbors doubts about the interpretation of the problem and about the proposed solution. She very likely feels divided between her role, her loyalty, and her duty to comply, on the one hand, and her doubts, which give her reasons to oppose Q, on the other. Worries about her career and the relative feeling of powerlessness concerning alternatives add anxiety and stress to her uncertainty. In such a situation, hypocrisy is the first and obvious course of conduct. Adopting a hypocritical posture, she may preserve her loyalty and save her career at the expense of her integrity and honesty. The member pretends to agree with the plan, spares herself a difficult dissenting position, and justifies herself by her powerlessness. Yet hypocrisy comes at a cost in terms of one's moral integrity and courage and may engender psychological discomfort. The latter can be reduced by a justificatory and possibly self-deceptive story, or it can be bypassed by a different type of SD realigning her beliefs with the group's and with her conduct. In this case a sour-grapes mechanism looks appropriate: the potentially critical member evades her doubts and reshapes her perception of the facts in order to make them fit with Q and in order to feel comfortable with her attempt to please within that exclusive circle. For, in addition to power, exclusiveness is the other crucial structural feature sustaining collective SD. Dependency does not encourage free thinking but, rather, willingness to please and a lack of responsibility as to the final decision. Exclusiveness, in its turn, provides an incentive not only to stay put, but also to climb the implicit group hierarchy by means of loyal support to the leader.

Summarizing, the circumstances for the potential critic to turn into a self-deceiver are:

1. The proposed plan Q meets with the leader's wish that P; hence the subordinate S¹ has a corresponding wish that Q be true, because she wants to be a loyal and honest supporter with good career prospects.
2. The wish that Q is thus, for S¹, self-serving and emotionally loaded.
3. S¹ knows of evidence contrary to Q.
4. At this point, S¹ has three alternative routes:
   i. S¹ could form a belief according to the evidence and try to change the leader's belief that Q is a good plan. This move, however, is not available to S¹ without damaging her future career.
   ii. S¹ can alternatively pretend that Q is a good plan, thus furthering her career expectations, but at the price of the discomfort due to her dishonesty.
   iii. S¹ can set in motion a train of thought about Q which will be affected by biases and unintentionally arrive at confirming Q, hence reducing the dissonance between being a loyal official with good chances of career advancement and being a frustrated critic.

The official does not choose course (iii) over the others; but for SD to start, under the circumstances it is sufficient for her to linger over Q, and her reasoning may easily fall prey to biases under the influence of her wish to believe that Q is a good, feasible plan. I am not saying that this is what always or most frequently happens to potential critics on the team, but looking at the records of Kennedy's decision-making teams, concerning both the Bay of Pigs operation and the Cuban Missile Crisis the following year, it is clear that the president's advisers failed to provide critical checks. On the lack of dissenting voices, Arthur Schlesinger Jr. wrote:

Our meetings were taking place in a curious atmosphere of assumed consensus. The CIA representatives dominated the discussion. The Joint Chiefs seemed to be going contentedly along. They met four times as a body after March to review the Bay of Pigs project as it evolved; and, while their preference for Trinidad was on the record and they never formally approved the new plan, they at no time opposed it.

Many participants afterward admitted that they did not want to play the naysayer and that the firm conviction displayed by others took them in. In this way we can explain how dissenting voices are automatically silenced, without any imposition, despite drawbacks of the plans evident to any external observer. The unintended outcome is that a general veil of opacity appears to bind the whole group together, though different members may come to be self-deceived by different routes.

 

A. Schlesinger Jr., A Thousand Days: John F. Kennedy at the White House, Riverside Press, Cambridge, MA, p. . Schlesinger, A Thousand Days, p. , on his own regret, followed by his own justification, at having been too complacent about a project he objected to. See also Blight and Kornbluh, eds., The Politics of Illusions, p. ff. See also the comments made by Allen Dulles after the fiasco about the "doubting Thomases" in Kennedy's entourage who were not powerful enough to cancel the attack, cited by T. Higgins, The Perfect Failure: Kennedy, Eisenhower and the CIA at the Bay of Pigs, Norton, New York, p. .




To sum up, the contextual condition for collective SD is met when a momentous decision has to be taken, for example in an international crisis. The leader has a wish that P, and that leads some adviser to propose a plan Q that, if successful, will satisfy P. Plan Q thus represents the collective motivational condition for SD. Plan Q fulfills the leader's wish but is based on groundless assumptions, contradicted by data, such as to make it very unlikely to succeed. This is the premise for the cognitive condition to kick in as well. Despite its drawbacks, Q is uncritically received by the leader and by some of the most eager-to-please members. This initial credit will gain Q other supporters; their support will provide what appears to be independent evidence that Q is good, which will eventually convince the initial proponent as well, or reinforce his conviction. In such a consent-building mechanism, each participant feels diminished responsibility relative to the adoption of Q, while each has a personal incentive to converge on the leader's perceived desire. As a result, though the costs of inaccuracy relative to Q are very high, the collective process leading to Q tends to obscure the risk. Each participant has personal stakes that lower inaccuracy costs for him or her, and no one feels actually responsible for the final decision, while the leader is reassured by the emerging consent, deceptively taken as evidence in favor of Q. Potential critics, then, find themselves trapped between the costs of dissent and their perceived powerlessness, so that they may enter SD of the sour-grapes type, unless they choose hypocrisy, which will probably engender a justificatory SD in order to be sustained. As a result, the group emerges of one, self-deceived, mind. At this point, Janis remarks, the "illusion of invulnerability" often kicks in, further obscuring the risk of an ill-conceived plan.
Janis sees the illusion of invulnerability as following from the exclusivity of the group, which, in his phrasing, consists in the “preconscious assumption that everything is going to work out all right because we are a special group.” I tend instead to interpret the illusion of invulnerability as a supervenient property of the collective SD or of the collective wishful thinking. Once the leader’s wish to face the threat successfully has found a common response in a collectively shared plan Q, the group emerges confident through shared consent, as if that consent were a sign of a superior problem-solving capability. The plan has met no significant internal obstacle; hence the leader may think not only that Q is right, but also that the group thinks together, with no discrepancy, and that such cohesiveness is an extra power, a special gift, a unique quality of his team, which, consequently, can climb any mountain, just because its members all move along at the same harmonious pace.

See Janis, Groupthink, p. .

The illusion of invulnerability played its role in the overconfidence of Kennedy and his team at the time. Similarly, a great deal of optimism was displayed by President Bush in his meeting with Prime Minister Blair on January , . Both Bush and Blair thought that the war would last four days, and Bush and Condi Rice expressed confidence about the rosy aftermath of the invasion, referring to plans at hand and appearing to be in control. Seen ex post, what is striking is that in both cases overconfidence was displayed in connection with badly misconceived plans which did not stand a chance of succeeding. In this respect, it seems reasonable to view the illusion of invulnerability as supervenient upon the collective motivated irrationality, rather than as a group-psychology trait. In turn, the collective motivated irrationality may be a specimen of SD, or simply a case of wishful thinking, as the overconfidence expressed by the Bush administration about the imminent Iraq invasion probably was. If we look into real cases of group decision making, as they are reported by the participants afterward, we find that many say that they harbored doubts, some of which were actually spelled out, but only half-heartedly, while others were never voiced. At the end of the day, however, all seem to have been ready and content to accept any reassurance and to silence their doubts. The few who actually raised clear criticisms were not taken seriously, were swiftly marginalized, and afterward often found their careers ruined.

See Don Van Natta Jr., “Bush Was Set on Path to War, Memo by British Adviser Says,” New York Times, March , . As an example, I shall quote A. Schlesinger again on the way in which Chester Bowles’s criticisms of the plan were developed and received. When Bowles took part in one of the meetings, sitting in place of Dean Rusk, he “was horrified by what he heard but reluctant to speak out in his chief’s absence” (p. ). He then wrote a strong memo which he handed to Rusk. Rusk reassured him, “leaving him with the impression that the project was being whittled down into a guerrilla infiltration, and filed the memorandum away” (ibidem, my italics).

6 What Is to Be Done?

An important reason for favoring the use of SD in political analysis concerns the possibility of adopting prophylactic measures. I intend here to reverse a common suspicion about the application of SD to political decision making. Imputing SD instead of straightforward deception has no exculpatory impact on politicians’ responsibility. As I have argued, SD does not in any case exonerate the self-deceiver from responsibility. In the political domain, the responsibility is heavier than in personal life, given that (a) the consequences of leaders’ SD are far-reaching and harmful, and (b) politicians carry political responsibility beside moral responsibility, and are held accountable for it by citizens. Let us return to the example of WMD as the reason to wage war against Iraq. Suppose, as I have argued above, that the belief in their existence before the war was a genuine case of SD by Bush and Blair, and by some members of their inner circle. Would the sincerity of their belief then lessen their responsibility for deceiving their constituents into war and the massive destruction which followed? Would the deaths provoked by the invasion be at least partially justified by the fact that they were in good faith and meant well? I do not think so, given that meaning well while causing deaths is never morally, let alone politically, enough. In addition, they might have been sincere concerning WMD, but their SD concerning WMD was in any case ancillary to the deceptive obfuscation and misinformation about the real motives for the invasion. Thus, a retributive normative response is appropriate for SD no less than for straightforward deception. In addition to the retributive response, however, SD presents an advantage: while lies and mistakes can be detected only with hindsight, the chance of SD taking place can instead be foreseen. This fact opens up the possibility of working out some preventive strategies. Without minimizing the difficulty of this task, SD, in principle, widens the range of normative responses from merely retributive to preventive measures. The treatment of political SD and of its momentous effects must be focused on prevention. The prevention of SD requires, first, the acknowledgment of the problem and, second, the cultivation of a disposition to accuracy, to be then supplemented by forms of institutional precommitment.
A first prophylactic measure, then, concerns the political education of leaders and officials: they must learn to be disciplined about information accuracy as part of their responsibility toward the public. This training should be made a condition for holding office: not just moral blame, but political accountability for officials’ SD is in order.

Good intentions cannot be enough in politics, as Max Weber vividly illustrated in “Politics as a Vocation,” in From Max Weber: Essays in Sociology, transl. and ed. by H. H. Gerth and C. Wright Mills, Oxford University Press, Oxford, , pp. –. Weber’s preference for the ethics of responsibility in the political realm has recently been taken up and applied to the (negative) case of Tony Blair’s position on the Iraq war by David Runciman, The Politics of Good Intentions, Princeton University Press, Princeton, .




The collective dimension of political SD is at the same time a curse and a blessing. It is a curse because the mechanism of groupthink, as I have rephrased it from Janis’s notion, facilitates a self-deceptive assessment of the facts of the matter. It is a blessing, or at least a mixed blessing, because collective SD offers more opportunities for prevention than personal SD. The leader alone, despite his resilient wish that P, cannot come to believe that Q is a good plan by himself; he needs his advisers and counselors both to formulate the plan Q and to support him in believing and pursuing Q. We have seen that the reasons why team members tend to be inaccurate in their diagnosis and planning, intentionally or not, often lie in the two motivations embedded in the power structure of the group: the wish to please and the conformity tendency. These two motivations are probably harbored in any group following a charismatic leader, but I would stress that the power structure and the exclusive nature of the team surrounding a president or a prime minister make the charisma of the leader unnecessary, or better, only a contingent feature. Power makes members driven both by fear – of losing their position, of being overcome by others – and by the ambition to advance ahead of others. The fact that the group is so exclusive makes them willing, or at least acquiescent, to play their role within the group. Even though their position may occasionally be uncomfortable and inconsistent with their autonomous judgment, they still have their share in power at its top, and that is why their job is generally envied and regarded as most prestigious and desirable. This social attitude helps them feel special, above commoners, and in that sense contributes to the illusion of invulnerability.
Even if they have been selected for their competence and ability, their intelligence is unlikely to represent a winning tool in the problem-solving context, for the conformity tendency does not capitalize on collective intelligence in general – as we have stressed earlier. If the power structure plays such a crucial role in the complex mechanism conducing to collective SD, it appears to be the appropriate target of SD prevention. In personal cases, precommitment against SD can work in the form of an authorized referee; in political cases, we must imagine either that similar figures be placed within the leader’s team or that an independent body oversee the decision making of the group. Both options are only general suggestions, to be properly worked out at the institutional level, and both meet with immediate objections and problems. The common point of both is to provide decision makers with independent viewpoints capable of breaking the internal tendency to conformity and of looking at data with an unmotivated gaze. To put it differently, their point is to institutionalize the devil’s advocate principle, which many analysts as well as many participants have declared crucial for avoiding misconceived policies. The independence of the overseers, whether internal or not, must be of two kinds: (a) personal independence, insofar as the personal prospects and careers of such devil’s advocates must not depend on the leader’s favor or disfavor; (b) cognitive independence, with reference to the motivational dynamics taking place within the team, so that they are in a position to review the plan with unprejudiced eyes and to assess the truth-value of data interpretation with a diagnostic attitude. How can such a double-layered independence be secured? Let us examine the two hypotheses in turn: (a) devil’s advocates within the team, and (b) an independent body of overseers acting as referee. The first option requires that the devil’s advocates be appointed by an independent body (perhaps Parliament) to which they are directly accountable, with the specific and institutional task of being critical reviewers. They should exercise a check on the collective decision and try to understand the role played by the various members in reaching a decision, so as to remind everyone of his or her precise responsibility. In other words, what I have in mind is something similar in spirit to the adversary system in courts. There are clear obstacles to the implementation of this option. For one thing, it will be resisted by the leader’s group on the ground that the presidential team should be based on loyalty and trust in order to work smoothly and efficiently, free from bureaucratic constraints. We have seen, however, that trust and loyalty are usually tainted by jealousy and fear, conformism and uncritical assent within the power structure. Nevertheless, such a proposal is likely to be strongly opposed by the cabinet.
For another, were the suggestion ever accepted, the referees might risk losing their cognitive independence by working side by side with the other team members. It is not clear that their formal role would spare them the acquisition of biases and prejudiced views from other members of the team. Alternatively, they might be marginalized by the group’s collusion and de facto deprived of the relevant data. In other words, there is a twofold risk: being taken in by the team and losing independence, or being shut out of the relevant work and information. The other option, an independent body of overseers acting as referees on certain well-defined governmental decisions, bypasses the two correlated risks just mentioned, but raises other difficulties. If it is Parliament’s job to appoint this body, is the latter a parliamentary committee, or is it something outside Parliament? If it is something like a parliamentary committee, there is the risk of being trapped in power games of a different sort than those going on in the cabinet and in the team of the president’s advisers, but still affecting both personal and cognitive independence. If it is instead appointed ad hoc, then the problem of its democratic credentials may arise. Other complications concern when such a body should be consulted, for which kinds of decision, who decides when to consult it, and whether its opinion is merely consultative or in some way binding. From what is known about collective problem solving, the outcome produced by a diverse group is superior to that produced by a highly competent and smart but homogeneous group, provided each member participates equally in the deliberative process. Generally speaking, the two kinds of independence to which I have referred earlier would be best implemented by a group of citizens selected by lottery for each momentous decision, such as the decision to wage war. This solution would preempt the criticism of the democratic credentials of the body of overseers, but its implementation is complex. Whether such a suggestion is institutionally feasible I shall leave open to experts in institutional design. In any case, in order to move forward in the direction of institutional prevention, some preliminary conditions should be met. The crucial precondition is the acknowledgment of how collective SD works and of its related danger, lurking in situations of exceptional political threat leading to momentous political decisions. If the risk of SD – how it is produced and what effects it has – is made clear and acknowledged across the board, then a consensus on the need for its prevention may be reached. From such a consensus the alternative options may be explored, weighed, and properly worked out into a solution acceptable to all. But no shared acknowledgment of the risks of SD and of the need for its prevention is viable at present.
See Landemore, Democratic Reason.

There is an impressive literature on the Bay of Pigs fiasco, on the Cuban Missile Crisis, and on the Vietnam War. Most participants writing their memoirs, and historians recounting the events, have underlined the lessons to be learned from such episodes. Although the lessons were meant to teach future politicians and officials to avoid “being so stupid,” and although many sensible suggestions are made, a proper understanding of how intelligent, competent, responsible, and well-meaning people came to behave so stupidly is not really provided. There are many hints and many passages suggesting SD: but the phenomenon is not taken seriously, while attention is focused on mistakes, on the one hand, and on deception, on the other. The fact that mistakes are often acknowledged as “willful” and induced by “illusions” is not given a second thought. Similarly, the fact that deception cannot be the whole or the right story, given that it went against the deceiver’s interest, has not elicited further analysis. Yet it is crucial to understand the working of SD in order to take measures against it. Different tendencies are at work here against the recognition of SD as an issue. Once a government’s deception has been exposed, citizens prefer to think of it as a straight lie, for which a more precise responsibility and a corresponding liability can be assigned. For their part, the president or prime minister prefers to admit the mistake, while holding responsible and accountable the advisers who originally proposed and sponsored the plan. In this way, the villain is found, the guilt assigned, and the reparation of political trust seems at hand. However, if political SD works as I have argued earlier, such collective rituals of atonement will not even touch the original problem, and, what is worse, SD will resurface on the next occasion. In sum, in order to treat political SD, the following steps are necessary:

1. The issue should be acknowledged as such, instead of being obscured as a specimen of political deception and/or misperception and mistaken judgment.
2. Political SD should be understood in its nature and working.
3. Moral training about SD should be required for all politicians and people holding public office, to make them alert to the circumstances in which SD is most likely to occur.
4. Institutional devices should be designed so as to work as precommitment against SD.
5. Given that political SD is a collective product that takes advantage of the power structure around the head of the executive and of the administration, such devices should interfere with the internal hierarchy and constitute a check on the two factors implementing conformity, that is, the sense of diminished responsibility of members as an effect of being a token in a pyramid, and the sense of personal risk for group critics.

Institutional constraints are not meant to substitute for politicians’ duty of moral training against SD, nor to lessen their political and moral responsibility and the related moral sanctions, but simply to supplement moral reasons, whose appeal is not always strong enough.




7 Concluding Remarks

In this chapter, I have argued that there are good reasons to think that SD is a widespread phenomenon in politics as well as in personal life, and that political SD exhibits some special features of its own. I have then considered and responded to the two immediate objections to political SD, one reducing it to cold mistakes and biases, the other to straightforward deception. Having made the case for political SD, I have then proposed a typology based on the link between different kinds of political SD and other-deception. The first SD type induces other-deception as a byproduct; this type can be divided into two subtypes, depending on whether SD concerns future or past actions. In the second type, SD is instead ancillary to other-deception. Finally, in the third type, SD covers up other-deception. While I think that this typology represents a helpful tool for the analysis of real cases, I have also stressed that ideal-types cannot be found pure in the complexity of political reality: they are all mixed up, and also intertwined with other phenomena such as deception, pretension, ideology, ignorance, misperceptions, and mistakes. In such a gray area, however, genuine episodes of SD can be detected if the contextual, cognitive, and motivational conditions are jointly present. In the next chapter, I shall move to the analysis of political cases; there it will become apparent that there is an intricate web of beliefs, assumptions, attitudes, and expectations, some of them deceptive, some self-deceptive, some misguided. My attempt is not to make SD the main explanatory mechanism of the case under scrutiny, but, rather, to find its proper space and role. In my reading, SD crucially provides the account of what the participants acknowledge ex post as their stupidity. In addition to its explanatory merit, SD analysis can be recommended also for normative reasons, because the understanding of its working is the first step toward its treatment.
Preventive measures should take the form of institutional arrangements that avoid letting uncritical thinking go unchecked.

 

Kennedy and Cuba

1 How Could I Have Been So Stupid?

“How could I have been so stupid?” This was the first question that JFK asked himself after the Bay of Pigs fiasco. He felt shame and regret about his own misjudgment, because it was clear ex post that the plan could not have worked, being the result of discounted data and wrong assumptions. Hence the question, which concerned himself and the whole group too: how could all of them, intelligent, competent people, experienced either in the military or in politics, have committed to such a misconceived and weak plan? A simple mistake by one of them should have been spotted by someone else. How, then, could it happen? The hypothesis of SD is thus worth serious consideration. Many memoirs, reconstructions, and commentaries on the Bay of Pigs have in fact tried to explain the fiasco through illusions, evasion of hard facts, being taken in, willful misreading of evidence, unexamined assumptions, suppression of doubts for fear of being a nuisance, and the like. McGeorge Bundy, then special assistant to the President for National Security Affairs, in a memo to President Kennedy on the th of April , just after the fiasco had become apparent, wrote:

Hope was the parent of belief . . . In prolonged balancing and rebalancing of marginal elements of this operation, all concerned managed to forget—or not learn—the fundamental importance of success in this sort of effort.

 

Quoted in Freedman, Kennedy’s Wars, p. . Schlesinger, A Thousand Days, pp. –; L. Vandenbrouke, “Anatomy of a Failure: The Decision to Land at the Bay of Pigs,” Political Science Quarterly, , : –; Gardner, Pay Any Price; L. Freedman, Kennedy’s Wars, Oxford University Press, Oxford, , pp. –; J. G. Blight and P. Kornbluh, eds., The Politics of Illusion: The Bay of Pigs Reexamined, Lynne Rienner, Boulder, CO, ; D. Munton and D. Welch, The Cuban Missile Crisis: A Concise History, Oxford University Press, Oxford, , pp. –.





Limitations were accepted that should have been avoided, and hopes were indulged that should have been sternly put aside.

References to indulged hopes that led to forgetting clearly suggest SD, and some scholars have gone beyond mere hinting. I have already discussed Irving Janis’s theorizing of groupthink, which he elaborated precisely from a reflection on the Bay of Pigs. Similarly, Lucien Vandenbrouke, examining different models for explaining decision making in foreign policy, stated that “Neither the goals or values of individual actors nor the pulling and hauling of the players seem adequate to explain the decision-making’s persistent refusal to face up to unpleasant facts.” On this basis, he suggests that a better explanation must refer to psychological mechanisms such as defensive avoidance and wishful thinking. Even more explicit, in this respect, is Daniel Goleman, who makes use of the Bay of Pigs to point out that defense mechanisms resulting in SD, although occasionally adaptive, may cost self-deceivers a high price. The blocking of information may in fact be crucial in distorting a decision, as the Bay of Pigs perfectly exemplifies. In this case, no one has claimed that deception provided a better explanation of the fiasco than SD, given that all the people concerned had staked a great deal on the success of the operation and its failure was a blow for everyone, though different people paid different prices. Deception, therefore, would not be understandable in such circumstances. Many have pointed out various kinds of mistakes by different agents, but they have also noticed that the mistakes were accompanied by a stubborn will to carry out the plan and by a basic confidence that it could not fail.
That is why Kennedy “was crestfallen” at the failure, and the Cuban exiles and Brigadistas were in a state of shock and could not believe what had happened. After the fiasco, the most common explanation, the one which was relayed to, and stayed with, the American public, was that the CIA, and more specifically the two senior officers in charge of the operation, Allen Dulles and Richard Bissell, had deceived the president and his Cabinet into a faulty plan. Kennedy’s mistake was to believe them, and that was excusable because he was so young in office and still inexperienced in the administration. According to this story, Dulles and Bissell led the Cabinet to believe that an anti-Castro movement was ready to take action if only the United States helped the insurgents with military supplies and with a force of CIA-trained expatriates to ignite the revolt. This comforting account, however, does not stand up to closer inspection. Even Schlesinger’s reconstruction, partisan to the President as his view is, admitted that the simple CIA betrayal thesis was not sufficient. He acknowledged that while the whole Cabinet was led to think that the operation was meant to trigger a rebellion, Dulles and Bissell did not explicitly say that an internal rebellion was going to happen and to be the winning force of the operation. They neither said it explicitly nor denied such a possibility: they left such an interpretation open. No matter what they said or suggested, the point is rather: why should Dulles and Bissell “betray” the president, against their own self-interest? At most, they were badly mistaken, yet the amplitude of their mistake, compared with their previous records, was such as to make it difficult to believe it was just a mistake.

As a matter of fact, the original plan, presented to Eisenhower in the summer of , was an infiltration plan, based on the hypothesis of triggering insurrection by anti-Castro groups helped by fighters highly trained in guerrilla warfare. This plan was then abandoned because Bissell lost confidence in fomenting Cuban insurgency, probably because intelligence reported that Castro’s hold on Cuba was well established and that anti-Castro movements were not sufficiently organized or coordinated. This disparaging news notwithstanding, Bissell then turned from the idea of fomenting insurrection to that of an Anzio-type amphibious invasion, which implied an escalation of the original operation.

McGeorge Bundy, memorandum to President John F. Kennedy, April , , quoted in The Politics of Illusion [Document .]. Janis, Groupthink, and Janis, Crucial Decisions. Vandenbrouke, “Anatomy of a Failure,” p. . Daniel Goleman, Vital Lies, Simple Truths: The Psychology of Self-Deception, Bloomsbury, London, . Kennedy’s reaction is described at length by Mark J. White, The Cuban Missile Crisis, Macmillan, London, , pp. –. The reaction of the Cubans and Brigadistas is reported by Rafael Quintero in The Politics of Illusion, p. .
And here the first question arises: why did negative news about a guerrilla infiltration have the effect of enlarging the plan and raising the stakes instead of dumping it altogether? It does not look sensible. The more so because the escalation of the plan – the landing of an expatriate force backed by a tactical air force – met with the constraint posed by Kennedy against any use of US military force in the operation. Bissell and Dulles clearly wanted to go ahead with the plan, despite bad information on the internal insurgency and Kennedy’s constraints. Overthrowing Castro was considered a strategic goal in the anti-Communist campaign of the Cold War; and it was a goal shared by the previous as well as the present administration, by the Joint Chiefs and the CIA. Given that the Eisenhower administration had entrusted the CIA with the task of coming up with a plan for overthrowing Castro, Dulles and Bissell were naturally going along with their task. Yet one may speculate that their task should have consisted in drawing up a workable and feasible plan, not just any plan for its own sake, as if it were a school assignment. Moreover, shared goals do not imply shared means: the administration wanted deniability of US involvement, and this requirement put significant constraints on the means to be deployed. If deniability doomed the chances of the invasion plan, then a rational agent would scale the plan down, postpone the goal, and work out some alternative long-term strategy. If they stubbornly clung to their idea, making the necessary adjustments so as to meet presidential requirements, their wish to go ahead could not simply be that they wanted Castro out of the way: they must have felt their prestige, self-esteem, and role bound to the larger operation. They became advocates instead of experts and advisers, as Schlesinger pointedly remarked. And yet, in order to make sense of their pushing forward with the larger plan, given that they could not intend a fiasco, they had to believe that the plan was going to work; but how could they believe that, given what they knew?

See Schlesinger, A Thousand Days, p. . The shift in the plan from infiltration to invasion happened in the passage between the two administrations. Why the plan was changed has never been conclusively established, although the more widespread interpretation is that Bissell had come to doubt the size and organization of the counter-insurgency and had intelligence of Castro’s hold on Cuba. For a viewpoint from inside the CIA, see Samuel Halpern, in The Politics of Illusion, p. . For the dominant interpretation, see Freedman, Kennedy’s Wars, p. .
In fact, they knew (a) that the insurgency was disorganized and that Castro was well established in the country; (b) that an American military intervention was excluded; (c) that the landing was to be supported by air strikes, the first of which was indeed reduced to a minimum and the second cancelled; and (d) that the Bay was surrounded by swamps that would have blocked the Brigada’s escape to the Escambray mountains. They had such hard facts in front of their eyes and, being experienced and competent, should have drawn the conclusion that the plan was doomed; nevertheless, they pursued it and pushed it through to the end, in fact betting their careers on its success. The general explanation of their misconception is that they did not believe Kennedy and thought that, despite his open statements to the contrary, he would not have let the invasion fail and would, eventually, have authorized a US Marine landing to save the operation. But in fact, disbelieving the president’s open statement and coming to believe the opposite, which was in line with their wish, looks precisely like SD. All of the threatening evidence they possessed was either ignored, or explained away, or downplayed in order to go on believing that their plan was good and would be successful.

Schlesinger, A Thousand Days, p. .

 Collective SD .. Considering President Kennedy’s position, the official version is that he was not keen on the project from the start but that he could not dismiss it completely because in his presidential campaign the issue of Cuba had played such an important role. Moreover, he wanted to look resolute against Communism, and he was eventually persuaded by Bissell and Dulles’s confidence and fame. Many reports, however, insisted on the president’s desire to be tough and determined against Communism and against Castro, in particular. The threat posed by Castro to national security was actually largely exaggerated on the basis of Cold War ideology, the monolithic view of communism and domino theory. Hence, Kennedy shared the view that Castro had to be overthrown and that this was a priority in the international agenda of his presidency, in line with the Eisenhower administration and with the CIA top officials. We thus have two portraits of Kennedy’s attitude: the first stresses his uncertainty on this Cuba operation and his being dragged into it by the CIA determination; the second points instead to his hawkish position against communism and to his preference for covert intelligence operation over long-term diplomacy which would support a much stronger conviction in favor of the plan. This discrepancy in interpreting Kennedy’s attitude, however, does not much affect the argument for his SD. In either case, it is clear that Kennedy both shared the goal of overthrowing Castro and that he did not support the plan wholeheartedly from the start. He felt that he wanted and had to do something about Cuba, but the “something” was still undefined. Thus, he let Dulles and Bissell go on with   

L. Vandenbrouke, “The ‘Confessions’ of Allen Dulles: New Evidence on the Bay of Pigs,” Diplomatic History, , , pp. –. This version finds support in Schlesinger but also in the reexamination of the event reported in The Politics of Illusion. This is the interpretation emerging, for example, from T. Paterson, ed., Kennedy’s Quest for Victory: American Foreign Policy –, Oxford University Press, Oxford,  and White, The Cuban Missile Crisis.



Political Self-Deception

the preparation of a plan which, according to Schlesinger, Kennedy initially considered only a “contingency plan.” Driven by his desire to do something, Kennedy focused his attention on limiting the costs more than on maximizing the chances of success. In this way, he overlooked the fact that failure never comes cheap. He thought that “doing something,” even if short of the final goal of overthrowing Castro, was in any case a better result than doing nothing, provided that the political costs were minimized. He also did not consider that deniability and military success hardly reconcile with one another, and he was ready to accept Bissell’s proposed solutions to his political-cost questions. For feasibility, he relied on the two experts, who apparently were very eloquent in presenting the successive readjustments. He was not suspicious of the multiple changes and did not consider the double nature of the operation, which was actually an ill-conceived blend of two different plans: an Anzio-type invasion, in the style of World War II, having nothing to do with resistance, and an infiltration plan to mobilize internal insurgency. The double and ultimately incompatible nature of the plan was noted in Schlesinger’s memoirs, providing evidence that the faulty and ominous design was there for all decision makers to see but was not sufficiently considered. This same point was later emphasized in discussions at the Musgrove Conference on the Bay of Pigs, the proceedings of which were published in The Politics of Illusion. Another shortcoming that was not given sufficient consideration concerned the deniability clause, which had apparently already been breached: the group of Cuban expatriates trained in Guatemala by the CIA was no longer a secret as of October , and Kennedy and his advisers had been informed of that.
Much as he was willing to do something in Cuba, he was very concerned with the political risk of an overt American invasion, at both the domestic and the international level. The attack would be a patent violation of international law and, in that sense, it might strengthen Castro and win him sympathy and support. Thus, Kennedy was very keen that the operation look “all Cuban” and that no weapons or military personnel that could be traced back to American involvement be used. Because of these worries, his position was not fixed until April; from late January, he went on questioning the CIA proposal and asked that the plan be reshaped over and over on the deniability clause, without taking into account that the operation had already been leaked to the press,

See Schlesinger, A Thousand Days, p. .

Kennedy and Cuba



and its covert nature was “a pathetic illusion.” A most crucial request of the president was made as late as March and concerned changing the landing site of the Brigada in order to have a quieter operation, without the appearance of a World War II assault. Thus, the obliging Bissell and Dulles, driven by the wish to go along with the operation, switched the landing site from Trinidad to the Bay of Pigs at the last minute. Apparently, no expert took into account the new circumstances of the changed landing site, such as the distance of the new location from the escape area (the Escambray mountains) and its being surrounded by swamps. They focused on the positive side, namely that the swamps would have made it easier to hold the beachhead against Castro’s forces, but disregarded the negative aspect that they would have trapped the Brigada there. As Freedman remarks: “It is hard to know what was more extraordinary, the President’s belief that a plan could be changed in such a material fashion so rapidly or the CIA’s readiness to acquiesce. When the CIA returned so quickly with the new scheme, the Kennedy team, with its can-do ethos, was impressed rather than incredulous.”

From all reports, Kennedy’s wish was apparently quite different from that of the CIA’s top officials. They wanted to go ahead with their own plan, to which they were deeply committed, no matter how decapitated and trimmed. Kennedy instead wanted to do something plausibly deniable. The concern with deniability was paramount: on April  he made a public statement denying that there was any attempt to invade Cuba by American armed force; moreover, until the last minute he was keen to reduce American involvement to a minimum. This implied a reduction of the air cover to two strikes, the second of which was eventually canceled because the cover story of the first (as an action by Cuban defectors) was blown. Seen ex post, such worry about deniability looks doubly misconceived because, on the one hand, the planning had been amply leaked outside the Cabinet and, on the other, the consequences of his attitude for feasibility were completely disregarded. Nevertheless, on April , the National Security Council found Kennedy in a resolute mood. This is reported in detail by Schlesinger. How had this switch come about?

 

Kirkpatrick, “Inspector General’s Survey,” p. , quoted in Freedman, Kennedy’s Wars, p. .  Freedman, Kennedy’s Wars, p. . Schlesinger, A Thousand Days, pp. –.




The acquiescence of Bissell to his many requests for changes reassured rather than worried him. By single-mindedly focusing on deniability, Kennedy managed to ignore all the worrisome evidence on feasibility; and finding easy reassurance for his qualms in the CIA’s readiness to please him, he disregarded the obvious fact that deniability was already a lost cause. Meanwhile, having disposed of all threatening factors, he came to be convinced by the confidence of the accommodating Bissell and Dulles that “one way or another” the plan was guaranteed to work. Its maximum hope was a mass uprising leading to Castro’s overthrow, triggered by the Brigada’s landing, holding the beachhead, and announcing a provisional government that the United States could recognize together with the Latin American Alliance. But if the maximum goal failed, then at least the guerrillas on the island could be reinforced and supplied. An “all-purpose plan” fits the wish of somebody who wants to do something but is keener on reducing the costs than on attaining a definite goal; yet it is hardly a plan that results from a careful and rational consideration of the effective probabilities of success and of the consequences in case of failure. Kennedy knew, and yet ignored, the following facts: (a) that the plan had undergone too many hastily made changes; (b) that the deniability clause imposed military risks; (c) that the operation and the American involvement were already known, hence the deniability clause was no longer available; and (d) that a consequent failure could not be blamed on Cuban insurgents alone. How could he decide to go on in these circumstances? He must have discounted, downplayed, or evaded those hard facts, focusing instead on the idea of an “all-purpose plan” that could produce political advantages in any event.
So Kennedy was fooled by his two contradictory desires, which pushed him to place uncritical confidence in the two CIA advocates without adequately checking their evaluations. Kennedy’s confidence in the operation, however, was certainly helped by the lack of criticism from the Pentagon and his advisers.

3 Consent and Criticism

The position of Kennedy’s advisers is a debated matter among historians. Many reported internal agreement with the president and imputed the suppression of misgivings to the atmosphere of confidence, trust, and optimism surrounding him. It was on this feeling of internal agreement that




Irving Janis developed his theory of the groupthink mechanism. Others, such as Trumbull Higgins and Mark White, have pointed out that there were various critical voices and dissent among Kennedy’s advisers and inner circle. Higgins stresses the fact that the plan was never enthusiastically embraced by anyone apart from Bissell and Dulles, and that its final endorsement was more the effect of letting things roll and of indecision, magnified by the internal disorganization of the Security Council, than a sign of a fierce and misguided will to forge ahead. By contrast, White underlines the internal dissent with the aim of contrasting it with Kennedy’s resolution and responsibility in the final decision. He argues that the presence of so many dissenting voices falsifies the thesis that the president was carried along and was unable to stop the operation without support. I shall not take issue with either of these theses, because my aim is not to ascertain the responsibility of the president or of the machine in making bad decisions but, rather, to see whether such a bad decision was grounded on motivated false beliefs held in the teeth of contrary evidence. It is definitely true that there were dissenting voices, and this fact cannot be dismissed. Yet, when placing the dissenters under scrutiny, one can see that (a) some criticisms were not too firm and were later converted into (willy-nilly, maybe) assent to the plan; (b) some came from people whom the president could disregard either because of their role or because of their political position; and (c) with one exception, criticisms were not raised at the Special Group of the National Security Council, where the plan was officially presented and discussed. Thus, the dissent was not publicly articulated in front of all participants.
This condition must be stressed because publicity makes it impossible to casually discount and ignore a criticism, as happens when it is given in private; and (d) the most prominent dissenters focused on goals rather than means.

In the decision process, three main critical voices were raised: the first was that of Senator William Fulbright, the only official opponent of the operation as acknowledged by the president himself; the second was that of Chester Bowles; and the third, that of Arthur Schlesinger. Others expressed or felt various

  

Janis, Groupthink. For a proper analysis of groupthink, see infra, Ch.§ . Higgins, The Perfect Failure, pp. –. “The idea that there was an inexorable administrative pressure compelling Kennedy to authorize the Bay of Pigs is a fallacy . . . there was still a potentially large constituency which would have backed Kennedy against the CIA had he wished to scrap the plan.” White, The Cuban Missile Crisis, p. .




doubts at different stages, but they did not press their reservations strongly enough, or they missed the right moment and place to raise them; in any case, after the April  meeting, Kennedy was convinced that no one disagreed with the decision apart from Fulbright, possibly because they had suppressed their misgivings or felt reassured by the CIA’s explanations. It is on record that, at the end of the meeting, Kennedy said that, with the exception of Fulbright, no one could afterward say: “I told you so.” Fulbright’s opposition to the Bay of Pigs operation was laid down in a memo to the president on March  and then reiterated at the Security Council of April . It was a clear, stern, and decisive opposition coming from the chairman of the Senate Foreign Relations Committee, that is, from a person who was not part of the Cabinet but who was prestigious and whose views should have been valuable precisely because they came from outside the Cabinet. As a criticism of the plan, however, Fulbright’s position presented a major drawback, namely, the fact that it questioned the goal rather than the means. It questioned the very idea of American involvement in overthrowing Castro, in the name of American values and traditions in foreign policy. He thought that, in so doing, the United States would jeopardize its international position among Latin American and European allies; that a puppet government in Cuba would lack legitimacy and would violate the spirit of the Latin American Alliance (OAS); and, moreover, that in engaging in covert activities, the United States would follow precisely the much-denounced steps of its Soviet counterpart. Instead, he suggested a policy of containment and isolation. In sum, although his critical position was clear, straightforward, and noble (as Schlesinger put it), it was not as such capable of reversing the president’s views.
For Kennedy was convinced of the goal of overthrowing Castro, and he had no problem with covert operations; his worries concerned the how. If the plan could work or, in any case, be deniable, he did not see any problem. For this reason, Fulbright’s opposition did not move him an inch. His Cuba policy and Cold War mindset may be seen, then and now, as morally dubious, but Kennedy was not

Freedman, Kennedy’s Wars, p. . In fact, Higgins presses the idea that doubts and opposition were raised until the very end, and that Kennedy himself was unsure till the very end whether to go ahead or to cancel the plan. This fits with his thesis of the plan as an orphan child, the product of a mechanism processing various and contrasting intentions more than of a definite will. But his interpretation does not fit with the report by Schlesinger, who was one of the main critics. Schlesinger remarked that the president shifted into a combative mood as of April  and that Robert Kennedy warned him to stop doubting the plan and to give the president all possible support. See Schlesinger, A Thousand Days, pp. –.




self-deceived about them. He ended up being self-deceived in believing that the CIA’s misconceived plan was going to work and would achieve its goals with deniability. In order to avoid the SD trap, criticisms of another kind were necessary: questioning the CIA’s confidence about the operation, pointing out the negative evidence, and underlining the contradictions in the plan. Unfortunately, the other two critics were also more concerned with goals than with means. Actually, Chester Bowles’s opposition never reached the president: it was laid out in a memo handed over to Dean Rusk who, despite harboring doubts himself, reassured Bowles and filed the memorandum away. But, to be fair, it is unlikely that Bowles’s criticisms, had they been known, would have affected the president’s decision, given their ethical tone and high-principled argument. Finally, there is Schlesinger’s opposition to be considered. Certainly, Schlesinger was placed in a better position, as a special assistant to the president with oversight of all Latin American initiatives. Moreover, he was a personal friend, present at all meetings and in constant contact with Kennedy; his suggestions could have been considered and easily communicated without intermediaries. Nevertheless, it is important to note that Schlesinger basically kept quiet during the meetings when Dulles and Bissell advocated their plan. He felt uneasy with their display of competence and confidence, which apparently was much admired in the group and much valued by the president himself. He expressed his objections in memorandums which, as he himself later commented, “look nice on record, but they represented, of course, the easy way out.” He later bitterly reproached himself for not having spoken out in the meetings.
But he also added that the opponents of the plan had only intangible arguments to balance the tangible things offered by the supporters: “the moral position of the United States, the reputation of the President, the response of the United Nations, ‘world public opinion’ and other such odious concepts” versus “fire power, landing craft and so on” presented with a virile pose. Had opposition come from the Joint Chiefs and from Pentagon experts, the president would definitely have been more receptive, but in fact the Pentagon did not speak up, despite the many doubts the Joint Chiefs and the military personnel entertained about the plan. In sum, I think that the kind of opposition, and the ways in which it was expressed, made it relatively easy for the president to disregard the negative views

Schlesinger, A Thousand Days, pp. –. Schlesinger, A Thousand Days, p. .



Schlesinger, A Thousand Days, p. .




that the dissenters wanted to convey and to keep on believing that no one in the Special Group was in a position to tell him afterward, “I told you so.” And the belief that consensus was shared appears to be just another piece of SD, given that opposing views were disregarded.

The fact that the military experts of the Pentagon did not openly express their doubts transformed their tepid consent into another component in the unfolding of the collective SD. Their lack of opposition at the right moment and place was taken as evidence in favor of the plan. Higgins reports a very telling remark by Henry Kissinger on this point. A year before the Bay of Pigs, Kissinger wrote that American pragmatism saw “in consensus the test of validity. . . . Disagreement is considered a reflection on the objectivity or the judgment of the participants. . . . Even very eminent people are reluctant to stand alone.” This quote exhibits an ambiguity concerning the issue of consent versus disagreement. In the first part, Kissinger derives from the pragmatist approach the idea that consent validates the soundness of certain beliefs, judgments, and decisions. By the same token, disagreement is seen as a symptom of false beliefs or bad judgments in some of the participants. In case of disagreement, by implication, the right attitude of accuracy would recommend a diagnostic search for the unwarranted beliefs and misjudgments that cause the disagreement. The emerging consensus would thus be reached by the convergence of independent beliefs and evaluations and could be taken as a test of validity. In the final part of the quote, however, if “even very eminent people are reluctant to stand alone,” it is apparent that agreement is not solely the result of sound beliefs and judgments converging because they are true but also the result of the psychological discomfort of standing alone in a group.
Such discomfort elicits conformity more than accuracy, and, in conclusion, the validity test provided by consensus is problematic. As a result, we have groupthink defended as a sound procedure of truth discovery. It must also be added that the conformity tendency elicited by the discomfort of standing alone does not necessarily produce hypocritical consent, for it may end up in the self-deceptive conviction that the majority belief is the true one. For the present purpose, what must be stressed is that consensus, or the lack of explicit dissent, is taken by decision makers as evidence in favor of the plan under scrutiny. Hence, consensus counts as good and positive evidence countering the negative aspects of the plan.

H. Kissinger, The Necessity for Choice, Cotler Books, New York, , pp. –, quoted in Higgins, p. .




Summarizing: the CIA’s top officers felt that their prestige and their role in the agency were linked to the Cuba plan; both Dulles and Bissell badly wanted to carry it out, and to this purpose they acquiesced to all the presidential requirements and last-minute changes. Somehow they ignored the drawbacks that the required changes produced, dismissing the evidence that had convinced them in the first place to move from an infiltration to an invasion plan. How could they discount that the costs of inaccuracy were high in such a case? Apparently, because they believed that eventually American military backup would be provided, despite the president’s assurances to the contrary. Their inaccuracy in considering the data was thus prompted either by wishful thinking or by another piece of SD concerning the president’s intention not to back the plan militarily. In the general uncertainty of foreign policy, success is never guaranteed, but occasionally failure is clearly foreseeable. And the Bay of Pigs operation was one such case. This was a plan for igniting an internal insurgency that was known to be both divided and weak; the landing conditions were clumsy, to say the least, and no US military backup was to be expected. There was no chance of success under these conditions. Despite all these contrary aspects, which were clearly there in front of their noses, they wanted to go ahead and were also optimistic about the result. The president, on his part, wanted to do something about Cuba within the constraint of deniability. And deniability, supposedly disposing of his own inaccuracy costs, lowered his vigilance concerning the plan. He repeatedly questioned the plan as far as American involvement was concerned, meanwhile evading the fact that the operation had been leaked throughout the international and national press months before the landing.
His concern was to prevent the potential failure from falling on his head, while he did not focus on the chances of a positive result. Given, however, that the American involvement had become known well beforehand, his belief that the plan could be pursued with minimal political costs, because failure would not touch his Administration, was clearly (self-)deceptive. His self-deceptive conviction, though, was not an individual product: it was triggered and fed by his advisers, primarily Dulles and Bissell, whose competence the president trusted on account of their past records, but also by the Pentagon and the Joint Chiefs, who never clearly vetoed the plan, no matter how uneasy they felt about it. Their silence was definitely taken as consensus, and consensus on the side of the military experts was taken as positive evidence. Most of his advisers, whether or not fully convinced, went




along with the plan. The dissenting voices were avoided, not clearly heard, not attended to, or even openly dismissed. In this respect, Kennedy did not let his reasoning and decision be distracted by criticisms that were not “constructive,” that is, by criticisms not focused on details and technicalities. There is a striking correspondence between Kennedy’s attitude toward critics and the phenomenology of SD. In the SD process, the subject is sensitive to others’ reactions to her attempt at explaining away threatening evidence, but the sensitivity goes in two opposite directions. The subject welcomes any comment supporting her cover story, which reinforces her conviction as independent evidence, but she is stubbornly blind to, and even resentful of, critical remarks pointing out the negative reality, which threatens her story and her self-image as a rational person. Correspondingly, the president was receptive to all suggestions helping him to strengthen his conviction and develop the plan, but dismissed and avoided the more proper criticisms that would have implied his exiting the SD process. This attitude toward criticism has to be taken seriously when considering the treatment of political SD. As mentioned in Chapter  §, the need for an external viewpoint, overseeing the decision process from outside, is crucial in order to avoid such a cognitive trap. Yet we have to take into account the political leader’s ambivalent attitude toward critical remarks: on the one hand, he is annoyed with criticism and tends to dismiss the critics’ advice. On the other hand, in the aftermath of the fiasco, both Jack and Bobby Kennedy realized that unanimity, far from being the test of validity, was actually a crucial component of the fateful decision.
Jack remarked that there was “only one person in the clear” and that was Senator Fulbright; then he added that probably also Fulbright “would have been converted if he had attended more meetings.” The president thus seemed to have realized that the meeting atmosphere was certainly conducive to consensus but not to truthful and dispassionate assessment of the pros and cons of the decision at hand. His brother was even more explicit on this point when he stressed the need for “a devil’s advocate opposing established intelligence opinion in the future.” Unfortunately, however, those comments expressed more the anger of the president and his brother for the CIA “betrayal” than any serious commitment to establish such a figure as a devil’s advocate. The committee later appointed  

Schlesinger, A Thousand Days, p. . See Robert Kennedy, Thirteen Days: A Memoir of the Cuban Missile Crisis, Norton, New York, , p. .




to investigate the Cuban issue, chaired by General Maxwell Taylor, was not provided with a devil’s advocate but was carefully composed of friendly and loyal people, among them Bobby himself, who could not be presumed to play the devil’s advocate role.

We have so far considered the American agents involved in the Bay of Pigs and reconstructed their illusions and misconceptions as episodes of SD of the first type, insofar as they were driven by wishes and dismissive of contrary evidence. There were other agents involved in the fiasco, though: the Cubans of the Brigada and the Cuban opposition in general. What did they believe? Were they simply deceived by the CIA and betrayed by the Americans, or did they partake in the general SD? The reconstruction of their attitudes and expectations emerges from the discussion that took place at the Musgrove Conference on the Bay of Pigs, reported in The Politics of Illusion. The Brigadistas present at the conference, Rafael Quintero and Alfredo Duran, expressed their feeling of having been betrayed both by the CIA – who had trained them and put them on the boats – and by the American government – who let them down and did not rescue the operation with a mighty military intervention. The two editors of the book, James Blight and Peter Kornbluh, described the Brigadistas’ view as driven by illusions, which channeled and encoded the defective and ambiguous information and directives from the CIA’s top officers.
More precisely, the illusions that the Brigadistas themselves acknowledged as sustaining their attitudes and behavior were: (a) the belief that the United States, because of its strong anti-Communist stand, could not accept the idea of having a Communist island ninety miles away from the American coast, and that the American government would have done anything to get rid of Castro; and (b) the “John Wayne syndrome,” that is, the belief, patterned after Western movies, that Americans always win, and that even when everything seems lost, John Wayne comes to rescue the good guys from the bad guys and Indians. Illusions are not necessarily mistaken: they are beliefs triggered by wishes beyond the available evidence; they characterize wishful thinking rather than SD. The lack of sufficient evidence to make a belief rational is what characterizes wishful thinking and illusions, while the agent’s appraisal of evidence threatening the truth of her wish is

Rafael Quintero, The Politics of Illusion, p. . In their “Epilogue” to The Politics of Illusion, the editors explained the John Wayne syndrome with the words of a Cuban-American colleague: “[Losing Cuba would be] like John Wayne backing down from a gunfight with an evil dwarf,” p. .




what sets in motion the self-deceptive process. It would then seem that the Brigadistas were prisoners of wishful thinking more than of SD. Yet I think that those illusions constituted the favorable grounds for the production of subsequent self-deceptive beliefs concerning the instructions and reassurances they received from both the CIA and the Administration. Let us focus on Schlesinger’s report about his meetings and exchanges with Mirò Cardona, a lawyer and university professor in Havana, known to be a leader in the civic opposition to Batista. Cardona was at the time a refugee in the United States; in the context of the Bay of Pigs planning, he was given the authority to organize a Cuban Revolutionary Council which, in case the occupation of the beachhead and the uprising proved successful, was to appoint the provisional government to be recognized by the United States. Schlesinger met with him twice in April, prior to the expedition. In the first meeting, on April , their exchange basically concerned political guidelines for the provisional government. Schlesinger worried that the Cuban leaders had misunderstood him about American military intervention and alerted the president in this respect. Kennedy responded in two ways: publicly, in a press conference two days later, by denying any intention of invading Cuba; and personally, by means of direct instructions to his own aides (Schlesinger included). According to Schlesinger, the press conference denying American involvement in Cuba just a few days before the Bay of Pigs invasion was a message to the Cuban exiles. I will not comment on this benign interpretation of such a public lie. Here I am concerned with what Mirò Cardona knew or was told about American military intervention. In their second meeting, on April , Schlesinger reported that Cardona was explicitly told of the limitation of American military aid to the expedition, given the deniability clause.
“He displayed resistance and incredulity at the statement that no United States troops would be used. He waved the president’s news conference disclaimer aside as an understandable piece of psychological warfare and kept pressing us to say how far the administration really meant to go . . . Everyone knows, Mirò said, that the United States is behind the expedition (. . .) Berle said that it could not succeed without an internal uprising, and that if one came, we would provide democratic Cubans with the things necessary to make it successful.”

 

 Schlesinger, A Thousand Days, p. . Schlesinger, A Thousand Days, p. . Schlesinger, A Thousand Days, pp. –.




Notwithstanding the painstaking explanations that, according to Schlesinger, were given to Cardona by the two envoys, Cardona later claimed that Berle had promised him that no fewer than ten thousand American troops were ready to step in and help the Brigada. Whether it was a fault of the translation or not, Schlesinger concluded that Cardona was “a driven man” who “probably heard what he desperately wanted to hear.” Caught up in the illusions of all the Cuban exiles, he thought that the United States could not let the operation fail because, so his conviction went, they could neither allow a Communist island in the American continent to consolidate its power against American and Western interests, nor accept being defeated by a small Latin American state that had always been considered part of the US backyard. Hence, well entrenched in his illusory convictions, and with no motive to be vigilant about information, he filtered out the unpalatable words about the limitations of US involvement. In conclusion, looking closely at the participants in the fiasco, we can say that the general atmosphere was one of illusions and ideological assumptions, on the one hand, and secrecy, on the other. Both constituted a favorable background for the SD process to take place. Illusions and ideology contributed by defining Cuba as a maximal threat and the overthrow of Castro as a priority in foreign policy – assumptions that would be falsified by history but that, even then, in the Cold War context, were unwarranted, as many participants, such as McNamara, had openly stated. The goal of doing something against Cuba was thus not questioned; on the contrary, it provided all participants with a strong motivation that interfered with a cool and rational plan for carrying it out.
Secrecy, the consequence of the deniability clause, then helped set SD in motion, adding general opacity and constraints on gathering, assessing, and circulating information, while diminishing vigilance in considering data. Even the complacent press, which consented to censor itself for patriotic reasons, in fact contributed to the general misinformation. The decision process thus took place in compartmentalized agencies, well protected from external disturbances, each having partially different interests while sharing the final goal and the wish to accomplish it. None of the participants, whose knowledge, attitudes, and beliefs we have considered in order to make sense of their resolve and position, can be judged immune from SD in one respect or another. The result was a collective SD, even if after the fiasco many

Schlesinger, A Thousand Days, p. .



Schlesinger, A Thousand Days, p. .



Political Self-Deception

participants tended to dissociate themselves and to put all the blame on Bissell and Dulles. As the President himself eloquently put it, “victory has a hundred fathers, defeat is an orphan.”

4 Sour Grapes

How was the fiasco processed by the president? Schlesinger reported Kennedy’s reaction as manly, responsible, without recrimination, and truthful. The president, he said, stated that the decision had been shared by all the senior officials involved but that the responsibility was his own. And Schlesinger’s overfriendly report goes on to list all the lessons that the young president learned from the Bay of Pigs, lessons that, to his mind, were soon to be tested in the ensuing Cuban Missile Crisis, which involved the deployment of Soviet nuclear missiles. Such a rosy picture of the president’s reaction, however, is contradicted by some speeches he made in the days following the fiasco and by his subsequent attitude about Cuba, on the one hand, and about those around him whom he selected to pay the price of the fiasco, on the other. Altogether, I think that his reaction was not a manly and rational acceptance of the failure and a frank assessment of what went wrong, but rather was vitiated by a sour-grapes mechanism, for one thing, and by unfair resentful feelings, for another. Schlesinger mentions Kennedy’s address to the American Society of Newspaper Editors on April  and interprets his warning about communist methods and the need to fight them decidedly as a double message: one for the Soviets and Castro, reaffirming his tough resolve, and the other for the Allies and the American public, reassuring them of America’s stand and strength. Whether or not Kennedy’s tough and obscure words were meant to minimize the political damage of the fiasco, as suggested by Schlesinger’s loyal explanation, the display of toughness does not show that he learned his lesson.
Describing the threat posed by communism, he said that communism relied “primarily on covert means for expanding its sphere of influence – on infiltration instead of invasion, on subversion instead of elections, on intimidation instead of free choice, on guerrilla by night instead of armies by day.”

 

 Schlesinger, A Thousand Days, p. . Schlesinger, A Thousand Days, p. . Quoted in Douglas Blaufarb, The Counter-Insurgency Era: US Doctrine and Performance, The Free Press, New York, , pp. –.




His words sound very striking to me under the circumstances, for “covert means,” “infiltration,” “subversion,” and “guerrilla by night” were precisely the ingredients of the Bay of Pigs operation, ingredients that by the end of April were openly known to the public. Instead of a frank declaration of what had happened, why it had failed, how much the failure was indeed produced by the covert operation chosen by the American government, and how much damage such secretive and duplicitous means had brought to the cause of freedom and democracy, he chose to dwell on the terrible threat posed by the enemy. Does that not sound like an evasion of his own responsibility, which according to Schlesinger he accepted so manfully and frankly? In addition, how could the president attribute those duplicitous means to the enemy with a straight face, when he had just authorized a secret operation, meant to be denied publicly, that used precisely infiltration, guerrilla tactics, and a night landing? What kind of difference between the “bad communists” and “us” was he implying? Was he implying that because communism represented a “monolithic and ruthless conspiracy,” those means were bad if used by them, but acceptable if used by freedom fighters? Or was he instead proposing an indirect justification of the American use, just made, of those very instruments: we had to use duplicitous means because they are using them and we can take no more? Or was it possibly an even more indirect justification of the fiasco: we were defeated because we, the good guys, are not as expert with those despicable instruments as the communists are; now that we have learned the lesson, we will make more efficient use of them? The latter is the interpretation of the speech provided by Mark White: the representation of the crucial difference between Communist dictatorship and Western democracy was meant as a self-exonerating reason for the failure.
Whether or not White is right, what interests me is that Kennedy’s words projected onto the enemy practices that, so far, had been used only by Americans. Obviously the speech might have been merely a trivial piece of political propaganda for the sake of toughness; and yet, looked at from outside, it did not seem wise to speak those words under the circumstances. Actually, it seems rather shameless, and it could have provoked outrage, given the blatant contradiction between the accusations against the enemy and the deeds of the government. All this considered, the tough speech against Communism may be interpreted as the outcome of a sour-grapes mechanism reducing the

White, The Cuban Missiles Crisis, p. .




cognitive dissonance after the fiasco. He had definitely admitted his defeat and his mistake about the Bay of Pigs: but then, in order to reduce the discomfort that the fiasco brought about, why not reframe the perceptual framework ex ante, projecting the questionable covert action onto the enemy and redescribing the American covert operation as a necessary response to it? A response, moreover, that failed because Americans were not (yet) proficient in the duplicitous means of the Communist conspiracy. In this narrative, the American fiasco was the counterpart of American good faith, good intentions, and trust in people’s freedom and self-determination. The president had wanted the Cubans to free themselves, and had only provided them with the help and assistance to accomplish their dream of a free Cuba. The covert operation had been made necessary by the Cuban dictatorship, which silenced political opponents by jailing and killing them; but Castro, with his Soviet aides, was obviously more expert in covert methods and all the more ready to use them, having disposed of democratic accountability. It followed that the president stressed the need to become better acquainted with “Communist procedures” in order to stop their pervasive conspiracy. My reconstruction of the president’s sour grapes is admittedly speculative; yet I think it makes sense of the striking words quoted above, which are otherwise difficult to understand under the circumstances. The explanations in terms of political propaganda or toughness do not consider the potential backfiring effect of those words just after the Bay of Pigs fiasco. True, no backfire was ever apparent in reaction to Kennedy’s speech. That was not because the speech was an instance of successful political propaganda but rather, as I would contend, because of the sour grapes of the people.
Before getting to the people’s SD, however, I would like to point out that, despite Kennedy’s initial acknowledgment of his mistakes and responsibility, his subsequent attitude and behavior suggest that a rationalization process was at work. Instead of questioning his SD, frankly answering his first question, “How could I have been so stupid?,” and assessing his own twists in reasoning and deciding, Kennedy took measures only against the CIA agents and against the innocent and dissenting Bowles. Apparently, the president explicitly and candidly told his friend Bissell that had he been a British prime minister he would have had to resign over the Bay of Pigs fiasco; but given the American presidential four-year rule, he would simply have to demand Bissell’s resignation. That Bissell and Dulles had to pay was obvious: their faults and mistakes were openly there, although they had worked on the illusions, the unquestioned assumptions, and the SD of the president and the Security Council as a whole. What was really unfair, so much so that even Schlesinger remarked on it, was the treatment reserved for Chester Bowles. Bowles’s main fault was simply to express his doubts to Rusk, who, first, failed to pass the memo to the president and, second, leaked the story of his opposition to the press. Kennedy saw in this leak an attempt to dodge responsibility and was enraged by it. But he also resented Bowles’s presentation on future Cuban policy at a National Security Council meeting soon after the fiasco. At the meeting, Bowles opposed the belligerent attitude shown by the president in his public speech of April  and shared by most of the Council. While the lesson that the administration drew from the fiasco seemed to call for more covert activities, more guerrilla infiltration, and more involvement in deniable military intervention, Bowles spoke up for diplomacy and containment, and for getting Cuba into perspective, stating that it did not represent a significant threat at home or abroad. As a result, in November  Chester Bowles was moved to an irrelevant post down the State Department ladder.

All in all, the fiasco did not engender a serious reassessment of what went wrong. A postmortem commission was appointed, chaired by General Maxwell Taylor, but it focused exclusively on the technical and military aspects of the operation, never questioning the political dimension and the unexamined assumptions about Cuba and its potential threat to national and international security. Moreover, the commission was made up of complacent members, while potentially critical voices were carefully selected out, despite Robert Kennedy’s reference to the need for a devil’s advocate. The result was that Kennedy learned never to rely on experts, as Schlesinger proudly emphasizes.
But this looks like a meager conclusion compared with the need (a) to reassess foreign policy about Cuba in general; (b) to reevaluate the risks and potential damages of covert operations that contradicted the administration’s open stand for democracy, freedom, autonomy, truth, and honesty; and (c) to secure not only the presence of a devil’s advocate but also that his critical voice be listened to and attended to, and that the critic not be punished for his criticism (as in Bowles’s case, which taught the opposite

 

Schlesinger just timidly remarked that the president “was particularly, and perhaps unjustly, aggrieved over Chester Bowles, whose friends had rushed to the press with the story of his opposition in the days after the disaster.” A Thousand Days, p. . See White, The Cuban Missiles Crisis, p. . C. Bowles to Kennedy, April , , Papers of Chester Bowles, box , Sterling Library, Yale University, New Haven, CT, quoted by White, p. .




lesson to the president’s advisers). I contend that the lesson was not adequately learned because of the lack of a proper grasp of what had driven the decision making. If SD is not acknowledged, then one sees only deception, betrayals, and brute mistakes, without grasping the motivational and emotional forces triggering those mistakes. Furthermore, frustration with the fiasco, absent a clear reconstruction of the mechanism that caused it, is easily translated into a further SD of the sour-grapes type and, more generally, into the evasion of one’s responsibility and the search for a scapegoat. That the lesson of the Bay of Pigs was not learned is evident in the subsequent Cuba policy, which included a number of covert actions, such as assassination plans and guerrilla operations, with the ultimate goal of destabilizing Castro. This secret policy, called “Operation Mongoose,” was in the hands of the Special Group Augmented, of which Robert Kennedy was in charge. According to the later assessment of various participants, including CIA officers and Cuban opponents, in the context of the aforementioned Musgrove conference, the Operation fell prey to the same kind of mistakes typical of the Bay of Pigs plan. Most striking was the contradiction between the goal of overthrowing the regime and the desire to conceal American involvement. How could the same illusion of success within deniability be kept alive after the Bay of Pigs fiasco? How come so many resources were put into an operation of that kind? And how come the commission trusted an expert such as General Lansdale, after the pledge never to trust experts again? I take it that Operation Mongoose was the effect of the previous fiasco, never properly unveiled.
Operation Mongoose not only failed to reach the goal of doing away with Castro or, at least, destabilizing his rule; it also constituted the background for the Cuban Missile Crisis, which was prompted by Castro’s and his Soviet allies’ reasonable concern about American intentions regarding Cuba.

Let us now turn to what I take to be the SD of the people. In this respect, too, I can advance only a speculative argument, which, however, fits well with the general reconstruction of the events and the atmosphere of the time. It is reported that Europeans reacted to the fiasco much more loudly than the American public. Schlesinger went on a scheduled European tour after the fiasco and reported the sense of disillusionment of

See Samuel Halpern in The Politics of Illusion, p. .




Western Europeans toward Kennedy, who had raised their hopes of a move toward détente. In the United States, on the contrary, there were few domestic episodes of dissent and indignation; on the whole, the American public stood by the administration. On May , an opinion poll recorded that Kennedy’s popularity had risen to an unprecedented  percent. Kennedy wryly commented on the favorable result as follows: “The worse I do, the more popular I get.” Why did people react to the fiasco by siding with their president instead of holding him accountable? Had the fiasco been caused by a streak of unlucky events, sympathy would have been in order. But the Bay of Pigs was clearly a case of misconception, miscalculation, and repeated mistakes for which the president was undoubtedly responsible, given his power to authorize or cancel the mission. Maybe the support came because there was nothing else the public could do: as Kennedy himself acknowledged, had he been a British prime minister, he would have been thrown out of office. But that alternative was precluded by the American political system; thus, lacking any option of change, the American people reduced their frustration by buying into the comforting story that the president had been betrayed and deceived by the CIA – that, in other words, he was a victim rather than a perpetrator. They may also have believed and sympathized with the circuitous justification spelled out in the speech commented on earlier. Certainly, they shared the Cold War mindset; they were worried about national security, the communist threat, and the possibility of a Soviet beachhead at their doorstep.
There was little that American citizens could do to reassure themselves, apart from trusting their own government to protect them and the world effectively against communist invasion; the perception of their government and of their young and energetic president as ineffectual and incompetent would have left them powerless and frustrated. The conditions for the reduction of cognitive dissonance were all in place. As a result, the president was shielded and the people’s worries were reduced. Their anger could be directed against the CIA and, even more, against communist cunning and scheming, so that their trust could survive the fiasco. Given that Kennedy had been elected a few months earlier with a slim majority, the approval rating after the fiasco was truly exceptional and difficult to explain in terms other than those I have suggested.

The sentence is also reported by Schlesinger as directed to him, commenting on the favorable Gallup poll. See A Thousand Days, p. .




5 The Missile Crisis

The Cuban Missile Crisis erupted on October , , little more than a year after the Bay of Pigs fiasco, when an American U-, on a photographic mission over Cuba, reported suspicious objects on the island that intelligence later interpreted as missiles and atomic weapons. The discovery brought about an unprecedented crisis between the United States and the Soviet Union, and the whole world was put on the brink of nuclear war. Fortunately, after thirteen days of intense discussion in the group surrounding Kennedy, of high tension and a difficult search for a way out short of nuclear conflict, good sense prevailed, and negotiations allowed a peaceful resolution that was seen as a major success of the Kennedy administration and of the president himself. While some specific episodes of SD can be detected in the unfolding of the crisis, it was nevertheless successfully overcome and became a milestone of Kennedy’s image as a wise and courageous leader of the free world. However, the crisis was eventually resolved by a secret deal that was known only to a very small group of the president’s advisers and became publicly and officially known only after the fall of the Soviet Union, from Soviet participants. I think that the way in which the participants, and later the scholars, related to the secret deal and to the public lie exemplifies a case of SD of the third type, insofar as the presidential lie that would have stained the integrity and wisdom of the two Kennedy brothers was covered up by a handy self-deceptive justification. How the crisis was handled by the president and the Ex-Comm group is well known and amply reported in a vast literature and even a movie. The most popular record of the crisis is Robert Kennedy’s Thirteen Days: A Memoir of the Cuban Missile Crisis. Robert Kennedy was Attorney General at the time and played a crucial part in the decision process and in the Ex-Comm group.
It is widely known how the discussion went on within the group from Tuesday, October ; how the group was divided between hawks and doves, although many shifts from one opinion to the other are also reported; how the air-strike option was considered, and how the quarantine option, strongly supported by McNamara, finally won. President Kennedy’s address of October , announcing the presence of missiles in Cuba to his fellow citizens and to the world, denouncing the Soviet deception, demanding immediate withdrawal of the missiles, and ordering a quarantine of Soviet weapons shipments to Cuba, is also widely known and vividly remembered. Then the second difficult week on the brink of escalation into nuclear conflict started. On the one hand, the Ex-Comm were waiting for the Soviet response; on the other, they had to enforce the quarantine in such a way as to give Khrushchev more time and more opportunity to stop his ships and to find a way out without losing face; meanwhile, the military option was still on the table and pushed by the Pentagon. Finally, Khrushchev’s first letter arrived on October , offering withdrawal of the Soviet missiles in return for an American noninvasion pledge for Cuba, shortly followed by a second letter demanding a more costly deal: withdrawal of the Soviet missiles from Cuba in return for withdrawal of the American missiles from Turkey.

At this point, the official story and the true one diverge. The official story, the one that has earned President Kennedy a place in history and a legacy of wisdom and courage, says that the trade proposed in Khrushchev’s second letter could not be accepted, because no negotiation could take place with a gun at one’s head. Then, when the military option seemed all the more likely, Kennedy’s brother – or so at least it is reported in Robert Kennedy’s memoir – proposed the well-known “Trollope ploy,” that is, to pretend the second letter had never arrived and to answer the first instead, accepting the noninvasion pledge in return for Soviet withdrawal from Cuba. Despite Kennedy’s pessimism about the ploy’s chances of success, it actually worked: by Sunday, a third letter from Khrushchev was received, accepting the terms of Kennedy’s response. Thus the crisis came to an end, even though the actual withdrawal took place later and without UN supervision. The happy ending of the story conveys that courage, firmness, and cleverness pay: Khrushchev accepted Kennedy’s proposal, nuclear war was avoided, and the crisis was resolved with a clear American victory.
The true story is instead that a secret deal was actually struck between Robert Kennedy, as deputy for the president, and the Soviet ambassador Anatoly Dobrynin, a deal trading the Cuban missiles for the Turkish ones. The condition for the deal was secrecy, which went as far as destroying any written record of the agreement. Very few people on the American side

The proposal to respond to Khrushchev’s first letter and ignore the second, tougher one was seemingly advanced by Robert Kennedy and supported by Sorensen and others in the Ex-Comm group. The maneuver was named the “Trollope ploy” by Bundy, after the nineteenth-century British novelist Anthony Trollope, one of whose characters deliberately interprets an ambiguous message as an offer of marriage, and it has become known by that name. See Kennedy, Thirteen Days: A Memoir of the Cuban Missile Crisis, p. . It is controversial, however, whether the ploy’s author was really Robert Kennedy. For an illustration of the Trollope ploy, see Munton and Welch, The Cuban Missiles Crisis, p. .




knew of the deal; not all Ex-Comm members were informed, not even Vice President Johnson; in fact, only four advisers in addition to the Kennedy brothers were informed of it. When the Turkish missiles were taken out months later, suspicion that a trade had taken place spread; despite that suspicion, the deal stayed secret for a long time. When, how, and by whom it was disclosed is controversial: some maintained that Robert Kennedy admitted it in his Thirteen Days, but in fact he only hinted at it. Schlesinger, in his work on Robert Kennedy in , came closer to the truth, and yet the deal was still disclaimed by the participants well into the s. Finally, at a conference in Moscow in , Anatoly Dobrynin challenged the official truth, recounting the details of his meeting with Bobby Kennedy and the agreement to secrecy that followed the deal.

6 A Success Story

Before analyzing how SD worked in the cover-up of the crisis resolution, two SD episodes can be detected in the unfolding of the crisis itself. The first concerns Khrushchev’s conviction that he would get away with the missile deployment, and the second the shocked reaction of President Kennedy and his team at the discovery of the presence of nuclear missiles in Cuba. Concerning Khrushchev’s decision to deploy the missiles in Cuba secretly, how could he have taken such a risk of nuclear confrontation, knowing what he knew? The facts at his disposal all spoke against such a decision. He knew that (a) the US government could not acquiesce to missiles in Cuba without taking action; (b) President Kennedy had made it very clear in public that he would not tolerate Soviet nuclear weapons in Cuba; and (c) once the presence of missiles in Cuba became known, the American response would have to follow. How could he believe that, by deceiving the American president as to the nature of the Soviet aid to Cuba, he would get away with a fait accompli? How could he believe that the administration’s public statements guaranteeing action would count for nothing, and that Kennedy would accept losing face before his fellow citizens (ahead of an election) and the world in general, and acquiesce to Soviet deception? I am not questioning here the reasons he had for desiring to make Cuba a Soviet missile base, whether they were good or bad from a strategic or a moral point of view. I am instead

The story toward the truth of the trade is analyzed in depth by Alterman, When Presidents Lie, pp. –.




asking why, having that desire, he chose to pursue it by a bold and secretive move that, as all the evidence suggested, his adversary could not accept as the new status quo. Certainly, the belief that the Americans would eventually come around in order to avoid risking a nuclear escalation was what drove his decision. But that conviction, derived from his interpretation of American policy in Cuba as incompetent and weak-willed, displaced from his consideration another belief, which should have been of paramount importance in this instance. In the Cold War scenario, the two superpowers had developed certain common knowledge and conventions so as to keep a hostile equilibrium and to prevent nuclear war. Neither power could win a nuclear war, but under certain conditions each had to be willing to risk the attack. The two opponents knew of this situation and also knew that if one of them was unwilling to risk waging war, then the other gained a strategic advantage and could get away with many hostile actions and with securing many privileges for his country. The shared goal of avoiding a nuclear war must be placed in the context of persistently conflicting aims and of the mutual attempt to secure advantages dependent on the counterpart’s unwillingness to provoke the nuclear holocaust. Deterrence, that is, the capacity to make credible threats to the opponent, was the way to avoid both the risk of nuclear war and, at the same time, being duped by the adversary. This game was common knowledge to Cold War leaders and politicians, and it definitely should have been the major consideration in Khrushchev’s decision. No matter how weak-willed or unclear Kennedy might have been about Cuba, the US president could not tolerate the missiles’ presence in Cuba, for that would have implied accepting Khrushchev’s disregard of the nuclear-age conventions for avoiding both war and the enemy’s provocation.
Thus Khrushchev, wishing to win a match in this dangerous game, or simply wanting to protect Castro, made up his mind to deploy the missiles secretly, focusing on his conviction of Kennedy’s weakness and disregarding the hard facts of the Cold War reality and its strategic necessity of staying away from the brink. His disregard was especially emphasized by his duplicity: public statements were made, ignored, and contradicted. That was a breach of trust between the two main players in the risky Cold

This situation, known under the name of the nuclear paradox, is described in detail in the Afterword to Kennedy’s Thirteen Days, by R. Neustadt and G. Allison, pp. –, which explains why Kennedy could not simply acquiesce to the Soviet secret deployment for the sake of world peace.




War game, and trust between the two main agents was a condition for reasonable expectations about the other’s behavior and for stabilizing conventions. All in all, it seems that Khrushchev was seduced by his wish to win a move in the Cold War game through a bold and ill-thought-out operation. It is not clear whether a decision to go public would have helped him avoid the crisis altogether or made things easier for him, as his Cuban allies contended. Some participants, such as Theodore Sorensen, believed that had Kennedy been properly informed of a defensive missile deployment in Cuba, he would have had to accept it, maybe after some negotiations and provided that its defensiveness could be ascertained. The claim is difficult to prove; maybe proper information would have opened negotiations at an earlier time, and without the gun at Kennedy’s head the same solution might have been reached openly and in less dramatic terms. But as Schlesinger stressed in his Foreword to Thirteen Days, had Khrushchev followed Castro’s advice to go public, the US president would have found himself in a more difficult position. It would have been harder for the United States to demand the missiles’ withdrawal, given that under international law two sovereign states have a right to send and accept military aid. Moreover, the presence of nuclear missiles in Turkey, at a short distance from the Soviet Union, could have counted against any American claim to force a withdrawal from Cuba. In sum, Khrushchev, blinded by the conviction of being able to strike a strategic blow, disregarded the fact that his boldness would actually undermine his bargaining position.
That there was SD involved here is hinted at by the fact that after Kennedy’s warnings in September, Khrushchev became nervous and anxious, changed the shipment plan, and rushed to add tactical nuclear weapons. So it seems that he found himself trapped in a self-deceptive process, which he went on defending till the very end, till the moment when President Kennedy’s address of October  made his dream of a successful surprise fall apart. Given that the slim chances of success of his gamble rested on making the Cuban base fully operative before any weapons were detected, how could he believe that the work at the missile site, after all the Soviet shipments and the movements on the island, could go unnoticed? He clearly brushed away a



See Munton and Welch, The Cuban Missiles Crisis, p. . Schlesinger, “Foreword” to Kennedy, Thirteen Days, p. . On the point of whether secrecy had been an advantage or a disadvantage to the American management of the crisis, see also Munton and Welch, The Cuban Missiles Crisis, p. . Munton and Welch, The Cuban Missiles Crisis, p. .




few pieces of information that were before his eyes, focusing instead on the favorable belief that the Americans lacked determination and intelligence efficiency, so as to believe his plan feasible.

As noted by Mark White, JFK’s bewilderment at the discovery of the Soviet missiles was puzzling. In the first two meetings of October , he declared more than once his inability to understand why the Soviets had put the missiles in Cuba, concluding that “it’s a goddamn mystery to me.” And instead of considering that the United States had represented an actual threat to Cuba’s stability, given the Bay of Pigs invasion, the Mongoose operation still in place, and the ejection of Cuba from the OAS, he argued that the Soviet deployment of missiles in Cuba was proof that the decision to carry out the Bay of Pigs “was really right.” According to White, there is a striking non sequitur between the belief that Khrushchev’s motivations were mysterious and the belief that attacking Cuba was a national security priority. In fact, I think that Kennedy’s belief that Khrushchev’s decision was not understandable can be understood perfectly in the context of the Cold War strategic considerations described earlier, the same ones that Khrushchev foolishly disregarded. For it is one thing to acknowledge that Khrushchev’s worries about Cuba were founded on the American anti-Castro policy, and another to think that those worries, reasonable as they may have been, justified the secret and blunt deployment of missiles. It is true that Kennedy and his administration were singularly blind to how American Cuban policy was regarded by Castro and the Soviets. Yet Kennedy’s mystified reaction at the discovery is not necessarily the consequence of that blindness; it might have been the product of strategic considerations.
Thus, I do not see the two beliefs as inconsistent, but I find it striking that the discovery of the bold and secretive, although risky and possibly foolish, move to deploy was considered to have justified the Bay of Pigs operation in retrospect. Here I think we confront a very interesting type of SD. Although Kennedy’s deceptive belief in this regard may be of limited relevance to the development of the crisis, it is worth exploring for its own sake. Instead of seeing Soviet worries about Cuba, arising as a consequence of the hostile American policy and, above all, of the aborted Bay of Pigs, as reasonable, Kennedy had come to see the misconceived invasion plan as “right” because the (subsequent) duplicitous and hostile Soviet action was

Quoted in White, The Cuban Missiles Crisis, p. . Quoted in White, The Cuban Missiles Crisis, p. .



Political Self-Deception

final proof that American suspicions about them and Cuba were well founded. And if American suspicions were well founded, then the idea backing the Bay of Pigs plan was justified. This reasoning took symptoms of Soviet untrustworthiness to justify ex post the American preemptive, although, alas, ineffectual, operation. But those symptoms might instead have been consequences of the worries engendered by the “preemptive” operation. The wish to redeem the Bay of Pigs, at least in its underlying conception, made Kennedy take consequences for symptoms and come to believe that they proved “we were right.” This apparently peripheral episode of SD made possible a process of moral laundering of Kennedy’s policy on Cuba, which came to play a major role in the Cuban missiles crisis. In the winding discussions of the Ex-Comm during those two weeks, when positions for the strike and the blockade went back and forth, and views shifted, the role of Robert Kennedy emerged progressively as crucial in proposing solutions and convincing the group and his brother of his position. It was in fact Bobby’s negotiation with Dobrynin that eventually won out and ended the crisis. Bobby’s first argument against the military solution, that is, the air strike, was actually based on moral and principled reasons. This is obviously much underlined in his memoirs but also reported in all accounts. Even discounting the embellishments of his position for propaganda reasons in the impending presidential campaign, Robert Kennedy has been seen as the dove behind the President’s attitude of caution and prudence.
Bobby took up the analogy of Pearl Harbor from George Ball when, at the end of the second meeting, he passed a well-known note to his brother: “I know how Tojo felt when he was planning Pearl Harbor.” Then, on October , he developed his argument in favor of the blockade, stating that it was not acceptable that the United States should kill thousands of innocent civilians in a surprise attack, and that the history and the tradition of the country were both against such a course of action: “This [surprise attack] could not be undertaken by the US if we were to maintain our moral position at home and around the globe.”

The Secret Deal and the President’s Lie

I shall, finally, focus on the secret trade of the American Jupiter missiles in Turkey in return for the removal of the Soviet missiles in Cuba. To sum up: the deal was struck by Robert Kennedy and Anatoly Dobrynin in a secret

Kennedy, Thirteen Days, p. .



Kennedy, ibid., p. .




meeting on the fateful evening of Saturday, October . Khrushchev had sent Kennedy two letters with different proposals for the resolution of the crisis: the first was sent on Friday, October , offering Soviet withdrawal in return for a formal pledge not to invade Cuba; the second was sent on Saturday, October , demanding a symmetrical withdrawal of the American missiles in Turkey. The first letter was clearly much more favorable to American interests than the second, offering an exit without any trade of nuclear bases and missiles, hence without changing the balance of nuclear power from the status quo ante, and asking in return only what was minimally necessary for the Soviet leader to save face, the noninvasion declaration. Khrushchev’s first message reached the president only in the evening, when he immediately summoned an emergency Ex-Comm meeting. The discussion that followed, and the fact that the administration did not jump on the offer with an immediate positive answer, suggests that the declaration of noninvasion was perceived as costly by the president and some of his advisers. In any case, by the following morning this opportunity seemed to have vanished. The new letter raised the issue of the trade, and the majority regarded it as unacceptable. Why the trade was regarded as unacceptable is not immediately clear, given that the idea of removing the deployment for technical reasons and substituting nuclear submarines in the Mediterranean for the Jupiters in Turkey and Italy had already been considered within the administration. Moreover, in the course of the crisis, suggestions about the trade had been made by Stevenson and Bundy, and then taken up by Walter Lippmann in a New York Times article suggesting precisely such a compromise.
In a way, there was a clear symmetry between the deployment of nuclear weapons in Turkey, just across the sea from Khrushchev’s dacha, and the base in Cuba, ninety miles away from Key West. But if the trade was not unreasonable substantively, it looked unacceptable publicly. The issue of publicity and secrecy was indeed crucial throughout the crisis: (a) the crisis exploded because of the secret deployment, while the opposite had been openly promised; (b) Kennedy made public declarations as to the unacceptability of a nuclear base in Cuba and as to the actions required for its removal; and (c) Khrushchev’s secrecy and deception entrapped him in a

This is, at least, the interpretation provided by White, The Cuban Missiles Crisis, p. . On the idea of the swap between the Jupiters in Turkey and the missiles in Cuba, see the detailed reconstruction of Adlai Stevenson’s position in White, The Cuban Missiles Crisis, pp. –. See also Freedman, Kennedy’s Wars, pp. –, and, especially, Munton and Welch, The Cuban Missiles Crisis, p. , where the persistent myth that Kennedy had ordered the dismantling of the Jupiter missiles prior to the Cuban crisis is reported, analyzed, and dismantled.




weak position from which to start negotiation, just as Kennedy’s public stand restrained his bargaining choices. Yet between Khrushchev’s secrecy and Kennedy’s publicity there was an asymmetry: the former was secretive as to his intentions and actions; the latter went public with warnings and threats that he did not want to carry out but could not later dismiss if he was to keep his credibility as a leader. The solution was accordingly found in drawing a line between the official public story and the unofficial secret one. The official solution was reached by means of the Trollope ploy, that is, by answering the first letter and ignoring the second, agreeing to the first, very favorable deal and bluntly dismissing the mention of the Turkey missile trade. The secret unofficial solution was struck by the off-record acceptance of the trade, only postponed by a few months and publicly denied, which constituted a substantive supplement to the Trollope ploy. This partition between the official and the secret story of the crisis’ end, and the fact that the secret was kept for such a long time, open two interpretive options concerning the involvement of SD in covering up the trade. The first option emerges in Eric Alterman’s book on presidential lies, in which he contends that the lie about the trade was not justified, as has been argued, by Kennedy’s need to keep up his public pledge and to look firm and tough against the Soviets. The administration claimed its right to lie, justified by the result, in the words of Assistant Secretary of Defense for Public Affairs Arthur Sylvester. Alterman instead maintains that this overconfidence in disposing of the truth exemplified a blinkered attitude toward the ominous consequences of Kennedy’s posture for the subsequent foreign policy of the United States. The belief that presidential lies were justified is, according to Alterman, self-deceptive, and it obscured alternative options to, and the actual consequences of, lying.
The second interpretation relates to the fact that the secret deal, once made public, did not change the perception and the recasting of the happy ending of the crisis: in this respect, we are confronting a sort of collective SD by participants, historians, and the public, who seem unwilling to revise the favored and heroic account of the crisis. The first option would be a case of SD of the third type, that is, of justificatory SD concerning one’s lie as necessary, justified, even noble in the Platonic tradition. Alterman’s claim actually does not get into an analysis of this self-justificatory attitude, whether it was a case of SD or

Alterman, When Presidents Lie, pp. –. Quoted in Alterman, When Presidents Lie, p. .




simply of cynicism and shrewdness by the Kennedys. He, rather, takes issue with the self-congratulatory account of the crisis, and contends that it is inconsistent with the presidential lie and its effects. For the crisis was not solved by the lie, as implicitly argued by Sylvester, but, rather, by a reasonable trade. The secrecy of the trade did nothing to avoid nuclear confrontation; it only helped to create the image of the American president as firm, tough, wise, and successful. Thus, it was useful for his political interests, not for national or world security. True, President Kennedy might have had a hard time making the trade publicly acceptable, given his belligerent words of uncompromising stand against turning Cuba into a Soviet nuclear base. And he might have met with strong reactions from the military, who were all too eager to jump on the opportunity of doing away with Cuba, apparently unmoved by the potentially disastrous consequences. But these considerations were relevant to his image, his domestic strength in the midterm election, and so on, not to world peace. Can saving one’s face justify the deception of the public? It is likely, in fact, that the Kennedy brothers did not think that their deception was meant to enhance their personal image and power but, rather, that the Cold War strategy and the nuclear paradox required them to be firm and tough, in order to avoid being exploited by their adversary. Still, the appearance of toughness was all for the public, not for the other player. Obviously, how the president and his brother thought is not known, but the national security reason is not compelling. Hence, if they thought along those lines, they were self-deceived. Alterman’s claim goes further than that: he actually holds that lying about the trade brought about negative consequences for subsequent American foreign policy, especially with Vietnam.
Hence, not only could national security not justify the public deception at the time; it was also later put at risk precisely by that deception, because the official story left the following American administrations with the legacy of Kennedy’s success, namely, that, in a crisis, toughness wins over the communists. Without discussing Alterman’s thesis further, it is a fact that the congratulatory atmosphere after the resolution of the crisis might have taken in the president, who, instead of justifying his lie, came to believe that the trade was really only a side component, while the Trollope ploy did resolve the crisis. In this way, the lie was covered not by a direct justification but, rather, by its downplaying, so as to become opaque and peripheral in the narrative. This possibility is backed by the later account of the few participants in the trade decision. In the  statement – jointly made by Dean Rusk, Robert McNamara, George Ball, and Roswell Gilpatric on




the twentieth anniversary of the crisis – published by Time magazine, the negotiation was not denied, but it was presented not as a “deal,” a swap, a quid pro quo, as demanded by the Soviets, but simply as an announcement of a future course of action by the administration. Obviously, this version might have been only a callous defense of the original lie, but more likely the distance between the official and the unofficial version was covered by opacity and a reframing of the perceptual field. In this case, SD as the justification of a lie and SD as the reduction of cognitive dissonance merged, obscuring the deal from the beliefs of participants and from later accounts of the crisis. In any case, given that the successful solution was achieved by negotiation, the false belief in the power of national guts was certainly misleading for future foreign policy. The fact that the secret was so well kept by the very few people who knew it, in turn, contributed to creating and sustaining the myth that an international crisis must be managed with firm and clear resolution, so that the opponent will eventually back down in the face of superior American military capabilities, with ominous effects in the unfolding of the Indochina crisis and the American involvement there. The American public liked the heroic story of how Kennedy, brought to the brink of nuclear war by the duplicitous brinkmanship of his adversary, managed by his courage and wisdom to avoid the catastrophe and to end the confrontation without weapons being fired and without concessions to the Soviet blackmailer. It enhanced the Cold War imaginary of America as the champion of the free world, committed to a principled fight against the treacherous communist enemy. So much so that when the true story started to emerge, it did not supplant the old comforting one. The secret deal should by now be common knowledge, but, in fact, it has not penetrated general awareness.
Evidence of the persistence of the old self-congratulatory version is the movie Thirteen Days, the screen version of Robert Kennedy’s memoirs, plus some artistic liberties. The movie makes no effort to take advantage of the new body of evidence on the missile crisis that emerged from the Kremlin documents after the fall of the Soviet Union and from massive declassification in the United States. Thus, the secret deal is merely hinted at as a minor diplomatic exchange of messages, instead of a negotiation. A similar downplaying of the missile swap is found in many scholarly works, where the fact is acknowledged, but not as a deal, much as in the version provided by the participants on the twentieth anniversary of the crisis. This is especially striking in the Afterword to Thirteen Days,

“The Lessons of the Cuban Missile Crisis,” Time, September , , quoted in Alterman, p. .




published in , in which the secret deal received no mention whatsoever, and despite the fact that Schlesinger’s Foreword acknowledges the deal and the role of negotiation in the peaceful solution of the crisis. But it is also noticeable in a work such as Marc White’s The Cuban Missile Crisis, in which he, generally speaking, takes a very critical stand of the official version and of the Kennedy glorification. The deal is acknowledged, but its role in the solution of the crisis is downplayed compared to the long critical analysis of who was the author of the Trollope ploy. Lawrence Freedman gave an accurate and detailed account of the deal; he nevertheless also stated that: The undertaking to Khrushchev on the removal of the missiles [in Turkey] was important as an earnest of Kennedy’s good will, but it was not part of the public record, it was irrelevant to the political fallout, domestic and international, of the crisis. Indeed the assumption was that Moscow had been successfully rebuffed.

With the exception of the studies in critical oral history resulting from a project of James A. Blight, who brought together participants from the three sides – Americans, Russians, and Cubans – and historians in six major conferences to reconsider the events in the light of the newly declassified material, and with the exception of Alterman’s work, there seems to be a widespread resistance to a revision of the story, so that the original version is basically kept against the contrary evidence. This is a kind of ex post SD, not by politicians but by the public, meant to preserve a positive image of one’s country and governments.

Freedman, Kennedy’s Wars, p. . For the Blight project, see J. G. Blight and D. A. Welch, eds., On the Brink: Americans and Soviets Reexamine the Cuban Missile Crisis, Hill & Wang, New York, ; B. Allyn, J. Blight, and D. Welch, eds., Back to the Brink: Proceedings of the Moscow Conference on the Cuban Missiles Crisis, University Press of America, Lanham, MD, ; J. Blight, B. Allyn, and D. Welch, Cuba on the Brink: Castro, the Missile Crisis and the Soviet Collapse, Pantheon, New York, .

 

Johnson and the Gulf of Tonkin Resolution



Introduction

Lyndon B. Johnson, the president responsible for the American escalation in Vietnam, is reported to have said: “They won’t be talking about my civil rights bill, or education or beautification. No sir, they’ll push Vietnam up my ass every time. Vietnam, Vietnam. Vietnam. Right up my ass.” Given Johnson’s priority for his “Great Society” agenda, for reform toward more equality and justice among Americans, it is all the more striking, and perhaps ironic, that the Gulf of Tonkin Resolution, opening the way to all-out war, was passed less than a year after he became president. The Resolution was passed on August , , two days after a retaliatory airstrike by American bombers on North Vietnamese targets. Johnson presented the strike, in a public television announcement to the nation, as the necessary response to an alleged unprovoked attack by North Vietnamese torpedo boats on two American ships, the Maddox and the Turner Joy, while they were patrolling international waters in the Gulf of Tonkin on August . The evidence backing the incident was controversial from the start, but it has now become clear that such an attack never took place, being rather a case of misreading of radar signals in bad weather by an overexcited and inexperienced sonarman. At the time, the attack was taken by military officials and the administration as bona fide. The reprisal followed right away.





Reported by David Halberstam in “LBJ and Presidential Machismo,” in J. P. Kimball, To Reason Why: The Debate about the Causes of US Involvement in the Vietnam War, McGraw-Hill, New York,  (–), p. . On Johnson’s priorities, see J. W. Helsing, Johnson’s War/Johnson’s Great Society, Praeger, Westport, CT, .






That was the first time, after two decades of involvement in Southeast Asia, that Americans, openly and without hiding behind South Vietnamese counterinsurgents, attacked the Democratic Republic of North Vietnam, fatally changing the scale and the direction of the conflict. Up to that date, American involvement had varied in terms of the number of men, means, and funds, but it had always stayed within the limits of aid and support for South Vietnam, in accordance with the official view, supplemented by covert operations, either by the CIA or by US forces, under the deniability clause. Obviously, such limits were clearly self-serving and hypocritical. After all, South Vietnam was a puppet state created by the United States as a defense against the potential expansion of the Communists from the North, and Diem was a cruel dictator whom Americans had supported against their better judgment to counter the risk of a Communist takeover. But leaving aside all the complexities, mistakes, responsibilities, faults, and deceptions of successive American administrations, it is a fact that up to that moment in the summer of , US involvement did not include open military action against Hanoi, and that, as a result, the American role could be, and was, ignored by the American public. To be more precise, the administration, as well as the American public, simply discounted that the United States was involved in a war. The common narrative was that they were simply supporting anti-Communist patriots in Southeast Asia, as well as in Cuba and other parts of the world. As Robert McNamara later stated in his memoirs: “The closest the United States came to a declaration of war in Vietnam was the Tonkin Gulf Resolution of August .” The numbers speak clearly to that effect. In the year after the Resolution, US forces increased from , to ,.
Thanks to that blank check, given almost unanimously by the House amid the outrage about the attack, the shift was swiftly accomplished by the administration without any debate in Congress and without the public being made aware of the escalation. Such a turning point in the Vietnam conflict was decided by a reluctant president on the basis of one of the most controversial incidents of the whole war. In fact, the Tonkin affair is often taken as the paradigmatic example of the deception and manipulation characterizing the American administrations in Vietnam. Some interpreters even hold the view that the incident was a pure fabrication designed to get the Resolution passed.

See Robert S. McNamara, In Retrospect: The Tragedy and Lessons of Vietnam, Random House, New York, , p. .




The salience of the incident in the history of the Vietnam conflict was such that it was later used as the starting point for developing a dialogue between the two old enemies. The debate is recorded in a series of meetings that took place from November  to February , and that resulted in the book edited by Robert McNamara, former Secretary of Defense in the Johnson administration, Argument without End. The meetings were set up by McNamara with the help of some historians, with the purpose of confronting the different assessments and views of the Vietnam conflict twenty years after it ended. At the opening of the proceedings, the Vietnamese officials seemed to mistrust their American interlocutors and to stick to the old propaganda instead of opening up to a frank exchange. In an attempt to change that attitude, McNamara came to the front and, facing his old enemy, General Vo Nguyen Giap, openly asked him whether the North Vietnamese attack on the two American destroyers had taken place. He was the one who, candidly or deceptively, had made Congress and the American public believe that the attack was real, and who, even after the Senate hearings of  and after much contrary evidence later made available, still maintained that the attack was “probable, but not certain” in his  memoirs, In Retrospect. By asking that question directly, McNamara wanted to show an open mind and his willingness to admit mistakes, review previous convictions, and take responsibility for wrong judgments. It was no coincidence that he picked the Tonkin Gulf incident, because that was a controversial event where his credibility was at stake. Entrusting his old enemy with finally clarifying what had happened was proof of good will and good faith.
Incidentally, as we shall see amply later, the fact that he reported Giap’s answer in the negative and that he admitted his mistake supports the view that the incident was not a pure fabrication “to create a pretext that would allow [Americans] to take over the war from the Saigon government, which was incompetent” – as was Giap’s conviction. But, even granting McNamara’s good faith, his reassessment of the incident as due to “mistakes and missed opportunities” is not sufficient. It does not explain all his and other officials’ motivated discounting of contrary evidence, their anxious search for confirmatory reports, the hasty response of the president, and, eventually, the covering up of what had really taken place. More than a fatal mistake, it was a case of SD.

R. McNamara, J. Blight, and R. Brigham, Argument without End: In Search of Answers to the Vietnam Tragedy, Public Affairs, New York, . McNamara, In Retrospect, p. , and McNamara, Blight, and Brigham, Argument without End, p. . Argument without End, p. .




Indeed, I take the Tonkin incident as one of the clearest and most telling cases of SD in the course of the Vietnam conflict. More precisely, I take it as an exemplary case of the second type of SD, in which SD is ancillary to deception and used to deceive others. In the course of the reconstruction, we shall see how this type works by providing the rationale to convince the country that escalation of the American military involvement in Vietnam was indeed necessary. Its advantage for self-deceivers is, however, short-lived, for when the truth surfaces, the handy but false belief backfires on the self-deceiver with all its ominous consequences. In this chapter, I shall attempt to reconstruct this special instance of SD in the context of the Vietnam conflict at the time, which actually provides an ideal illustration of a messy mixture of ideology, illusions, unexamined assumptions, mistakes, and straightforward deception running in many directions: from the military to the National Security Council, from the National Security Council to the president, and from the president to Congress and to the public. Commentators on Vietnam, most prominently Hannah Arendt, have stressed that the war was built upon a mass of lies and that lying was the cause of the tragedy of Vietnam. More precisely, it was the accumulation of deceptive decisions, hidden from the American public and from Congress, that made it possible to fight a war that was publicly denied.
In unpacking the web of lies, though, commentators hint at SD, their view being (a) that, against a generally confused background, the lies meant for the public ended up being believed by the liars themselves; and (b) that bad faith and good intentions can neither be ascertained for sure nor clearly separated, but that insofar as officials and presidents were honest, they definitely duped themselves. Other scholars who favored a more structural view of the Vietnam War, such as Gabriel Kolko, pointed to economic interests and symbolic motives behind American foreign policy in Vietnam. Nevertheless, they acknowledged that the administrations involved in Vietnam deluded themselves and willfully ignored defeats and difficulties, keeping intact their faith in military superiority and final success. Finally, the reference to SD (though never properly



 

See Arendt, Lying in Politics; Alterman, When Presidents Lie; also David Wise, The Politics of Lying: Government Deception, Secrecy and Power, Random House, New York, , who devoted much analysis to the Vietnam War and to the Gulf of Tonkin incident especially (pp. –). Among those who most hinted at SD is Gardner, Pay Any Price. See Gabriel Kolko, Anatomy of War, Phoenix Press, London, , p. .




examined) came naturally to someone like Schlesinger, who viewed the Vietnam War as the outcome of a slippery slope, entered unwillingly and inadvertently. He wrote, “One experience after another made the newspapermen more certain that the Embassy was lying to them. They did not recognize the deeper pathos, which was that the officials really believed their own reports. They were deceiving not only the American government and people but themselves.” I will take up these widespread hints and try to analyze them in a consistent way. I will argue not only that SD was present at various steps of the decision making, but also that a theory of SD is indeed needed to make better sense of the move toward escalation. Among the many interpretations of the Vietnam catastrophe, we can single out two opposing groups. On one side, there is McNamara’s approach, pointing out the cumulative effect of successive mistakes and misjudgments as the cause of the tragedy. On the opposite side, the main reason for the disaster is located in the officials’ deception of the public. Each side can then be articulated in more nuanced and sophisticated theories, but these basic alternatives can be found over and over in most studies on Vietnam.
Those stressing the role of misjudgments and deficiencies in the decision-making process, which then led to colossally mistaken policies, support the view that the war was the result of a slippery slope more than of a deliberate choice by the administration. Often (though not necessarily) such interpretations imply a sense of inevitability in the decision process, with a clear self-justificatory intent. Within this group, even those who instead underline the deliberate nature of single decisions acknowledge that the unfolding of the American involvement was not altogether planned. Wrong choices and unintended outcomes are then hastily attributed to Cold War prejudices and correlated theories, and to ignorance of the enemy’s mindset. But, on the one hand, the motivations triggering the prejudices and biases of American decision makers are not attended to and set apart from the Cold War



See A. Schlesinger Jr., The Bitter Heritage, Andre Deutsch, London, , p. . See Jeffrey Kimball, ed., To Reason Why: The Debate about the Causes of US Involvement in the Vietnam War, McGraw-Hill Publishing Company, New York, pp. –. This position is also known as the “quagmire theory” on Vietnam, referring to a step-at-a-time process in which the United States became involuntarily entrapped in the Vietnam conflict. The term comes from David Halberstam, The Making of a Quagmire: America and Vietnam during the Kennedy Era, Knopf, New York , and was later elaborated by Schlesinger, The Bitter Heritage. This view is also known as the “stalemate theory” of American involvement in Vietnam. See Larry Berman, Johnson’s War: The Road to Stalemate in Vietnam, Norton & Company, New York .




assumptions and, on the other, all the obvious deceptions of Congress and the general public are made to disappear as if irrelevant, and hence unaccountable. On the opposite side, the belief that the administration lied dispels fatalism, makes clear that different decisions would have changed the course of the conflict, and permits us to assign responsibility to the various agents. And yet, for all that, the lie view alone cannot explain the poor planning, the contradictions and ambiguities, or the self-sabotaging dimension of the whole process. To be sure, not all lie-supporters have a conspiratorial overtone, though some definitely do. The difficulty of keeping up a purely conspiratorial view is shown by the fact that most of the lie-partisans have to mention, in passing, self-delusion, wishful thinking, and the fact that the liars were then caught up in their own lies. No matter how widespread, such hints have not been consistently pursued. Neither do the misjudgment-supporters take up the hypothesis that the mistakes could have been willful and motivated by self-serving interests, nor do the lie-supporters explore the possibility that SD was at work, well intertwined with the deception of others. My view is that both interpretations – the one attributing the Vietnam conflict to “honest mistakes” and misjudgments ending in unintentional disaster, and the other attributing the disaster to bad faith, secrecy, and deception – are partial and one-sided. I think that we need to connect the two, so as to impute precise responsibility either for lies or for mistakes. And, in order to make such a connection, SD comes in handy, piecing together unjustified beliefs, desires, mistakes, and bad faith in a meaningful way.
SD has a specific explanatory role in that it links a number of apparently inconsistent features in an intelligible way: (a) misjudgements and mistakes; (b) bad faith; (c) false beliefs that were also genuine convictions; (d) decisions that were consciously taken and yet seemed to have been arrived at unintentionally or inadvertently; (e) the fact that the decisions taken and the reasoning backing them were deceptively hidden from the public, and withheld from open discussion and challenge; and, finally, (f) the self-sabotage of the overall process. All these different components of what was going on in the decision-making process can actually be accounted for in a meaningful way by SD. Given the mass of documents and reports on Vietnam, I shall basically confine my analysis to the Gulf of Tonkin incident, which I take as a paradigmatic example of SD of the second type, to its antecedents, and to the subsequent resolution. As in the previous chapter, my aim is theoretical, concerning the role and relevance of SD in democratic politics; the case is therefore construed for its exemplary nature by means of a speculative argument, making use of the already available body of knowledge and evidence. Hence, in the following reconstruction I shall rely mainly on secondary literature: existing historical accounts, well-known documents, and interpretations. My aim is to reframe what is already known as SD, in order to provide better insight into the puzzling aspects conducive to the Gulf of Tonkin Resolution.

It must be said that not all supporters of the mistake-misjudgement view also side with the inevitability thesis. For example, McNamara in Argument without End clearly states that choices and decisions by individuals have a causal effect on outcomes, and that even if outside pressures and circumstances limit actions and constrain choices, the latter are not completely determined by them; leaders especially must be able to resist this force, to have a larger vision, and to choose for the best. See pp. –.

2 Plotting the Escalation (1963–1964)

2.1. The year before the Gulf of Tonkin incident, the US administration had become disillusioned with the situation in South Vietnam and with the capacity of Diem’s government to hold the country together politically and to fight the Vietcong insurgency efficiently. The situation in South Vietnam deteriorated after the fall of Diem’s regime in November 1963. President Kennedy was reportedly shocked and, according to McNamara’s interpretation, was probably going to withdraw from Vietnam. Whether that was an intention Kennedy would have carried out had he lived, we are in no position to know. In any case, the situation in South Vietnam after the coup went from bad to worse. The military junta that took power, led by General Minh, was in turn overthrown by General Khanh in January 1964. Meanwhile, Vietcong action intensified; their penetration of the South became wider and enjoyed broad popular support in rural areas. If the American goal was, as publicly stated and repeated over and over, to defeat the Vietcong guerrillas in the South and to fortify the government of South Vietnam so as to leave it in charge, without risking a Communist expansion, that goal was basically lost between the end of 1963 and the beginning of 1964. In the first six months of 1964, however, the Johnson administration, instead of planning withdrawal, started planning escalation. It enlarged the plan of covert operations, which had been in place since 1961, and had already been




expanded by Kennedy in , under the name of OPLAN A. Such operations, which involved different kinds of actions by SouthVietnamese personnel, instructed and led by US forces, were actually unknown to the American public but known to the enemy. The US Navy was also involved in what was known as the “DeSoto Patrol,” that is an operation of patrolling off the coast of North Vietnam apparently for intelligence purposes, but most likely also for affirming the American presence in contended waters closer to the coast than conceded by Hanoi. Given that the patrol was carried out by destroyers, the presence of such vessels looked clearly a hostile sign of aggressive intent. In addition to these operations, in the first six months of , proposals for escalation, in the form of raid attack on the North, were advanced by the Joint Chiefs of Staff, by Johnson’s advisers and by the newly Johnson’s appointed Vietnam Working Group, led by William Sullivan in December . The proposals to escalate the conflict were based on the conviction that there was no other alternative in Vietnam except pulling out, which, however, was not considered acceptable, because American credibility was at stake. While the planning for the escalation was intense, as reported in The Pentagon Papers, the actual decision was Johnson’s who was at the time noncommittal and very unsettled about what to do in Vietnam. .. Many analysts and former officials, notably McNamara, have later argued that escalation was a mistake, based on ignorance, misjudgments, wrong assumptions, and so on. Besides being too easily exculpatory, this view hides all the deception of the public that escalation implied. Yet, the deception, for one thing, does not explain why such a bad decision was pursued, and, in turn, is not explained by a plan for an all-out war, which, if denied, would have clearly backfired and exposed the lie, as it happened. 
Thus, we need an explanatory bridge between the faulty reasoning and the public deception, a bridge that, I argue, can be provided by motivated irrationality, in the form of wishful thinking and, especially, SD. Not just plain ignorance or cold bias induced the miscalculation, but rather the desire to win, despite all the negative reports suggesting that victory was highly unlikely to any dispassionate outsider.

This fact prompted Hannah Arendt’s outraged comment that secrecy and deception in the Vietnam War were used not to fool the enemy, but the American citizens. See J. A. Goulden, Truth Is the First Casualty: The Gulf of Tonkin Affair: Illusion and Reality, James Adler–Rand McNally, New York, 1969, pp. –; The Pentagon Papers, pp. –; E. E. Moïse, Tonkin Gulf and the Escalation of the Vietnam War, The University of North Carolina Press, Chapel Hill-London, 1996, pp. –.




The decision to escalate was taken against the background of unwarranted but widely shared beliefs: the conviction of American invincibility, as a long-lasting effect of World War II; the Cold War mindset; and, finally, the conviction of the noncolonialist nature of the American involvement in South Vietnam. The reason the escalation was considered necessary was that South Vietnam was losing on military grounds, against the Vietcong insurgency, as well as on the political and social level, because the government of South Vietnam was not able to stand up as a state and did not enjoy the support of the majority of the population. All the aid and the heavy American funding of South Vietnam were unable to counter this trend. Why, then, would moving the conflict to the North represent a solution to the defeat in the South? The South was losing a civil war in spite of all the American support; how could the civil war in the South be won by bringing the conflict to the North? The North Vietnamese government, admittedly, was helping the Vietcong guerrillas, but only at a limited rate, especially compared with the American help to the South. In any case, contrary to the US narrative, the guerrilla war in the South was not fueled by Communist agitprop, but actually waged by southerners who resisted both the American occupation and Diem’s dictatorial regime. Evidence of this fact was available to American personnel by the Sixties. Thus, how hitting the North could have stopped the guerrilla war in the South and bolstered the South Vietnamese government is not immediately evident. In order to make any sense at all, this move had to be sustained not only by ideological or unexamined assumptions, but also by a good deal of editing and discounting of evidence which, had it been taken into consideration, would have made the move North irrational under the circumstances.
In turn, the discounting of negative evidence was strongly sustained by the desire “to do something” and to secure American victory, the certainty of which was not allowed to be challenged by the discomforting situation. By attributing the lack of success to the cunning of the Communists and to an insufficient military involvement, the Pentagon and the Cabinet avoided reconsidering the final goal of victory. Hence, the loop in the reasoning for the escalation was fed by the anxiety about saving face and doing something against Communism.

See Moïse, Tonkin Gulf, p. . “Nearly all the guerrillas who came from North Vietnam until we started bombing the north in , however, were South Vietnamese who had gone north in ; most of the Viet-Cong in any case continued to be recruited in South-Vietnam; and most Viet Cong arms and equipment were captured from Diem’s army,” Schlesinger, The Bitter Heritage, p. .




As a matter of fact, there was little America could do about the Vietcong success apart from letting things roll and eventually pulling out. But this was precisely what the American administration considered a defeat, a show of weakness and softness, a loss of face and credibility. Hence, it was a barren option. Given that the negative evidence supported precisely what was most feared, the loss of face, the plan to move North was a way of evading that evidence, explaining it away by imputing the defeat to the cunning support of Hanoi. Thus, American officials discounted the primary reason for the Vietcong gains in the South and focused instead on the much lesser factor of North Vietnamese support, the one element that they thought could be taken care of with an adequate military build-up. Meanwhile, they also decided that the risks of a Chinese involvement and of an all-out conflict were limited and could be set aside. The belief that airstrikes on the North could, first, dissuade North Vietnam from supporting the Vietcong resistance; second, convince the Vietcong to surrender to the South Vietnamese government; and, third, make the South Vietnamese government more confident and capable as a result of this new level of American involvement has all the appearance of a mixture of SD and wishful thinking. But the illusory nature of this strategy was lost in the generally faulty reasoning that characterized the whole decision process about Vietnam. The reference to SD and other forms of motivated irrationality allows me to make sense of two different features of the decision to escalate highlighted in later reconstructions. The first is the sense of inevitability, of events unfolding on their own down a slippery slope.
Despite the self-justificatory coloring of this thesis, the feeling of being entrapped was probably there, and can be explained by the presence of SD; moreover, after excluding “pulling out” as an option, “going on” necessarily led to more and more involvement. Each step was entered intentionally and openly, and clear responsibility can be ascribed for each of them; nevertheless, the whole process, with its irrational elements,

This decision, which in fact contrasts with the Cold War mindset and the overemphasis on the Communist threat, was founded on the concomitant knowledge of a power imbalance between West and East, much in favor of the West, which made Communist countries, and China in particular, objectively weak and prone to appeasing American shows of strength. This is the thesis argued by G. Porter in Perils of Dominance, which to his mind provides an alternative explanation to the Cold War mindset. The problem is that the belief in American military superiority actually coexisted with the Cold War mindset, which still made fighting in Vietnam important and pulling out unacceptable for America’s position in the international arena.




escaped consciousness and the outcome was unintentional – and in this sense inevitable. The second feature, pointed out by many commentators, starting with Hannah Arendt, is the faith in technology that constituted the blind drive of the decision-making process. Indeed, the irrational twist given to highly rational instruments and theories has since been exposed. The men selected by Kennedy, and then confirmed by Johnson, as members of the Cabinet and advisers to the president were, according to a well-known definition, the “best and the brightest.” Their overconfidence in technology and in rational problem-solving made them curiously blind to the context in which such intellectual and technological means were applied, a context that was never examined and questioned. Given the final goal of defeating Communism in South Vietnam, they reasoned taking for granted the military and technological superiority of the United States. Hence, when facing difficulties, they proposed moving on to the next rung of military involvement, as if the solution were just a matter of dosing the optimal quantity of strength to secure success. Meanwhile, they dismissed alternatives, difficulties, and defeats. The overall opacity surrounding the strategic reasoning was triggered by the joint desires of winning and of proving themselves right against all available evidence.

2.3. Given the puzzling dimension of the planning in the first part of 1964, it is no wonder that it was later interpreted as a conspiracy to further American involvement in the war by provocation, public denial, and manipulation of facts. There are at least two different conspiratorial views of the Vietnam War. The first considers the will to invade the North and take over Hanoi as the real, though disguised, goal of American imperialism. According to this view, the failure of the puppet regime in the South provided the opportunity to enlarge the conflict and move toward the real goal.
This view was held by Vietnamese officials, as stated in the meetings reported in Argument without End, and also by many interpreters. The second conspiracy view points to the military-economic interests involved in the war as the driving force for seeing the conflict through to the end instead of pulling out. The assumption of either one or a combination

See D. Halberstam, The Best and the Brightest, Random House, New York, 1972. For instance, the enemy’s interests were inferred from a monolithic and mechanistic view of Communism and never assessed in their own light. The nationalistic nature of Vietnamese intentions, both in the South and in the North, was not given serious consideration. See Gabriel Kolko, Anatomy of a War; R. McNamara, Argument without End, p. ; J. W. Gibson, The Perfect War: Technowar in Vietnam, Atlantic Monthly Press, New York.




of the two views provides a rationale for the otherwise puzzling decision to escalate. The apparent faultiness becomes understandable in light of a hidden will, or structure, set on a more ambitious goal than was publicly admitted, while the secrecy was thought necessary because of the implications for American draftees and the goal’s problematic justification. If such explanations lift the semblance of incoherence from the decision to escalate, they nevertheless do not provide a satisfactory account of the complex reality of the planning and of the decision process. The reconstruction of the different viewpoints, of the ambiguities, of the uncertainties, of the pulling in different directions does not sit well with the idea of a planned conspiracy. By contrast, the theory of SD offers a better, if less conspiratorial, account, in which the concurrent desires of showing toughness and resolve against the “commies” and of avoiding defeat were served by deterrence theory and confidence in military might, thanks to a heavy discounting of the negative evidence of losing South Vietnam. Instead of accurately processing the relevant data, with their threatening potential for American officials’ self-confidence, the trick was to bypass them by moving the target and increasing involvement. The fact that so many different agents and agencies were involved in this planning allowed the costs of inaccuracy to be discounted, thanks to a diffusion of responsibility. Yet Johnson was troubled about Vietnam. It was neither his wish nor his decision to get involved down there. He was afraid of risking America’s (and his own) credibility if he decided to move out, but he was even more worried about an all-out war with a big military deployment. All the reports in his first months told him that the situation in South Vietnam was going from bad to worse, which suggested that the American policy of aiding and supporting the South Vietnamese government was not working.
He was not willing to leap into an all-out war with North Vietnam. At the same time, he wanted to look resolute and tough on Communism. Feeling trapped, he let his advisers convince him of the need for escalation. Yet such conviction could not ease his mind completely, because he wanted not only to win, but to win without fighting a full-scale war and without a big troop deployment. In order to contain his anxiety, he also had to believe that the military escalation was something different from an all-out war. That the military action against the North was indeed “moving the war to the North” was denied. The description of the military action as strikes on selected targets for deterrence instead reassured the president in several ways. It was a show of toughness against Communism, crucial for his credibility as Commander in Chief as well as for that of the United States; yet




it fell short of fighting an all-out war, and it followed the recommendation of the Pentagon and of Kennedy’s men who, being the “best and the brightest,” could not but know what they were doing. The decision to escalate the conflict thus became palatable to the president and useful for masking the lie to the public. By means of the administration’s representation, what was hidden from public scrutiny was not a war but an act of reprisal and deterrence, in line with the secrecy required for reasons of national security. It was a secrecy that no one would dispute. Given that the nature of the military escalation was denied for what it actually was, and that such denial was indeed believed by the administration, the lying to the country was not registered as the great deception that it was, but simply processed as keeping classified information from the public eye in order to preserve the military success of the various operations. SD made the lying at the same time less problematic and more justified: what was lied about was not the waging of a war, which was denied as such, but only specific acts, and the lie was justified by national security reasons as a matter of course. This is obviously only a speculative, though plausible, reconstruction of how Johnson resolved his uncertainties by subscribing to the escalation, and then built up a cover story to avoid confronting his lying. And yet, that some SD was at work in this instance is indirectly proved by Johnson’s keenness to have a congressional resolution before striking North Vietnam. When, in the spring of 1964, contingency plans were made for escalation, and more precisely for airstrikes on the North, Johnson was very eager to stress to the press that they were only contingency plans, that there was no official decision about enlarging the military involvement, and that, in such a case, the decision would be brought to Congress for a resolution.
He stated more than once that he wanted to avoid the Korean mistake, that is, striking without the support of a congressional resolution. And indeed he got a resolution on August 7, after the alleged incident in the Tonkin Gulf.

2.4. In sum, SD concerned first the decision to escalate, then the nature of the military escalation, which, coupled with the illusion of invincibility, made the administration oblivious to its own lying. The latter did not seem much of a lie or, better, did not seem much of an unjustifiable lie, as in SD of the third type. Meanwhile, SD also helps to account for seemingly contrasting features of the escalation process, namely the intentional and the unintentional dimensions of the decision-making process. SD enables one to make sense both of the intentional decisions to escalate further, which were carefully planned, and of the consequent intentional hiding




from public scrutiny, and of the overall opacity of the participants as to the whole process, which was blurred by the blocking of negative evidence and by the misrepresentation of reality in a more palatable form. In turn, the overall opacity made possible a certain candor in the lying: they knew they were lying, but were not clear about what they were lying about. In this way, the theory that the war was entered “inadvertently,” down a slippery slope, and the theory that each step was indeed the outcome of a deliberate choice are reconciled, since with SD each step was indeed deliberately taken and yet, given the deceptive and self-serving representation of what was happening, a clear insight into the whole process was precisely what was lacking. Similarly, deception and manipulation of public opinion and of Congress were abundantly and straightforwardly present, but they were not registered as lies, thanks to the self-deceptive belief that this was the usual, matter-of-course secrecy required by national security. Moreover, the lie was going to be vindicated by the certain attainment of the final goal. As a result, the distinction between dishonesty and candor was blurred, and particularly blurred to the participants. I stress once more that my view by no means intends to lessen the responsibility for that terrible conflict, for all the dead and wounded of the war, and for the destruction it produced; the administration and the president were responsible, and not just for mistakes and lost opportunities, but for the motivations that triggered the mistaken calculations and the massive evasion of what was happening, motivations that privileged credibility, ambitions, and the testing of theory and technology over moral considerations and sensitivity to the human costs implied. And yet I think we have the task of trying to understand how such evils are produced, as a premise for avoiding them in the future.

 August , the Gulf of Tonkin .. To sum up the antecedents to the Gulf of Tonkin incidents: in the first half of , the situation in South Vietnam went from bad to worse. From a political viewpoint, the two coups did not produce a stable government and a reliable interlocutor for the United States. Meanwhile, Vietcong penetration had increased, undeterred by the joint effort of the South Vietnam army and of US forces; covert operations went on, yet on a limited scale and with dubious results. As a response, the civil advisers and, more vehemently, the Joint Chiefs of Staff recommended to take




actions against Hanoi in the form of airstrikes. On May , a Draft Presidential Memorandum was prepared envisaging open military action on a thirty-day schedule. The operation was to be preceded by a presidential request that Congress pass a resolution authorizing the administration to do whatever was necessary to resolve the Vietnam conflict. Yet at a conference in Honolulu, held on June , American policymakers recommended a more cautious approach than starting the airstrike plan immediately. Moreover, the prevalent opinion was that no resolution should be brought before Congress unless it could be passed quickly, overwhelmingly, and without too much discussion. Priority was given to the Civil Rights Act, which was going to be discussed over the summer. It seemed that the resolution was simply postponed for a few more weeks, perhaps to September, before the Republican and Democratic conventions. There was, however, a consensus among the top officials that in case of a “dramatic event” the resolution would be brought to Congress right away. The incident of August 4 represented the “dramatic event” justifying the presentation of the resolution to Congress. And indeed, amid the general public outrage about the “unprovoked” attack on American ships engaged in “normal sea patrolling,” the resolution passed without much discussion and with only two contrary votes. The circumstances favorable to the second type of SD were all on the table. The administration, and especially McNamara, was convinced that resolving the Vietnam mess required an escalation of the American military commitment. This conviction, however, could not openly be conveyed to Congress and to the country, for it implied sending American draftees to Vietnam, and it involved going beyond the Cold War doctrine of defensive containment and deterrence. Both implications were likely to provoke internal resistance in Congress as well as in the country.
If, however, it could be shown that reprisal attacks on Hanoi’s forces were necessary responses to provocation, then Congress and the American people could likewise be convinced to license an increase in American troops and means. But in order to convince them, some “dramatic event” needed to take place so as to make reprisal justified. The alleged incident in the Gulf of Tonkin provided precisely the justification to back the resolution licensing the escalation. Thus the incident was required to make the country believe that the escalation was necessary, while the administration thought the escalation necessary for independent reasons, not good enough for the public to accept. Under such circumstances, the




question is: did the incident really happen, or was it a pure fabrication? Let us first summarize the facts.

3.2. At the end of July 1964, two different operations were taking place in the Gulf of Tonkin waters: an OPLAN 34A operation and a DeSoto patrol by the American destroyer Maddox. The first was one of the covert operations planned, funded, and led by American forces but ostensibly carried out by South Vietnam. On the night of July 30–31, four South Vietnamese boats attacked radar and military installations on the islands of Hon Ngu and Hon Me, a few miles off the coast of the Gulf of Tonkin. The following morning, the Maddox, starting her patrol mission, saw the boats going back to their base. The DeSoto patrol operation was meant to gather sensitive information about the enemy’s coastal installations via radio interception, visual observation, and photography. From the North Vietnamese point of view, however, not knowing the electronic equipment on the ship, the patrolling by a big destroyer such as the Maddox appeared as a form of intimidation and as support for the boats making raids against the coast. Actually, it is not clear whether the two operations were intended to be connected and, more specifically, whether the Maddox was meant to back up the OPLAN raids against the North Vietnamese coast. The OPLAN commandos launched additional attacks on August  and . Commodore John Herrick and the officers of the destroyer were not briefed about the first attack but were informed about the two subsequent ones. Meanwhile, the Maddox went on patrolling the coast, at a distance of between eight and twenty miles from the mainland, and four miles from the islands. Given that the North Vietnamese claimed thirteen miles of territorial waters, the armed vessel had a clearly aggressive appearance. On the afternoon of August 2, in retaliation for the previous days’ raids, the Maddox was attacked by three North Vietnamese PT boats coming out of the island of Hon Me.
The ship turned to the open sea, but it was followed by the three boats. Captain Herrick ordered fire on the torpedo boats well before they were close enough to fire back, and called for help from the Ticonderoga, an aircraft carrier stationed nearby.





Since the alleged Gulf of Tonkin incident played a major role in the subsequent escalation, it has been analyzed many times and in detail. I shall basically follow the reconstruction by Moïse, Tonkin Gulf, which could count on more documents and declassified materials than earlier reconstructions such as Goulden, Truth Is the First Casualty. See Moïse, Tonkin Gulf, p. .




Before the aircraft arrived, the Maddox and the boats exchanged fire, but without much damage being done on either side. Then the boats turned away, either because the fire from the Maddox was too intense, or because they had fired all their torpedoes, or because they were recalled; they were pursued by the Maddox, still firing on them, and shortly afterward they were attacked by four Crusaders from the Ticonderoga. As a result, one boat was dead in the water and the other two were damaged. Even if there is no doubt about this attack, there was much confusion over its reconstruction, the damage inflicted, and who started firing. It was also unclear who ordered the attack: officers on the Maddox actually intercepted three messages containing attack orders in which the target was not specified, though they assumed it was going to be their vessel; the chain of command was not clear, but the order apparently did not come from the North Vietnamese government. The attack was reported to Congress a few days later, after the second incident of August 4, when the resolution was presented for approval. McNamara’s account was faulty in more than one respect: (a) He claimed that the attack was unprovoked, stating that there was no connection whatsoever between the patrol and the raids on the North Vietnamese islands, carried out by South Vietnamese sea forces against specific infiltration areas. This was the cover story for the OPLAN 34A operation, whose nature, however, was absolutely clear to the North Vietnamese forces. (b) He then told the Senators that the vessel was thirty miles off the coast when the attack started, while the distance was much closer. (c) He told them a confused story about the start of the firing: McNamara said that the Maddox fired three warning shots, while in his testimony General Wheeler said that the Maddox responded to fire from the boats. Neither version was true, because the Maddox fired at the boats with the intention of hitting them as they were approaching the ship.
(d) Finally, McNamara gave the Senators the impression that, according to the rules of engagement, US forces had not pursued the North Vietnamese boats but simply driven them away. Had the official version of the August 2 attack (the one given to the Senate) been true, it would probably have made sense to call for a reprisal then; but, as a matter of fact, the president opted for holding off on a retaliatory attack.

For a full reconstruction of the incident and of its controversial aspects, see Moïse, Tonkin Gulf, pp. –. Even had the cover story been true, it would not have made sense to expect that the North Vietnamese would fail to make the connection. William Bundy said that “rational minds could not readily have foreseen that Hanoi might confuse them [the DeSoto patrol and the OPLAN raids]” (reported in Moïse, Tonkin Gulf, p. ), but “Why on earth might Hanoi have decided it was unthinkable for a US destroyer to participate in 34A operations?” (p. ).




As mentioned, this misrepresentation of the August 2 attack on the Maddox was presented to the Senate only after the second alleged incident, and after the retaliation for it had been carried out, in the context of gathering support for the resolution. The misrepresentation was meant to motivate the Senators to hand a blank check to the president by emphasizing the kind of wanton provocation to which US forces were exposed. The administration’s own evaluation of the August 2 incident was quite different from the official report, so much so that Johnson decided not to strike back. The reason he provided in his memoirs is that it had not been clear whether the attack was ordered by the government in Hanoi or by a low-ranking officer. In fact, two separate Vietnamese documents support the claim that there was confusion about the attack order and that a recall message was sent while the attack was in progress, as reported in Moïse’s reconstruction. Johnson’s decision might also have been influenced by some intelligence pointing out that the attack was meant as retaliation for the raids on the islands; hence it was not the unprovoked and puzzling incident portrayed in the Senate presentation. In any case, Johnson’s restraint was in line with the attitude of caution about Vietnam he had followed so far, a caution that was publicly coupled with tough statements. Such conduct was typical of a “paper tiger,” which was precisely the accusation made by General Khanh in Saigon. Why, then, was the restraint shown after the first incident so hastily abandoned two days later, after the alleged second incident?
Even assuming the second incident was real, as the administration believed it to be, the reasons favoring restraint in the first case were present in the second as well: the lack of clarity as to who ordered the attack, and the supposition that the attack might have been retaliation in this case too, should have prompted caution, given that there had been further raids on North Vietnamese islands on the night of August . But the response to the alleged second incident was clearly not informed by a vigilant attitude toward the processing and evaluation of the data. The fear of being considered a “paper tiger” became paramount. As Moïse suggests, the “sense that restraint has been shown may have predisposed the United States to react more violently after the second incidents.” If Moïse is right, then the response was geared to the will to look tough and to rebut the allegation of being a “paper tiger,” both on the domestic and on the international scene.

Moïse, Tonkin Gulf, p. . Moïse, Tonkin Gulf, p. .



Reported in Altermann, When Presidents Lie, p. .



Political Self-Deception

After the first incident, Captain Herrick wanted to terminate the patrol operation, but his suggestion was rejected, since the Pacific commanders privileged saving face over security: they did not want the enemy to think their attack had intimidated the US Navy. Thus the DeSoto patrol resumed on August , with a new vessel, the Turner Joy, added, and with orders that the closest approach of the destroyers to the coast was to be  miles, that is, a mile less than what Hanoi claimed as its territorial waters. As said, on the evening of August , new raids on the North Vietnamese islands took place in the context of OPLAN A. This operation was indeed known to the civilian officials of the administration and to Johnson himself, having been discussed in a meeting held at the White House on August  concerning the line of action to be taken after the incident. Yet the records of this meeting are only indirect, via a memo to Dean Rusk referring to the decision to go on with the raids. As Moïse has since reconstructed in detail, this is the public's only way of knowing that President Johnson was informed of the August  raid, which was never mentioned in the reconstruction of the supposed incident of August . Captain Herrick was in any case worried that this new operation, about which he had been informed, might expose the Maddox to a harsher retaliation than the previous attack, and indeed intercepts were received in Washington indicating an imminent North Vietnamese attack on the American destroyers. The precise content and correct interpretation of these intercepts later became a matter of controversy, since they were taken as proof that the attack had been real. In any event, the message from Washington to the Maddox contributed to the climate of anxious expectation of retaliation.
On the night of August , in bad weather conditions, not long after the alert from Washington, some “skunks” (possibly hostile radar contacts) were recorded on the radar screen of the Maddox. From Moïse's detailed analysis of the supposed trajectory of the skunks, we gather that the radar contacts could not possibly have been North Vietnamese boats, both because it was not clear how the North Vietnamese could have localized the destroyers, and because the skunks never got within twenty miles of the two American ships. Soon other seemingly threatening objects appeared on radar and, even though the fire-control radar never managed to lock onto targets for any sufficient time, the Turner Joy opened fire; when the targets disappeared from radar, officers assumed that they had been sunk or hit. The firing went on for a good four hours, and altogether the destroyers fired over three hundred rounds. Nevertheless, no clear target was ever locked on; to be precise, as Patrick Park, at that time the main gun director of the Maddox, said, only one clear and genuine target was locked on by the fire-control radar the whole night, and that turned out to be the Turner Joy. The supposed “enemy” torpedoes were reported by a sonarman on the Maddox, David E. Mallow, who was inexperienced and got mixed up among the noises from the weather, those created by the ship itself, and those of the firing and the excitement of the battle. At  am Washington time, McNamara, who had been informed that the two destroyers were under heavy attack from no fewer than twenty torpedoes, met with members of the Joint Chiefs of Staff and considered what to do. “We agreed that, assuming the reports to be correct, a response to this second unprovoked attack was absolutely necessary.” At  am, he decided to call up Johnson at the breakfast that followed the meeting with Democratic congressional leaders. McNamara's call seems to indicate that he took the report as sound and corroborated, despite the well-known rule within the Pentagon that first reports from a combat area are always to be assumed inaccurate. We are not in a position to say whether the Secretary of Defense knew this basic rule. Apparently Johnson's response was very angry and very determined to pay the attackers back. Half an hour later, in a second phone exchange between McNamara and the president, now in the White House, their conversation did not concern what had happened but focused on how to respond, granting that an attack had happened. McNamara recommended not only pursuing the attacking boats but also retaliating against the coast of North Vietnam. Johnson seemed to share the recommendation heartily. From that moment on, the

 



McNamara, In Retrospect, p. . Moïse reports what P. G. Goulding, the assistant secretary of defense for public affairs, later wrote: “A cardinal rule in an establishment as large as the Department of Defense is to assume the first reports are always wrong, no matter what their security classification, no matter to whom they are addressed. [. . .] Do not under any circumstances or conditions share them with the press, for they will come back to plague you,” p. . Altermann, When Presidents Lie, p. . Johnson's first reaction is taken from Altermann, When Presidents Lie, p. , who was quoting from Robert Scheer, Thinking Tuna Fish, Talking Death: Essays in the Pornography of Power, Hill and Wang, New York, , p. . If Johnson's immediate reaction was for a forceful reprisal, then Porter's interpretation that the president was actually manipulated by McNamara and forced into the military response by his misrepresentation is much weakened. See G. Porter, Perils of Dominance: Imbalance of Power and the Road to War in Vietnam, University of California Press, Berkeley, , pp. –. M. Beschloss, ed., Taking Charge: The Johnson White House Tapes, –, Simon and Schuster, New York, p. .




records show two different types of issues occupying the administration's conversations and discussions on August . On the one hand, in the Cabinet, the Pentagon, and the meeting with Congressional leaders, the issues were (a) how to respond so as to convey the message to the enemy and to the world that a deliberate act of war would not go unpunished, and (b) whether a resolution conferring authority on the president to order military strikes, and whatever else he judged necessary for the Vietnam conflict, would be accepted by the Congress without much trouble. On the other hand, McNamara was exchanging telephone calls with Admiral Sharp, asking for conclusive evidence of the attack. The doubts about the attack started quite soon: at :, a new message from Captain Herrick arrived in Washington expressing such doubts and counseling caution as to further action. Another message, sent shortly after, is quoted in full by Moïse and sounds very ambiguous, almost self-contradictory. It reported that the Turner Joy had been fired upon by guns and illuminated by a searchlight, and that it had fired back on thirteen contacts, hit three, and sunk one. Yet the message ended with these words: “Entire action leaves many doubts except for apparent attempted ambush at beginning. Suggest thorough reconnaissance in daylight by aircraft.” In the frantic message traffic back and forth trying to establish an accurate report of what had happened that ominous night, the information coming from the Turner Joy was much more positive than anything from the Maddox.

Meanwhile, in Washington, McNamara was discussing the plan for the airstrike and the details of the president's television announcement to the country. At the same time, he was pressing Admiral Sharp for final evidence of the attack.
In a telephone conversation between the two at : pm (Washington time), Sharp signaled that there were doubts, but also that the “initial ambush attempt was definite”; it was not clear, however, how many torpedoes had been launched, definitely fewer than originally claimed, and the report was a little confusing about what exactly had happened. McNamara then asked if there was any possibility that there had been no attack, and Sharp admitted that there was a slight possibility. McNamara urged Sharp to come back to him with good evidence as soon as possible because, before taking action, the administration wanted to be “damn sure” of what had happened, and the executive order was going to be launched in three hours; consequently, Sharp had two hours at most to



Quoted by Moïse, Tonkin Gulf, p. .




come back with an answer. McNamara here exhibits some of the typical traits of the self-deceiver. Early in the morning, he was informed of the ongoing attack. It was the “dramatic event” he had badly wished for to push the escalation plan. Dismissing the Pentagon rule of doubting first reports, he took it as good and concentrated on planning the response, as well as on the presentation of the Resolution to the Congress. In the heat of the preparation for the military response, the incoming doubts about the incident were clearly a nuisance to him. They were an unforeseen obstacle to going on with his favored course of action, which appeared to have gained the necessary general consent. However McNamara regarded the need for reprisal, at  pm he asked Admiral Sharp for evidence that the attack had taken place. Like all motivated believers, he wanted his belief confirmed and, by the same token, the retaliation justified. Consequently, he instructed Sharp to provide him not just with whatever evidence could be found, but with confirmatory evidence. “If you get your definite information in two hours, we can still proceed with the execute and it seems to me we ought to go ahead on that basis [. . .] Continue the execute order in effect, but between now and  o'clock get a definite fix and you call me directly.” McNamara did want to go on with the reprisal plan, but he did not want the reprisal to be based on a fake incident; hence, he wanted the Tonkin incident to be a real fact. He wanted to be “damn sure,” and yet he treated the lack of final evidence as a “problem to be fixed” by a later report, not as something that might seriously upset the preparation of the executive order. The implication was that the possibility of overwhelming negative evidence was not properly taken into account.
The subtext to Sharp was to “look for confirmation and clear up the doubts.” Thus pressed for the right kind of answer, Sharp, in a subsequent telephone conversation with General Burchinal, showed that he could do little but focus on those aspects of the ambiguous cables from Herrick that provided positive evidence for the attack. A third cable from Herrick said that the ambush had been bona fide, though it was unclear how many boats and torpedoes had been involved and had fired upon the two American destroyers, how many had been hit or sunk, and whether they had simply disappeared. Hence, daylight aerial reconnaissance was recommended. Considering and assessing the different messages from the destroyers in a motivated frame of mind, Sharp and Burchinal became

See the transcript of the telephone conversation between McNamara and Sharp, reported in Lyndon B. Johnson’s Vietnam Papers, ed. D. E. Barrett, Texas University Press, Austin, , pp. –. Barrett, Lyndon B. Johnson’s Vietnam Papers, pp. –.




inclined to be more positive about the attack; still, they repeated to each other that the airstrike would have to be put off until a definite picture of the incident was available. And yet no one expressed the view that the launch timing needed to be revised. There is a further recorded conversation between the two officers at : pm. Here Sharp announced that he had got a new message that “pins it down better than anything so far.” Burchinal commented, “indicates that they were out there on business, huh?” And Sharp: “Oh yes, very definitely.” Once Sharp found a message confirming and comforting McNamara's wish, that message came to be considered definite proof that the attack had happened, together with the North Vietnamese intercepts, which apparently were ordering the attack. Meanwhile, the pilot James B. Stockdale, flying off the Ticonderoga over the waters where the two destroyers were positioned and where the alleged attacking boats were supposed to be, stated that he could see nothing but sea and American firepower. When, the following morning, he was ordered to fly reprisal strikes, he was astonished and wanted to call the president, worried that the first act of war openly directed at North Vietnam was being decided on false pretenses. Despite the confusion and the unclear and contradictory reports, a careful reconstruction of what went on, such as the one provided by Moïse, shows that the incident was not a fabrication: the crews of the destroyers believed themselves to be under attack, the radar recorded contacts, however defective they might have been, and the ships fired on hypothetical targets, no matter how intermittent. The first report spoke of an attack. True, first reports should not be trusted as a rule, but the alert was nevertheless real. Later reports were not completely and finally dismissive of the attack, but rather very doubtful and ambiguous.
The naval officers were unsure, not certain that the attack had not been real. This is the typical circumstance for the SD process to be triggered: seen from outside and ex post, the evidence was more negative than positive, such as to suggest a careful collection and review of data. Yet it was not conclusively negative; hence it offered sufficient hooks for a biased interpretation confirming that the attack had taken place. In turn, the bias was triggered by the anxiety induced by the contrary evidence, which, instead of being considered as a

Barrett, Lyndon B. Johnson’s Vietnam Papers, p. . The Stockdale story is quoted in Altermann, When Presidents Lie, p. , from J. Stockdale and S. Stockdale, In Love and War: The Story of a Family’s Ordeal and Sacrifice during the Vietnam Years, Harper and Row, New York, , pp. –.




reason to stop the preparation for action, was blocked, activating an anxious search to explain it away. Apparently, McNamara was finally convinced by the North Vietnamese intercepts. As it happened, those intercepts were immediately classified; only much later did it become clear that they referred to the previous attack of August , but the misunderstanding seems to have been real, at least in the early stage following the announcement of the attack. In turn, Captain Herrick and Commander Ogier, assessing the event back and forth in the hours following the alleged attack, became convinced that it did happen. They were certainly under heavy pressure to come up with a final assessment, yet they did not simply lie in order to satisfy Washington's questions. They were certainly in a special position to fuel the general SD process, being the ones directly acquainted with the evidence: under the pressure of the Pentagon, the High Naval Command, and the Secretary of Defense, they had extra-epistemic motivations for believing the attack bona fide, and for biasing the evidence in favor of this belief. Against the straightforward deception thesis, Moïse mentions that both officers had an unquestionable record of honesty, and though pressed for confirmatory evidence, they were not just lying for the Pentagon's sake. Some time later, Herrick started questioning this conviction, when the negative data became clearly overwhelming. Thus, there is no reason to doubt that his previous conviction was indeed genuine. In fact, his conduct perfectly fits the SD explanation of coming to believe a false reconstruction fitting one's wish. Summarizing: on the night of August  (morning in Washington), two American destroyers, the Maddox and the Turner Joy, in the course of a DeSoto patrol in the Gulf of Tonkin, believed themselves to be under attack by a number of North Vietnamese torpedo boats and fired back for a considerable amount of time.
As soon as McNamara was informed of this alleged attack, he, his deputy, and members of the Joint Staff started planning the reaction, which Johnson heartily supported. The whole day of August  in Washington was then spent making preparations for the reprisal strikes and, at the same time, getting the evidence necessary to justify the airstrikes. The two processes ran parallel, but the airstrike plan, despite some assertions to the contrary, was apparently not made conditional on getting conclusive evidence of the event. After the first

See McNamara, In Retrospect, p. ; also reported by Moïse, Tonkin Gulf, p. . Moïse discussed the misunderstanding on the intercepts at length on pp. –. Moïse forcefully argues that the incident is not the product of fabrication or of a straightforward lie; see p. .




message from the destroyer about the ongoing fight, others followed, suggesting doubts about what exactly had happened in the Gulf of Tonkin. The doubts did not stop the preparation but made all participants anxious to be reassured that the incident had been real, and to have the information in time for the executive order to go out as planned. Under this pressure, the two naval officers in charge assessed the cloudy evidence in a biased way and became convinced that the incident had been real, though the details were still to be checked, possibly with a reconnaissance flight in daylight. Notwithstanding the persistent ambiguity of the communications from the area, Herrick's statement that the attack had been bona fide was taken as final by Admiral Sharp, then by Burchinal and McNamara; the inconsistency between the attack having taken place and the lack of evidence of the attackers was ignored, while the North Vietnamese intercepts were taken as the confirmatory proof. After settling that the attack had happened, the strike followed.

The Response and the Interpretations

The airstrikes went on as scheduled, and the president announced them on TV an hour before the airplanes took off, in order to be on the national news at : pm. Hanoi monitored the announcement and managed to shoot down two airplanes. The misplaced announcement, which actually alerted the enemy, obviously became the subject of another controversy connected with the Gulf of Tonkin, which, however, need not concern us here. The Resolution was then presented to the Congress on the morning of August . If the airstrike was important insofar as it was the first open act of war against the North, without the Resolution it could nevertheless have remained an episode of military action without much bearing on the conflict as a whole. Together, they represent the first step in the escalation to an all-out war, though the participants might well have believed that it was a case of tit for tat in a deterrence strategy. No doubt the Congress was misled both about the evidence of the attack and about its interpretation. While the cloudiness of the evidence was kept out of the discussion, the two attacks (of August  and )

McNamara, commenting on the resolution, wrote that while the meaning of the resolution was clear, the war's potential was not, and that was the reason why the Congress later felt it had been misled by the resolution. No doubt he wanted to justify himself against the charge of manipulating the Congressmen's will, but it is true that the scale of the war was then unclear even to the administration. See In Retrospect, p. .




were presented as part of a series of aggressive actions by Hanoi, alleged to be proof of North Vietnam's intention to conquer the South as well: “The present attacks . . . are no isolated event. They are part and parcel of a continuing Communist drive to conquest South Asia . . . and eventually dominate and conquer other free nations of Southeast Asia.” Moreover, the attacks were described as unprovoked, and this was clearly misleading. McNamara denied that there was any connection between the DeSoto patrol, which he argued was a routine operation of the American Navy throughout the world, and the South Vietnamese attacks on the islands in the North; he also added that the destroyers' officers were not even informed of OPLAN operations. The last statement was plainly false: Captain Herrick knew of the covert operation, and McNamara had to admit it as a mistake in his memoirs. But even if the two destroyers' patrol was not intended as a backup for the OPLAN strikes, as he still contended in , he nevertheless gave an inaccurate account of the patrol, which was not so innocent and came much closer to the coastline than he had stated. The explicit intention of showing the flag was an act of intimidation, and challenging Hanoi's claim to territorial waters looked aggressive. Moreover, even granting that the two operations, OPLAN A and the DeSoto patrol, were completely unconnected, as claimed, how could the North Vietnamese have guessed that, given their apparent connection? All in all, the presentation of the alleged incident was a pretense meant to convey a sense of danger and drama apt to stir up patriotic outrage. Under the circumstances, a Congress in the grip of the Cold War mindset did not need much persuasion as to the appropriateness of the previous day's retaliation, and of the delegation of power to the president to take action in case of future developments in Vietnam.
Rusk's statement on this point clearly hinted at the fact that, if the situation developed in yet unforeseen ways, “of course there will be close and continuous consultation between the President and the Congress.” That was precisely what the administration later carefully avoided doing. In that respect, we can say that the Congress was misled as to the use the president later made of the resolution. No representative then thought that he was voting for a declaration of war. And since the resolution was presented and passed under the pressure of an (alleged) dramatic event precisely to avoid extensive discussion and open public debate, it is difficult to believe that the administration intended to discuss all future steps and the scale of involvement in Vietnam, while in

Dean Rusk’s statement in the Senate, reported by McNamara, In Retrospect, p. . Quoted in McNamara, In Retrospect, p. .




reality requesting a blank check. Yet it is likely that the administration and the president did not realize that they were stepping into a full-scale war in Vietnam. Johnson and his advisers clearly meant the resolution as the legal tool to deepen the American involvement in the North. They also wanted the Resolution to allow the administration to bypass Congress as to the next steps of such involvement, and to avoid a public debate on so touchy an issue as Vietnam. Finally, they knew they were not being honest about the Resolution. But, very likely, they were blind to the overall meaning of the escalation and of the Resolution in the context of the all-out war they were self-deceptively preparing. Thanks to this SD, their lie to the Congress and the public did not look to them as gigantic and monstrous as it would later appear. They knowingly lied, but did not fully realize that they were denying that a war was being fought, an undeniable war in which thousands and thousands of draftees were “secretly” sent to die. Even if the belief in the incident in the Gulf of Tonkin was genuine, it was used as a pretext to extract the Resolution from the Congress. In this respect, McNamara lied to the Congress, and his SD was instrumental to this deception. The deception consisted in making the Congress (and the country) believe that the repeated provocative attacks by North Vietnam called for quick responses, for which the president was asking Congress's authorization. On the basis of such deception, they managed swiftly to pass a resolution that formally delegated decisions about the Vietnam conflict to the president's judgment, on the manipulated understanding that the president was not going to use that power to bring about an all-out war in Vietnam. If the alleged incident represented an instance of SD, as I have argued, then it produced the deception of the American public and provided the grounds for further deliberate deception.
As is often the case with SD, its short-term outcome was positive for the administration, and for Johnson's popularity as well. The quick and decisive action in the face of a national security emergency gained the president an overwhelming consensus, including from his Republican opponent Goldwater, and pushed him up in the polls for the oncoming election. The press backed the action and provided an imaginative report of the battle that had never happened. Despite all the doubts that surrounded the incident from the start, the official version was not challenged even by the usually more accurate media. Nor was the press in any way moved by a

For the media coverage, see Goulden, The Truth Is the First Casualty, p. , Moïse, Tonkin Gulf, pp. –, and Altermann, When Presidents Lie, pp. –. Moïse makes reference to the




front-page article that appeared in Le Monde on August , which reported the role of American forces in backing the guerrilla actions of the South Vietnamese in the North, thereby suggesting a link between the Tonkin incidents and the raids on the islands, a link that had been officially denied, a denial the American press had swallowed without much fuss. As we shall see, the benefits were but short-lived.

Some interpreters supported a conspiratorial view of the Gulf of Tonkin incident; others backed the official version, despite all the contrary evidence; and finally some scholars provided accurate and balanced reconstructions that acknowledge the mixture of mistakes, confusion, biases, and lies. I will not devote much attention to the official version, even in the updated account of historical revisionists, because the evidence now available is so overwhelming that it leaves no latitude for the revisionist interpretation. After the very detailed reconstruction by Moïse, and General Giap's testimony, which finally convinced McNamara that the August  incident never took place, the contrary thesis can be endorsed only by manipulating documents and records. I shall rather concern myself with the conspiratorial interpretation and with what I would call the apparent conspiracy view. For it must be admitted that, seen from outside, the whole chain of events that ended with the resolution has all the appearance of a conspiracy. There was a background interest (the plan to escalate the conflict); there was the need for a pretext, because of the president's caution and uncertainty about what to do; there were covert actions, just then intensified so as possibly to provoke a reaction; finally, there were the incidents, or something that could pass as an “unprovoked act of aggression”; the August  incident was hastily presented as a “dramatic event” and as such enabled the activation of the much-sought-for military response.
All of the elements for a conspiracy were actually in place: whether there was a deliberate conspiracy or only a chain of events that came to look as if they were brought about by a coordinated plan of actions is a different matter

  

John Wayne syndrome: “The press was, in fact, presenting a classic John Wayne image for American behaviour: the quiet man who is not easily provoked, but whose wrath is devastating when he is pushed too far,” p. . Also in Altermann, When Presidents Lie, p. . We have already mentioned Giap's testimony, reported in Argument without End, p. . Moïse shows how the revisionist historian Douglas Pike misquoted the official history of the People's Army of Vietnam so as to convey the idea that the August  incident in the Tonkin Gulf is actually acknowledged by the Vietnamese. See pp. –.




that has precisely to do with the role of deception and SD, respectively. Let us now consider the conspiracy view. Conspiratorial interpretations may vary according to the scope of the supposed conspiracy: given the final interest in escalating the war, the conspiracy might have consisted in (a) the direct fabrication of the Gulf of Tonkin incident; (b) the artful provocation of the incident; (c) the deliberate misinterpretation of what happened on August ; or (d) the willful hiding from the president of the lack of final and consistent evidence about the incident. The fabrication theory was specifically held by the North Vietnamese, and with good reason, given the evidence they had. Moïse, in the preface to his study, reports that it was a shared viewpoint among the Vietnamese that Johnson had directly faked the incidents in order to create a pretext to escalate the war. Though he believes that this view is groundless, given the amount of records available, he also holds that it was perfectly justified at the time from their perspective. The provocation view is indeed inconsistent with the fact that the incident did not actually take place. So it is often rephrased as follows: given the intention to provoke an incident, what happened was interpreted as if it were real, while the provocation was denied. In this way, the provocation view is conflated with the manipulative view. The manipulative interpretation concedes that the alleged incident of August  was not faked, but the outcome of mistake, confusion, overexcitement, weather conditions, and the like. Yet it considers the hasty jumping on the patchy report a deliberate manipulation of the event, meant to justify first the reprisal and then the resolution.
According to this view, the original mistake, which had been genuine, was then dishonestly presented as sound, and cunningly twisted in order to justify the retaliation and, finally, to provide a pretext for securing the Congress's authorization for whatever act the president decided to undertake. From the moment the second cable from Herrick arrived, suggesting that very likely no attack had taken place, the administration knowingly acted on a pretense, which was then artfully enriched and embellished so as to make the improbable attack a very vivid threat to national security and, at the same time, indisputable proof of Communist aggressiveness and untrustworthiness.

 

 Moïse, Tonkin Gulf, p. . Moïse, Tonkin Gulf, p. XV. This view is quite widespread, and is held, for example, by Wise, The Politics of Lying, pp. –, and by Altermann, When Presidents Lie, pp. –.

Johnson and the Gulf of Tonkin Resolution



The manipulation view cannot be conclusively proved false; yet, as Moïse remarks, there are no strong reasons to doubt the administration’s belief in the incident on the evening of August 4. After all, SD accomplishes more swiftly what an open manipulation would do, and is more effective. There is a final view, which circumscribes the conspiracy to the military and Johnson’s advisers, especially McNamara, while exonerating the president. This view is argued by Porter, on the basis of the telephone traffic through the Pentagon, the Pacific, and the White House. According to this view, the conspiracy consisted in McNamara’s hiding the shaky evidence of the attack from the president, who therefore decided to retaliate without suspecting that confirmation of the attack was still pending. This final view, however, does not state that McNamara knew that the incident did not happen and deceived the president on this point, but simply that McNamara did not inform the president of all the ambiguities in the reports of the evidence. He did not expose Johnson to doubt, yet that does not imply that he believed the incident never occurred. The apparent conspiracy view, by contrast, admits the presence of lies in the whole affair, and yet denies the conspiracy. The lies actually attached to a genuine episode of misperception, which was then interpreted according to wishes and expectations instead of careful data review and assessment. Once it landed in Washington, the alleged incident set in motion a reaction that the participants then defended from threatening evidence, and later presented to the public in a version that was deceptive even granted that the attack had been real. 
In one of the first critical accounts of the affair, published by Joseph Goulden in 1969, the author said that “Washington acted on the basis of assumptions, not facts – hastily, precipitously, perhaps even unnecessarily – firing at an unseen enemy lurking behind the blackness of misinformation,” and commented that the Tonkin affair is an instance of “the confusion between illusions and reality and the inclination of man to act upon facts as he anticipates they should be, rather than what a rational examination shows them to be.” Reading carefully the highly detailed account by Moïse, where all kinds of technical evidence are reviewed and all viewpoints are taken into account, one reaches the conclusion that (a) a conspiratorial plan of any scope does not sit well with the records, and (b) there is no need to suppose conspiracy in order to attribute responsibility to the administration and the Pentagon. Even siding with a nonconspiratorial view of the affair, the mass of lies and  

See Porter, The Perils of Dominance, pp. –. Goulden, Truth Is the First Casualty, p. .



Political Self-Deception

misrepresentation that various agents, starting with McNamara, the Joint Chiefs of Staff, the president’s advisers, and the president himself, presented to the American public was impressive and ominous in its catastrophic consequences. Moreover, I would add that SD helps explain various aspects of the whole affair, making sense both of the mistakes and of the lies. Yet the conspiracy supporter may argue: given that the alleged attack was used as if a conspiracy were in place, why then discard the conspiratorial view altogether? The first reason to be suspicious of the conspiratorial view is the following. If, in the whole affair, “the first casualty was truth,” as in the title of Goulden’s book, then I think that truth is not well served by conspiracy theory, unless an actual conspiracy can be proved with rock-solid evidence. If what was particularly disturbing for American citizens in this affair was “the extent to which the appearance of this incident differed from its reality,” then the stubborn intent to cover up the truth even twenty years later, as shown in the Navy history of the early Vietnam years, should not be paralleled by a captivating, but underdocumented, alternative conspiratorial version. And the documentation patiently piled up by Moïse does not support any conspiratorial view: there was neither fabrication nor straightforward lying as to the reality of the incident when the reprisal was decided. (The doubts resurfaced sometime later, when Ray Cline and the CIA officers pointed out that the interpretation of the intercepts was dubious.) According to Moïse, the records offer no reason to suppose that McNamara or the president was not genuinely convinced of the reality of the attack on the evening of August 4. True, McNamara had been exposed to the doubts and ambiguities of the reports from the Pacific, and he certainly put pressure on the Pacific commanders and on the destroyers’ officers to confirm the attack. 
But when he got reassurance from Sharp about the Vietnamese intercepts and about a cable from Herrick confirming the attack, he suppressed whatever doubts might have bothered him before and embraced a confirmation that luckily met his own wish. He, together with the military and the Cabinet, badly wanted the right opportunity to present itself to escalate the war and to pass the Resolution.  

See Moïse, Tonkin Gulf, p. XII. Moïse reported his indignation at seeing published, in E. J. Marolda and O. P. Fitzgerald, The United States Navy and the Vietnam Conflict, vol. 2: From Military Assistance to Combat, 1959–1965, Naval Historical Center, Washington, DC, 1986, an account of the Tonkin Gulf incident that presented the incident as real, with evidence presented as sufficient to convince anyone and to do away with the doubts and criticism that had surrounded the affair from the start.




In that, he was well disposed to believe whatever met such desire, and was less than meticulous in assessing the data and the reports. The lack of accuracy in reviewing data and assessing records may prove his belief ungrounded, but not counterfeit. The analysis of SD has shown that we hold many beliefs which are unwarranted and irrational, but which are nonetheless genuinely believed. Had he thought the incident a clear case of misperception by the two ships’ crews, he would have been more wary of acting on the pretense that it was real, for he surely must have known that the stakes were high, and that such a pretense was likely to be uncovered sooner or later and to backfire on the administration, as it did. Johnson’s advisers had already considered the possibility of escalating the conflict and presenting the Resolution in June. Yet they decided to postpone the decision to a favorable moment, after the Civil Rights Bill or after the nomination, unless “a dramatic event” happened. Just as the administration did not take the first, real incident of August 2 to be sufficiently dramatic to constitute such an opportunity, so it should not have considered a case of misperception sufficient, had it conclusively known that it was a misperception. The restraint shown after the first incident might well have left the administration with the sense of having been very, even too, patient, almost soft on Communism, and that feeling might have contributed to the build-up of the toughness displayed after the alleged second incident. And yet the desire to show resolve after the restraint would not have been pursued in spite of the knowledge that the incident was not a fact. After all, it is not easy to reconcile the supposed knowledge that the incident did not happen with (a) the subsequent inquiry into the evidence, and (b) the caution shown by Johnson after the alleged third incident of September 18, when he decided not to intervene because the evidence was too murky. 
The subsequent inquiry was definitely less than impartial and “did not encourage naval personnel to express their doubts.” As acknowledged by Alexander Haig, then an assistant to McNamara, “the endless attempts . . . to verify that an attack had in fact taken place were focused on finding confirmation, not on finding what had actually happened.” This attitude can be fitted either into SD or into a conspiracy theory. On the one hand, the self-deceiver looks for confirmation to defend her belief from threatening evidence; on the other hand, the conspirator covers up the  

Moïse, Tonkin Gulf, p. . Alexander Haig, Inner Circles: How America Changed the World, A Memoir, Warner, New York, , p. .




pretense with an adulterated inquiry. On this point, Moïse interviewed Dr. Daniel Ellsberg, who was assistant to the Assistant Secretary of Defense; Ellsberg stated that Alvin Friedman, in his mission to the Pacific, was sent to find not the truth but, rather, evidence “to shore up our case.” And yet Ellsberg acknowledged that if Friedman had stumbled on the truth that no incident took place on August 4, he would have reported it to his superiors; but then, he added, “such negative evidence would certainly not have been revealed to the Congress or the public.” Taken together, Ellsberg’s assertions about what Friedman would have done had he found negative evidence, and about what the administration would have done with it, support the SD story over the conspiratorial view. As in all cases of SD, the search was oriented to fulfill the wish that P were true; but the self-deceiver, even if anxious to confirm that P, is nevertheless able, if not completely deluded, to see conclusive negative evidence. Moreover, the fact that a negative result would in any case have been kept from public scrutiny seems to confirm that the inquiry, though biased, was not a deliberate pretense meant to throw smoke in the public’s eyes, given that the outcome was not meant for the public anyway. Finally, a later investigation conducted within the Pentagon, and not disclosed to the public, concluded that there had been no attack on August 4. And this conviction was shared by many sectors of the military, the JCS, some Navy people, and intelligence analysts. From the assessment of these records, the most plausible conclusion is that the belief that the attack had been real was not a pretense, but genuine. 
Being produced via a self-deceptive process, though, the conclusion had to survive doubts and contrary evidence, and needed to be defended against new data; the inquiries can be seen as “desperate” attempts to defend the belief and to look for confirmation. In the end, some internal inquiry found the negative evidence compelling. A few weeks later, Undersecretary of State George Ball reported that the president said of the incident: “Hell, those dumb stupid sailors were just shooting at flying fish.” Though doubts and certainties, worries and confidence were not evenly distributed among the people in the administration and the Pentagon, the search for confirmation that produced “endless attempts to verify the attack” seems to show that the   

 Moïse, Tonkin Gulf, p. . Moïse, Tonkin Gulf, p. . The adjective “desperate” is a quote from Ellsberg who defined “desperate mission” the enquiry by Friedman. In Moïse, Tonkin Gulf, p. . The quotation from George Ball (Past Has Another Pattern, Norton, New York, , p. ) is reported both by Moïse, Tonkin Gulf, p. , by Altermann, When Presidents Lie, p. . A similar statement is reported in a slightly different form and placed a little later in time (early ) without source by Goulden, The Truth Is the First Casualty, p. .




administration’s belief that the incident had been real was genuine on August 4, and was later defended as far as possible, until data reviews conclusively proved otherwise. All in all, even though the affair looks like the outcome of a conspiracy, the view that the incident on the night of August 4 was a case of (willful) misperception and of self-deceptive belief finds more confirmation in the records than the conspiratorial view. Does this view lessen the responsibility of the administration with reference to the whole of the Gulf of Tonkin affair? Not at all. Even though the administration’s belief in the incident was genuine, at least in the first days after August 4, the incident was definitely used as a pretext for retaliation. And, in order to work as a pretext, the alleged incident had to be further manipulated, and in this case knowingly. Even before the firing started in the Tonkin Gulf, on the sole basis of intercepts suggesting a probable attack, preparations for a reprisal started in Washington, in the midst of the utmost confusion from the battlefield. In the second telephone call with the president that morning, Washington time, McNamara was ready to suggest that a proper response would include airstrikes on some sensitive North Vietnamese targets, and Johnson agreed at once. At this point we have two different interpretations of who was the real hawk pushing for retaliation. According to Moïse, who relied on McGeorge Bundy’s testimony, Johnson made up his mind very early on, possibly even before the shooting started, to use the incident to have the resolution swiftly passed. 
He apparently was not bothered that full evidence proving that the attack had happened was still lacking; hence his resolve put the whole Cabinet under pressure, and in the uneasy position of asking questions about the incident on behalf of the president. In Bundy’s version, it was Johnson’s resolve, driving his advisers to prepare the attack while the final evidence was still pending, that was key. According to Porter’s version, it was instead McNamara and other advisers in the Cabinet and in the military who pushed Johnson toward retaliation, without letting him know that both the task force commander of the US warships and the commander in chief had suggested further review before taking any action. Porter’s thesis, being based on a whole body of newly accessible documents, may have the stronger point here. From my viewpoint, which is to establish the explanatory role of SD, who led, or better yet  

See Moïse, Tonkin Gulf, p. , referring to the records of the Johnson’s year on Vietnam edited by Gittinger, p. . See Porter, The Perils of Dominance, p. .




misled, the other does not change the argument significantly. It seems clear to me that the desire to make a casus belli of the incident was the driving force that activated the SD process concerning the reality of the attack, as in all SD of the second type. In turn, the self-deceptive belief in the attack triggered the straightforward deception of the Congress and of the public in order to justify the retaliation, albeit ex post, and to bypass a serious debate on the resolution. I stress that the responsibility of the administration was not just for the manipulative way in which the incident and the resolution were presented. I think that they bore a prior responsibility concerning the plan to escalate the war by striking Northern targets, a plan which was publicly denied both before and after the Resolution. The very idea of having a war policy that cannot be openly declared, admitted, and discussed is not only duplicitous but also difficult to manage, given that young citizens had to fight in the war and many died. So the very idea of a deniable war is twisted, both from a moral and from a rational point of view, and as such provides an easy trap into illusions and delusions. We have seen earlier that the puzzling conviction of a need to retaliate was based on a self-deceptive discounting of what the military escalation meant and of what constituted a war. Such blindness is, however, blameworthy, considering that it stemmed from a motivated lack of vigilance and of consideration of the dramatic consequences. Moreover, if the hiding of the escalation strategy was prompted by a gap in the country between the widely shared Cold War mindset and the actual will to fight a war against some Communists far away, the options to be responsibly considered should have been either to attempt to realign the country with its ideology or to question that very ideology. 
But the abuse of secrecy in supposed national security matters produced the misconceived idea of bypassing this gap by taking advantage of a dramatic event, so as to produce the appearance of being tough on Communists without waging a full-scale war. This move, which to my mind constituted a truly culpable evasion of the administration’s responsibility, also provided the fertile ground for the SD about the incident, since it created anxious expectations in civil and military officials about a dramatic event to come. And it must be admitted that, among the many forms SD can take in politics, mistaking a misperception for a naval attack is certainly the most striking. I thus hold that the administration’s responsibility lies not in fabrication, which I have argued was not the case, but in the search for a pretext as the easy solution to the wish to escalate the conflict without being prepared to pay the price in terms of public controversy. “A war




waiting to happen,” as Altermann remarked, set the trap for the SD process to start. Thus, even though the belief in the incident was honest, as at first I believe it was, the administration was at fault for creating the appropriate circumstances for such a deceptive process to start, as well as for the subsequent lies told to the Congress and to the American public. Only after the hearings promoted by Senator Fulbright did the truth start to surface (despite the efforts of the participants to stick to the deceptive version), and the truth about the Tonkin incident finally sealed the end of the careers of McNamara and of the president himself.

5 Concluding Remarks

The SD interpretation of the Tonkin Gulf attack is not equivalent to the “mistake interpretation” later favored by McNamara and most of the participants. Not only was the SD about the incident coupled with plain lies to Congress and the country, but the circumstances setting the SD process in motion were also willfully and irresponsibly produced by the administration and the military. Thus, the assessment of responsibility for the whole affair amounts to something heavier than the misperception, confusion, and inaccuracy which is basically what the participants admitted years later. Here is the list of the straight lies coupled with SD:
a. The lie about the absence of provocation, and about all the details of the two covert actions taking place in the Gulf.
b. The lie about the implications of the resolution, which were played down in the Senate debate and denied again throughout the electoral campaign and afterwards.
c. The lie about the incident in the hearings on the Tonkin affair led by Senator Fulbright in 1968. At that point, as we have seen, the Pentagon and the administration knew from all sources that the attack most likely did not happen.
I have already reported Johnson’s comment about the flying fish. Moreover, Altermann quotes a telephone conversation the president had with McNamara at the time of the alleged third incident of September 18, in which the president told the Secretary of Defense that a response to imaginary

I want just to recall that, in the case of McNamara, he switched to the “mistake and misjudgment” version only after his confrontation with General Giap in 1995, while in his memoirs, published in 1995, he still said that the incident most probably happened, and strongly denied that the alleged incident had been provoked. See In Retrospect, p. .




shots was not justified, and that he did not want to take the risk of an unjustified retaliation. This record proves that by then both men knew that it was highly unlikely that the earlier incident had been real, and that the scanty evidence still in place would not have been sufficient ex ante to justify the attack. Nevertheless, McNamara’s testimony before the Foreign Relations Committee was a stubborn repetition of the original version he had presented to the Senate on August 6, 1964. He dismissed all the doubts about the reality of the incident, citing the intercepts as the final proof. They were, however, classified documents and could not be examined by the Committee at the time; we now know that they were misinterpreted. Moreover, he indignantly rejected the accusation that there had been an American provocation with the two secret operations, and dismissed all contrary evidence presented to him. Of this examination, McNamara only remarked in his memoirs: “At the February 20, 1968, hearing called to reexamine the affair, Senator Fulbright graciously absolved me of the charge of intentionally misleading the Congress.” As if only bad intentions carried responsibility. By then, in 1995, McNamara had switched to the mistake/misjudgment/missed-opportunity version, which supposedly saved the honesty of his intentions, if it did not fully justify his deeds. Lastly, among the official lies is (d) the final rehearsal of the deceptive report of the incident, colored by massive though biased documentation, provided by the official historians of the American Navy in the mid-1980s. We can thus conclude that SD did not dispense with deception of others but, on the contrary, was ancillary to lying and also provided a good basis for further deception. This is precisely the last argument I shall present in favor of the SD interpretation of the Gulf of Tonkin affair. 
Not only is SD better grounded in the records and documents than conspiracy, without lifting responsibility off the administration; I also contend that the SD version better explains the lies and the final backfiring. Self-deceptive convictions are at the same time genuine and self-serving misrepresentations of data. If one adds some details to such   



See Altermann’s quotations from Johnson’s secretly recorded private Oval Office tapes, on p. . McNamara, In Retrospect, p. . In Argument without End, McNamara takes the view that political leaders bear a political responsibility to the people for their misjudgments and mistakes; it is part of their professional duty to be accurate and wise, and not to be carried away by uncritical thinking in assessing the situation and the options. See the already cited comment by Moïse about the history of the early years of Vietnam published by the US Navy, edited by Marolda and Fitzgerald in 1986.




misrepresentation, he might not feel it was a lie in a proper sense, but merely an enrichment of reality as he believes it to be. Yet, being in fact the “enrichment” of a misrepresentation, its distance from truth is much wider than the agent perceives. Describing as “unprovoked” an attack arriving after a number of military raids is deforming reality; but if the attack did not happen in the first place, the scale of the deformation dramatically increases. SD helps to keep the further lie in scale, so to speak, which in turn helps the liar to convince himself that he is not really lying. Furthermore, the self-serving deceptive belief may convince one that the attached lie is indeed necessary. Thus, if the administration was convinced that a second attack on American ships did happen; if, moreover, it also believed that such an attack had to be punished, according to the tit for tat of deterrence theory; and if, for similar episodes, a congressional authorization was actually in order, and easier to get if the further actions were left in the dark, then the administration felt entitled to blur the meaning of the much-wanted resolution, given the urgency of having one ratified. Finally, when contrary evidence started coming in and exposing the SD, the men in the administration probably at first tried to defend their deceptive but comfortable belief by denial. When the belief became untenable, they openly lied, trying to hide their foolishness and their irresponsible actions. And they might well have felt forced to lie at that point, officially to protect the administration and the military in the conflict, but in fact to protect themselves. This kind of excuse is again a piece of SD of the third type, meant to justify the officials’ less than impeccable conduct. All in all, we see how a whole chain of lies about the attack developed from the SD. 
SD helped the liars to swallow their own lies while keeping a straight face, because in the grip of SD the lies appeared either minimal or justified by noble causes. Hence the widespread impression of later interpreters that the administration “was caught in its own lies.” Altermann remarks, about the television coverage of the Senate hearings led by Fulbright, that administration officials “grilled” by Fulbright “were made to look foolish, duplicitous, or both, as they repeatedly tried to put a favorable spin on what was becoming an obvious calamitous situation.” SD accounts precisely both for the appearance of foolishness and for that of duplicity: the self-deceiver does indeed dupe himself, though not knowingly, and in discarding or denying contrary data available in front of him, he looks duplicitous. 

Altermann, When Presidents Lie, p. .




At the end of his account, Altermann offers final comments on the two main characters of this affair, McNamara and Johnson, arguing that there was something more than plain lying in both. He says: “McNamara’s disquiet would have been understandable for any public servant who believed himself to be an honorable man. But even when he was truthful . . . it was on the basis of SD constructed out of a willful ignorance of uncomfortable facts, as in the case of the Tonkin Gulf.” I think that McNamara actually believed himself to be an honorable man, and that it was possible for him to do so thanks to repeated processes of SD. Contrary to what Altermann suggests, SD was prior to lying, and actually sustained the lies as justifiable, hence in line with his honor. As we have seen in the first two chapters, SD can become a bad habit staining the moral character of a person, and that is why an indirect strategy to avoid falling prey to SD is necessary. When McNamara eventually confronted Giap in 1995, he could not but give up his previous position that the August 4 attack in the Tonkin Gulf had happened, at least probably, and had to admit as much. Nevertheless, he did not admit any deception, and it took him two books to explain his and his colleagues’ failures while denying any intentional misleading of the Congress and the country. Once again, the mistake-misjudgment theory became his cover story for denying the administration’s moral responsibility regarding the truth. SD about the Tonkin incident would not have occurred so easily had it not been preceded by the misguided desire to escalate the war, which, as shown above, was the outcome of unexamined assumptions, ideological prejudices, cold bias, and a good dose of SD in the form of evading hard facts and embracing illusory solutions that looked justified only because they were misrepresented. 

Altermann, When Presidents Lie, p. .

 

Chapter 6 Bush and the Weapons of Mass Destruction

1 Smoking Guns, Mushroom Clouds, and Smokescreens

In his speech in Cincinnati (October 7, 2002), when the run-up to the Iraq invasion was at full speed, President Bush, using a metaphor from Kennedy via Condoleezza Rice, provoked the audience with these words: “Facing clear evidence of peril, we cannot wait for the final proof, the smoking gun, that could come in the form of a mushroom cloud.” We can take this statement as a paradigmatic presentation of the war to come, based on the fear elicited by a worst-case scenario, the mere possibility of which suggested the need for preventive measures against infinitely dramatic consequences. At the time, the decision to go to Iraq had apparently already been taken; between the summer of 2002 and the spring of 2003, when the military campaign started, the problem the administration faced was instead how to sell the war to the American public. And the justification that seemed publicly acceptable and capable of mobilizing consensus was Saddam’s possession of WMD, a rationale linkable with terrorism, insofar as a rogue state, ruled by a volatile tyrant, could be conjectured to be the provider of WMD to terrorist groups, even if it had not yet been so. That was, at least, the leading idea of Bush’s aides in international affairs, an idea actually going back to the Cold War mindset, which prevented them from seeing terrorism as a different kind of challenge from that posed by the Soviet Union. On August 26, Vice President  

Reported in Bob Woodward, State of Denial, Pocket Books, London, 2006, p. . This thesis is well argued by Holmes, The Matador’s Cape. Holmes’s position differs from that of F. Kaplan, Daydream Believers: How a Few Grand Ideas Wrecked American Power, Wiley, Hoboken, NJ, 2008. Both Holmes and Kaplan maintain that the end of the Cold War had not been properly processed and understood by Bush’s aides and counselors; however, Kaplan downplays the significance of 9/11, which in his view was overemphasized as a definite turning point, while Holmes holds that terrorists’






Cheney publicly affirmed for the first time that “there is no doubt Saddam Hussein has weapons of mass destruction”; his bold statement was vetted neither by the CIA nor by President Bush, and somehow put pressure on the administration to adopt a confrontational stance at an earlier time. On September 8, the story of an attempted purchase of aluminum tubes, assumed to be for uranium enrichment, was published in the New York Times by Judith Miller and Michael Gordon, so that the suspicion that Saddam’s WMD could include, or would soon include, nuclear weapons was made vividly present to the American public. No matter that the connection between the aluminum tubes and a nuclear program had appeared quite slim to American intelligence; it was available for the administration to start hinting at a nuclear danger, culminating in the president’s speech in Cincinnati. The suggestion was that the danger, though still unsupported by reliable estimates and clear probabilities, was so dramatically great as to trump all balancing considerations and require immediate action against the risk, however small, of an infinite catastrophe. The threat posed by Saddam was approached with a logic similar to that of Pascal’s wager: the belief in the threat, like the belief in God for Pascal, was not epistemically grounded by proper warrant but, rather, justified by a practical rationality argument inviting one to focus on the infinite danger rather than on the probabilities of the outcome. In this logic, the belief in the possession of WMD, like the belief in God’s existence for Pascal, was entrusted to the will instead of to cognition, and epistemic rationality was put in the service of the consequent practical aims, namely precautionary alertness and preventive measures. 
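The wager-like structure of this reasoning can be sketched in expected-value terms (a schematic gloss of mine, not the author’s or Pascal’s notation):

```latex
% Let p > 0 be the (possibly tiny) probability of the catastrophe and
% treat the loss if it occurs as unbounded; acting preventively costs
% a finite amount c. Then, comparing expected values:
\[
\underbrace{p \cdot (-\infty) + (1-p)\cdot 0}_{\text{do nothing}}
\;<\;
\underbrace{-c}_{\text{act preventively}}
\qquad \text{for any } p > 0 .
\]
% Because the unbounded term dominates for every nonzero p, the
% comparison is insensitive to how small p is: this is the sense in
% which probabilities are "crowded out" by the magnitude of the stake.
```

On this schema, epistemic questions about how probable the threat actually is become irrelevant to the practical conclusion, which is precisely the probability neglect the text goes on to describe.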
The worst-case scenario by itself is not irrational, despite lacking epistemic warrant, so long as it remains a mere hypothesis; yet it is easily transformed into a real possibility by a twisted SD process. If the worst-case scenario is supplemented by probability neglect, that is, by the discounting of the very slim probability of its actually taking place, then one comes falsely to believe that it is also a likely scenario. And often, under emotional pressure, if a threat is vividly visualized, people tend to

 



attacks, views, and modus operandi essentially escaped the understanding of Rumsfeld, Cheney, and Wolfowitz, who remained biasedly attached to the Cold War idea of enemy states. Reported, among others, in M. Wheeler, Anatomy of Deceit, Vaster Books, Berkeley, 2007, p. . This is precisely the logic of Pascal’s wager: no matter how small the probabilities, as long as they are above zero, the gains of afterlife salvation are infinite, and that makes the value (the “value of the hope”) of this chance so superior to any other preference as to make the will to believe in God’s existence rational. For a discussion of twisted cases of SD, see infra, Chapter .

Bush and the Weapons of Mass Destruction



“crowd out” the awareness that the probability of the disaster is actually quite small. This is a typical case of twisted SD, one that runs contrary to the wish of the agent, given that no one wants the disaster to materialize. Yet the image of a mushroom cloud is so vividly salient as to brush away the probability consideration and to induce the false belief that the threat of a nuclear attack is real indeed and that precautionary measures are definitely necessary.

The Iraq invasion of 2003 is a matter of ongoing controversy, exacerbated by strong feelings and polarized positions, maybe because it is still quite recent and Iraq’s political situation still very messy and dangerous. To be sure, there are a few people who still maintain that the removal of Saddam by military intervention was a good idea, only very badly executed, with lots of misgivings and bad planning. Most commentators, though, hold that Bush and his government were at fault in conceiving this folly in the first place and that, once they had made up their minds, they misled the country into a costly and useless war of choice by an intense marketing campaign. The threat posed by Saddam’s regime was artfully inflated, while the complications and difficulties of regime change and democratization were symmetrically underplayed.
There seems to be little doubt that there was a gap between the background motivations for invading Iraq and the public selling of it. There is wide consensus that the crucial rationale for waging the war, the threatening possession of WMD by a mad tyrant like Saddam, was actually picked as the reason most likely to win the approval of the American people, and possibly also of the world audience, while the fixation on Iraq and the thesis of regime change predated the issue of WMD. This duplicitous behavior of the administration, widely reported in many documents and studies, by itself constitutes a sufficient charge against a democratic government, which should therefore be held liable in front of







For an analysis of how probability neglect works to transform low-probability dangers into real possibilities, see C. Sunstein, “Terrorism and Probability Neglect,” Journal of Risk and Uncertainty, , , pp. –.
For a partisan defense of the war, see Lyle J. Goldstein, Preventive Attack and WMD: A Comparative Historical Analysis, Stanford University Press, Stanford, CA, .
For a more critical assessment of war planning, while granting the goodness of the goal, see David Owen, The Hubris Syndrome: Bush, Blair and the Intoxication of Power, Politico’s Methuen, London, .
There is a vast literature on the selling of the Iraq War to the American people. Among others, see E. Segunda and T. Moran, Selling War to America, Praeger Security International, Westport, CT, ; J. Record, Wanting War, Potomac Books, Washington, DC, ; Woodward, State of Denial.
This thesis is supported, for example, by Paul Pillar, from the intelligence community, in Intelligence and US Foreign Policy: Iraq, 9/11, and Misguided Reform, Columbia University Press, New York, .



Political Self-Deception

its citizens. Such a momentous decision as waging a war needs to be scrutinized in depth by the citizens and their representatives, and not left to the supposedly superior wisdom of governments to be made in isolation. In a democracy, the medieval theory of double truths – one for the rulers and leaders, and one for the people – does not hold, given that the rulers are only the representatives of the people and politically responsible in front of them. That the public rationale diverged from the background reasons is well documented; that no WMD were found in Iraq after the invasion has likewise been acknowledged by the  CIA report in a final way. But even if under the circumstances it is all too easy to think that WMD were a lie of the government, the lie does not follow from the fact that WMD were the selling reason and that none were in Iraq at the time. In order to become the crucial selling reasons for the war, WMD did not need to be an invention of the administration, and the later discovery that there were none does not imply that the belief in their existence was not genuine. In this chapter I shall argue that the WMD belief was very likely produced by a self-deceptive process. My argument will be based on indirect evidence, secondary sources and speculative reasoning. I do not claim to prove SD in a final way, but only to provide a plausible reconstruction that I think may suggest a richer understanding of the administration’s decision than either the straight-lie view or the honest mistake view, and possibly provide guidelines to prevent future SD episodes. 
Interpretations concerning WMD are basically split between (a) positions holding, with different nuances and emphases, that the WMD story was the crucial lie of the Bush administration, supporting a preexistent and independent decision to go to war and backing a wrong-headed policy; and (b) positions tending to regard the belief in the existence of WMD at the time as plausible and, though mistaken, nevertheless honestly held. Honest mistakes are in turn accounted for in terms of cold biases, ignorance, ideology, personality traits, and the historical pressure after 9/11. Ironically, both positions make occasional reference to the possibility of SD and wishful thinking at work, but this unexplored hint is taken by each position to point in opposite directions. The lie supporters seem to think that being taken in by one’s own lying is a sort of epiphenomenon of lying itself, which has no explanatory relevance for any accurate account of what

See, for example, the minority reports on Bush’s crimes concerning Iraq published in J. C. Conyers and staff, The Constitution in Crisis: The High Crimes of the Bush Administration and A Blueprint for Impeachment, Skyhorse Publishing, New York, .




took place and no effect on the administration’s responsibility. The honest-mistake supporters seem to see SD as just one of the possible honest mistakes in the range of those created by purely cognitive biases and twisted responses to evidence; according to this view, the administration is to be held politically responsible for the bad planning, the inaccurate preattack knowledge, and so on, but its moral responsibility is basically discounted. In either case, the explanatory and normative significance of SD is downplayed, being assimilated either to lying or to cold mistakes.

I shall try to suggest otherwise. I would like to provide an account of the WMD story that does away with the conspiracy tone of the lying position, without lessening the responsibility of the president and his team, as often implied by the honest-mistake view. I will argue that the conspiracy approach exhibits shortcomings from both an explanatory and a normative viewpoint. My anticonspiracy approach is in no way meant to excuse an administration that led the country into a disastrous invasion and that should hence be held morally accountable. Its responsibility is in any case beyond dispute, given that there was intentional misleading anyway concerning the real reasons for the invasion; as in the Gulf of Tonkin incident, SD about WMD would be an instance of the second type of SD, the one where SD is precisely instrumental, ancillary to other deception. Generally speaking, political SD is always intertwined with lying, and this type of SD is moreover specifically functional to the make-believe effect; hence it cannot represent an easy way out of political and moral responsibility.
On the other side, the honest-mistake approach is usually partisan and exculpatory of the administration’s deeds, and, moreover, often theoretically muddy concerning the post hoc interpretation of choices and actions as mistakes, as well as the lack of distinction between cognitive and motivated mistakes, environmental pressures, unintended consequences, and so on. I am not trying to propose SD as a general model accounting for all wrong-headed decisions in foreign policy, but simply to single out episodes of SD in the general fog of political deception. In this respect, I think that the WMD case, and the Congress Resolution it elicited in October 2002, is dramatically similar to the Gulf of Tonkin incident and the subsequent Congress





There are actually interpretations that take SD more seriously, as in Jervis, “Understanding Beliefs and Threat Inflation,” and Holmes, The Matador’s Cape, although they are not especially focused on WMD but on threat inflation in general. See infra, Chapter .




Resolution of August , discussed in the previous chapter, in so far as both cases represent SD instances of the second type. The issue is not that the lessons from Vietnam had not been sufficiently learned, but rather that the problem had not been acknowledged as a case of SD, hence no measure for contrasting other similar cases have been considered, let alone taken. If the Vietnam case had been provoked by McNamara’s dishonesty and deception, by his intentionally misleading of the Congress into an in-allout war though false information, there would not be much to learn and do to avoid similar misleading in the future but hoping in the better judgment of the electorate in choosing the president. If SD is instead detected and acknowledged, the opportunity to create institutional checks is open so as to sort out beliefs supported by good enough evidence from those backed only by scanty data. I shall thus try to reconstruct the WMD story, as the crucial rationale leading to the Iraq invasion, as a SD episode and compare it with the straight lie interpretation and with the cold mistake one.

The Gap between Selling and Background Reasons

While the mushroom cloud remained the main rationale for the war in the public imagination, many works have pointed out, from a variety of evidence, that the decision to invade Iraq was independent of WMD and was settled well in advance of the public campaign for selling the war. The gap between the true reasons and the marketing of the war is actually a recurrent theme in the memoirs, chronicles, and works on the Iraq War. I shall here mainly refer to the work by Paul Pillar, which, being recent and well documented, provides a good summary of the many pieces of evidence for this gap. A provisional list is as follows:

  

• From the early days of his presidency, Bush and his Cabinet showed a fixation on Iraq, as reported by the Secretary of the Treasury, Paul O’Neill, who was present at a meeting in early February 2001. O’Neill affirmed that the case against Saddam Hussein was being “built,” and the president wanted “to find a way to do it.”

It is worthwhile to point out a study of the “selling” of the war that analyzes the process in terms of marketing and advertising; see Segunda and Moran, Selling War to America.
P. Pillar, Intelligence and US Foreign Policy: Iraq, 9/11, and Misguided Reform, pp. –.
Quoted in R. Suskind, The Price of Loyalty: George W. Bush, the White House, and the Education of Paul O’Neill, Simon and Schuster, New York, , pp. –.




• It is, however, after / that plan concerning Iraq started taking form. Apparently, (a) the very same day, aides to Secretary of Defense Rumsfeld wrote a note on Saddam, in the search of a link between the attack to the Twin Towers and Bagdad’s regime; (b) A few days later, Undersecretary Feith, in an exchange with the director of operations for the Joint Chiefs of Staff, Lieutenant General Newbold, pointed out that Iraq had to be part of the reprisal; and (c) The day after the attack, President Bush insistently asked Richard Clarke, White House counterterrorist chief to see if Saddam was involved in the attack. , , Bush entrusted • After the Afghan invasion, on November Rumsfeld to build a fresh plan for Iraq. • Then, on January , , Bush delivered his State of the Union Address and, while stating the American resolution to an all-war against terror, denounced rogue states as the main suspects to support terrorists and provide them with WMD, the famously “axis of evil” grouping of Iraq, Iran, and North Korea. • On June , the president delivered a major policy’s speech at the US Military Academy at West Point, which is known as the proclamation of the doctrine of preemptive war, allegedly required if the threat concerns the risk of the possession of WMD by hostile, unreliable, and undeterrable regime. • In June , the State Department’s policy planning chief, Richard Haas, expressing his reservation about Iraq, received the following answer by Condoleezza Rice: “Save your breath Richard. The President has already made up his mind on Iraq.” So far, all these episodes are evidence of the fact that the administration started planning Iraq much earlier than was publicly admitted, and also that Iraq was on the administration’s mind even before the / attack, so as to suggest that the latter only created the opportunity to bring the Iraq fixation to the front of actual policy. But for all that, we have not yet seen     

Pillar, Intelligence and US Foreign Policy, p. .
R. Clarke, Against All Enemies: Inside America’s War on Terror, The Free Press, New York, , p. .
B. Woodward, Plan of Attack: The Definitive Account of the Decision to Invade Iraq, Simon & Schuster, New York, , pp. –.
N. Ritchie and P. Rogers, The Political Road to War with Iraq, Contemporary Security Studies, Routledge, London, , pp. –.
R. Haass, War of Necessity, War of Choice: A Memoir of Two Iraq Wars, Simon and Schuster, New York, , p. , reported by Pillar on p. .




any apparent evidence of the gap between the real motives and the selling of the war. It was a British intelligence document of July 2002, known as the Downing Street Memo and disclosed in 2005, first in England and then in the United States, that seemed to fit the bill. The well-known memo is usually cited as the critical evidence that the selling reasons were not the true reasons, though some have disputed this view. It is a report of a meeting held in Downing Street with Blair, his security aides, and the head of the British Intelligence Service, Sir Richard Dearlove, who was informing the prime minister and his aides about his recent visit to Washington. The incriminating sentence says:

There was a perceptible shift in attitude. Military action was now seen as inevitable. Bush wanted to remove Saddam, through military action, justified by the conjunction of terrorism and WMD. But the intelligence and facts were being fixed around the policy. The NSC had no patience with the UN route, and no enthusiasm for publishing material on the Iraqi regime’s record.

Here we have documented not only a time discrepancy between the decision made by the administration and its public presentation, which by itself does not tell much in terms of intentional public manipulation, but also a record of the need “to fix” the justification around the policy. If the justification needed to be fixed, it could hardly have been the ex ante reason backing the policy, but more likely an ad hoc rationalization of an independent decision. And if the decision had been taken independently of the justification, as is also supported by the evidence about timing, then there must have been some other reason or reasons, possibly unfit for public disclosure, backing the decision. Finally, the

 



See the “Downing Street Memo” in Mark Danner, The Secret Way to War: The Downing Street Memo and the Iraq War’s Buried History, New York Review of Books, New York, . The story of the disclosure of the memo in 2005 and its reception by the American media is interestingly recounted by Danner himself in this booklet, and is telling about the media’s ambivalence and ‘state of denial’ concerning the Iraq invasion, which they had originally largely supported. The meeting is now amply and neutrally reported in the Chilcot Inquiry on Iraq, made public on July 6, 2016; see www.iraqinquiry.org.uk/media//the-report-of-the-iraq-inquiry_section-.pdf.
Danner, The Secret Way to War, p. .
The thesis that a gap in time between planning and public statement is a necessary feature of any military intervention being prepared is stated in the rather partisan work by Owen, The Hubris Syndrome: Bush, Blair and the Intoxication of Power, p. . It must be added that some commentators, notably Michael Kinsley, responding to Danner in the pages of the New York Review of Books, stressed that the document only shows Dearlove’s opinion formed from Washington gossip; hence it is useless as proof. Yet, given the role of Great Britain in the attack, it is highly unlikely that the head of British Intelligence was simply reporting gossip rather than solid information obtained from the administration, the Pentagon, or the CIA.




conclusion that WMD were just the selling point of the war becomes more than conjectural by piecing this information together. Indeed, it was candidly admitted by one of the most outspoken war-makers, Deputy Secretary of Defense Paul Wolfowitz, who in a long interview with Vanity Fair in May 2003, after the invasion had already started, very openly said that the rationale for the war, WMD, was picked as the one reason upon which everyone could agree.

What can we make of this gap? Three issues are interwoven here, which need to be analytically set apart. The first issue concerns what the true motives for waging the war in Iraq were; and, funnily enough for a narrative where the gap between “truth” and mere marketing is so crucial, the true reasons are hard to know. There is obviously a long list, a “plethora of reasons,” as Robert Jervis has it, and most commentators have their favorite ones, but none is really compelling, as many have observed, starting with Richard Haass, director of Policy Planning at the State Department, who said that he would go to his grave not knowing why America went into Iraq. The difficulty of establishing the real motives for sure has relevant effects on the second issue, namely whether the WMD story was a pure pretense, an ad hoc fabrication and invention to screen the real motives from public attention, or whether it was in any case believed bona fide, whether produced by cold biases or by SD, a functional although deceptive belief sustaining the administration in its conviction about the war. That WMD were picked as the best-selling argument is not sufficient for drawing the inference that they were a pure fabrication. Although the conspiracy approach takes that inference for granted, I shall argue that there are many points disconfirming the straight-lie view, among which is the difficulty of singling out a compelling hidden agenda.
There is, however, a third issue involved in the selling of the war, which needs to be addressed beforehand: namely, what the point of the marketing campaign was and whether it can find a justification. In this respect, interpretations are yet again split. Obviously, those who believe that the very existence of a selling reason shows the administration’s straightforward deception of Americans and of the world public alike have no further

 

Paul Wolfowitz, interview, Vanity Fair, May 2003, transcript available at www.defenselink.mil/transcripts/transcript.aspx?transcriptid=. See also the comment of an intelligence official reported by C. Kaufmann, “Threat Inflation and the Failure of the Marketplace of Ideas: The Selling of the Iraq War,” in American Foreign Policy and the Politics of Fear, pp. –, at p. .
Jervis, “Understanding Beliefs and Threat Inflation,” pp. –.
Reported in G. Packer, The Assassin’s Gate: America in Iraq, Faber & Faber, London, , p. .




problem of interpreting the gap, which is the natural consequence of the duplicitous and evil intentions of the administration. There might be various accounts of how such deception was brought about, and of the role of the spin doctors who, along with the media, presented the lie to the public. But the reason why, and the consequent judgment in moral and political terms, is straightforward and consistent with the big-lie denunciation. If, by contrast, the selling reason of the war is viewed as picked for its marketing potential, but nonetheless conceded, at least ex hypothesi, also to be believed true, whether through a SD process or through mere mistakes or cold biases, then the gap – its explanation first, and then its justifiability or, conversely, its condemnation – constitutes a further problem. If the public rationale was not a lie, why then the gap between the background reasons and the selling? What is the point of this duplicity? And is not the gap a deception in itself, to be condemned in any case? Here again, two questions are entangled: how to explain the duplicity and how to judge it in terms of the public ethics of democracy.

On this point, there are different perspectives providing different explanations, some of which also imply normative conclusions, of why the war needed to be sold in terms different from the actual plan. First of all, there is the realist view. The realist maintains that politics in general, and foreign policy specifically, concerns the pursuit of the national interest and is to be judged on the basis of its effectiveness in achieving its goals; and, in so doing, it is inevitable that some dirty-hands problems arise and are met so as to secure the political good at the expense of moral qualms.
If the purported goal is a war, then the task to be carried out requires (a) that as little as possible of the government’s knowledge and true intentions be publicly disclosed, in order to surprise the enemy; and (b) that, moreover, the necessary national consensus be elicited with the most efficient marketing campaign consistent with the requirement of secrecy. Hence it is intrinsic to the planning of a war that the selling be different from the government’s background reasons, and thinking otherwise, and being outraged at the discovery of the marketing campaign, is simply an instance of naive ignorance of how military operations are necessarily planned. In other words, according to



Some of these perspectives have been suggested to me by a study on threat inflation: A. Trevor Thrall and Jane K. Cramer, eds., American Foreign Policy and the Politics of Fear: Threat Inflation since 9/11.
This view is well represented in the pamphlet by D. Owen, The Hubris Syndrome: Bush, Blair and the Intoxication of Power, Politico’s Methuen, London, , p. .




the realist, there is no political, let alone ethical, problem in making use of spin, misinformation, and secrecy, because this is precisely the stuff of which politics is made. However, even granting the dirty-hands thesis, I would note that realism does not prevent its adepts from falling prey to mistakes and illusions, from taking unexamined assumptions for granted, and from lacking sufficient reality checks. As Stephen Holmes has convincingly shown, the misguided disciples of Machiavelli forget that circumventing democratic procedures may also deprive cabinets and governments of the crucial reality test implied by checks and balances, adversarial democracy, devil’s advocates, and so on, which also have the crucial function of hypothesis testing.

Another perspective on the marketing of the war is social constructivism, according to which there are no objectively true reasons providing a reference point from which to evaluate the deceptive pretext; all reasons portray as objective facts what are indeed social constructions arising from the web of social meanings, norms and standards, values and arguments dominant in a society and embodying its power structure, which is maintained over time because the people are complicit in the final outcome and in the persistence of social constructions.
According to this perspective, power mechanisms are inevitably at work in social constructions, and the only escape is learning to deconstruct their narratives critically and to expose the subtext of manufactured information and knowledge, though no independent standpoint is ever granted, and no final reality check is ever offered. In this perspective, we cannot consider the selling reasons as a smokescreen hiding the true reasons, for this interpretation tends to blur the distinctiveness of the various beliefs, convictions, reasonings, and arguments under the general heading of social constructions, which does not help in understanding who did what and in imputing precise responsibilities to the administration.

Finally, we have the psychological explanation. What psychological models explain, however, is not the distance between the real motives and the pretext for war, but rather why the public rationale, which clearly looked insufficient and flawed from the viewpoint of an impartial rational



Holmes, The Matador’s Cape, pp. –.
For the social constructivist perspective, see A. Trevor Thrall, “Framing Iraq: Threat Inflation in the Marketplace of Values,” in A. Trevor Thrall and J. K. Cramer, eds., American Foreign Policy and the Politics of Fear, pp. –.
For a cold-bias view of the Iraq marketing campaign, see D. Kahneman and J. Renshon, “Hawkish Biases,” in A. Trevor Thrall and J. K. Cramer, eds., American Foreign Policy and the Politics of Fear, pp. –.




observer with the information available at the time, was nevertheless taken up and apparently believed. The psychological approach may motivate different normative evaluations of the selling reason. Definitely exculpating interpretations of the administration’s marketing campaign for the war can find a hook in this explanation, especially if cold biases and honest errors are seen as the trick behind the faulty rationale. If, instead, motivationally biased beliefs are seen at work, responsibility for mistakes is not so easily extenuated, as I have repeatedly stressed.

No matter what the favored explanation of the gap of reasons is, it constitutes a proper concern for normative political theory. Waging war implies the violation of the crucial moral norm proscribing the killing of people; hence it clearly stands in need of a justification, as well documented by the long history of the just-war doctrine. Thus, on the one hand, democratic legitimacy holds that a government’s policies (and foreign policies specifically) should undergo citizens’ authorization through parliamentary votes, that is, the approval of the people’s elected representatives. Circumventing this procedure by means of a marketing campaign, raising a smokescreen that unbalances parliamentary deliberation and citizens’ consent, is not a trivial detour from boring bureaucratic rituals (as the realist would claim), but the illegitimate imposition of coercive measures by governments on citizens, who are thus deprived of their agential capability. Whether this has always been and always will be the case, as maintained by realists (and, in a different fashion, by constructivists too), I am not sure; but it is certainly wrong: it has brought about the most horrendous mistakes, and it certainly has no intrinsic logical or de facto necessity. On the other hand, war is by all means the most momentous decision that a government and a people face, implying deaths, suffering, harm, and destruction.
Such a patent violation of basic morality requires a crystal-clear justification to be made acceptable, and always as the last resort and the lesser evil. For the combatants of a democratic country who will wage the war, and who have to kill and risk their lives, are themselves justified in their conduct only if the war is just. Democratic citizenship implies a sharing in the responsibility for governmental decisions, which is especially significant in the case of war, where the government’s policy is carried out by individual citizens who will be the actual violators of basic morality.

Actually, when a war is just, as one clearly waged in self-defense, as World War II was, there might be dispute over the public presentation, the timing, and the how of the response, but there is usually a large consensus for waging it, and less room for fatal mistakes.




If democratic legitimacy ultimately says that no predicament should be coercively imposed by a lawful government without the informed and free consent of the coerced citizens, then war is the first and foremost locus where democratic legitimacy is to be tested. Combatants have a crucially fundamental right to know whether a war is justified, because they will commit the violence and cannot shove their responsibility off their shoulders onto the administration, not being the subjects of a tyrannical rule, which would attenuate their responsibility. In sum, democratic legitimacy requires that the citizens of a democratic country be clearly and correctly informed of the reasons for waging a war and be convinced that such reasons actually justify the violence and destruction implied. Otherwise, they are treated as subjects, deprived of agential capacity, and deceptively implicated in a destructive and altogether unjust enterprise, while combatants are made the carriers of unjustified killings from which they cannot easily detach their responsibility. In conclusion, the resurgent double-truth argument, whatever its causes or reasons, ought to be rejected, and the Bush administration is to be held accountable for the wrong it did to its citizens and to the Iraqi people, whether or not the WMD threat was genuinely believed.

The Real Motive(s)

Although there is wide consensus that the threat posed by Saddam’s possession of WMD was the rationale for public consumption, so to speak, and that other forces were at work behind the scenes, there is no agreement on what the real motives were. Regime change was definitely the general goal, as pointed out by most commentators. However, regime change itself stands in need of a reason, because by itself a nondemocratic regime has never been considered a sufficient reason for intervention, either in the light of traditional patterns of foreign policy or in the light of international rules. Neither the realist nor the idealist considers the removal of a hideous dictator – independently of an actual or threatened invasion, of a humanitarian emergency, or of some vital national interest – a sufficient cause for intervening militarily in another state. Thus, granted the regime-change goal, why was that goal believed so urgent and crucial for America as to wage a war?



Pillar, Intelligence and US Foreign Policy, pp. –.



Political Self-Deception

In the discussion of the Iraq War, at least four factors are cited as having concurrently contributed to pushing Bush toward the invasion. They are: (1) the neoconservative ideology that influenced his administration, and whose righteous convictions and militant determination succeeded in winning the floor and silencing dissenting voices; (2) the president’s personality, privileging guts over reflection, instinct over rational deliberation, action and boldness over caution and prudence, and also marked by his Evangelical faith and a Manichean vision of the world as sharply divided between good and evil; (3) the shock of 9/11, which created an unprecedented feeling of insecurity in the American public and opened up a favorable disposition toward a military response; and (4) last but not least, economic and group interests in oil and in military spending.

To be sure, conspiracy views usually trace the war back to a single determinant, oil and military spending, and take the rationale about the threat posed by Saddam to be a simple and straightforward lie; but even if conspiratorial views are the easiest way to convey the right outrage and public cry at the administration’s deeds, they do not stand up to a careful analysis of the actual complexities in need of interpretation and of the multilayered evidence on the point. For example, they cannot explain why a group of such skilled liars and cynics, so good at manipulating people and masterminding the attack, failed so miserably in managing the aftermath of the invasion and did not even serve their own vested interests well. Thus, even though these self-interested motives were surely present, the fast and wholesale conspiracy view oversimplifies the story. But even considering all these components as jointly concurring in grounding the common goal of regime change, none of them is really compelling, and all together they are too many.
There is general consensus that the neocon ideology would not have traveled far, and that Bush's belligerent and missionary zeal would not have unfolded, had the 9/11 attack not occurred. The terrorist attack on the Twin Towers in New York and on the Pentagon in Washington, DC, is generally acknowledged as the crucial antecedent of the Iraq invasion. There is actually a variety of nuanced views as to the role of 9/11 in changing attitudes, both within the administration and in the country, toward military intervention against Saddam. For some it was just the trigger of



An explicit nonevaluative list of rationales for the war is found in A. R. Hybel and J. M. Kaufman, The Bush Administration and Saddam Hussein: Deciding on Conflict, Palgrave Macmillan, New York, , pp. –. Against the oil conspiracy view, see, for example, D. Harvey, The New Imperialism, Oxford University Press, Oxford, , pp. –.

Bush and the Weapons of Mass Destruction



the Iraq War, which was already decided upon and was put on hold awaiting the best opportunity to be publicly sold. For others, it was much more than that: 9/11 was a challenge that had never been considered before, the challenge of an enemy bringing destruction into the heart of the United States, making the worst nightmares of Armageddon seem possible, and one impossible to detect and prevent. American citizens had been struck not in some remote place abroad, but right on American soil: something unheard of and difficult to process.

That 9/11 represented a shock for the administration as well as for Americans is clear, but the response was not up to the challenge, as the terrorism expert Louise Richardson has convincingly argued. She says that the first major mistake was the very proclamation of the “war on terror,” for “war” is waged between states, with combatants as part of regular armies, while terrorists are stateless, and one of their primary goals is precisely to be recognized as “combatants” with an actual capability of inflicting serious harm despite the asymmetry of force. Declaring war on terror, a war which by definition cannot be won, implies acknowledging, first, the status of combatants for terrorists, fulfilling a crucial goal of theirs, and, second, their capability of inflicting harm for which the just retribution is war. The war talk might have been just “a rhetorical flourish.” But in addition to expressing horror at the surprise attack, on the one hand it indirectly offered the terrorists a much-sought-after recognition of their ability and great capacity for destruction, and on the other it engendered a constraint on future steps, a promise of merciless retaliation, and one that had to take the form of a war.
If, instead of viewing the administration's run-up to war against Iraq as a shrewd, well-designed plan to fulfill its hidden agenda, we consider the Iraq business, on the one hand, as the overlapping agreement of divergent intents and, on the other, as the cumulative result of many steps constraining future decisions, then the first crucial step was precisely this rhetorical declaration of war on terror. The administration sensed that the terrorist challenge was of a different kind and on a different scale than what they were used to, and this novelty

L. Richardson, What Terrorists Want, John Murray, London, . Holmes, The Matador’s Cape, p. . That this was the first crucial twisted step toward the decision is argued by T. S. Langston “The Decider’s Path to War and the Importance of Personality” in G. C. Edwards and D. S. King, eds., The Polarized Presidency of G.W. Bush, Oxford University Press, Oxford  (pp. –) p. , Richardson, What Terrorists Want, and Holmes, The Matador’s Cape, p. .




was expressed in the declaration of “the end of the containment era.” Nevertheless, their response was actually patterned after the Cold War model, in which the enemy is a state and the response is a military strike on the state responsible for the offensive act. In other words, the misguided response to 9/11 is read either as jumping at the first opportunity to carry out the hidden agenda put on hold, or as the product of a blurred understanding of the nature of the enemy and of the attack. That the response was also heavily influenced by preexisting motivations, such as Cheney's desire to expand executive power, Rumsfeld's desire to test his theory of new technological warfare, and Wolfowitz's theory of democratic dominoes in the Middle East, is actually beyond reasonable dispute. But the very variety of intentions does not suggest an overall hidden plan for toppling Saddam. More than a secret plan, we have a “strategic confusion” about the military and political goal of the mission, which, after tearing down Saddam's statue, ended as a fiasco. And the portrait of cynical strategists does not sit well with the sequence of mistakes, misapprehensions, and wrong decisions, and the inability to define a plan and carry it out, which characterized the shameful invasion of Iraq, bringing death and destruction to a country that had not attacked America.

For this reason, among the more scholarly works on the Iraq War, many have suggested that the Iraq fiasco cannot be made sense of without reference to SD on the part of the administration, coupled with further SD of the public. For the most part, the reference to SD is nothing more than a hint. Robert Jervis, however, does more than that in suggesting that among the reasons for going to war in Iraq, many were just “functional” beliefs, that is, rationalizations for decisions prompted by psychological needs of the administration.
Yet he does not provide a comprehensive interpretation of the decision to go to war in terms of SD and cold biases. More detailed in this respect is the account provided by Stephen Holmes, which he himself emphasizes is just speculative, yet which enables one to attain a deeper understanding of the mixture of dishonesty, stupidity, faithful conviction, and muddled beliefs that animated the whole administration in its different roles. According to Holmes's



Holmes, ibid., p. . Ricks, Fiasco. The American Military Adventure in Iraq, Penguin Books, London . Among them: Ricks, Fiasco, p. , J. P. Pfiffer, “Intelligence and Decision Making before the War with Iraq,” in The Polarized Presidency, pp. –; Jervis, “Understanding Beliefs and Threat Inflation,” pp. –, Holmes, ibid., pp. –. R. W. Merry, in Sands of Empire. Missionary Zeal, American Foreign Policy and the Hazard of Global Ambition, Simon and Schuster, New York , revisiting The Best and the Brightest, with reference




conjectural account of why going into Iraq was the self-deceptive response to 9/11, the emotional pressure that such an unprecedented attack exerted on members of the administration, coupled with the perceived need for a quick and resolute military response, produced the favorable circumstances for SD. In his interpretation, two separate motivated twists can actually be detected in the belief set induced by 9/11 in the administration: one concerning the (self-)deceptive belief that the enemy was statelike, even though the terrorists did not act from or for any state, and the other concerning the conviction that the threat posed by the possession of WMD by the alleged enemy Saddam was terrifyingly real. Let us unravel Holmes's argument to clarify these two episodes of SD. Firstly, the terrorist attack left the administration powerless concerning (a) the protection of the country and (b) a response adequate to the outrage and to the military power of America, while social and political pressure for a response was at its highest. Bush and his aides sensed that this kind of blow was unprecedented, and repeatedly affirmed that 9/11 had changed foreign policy for good. Yet, despite the implications of the novelty of the attack, wishing to strike back without knowing how, the administration took the easiest route, relying on the Cold War mindset; accordingly, it conjured up the image of the enemy in the familiar terms of a state. The equation of terrorists with supporting states, and the declaration that America would not distinguish between them, are telling in this respect. The unknown threat, about which they were at once powerless and anxious, was thus recast as coming from the traditional sort of enemy against which America had been piling up masses of weapons and military equipment, so that the emotionally loaded wish to avenge the outrage was not disrupted by the unpalatable reality of the lack of an enemy state for reprisal.
This is a straight case of SD: (a) wishing for a clear, identifiable enemy, (b) sensing that the implications of the new reality of terrorist attack were contrary to the wish, and (c) moreover, feeling both outraged and powerless, they lowered their accuracy and selectively searched for evidence confirming that terrorists could not have done it without being backed



to Kennedy's group, concluded that Cheney and his pals, for all their cleverness and shrewd attitude, were fools, because they courted disaster without seeing it coming, pp. –. That this was the theory about terrorism is widely shared. A good summary of such a theory is found in Wayne Bert, Military Intervention in Unconventional War, Palgrave Macmillan, New York, , ch. .




by states, and then looked for a preexisting enemy as the more direct target for an avenging reprisal. This self-deceptive belief that Saddam was, directly or indirectly, involved with Islamic terrorists served different purposes: (1) it had the immediate functional role of satisfying the urgent need for a definite and winnable enemy; (2) it subsequently became the operative belief for the run-up to war; and (3) it induced the public deception as a by-product of the Cabinet's SD. It is likely that the public deception was helped by an independent SD of the public itself, for the American public shared with the administration the feeling of insecurity, the sense of a threat coming from an unprecedented challenge and an undefined enemy. The fact that one year after the invasion a significant percentage of the population still pointed to the link between Al Qaeda and Saddam as the reason for the Iraq War, despite all the evidence to the contrary, may well be taken as proof of the system's capacity to spin events, but, to my mind, it more plausibly represents the typical self-deceptive conviction that is defended against any negative evidence. So, following Holmes's suggestion, this would be the first episode of SD of the Bush cabinet: an SD episode of the first type according to my typology (where political deception is a by-product of SD) and a straight case, producing a belief matching a wish of the agents. The second episode of SD, which I have detected in Holmes's suggestive narrative, is still of the first type, yet a twisted case of SD, where the deceptive belief runs contrary not only to the evidence but also to the agents' wish. Here SD led Bush and his officials to believe that the worst-case scenario, a surprise nuclear attack by unknown forces, might be not only possible but also, thanks to the probability-neglect bias, highly probable if preventive measures were not taken.
In the mind of the administration, so Holmes leads us to think, 9/11 conjured up the terrifying image of Washington destroyed by a nuclear sneak attack: a possibility whose very formation in the minds of the administration turned it into a credible threat. The obsessive focus on



The thesis that the American public too was prey to SD is stated in R. Parry, Secrecy and Privilege: The Rise of the Bush Dynasty from Watergate to Iraq, The Media Consortium, Arlington, . He reports a remark by Senator R. C. Byrd commenting on a poll on people's attitudes some time after the invasion took place, namely: “The reality is that sometimes, it is easier to ignore uncomfortable facts and go along with whatever distortion is currently in vogue,” p. . A similar point is made by Pillar, Intelligence and Foreign Policy, p. , and by Danner, The Secret Way to War, p. . For polls, see Kaufmann, “Threat Inflation and the Failure of the Marketplace of Ideas.” A. Lazar describes the process by which fantasies induced by strong emotions, once developed in the mind, take on a life of their own, ending up being believed by the subject. SD unfolding from a worst-case scenario implies probability neglect, so that a highly unlikely hypothesis is taken for a realistic possibility. See Sunstein, “Terrorism and Probability Neglect.”




unverifiable warnings of nuclear, chemical, or biological weapons in the hands of loose, geographically dispersed terrorists, undeterrable because undetectable and, moreover, suicidal, is an indicator of this second episode of SD. The latter represents a twisted case, where the wish that the threat did not exist triggered an obsessive focus on the contrary hypothesis, which took form in the vivid image of the worst-case scenario. In both SD instances, the straight and the twisted, the focal error to be avoided at all costs was falsely believing (a) Saddam innocent and (b) the threat unlikely. Accordingly, the threshold of evidence the agents deemed sufficient for believing both that Saddam was involved with terrorists and that the threat was real was consistently lowered, while, conversely, the threshold for disbelieving either was much higher than what a disinterested cognizer would require. In sum, Holmes's interpretation suggests that the fear and worry aroused by the unprecedented 9/11 attack provoked an “alchemy of the mind” such that the threat of further and deadlier terrorist attacks was not simply hypothesized as a worst-case scenario but also believed to be very likely, thanks to the triggering of probability neglect. The anxious wish to respond to the threat then induced the self-deceptive process by means of which the unknown next-door terrorist was given the face, hateful yet familiar and ultimately beatable, of Saddam. This account is admittedly speculative, but it has the advantage of doing away with the need for spin and propaganda to inflate the threat and carry the American public along, given that the overestimation of the terrorist risk was shared across the board, by the media in the first place and by the public in general. Indeed, Holmes derives plausibility for his reconstruction precisely from the argument that emotionally charged risks, especially if vivid in visual terms, tend to engender probability neglect.
Once a worst-case scenario is projected in order to take measures against the most adverse possibility, the vividness of the projection easily induces a discounting of the effectively low probability of its taking place, so as to produce the self-deceptive belief in its likelihood; probability neglect is thus precisely the process by means of which twisted SD is brought about. Now, Holmes says, if this can be projected onto the larger public, there is no reason to believe that a group of officials “closeted inside a partisan echo chamber and isolated from external sanity checks would not exhibit” the same biased conviction as the

With this expression, Holmes revisits a study by Jon Elster on the effect of emotions on rationality, entitled Alchemies of the Mind (Cambridge University Press, Cambridge, ). See C. Sunstein, ibid.




people outside. 9/11 thus brought about favorable circumstances such that preexisting assumptions and ideological convictions, under the emotional pressure of the attack and the wish for a fierce and effective response, took the leap into SD that provided the administration with the belief that Saddam was a threatening, dangerous enemy who had to go.

In sum, given the plethora of reasons as well as the multiplicity of agendas, the conspiracy view does not fare well in explanatory terms. Alternatively, there are two interpretations of this mess. The first emphasizes the complexity of the reality appraisal, which engendered typical cognitive mistakes, such as theory-driven beliefs, the fundamental attribution error, and self-perception theory. According to this view, the mess was produced by successive cognitive mistakes in the appraisal and processing of reality. The second interpretation does not dismiss the possibility of cognitive mistakes, but it also stresses the role of emotions and motivations in distorting reality and producing self-serving beliefs. The first interpretation has the advantage of parsimony in its explanatory model, referring exclusively to cognitive factors, as its adherents remark. Yet in this case parsimony backfires, since in a purely cognitive framework it is difficult to see any causal role of 9/11 in distorting beliefs and impairing judgment if no explanatory role in blurring cognition is granted to emotions and motivations. It was under this emotional wave that the fear of further terrorist attacks appeared plausible, that the worrisome uneasiness about a stateless enemy pushed people to look anxiously for one, and that the pressure for a response reaffirming the unparalleled military might of America, reestablishing domestic security and the administration's control of the situation, pushed for illusory bold actions.
Both interpretations help to make sense of the confusion and contradiction in the script of the hidden agenda better than any conspiratorial view. But the SD interpretation need not be seen as an alternative to the first one: 9/11 created the right circumstances for SD; SD conjured up the fear of a next attack as impending and Saddam as the enemy who could be victoriously



Holmes, ibid., p. . For a discussion of such psychological “cold” models in Iraq decision making, see Jervis, “Understanding Beliefs and Threat Inflation,” pp. –, and Kahneman and Renshon, “Hawkish Biases,” pp. –. The purely cognitive explanation of biased decision making is strongly defended by D. W. Larson in Origin of Containment, Princeton University Press, Princeton, NJ, , where she applies selfperception theory to the shift in American Foreign Policy after World War II of the perception of Soviet Union from ally to enemy.




fought through a muddled link with Al Qaeda terrorism; self-perception theory can then explain how the script came to be written, by rationalizing conduct and making the rationalizations drive future steps.

4 The Public Rationale for War

Let us grant that the background reasons were a bunch of different and often contradictory motivations, grounded in a mix of ideological assumptions, absolute convictions, and self-deceptive appraisals of reality, rather than a plot deceptively imposed on an innocent America. It still remains to clarify the problem of the selling reason for the war, namely, Saddam's possession of WMD. What to make of it? Was it the crucial lie that duped the country into Iraq, or did the administration dupe itself about the existence of WMD? Let us try to reconstruct the terms of the problem step by step:
• It is generally acknowledged that WMD represented the rationale for the public selling of the war, rather than the actual motive guiding the administration's decision making.
• This fact is confirmed by participants, most conspicuously by Bush himself in a widely reported meeting the president had with George Tenet in December . Bush made it apparent that he was not after information for decision making, but for selling an already-made decision. Another participant's testimony is the already mentioned interview that Wolfowitz gave to Vanity Fair in May ; he candidly admitted that WMD were only the casus belli for public consumption. In a similar vein, Douglas Feith declared in , after the contrary evidence had been disclosed and the nonexistence of WMD admitted, that the rationale for the war was independent of “intelligence details even though the details of intelligence at times became elements in selling the war.” In sum, the WMD issue was acknowledged as the selling campaign by the very same war makers.
• The need for a public rationale is to be imputed to a confused and uncompelling set of reasons, sustained by a mixture of ideology, assumptions, SD, and miscalculations, hence unfit for presenting the



Bush asked for the available information on WMD, to which Tenet famously responded with his “slam dunk” comment. Bush clarified that he needed data that could be specifically convincing for “Joe Public” (reported by Bob Woodward in State of Denial, p. , and amply commented on by Pillar, Intelligence and US Foreign Policy, pp. –). Quoted in M. Isikoff and D. Corn, Hubris, Crown, New York, , p. .








case for war in public, a case, it should be stressed, of which administration members, for different reasons and through different processes, were instead firmly convinced.
• The possession of WMD represented a clear and vivid reason sustaining the claim that Saddam was dangerous to the point of requiring preventive action.
• The ex-ante evidence that Saddam possessed WMD was scanty, ambiguous, and basically old, while the more recent data were dubious. In particular, the three tidbits of intelligence on WMD, namely (i) the testimony of the Iraqi defector known as “Curveball” concerning mobile biological labs in Iraq, (ii) the story of the alleged Niger–Iraq uranium deal, and (iii) the discovery of Iraq's attempt to buy aluminum tubes possibly to be used for enriching uranium, were regarded by intelligence experts and officials as either faked, inconclusive, or unreliable. Therefore, the balance of the available evidence was clearly tipped in favor of disconfirming the presence of WMD in Iraq before the invasion, and the administration had access to this evaluation.
• The administration's conduct with reference to the evidence supporting the possession of WMD, instead of being informed by a cautious and diagnostic attitude, moved by contrast: (i) to push the intelligence in search of confirmation of WMD existence; (ii) to discard all warnings about the ambiguity and unreliability of the data; and (iii) to publicly affirm certain and clear knowledge of WMD existence.

Under these circumstances, it is only too easy to think that the WMD story was cleverly fabricated in order to provide a publicly acceptable rationale, fit to gain the people's consent. As acknowledged by Bush and Wolfowitz, the public could understand the threat represented by WMD after the 9/11 shock, especially if the prospect of nuclear weapons soon becoming available for Saddam and his terrorist acolytes to use against America was presented as an impending possibility. Much less could the public be convinced by the larger aims of Wolfowitz, by Cheney's hegemonic intent, or by any other reason from the plethora exhibited by various of Bush's aides. If we add to this openly instrumental use of WMD the dubious evidence on which it rested, the pushing of the intelligence for confirmation, and the repeated public statements as to the certain existence of WMD, it is easy to conclude that the WMD were a pure

This very same attitude is found in the British government, as amply documented by the Chilcot report of the Iraq Inquiry.




fabrication, necessary to dupe the country into the war. And the ex-post finding that no WMD were present in Iraq appears simply to corroborate this linear account. Yet, for all its simplicity, a few aspects of the public selling do not fit this linear explanation.

Firstly, behind the selling rationale there was no clear hidden agenda that one can point to as the real, undisclosed motive for waging the Iraq war, as already remarked. The variety of motives held by different officials, the lack of a definite political goal for the war and of any adequate planning for managing the aftermath of the invasion, and, finally, the resulting fiasco cannot easily be reconciled with a secret design cleverly masterminded and deceptively imposed on the country for self-serving goals and interests. If one wants to stick to the conspiracy view, then stupidity and foolishness must be added to cynicism and dishonesty. Secondly, many actions of Bush and his aides seem to contradict the fabrication view. Think again of the Bush–Tenet exchange of December , just three months before the invasion. Bush told Tenet that he was not satisfied with the evidence provided by the CIA, which he thought “not good enough (. . .) for Joe Public,” and pressed him to provide stronger information. This exchange is usually taken as proof that Bush cared about the selling rather than the strength of the warrant, and that his decision for war had been taken earlier and independently of proof of WMD existence. Yet this same exchange points to something else too: namely, that Bush was keen to find confirming evidence of WMD. Granted, he did not want evidence to serve as a sound warrant for a thoughtful decision, but instead searched for confirmatory evidence in support of his frame of mind and attitude. Now, if WMD were a pure fabrication, why should he be bothered about evidence at all?
If he knew that WMD were a convenient lie to win people's support for the war, he could more usefully have asked the intelligence about something else instead of wasting time on investigating WMD that he knew did not exist. The same applies to Dick Cheney: when in , for the second time, a report from Italian intelligence reached American officials claiming that Iraq had struck a deal with Niger for a large quantity of uranium, Cheney asked for a check and more information. His request was then at the origin of the CIA decision to send ex-ambassador Wilson to Niger for a proper investigation of the uranium allegations. As is widely known, from that trip Wilson reported back that the allegations were groundless, though his negative conclusions were distorted into inaccurate notes at a CIA meeting and never circulated, until, on July , that is, after the




invasion, he wrote his op-ed in the New York Times, publicly disclaiming the allegation. I shall come back to the “yellowcake” and to the Wilson–Plame story. For now, it is sufficient to note that, had Cheney known that the yellowcake was a fake and a fraud, he might have kept quiet and avoided the request for more information, which could not but come to negative conclusions, thus sparing himself and his aides all the embarrassment and trouble produced by that very investigation. In sum, the administration's pressure on intelligence to find confirmation of the presence of WMD either was a mere pretense to keep up the public lie – but in that case, it was a pretense likely to backfire, as it did – or it was inconsistent with the fabrication thesis. If it was just pretense, why was such a clever design so badly orchestrated as to prepare its own subsequent exposure? If they were straight-faced liars, they were very clumsy liars indeed. Thirdly, a final consideration concerns the search for WMD carried out in Iraq after the invasion by three successive search units. To my mind, this is the aspect of the WMD story least compatible with the lie view. If the WMD story was a pure fabrication, why bother to look for what they knew was not to be found? Why expose themselves to the final evidence that Saddam did not possess WMD? Why not fabricate the proof of WMD too? After all, it should not have been difficult to hide and “find” some unconventional weapons on purpose, just to support the lie in front of the world. Instead, three different units searched the whole country from March  to , to no avail, until it was publicly acknowledged in the  CIA internal report that no WMD had been found because there were none from the start. Why should liars be so incompetent as to risk the public exposure of their lie?
Why could they not have ordered an inconsequential hunt coming up with ambiguous results instead of clear-cut findings contrary to their rationale for war? The administration was then compelled to reframe the ex-ante rationale in order to avoid admitting that the whole war had been unjustified. The search story does not look plausible unless one assumes that the liars did not reason well.

In fact, some commentators have stressed that the search was not well organized and was not given the necessary resources. In particular, the invasion was not designed for disarming Saddam of unconventional weapons, as publicly stated. Had that been the case, the first thing to do

Wheeler, Anatomy of Deceit, pp. ff.; B. Drogin, Curveball, Ebury Press, Reading, MA, ; Holmes, The Matador's Cape, p. .




would have been to seal the borders, in order to prevent the illicit transfer of the alleged weapons to neighboring countries such as Syria, where they could have been handed over to Hezbollah terrorists. Instead, the military strategy was designed precisely to occupy the crucial areas of the country rather than to aim at disarmament. That was why the bunkers where (conventional) weapons had been massively piled up were left untouched by American troops, so that they could later be swiftly taken by the insurgents and used against American soldiers as well as civilians. As a matter of fact, the campaign was not actually planned around finding and disarming WMD; the primary goal was toppling Saddam, who was considered the one who could give the order to deploy WMD against Americans. The presence of Saddam as Iraq's dictator and the potential deployment of WMD were, rightly or wrongly, considered strictly linked. Thus, the focus on toppling Saddam cannot be taken as proof that the problem of WMD was dismissed once the invasion took off. As Bob Woodward recounts, the fear of WMD was widespread not only in the American Army but also among the Iraqi military, who were convinced that Saddam had WMD and would have used them in self-defense. Thus, there is little doubt that the search could have been better organized, especially at the beginning, but, frankly, disorganization and a lack of coordination and definite goals had been the mark of the whole war, not simply of the WMD hunt. Initially the search was entrusted to a military unit led by Major General James “Spider” Marks. Despite the fact that the intelligence on WMD he was given was only technical (satellite images and a few intercepts) or old, and on the whole thin, Marks did not harbor any doubt about their existence. As he said in an interview with Woodward afterward, he took the words of Cheney, Bush, and Powell as sound evidence of their existence.
In discharging his job, which was to check all suspicious sites all over the country, he was apparently worried that he lacked any definite knowledge of specific WMD sites and went around without success, but he was sure that WMD were there in some places. Thus, the first search basically started blindfolded, on the basis of hypothetical sites drawn from satellite photos. Yet this was not a sign of dismissing the issue: no one in the American Army doubted the existence of WMD, and the military was so worried about being targeted with biochemical weapons that every

 Ricks, Fiasco, p. . Woodward, State of Denial, pp. –. The search story is told in detail by Woodward, State of Denial.




morning soldiers wore their chemical suits. In such circumstances, they did not detonate the bunkers stocked with conventional weapons, not because the hunt for WMD was only superficial, but because they feared that the bunkers contained additional stockpiles of chemical weaponry that might be blown into the air and kill them as well as civilians. Given that the first months of the search did not lead to any result, doubts about the existence of WMD started to spread and to be voiced among Congressmen, on the one hand, while, on the other, rationalizations and denial started to circulate among the administration and its friends. On the basis of an indirect source, the embedded journalist Judith Miller published an article in the New York Times explaining why the hunt had not been successful so far. A self-styled Iraqi scientist told the story that the WMD had been destroyed by Saddam a few days before the invasion, and took the unit to a site where apparent traces of unconventional chemical agents lay buried. Such a discovery was announced with great emphasis by the journalist as proof that the ex-ante rationale for war was well grounded. Yet, given the unreliability of the source, who was later revealed to be an Iraqi intelligence officer, and the weakness of the evidence for the labs, the enthusiastic piece was clearly functional to reducing the mounting anxiety about the undiscovered WMD. It was a typical episode of the second subtype of SD, taking care to cover past decisions and actions. A similar attitude of denial, perhaps fed by that very article, was displayed by President Bush.
Firstly, in his "Mission Accomplished" speech on May  aboard the USS Abraham Lincoln, he confidently declared that the search was at full speed and that thousands of sites were known and under investigation; secondly, in an interview with Polish television (on May ), he asserted that proof of WMD and biochemical labs had been found. In other words, for a while, the bleak reality that WMD did not exist in Iraq was simply set aside and ignored, so as to keep up the contrary belief. This is precisely the conclusion of Woodward's State of Denial: reporting his interview with the president on December , and recounting how difficult it had been for him to get Bush to admit that no WMD had yet been found in Iraq, Woodward describes Bush's attitude as one of willful denial.


This fact is reported also in Colonel Rotkoff's diary from Kuwait, as quoted by Woodward, State of Denial, p. . Ricks, Fiasco, p. . This story is well reported in Wheeler, Anatomy of Deceit, pp. –. Woodward, State of Denial, p. . Woodward, ibid., p. .

Bush and the Weapons of Mass Destruction



However, despite the triumphal tone, the discontent with the results of the hunt and the doubts mounting at home soon led the government to replace the search unit on June . Instead of a military unit with no specific competence in unconventional weapons, the task was assigned to an intelligence unit led by David Kay, who had been one of the experts for the UN inspectors and was definitely the right person for the job. Like Marks, Kay did not doubt that Saddam had WMD somewhere, but he thought that the method used by the military search had been wrong and that, for that reason, there had been no findings. He was also shocked when he realized how scanty the prewar documents on WMD were, basically reduced to a single source (the highly controversial defector Curveball). He set himself to work with the help of a large group of weapons experts, scientists, translators, and all the needed supporting personnel. This time the search unit was amply provided with means and staff. Yet, after a painstaking and meticulous effort, he and his team reached the conclusion that there had been no WMD in Iraq at the time of the invasion. His report in October  said that flatly, but it also left open the "capability option," that is, he could not exclude that, had he not been toppled, Saddam would have resumed WMD production later. Bush, displaying his disposition to denial once more, seized on this option, shifting the focus from the existence of WMD to the capability of producing them, and omitting to acknowledge openly that no actual weapon had been found so far. That the administration's attitude, at this point, was one of denial more than anything else seems to be confirmed by the appointment of a third search unit. This unit, led by Charles Duelfer, took over in October  with the task of finding what the previous two teams had not been able to locate, as if the missing stockpiles of WMD were a problem of the search units' incompetence.
Like Kay, Duelfer was certainly qualified, having been a UN inspector himself, but, like his two colleagues, he came up with nothing. His final findings were contained in the CIA internal report of , which restated what Kay had written the previous year, but this time without leaving the administration any room for a charitable reinterpretation. Commenting on the report, Richard Kerr said that rarely in intelligence gathering and processing could one reach a definitely unambiguous conclusion, but that the Duelfer report was an exception to this general rule, namely in its unmistakable conclusion that "we had been all wrong."

See "The Commission on the Intelligence Capabilities of the United States regarding WMD: Overview to the President of the US," March , , published as Appendix B in T. Johnson, The War on Terrorism: A Collision of Values, Strategies and Societies, CRC Press, Boca Raton, FL, , pp. –.

Three considerations undermine the linear explanation of the public rationale for war as a straightforward lie: (a) the lack of a definite, precise, and consistently pursued hidden agenda; (b) the contradictory conduct of Bush and his aides, who definitely ignored all warnings and contrary evidence but nevertheless keenly asked for confirming evidence; and (c) finally, the search for WMD after the invasion, which, despite a disorganized start, was the central focus of American activity in Iraq, absorbed a large quantity of resources and energy, and finally brought home the sad conclusion that the administration had been all wrong about it. Taken together, these three considerations seem to point to the alternative interpretation in terms of SD. For one thing, the public selling of the war seems due more to the lack of a precise, well-designed plan than to the need for a smokescreen over a hidden, secretly pursued agenda, given that the background plethora of reasons was not, after all, secret and hidden. The neocon doctrine was openly discussed in journals and magazines; Cheney's hegemonic intent was hardly hidden from public sight, nor were his repeated attacks on multilateralism and on international institutions and rules. The new warfare envisioned by Rumsfeld was similarly no secret, and the preemptive war doctrine was openly declared by Bush. The problem was rather that none of these motives was publicly compelling and that they were pushed by different people.
Thus, the terrorist threat vividly conjured up by the / attack and the worry about WMD, which might fall into the hands of terrorists by means of a complaisant anti-American dictator, became the overlapping, though muddy and self-deceptive, conviction of the administration backing the regime-change plan in Iraq. The WMD were the surface of those convictions, the surface to be shown and shared, to be polished and buttressed for public consumption, but also strictly dependent on those convictions. For another thing, SD is strongly suggested by the contradictory behavior of the administration concerning the gathering of (confirmatory) evidence on WMD. The contradictory behavior of someone who, on the one hand, holds a belief contrary to the available evidence and, on the other, is trying to prove her belief true by a tendentious and nondiagnostic search is typical of a subject in the grip of SD. The repeated requests for confirming evidence about WMD seem a typical instantiation of this process. Finally, the search does not make much sense in the case of a lie; moreover, from its start to its concluding report, it shows an unmistakable attitude of denial or rationalization of the lack of WMD – which again are typical symptoms of full-blown SD.

Reported in Ricks, Fiasco.

Aluminum Tubes, Yellowcake, and Curveball

.. In order to account for the WMD rationale as a veritable piece of SD, I now have to recall the circumstances and the web of beliefs and motivations setting the SD process in motion. In .., we saw that the emotional response to /, in the ideological context of the Bush Cabinet and officials, can be made sense of through two joint episodes of SD, one concerning the worst-case scenario of a nuclear attack on the United States by unaffiliated and undetectable terrorists, and the other concerning the identification of the new, unknown enemy (Al Qaeda) with the old and too-well-known enemy (Saddam). We have seen that in the first, twisted case of SD, the mere thought of a nuclear attack on American cities came to appear probable and realistic through typical probability neglect. In the second, the wish to beat the enemy and avenge the outrage of / was vicariously fulfilled by equating the image of Bin Laden with that of Saddam Hussein. Coming to conceive the new enemy along the lines of the familiar one may represent the easiest shortcut for dealing with the frustration induced by the new terrorist fear. Both convictions were well suited to the administration's general attitude of privileging the use of military force over diplomacy, boldness over caution, and decisiveness over pondered decision-making. Indeed, these preferences can be counted as the background motivations distorting the two beliefs, the one about the existence of an actual, potentially nuclear, threat and the one identifying the enemy as Saddam. So, in a way, the self-deceptive belief about the impending nuclear threat, though twisted, for it did not correspond to a wish for an imminent catastrophe, was nevertheless self-serving, because it corresponded to the higher-level wish for a timely use of force dispensing with democratic and international procedures.
In sum, under the emotional pressure of the / attack, the administration came to hold that there was a danger to national security that could take the form of an imminent threat by biological or even nuclear weapons. That belief was genuine, and it was actually also confirmed by the mysterious anthrax business. The equation of the enemy with Saddam was instead fueled by unexamined assumptions from the Cold War and by neocon ideology, but it was especially the result of ignorance about the terrorist phenomenon, coupled with the need to respond to the / outrage quickly and fiercely. The ignorance about terrorism prompted Bush officials to be incredulous of a stateless threat, and hence to look for the state(s) supposedly backing the terrorists. The jump from terrorists to backing states was eased by the image of rogue states that had become prevalent at the end of the Cold War era, an image implying an intrinsic connection among (a) a backward and autocratic regime; (b) the development of a WMD program; and (c) the harboring of terrorism. Meanwhile, the pressure for a quick response motivated them to look for a familiar solution, namely a military attack. Yet a military attack needs to be brought to a specific territory. The attack on Afghanistan precisely targeted the territory that provided a safe haven for terrorists, and while it was successful with reference to the Taliban, it was much less so at capturing Al Qaeda leaders. At that point, shifting the focus from Bin Laden to Saddam was the quickest shortcut to getting hold of the enemy, despite the fact that Saddam was not the mastermind of the / attack. Among rogue states, Saddam's Iraq was at the top of the list, since Saddam had been a nuisance to US interests for a decade or so and, at the same time, Iraq was the easiest target compared with other regional autocratic regimes, such as Iran, Egypt, and Saudi Arabia, because it was weakened by sanctions. Therefore, Saddam came out as the obvious enemy and the target for the desired response.
To sum up the beliefs and arguments leading to the Iraqi invasion: (a) Iraq was a dictatorship led by a cruel and dangerous tyrant who was openly an enemy of the United States; (b) Saddam had already given proof of his treacherous conduct and of his willingness to use unconventional weapons; (c) therefore he could plausibly be considered willing to aid American enemies, and to arm them, in order to attack America indirectly and secretly; (d) hence he should be punished for what he had probably secretly and deceptively done, as well as stopped from what he was really capable of doing in the future; (e) besides, his defeat would be a blow to the Muslim world, which would come to see that America cannot be attacked without destruction being brought down on the attackers; (f) finally, transforming Iraq into a democracy would favorably change the balance of power in the Middle East and engender a domino effect on the other autocratic regimes of the region. The second episode of SD makes sense of the concluding belief that Saddam was the enemy whom America was looking for after /, and of the consequent resolution that he had to go.

See K. O. O'Reilly, "Perceiving Rogue States. The Use of the Rogue State Concept by US Foreign Policy Elites," Foreign Policy Analysis, , : –.

.. No matter how self-serving and muddled, the beliefs about the threat of an imminent attack and about Saddam, jointly leading to the resolve that Saddam had to be toppled, were, I think, strongly and genuinely held by the administration. However, as such, they were not sufficient to convince Congress, the American people, and the world at large of the necessity of waging war on Iraq. Different orders of reasons explain why this was so. From the viewpoint of the American people, there was a widespread sense of insecurity after /, which had the consequence of making the public generally ready to accept the use of force for striking back. The intervention in Afghanistan was indeed backed by the whole nation, as well as by the international community, and considered justified without requiring persuasion or spinning. The Taliban regime actually hosted Al Qaeda terrorist leaders and recruits, which was common knowledge. The attack on Afghanistan was seen as a legitimate act of self-defense after an unprovoked aggression. Once the Taliban regime fell, involvement in another war was not perceived as necessary by most Americans, despite the fact that Al Qaeda leaders were still at large. The complex web of assumptions, ideology, and self-deceptive beliefs analyzed earlier was not amenable to public presentation, for these were basically articles of faith, not evidence-based beliefs. In short, they were not sufficient to make a convincing public case for war. From the viewpoint of the international community and its rules, the Iraq invasion was even more outlandish. There are only three grounds justifying military intervention in other states under international law: (a) self-defense; (b) humanitarian intervention; and (c) UN resolutions, which are usually drafted only if there is indisputable evidence of an immediate danger to national or world security.
In this case, the first two justifications were clearly out of the question: Iraq had attacked neither the United States nor any neighboring country, as it had in , nor was it responsible for any humanitarian emergency at the time. The only way to legitimize the intervention was to get a UN resolution, which could be drafted only if Iraq were acknowledged as posing an immediate threat and an impending danger to world security. Clearly, such a danger could not be convincingly argued in front of the world audience on the basis of the assumptions mentioned earlier. To be sure, for the Bush administration, gaining international support was not an issue, given its prevalent attitude of unilateralism and its scorn of international institutions, and of the United Nations in particular. However, as the administration's attitude was in any case partisan, and the support of the country and of Congress had to be won, international legitimacy could not be dismissed beforehand. Moreover, the administration's strongest ally, Great Britain's prime minister Tony Blair, was instead very keen on a UN resolution and pressed Bush to this end, even to the point of accepting France's idea of a double resolution. This is the context in which the issue of the selling reasons must be located. The administration had an independent conviction that there was the risk of an imminent threat to the United States, and that Saddam was a crucial enemy in the anti-American conspiracy. From this conviction it followed that, within the administration's mindset, a military intervention to remove Saddam was required. First, though, the support of the nation had to be secured, and a salient reason, one on which everyone could agree (as Wolfowitz said), had to be put forward. This reason, one that even Joe Public could understand and offer his support for, was WMD, whose possession by Saddam was not only widely supposed to be true but also, given Saddam's previous record, believed to be very dangerous. Here we have the conditions for SD of the second type, the one instrumental to the deception of others, to start. Let us briefly recall the features of this SD type. Political leaders and officials believe that P is the case (that Saddam represents a threat to national security) and that P requires action A to be taken as a response (military intervention in Iraq); the grounds for P cannot be used to convince the public, based as they are on ideological assumptions and partisan views. Thus, the administration wishes for a different piece of evidence for P, one that can be disclosed and be effective in persuading people that P is the case and that A is required.
Hence the administration singles out an independent reason R that conclusively supports P and A, even for those who do not share the government's political orientations and ideological convictions and who ignore the actual, more complex and controversial grounds for P. The wish for R to be true tends to shape the search in favor of confirmatory evidence for R, while blocking or explaining away contrary evidence; hence R comes to be held despite unfavorable data. The government's previous and independent belief that P actually helps to discount the costs of inaccuracy in the search for R, for if P is true, then R must be true a fortiori, engendering an interesting case of circular belief justification. Finally, the leader will be able to make a case for A based on R, which, despite the scanty evidence, will be believed true, and for that reason will be all the more publicly convincing.

On the attempt by Blair to secure a second Security Council resolution that would have provided the legal ground for invasion, see the Chilcot report, "Inquiry on Iraq," ..

This scheme applies perfectly to the Iraq invasion, and WMD precisely provided the reason that everyone could agree upon. As a matter of fact, the belief that Saddam had WMD was widespread, and not only in the Bush administration. In the fall of , after /, it was both a plausible hypothesis and an implication of the rogue-state concept, and as such it played a role in the complex and twisted reasoning leading to the superimposition of Saddam on Bin Laden. The role was precisely that, supposedly having WMD, Saddam had the capacity, and likely the will, to arm potential suicide terrorists for the next deadly attack on America. In order to become the public rationale for war, however, the possession of WMD needed to be more than an inference from past behavior and from Saddam's unreliability, sustained by a blind conviction. If it was to be the knock-down argument, the possession of WMD had to be backed by rock-solid evidence. The search for evidence, and the pressure on the intelligence community in this respect, started at this juncture. Given the relaxation of vigilance induced by the already established belief that Saddam represented a threat to national security, the search was not impartial and truth-oriented only, but was also driven by the desire to find confirmation of the thesis that Saddam had stockpiles of hidden WMD, dangerously waiting to be detonated.

.. There was an objective difficulty in conducting that search, for, after the  war, the United States did not have ground intelligence in Iraq. American agents could rely on what is called technical intelligence, that is, information gathered via satellites, and for the rest they had to depend on foreign intelligence, mainly from NATO allies.
It should also be said that the administration did not completely trust the two established agencies, the CIA and the DIA, to the point that Rumsfeld created a new intelligence branch, called the Office of Special Plans (OSP), directly dependent on his office. This fact is often cited as proof of the duplicity and manipulativeness of Bush's top officials toward structures partially independent of their control. This separate intelligence branch ultimately did not produce any more data than the CIA, while it was actually responsible for leaks of as-yet-unconfirmed intelligence to the public.



See, among others, Kaufmann, “Threat Inflation and the Failure of the Marketplace of Ideas,” pp. –.




The official intelligence report on WMD is the controversial NIE of October , which was then the basis for requesting the congressional resolution authorizing the president to make the necessary decisions concerning Saddam's regime. With many caveats and warnings, the NIE report basically came to the conclusion that WMD existed in Iraq, a conclusion grounded on circumstantial evidence at best, and crucially on fake data. As the report of the  Commission on the Intelligence regarding WMD stated, the NIE failure was not caused "by the hurried compilation of the report, but by a sheer lack of data." Lacking good intelligence, the NIE relied on assumptions based on Iraq's past behavior; such assumptions were up to a point reasonable, but then they:

stopped being working hypotheses and became more or less unrebuttable conclusions. Worse, the intelligence system became too willing to find confirmations of them on evidence that it should have recognized at the time to be of dubious reliability. Collectors and analysts too readily accepted any evidence that supported their theory that Iraq had stockpiles and was developing weapons programs, and they explained away or simply disregarded evidence that pointed in the other direction.

While some commentators, notably Kaufmann and Pillar, hold the view that the fault was not the intelligence community's but the administration's, which put pressure on the community and manipulated its conclusions, most top intelligence officers, and Tenet especially, were definitely not simply innocent victims but at least complaisant interpreters of the administration's wish. Although some of them harbored doubts about the data, and the NIE report was after all full of caveats, they nevertheless did not challenge the government's instructions, which asked for a confirmatory search. Like McNamara with the Tonkin Gulf incident, the government wanted to be reassured that its conviction was sound and solid: the request for intelligence data was not diagnostic, but rather functional to their need to be confirmed in what they wanted to hold as the truth of the matter. On the intelligence side, a willing-to-please attitude, more than direct political pressure, which apparently was also present, seems to have guided Tenet and the intelligence analysts in drafting the NIE. The  report concluded that while there had been no direct political pressure, "it is hard to deny the conclusions that intelligence analysis worked in an environment that did not encourage skepticism about the conventional wisdom." On the one side, there was Bush and his aides asking for more data confirming the presence of WMD in Iraq; on the other, there was Tenet, who wanted to satisfy the government's request, hence making Bush's wishes his own. On the whole, it seems plausible to speculate that a "groupthink" process was at work here, even though different beliefs and convictions circulated within the intelligence community as well as within the government. In this general muddle, what is clear is that very ambiguous, scant, and controversial evidence was made to sustain the belief in WMD, which corresponded to a wish of the administration but ran contrary to an impartial analysis of the data. In sustaining this belief, however, the administration was helped in part, though not wholly, by the intelligence community, which acquiesced in stretching the data to meet the prevalent conviction. Then, given that the unwarranted WMD belief was meant to convince the American public of the necessity of the Iraq war, already independently established, its public presentation, notwithstanding its being genuinely held true, underwent all sorts of embellishments and rhetorical flourishes for the public's sake.

Judgments on the NIE are divided: Kaufmann, for example, holds that the report was accurate (ibid., p. ) and that the administration ignored the warnings and caveats; others, like Ricks, that it contained serious misinterpretations (pp. ff). Yet even the intelligence community's advocate Paul Pillar admitted that the NIE was wrong (pp. ff). Reported by Johnson, The War on Terrorism: A Collision of Values, Strategies and Societies, p.  (my italics). P. Pillar rightly stresses that decisions to go to war are rarely taken based on scrutinized intelligence reports, and that it is usually the government that gives the intelligence community the orientation for the search. He also admitted that the intelligence community was largely wrong concerning active Iraqi WMD programs, but he added that the policy makers did not challenge the community's judgment at all (p. ). That should be pretty obvious if Pillar's primary thesis is true, namely, that governments ask the intelligence community for confirming data. In such circumstances, it should be the intelligence community, as the independent body of experts, that resists the government's pressure and challenges its ungrounded beliefs. By contrast, the view that the intelligence community was not innocent, that Tenet was compliant with the administration's wish, and that he silenced dissident voices is stated by Drogin, Curveball. For a balanced judgment on the role of intelligence in the decision on Iraq, see Pfiffner, "Intelligence and Decision Making before the War with Iraq," in The Polarized Presidency, pp. –.
This mode of presentation was a case of straight deception for reasons of spin: as usual, political SD is always mixed up with straightforward deception, and rooted in a background of ideological premises, unexamined assumptions, and prevalent uncritical opinions.




Apparently, there was a difference in tone between the full NIE report, much more cautious and balanced, and the five-page summary that was the only one to circulate and be read. See T. Ricks, Fiasco, pp. ff. Kaufmann reports that retired intelligence officers accused the administration not just of pressure but of coercion (p. ). Reported in Johnson, The War on Terrorism: A Collision of Values, Strategies and Societies, p. . Janis, Groupthink: Psychological Studies of Political Decisions and Fiascoes.




.. A significant factor contributing to the view that the WMD story was straightforward deception was the preposterous nature of the "juicy tidbits" of intelligence that, in addition to satellite pictures and old data, constituted the sole grounds for the belief in the existence of WMD. Three pieces were crucial to the selling campaign, for they were amply leaked to the public, thanks to the cooperation of complaisant journalists such as the well-known Judith Miller. The first is Iraq's attempt to acquire aluminum tubes which, in a first CIA analysis in April , were regarded as intended for uranium enrichment. This first assessment was later questioned by the Department of Energy (DOE), according to which the aluminum tubes were intended for rocket casings, given that they almost exactly matched the ones used by the Iraqi army. This opinion was later shared by the International Atomic Energy Agency (IAEA) in July . This more likely second interpretation of the aluminum tubes was, however, resisted by CIA analyst Joe Turner, an expert on centrifuges, who stuck with the original interpretation and consistently avoided seriously reviewing his argument. As a result, the original thesis on the aluminum tubes survived criticism and was finally leaked to the public in the famous New York Times article by Judith Miller and Michael Gordon on September , , when the run-up to war was at full speed and the public selling was the main issue. The Iraqi attempt to acquire aluminum tubes was described as a suspicious sign of an ongoing Iraqi nuclear program, taking for granted the connection between aluminum tubes and uranium enrichment, while the alternative connection with rocket casings was not even mentioned.
Hence the aluminum tubes case went into the NIE report at the beginning of October , was hinted at in Bush's speech in Cincinnati on October  (the already mentioned mushroom cloud), and figured in Colin Powell's presentation at the UN on February . Considered dispassionately, the aluminum tubes case was, after all, only a failed purchase that could be loosely connected to a program for uranium enrichment; even leaving aside the two contrasting assessments, it was in any case a slim piece of evidence for concluding that Saddam would soon have the capacity for nuclear weapons. But given that such a threatening possibility could not be ruled out with absolute certainty, and that the aluminum tubes might in principle be used for uranium enrichment, the finding was taken as supporting evidence of Saddam's threatening capacity: not properly a smoking gun, but a worrisome hint of a potential mushroom cloud. Whether Bush's aides were liars or self-deceived in this regard is, after all, a matter of plausible speculation, but it is a fact that this very same piece of evidence, an attempted purchase of aluminum tubes, artfully presented by the New York Times, was apparently convincing to a majority of Americans, clearly in the emotional grip of the mushroom cloud image. Americans, after the shock of /, did not want to believe that there was no danger if there was ever so slim a chance of it, and the aluminum tubes provided a worried people with sufficient evidence to believe the threat was real. If the aluminum tubes can be defined as inconclusive and insufficient evidence of a nuclear program for any impartial observer, disregarding such a possibility very likely represented the focal error to be avoided, by Americans as well as by the government. If this piece of information was simply slim, ambiguous, and inconclusive, the other two main pieces of information share a surreal character and, were it not for the devastation brought about by the war that they helped to start, would basically be considered hilarious stories. I will not get into the intricate details of either: the first, on the defector Curveball, has been amply recounted by Bob Drogin, while the second, on the yellowcake, the alleged Niger–Iraq uranium deal, had its Italian origin masterfully pieced together by two Italian journalists, D'Avanzo and Bonini, and its American development reconstructed in all its details by Marcy Wheeler. Very briefly, Curveball, a soi-disant Iraqi scientist, asked for asylum in Germany in November  and brought to German intelligence his testimony about the existence of mobile biological labs for developing and building biochemical weapons hidden from satellite reconnaissance, labs that he claimed to have built.

A very effective summary of all three pieces of information is provided in M. Wheeler, Anatomy of Deceit, pp. –.
Apparently, the Germans soon started doubting Curveball's credibility, for he was an alcoholic, psychologically unstable, and a compulsive liar. In September , the CIA's head of operations in Europe, Tyler Drumheller, was informed by the Germans about the unreliability of Curveball's disclosures. But such warnings were not taken up by the administration, and the story of the existence of mobile biological labs was included in Powell's speech at the UN, although a few days later, on February , the United Nations Monitoring, Verification and Inspection Commission (UNMOVIC) searched Curveball's former worksite and determined his claims about MBLs to be false. Nevertheless, apparently, the American Cabinet dismissed this conclusion. Did they ignore it on purpose in order to further the deception of the American people, or did they go on in their self-deceptive belief that there was testimonial evidence of MBLs? Difficult to say for sure, but some concomitant elements seem to suggest a willful confusion more than a simple lie by the government, for the MBLs had been the focus of the hunt for WMD. When David Kay was invested with the search, substituting for the first military unit led by Marks, he was given all the existing intelligence data concerning WMD, and he was appalled by the quality of the documents on MBLs, coming as they did from a single source, Curveball, who had not even been interrogated by American agents. Nevertheless, despite the scant evidence, he did not doubt for a minute the existence of those lethal MBLs, and he set up his specialists' team precisely for their hunt. The impression is that Curveball's revelations met the emotionally loaded and distorting expectations of Bush's aides that Saddam had been developing lethal biochemical weapons. Hence the warnings about his unreliability were dismissed, the UNMOVIC disconfirmation was disregarded, and defensive avoidance prevented American intelligence from interrogating the defector anew. In a word, all the evidence challenging Curveball's testimony was put aside, and the conviction in the MBLs persisted, oriented the post-invasion hunt, and was maintained until the conclusive evidence of the  CIA report finally excluded that any mobile biological lab had ever been there.

Drogin, Curveball. C. Bonini and G. D'Avanzo, series of articles in La Repubblica, October–November , translated by Nur al-Cubicle, October , , updated November , , http://nuralcubicle.blogspot.com///berlusconi-behind-fake-yellowcake.html. Wheeler, Anatomy of Deceit.
The yellowcake story is composed of three relatively independent events: (a) the original Italian fabrication; (b) the American investigation in Niger by Joseph Wilson, culminating in Wilson’s op-ed in the New York Times in July 2003 exposing the case; and (c) the leak about Wilson’s wife and the subsequent Department of Justice investigation, which first brought journalist Judith Miller to jail and finally led to the indictment of Scooter Libby. I will basically skip the third, American development, since it is not especially relevant for establishing whether the yellowcake story was known to be a fabrication and intentionally used  

Reported by Woodward, State of Denial, pp.  ff. Drogin reports the discomfiture of one WINPAC (Kay’s hunt unit) analyst at the realization that he had been wrong: he fell apart like a true believer at his own conclusion that “he had only seen what he wanted to see. He had lied to himself. His expectations had blinded him, twisted his logic, distorted the evidence in front of him” (p. ).

Bush and the Weapons of Mass Destruction



to deceive the public, or whether it was believed to be true in the teeth of the evidence, that is, a veritable piece of SD instrumental for convincing others. At the origin of this very complex story there is a pure fabrication by a former Italian intelligence agent who apparently did it for money, with the help of a friend working at the Niger embassy in Rome. The forgery was quite blatant, but at a certain point the crudest inaccuracies were corrected, possibly by the Italian intelligence agency (SISMI), so as to make it more plausible. In this revised form, the information about a deal between Niger and Iraq concerning a large quantity of uranium reached the agencies of France, England, other European countries, and, most notably, America. SISMI knew that the report was a forgery. Nevertheless, when, after 9/11, the Americans were looking for evidence on terrorists and rogue states, the Italian prime minister Silvio Berlusconi, eager to ingratiate himself with Bush, put SISMI under pressure to provide the Americans with all possible information; answering the request, SISMI forwarded the fake yellowcake deal to the CIA, for lack of better data. As Wheeler remarks, “the forged document formed the basis for at least seven intelligence reports shared by agencies around the world.” The second report sent to the CIA by SISMI actually prompted Vice President Cheney to ask for more information about the deal, which mixed plausible and implausible elements. The CIA appointed for the task Joseph Wilson, who had been ambassador in Niger and was the husband of its operative Valerie Plame, who worked on WMD. A relatively brief visit was sufficient for Wilson to conclude that the yellowcake story did not stand up: too many inconsistencies and inaccuracies and, above all, the difficulty of moving all that uranium through Africa without being spotted by neighboring countries, Israel, and various intelligence services. 
Back in America, Wilson was summoned to CIA headquarters in Langley to give his report. However, he was not asked to write a report but only to tell his version; notes were taken by analyst Douglas Rohn, apparently inaccurate ones, which went into the ensuing report and were sufficiently ambiguous to be taken as a confirmation of the Niger–Iraq deal. To be sure, in October 2002 the CIA warned the National Security Council against the Niger uranium claims, and refused to allow President Bush to use them in the Cincinnati speech and in the State of the Union  

Wheeler, Anatomy of Deceit, p. . In February 2003, when the IAEA asked the United States to provide supporting evidence for the Niger–Iraq deal other than the Italian report, the United States mentioned the CIA report from Wilson’s trip; thus, Wilson’s testimony was distorted so as to support the allegations instead of disproving them, as was actually the case. See Wheeler, Anatomy of Deceit, p. .




speech. But such warnings fell into a very muddy context, where reports on the deal were arriving from different sources. Later in October 2002, the US embassy in Rome received the forged documents of the deal from Elisabetta Burba, an Italian journalist; while the CIA rejected the documents as forgeries, the embassy forwarded them to the State Department’s office on nonproliferation, which handed them over to Undersecretary John Bolton. The forgery then took on a life of its own, outside CIA control. Meanwhile, Great Britain also became a source of the alleged uranium deal, which, as a consequence, acquired credibility as licensed by British intelligence. Obviously, in the background there was the administration’s wish “to fix the evidence,” which influenced the double-standard approach, partial toward confirmation and deaf to negative interpretations. Before the invasion, the uranium allegation was disclaimed by the IAEA as well. In this respect, Wheeler is right to remark that before the war started, abundant data were available disconfirming the uranium allegations, and that the war was waged on the basis of a manifest forgery and a false claim. But was the forged deal believed to be false by Bush aides and cunningly used to deceive people intentionally? Or was it the case that the administration was taken in by the regenerating capacity of this story, which came back again and again, from different sources, after every rebuttal and despite all its inconsistencies? Again, it is difficult to say, and the answer cannot but be speculative. Yet I think that, on the one hand, the story does sit well with the unfolding of a collective SD of the second type; and, on the other hand, the SD interpretation can better account for why such a preposterous story was picked as supporting evidence. The elements for SD were all in place: there was the wish for the existence of WMD in Iraq, which could serve, as it did, as the selling reason for the war. 
There was the independent conviction that Iraq, as a rogue state, actually had stockpiles of chemical weapons and was likely developing a nuclear program. This conviction was based on assumptions, ideology, and inferences from past behavior: nothing that could stand up to close examination and be persuasive to all. But for all that, it was a genuine and firm conviction. The wish for evidence supporting the existence of WMD was determined by the need for public selling. Under such circumstances, the search for evidence was oriented by the wish, which operated by selecting data and by lowering the threshold of evidence deemed necessary to believe that the case for WMD was warranted. The fact that the evidence was slim, scant, full of inconsistencies, and actually baseless to any independent observer is precisely a necessary condition for SD to be the case, for SD is believing that P is true in the teeth of contrary evidence.




It is an irrational way of holding a belief, induced by the influence of a wish on the process of belief formation and lay hypothesis testing. In this respect, the abundance of contrary evidence even before the war started cannot simply be taken as proof that the administration was lying openly and cunningly, because contrary evidence is a necessary condition for SD. If the evidence is merely ambiguous, we can have either wishful thinking (believing that P beyond the available evidence), or a wrong guess, or even a correct perception of probabilities. Other concomitant elements, which have to do with the type of SD under consideration, concur in interpreting the yellowcake story, as well as the other two, as truly, though self-deceptively, believed. First, the collective nature of this SD process meant that any resurfacing of the story from a different source was taken as an extra test of its truth; and in this case, we have seen that the yellowcake story came back several times from different sources, each of which was likely regarded as a corroboration of the story itself. It is also true that warnings repeatedly reached the administration, but in an SD process data are selectively considered in accordance with the wish, and negative information is either blocked or explained away. Second, in the second SD type, the independent conviction of the existence of WMD worked, in an example of circular reasoning, as an independent reason for believing the poor evidence to be good. Given that they were in any case convinced that WMD were in Iraq, all the evidence for it, no matter how implausible, insufficient, and contradictory for an independent observer, had to be true a fortiori, so to speak. In other words, what had to be proved by the evidence, the existence of WMD, in a twist of thinking became the proof of that very evidence, the grounds for believing the yellowcake, the aluminum tubes, and Curveball’s testimony to be true. 
In sum, the actual circumstances fit well the conditions for SD, and more specifically for the second type of political SD. In addition, the SD interpretation has an explanatory advantage over the straightforward deception view. In hindsight, the three stories looked not only weak but altogether badly patched together, frankly hard pressed to stand up to any examination, even by nonexperts. Two of them were pure fabrications and the third a fanciful interpretive hypothesis: why should shrewd liars make use of such poor lies? Why did they not fabricate something better, more plausible to start with and less easily exposed? Once again, the lying view needs to be supplemented with the stupidity thesis, which does not sit well with the Machiavellian deceiver. By contrast, if we think that the administration people were prey to SD, that is, that they believed what they wanted to believe thanks to a very common lay twist in rationality




triggered by motivations, then the preposterous nature of the three stories can be reconciled with the irrational (self-deceptive) belief that they were true. Finally, the exposure resulting from the later search was so blunt and uncontroversial that a group of scheming liars would have avoided it, and pretty easily so, for they could have planted the smoking gun themselves, as bad cops do with drugs and guns in any crime story.



The People’s SD

Finally, we have to consider that the public selling of the war was successful also thanks to the “complicity” of the media, of the political opposition, and of the public itself, as remarked by most commentators. Are the failure of the marketplace of ideas to check governmental claims and the American people’s widespread support of the intervention to be attributed only to spin, manipulation, and steady distortion of truth by the government? I am not questioning the dishonesty and duplicity of the administration, which I hope to have sufficiently stressed earlier, and I do not doubt that the public was deceived by the government, though I tend to think not through straight-faced lies. I am instead wondering whether the manipulation thesis is sufficient to account for the general support for the war in the media and in the country. Chaim Kaufmann, who defends the manipulation thesis, states that “if the public and opposition politicians had understood the weakness of the evidence of Hussein’s determination to attack the United States, links to September 11 or Al Qaeda or nuclear, chemical, or biological weapons programs, the administration probably could not have made a persuasive case for war.” If interpreted literally, the claim is trivially true: had Americans understood the weakness of the evidence, they would have both known the facts of the matter and grasped all their implications. If instead Kaufmann simply means “had Americans known the weakness of the evidence,” then the claim is much stronger, implying that, provided with a correct representation, Americans would hardly have been led into war. Yet this stronger claim seems contradicted by polls, which reported that “a year after the March 2003 invasion, public belief in the Administration’s pre-war threat claims had declined only slightly.” Hence, it seems that the American majority’s belief in the existence of  

This is the thesis defended by Kaufmann, “Threat Inflation and the Failure of the Marketplace of Ideas,” pp. –.  Kaufmann, ivi, p. . Kaufmann, ivi, p. .




WMD and in a link between Saddam and Al Qaeda was relatively independent of access to correct data about it. It may be that these beliefs had been deceptively induced by the selling campaign and, once formed, were defended by their holders against the available evidence; or it may be that the deception was greatly eased by the public’s symmetrical wish to believe in the danger of WMD. In any case, public support started steadily to decline only when there was a clear perception that things were not going well in Iraq, with insurgency, death tolls, and scandals. In sum, the persistence of the belief in the existence of WMD despite contrary evidence seems to point to SD, maybe only ex post, as a convenient supplement to the government’s spin and manipulation. It is true that intelligence information was controlled by the government, hence that Americans did not have access to data and details concerning the yellowcake, Curveball, and the aluminum tubes, as Kaufmann stresses; but the fact is that, even if true, all three tidbits of evidence were only hints from which to build up the conjectural hypothesis that Saddam was pursuing a nuclear program. The public did not know they were false, but on the whole it had sufficient information available to conclude that there was only circumstantial evidence for ongoing WMD programs in Iraq, and that the claim of an imminent danger to national security was vastly exaggerated. Thus, if Americans were manipulated into the deceptive belief that the existence of WMD was corroborated by intelligence data, they nevertheless convinced themselves that the disclosed circumstantial evidence actually constituted sufficient grounds for believing that Saddam had stockpiles of WMD and was ready to use them in a second and more deadly 9/11. 
Possibly the conviction of the presidency in this respect was contagious; possibly the mere repetition by governmental sources that the threat was imminent made it more vividly salient and realistically imagined. But in any case, no matter how well crafted governmental rhetoric was, the public response was not a necessary and direct consequence of the government’s distortions. It seems that Americans wanted to believe that the circumstantial evidence, which they did not know was misrepresented, was sufficient to draw the conclusion that stockpiles of WMD were ready to be detonated in the next attack on America. As Ricks remarks, “the Iraq fiasco occurred not just because the Administration engaged in sustained SD over the threat presented by Iraq and the difficulty of occupying the country but also because of other major lapses in several American institutions, from the military establishment and the intelligence community




to the media.” In other words, to the administration’s sins of commission were added sins of omission by Congress and by other political and social institutions, including public opinion and the citizens.

In a study devoted to the selling of the war, focused on the advertising and marketing strategies employed by the government, a crucial issue is raised, namely the contrast between Americans’ self-perception as a peace-seeking nation and their persistent support for presidents’ decisions to get involved in wars. How the pacifistic self-perception and the opposite conduct can be reconciled is a complex question, but some recurrent convictions help to work out an explanation, namely the division of the world into “good guys” (us) and “bad guys” (the others), joined with the idea that the good guys always win. The good guys are peaceful in principle, but cannot stand injustice and wrongs by the bad guys; hence they get involved in war to redress wrongs, and they will be able to do it quickly because they are powerful and winning. When, however, the conflict in which the country has become involved turns out to be too costly and too long, and victory cannot be envisaged (as in the Vietnam case, or in the Iraq case), then Americans tend to make sense of their dissatisfaction by coming to believe that they have been misled by the president into a useless war. Elaborating on this interpretation, it seems (a) that the selling of war, in general, and of the Iraq war, specifically, found fertile terrain in a Manichean vision of good vs. 
evil, and in the overconfidence of being on the winning side; (b) that such a vision needed to be activated by some special event, some perceived injustice, which, in our case, was the 9/11 attack, and which invariably made the resort to force an acceptable, even obvious, response; (c) that under these circumstances the public selling of the war stood a good chance of being successful, quite apart from the spinning capability displayed; and (d) that, finally, when the military intervention turned out badly, the fault was found in the president’s misleading the country, and/or in the intelligence misleading the president. The first two points concern the cultural grounds, the basic shared assumptions, within which discourses on war can be successfully framed and which predispose the country to overconfidence in its aims and means, and to overoptimism about the result. Point (c) concerns the public disposition to trust the government in matters of

 

Ricks, Fiasco, p. . Ivi.



Secunda and Moran, Selling War to America, pp. –.




national security and foreign policy, which creates favorable conditions for the government’s selling, on the one hand, and, on the other, for an SD process by the people. This latter process has been described by Jon Elster, revisiting Marx’s concept of the false consciousness of workers, and explains how people who have no actual control over foreign policy reduce their discomfort by coming to believe that the war is indeed necessary and just for the general interest and the national good. Finally, the last point concerns the rationalization of past conduct, when people come to realize that the war was neither necessary nor worth its efforts and costs; here a sour-grapes type of SD is definitely at work. Under this reading, we can recognize the people’s complicity in the run-up to war, a complicity that (a) is ingrained in American political culture, predisposed to optimism and trust, hence to wishful thinking; (b) is activated by some dramatic event, which provides a favorable terrain for the government’s public selling; which (c) may count on a false-consciousness conviction of the war as necessary and just, until (d) the negative reality can no longer be denied, engendering a harsh distancing from the administration and consequent accusations of deception. My argument is, as usual, mainly speculative and not meant as a general explanation of the public support for the Iraq war. I do not argue that there was no spinning and manipulation by the government over the Iraq war, but I claim that this is not the whole story. Americans cannot be regarded as innocent victims duped by scheming liars and deceptively pushed, by brainwashing, to support a hidden plan. 
This view, in addition to being disrespectful, does not fit with the lack of a precise, well-defined plan behind the public selling, as we have seen, nor with the success of the selling despite merely circumstantial evidence, nor, lastly, with the counterevidential persistence of the prewar beliefs about WMD and the Iraqi threat a year after the invasion. 

Jon Elster, Sour Grapes.

Conclusion

Explanatory and Normative Reasons in Favor of Self-Deception

C.1. In the first part of the book, I have explored the concept of SD as a preliminary to its use in political analysis. In the wide philosophical debate over the nature of SD, I have proposed an alternative account to the two main views, namely, the intentional view, according to which SD is intentionally produced by the subject, and the causal-motivationist view, according to which SD is instead causally produced by biases triggered by some motivational state of the subject. With the causal-motivationist account, I share the view that SD is believing that P against the available evidence and under the influence of the desire that P be the case. In this respect, it is a form of motivated irrationality, displayed by usually rational subjects, capable of forming and holding beliefs appropriately. Yet my explanation differs from the causal-motivationist one, for I have proposed the view that SD is the unintended outcome of intentional steps taken by the agent. I have argued that SD is produced indirectly by motivated mental acts oriented elsewhere. I call this view “the invisible hand model of SD,” given that in my argument SD is brought about in a fashion similar to those social and natural phenomena that are the products of human action but not of human design, for which the apt kind of explanation is precisely the invisible hand. If my account sides with the causal-motivationist one in doing away with the paradoxes affecting the intentional view, it also provides a response to the issues of specificity and selectivity left unresolved in the causal model. Moreover, it allows a proper attribution of responsibility to self-deceivers, which instead represents a problem for the causal model, and, most importantly, it opens up the possibility of preventive measures against SD, which are of paramount importance in political analysis.






Moving to the political domain, I have first mapped out the specificity of political SD compared to personal SD, and the differences between the context of daily life and the political reality where episodes of political SD take place. Equipped with a well-defined notion of SD, and with a typology of political SD, I have then set out to illustrate the theoretical analysis with some exemplary cases.

C.2. My reconstruction of the cases is based on known and available records and mainly on secondary sources, but it differs from other accounts insofar as I make use of political SD as an analytical tool to make sense of the mixture of epistemic confusion, motivated search and faulty processing of data, bad planning, and, in the end, deception of the public. In the gray area of uncertainty and unclear records characterizing foreign policy crises, coupled with complex and blurred agendas, I have been able to single out specific episodes of SD of different types. In the Bay of Pigs case, an SD of the first type was at work both in the two top CIA officers’ determination to pursue the plan, and in the Security Council’s and the president’s decision to go along. The motivation was partly due to the wish “to do something with Cuba,” according to the Cold War mindset, and partly to self-serving reasons, different for the different agents. In the course of the decision making, the plan met with considerable obstacles dooming its chances of success. Yet the negative evidence, well in front of the decision makers’ eyes, was discounted, so that Kennedy gave the executive order and the operation ended in a fiasco. Was the fiasco due to gross mistakes on the part of the CIA, the Security Council, the Pentagon advisers, and the president? The cold-mistake hypothesis does not seem to stand, considering the number of people involved, and the reasonable possibility for at least some of them to check the facts critically. 
Nor does the thesis of CIA deception fare any better, for deceiving the president into a fiasco was against the interests of Bissell and Dulles, who paid for the failure with their careers. The hypothesis of a motivated, biased consideration of the data according to the wish to go ahead with the plan seems the most plausible. How could the level of accuracy be so lowered, given that the stakes were so high? I think that different reasons were at work in different agents: the two CIA officers did not believe, hence discounted, Kennedy’s words denying any military backup for the landing of the Cuban expatriates. Members of the Security Council and other advisers were in fact caught in a groupthink dynamic that contributed to a diffusion of responsibility. The president was instead shielded behind the screen of the deniability




clause and believed that the costs of a failed attempt would not fall on him. Afterward, the president accommodated the reality of the failure with the comforting belief that he had been betrayed by the CIA. This story was actually believed by the people, as shown by the favorable polls concerning Kennedy’s popularity. Americans mostly exonerated their president from being the culprit and thought he was rather the victim of the intelligence services, showing that the sour-grapes form of the president’s SD was mirrored in the people’s SD. In the subsequent Cuban Missile Crisis, Kennedy emerged as a wise and successful leader who averted the risk of a nuclear confrontation without giving in to the blackmail of the counterpart. However, the happy ending was actually due to the secret deal trading the removal of the Cuban missiles for the Turkish ones. The lie concerning the deal was covered up by a form of SD of the third type, which, on the one hand, justified the lie as necessary and, on the other, downplayed the role of the deal in the conclusion of the crisis so as to make the lie irrelevant. Both ways concur in reducing the dissonance between Kennedy’s public image as a wise and strong leader who did not give in to blackmail, and the reality of the hidden deal. In the second case, concerning the Vietnam War, the plan to escalate the American involvement by bringing attacks to the North was very likely a piece of SD, due to the strong motivation to win the conflict and to save face against the discouraging evidence that the counterinsurgency in the South was ineffective and losing ground. The most interesting and specific example of SD is, however, related to the alleged incident in the Tonkin Gulf, which illustrates the second type of political SD, where SD is ancillary to the deception of others. 
The wish to escalate the conflict required a preliminary Congressional resolution; the latter would have been the object of a harsh discussion, for it implied a wider deployment of American troops and a higher toll of American lives. Moreover, the president did not want a discussion on Vietnam before his bill of rights passed, unless a dramatic event provided a strong reason in favor of the resolution, a reason that the whole country would share. The alleged incident of August 4 provided the dramatic event that the Secretary of State and the administration needed in order to pass the resolution swiftly and without dissent. What happened in the Tonkin Gulf on the night of August 4 was due to misperceptions and to bad weather conditions, but reports of the attack reached Washington, and despite the confusion about the facts, the attack was believed bona fide. When doubts about the attack started to mount, McNamara ordered a check, which confirmed what he anxiously wanted to be the truth. The two admirals who




reviewed the data were under the pressure of time and of McNamara’s anxious desire for confirmation. Despite persisting doubts, they concluded that the incident was bona fide, so that the reprisal followed as planned. The incident was used as a pretext to extract a resolution from Congress, and it corresponded to the wish of the administration; but, for all that, it was not a fabrication. The strong, though deceptive, conviction in favor of the escalation and the anxiety due to the pressure of time jointly contributed to discounting the costs of inaccuracy, lowering the threshold of evidence required to believe the incident true. Thus, the administration lied concerning the implications of the resolution, but genuinely believed that there had been an attack. Finally, in the third case, I have detected two SD episodes of the first type that were relevant in building up the conviction that toppling Saddam was necessary for national security. The first is a case of twisted SD, concerning the belief that after September 11 the United States was under an imminent, possibly nuclear, attack. The second is instead a case of straight SD, concerning the belief that terrorists must be sponsored by a state and that Iraq was such a state. In either case, an anxious wish was operative, and ample contrary evidence was available and discounted. The more specific episode of SD with reference to the Iraq war, however, concerned the belief in the presence of WMD in Iraq, which, like the Tonkin Gulf incident, was a case of the second type and represented the necessary pretext for selling the war to the national and international audience. The motivation to find a conclusive argument for invading Iraq was pushing the search for evidence of WMD; the search, in turn, was biased, thanks to the fact that the administration was already independently convinced of their existence, given the neocon ideology of rogue states and the unexamined assumption of Saddam as a sponsor of terrorism. 
Thus, despite the negative data, I argue that the belief in their existence was sincerely held by the Bush administration, and by its British ally as well.

C.3. Despite the fact that my argument has been only speculative in all three cases, SD fares better than a purely cognitive approach to decision making, which explains faulty reasoning in terms of cognitive biases and other epistemic fallacies but cannot properly account for the role of self-serving motivation in the belief formation of decision makers. Symmetrically, the straight deception view cannot explain the subsequent failure of the policy. By contrast, SD explains the deception of the people together with the bad planning, thus dispensing with conspiracy theories. Rehearsing a distinction drawn by Robert Nozick, the straight lying view does not




constitute a fundamental explanation, since the explanans (the deceptive intention) contains terms of the explanandum (the deceptive belief). Instead, SD represents a case of fundamental explanation where explanans and explanandum do not share any common terms, for the agent has no intention of deceiving herself or anybody else, but comes to be deceived by a process that is altogether unintentional, although composed of intentional steps. Hence, the deception of the people is a by-product of SD, while the bad planning follows from being based on false beliefs. Whereas no logical link can be imputed between the intention to deceive and bad planning, such a connection is intrinsic to any plan based on SD, that is, on false representations of data. The analysis of the cases suggests three remarks to me on the use of SD in politics. First, despite the intricacies and complications of international reality, SD can be singled out distinctly. I have been able to detect SD specifically, thanks to the conception outlined in the first part of the book. SD is in fact a plausible hypothesis if the three minimal necessary conditions apply, namely, the contextual, the motivational, and the cognitive conditions. In this way, I have been able to account for the role of SD, which, in the gray area of political deception, is limited but crucial for explaining certain policies. Even if the self-deceptive beliefs in the incident in the Tonkin Gulf or in the existence of WMD were used only as pretexts, they nevertheless represented crucial elements in making the escalation in Vietnam and the invasion of Iraq possible. The second remark concerns the role of the ideology imbuing the context where SD takes place. From what we have seen in the previous analysis, ideologies usually provide a basket of firm and unexamined convictions, fixed mindsets, ideas, and unquestioned views that provide fixed lenses not only for interpreting reality but also for defining agendas and political goals. 
In politics, the motivation setting SD in motion usually comprises the wish to attain a certain political goal plus self-serving wishes for success, fame, promotion, and power. If the goal is set by ideological convictions, as it was in all the cases examined, it is often immune to public scrutiny and cost-benefit analysis, providing favorable motivational ground for the production of unjustified beliefs. Only in the Cold War mindset could Cuba have been conceived of as a priority on the American foreign policy agenda, and a small, distant Asian country as a crucial domino for preserving the free world from a Communist takeover. Similarly, only the neocon ideology could

See Robert Nozick, Anarchy, State and Utopia, Oxford: Blackwell, , p. .

Conclusion



have prioritized regime change over the risk of a stateless society. Then, when the desire to reach a certain objective is emotionally switched on by certain contextual conditions, the SD process is likely to start, for the negative evidence leads not to a change of objective but to a distortion of the data. Moreover, ideological convictions not only represent favorable grounds for SD to start but also, by providing agents with undisputed certainties, have the effect of lowering the level of accuracy in the consideration of data. If the belief in the treacherous and aggressive nature of the communist North Vietnamese forces is firm and beyond doubt, then the threshold of evidence required to believe in the night attack on the two American ships by North Vietnamese torpedoes turns out to be considerably lowered. The third remark from the analysis of the cases concerns precisely the costs of inaccuracy. In the first part of this work, I stressed that the sinking of inaccuracy costs, together with high anxiety and stress, is crucial for explaining the selectivity of SD. In personal life, the sinking of inaccuracy costs is usually induced by the condition, or by the feeling, of powerlessness. But political decision makers would seem the opposite of powerless. Yet that depends on (a) the type of SD under scrutiny and (b) the position of the decision maker in the team. If SD is of the sour-grapes or the third type, covering up failure or justifying deception, the politician is actually powerless to reverse the bad decision or to change the lie and has nothing to gain from a diagnostic review of the data; accordingly, the costs of inaccuracy sink and vigilance is lowered. When SD concerns future plans, instead, the stakes are definitely high, but members of the decision-making group may perceive themselves to be powerless in the hierarchy, an effect of groupthink that induces a related diffusion of responsibility and discounting of the costs of inaccuracy.
The president or the crucial decision makers, however, should not share the team members' sense of diffused responsibility, and they are definitely not powerless. Nevertheless, they may come to discount the costs of inaccuracy thanks to other supplementary beliefs that independently support the self-deceptive process. In these cases, inaccuracy costs are not low but are pushed aside, with similar effects on epistemic vigilance. A good example is the already quoted deniability clause, which may induce the discounting of inaccuracy costs because the administration is convinced that a potential failure would not fall on it. Alternatively, the ideological conviction that a certain plan is needed and right, and that the enemy's trickiness is established beyond doubt, may externally support the truth of the self-deceptive belief ex ante, independently of the belief-formation process, hence lowering the threshold of evidence required to believe that P.




A caveat is now in order if proper use is to be made of political SD as an analytical category. Its explanatory capability must not be overextended to cover any instance of unclear, blurred, or misconceived political decisions. In other words, I am not claiming that cold mistakes never come into play in the process, or that dishonesty, manipulation, and duplicity are never the case. Usually, the circumstances of political decision making, when momentous foreign policy choices are at issue, are blurred and confused both epistemically and motivationally. Sorting out simple miscalculations from genuine uncertainty, and dishonesty and duplicity from SD, is often a difficult task, for, as I have shown when analyzing the cases, all these elements are present and entangled. Yet I contend that making use of the analytical category of SD helps to dispel the fog, giving each element its due. I have made it clear that SD is not meant as a substitute for straight deception, and the typology I have proposed specifies precisely the different connections that political SD establishes with the deception of the public. For, at the very minimum, if political leaders and officials are self-deceived, they in any case induce deception in the people as a consequence. Yet in some types the connection is more direct, if SD is ancillary to other-deception, or if it represents the covering up of a lie. Unpacking the lying and singling out the episodes of SD makes the subsequent backfiring understandable, as in the case of the Gulf of Tonkin incident in  and of the WMD in . A proper use of political SD is in general conditional on the availability of a well-defined, specific category, and this is the reason why I spent the first part of this work digging into the philosophical discussion of SD. Many political analysts and commentators have hinted at SD or, in some cases, at wishful thinking in order to explain policies that were both misguided and deceptive.
Their reference to the commonsense notion of SD, however, did not significantly alter the explanations in terms of straight lies or in terms of epistemic confusion, for, in order to pick out the role of SD in the fog of complex decision making, one must refer to a well-specified concept of SD. For example, if SD is to play its role as a fundamental explanation, as I contend, it cannot be conceived of as deceiving oneself, but must be understood as an unintended outcome of mental steps directed elsewhere. If SD were conceived as an intentional

In this respect, Nozick’s distinction is especially telling. He was comparing the social contract model with the invisible hand model for explaining the origin of the State. In my argument, I discard the intentional model of SD in favor of a model I have labeled “invisible hand” because of its similarity with the invisible hand explanation of social phenomena. See Nozick, Anarchy, State and Utopia.




doing of the subject, it would not deliver the advantages I have pointed out above in comparison with the straight-deception explanation. Furthermore, only a rigorous conception of SD makes it possible to draw proper distinctions in the web of unwarranted beliefs induced by ideology, unexamined assumptions, stubborn convictions, and received theories, and to single out the actual episodes of SD. Picking out a real instance of SD requires criteria for identifying the phenomenon, both in terms of circumstances and in terms of characterizing features. At the end of the first part, I argued that no single set of necessary and sufficient conditions for SD could be listed, given the wide variety of SD phenomenology and its fuzzy boundaries. Consequently, most of the sets proposed in the literature are either incomplete, or problematic, or suitable only for a subset of cases that may not be the most typical in the social and political domain. If SD is to apply to social and political reality, the concept needs to be tailored to its real complexity, balancing comprehensiveness with rigor. In such a spirit, I have proposed considering three minimal necessary conditions, which any kind of SD must meet, and which jointly enable the observer to identify SD cases with a high degree of probability, even if lies or mistakes cannot conclusively be ruled out from the start. They are: (1) the contextual condition, comprising the circumstances conducive to SD; (2) the cognitive condition, picking out the flaws in the reasoning and the biases in the treatment of data leading to the false conclusion; and (3) the motivational condition, referring to the desires, aims, and emotions driving the decision maker, in the absence of which the flaws could reasonably have been avoided. If no motivation can be imputed to the decision making, then we have just cold mistakes.
If there are no flaws in the reasoning, but only a distance between what the decision makers believed and wanted and what they publicly asserted, then we are looking at a case of straight deception. SD is thus singled out by the joint presence of the two components. As SD is not the usual way of processing data when the evidence is unfavorable, the cognitive and motivational components are activated only in certain circumstances, namely, emotional pressure and evidence contrary to the decision maker's wishes. These circumstances are often typical of international crises, where the pressure of time and the crucial nature of the decisions heighten anxiety in decision making, especially when the data appear to run counter to the preferred option. At that point, epistemic accuracy is lowered by the anxious desire to "fix" the policy and by a variety of reasons for discounting inaccuracy costs.




In these conditions, episodes of motivated irrationality are likely to take place, either in the form of SD or of wishful thinking. How can we set the two apart? I have argued that SD is specific compared with wishful thinking: wishful thinking is believing what one wishes regardless of the evidence, while in SD the wish is activated by the appraisal of the contrary evidence, making the wish, too, emotionally loaded. Moreover, the self-deceptive belief comes to be held by dint of much argument in favor of discounting the data to the contrary. In the complications of international relations, evidence is usually very confused and difficult to disentangle, but I have tried to pick exemplary cases where the negative evidence was, on the whole, weightier than the positive, and available at the time of the decision-making process. In addition, wishful thinking is of no use in cases of cognitive dissonance reduction, that is, when unpalatable but known facts must be redescribed so as to minimize discomfiture and failure; similarly, it is out of place as the rationalization of a lie. In other words, when SD works backward as a cover-up for fiascoes or misdeeds, it cannot be mistaken for wishful thinking, which has an obvious forward-looking dimension. It may be more difficult to set the two apart when a plan to be decided upon, or a pretext for a plan to be carried out, is at issue. In such cases, the criterial difference between the two lies both in the amount of negative evidence available at the time and in the complex argument sustaining the false belief. Compare, for example, the belief that an escalation of the American involvement was needed in  with the belief that the war in Iraq would be won and done in a few days. I have argued that the first is likely a case of SD, for this conviction was prompted precisely to evade the evidence of the failure of American policy in the south of Vietnam, which made the desire to win and save face especially anxious.
Moreover, it was the collective product of many meetings and many people. Thus, while the reasoning backing the belief was faulty, it was nevertheless articulated in complex arguments. By contrast, the belief that the military operations in Iraq were going to be swift and over in a few days was probably more a case of wishful thinking. There was no discussion of a plan for after the invasion, and no argument was produced to sustain the belief. The administration and its ally simply let themselves be driven by the wish to win, and to win swiftly and quickly, without giving much thought to the "how." They trusted American military might and their conviction that their decision was right and needed. One can say that a careful consideration of the data would have suggested the difficulties of the aftermath, but very likely their optimistic mood




was not the product of the evasion of hard facts, but directly of their wish inducing the corresponding belief. Finally, I recommend using the SD category in political analysis because it opens up room for prophylactic measures, at least in principle. The crucial consideration is that, contrary to lies and mistakes, SD can be foreseen when the circumstances likely to give rise to the process obtain. Most ex-post reports on decisions and policies that failed, with disastrous repercussions, are devoted not only to establishing the truth, that is, to understanding what went wrong, but also, and I would say especially, to the avoidance of similar mistakes in the future. The last inquiry on Iraq, delivered on July , , is exemplary in this respect. In his public statement, Sir John Chilcot voiced the following considerations: Military intervention elsewhere may be required in the future. A vital purpose of the Inquiry is to identify what lessons should be learned from experience in Iraq. There are many lessons set out in the report. [. . .] The UK's relationship with the US has proved strong enough to bear the weight of honest disagreement. [. . .] The lessons also include: The importance of Ministerial discussion which encourages frank and informed debate and challenge [. . .]

As in most preceding reports, on Iraq as well as on Vietnam or the Bay of Pigs, the purpose of learning lessons from failure is clearly expressed, and, among the lessons to be learned, honest disagreement and wide, open, and frank discussion challenging received intelligence are always present and eagerly stressed. I have reported that Robert Kennedy pointed out the need for a devil's advocate when deciding on something like a military intervention, the very thing absent in the Bay of Pigs incident. These repeated recommendations are, however, weakened by the lack of a proper analysis of why the policy was misconceived. The Chilcot report is no exception in this regard: it underlines that the information available at the time of the decision was insufficient to authorize a critical choice such as military intervention in a foreign country without first having been attacked. It made clear that the evidence of Iraq's possession of WMD was mostly presumptive and based on past records, and it stressed that the lack of any conclusive evidence proved neither that there were WMD nor that there were none. But the report does not raise the question whether the hasty

Sir John’s public statement, July , , www.iraqinquiry.org.uk/the-inquiry/sir-john-chilcotspublic-statement/.




decision to invade was based simply on miscalculation, or on bad faith or plain deception in order to go along with the American ally. From the records, what clearly emerges is Blair's wish to back the American invasion, once his attempts to postpone it until after a second UN resolution had ceased. Whether his wish made him lie, or made him appraise the data at hand less stringently, we cannot possibly know for sure. The inquiry actually leaves all those possibilities open, for they cannot be ascertained with absolute confidence. Actually, the option of a cold mistake in considering the data does not tally well with the records, for it simply ignores the strong influence of the anxious wish to align the United Kingdom with US resolve. The distinctive position of the UK cabinet on the matter focused on the IAEA inspections and, only in the event of the inspections failing, on another UN resolution, which would have provided legal grounds for the invasion under international law and broad international support. Blair was convinced he could win the American ally over to such grounds. When he realized that Bush had made up his mind, he contented himself with the fact that the inspections had not been able to clear Saddam of the possession of WMD; hence he took the lack of final evidence that there were no WMD as proof of the possibility of their being there. Given the risk inherent in Saddam's possession of WMD in the circumstances of international terrorism at the time, and given the reiterated proof that Saddam had lied to the international community in this respect, the worst-case scenario (possession of WMD) not only could not be ruled out but, by dint of probability neglect, even came to be believed as very likely. This conjectural reconstruction seems to me more plausible than the simple deception option.
For someone like Blair, who wanted and tried to do the right thing (inspections and a UN resolution), the discomfort of direct lying might have proved hard indeed, while letting himself be taken in by a less stringent appraisal of the data was a much easier way out. My point is that if lessons are to be learned from past failures, the question of SD must in any case be raised. In addition to the conjectural reconstruction of decision makers' thinking, the response may also be based on other, contextual and less speculative, reasons for thinking that SD played a role. Despite all the reiterated recommendations, frankness, dissent, and critical thinking are always, and not for nothing, missing in misconceived decisions. If the two decision-making processes leading up to the resolution of August ,  and the decision of March  were analyzed in parallel, the striking similarities would become apparent, and jointly they would support the thesis that SD was at work, which is why they featured neither frank and open debate nor any challenge to received




intelligence. The first lesson to be learned, to my mind, lies in acknowledging the problem of SD and understanding how it operates. Political SD is a collective product; in this respect, the opacity characterizing the unfolding of SD is spread among all involved, silencing the criticisms and qualms of the less accommodating members. The critical views are either not voiced or, if voiced, not heeded and pushed aside, while the views concurring with the desired conclusion constitute evidence for everyone else that the conclusion is sound. Appraisal by independent observers would appear to be crucial in breaking the support for the deceptive conclusion provided by the participants, who are eager to please the leader and tend to disregard their own responsibility. The Chilcot statement simply expressed the need for frequent, frank, open discussion challenging received views, but I have argued that, given the way collective SD works, frankness and independence are hardly to be expected from within the group engaged in the decision-making process. Not only should the review be epistemically independent of the trap of groupthink, but the future careers of the reviewers must not depend on the government, for we have seen that intelligence agencies and military personnel cannot be trusted to play devil's advocate, despite having their own separate, organization-specific agendas. I have no fast solution for addressing this problem: whether it be a parliamentary committee, a specific independent authority, or even a jury of citizens, I leave it entirely up to constitutional designers. As I see it, the problem in this respect is to strike a balance between the democratic credentials of such a body and the need to work fast under the pressure of timely decision making. Governments and cabinets cannot postpone their decisions indefinitely, yet a specific authority with a parliamentary mandate may look like just another group of experts without democratic accountability.
I leave the intricacies of institutional design to scholars specializing in the field, but I should like at least to point out that such an institution is meant to constitute a precommitment against governmental decisions based on self-deceptive beliefs. In this respect, it must somehow mirror the features of precommitment in cases of personal SD, where friends act as referees for the self-deceiver. What allows the friends of the self-deceiver to see through his or her SD is the lack of the emotionally loaded wish driving the self-deceiver's thinking. Without that wish at work, the referee is able to assess the evidence with normal accuracy and to draw the correct conclusion about the friend's belief and state of mental confusion. In turn, the self-deceiver, who usually resists criticism, may accept the friend's suggestion if (a) she has become aware of the general problem of SD and




the reasons for avoiding it; and (b) she has appointed the friend or friends as referee(s) and authorized them to come to her aid should she fall prey to SD. Such steps prior to the self-deceptive episode specifically represent the subject's precommitment to averting any future SD and make it likely that the friend's advice will be taken seriously in the case in point and not dismissed beforehand. In sum, for precommitment to work through the intervention of a referee, the preliminary conditions are the acknowledgment of SD as a problem, the will to avoid it, and the authorization given to friends. Similarly, the precommitment strategy can work in the institutional arena if (a) the problem has been acknowledged, not just in general but by the very decision makers themselves, and (b) the government has agreed to its decisions being overseen by some independent body, for only in that case can it accept the overseers' view. This book may be seen as putting forward arguments for meeting the first precondition for precommitment against political SD. I leave it to another work to dig into the institutional details. Before working out specific prophylactic measures for countering political SD when decision making involves momentous decisions with far-reaching consequences, made under the pressure of time and in difficult circumstances, the problem must first be acknowledged.

References

Ainslie, G. () The Breakdown of the Will, Cambridge: Cambridge University Press. Allyn, B., Blight, J., Welch, D., eds. () Back to the Brink: Proceedings of the Moscow Conference on the Cuban Missile Crisis, Lanham, MD: University Press of America. Altermann, E. () When Presidents Lie: A History of Official Deception and Its Consequences, New York and London: Viking. Ames R. T., Dissanayake, W., eds. () Self and Deception, New York: State University of New York Press. Arendt, H., () Lying in Politics: Reflections on the Pentagon Papers, in Crises of the Republic, San Diego, New York, and London: Harcourt Brace Jovanovich. Aristotle [ BCE] () Nicomachean Ethics, trans. D. W. Ross, Oxford: Oxford University Press. Audi, R. () “SD, Action and the Will,” Erkenntnis, : –. () “SD and Practical Reasoning,” Canadian Journal of Philosophy,  (), –. () “SD vs. Self-Caused Deception: A Comment to Mele,” Behavioral and Brain Sciences, : . Bach, K. () “Thinking and Believing, in SD,” Behavioral and Brain Sciences,  (), –. () “An Analysis of SD,” Philosophy and Phenomenological Research, : –. Bamford, J. () A Pretext for War, New York and London: Doubleday. Bandura, A. () “SD: A Paradox Revisited,” Behavioral and Brain Science: –. Barnes, A. () Seeing through SD, Cambridge: Cambridge University Press. Barrett, D. M., ed. () Lyndon Johnson’s Vietnam Papers: A Documentary Collection, College Station: Texas A &M University Press. Berman, L. () Johnson’s War: The Road to Stalemate in Vietnam, New York: Norton.






Bermudez, J. L. () “SD, Intention and Contradictory Beliefs,” Analysis,  (), –. Bert, W. () Military Intervention in Unconventional War, New York: Palgrave Macmillan. Beschloss, M., ed. () Taking Charge: The Johnson White House Tapes –, New York: Simon & Schuster. Blaufarb, D. () The Counter-Insurgency Era: US Doctrine and Performance, New York: Free Press. Blight, J. G., Welch, D. A., eds. () On the Brink: Americans and Soviets Reexamine the Cuban Missile Crisis, New York: Hill &Wang. Blight, J. G., Allyn, B., Welch, D. () Cuba on the Brink: Castro, the Missile Crisis and the Soviet Collapse, New York: Pantheon. Borge, S. () “The Myth of SD,” The Southern Journal of Philosophy, : –. Butler, J. [] () Upon Self-Deceit in Works, ed. W. E. Gladston, vol. , Oxford: Clarendon, –. Chilcot Inquiry on Iraq () www.iraqinquiry.org.uk/media/. Clarke, R. () Against All Enemies: Inside America’s War on Terror, New York: Free Press. Cliffe, L., Ramsay, M., Bartlett, D., eds., (), The Politics of Lying: Implications for Democracy, London: Macmillan. Clifford, W. K. [] () “The Ethics of Beliefs,” in Lectures and Essays, Stephen L., Pollock F., eds., London: Macmillan. Cohen, A. () Exploration in Cognitive Dissonance, New York: Wiley. Conyers, J. C., and staff () The Constitution in Crisis: The High Crimes of the Bush Administration and a Blueprint for Impeachment, New York: Skyhorse Publishing. Darwall, S. () “SD: Autonomy and Moral Constitution,” in Perspectives on SD, ed. B. P. McLaughlin and A. O. Rorty, pp. –. Davidson, D. () “Paradoxes of Irrationality,” in Wollheim, R., Hopkins, J., eds., Philosophical Essays on Freud, Cambridge: Cambridge University Press, pp. –. (), “Deception and Division,” in E. LePore, B. McLaughlin, eds., Actions and Events: Perspectives on the Philosophy of Donald Davidson, Oxford: Basil Blackwell, –. 
Demos, R., (), “Lying to Oneself,” Journal of Philosophy, , –. Deweese-Boyd, I (), “Taking Care: SD, Culpability and Control,” Theorema, : –. () “SD,”The Stanford Encyclopedia of Philosophy (Spring  Edition) Edward N. Zalta (ed.), https://plato.stanford.edu/entries/self-deception/. Dobbs, M. () One Minute to Midnight, Kennedy, Khrushev, and Castro on the Brink of Nuclear War, New York: Vintage. Dommen, A. () The Indochinese Experience of the French and the Americans: Nationalism and Communism in Cambodia, Laos and Vietnam, Bloomington: Indiana University Press. Doris, J. M. () Lack of Character, Cambridge: Cambridge University Press.




Douglas, W., Gibbins, K. () "The Inadequacy of Voice Recognition as a Demonstration of SD," Journal of Personality and Social Psychology, : –.
Danner, M. () The Secret Way to War: The Downing Street Memo and the Iraq War's Buried History, New York: New York Review of Books.
Drogin, B. () Curveball, Reading, UK: Ebury Press.
Edelman, M. () The Politics of Misinformation, Cambridge: Cambridge University Press.
Edwards, G. C., King, D. S., eds. () The Polarized Presidency of G. W. Bush, Oxford: Oxford University Press.
Egan, L. C. () "SD Is Adaptive for Itself," Behavioral and Brain Sciences, : –.
Elster, J. () Ulysses and the Sirens, Cambridge: Cambridge University Press.
() Sour Grapes: Essay on Rationality and Irrationality, Cambridge: Cambridge University Press.
Elster, J., ed. () The Multiple Self, Cambridge: Cambridge University Press.
() Alchemies of Mind, Cambridge: Cambridge University Press.
() Ulysses Unbound, Cambridge: Cambridge University Press.
Fallows, J. () Blind into Baghdad: America's War in Iraq, New York: Vintage.
Festinger, L. () A Theory of Cognitive Dissonance, Stanford: Stanford University Press.
Fingarette, H. () SD, London: Routledge & Kegan Paul.
() "SD Needs No Explaining," Philosophical Quarterly, : –.
Fischer, J. M., Ravizza, M. () Responsibility and Control: A Theory of Moral Responsibility, Cambridge: Cambridge University Press.
Forrester, M. () "SD and Valuing Truth," American Philosophical Quarterly, : –.
Foss, J. () "Rethinking SD," American Philosophical Quarterly, : –.
Frankfurt, H. () On Bullshit, Princeton: Princeton University Press.
Freedman, L. () Kennedy's Wars, Oxford: Oxford University Press.
Frey, V., Voland, E. () "The Evolutionary Route to SD: Why Offensive vs. Defensive Mechanism May Be a False Alternative," Behavioral and Brain Sciences, : –.
Freyd, J., Birrell, P. () Blind to Betrayal: Why We Fool Ourselves We Aren't Being Fooled, Hoboken, NJ: Wiley.
Friedrich, J. () "Primary Error Detection and Minimization (PEDMIN) Strategies and Social Cognition: A Reinterpretation of Confirmation Bias Phenomena," Psychological Review, : –.
Funkhouser, E. () "Do the Self-Deceived Get What They Want?" Pacific Philosophical Quarterly,  (), –.
Galeotti, A. E. () "SD: Intentional Plan or Mental Event?" Humana Mente, : –.
() "Liars or Self-Deceived? Reflections on Political Deception," Political Studies, : –.




Gardner, S. () Irrationality and the Philosophy of Psychoanalysis, Cambridge: Cambridge University Press. Gardner, L. C. () Pay Any Price, Chicago: Ivan R. Dee. Gardner, L. C., Gittinger, T., eds. () Vietnam: The Early Decisions, Austin: University of Texas Press. Gergen, K. J. () “The Ethnopsychology of SD,” in Martin M., ed., SD and Self-Understanding: –. Gibson, J. W. () The Perfect War: Technowar in Vietnam, New York: Atlantic Monthly. Gilovich, T. () How Do We Know What Isn’t So? New York: The Free Press. Goldstein, L. J. () Preventive Attack and WMD: A Comparative Historical Analysis, San Francisco: Stanford University Press, Goleman, D. () Vital Lies, Simple Truths: The Psychology of SD, London: Bloomsbury. Goulden, J. C. () Truth Is the First Casualty. The Gulf of Tonkin Affair – Illusion and Reality, Chicago: James Adler-Rand McNall Company. Greenawald, A. () “Self-Knowledge and SD” in J. Lockard and D. Paulhus, eds., SD: An Adaptive Mechanism, pp. –. Gur, R. C., Sackeim, H. A. () “SD, Self-Confrontation and Consciousness,” in Schwartz, G. E., Shapiro, D., eds., Consciousness and Self-Regulation: Advances in Research, vol. , New York: Plenum Press: –. () “SD: A Concept in Search of a Phenomenon” Journal of Personality and Social Psychology, : –. () “Voice Recognition and the Ontological Status of SD,” Journal of Personality and Social Psychology, : –. Haig, A. () Inner Circles: How America Changed the World: A Memoir, New York: Warner. Haight, M. () A Study on SD, Brighton, UK: Harvester Press. () “Tales from a Black Box,” in Martin, M., ed., SD and SelfUnderstanding: –. Halberstam, D. () The Best and the Brightest, New York: Random House. () “LBJ and Presidential Machismo” in J. P. Kimball, ed., To Reason Why, pp. –. () The Making of a Quagmire: America and Vietnam during the Kennedy Era, New York: Knopf. Harvey, D. 
() The New Imperialism, Oxford: Oxford University Press. Haselton, M. G., Nettle, D. () “The Paranoid Optimist: An Integrative Evolutionary Model of Cognitive Biases,” Personality and Social Psychology Review, : –. Haas, R. () War of Necessity, War of Choice: A Memoir from the Two Iraqi Wars, New York: Simon and Schuster. Hayek, F. A. () Individualism and Economic Order, London-Chicago: Routledge and Chicago University Press. Heil, J. () “Doxastic Incontinence,” Mind, : –.




Helm, P. () Belief Policies, Cambridge: Cambridge University Press. Helsing, J. W. () Johnson’s War/Johnson’s Great Society. The Guns and Butter Trap, Westport, Connecticut-London: Praeger. Higgins, T. () The Perfect Failure: Kennedy, Eisenhower, and the CIA at the Bay of Pigs, New York: W.W. Norton. Holmes, S. (), The Matador’s Cape: America Reckless Response to Terror, Cambridge: Cambridge University Press. Hong, L., Page, S. () “Groups of Diverse Problem Solvers Can Outperform Groups of High Ability Problem Solvers,” Proceedings of the National Academy of Sciences,  (): –. Hybel, A. R., and Kaufman, J. M. () The Bush Administration and Saddam Hussein: Deciding on Conflict, New York: Palgrave-Macmillan. Isikoff, M., and Corn, D. Ubris, New York: Crown,  Jamieson, K. H. () Dirty Politics, Oxford: Oxford University Press. Janis, I. () Groupthink: Psychological Studies of Policy Decisions and Fiascoes, Boston: Houghton Mifflin. () Crucial Decisions, New York: The Free Press. Jay, M. () The Virtue of Mendacity: On Lying in Politics, Charlottesville: University of Virginia Press. Jenny, K. () Vices of Inattention, Journal of Applied Philosophy, : –. Jervis, R. () Perception and Misperception in International Politics, Princeton: Princeton University Press. () “Understanding Beliefs and Threat Inflation,” in A. Trevor-Thrall, J. K. Cramer, eds., American Foreign Policy and the Politics of Fear, pp. –. Johnson, T. () The War on Terrorism: A Collision of Values, Strategies and Societies, Boca Raton, FL: CRC Press. Johnston, M. () “SD and the Nature of the Mind,” in B.P. Mclaughlin and A.O.Rorty, eds., Perspectives on SD, pp. –. Kahneman, D., and Renshon, J. () “Hawkish Biases,” in A. Trevor Thrall, J. K. Cramer, eds., American Foreign Policy and the Politics of Fear, pp. –. Kaplan, F. 
() Daydream Believers: How a Few Grand Ideas Wrecked American Power, Hoboken, NJ: Wiley. Kaufmann, C. () “Threat Inflation and the Failure of the Marketplace of Ideas. The Selling of the Iraq War” in A. Trevor Thrall, J. K. Cramer, eds., American Foreign Policy and the Politics of Fear: Threat Inflation since /, pp. –. Kennedy, R. () Thirteen Days: A Memoir of the Cuban Missile Crisis, New York: W.W. Norton. Kimball, J. P., ed. () To Reason Why: The Debate over the Causes of US Involvement in the Vietnam War, New York: McGraw-Hill. Kipp, D. () “SD, Inauthenticity and Weakness of the Will” in Martin M., ed., SD and Self-Understanding: –. () “On SD” Philosophical Quarterly, : –.




Kirsh, J. () “What's So Great about Reality?” Canadian Journal of Philosophy, : –.
Kissinger, H. () The Necessity for Choice, New York: Cotler Books.
Klayman, J., Young-Won Ha () “Confirmation, Disconfirmation and Information in Hypothesis Testing,” Psychological Review, : –.
Kolko, G. () Anatomy of a War, New York: The New Press.
Kruglanski, A. () “Motivated Social Cognition: Principles of the Interface,” in E. Higgins, A. Kruglanski, eds., Social Psychology: Handbook of Basic Principles, New York-London: The Guilford Press, pp. –.
Kunda, Z. () “Motivated Inference: Self-Serving Generation and Evaluation of Causal Theories,” Journal of Personality and Social Psychology, : –.
Kurzban, R. () Why Everyone Else Is a Hypocrite: Evolution and the Modular Mind, Princeton: Princeton University Press.
Landemore, H. () Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many, Princeton: Princeton University Press.
Langston, T. S. () “The Decider's Path to War and the Importance of Personality,” in G. C. Edwards, D. S. King, eds., The Polarized Presidency of G. W. Bush, pp. –.
Larson, D. W. () Origins of Containment, Princeton: Princeton University Press.
Lazar, A. () “SD and the Desire to Believe,” Behavioral and Brain Sciences, : –.
() “Deceiving Oneself or Self-Deceived? On the Formation of Beliefs ‘Under the Influence’,” Mind, : –.
Levy, N. () “SD and Moral Responsibility,” Ratio, : –.
() “Restoring Control: Comments on Sher,” Philosophia, : –.
Lewy, G. () America in Vietnam, New York: Oxford University Press.
Lind, M. () Vietnam, the Necessary War: A Reinterpretation of America's Most Disastrous Military Conflict, New York: The Free Press.
Lockard, J., Paulhus, D., eds. () SD: An Adaptive Mechanism, Englewood Cliffs, NJ: Prentice Hall.
Longeway, J. L. () “The Rationality of Escapism and SD,” Behavior and Philosophy, : –.
Lynch, K. () “SD and Stubborn Beliefs,” Erkenntnis, : –.
Martin, M., ed. () SD and Self-Understanding, Lawrence: University of Kansas Press.
ed. () SD and Morality, Lawrence: University of Kansas Press.
Mata, A., Ferreira, M., Sherman, S. () “Flexibility in Motivated Reasoning: Strategic Shift of Reasoning Modes in Co-Variation Judgment,” Social Cognition, : –.
McLaughlin, B. P. () “On the Very Possibility of SD,” in R. T. Ames, W. Dissanayake, eds., Self and Deception, pp. –.
McLaughlin, B. P., Rorty, A. O., eds. () Perspectives on SD, Berkeley: University of California Press.




McNamara, R. () In Retrospect. The Tragedy and Lesson of Vietnam, New York: Random House. McNamara, R., Blight, J., Brigham, R.K. () Argument without End, New York: Public Affairs. Mearsheimer, J. (), Why Leaders Lie: The Truth about Lying in International Politics, Oxford: Oxford University Press. Mele, A. () “Incontinent Believing” The Philosophical Quarterly, : –. () Irrationality. An Essay on Akrasia, SD and Self-Control, Oxford: Oxford University Press. () “Real SD,” in Behavioral and Brain Sciences, : –. () SD Unmasked, Princeton: Princeton University Press. () “When Are We Self-Deceived?” Humana Mente, : –. Merry, R. W. () Sands of Empire: Missionary Zeal, American Foreign Policy and the Hazard of Global Ambition, New York-London: Simon & Shuster. Michael, C., and Newen, A. () “SD as Pseudo-Rational Regulation of Beliefs,” Consciousness and Cognition, : –. Moïse, E. E. () Tonkin Gulf and the Escalation of the Vietnam War, Chapel Hill-London: The University of North Carolina Press. Moomal Z., Henzi, S. P. () “The Evolutionary Psychology of Deception and SD,” South African Journal of Philosophy : –. Moyar, M. () “Vietnam: Historians at War,” US Army Research, paper , http://digitalcommons.unl.edu/usarmyresearch/. Munton, D., Welch, D. () The Cuban Missiles Crisis: A Concise History. Oxford: Oxford University Press. Nagel, T. () Mortal Questions, Cambridge: Cambridge University Press. Nelkin, D. K. () “SD, Motivation and the Desire to Believe,” Pacific Philosophical Quarterly,  (), –. Nozick, R. () Anarchy, State and Utopia, New York: Basic Books. () “On Austrian Methodology,” Synthese, : –. O’Hagan, E. () “Self-Knowledge and Moral Stupidity,” Ratio, : –. O’Reilly, K. O. () “Perceiving Rogue States: The Use of the Rogue State Concept by US Foreign Policy Elites,” Foreign Policy Analysis, : –. Orman, J. M. 
(), Presidential Secrecy and Deception, Westport, CT: Greenwood Press. Owen, D. () The Hubris Syndrome: Bush, Blair and the Intoxication of Power, London: Politico. Packer, G. () The Assassin’s Gate: America in Iraq, London: Faber & Faber. Paluch, S. () SD, Inquiry : –. Paterson, T., ed. () Kennedy’s Quest for Victory: American Foreign Policy –, Oxford: Oxford University Press. Parry, R. () Secrecy and Privilege: The Rise of the Bush Dynasty from Watergate to Iraq, Arlington, VA: The Media Consortium. Pears, D. (), Motivated Irrationality, Oxford: Oxford University Press.




() “The Goals and Strategies of SD,” in J. Elster, ed., The Multiple Self, pp. –.
() “Self-Deceptive Belief-Formation,” Synthese, : –.
Pfiffner, J. P. () “Intelligence and Decision Making before the War with Iraq,” in G. C. Edwards, D. S. King, eds., The Polarized Presidency, pp. –.
The Pentagon Papers (), London: Routledge & Kegan Paul.
Petrini, P. () “What Does the Self-Deceiver Want?” Humana Mente, : –.
() L'autoinganno: Che cos'è e come funziona, Bari-Roma: Laterza.
Piattelli-Palmarini, M. () Inevitable Illusions: How Mistakes of Reason Rule Our Minds, Hoboken, NJ: John Wiley.
Pillar, P. () Intelligence and US Foreign Policy: Iraq, 9/11, and Misguided Reform, New York: Columbia University Press.
Pinker, S. () The Blank Slate, New York: Viking.
Porter, G. () Perils of Dominance: Imbalance of Power and the Road to War in Vietnam, Berkeley: University of California Press.
Prados, J. () “The Zen of Escalation: Containment and Commitment in Southeast Asia,” in L. C. Gardner, T. Gittinger, eds., Vietnam: The Early Decisions, pp. –.
Ramachandran, V. S. () “The Evolutionary Biology of SD, Laughter, Dreaming and Depression: Some Clues from Anosognosia,” Medical Hypotheses, : –.
Record, J. () Wanting War, Washington, DC: Potomac Books.
Rey, G. () “Toward a Computational Account of SD and Akrasia,” in B. P. McLaughlin, A. O. Rorty, eds., Perspectives on SD, pp. –.
Ricks, T. () Fiasco: The American Military Adventure in Iraq, London: Penguin Books.
Ritchie, N., Rogers, P. () The Political Road to War with Iraq, London: Routledge.
Rorty, A. O. () “The Deceptive Self: Liars, Layers and Lairs,” in B. P. McLaughlin, A. O. Rorty, eds., Perspectives on SD, pp. –.
() “Belief and SD,” Inquiry, : –.
() “User-Friendly SD,” in R. T. Ames, W. Dissanayake, eds., Self and Deception, pp. –.
Runciman, D. () The Politics of Good Intentions, Princeton: Princeton University Press.
() Political Hypocrisy: The Mask of Power from Hobbes to Orwell, Princeton: Princeton University Press.
Sackeim, H. () “SD: A Synthesis,” in J. Lockard, D. Paulhus, eds., SD: An Adaptive Mechanism, pp. –.
Solomon, R. () “Self, Deception and SD in Philosophy,” in R. T. Ames, W. Dissanayake, eds., Self and Deception, pp. –.
Scanlon, T. () What We Owe to Each Other, Cambridge, MA: Harvard University Press.




Schlesinger, A., Jr. () A Thousand Days: John F. Kennedy at the White House, Cambridge, MA: Riverside Press.
() The Bitter Heritage, London: Andre Deutsch.
Scott-Kakures, D. () “SD and Internal Irrationality,” Philosophy and Phenomenological Research, : –.
() “Motivated Believing: Wishful and Unwelcome,” Noûs, : –.
Sher, G. () “Out of Control,” Ethics, : –.
() Who Knew? Responsibility without Awareness, Oxford: Oxford University Press.
Secunda, E., Moran, T. () Selling War to America, Westport, CT: Praeger Security International.
Smith, A. () “Control, Responsibility and Moral Assessment,” Philosophical Studies, : –.
Snyder, J. () Myths of Empire: Domestic Politics and International Ambition, Ithaca, NY: Cornell University Press.
Steinberg, B. () Shame and Humiliation: Presidential Decision Making on Vietnam, Pittsburgh: University of Pittsburgh Press.
Sultana, M. () SD and Akrasia, Rome: Analecta Gregoriana, Editrice Pontificia Gregoriana.
Suskind, R. () The Price of Loyalty: George W. Bush, the White House, and the Education of Paul O'Neill, New York: Simon & Schuster.
Sunstein, C. () “Terrorism and Probability Neglect,” Journal of Risk and Uncertainty, : –.
Szabó-Gendler, T. () “SD as Pretense,” Philosophical Perspectives, : –.
Szabados, B. () “The Self, Its Passions and SD,” in M. Martin, ed., SD and Self-Understanding, pp. –.
Talbott, W. J. () “Intentional SD in a Single, Coherent Self,” Philosophy and Phenomenological Research, : –.
Taylor, S. () Positive Illusions, New York: Basic Books.
Tenbrunsel, A. E., Messick, D. M. () “Ethical Fading: The Role of Self-Deception in Unethical Behavior,” Social Justice Research, : –.
Trevor Thrall, A. () “Framing Iraq: Threat Inflation in the Marketplace of Values,” in A. Trevor Thrall, J. K. Cramer, eds., American Foreign Policy and the Politics of Fear, pp. –.
Trevor Thrall, A., Cramer, J. K., eds. () American Foreign Policy and the Politics of Fear: Threat Inflation since 9/11, London: Routledge.
Trivers, R. () Deceit and SD: Fooling Yourself the Better to Fool Others, London: Penguin Books.
() Social Evolution, Menlo Park, CA: Benjamin/Cummings.
Trope, Y., Liberman, A. () “Social Hypothesis Testing: Cognitive and Motivational Mechanisms,” in E. Higgins, A. Kruglanski, eds., Social Psychology: Handbook of Basic Principles, New York: Guilford Press, pp. –.




Vaillant, G. E. () The Wisdom of the Ego, Cambridge, MA: Harvard University Press.
Van Fraassen, B. () “The Peculiar Effects of Love and Desire,” in B. P. McLaughlin, A. O. Rorty, eds., Perspectives on SD, pp. –.
Vandenbroucke, L. (a) “The ‘Confession’ of Allen Dulles: New Evidence on the Bay of Pigs,” Diplomatic History, : –.
(b) “Anatomy of a Failure: The Decision to Land at the Bay of Pigs,” Political Science Quarterly, : –.
Van Leeuwen, D. S. N. () “The Product of SD,” Erkenntnis, : –.
() “Finite Rational Self-Deceivers,” Philosophical Studies, : –.
Von Hippel, W., Trivers, R. () “The Evolution and Psychology of SD,” Behavioral and Brain Sciences, : –.
Walzer, M. () “Political Action: The Problem of Dirty Hands,” Philosophy and Public Affairs, : –.
Weber, M. [] () “Politics as a Vocation,” in From Max Weber: Essays in Sociology, trans. and ed. H. H. Gerth, C. Wright Mills, Oxford: Oxford University Press.
Wheeler, M. () Anatomy of Deceit, Berkeley, CA: Vaster Books.
Wentura, D., Greve, W. () “Who Wants to Be Erudite? Everyone! Evidence for Automatic Adaptations of Trait Definition,” Social Cognition, : –.
() “Evidence for Self-Defensive Processes by Using a Sentence Priming Task,” Self and Identity, : –.
White, M. J. () The Cuban Missile Crisis, London: Macmillan.
Wise, D. () The Politics of Lying, New York: Random House.
Wolfowitz, P. () Interview, Vanity Fair, May, www.defenselink.mil/transcripts/transcript.aspx?transcriptid=.
Woodward, B. () Plan of Attack: The Definitive Account of the Decision to Invade Iraq, New York: Simon & Schuster.
() State of Denial, London: Pocket Books.

Index

9/11
Aesop
Afghanistan
agency
Ainslie, George
akrasia
Al Qaeda
Alterman, Eric
aluminum tubes
Arendt, Hannah
Aristotle
avowal of belief
axis of evil
bad faith
bad thinking
Baghdad
Ball, George
Batista, Fulgencio
Bay of Pigs
belief formation
Berle, Adolf A.
Berlusconi, Silvio
Bin Laden, Osama
Bissell, Richard
Blair, Tony
blame
Blight, James
Bolton, John
Bonini, Carlo
Bowles, Chester
British Intelligence
bullshit
Bundy, McGeorge
Burba, Elisabetta
Burchinal, David
Bush, G. W.

Cardona, Miró
Castro, Fidel
causal view-account-model
causality
character-building
Cheney, Richard
Chilcot, John
CIA (Central Intelligence Agency)
Civil Rights Act
Clarke, Richard
Clifford, William
Cline, Ray
cognitive dissonance
cold bias
Cold War






confirmatory evidence-bias
conspiracy
contrary evidence
control condition
costs of inaccuracy
covering up
Cuba
Cuban exiles
Cuban Missile Crisis
Curveball
D'Avanzo, Giuseppe
Dearlove, Richard
deception of the public
decision making
decision-making processes
defensive avoidance
democracy-normative theory
democratic legitimacy
deniability
deniable war
desires
DeSoto patrol
deterrence
devil's advocate
DeWeese-Boyd, Ian
DIA (Defense Intelligence Agency)
Diem, Ngo Dinh
dirty hands
dissenters, dissent, dissenting
Dobrynin, Anatoly
Downing Street Memo
Drogin, Bob
Drumheller, Tyler

Duelfer, Charles
Dulles, Allen
Duran, Alfredo
Eisenhower, Dwight D.
Ellsberg, Daniel
Elster, Jon
emotions
escalation
false beliefs
false consciousness
Feith, Douglas
France
Frankfurt, Harry
Freedman, Lawrence
Friedman, Alvin
Fulbright, J. William
Germany
Giap, Vo Nguyen
Gilpatric, Roswell
Goldwater, Barry
Goleman, Daniel
Gordon, Michael
Goulden, Joseph
Great Britain
Greve, Werner
groupthink
Haass, Richard
Haig, Alexander
Hanoi
Herrick, John
Higgins, Trumbull
Holmes, Stephen
homuncularism
Hon Me
Hon Ngu
Honolulu
humanitarian intervention
Hussein, Saddam
hypothesis testing

ideology
illusion of invincibility
illusion of invulnerability
illusions
inevitability
intentional view-account-model
intentionality
International Atomic Energy Agency (IAEA)
invisible hand
Iran
Iraq
James, Mattis
Janis, Irving
Jervis, Robert
Johnson, Lyndon
Joint Chiefs
just-war doctrine
Kaufmann, Chaim
Kay, David
Kennedy, John (President)
Kennedy, Robert
Kerr, Richard
Khanh, Nguyen
Khrushchev, Nikita
Kissinger, Henry
Kolko, Gabriel
Kornbluh, Peter



lay hypothesis testing (HT)
Le Monde
Levy, Neil
Libby, I. Lewis (“Scooter”)
lies-lying
Lippmann, Walter
lying to oneself
Lynch, Kevin
Machiavelli, Niccolò
Maddox
make believe
Mallow, Davis E.
Mark, James
Marx, Karl
MBL (Mobile Biological Laboratory)
McNamara, Robert
Mele, Alfred
Michel, Christoph
Middle East
Miller, Judith
Minh, Dương Văn
mistakes
Moïse, Edwin E.
moral responsibility
motivated irrationality
motivation
National Security Council
NATO
Nelkin, Dana
New York Times
Newbold, Gregory S.
Newen, Albert




NIE (National Intelligence Estimate)
Niger
North Korea
Nozick, Robert
Nuclear Missile Crisis
Office of Special Plans (OSP)
Ogier, Herbert L.
O'Neill, Paul
OPLAN A
other-deception
paradoxes: static, dynamic
Park, Patrick
Pascal, Blaise
Pentagon
people's SD
Pillar, Paul
Plame, Valerie
Plato
political deception
political myths
Porter, Gareth
positive thinking
Powell, Colin
precommitment
pretense
pretext
preventive measures-strategies
probability neglect
probability neglect bias
publicity
Quintero, Rafael
realism
reason-responsiveness
reduction of cognitive dissonance
responsibility: moral-political
Rice, Condoleezza

Ricks, Thomas
rogue states
Rohn, Douglas
Rumsfeld, Donald
Rusk, Dean
Saratoga
Sartre, Jean-Paul
Schlesinger, Arthur, Jr.
secrecy
selectivity
self-defense
self-esteem
self-perception
self-respect
selling reasons for war
Sharp, U. S. Grant
Sher, George
SISMI (Servizio per le Informazioni e la Sicurezza Militare)
skepticism
slippery slope
social constructivism
Sorensen, Theodore
sour grapes
Soviet Union
specificity of SD
Stevenson, Adlai
Stockdale, James
straight SD cases
stubborn beliefs
Sullivan, William
Sylvester, Arthur
Taliban
Tenet, George
terrorism
Ticonderoga
Tojo, Hideki
Tonkin, Gulf of
Trollope ploy
Turner Joy (destroyer)
Twin Towers

twisted SD cases
Ulysses
UN Security Council
unconscious
Vandenbroucke, Lucien
Vanity Fair
Vietcong
Vietnam
war of choice
Watergate
Wayne, John
Wentura, Dirk
Wheeler, Earle G.
Wheeler, Marcy
White, Mark
Wilson, Joseph



wish, , –, –, , –, , , –, , –, , –, , , –, , –, , , , , –, –, –, –, , , , –, , , , , , –, , , , –, , , –, , , , –, , – wish to beleive, –, ,  wishful thinking, , , , , , , , , , , , , , , , , , , ,  WMD (weapons of mass distruction), , , , , , , , , , –, , , , –, , –, –, –, , , –, ,  Wolfowitz, Paul, , , , –,  Woodward, Bob, , , , , –,  World Trade Center (Twin Towers), , ,  worst-case scenario, –, –, –, –, ,  yellowcake, , , –, , 