Why?: The Philosophy Behind the Question (ISBN 9781503635715)

A philosopher explores the many dimensions of a beguilingly simple question. Why did triceratops have horns? Why did World War I happen?


English Pages 248 [318] Year 2023



Table of contents:
Series Foreword
One. Grammar
1 Why Is Oscar Pistorius Guilty of Murder?
2 Why Do Things Fall When We Let Them Go?
3 Why Did Mickey Mouse Open the Fridge?
Two. Fusions
4 Why Do Triceratops Have Horns?
5 Why Did World War I Happen?
6 Why Did Napoleon Lose at Waterloo?
Three. Limits
7 Why Were There American Soldiers on the 15:17 Train from Amsterdam to Paris on August 21, 2015?
8 Why Does Romeo Love Juliet?
9 Why Am I Me?
Conclusion Why “Why”?




Advance praise for Why?

"This is an engaging, creative, and masterful exploration of human experience, stemming from the seemingly innocent question 'why?' Huneman expertly draws upon an exceptionally rich array of sources—from the philosophical to the everyday—brought to life through illuminating examples. Even if we never reach an ultimate answer to life's most pressing query, this lucidly written book not only evokes its necessity, but transforms the way we will forever approach the question." —Anthony J. Steinbock, author of Knowing by Heart

"Ranging with ease and erudition across both contemporary Anglo-American analytic and so-called Continental philosophies of science and the history of Western philosophy, Huneman argues that the plurality of questions expressed by 'why?' nevertheless share an underlying unity. A stimulating text addressed to professional philosophers as well as readers seeking to deepen their understanding of philosophy's relevance to common concerns." —Helen Longino, author of Studying Human Behavior

"With wry humor, engaging examples, and indefatigable curiosity, Huneman takes the primeval question 'why?' as a launchpad to explore topics throughout the philosophy of science and beyond—evidence, cause, chance, natural selection, contingency and necessity, and in the end, love and the self." —Michael Strevens, author of The Knowledge Machine

"This work offers a vast panorama that is both deeply researched and pleasant to read." —Sylvain Guilbaud, La Recherche

"This is a particularly well-crafted introduction to the philosophy of science, one that combines sharpness and quiet erudition." —Pascal Engel, En attendant Nadeau

SQUARE ONE
First-Order Questions in the Humanities
Series Editor: PAUL A. KOTTMAN

Why?
The Philosophy Behind the Question

PHILIPPE HUNEMAN

Translated by Adam Hocker

Stanford University Press
Stanford, California

Stanford University Press
Stanford, California

English translation © 2023 by the Board of Trustees of the Leland Stanford Junior University. All rights reserved.

The first version of Why? The Philosophy Behind the Question was originally published in French in 2020 under the title Pourquoi? Une question pour découvrir le monde © Éditions Autrement, department of Flammarion, 2020.

No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or in any information storage or retrieval system without the prior written permission of Stanford University Press.

Printed in the United States of America on acid-free, archival-quality paper

Library of Congress Cataloging-in-Publication Data
Names: Huneman, Philippe, author.
Title: Why? : the philosophy behind the question / Philippe Huneman ; translated by Adam Hocker.
Other titles: Pourquoi? English | Square one (Series)
Description: Stanford, California : Stanford University Press, 2023. | Series: Square one | Originally published in French under the title: Pourquoi? | Includes bibliographical references.
Identifiers: LCCN 2022046040 (print) | LCCN 2022046041 (ebook) | ISBN 9781503628908 (cloth) | ISBN 9781503635715 (epub)
Subjects: LCSH: Questioning. | Metaphysics. | Knowledge, Theory of.
Classification: LCC BF463.Q47 H862 2023 (print) | LCC BF463.Q47 (ebook) | DDC 121—dc23/eng/20230222
LC record available at https://lccn.loc.gov/2022046040
LC ebook record available at https://lccn.loc.gov/2022046041

Cover design and art: David Drummond
Typeset by Elliott Beard in Adobe Caslon Pro 10.5/15.5

One does not see anything, but what it matters so little to see,
Nothing, and yet one shudders. Why?
—Henri Michaux, "Je vous écris d'un pays lointain"

I see these frightening spaces in the universe that lock me in, and I find myself tied to a corner of this wide extension, without knowing why I'm put in this place rather than in another, and why the little time given to me to live is ascribed to this point rather than to another one in all the eternity that came before me and the one that comes after me.
—Pascal, Pensées, 427/194

Why are not all organic beings blended together in an inextricable chaos?
—Charles Darwin, The Origin of Species



Contents

Series Foreword by Paul Kottman xi
Introduction 1

One. Grammar
1 Why Is Oscar Pistorius Guilty of Murder? 23
2 Why Do Things Fall When We Let Them Go? 49
3 Why Did Mickey Mouse Open the Fridge? 87

Two. Fusions
4 Why Do Triceratops Have Horns? 119
5 Why Did World War I Happen? 149
6 Why Did Napoleon Lose at Waterloo? 159

Three. Limits
7 Why Were There American Soldiers on the 15:17 Train from Amsterdam to Paris on August 21, 2015?
8 Why Does Romeo Love Juliet? 199
9 Why Am I Me? 215
Conclusion. Why "Why"? 235

Acknowledgments 257
Notes 259


Series Foreword
Paul A. Kottman

As anyone who has spent time around young children knows, "why?" is a question that gets posed with dogged persistence. In this doggedness, says Philippe Huneman, children "learn [a] kind of grammar for why." By grammar, Huneman does not just mean what those children will eventually learn to call "grammar" as they make their way through school. Huneman also means "a necessity . . . between logical necessity . . . and pure linguistic convention." In other words, when children formulate why? questions, the formulation of the sentence itself exposes and teaches a logical requirement, one that tells us something important about what we are doing when we ask why? Huneman puts the point this way: "Before even asking what the legitimate form of response to 'why?' could be, we presuppose that there is a response; and thus, that it is possible to ask why." For instance, in learning how to ask why? questions, we simultaneously learn about different kinds of objects, different kinds of response possibilities, and various presuppositions that govern possible answers as well as possible misunderstandings. After all, a well-known issue for philosophers and children alike is how to make sense of what people do, and whether or how this might be distinct from explaining things that happen to us or that happen in our world. As Huneman sees it, following Kant, such different kinds of questions require distinct kinds of explanation.

Huneman focuses in this book on three different kinds of why? questions. "When asking 'why?'," he says, "we can expect in response a cause, or a reason for believing something, or a motive and thus a reason for acting (a goal, in particular). These three types of 'because' are perhaps not independent but they are distinct." By underlining this distinctness without independence, Huneman wants to note that events like a fire breaking out, on the one hand, and actions like remodeling a kitchen, on the other, can be said to have different logical forms. In the case of a fire, we might look for both a cause and a reason—not independently, but distinctly. Someone may have started the fire purposely such that they can give a reason for having done so, or the fire may have been caused accidentally by a faulty wire. In the case of the remodeled kitchen, we feel ourselves entitled to ask the renovator why a specific color scheme was chosen, and we can expect at least a minimally rational answer in return. The answer you get to why? questions can be a reason of some kind, or it might be a cause. Of course, put like this, all sorts of complications are elided. "The deeper we delve in search of causes the more of them we find," wrote Leo Tolstoy in War and Peace (1869), "and each separate cause or whole series of causes appears to us equally valid and equally false by its insignificance compared to the magnitude of the events, and by its impotence—apart from the cooperation of all the other coincident causes—to occasion the event" (book 9, chapter 1).
How, then, to treat different questions, like "Why did Napoleon lose at Waterloo?" Huneman's book elegantly teases out the implications and stakes of these complications, through a series of examples like these, and through engagements with philosophical treatments of these issues. In reading Why? I was struck by something about the grammar of why? questions as Huneman treats them.


As young children make vivid, we not only "presuppose that there is a response" when we ask "why?" We also presuppose that someone, or some others, can or will give this response. That is, we presuppose not only that an answer exists, or that an answer belongs to the realm of the known, or to an impersonal space of reasons—we also make a vocative appeal for an answer from someone who we expect to be somehow responsible for what they are saying. We make an appeal to others who, we suppose, are somehow accountable for what they claim, or know, or believe, or are motivated by. Why? questions not only presuppose a grammatical connection between the question and an answer; such questions and answers also evoke the presupposition of a dialogue between participants who uphold a space of reasons, and who can somehow be held to account for whatever answers they give. In an age when young children can be witnessed addressing certain why? questions to the "artificial intelligence" at work in their devices, this presupposition might be hard to detect, and might invite all kinds of skepticism. Is there truly this kind of responsibility-taking dialogue that, I was just suggesting, we presuppose or evoke when we ask why? To get the stakes of this question into view, one might ask further: Can we address, or even imagine addressing, all our why? questions to the collected wisdom of the archives, to the sedimented knowledge of the libraries, or to the artificial intelligence said to be at work on the internet—that is, to no one in particular? Can we collectively sustain the social practice of seeking answers for why? questions without supposing and expecting that others will comport themselves responsibly in light of whatever answers we might receive?






The road is blocked. A tree lies stretched across it. The weather is fine with no trace of a storm, but the tree has just fallen, crushing a gray car in its path whose occupants are able to extract themselves, with difficulty but unharmed. The body of the car is completely wrecked, its windows shattered. Firefighters, rapidly on the scene, comfort the shaken victims, keeping the onlookers at bay while other workers, who also have arrived with remarkable haste, begin to cut the tree into pieces to remove it as quickly as possible. All around there is the ballet of tourist smartphones and wandering Parisians immortalizing the event, sharing it on multiple more or less well-known platforms, hoping to gather a maximum of likes and retweets and expressions of interest coming from other individuals who are amazed that their "friends" could have witnessed such an extraordinary event—as if they had accomplished something difficult like holding their breath for six minutes, climbing a particularly formidable peak, or writing a poem without vowels. And I was driving just a few meters behind that car. Without thinking, I asked myself: "Why him? Why did the tree fall on his car rather than on mine or someone else's?"


Of course, there are responses: as a specialist in natural history who just happened to be there told me, the tree was completely eaten through. Rot could be clearly seen in the now visible interior of the trunk; and in the end, only a slight gust of wind would have been needed to topple it—or it could have toppled on its own once the parasites had finished their destructive work. Certainly, but why at this moment, while the gray Passat with the Dutch driver was passing beneath it? In one sense, we know why. I can go through the chain of causes and inventory the circumstances: on the one hand, dryness, wind, internal infiltration of the tree; and on the other, the Dutchman's schedule—his plans which made it so that he happened to be on this avenue at this precise minute. The explanation is long and gets lost in countless fastidious details but it is available. Why then was I frustrated with this impression that my question—"Why him instead of me?"—did not have a response? Without a doubt, it is because there is no why for this precise question; or rather, the "why" that would satisfy me has no meaning here. Does that mean there are many whys, then? Or perhaps things without a why?

Starting from three or four years old, not long after the moment when they learn how to speak with clarity and coherence, it is well known that children never stop asking "why?" It is true that they gain precious knowledge about the world in this way. But by asking these kinds of questions, they also learn how the question "why?" works. When my five-year-old daughter asks "Why is it Sunday?", she learns that one does not or should not ask such a question. Perhaps the question "Why him?" regarding the tree accident and the unfortunate Dutchman is of the same order.
But my daughter also learns through these questions that certain responses to the question "why?" only have a limited validity: "Why does water boil?" cannot be explained with "Because it wants to boil," while "Why do dogs drink water?" can potentially receive the response "Because they get thirsty"—or, said differently, "Because they want to drink." As she grows, my daughter will learn science, which will tell her why this and why that. In this way, the sciences develop a system of responses to a certain type of question "why?" But they do not include precisely those responses that would satisfy my desire to know why the tree fell on the Dutchman's car and not mine—hence my frustrating feeling that it was pure chance, a feeling that for others, on the contrary, could mean that it was the Dutchman's "destiny." But why in the end is it so frustrating to recognize that the tree fell on him by chance? This small word "why" seems to weave strange links between the nonknowledge of children, the science of adults, and the idea of destiny. But what are these links then?

Asking Why

Language has interrogative words that spread out the sizable dimensions in which we can understand an event, a fact, an action, or a thing: What? Where? When? How many? How? and Why? Let's imagine something like a gnu. A complete knowledge of the gnu clearly entails being able to respond to all these questions. In Categories, Aristotle explains that the first five questions determine the broad articulations of a living being or something that exists: a thing has a nature or an essence ("what is a gnu?"), encompasses temporal and spatial dimensions ("where is the gnu?" "when will we find this gnu?"), quantity ("how many gnus are there?"), and a certain mode of existence or appearance ("this gnu was fast, was shimmering, etc."). These dimensions are not always independent: the "how" will include measurements of acceleration, for example. For Aristotle, these "categories" are at the same time the major articulations of being and the sizable divisions within language—namely, that through which language has the capacity to really understand what is there. In the later philosophical tradition, one has sometimes questioned this homogeneity of language and being but—without entirely subscribing to it—we can already imagine that such a systematicity of language tells us something about being in how it is susceptible to being said or brought into language. Among these large questions, "why" is perhaps the least understandable.
Aristotle, moreover, does not include it in his Categories but evokes it in what we call Physics—his treatise on things in movement that he calls natural things ("physical" comes from the Greek phusis, nature). The Categories correspond to dimensions of being that one can easily name: substance (the response to "What?"), place (the response to "Where?"), time (the response to "When?"), and so on. The response to "Why?" is not, however, unequivocal.

An example will help us to see how "why" is so delicate. As an event, let's look at the victory of the French team at the 2018 World Cup. It is easy to respond to the five questions: What? It was the final of a global soccer competition. Where? In Moscow. When? July 15, 2018. How many? Some figures are pertinent: a 90-minute match between two teams of 11 players in front of 60,000 people. How? By a score of 4–2. And Why? "Because France scored four goals and Croatia two." Certainly; but is this not another more precise way of simply saying that France won? What we have now allows me to legitimately say that France is the world champion, but not what made it so that France was world champion. Do we not rather want to know the reason why France won? It could be a question of identifying who were the scorers, who passed them the ball, etc. But we could also think beyond goals to distinct strategies in conflict with each other: France was more effective in defense, Croatia less effective in attack. Perhaps previous matches played a role in terms of the fatigue of the players, their motivation, etc. Croatia had one less rest day than France after their respective semifinals and had gone into extra time in each of its direct elimination matches. And outside of the World Cup itself, we could highlight the differences in soccer traditions between France and Croatia—their experiences in major competitions, the failure of the French in the Euro Final two years previously, etc.
Barely broached, and even with such a simple subject, the question "why?" leads us into a tangle of explanations that are reminiscent of scientific controversies of the sort: Is it the recent usage of pesticides that is responsible for the declining number of birds, or rather the expansion of agriculture that is destroying their habitats, or perhaps climate change that is disturbing their migratory and reproductive habits, or one of the many combinations thereof?


At the same time, this question "why?" traverses the expansive domains of speech and habit in which we all live together. In front of a certain animal, a child or even an adult may spontaneously ask: Why does a zebra have stripes? Why does a duck have webbed feet? "Why?" is also a question for historians: Why did war break out in 1940? Why did Christianity conquer Europe? Of course, scientists make abundant usage of "why"—whether it is a question of simple things, like falling bodies; or of very sophisticated things, like the physicist who asks, "Why is the universe cooling?" But it is also a daily question: "Why do you say that?" we regularly exclaim when someone announces an unknown piece of information. This is what we say to a child who returns from school stating that "Marc is mean." The varied scale of human feelings thus easily gives rise to such questions: "Why does Edith love Marcel?" we think, especially if Edith and Marcel seem badly suited for each other. But "why?" is also a doubtful question, that of an investigator. One thinks of television detective Columbo, who, after having questioned a business owner or a gallerist (always someone important) whom we know is the killer, puts on his beige raincoat as though to leave (assuming he ever took it off), and then turns around to ask the fatal question: "But Mr. So-and-So, why did you say that you were playing golf in Santa Monica when we know you were having dinner on Sunset Boulevard at the same time?" And these conspiracy theories that have been talked about so much recently suggest a generalized version of this detective-like doubt: "Why did the Charlie Hebdo killers forget their ID cards in the car?" And last, the occurrences of this question "why?" go from the trivial to the vertiginous, like the "Why not me?" with which I opened this book; the "My God, why hast thou forsaken me?" that Jesus could not help but utter during the crucifixion in the Gospels; or in a still more metaphysical sense, "Why am I me instead of another?"

Conceived to unravel the complexities that envelop the usage of the question "why?", this book will consider in detail some of the questions that I have just mentioned. The book proposes a geography of thought, in the distant tradition of Kant—the philosopher whose university career was in great part devoted to teaching geography, and whose language is haunted by geographical metaphor (borders, territories, domains, maps, etc.).

A Single Word for Many Questions

Regarding this singular question—why?—metaphysical reflection wavers between two opposed positions. The first suggests that the question "why?" is so indispensable that it conditions the possibility of having experiences. Without the capacity to ask "why X?" and to respond "Because Y," experience would just be a web of disjointed events with no link between them. And from a practical point of view, the simplest action demands an identification of the means for the ends, of choosing, for example, a type of transport for a trip; yet this relationship involves an implicitly formulated question: "Why did I (or why must I) do something?", with its response, "Because I wanted something else." And, independent of us, events that occur only take on meaning in light of this question "why?" The small Pacific islands that will soon be submerged by the ocean owe their annihilation to the climate change generated by human industrial activity from the past two hundred years: I can thus respond to the question "Why will they vanish?" Starting from this rather dramatic point, my worldview acquires a certain meaning—one that I can also consider in light of my own goals and desires, while reflecting on what we could do to avoid the destruction of other islands.

A diametrically opposed position has been argued by philosophers according to whom the question "why?" is merely the residue of a past pre-rationalist age, an antiscientific question that is thereby opposed to true rationality.
Pierre Duhem—the physicist, historian of physics, and author of a major book about the philosophy of science in 1906 entitled La Théorie physique: Son objet, sa structure (The Aim and Structure of Physical Theory)—contested the idea that science had to ask why things are as they are. This question, as with the desire to explain in a more general sense, would pertain more to metaphysics than to science, which tries to describe how phenomena occur and according to what regularities. And positivists like Auguste Comte pre-dated Duhem in relegating questions concerning "why" to the domain of what they called "metaphysics"—namely, wild speculation about the first principles—as if a newly ripened mind should be diverted more toward questions of the nature of "How does this work?"1

In this book, I will not align myself with this latter, skeptical argument; the subsistence of the question "why?" as a crucial vector for the interpretation of our experience and the justification of our actions seems to me to be a fundamental fact that should be understood. I am not in bad company: Aristotle, as was mentioned above, dedicates a major work—Physics—to the explication of this question; Kant, in his Critique of Pure Reason, even proposes a justification of the possibility of asking "why?" about each event. Indeed, in his language, what he calls the "second analogy of experience" in the Critique of Pure Reason stipulates that "everything that happens presupposes something that it follows in accordance with a rule." This principle, which governs the use of the category of causality, is justified by Kant in this text—taking over in some way, within the framework of his so-called transcendental philosophy, from the more general idea that "everything has a reason," which Leibniz was among the first to formulate.2 About a century before Kant, Leibniz named this idea the "principle of sufficient reason" and saw it as a fundamental ontological and epistemological principle according to which "we hold that there can be no fact real or existing, no statement true, unless there be a sufficient reason, why it should be so and not otherwise, although these reasons usually cannot be known by us" (Monadology, §32).
This principle legitimized our always asking why, even though in a well-known text Schopenhauer wondered why such an obvious idea waited for centuries to be explicitly stated by a philosopher.3 I will thus take the question "why?" seriously, because it plays a decisive role in the manner in which language can make sense of the human experience. And above all, it should be asked if the object of this "why" is unique, since this word is everywhere in our speech: it questions things and events—"Why do planets follow elliptical orbits?"—but also actions—"Why should I run away?"; it lastly concerns beliefs, as I can always ask "Why do you think that?" to someone expressing an opinion.4 Thus, "why?" turns out to be a central question not only in science but also in logic (the justification of beliefs) and within what one could call the language of action. This plurality of "why?" suggests that the types of legitimate contexts and of appropriate response formats are multiple.

Such a plurality immediately raises a debate about its reducibility to a single, more elementary notion. Philosophers call this kind of debate "monism vs. pluralism": Is there an ultimate object toward which the question "why?" aims ("monism")? Or are all of its meanings, in different orders, fundamentally heterogeneous ("pluralism")? In other words, is the concept of "why"—to adopt Aristotle's words—"homonymous" (denoting that it bears only a nominal and arbitrary identity, like the verb "object" and the unrelated but similarly spelled noun "object") or "synonymous" (denoting that all meanings are truly connected and join together into one sole meaning)? This enigmatic multiplicity is one of the problems addressed in this book, which will propose that there exists a certain grammar behind "why?" according to which contexts and legitimate forms of response are organized. When asking "why?" we can expect in response a cause, or a reason for believing something, or a motive and thus a reason for acting (a goal, in particular). These three types of "because" are perhaps not independent but they are distinct.

Through their insistence on indiscriminately asking "why?", children between three and five years old learn precisely this kind of grammar for why. The particular responses to different "why?"s are perhaps less important for their cognitive development than the acquisition of a general sense of which "why?"s are pertinent and which are not.
Children learn through this which kind of response is called for by which kind of why-question: for example, they ultimately accept that if ocean waves always destroy the sand castles that they obstinately rebuild, this is not out of ill will but because the wind or a sea swell gives the waves a rhythm and amplitude that results in them regularly coming to lap at and then dismantle the fragile structures.

The "grammar" that I am talking about is not exactly that of grammarians. It encompasses a necessity that we will call intermediary between logical necessity—namely, the certitude of deduction—and pure linguistic convention, which strangely assigns the masculine gender to the French word véhicule (vehicle) and the feminine to voiture (car). But like proper grammar, it separates what can and cannot legitimately be said. I borrow this use of the concept of "grammar" from Wittgenstein, although I don't accept all of Wittgenstein's views; it suffices that this term establishes a set of constraints that are not logical, but that still govern our way of speaking and thinking.

The Possibility of "Why?"

Many philosophical controversies concern precisely the use of such a grammar. When, echoing the Cartesians of his time, Molière in the caustic play Le Malade imaginaire (The Imaginary Invalid) mocks how doctors turn to the "dormitive virtue" of opium, it is precisely a question of restricting the legitimate "because"s to a certain type of causes—in this case, antecedent mechanisms. Today, Molière appears to have been correct in mocking these ridiculous pedants, these doctors who are passionate about quoting Latin and incapable of examining a sick patient.
Historically, however, this reflects a major moment in modern science that we somewhat unimaginatively call in traditional history "the Scientific Revolution," which runs from Galileo to Descartes—the moment where the very idea of what constitutes a legitimate response to the question "why?" was changed.5 To the question "Why do stones fall when we let them go?" we could indeed respond, along with Aristotle: "Because they tend toward the ground, which is their natural place." After Galileo, Descartes, and Newton, this response is excluded: the Aristotelian tendency to fall appears to be made out of the same stuff as the Molièresque dormitive virtue—namely, a simple description of an effect, then portrayed as a trend or a power instead of a true explanation. Modern physics understands that the cosmos is inert, that nothing has any desire to go anywhere; this is even the first principle of Newton's Philosophiae Naturalis Principia Mathematica (1687), which named it the "principle of inertia." Galileo's inert stone simply follows the law of falling bodies, awaiting Newton to demonstrate that it is subject to Earth's gravity.

Debating the legitimate form of response to the question "why?"—as Galileo, Descartes, Newton, Leibniz, and their contemporaries did—forms a crucial philosophical challenge—crucial, in particular, in the sense that what can be accepted as a manner of making science depends on it. I do not intend to put forth a solution to these debates with this "grammar for why"; on the contrary, it is metaphorically more of a question of the space in which they can take place. In this sense, this book will examine what the question "why?" is actually asking so that a specific discussion about what response can be given to it is possible. However, if "why?" is undeniably a proper question for raising a philosophically intense interrogation, this is also because before even asking what the legitimate form of response to "why?" could be, we presuppose that there is a response; and thus, that it is possible to ask why. This may seem trivial or far-fetched, but even the possibility of "why?" is undeniably problematic: why should there even be an answer? Is it necessary for the world to be of a certain nature in order for this question to be formulable and to receive responses?
Or does every possible or conceivable world allow for us to rightly hope that we can respond “Because of this” to any question in the form of “Why is that?” In other words, if there are rules for the usage of “why?” or something that resembles them, we must understand their reason; what makes them this and not that; on what they are based; and, if they are indeed grounded on something, whether this foundation is more than a simple convention.



The Limits of "Why?"

The mastery of this grammar is never perfect. Saying why something will happen is not the same as saying why I believe it will happen; and the responses to these questions should not be of the same order as those to the question of why I would act in such and such a way with regard to this thing. It does happen, though, that we unintentionally mistake one for the other, and the effects are far from insignificant. Identifying these confusions has long been part of the philosophical tradition. Just as one makes ordinary grammatical errors when one wants to say something despite the rules, we likewise—according to the type of grammar considered here—want to ask why, and to respond, where "why?" no longer makes sense. It is also these limits of "why?" that trace out its grammar. What does "limit" mean here? Good examples can be found in questions like "Why is a line a line?" or "Why does Tuesday come after Monday?", to which we can only respond tautologically: a line is a line; Tuesday is the day after Monday, hence Tuesday. We often have the feeling here that "Yes, it's obvious." Obviousness is thus one limit of why. The other limit was already illustrated by the anecdote of the driving Dutchman: "Why him?" Technically, in philosophy we speak of contingent events to designate what could have been or could not have been: the tree could also have fallen on a Nepalese tourist, a Sinhalese electrician, or myself. This is something akin to chance, yet chance is incompatible with "because"—unless "by chance" were a legitimate response to "why?", which would then quickly become the subject of metaphysical discussions. "Contingency," along with "obviousness" (or self-evidence), thus marks the "limits" of the territory that the grammar of why delineates.
There are then, in the end, two kinds of grammatical errors: mistaking one kind of "why?" for another (confusion of category) or asking "why?" where no "because" makes sense (transgression of limits). The equivalent occurs in ordinary grammar: a proposition like "Green numbers yearn for calm," which has no meaning because the terms involved pertain to heterogeneous ontological domains (colors/numbers), is different from a


proposition like "The green is or" (a favorite among logicians), which does not even respect syntax and thus does not really say anything.6 Asking why when no reason holds or is thinkable would be like stating "The green is or"; while mixing the types of "why?" produces something that has the appearance of meaning but ultimately has none, as in the sentence "Green numbers yearn for calm."7 These grammatical errors allow "idols" to emerge. This is a word used by Nietzsche in the title of his last work, Twilight of the Idols—a very short book conceived as a handbook for his philosophy. God, the ultimate cause, values in themselves: for Nietzsche, these are idols that some almost inevitable thought mechanism pushed Western humanity to forge and venerate—a mechanism that has a great deal to do with the question "why?" since it involves notions of cause and effect. The present book will also concern itself with what happens to thought when it mixes the different usages of "why?" and supposes that all "why?"s demand the same kind of response—or any response at all. Likewise, in a great number of cultures there exist myths that are supposed to answer why-questions: Why must we die? Why are there men and women? Why is there good and evil? Without even leaving European culture, the biblical myth of the Garden of Eden responds to the first question, the myth of united souls transcribed by Plato (in the Symposium) to the second, the Greek legend of Pandora's Box to the third. It is clearly difficult to know exactly what our ancestors believed when they recounted these myths, and very plausible that they did not believe in them the way one believes the sun will rise tomorrow, as the historian Paul Veyne has forcefully argued.8 However, it was through them that discussion about essential why-questions could take place.
Indeed, myths offer a response to "why?" where none is accessible—and where, I would add, it could be that the question "why?" makes no sense. The romanticized history of the birth of philosophy and science at the heart of occidental rationality often tells of how logos—reason, as a giver of reasons (logos is the same word)—was substituted for muthos—myth, a fiction transmitted with small variations from generation to generation.9


After having established the grammar of why, it will thus be necessary to address, very generally, those cases where we in some way fill the void left by an absence of why. Stated bluntly, the possibility of responding to why-questions establishes a coherent and compact universe of events and beliefs: "This because of that, because of that, etc." It is in this sense that David Hume called causality, which is a paradigmatic "because," the "cement of the universe."10 When this possibility comes up short, we have something like a hole; and a certain number of discursive, mental, or ideational constructions are deployed to fill this hole. Such holes are, however, crucial. In a famous text from 1950, Claude Lévi-Strauss tries to explain the word "mana," often associated with the practice of magic by sorcerers and shamans from Indigenous American tribes.11 This word defies translation in that it is used in so many diverse contexts and seems to gather together totally heterogeneous usages. Actually, according to Lévi-Strauss, mana marks the gaps between language and knowledge. Mana objects, animals, or people doubtlessly have something special about them, but in a way that is not necessarily always the same and whose existence is precisely established without being formally identified in its nature. As this thing was not at all known, the founder of the myth attempted to grasp it with the resources of language, even if language could articulate nothing consistent beyond qualifying it as mana. The thing thus became mana. This well-known analysis—which is contested and doubtlessly more suggestive than robust—sheds light on these "holes" and "idols." Where no why is available, "pseudo-becauses" appear and sometimes acquire a special, almost honorific status.
The notion of destiny illustrates this: as stories often show, it responds to questions like "Why was this person born?" The expression "to fulfill one's destiny" means that the agent accomplishes that for which she was put on Earth, while one can reasonably claim that "Why were you put on Earth?" is a meaningless question. Now, leaving religion, myths, and mana aside, the quest for "why?" manifests itself under other, more contemporary auspices. Everyone is familiar


with what are called "conspiracy theories." Believing, as certain conspiracy theorists do, that world history is directed by a malevolent secret society called the Illuminati—or, in a less colorful and more serious fashion, by groups such as the Jews or the Freemasons—often reflects a yearning for a unique theory that can give a reason for important or tragic world events. Thus, certain psychologists have shown that the tendency to believe conspiracy theories—such as "the moon landing was faked" or "the CIA planned 9/11"—is broadly correlated with what they call a "need for cognitive closure"; in other words, a loathing of these explanatory "holes" in the fabric of the world.12

Explaining or Narrating

When we say why, we are often explaining something. Today, science in all its forms represents the principal authority through which we gain access to explanations. Things that are explained can be very general (like planetary movement, vertebrate diversity, the transmission of biological traits) or singular (like the emergence of the universe, the extinction of Neanderthals, the current stagnation of life expectancy in the West). Explanations can take very different forms (equations and mathematical models, the search for antecedent causes, subsumption under laws, etc.). Besides science, there exists a more commonplace practice by which, without formal or scientific explanations, we respond to why-questions: storytelling. Indeed, narration—a historian's account, an article describing yesterday's protest in USA Today, novels, film scripts, or even me simply describing my childhood to someone I'm fond of—functions when it lays bare why agents act as they do and why events occur as they occur. Not in the sense that each line could be preceded with a "because" or a "why?", but simply that the multiplicity of events, words, and actions presented can only be understood on the condition that it is indicated why things happened as such rather than otherwise.
And just as explanation is protean in the sciences, narrations are almost infinitely diverse—particularly in how they indicate, in a direct, implicit, roundabout, or


fragmentary way, the reasons why what is narrated took place in the way that it did. Narrativity is thus a major subject for the philosopher interested in the grammar of why; and we will see how the "idols" I was talking about are formed in strong connection to narrativity.

The Program, and the Big Picture

This book proposes a journey through what we will call, to adopt one of Kant's favorite metaphors, the territory of why. It will explain its "grammar," and then will focus on its limits. And it will, at each moment, remain attentive to the confusions, mix-ups, and blurrings between why's grammatical registers and categories. As such, I consider this book quite Kantian in spirit; or, to use Kant's language, I see it as an exercise in critical philosophy. Indeed, the idea that there is a logic as well as a grammar of confusions governs what Kant called Transcendental Philosophy. He deemed the study of the norms and justifications of our ability to know and our faculties for action "analytic"; and the study of the confusions that naturally result from the exercise of these faculties, and which arise in a systematic and regular way, "dialectic."13 As a philosopher of science—of biology more precisely—I am decently acquainted with the issues related to why-questions in biology, and to explanations more generally. Most of my academic activity focuses on precise aspects of the set of ontological and epistemological issues raised by these questions. However, it is also true that philosophy is not an archipelago made of many disconnected islands—or, if it were, many bridges and walkable paths would connect them together.
I mean here that the philosophical reflections one can develop with respect to the question "why the eyes?" (a question thoroughly confronted by Darwin in the Origin of Species, as well as by many others before him) and the science pertaining to it connect to issues related to other aspects of "why"—in other words, of the activity of asking for and giving reasons. Thus, philosophy of science is part of an often unaddressed big picture—unaddressed both because it doesn't belong to the current habits of philosophers and because of its complexity. In


this book I want to trace the connections between those issues with which I'm more familiar (biology and the sciences) and some other aspects (for instance, language, reasons-for-action, and historical inquiries), and start portraying the big picture. This means that my arguments, although rooted in my practice as a philosopher of biology, will span a large spectrum that will include philosophy of science (of course), of language, of action, of history, and of metaphysics. This book is an attempt to reconstitute the big picture, or rather its articulations, in accordance with the views I defend as a philosopher of biology. Since I will be aiming at this big picture, the details of the accounts for which I am arguing will not be given; they are referred to in the endnotes for interested readers. The "territory of why" that I am intending to explore will allow me to portray this big picture. To achieve this task, I propose a journey in three stages. The first part will present the three major meanings of why, starting from the analysis of daily language, judiciary activity, and scientific thought. Second, I will describe the articulations between these meanings, their confusions, and their legitimate or illegitimate shifts in content, especially in biology and narrative activity. The third section will approach the above-mentioned "limits" of why—contingency, self-evidence—and the metaphysical idols that sometimes take their place. The book will conclude with the metaphysical question "why 'why?' " Therefore, the reader will successively journey through the philosophy of language, of science, of action, of history, and of metaphysics—not always in this order—but this gives a fair notion of the domains the book will explore.

Some Precisions, and a User's Manual

I would now like to talk briefly about a few books that a reader interested in my questions may have already seen. The sociologist Charles Tilly's Why?
is a very important work, and also very different from mine.14 It examines the types of discourse we produce when we want to explain "why?" to specific people in particular contexts, and is a major exercise in the sociology of discourse and narrativity. Tilly uses 9/11 as a focal event, investigating the


way several people—all differently situated within society and in relation to this event—talk about it and explain why they did what they did. He considers a fireman, a witness, etc. All their narratives are very different because they each obey different codes linked to their jobs, their habits of interrelation, etc.—which is also why they are so diverse, and why someone who is not in the same social space as the speaker may be unable to understand what is said. It is a sociology book and therefore does not ask the philosophical questions I am interested in here, although it can be very complementary to what I'm trying to do. On the other hand, two philosophy books about "why?" focus on specific questions that I treat much more briefly, as steps in the journey I am proposing. Bradford Skow's Reasons Why deals with explanations.15 It equates "giving a reason why" with "giving an explanation," and explicitly does not address the reasons-for-action that occupy me in part here. It is a metaphysics book, arguing that explanations are about showing or telling the ground of something. Thus, it is a book about ground, and especially the precise case of grounds that are causes. As such, it may converge with some of my analyses of causation in the first chapters, as well as with my reflection on "reasons" (since reasons are grounds). I will write more explicitly about this convergence in the conclusion. But as it is much more centered on metaphysics, it ultimately differs from my own project. There is also The Book of Why by Judea Pearl (and Dana Mackenzie),16 who elaborated an important theory of causal equations and structural modeling in his Causality17 that has been strongly influential in philosophy (Woodward, Hitchcock, etc.18) while also providing computer scientists with a grip on what is causal when they model data.
The book is largely about the connection between statistics and causation; it argues that under certain conditions statistical data indeed allow one to legitimately infer causes. Hence, statistics are not only descriptions; they also provide a "why" when they are correctly handled. This important work shares common themes with what I say about our access to causation through statistics; but as one sees, the project is different and focused on something other than my grammar of why.


In the present book, there will be a great deal of science and scientific explanations, as well as justifications for our beliefs and our actions. And beyond science and agency, I will approach this other mentioned domain: narrativity. Wherever fissures, oceans, or abysses appear in the territory through which we will be traveling, I will try to understand what fills them—sometimes bringing great benefits, and other times useless conflicts. As we can imagine, the task (or the territory) is enormous. This book does not aim toward systematicity. To draw out the metaphor, it aims more toward something akin to a travel journal than a Lonely Planet. I will describe the landscape in a fragmentary manner—with the hope, however, that the accumulation of these fragments will allow the reader to form a fair idea of the territory. "Fragmentary" is understood here in a very precise way. Each chapter will consider one kind of why-question. By demonstrating the logic of this question—its object, its response possibilities, its presuppositions—each chapter will try to present, in a case-by-case fashion, the grammar of "why?", its systematicity, and the type of logic that governs the possible misunderstandings that could arise around it. Because of its overall conception, this book is intended for several kinds of readers. Academic philosophers and graduate students in philosophy can of course read it as a contribution to questions concerning explanation, causation, narration, and reasons; they may be well aware of everything within their own subfield (e.g., ethics or philosophy of history) but can still learn something about the other ones (e.g., philosophy of science or of language).
In addition to traditional materials that can be of interest to students, the book also hopefully contains some original contributions to often discussed problems (among others: the conceptions of contingency, the causal structure of narrativity, definitional fragility, and a rethinking of the notions of chance and the "nature of something"); as well as some personal views on metametaphysics, interpretations of traditional themes such as the relation between causes and reasons as seen by classical rationalist thinkers, and a revised version of Nietzsche's critique of the "idols"


of reason. Readers who are well versed in philosophy will have no difficulty in recognizing that this book continues a critical tradition mostly represented by Kant and Wittgenstein in the history of philosophy, even though there is nothing here from either of those thinkers' doctrines. But in the end, I conceived this book as something nonphilosophers could read if they are interested in the practice of giving reasons or in some of the questions usually labeled "metaphysical." By this, I mean, for instance, scholars in the humanities, scientists, lawyers, physicians, and anyone curious about why "why?" is such an important question. The endnotes are of interest to academic philosophers, providing details about the literature on the topics I address as well as on my own views, indicating more technical questions, and answering certain objections an academic would be eager to raise; as such, other readers can skip them entirely. The reader who is not an academic philosopher can also skip the more technical sections that engage some of the current debates among philosophers and develop some of my accounts in a more formal way. Not reading them will have no consequence for one's general understanding of the message.






1 Why Is Oscar Pistorius Guilty of Murder?

You almost certainly remember this South African athletic star—a young runner who, although both of his legs had been amputated below the knees due to congenital anomalies, went on to set records for track events at different distances. With his prostheses, Oscar Pistorius ran as fast as or even faster than the fastest, to the point where at the peak of his career he argued for the right to compete in the Olympic Games rather than the Paralympics. He was undoubtedly the most visible figure in parasports, and certainly one of the most striking ambassadors for the cause of disabled people. His career ended in 2013 when, on the morning of Valentine's Day, he killed his fiancée, Reeva Steenkamp, by firing multiple shots from a revolver through the door of his bathroom at home, where she was hiding. In September 2014, the court had great difficulty in untangling whether this was a murder or a terrible accident: for his defense, Pistorius relentlessly claimed that he did not know where the young woman was at that moment; and that in this country renowned for its extreme violence, he thought that his house was being broken into and he was trying to defend himself. As the judge was able to establish an accidental homicide but not a murder, his five-year sentence was appealed by the prosecution (by definition, let's remember, a murder includes the intention to kill). Pistorius was ultimately declared guilty of murder in December 2015 and sentenced to thirteen years and five months in prison in 2018. The South African judicial system abolished citizen juries during apartheid; judicial decisions are thus left to a judge. For my demonstration, and to make things more similar to the trials that we know through Hollywood movies, I will be talking about "jurors" hearing pleas and witness accounts just as in our criminal trials. Let's thus imagine a juror at this trial who voted guilty, and let's imagine that we ask him: "Why is Pistorius guilty of murdering his fiancée?" He would probably respond with something like: "Because he knew that his fiancée had left the bedroom, that it was thus reasonable to think that she was in the bathroom, that he aimed at the bathroom rather than elsewhere, that he had visibly made no effort to see if his fiancée could be elsewhere, and that there had additionally been other acts of violence toward previous partners . . ." We often say that the question "why?" marks the search for an explanation or a cause; for example, there exist "why books" in France (Dis Pourquoi?) for children six to eight years old, where it is explained why the sky is blue, why we breathe, why Bactrian camels have two humps, etc.; they correspond to the Just Ask book series. Yet here, none of the reasons mentioned by the juror represents a cause of the athlete's murder of his girlfriend, as all of them are compatible with a case of involuntary homicide. The response to this particular "why?" is in reality a justification of the juror's belief. Let's note that the subject does not need to know the causes of the fact in question to be justified in his belief.
On the contrary, it often happens in this type of juridical case that the explanation regarding the reasons for the act in question is unknown or out of reach. Thus, even someone who thinks that the reasons for a crime are buried in the depths of a criminal's soul—opaque to both others and himself—would be satisfied by a justification like the one declared by our imagined juror from the Pistorius trial. And conversely, if this justified belief is true, the



search for causes can truly begin based on this verdict, since we know that Pistorius actually killed his fiancée. The "why" that seeks a justification (the reason-for-belief, in other words) is thus distinct from the one that demands an explanation, which is often given in terms of a cause—as I will return to later. Nevertheless, by dissecting this example we see how the two things—the justification of a belief, and the explanation of an act—are intertwined. Knowing a potential reason for what Pistorius did—firing a gun into the wall of his bathroom—would immediately become part of the larger reason why one would believe that it was murder. Let's suppose that the notoriously jealous Pistorius knew about an affair his fiancée was having; or that he was the sole beneficiary of a life-insurance plan his wealthy fiancée had taken out: the detective of a crime novel would thus have reasons to consider the hypothesis of murder plausible. The motive from which an agent could have acted—which is thus connected to the explanation of the act itself—becomes a reason to believe, or at least to consider it plausible, that he committed the act. This example, one could counter, is too complex, since it involves the subtleties of law and the moral qualification of an action. "Guilty" is indeed a more complex concept than "table," "chair," or "fire." Let's then take a simple situation that is empty of all moral concepts. If I exclaim, "There is a fire in the forest!" and someone asks, "Why?" I could respond by saying, "I see smoke." Here, the effect of the fire and not its cause responds to a why-question seeking the justification of my belief that the forest is on fire—in the same way that the imagined juror responds to the question "Why is Pistorius guilty?" with facts that justify his belief that the athlete killed his fiancée.
However, if one asks the slightly more specific question, "Why is the Massif des Maures on fire?", one would rather answer, "Because a careless person flicked a lit cigarette there"; here, the most immediate cause responds to the why-question. We thus see the first important ambiguity between the two "why?"s: the one that looks toward the cause of an event and the one that looks toward



the reason-for-belief regarding the event, which can very well be an effect of the event in question. "Cause" and "reason-for-belief" (or justification) are two essential and distinct meanings of the object of the question "why?" Nevertheless, if I see someone flick their cigarette into the forest and the forest bursts into flames almost immediately afterward, that could form my response to either question: this information would justify my belief that there is a fire in the forest, and at the same time it would include the cause of this fire.

Two Actually Independent Questions?

In fact, the terms "cause" and "reason" are often used in what would appear to be an unobjectionably synonymous way. The cause of planetary movement—that being the sun's gravitational pull—is indeed the reason why the planets turn in an elliptical orbit around the sun. To the extent that they respond to questions like "Why this or that phenomenon?", the causes are indeed the reasons. Reasons explain by nature; and appealing to causes generally constitutes a perfectly acceptable form of explanation—particularly in science. For the moment, the question about Pistorius lets us see that a response to a "why?" is generally what we call a reason; and that this can be as much the cause of a phenomenon as a reason for believing in its existence (a justification). Descartes and the post-Cartesians—Spinoza, Malebranche, and Leibniz—took a major interest in this multiplicity of meanings of the word "reason." Thus, among the first to do so, Descartes distinguishes the ratio cognoscendi (the reason for our knowledge of a thing) from the ratio essendi (the reason why a thing comes to be).
What we call, in the words of Leibniz, "the principle of reason"—which stipulates that everything must have a sufficient reason or cause1—expresses the idea that any true proposition derives from other propositions as its consequence and is justified by them, while at the same time positing that the facts of the world (which are referred to in propositions) are not without reason; and, in other words,


have a cause. Conforming to the distinction made above, this principle of reason is thus double: there is, as Leibniz says, a principle of reason for propositions, which demands that any true proposition must have a reason; and a principle of reason for things or facts, which claims that any thing or event must have a reason why it exists. In the two cases, the principle of reason thus seems to guarantee that a "why?" question has a response; conversely, asking "why?" implicitly presupposes a principle of reason in some way. The principle of reason for propositions seems like a rational demand, since believing a proposition means holding it to be true; and a rational agent is supposed to have reasons for holding a proposition p to be true instead of a proposition non-p; therefore, those reasons should exist. (The principle of reason regarding facts and causes is actually less obvious, but we will extensively address it in the final chapters.) Reading these first analyses, one could imagine that simple linguistic precautions could dispel the ambiguity of "why," making two totally distinct things out of "cause" and "reason-to-believe": in the first case, "Why is Pistorius a murderer?" would signify "Why do you believe that Pistorius is guilty?"; in the second, it would be "Why did he become a murderer?" It is thus a matter of two questions that call for different logical analyses, and which are ultimately the object of a linguistic homonymy. This homonymy would thus be eliminated through well-formed language—exactly as, according to the German philosopher Gottlob Frege, a formal language would get rid of all the metaphysical problems stemming from our having only one word ("is") to say two distinct things (predication and existence judgments, e.g., "the cat is black" and "there is a cat").
An attractive argument such as this, however, loses sight of the fact that knowing the cause of an event is sometimes a reason—and even a very good reason—to believe that this event took place. Indeed, if I know that the cause of the fire has occurred, I know that the fire is happening; and I doubtlessly know it better than if I had simply seen smoke, which only makes it plausible that my belief that there is a fire is true but doesn't entail it (since it could be, for example, steam from a nuclear power plant, which


does not come from fire). The epistemic superiority of knowing the cause rests on the fact that the cause, to adopt a word from Spinoza's Ethics, "envelops" the "necessity" of the effect. Where there is fire, there is necessarily smoke—and this necessity makes my knowledge of the smoke, justified by my knowledge of the fire, reliable.2 As a result, it seems important to argue that "why?" always concerns one and the same question, which includes many dimensions of one same thing that we could call "the space of reasons," to make use of a contemporary term introduced by Wilfrid Sellars.3 The misunderstandings between "reason as justification" and "reason as cause," which are sometimes merely anecdotal as in the preceding examples, can become major metaphysical problems. Nietzsche provided one of the strongest analyses of them; but before addressing them I will lay out some of the components of each of the two types of "why?" indicated here: reasons to believe and causes.

"Reason to . . ."

The word "reason" has a double meaning, referring either to the reason for something (a proposition, phenomenon, or act) or to reason as a faculty. The rational agent is exemplarily human, but we can remain very general here; Kant spoke of "finite rational beings" as the designated subject of his Critique of Pure Reason, leaving open the possibility that there are others besides humans. Perhaps he was thinking of aliens endowed with a more refined body than ours, while modern biologists increasingly attribute rationality to other organisms—to other primates, for example, but also to birds and even to insects.4 The "reason" of the rational agent could be minimally defined as its capacity to give reasons for its beliefs and its acts. In other words, to be rational means believing and doing things that are based on reasons.5 As we can see from the above, these reasons can be more or less good.
Ever since Plato's Meno, we have known that the simple truth of a belief is different from knowledge. The visionary, someone who has what they believe to be a premonitory dream, or simply the winner of a lottery, believes


what is true but believes it by chance in some way. In other words, there is no necessary—­or in any case highly probable—­link between what they believe and their way of believing it. If Stéphanie believes that the square of the hypotenuse of a right triangle is equal to the sum of the squares of its two sides because she saw the Pythagorean theorem in a dream, this is not a convincing justification because there exists no direct and highly probable link between dreaming a theorem and the truth of said theorem (many people surely dream up false theorems). On the other hand, there exists a necessary link between the mathematical demonstration of the Pythagorean theorem and the theorem itself, meaning that the belief in the theorem is justified in this case. What philosophers sometimes call the “problem of knowledge” includes the question of determining what turns a true belief into knowledge; and in a general manner, responses to this question since Plato have revolved around the idea that the knowledge of something is a justified true belief.6 Divination or drawing lots are not good reasons-­for-­belief, since their link with the truth in question is debatable, aleatoric, and fragile. The question “Why X?” (in the sense of “Why do you believe there is X?”)—­for example, “Why do you think that Macron is going to win the 2022 French presidential elections?”—­receives in response certain reasons for believing X, and these reasons can be more or less good. Thus: “Because the polls show him winning,” “Because a president who has an approval rating of higher than 40 percent midterm is always reelected,” “Because his opponents do not have a credible candidate,” etc. For the agent, such reasons are reasons to believe that Macron is going to win; but it could be that there objectively exist better reasons to believe it (and even still better reasons to believe that Macron will not be reelected). 
The strength of the link between the reason-for-belief and the object of belief is a measure of this reason-for-belief's quality—and this strength can be evaluated in terms of probability: to what degree is it probable that there is this object of belief if there is this reason to believe it? For instance, in my example, can we compute the probability of someone winning an election when, at such a distance in time from the elections, he is currently scoring 40 percent in the polls? Probability, besides measuring the frequency of a class of events or the degree to which someone believes in a statement, also measures the weight that a stated fact X confers on a hypothesis A about another fact Y.7 Thus, the perception of any object in good visibility conditions is a relatively good reason to believe that this object exists; in other words, that the belief that "there is a visible object with such and such a form here" is true, because if an object is perceived in good conditions (clearness, light, good health of the subject, etc.) it is very probable that it exists. We can juxtapose this justification with "Why is there a Smurf here?" "Because I dreamed it."—which is clearly a much weaker reason because, measured in terms of probabilities, the relationship between "dreaming up a Smurf" and "the Smurf exists" is very weak. If Jeanne knows that the earth's climate is changing because she saw it on Marlène's Facebook page, this is a much weaker reason to believe it than if she had read it in Le Monde, because the probability that something seen on Facebook exists is much lower than that of a thing seen in Le Monde when one considers the different modes of information construction in Le Monde and Facebook. In this vein, philosophers talk about the "reliability" of knowledge, sometimes saying that a "good reason" often comes from a reliable source of knowledge. I am passing over the countless discussions that could be had about, on the one hand, the justifying strength of perception, and, on the other, the sufficiency of the justification criteria for transforming a true belief into knowledge. I would simply like to highlight the role of "reason-to" in the very nature of what we call "rationality"; and to indicate that because of the nature of knowledge (as justified belief), the question "why?" is central in the activity of knowing.
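The idea that the strength of a reason-for-belief can be measured in probabilities can be made concrete with Bayes' rule. The sketch below is mine, not the author's; the prior and the reliability figures are purely illustrative assumptions:

```python
# Strength of a reason-for-belief, sketched as Bayesian updating.
# All probabilities are illustrative assumptions, not real data.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence), by Bayes' rule."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

prior = 0.01  # hypothetical prior probability of the claim

# A reliable source (perception in good conditions, careful reporting)
# would rarely produce the evidence if the claim were false.
reliable = posterior(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.01)

# A weak source (a dream, a rumor) produces the evidence almost as
# readily whether the claim is true or false.
weak = posterior(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.8)

print(f"after a reliable source: {reliable:.2f}")  # the prior moves a lot
print(f"after a weak source:     {weak:.2f}")      # the prior barely moves
```

The same evidence counts as a strong reason when the source would rarely yield it falsely, and as almost no reason when the source would yield it regardless—which is one way of cashing out the "reliability" that philosophers appeal to.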
Evaluating the strength of a reason to believe leads us to another fundamental activity of knowledge—one which was highly significant in scientific practice and illustrated most famously in philosophy by René Descartes: that of doubt. With all of my beliefs, I think of things based on reasons of varying strength: I believe that the Amazon is the longest river in the world because I was taught this (even though other sources say it's the Nile, because there are distinct and equally justified ways to measure their length); I believe that Brazil has won the World Cup five times; I believe that water boils at one hundred degrees Celsius. I also (pessimistically) believed that Donald Trump would be reelected in 2020, but I admit that my reasons for believing it were much weaker than those of the other propositions; and in this sense, I am admitting that they could have been more easily doubted. We could tentatively believe that science constitutes a network of beliefs where the strength of justification that links the beliefs together is relatively high because they have been subjected to methodical doubt—meaning empirical tests that serve as attempts at refutation. A reason is better or worse than another insofar as it is stronger or weaker, which could mean that the link between the reason and what it justifies involves a higher or lower degree of probability. That being the case, the question remains of determining what a "good" reason-for-belief is in an absolute sense, since I have only given examples so far. Thus, to the question "Why do you believe such and such a fact or generality?", the response "Because X or Y" can be more or less convincing; and beyond a certain threshold one can say that it is a "good reason." Below this threshold, one will say that the proposition should be doubted. But what is this threshold? Let's take another look at the Pistorius case; the juridical context of establishing facts will be instructive for our analysis of what a good reason to believe something actually is.
If our juror only knows that Pistorius fired four shots into the wall of his bathroom and in consequence assumes him to be a murderer, he has an insufficient reason to believe it; if I asked him, "Why do you think he is guilty?", I would indeed be unsatisfied with such a response; because it would be necessary for me to have a reason to believe that Pistorius wanted to shoot at his fiancée by aiming at the bathroom, and this fact is not implied by the reason that the juror has provided. Conversely, if with this same information he judged him to be not guilty, and if when responding to my asking why he said, "Pistorius said that he did not know his fiancée was there," I would also find that to be flimsy since Pistorius could very well have been lying. This is why the hearing seeks information other than that of the initial findings and statements from the suspect, information that will further influence whether the juror should believe Pistorius to be a murderer or not. This judicial case is interesting for us because it indicates other aspects of what is a (good) reason—namely, a (good) response to "Why (do you believe) such a proposition?"; or in other words, a response to "Why should I also believe this proposition?"

How to Assess a Good Reason

We can see that these two examples of weak reasons are not symmetrical: between the two options, which are based on the same (minimal) knowledge of the situation, the juror had to declare Pistorius not guilty because—in the justice systems of democratic countries—the accused must be given "the benefit of the doubt." With the opposing reasons having equal credibility, there is thus a reason for supporting one claim rather than the other when it comes to guilt. Beyond the relative strength of the reasons to believe, there is another supplementary principle to help decide between beliefs and determine why we must declare Pistorius not guilty on the basis of the information that I mentioned (in a scenario that resembles the first trial, which did not end in a murder conviction). But maybe we should ask where this judicial principle comes from. It seems to be ultimately invoking a moral asymmetry between two falsehoods: between declaring someone who is innocent to be guilty or declaring someone who is guilty to be innocent, the second option is rationally preferable. I will not go into the justifications that one can give for this principle, which is sometimes formulated in a more extreme form: "It is better that ten guilty persons escape than that one innocent suffer." This saying has the advantage of clearly demonstrating that there exists an asymmetry between two errors of judgment, and ultimately proclaims an ethical preference for one type of error.
In reality, what is at issue here is an ethical perspective on an extremely general consideration that is sometimes technically called "the asymmetry between false positives and false negatives."


Up until now, I have been talking about being right or wrong when I believe something or other. But there are always two ways of being wrong: believing in something that did not happen (the "false positive") or believing that something does not exist when in fact it does (the "false negative"). This seems very abstract but in fact covers a multitude of clearly important concrete cases: for example, it is very different to believe that I have COVID when I do not have it, and to not believe I have it when in fact I do; because in the second case, my behavior will easily transmit the virus, while in the first my behavior will change nothing regarding the transmission of the virus around me. The imaginary Pistorius example above suggests that when there is not a strong enough reason to believe something (in the real case of Pistorius there was one) a supplementary consideration concerning the difference between the two types of error—the false negative and the false positive—will be able to form a reason for choosing one belief instead of another. This is especially the case in a trial where a juror is not allowed to be neutral because he is forced to give a verdict; in other situations, for instance in scientific practice, when one has two sets of evidence that equally support two different theses, one can wait for additional evidence and, in the meantime, refrain from believing. But the judicial case provides us with a major lesson: until now, it has seemed that the reasons for believing a proposition were measured by the strength of the connection between the reasons and the proposition (particularly in terms of probability). And it is precisely here that we see an essential requisite for any reason to believe: it must first be founded on known facts and relate to them in an appropriate way—a way that generally includes a specific relationship of probabilities, with the believed thing being more probable than others on the basis of these facts.
Yet the asymmetry principle of the "condemned innocent party / freed guilty party" indicates quite generally that it is difficult to measure the strength of a reason to believe something without immediately also recognizing the consequences that the fact of believing this thing could bring. For example, believing that Pistorius is guilty would mean sending him to prison for a very long time. A reason that is strong enough to believe that it will rain and thus to take your umbrella (the cost of error is low: you just look foolish carrying your umbrella with you all day) is not strong enough to declare a man a murderer; because there, the cost of error is much more dramatic. The "reason" for a statement or a belief is thus also evaluated in terms of the effects of the statement in the practical world; and this seems perfectly rational: it would be irrational to put the umbrella case and the life sentence case on equal footing.8 To continue with Pistorius, let's take another look at the jury during their deliberation: they have been provided with much information—witness testimonies, scientific police reports, an inquiry into the life and habits of the accused, etc.—and on this basis they have determined Pistorius to be guilty. The principle under which they were operating was that of "reasonable doubt"9: guilt must be pronounced "beyond any reasonable doubt," which is clearly derived from the fact that all doubt must be to the benefit of the accused.10 This standard of juridical proof instantiates what I called earlier the criterion for a "good" reason-for-belief, namely, the threshold above which a reason becomes a "good reason." The asymmetry principle "condemned innocent party / freed guilty party" thus provides in some way a threshold for a "good" reason-for-belief during a trial. As such, this threshold is determined by the judicial context, and thus by a regulated normative social practice. In particular, it is difficult for the weight of the consequences to remain separate from the evaluation of the threshold of this "reasonable doubt": when the life and freedom of an individual are at stake, this threshold is much higher than when it is a question of just one or two years in prison with the possibility of a suspended sentence.
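The umbrella/verdict contrast can be restated as a small expected-cost calculation: the threshold of probability a reason must clear before we act rises with the cost of a wrongful positive. The cost figures below are hypothetical stand-ins of mine, not values from the text:

```python
# How strong must a reason be before we act on it?  A minimal
# expected-cost sketch; the cost values are illustrative assumptions.

def belief_threshold(cost_false_positive, cost_false_negative):
    """Probability above which acting on the belief minimizes expected cost.

    cost_false_positive: cost of acting when the belief turns out false.
    cost_false_negative: cost of not acting when the belief turns out true.
    Acting is better when (1 - p) * cfp < p * cfn, i.e. p > cfp / (cfp + cfn).
    """
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# Umbrella: carrying it needlessly (cost 1) versus getting soaked (cost 3):
# a fairly weak reason to expect rain already suffices.
umbrella = belief_threshold(cost_false_positive=1, cost_false_negative=3)

# Verdict, with the ten-to-one asymmetry quoted in the text (ten guilty
# freed rather than one innocent condemned): a very strong reason is needed.
verdict = belief_threshold(cost_false_positive=10, cost_false_negative=1)

print(f"take the umbrella above p = {umbrella:.2f}")
print(f"convict only above p = {verdict:.2f}")
```

Note how the "ten guilty persons" maxim, read this way, sets the conviction threshold at 10/11, roughly 91 percent: the same evidence that rationally justifies taking an umbrella falls far short of justifying a verdict.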
Thus, if we call "pragmatism" the set of conceptions that somehow connect the truth of a statement to the consequences that believing this statement has upon the believer or upon others, it seems that pragmatism has a point here.11



A Minimal Pragmatism, Indispensable Even Outside the Courts?

Although pragmatism may rule the norms of belief in courts—especially because the option of refraining from belief and waiting for more evidence is not open to jurors—it should be left out of science, theory, and journalism. Only evidence should count as the norm of belief. Or, at least, this is the classical position held by rationalists, and it is often called evidentialism.12 There are, however, caveats when one considers the practice of science, even if the scientist faces no deadline for deciding what she believes to be true. In a landmark paper, Heather Douglas showed that the asymmetry between false negatives and false positives is crucial in science as well.13 Suppose you design a test for a specific infectious disease. As is well known, tests have false positives, which means their results cannot be reliably informative if they are not assessed against other information, such as the frequency of the disease in the population. Yet it is often possible to perform several types of tests that are based on distinct models, thereby putting the scientist in the position of having to choose to minimize either false negatives or false positives. Ultimately, it is this decision that will result in the choice of the model and the final description of the diffusion of the disease within the population. Analyses like this one can be applied to any branch of science where detection and measurements are practiced (ecology, meteorology, etc.). In each case, various considerations—which can be economic, political, ethical, etc., and which can range from minimizing the number of undetected sick people likely to transmit a virus to minimizing the cost of delivering vaccines—will justify the choice of a particular model depending upon the way that asymmetric costs are assessed.
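The point about assessing a test result against the frequency of the disease in the population can be illustrated with the standard base-rate computation. The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not data from any actual test:

```python
# Why a positive test must be weighed against disease frequency:
# the positive predictive value collapses when the disease is rare.
# Sensitivity/specificity/prevalence values are illustrative assumptions.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(diseased | positive test result)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# The very same test, applied in two populations:
rare = positive_predictive_value(prevalence=0.001,
                                 sensitivity=0.99, specificity=0.95)
common = positive_predictive_value(prevalence=0.10,
                                   sensitivity=0.99, specificity=0.95)

print(f"rare disease:   a positive result means {rare:.1%} chance of illness")
print(f"common disease: a positive result means {common:.1%} chance of illness")
```

The same positive result is strong evidence in one population and weak evidence in another, which is why the decision of how to deploy a test, and which errors to minimize, cannot be read off the test alone.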
Thus, once again, our beliefs must be grounded not only in collected evidence but also in other principled considerations that may be of various natures. As a result, it seems that even outside the court (and especially in the lab) the prospects are weak for finding a simple, purely evidence-based determination of what a "good justification" is. And if we take an even closer look at scientific practice, we find other additional grounds to hold this view. Richard Levins was a major evolutionary biologist and ecologist; together with Richard Lewontin, Robert MacArthur, and Edward O. Wilson, he contributed to the rise of a model-centered, mathematized community ecology that was closer to population genetics than to natural history.14 Unlike the others, he was also a political activist and worked for decades in Puerto Rico to improve crop yields as well as to provide education for farmers. As an ecologist, he developed reflections on the potential of ecological modeling, eventually writing an essay that became a classic among philosophers of science as well as ecologists, entitled "The Strategy of Model Building in Population Biology" (1966).15 In it, Levins shows that the model of a system can fulfill several epistemic goals—namely, that it can be realistic, general, precise (or predictive)—but that it can't reach all of these goals at the same time. As a result, modelers have to accept trade-offs among these goals (for example, being very general will often come at the cost of being realistic). This argument constitutes another version of the question of having a "good reason" to believe something, but instead it concerns scientists. It may also lead to the same conclusions. Let's consider COVID-19. During the pandemic, we have been presented with a huge plurality of epidemiological models. Levins's idea of epistemic goals allows us to make sense of this diversity. Some of these models were very general—like SIR models, which stand for "susceptible, infectious, and recovered"—and considered the population based on these three criteria (those who have not been infected yet, those who have the virus, and those who are either recovered or dead and can no longer transmit it).
The model deploys a dynamic of virus diffusion, with one key parameter being R0 (the basic reproduction number: the average number of people infected by a single sick person)—which allows one to compute the numbers of S, I, and R at each step along the way. When R0 is above the threshold 1, any epidemiologist knows that the virus will spread and possibly invade the population. But such a model is very general. In fact, the number of people infected by a given sick person is not equal to R0, which is only an average. In reality, some people may pass on the virus to ten times R0 others, while others may not pass it on at all (for instance, if they never leave home, or never see other people). If one wants a more realistic model, one can partition the population into several classes defined by a specific level of social activity—which operates on the basis of assuming that the higher a person's social activity, the higher their chances of spreading the disease to many people. This example of model duality is different from the case of the asymmetry between false positives and false negatives, but supports analogous conclusions. Indeed, which COVID-19 model should we use? The former (SIR) is simpler and more general; the latter is more realistic—but because it's too mathematically complex, it should be put aside. Thus, we have a scientific reason to choose between the two options. But let's think again about this second model: if I wanted to apply a criterion like "level of social activity," how would I be able to know who is in which class? I would have to ask people; and more generally, I would have to integrate many sources of information that would likely violate personal privacy. Thus, there are additional reasons to prefer the SIR model, but they are ethical rather than epistemic. As a consequence, the singularity of a situation found in a courtroom, which I considered ideal for explaining how a threshold for a "good" justification can be integrated within a practical context, fades away. To some extent, epidemiology, public health, and medicine are in a similar situation: practical determinants enter into the specification of a "good" reason to adopt a model, and thus into the choice to believe in something. Here, we see that even though a court seems singular because a juror must come to a decision before a given deadline, this same kind of deadline situation can exist for scientists in fields such as epidemiology or medicine.
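The SIR dynamic described above can be sketched in a few lines of discrete-time simulation. This is my minimal illustration, not a model from the text: the population size, recovery rate, and R0 values are arbitrary assumptions, and a serious model would integrate the continuous equations:

```python
# A minimal discrete-time SIR sketch.  Parameter values are
# illustrative assumptions; real models are far more refined.

def sir(population, infected0, r0, recovery_rate=0.1, steps=300):
    """Iterate S, I, R; the transmission rate beta follows from
    R0 = beta / recovery_rate."""
    beta = r0 * recovery_rate
    s, i, r = population - infected0, float(infected0), 0.0
    for _ in range(steps):
        new_infections = beta * s * i / population
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# R0 above 1: the virus invades, and most of the population is hit.
s_end, i_end, r_end = sir(population=1_000_000, infected0=10, r0=2.5)
print(f"R0 = 2.5: never infected: {s_end / 1_000_000:.1%}")

# R0 below 1: the outbreak fizzles out almost immediately.
s_low, i_low, r_low = sir(population=1_000_000, infected0=10, r0=0.8)
print(f"R0 = 0.8: never infected: {s_low / 1_000_000:.1%}")
```

The run exhibits exactly the threshold behavior the text describes: above R0 = 1 the epidemic invades and only a small fraction of the population escapes infection, while below 1 it dies out; yet nothing in this general model distinguishes a homebound person from a highly social one, which is what the more realistic, partitioned model would add.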
At certain moments, scientists must decide among various options in order to deliver reliable beliefs that politicians, physicians, public health experts, and other people will use to ground their own decisions; and as we have seen with COVID-19, these decisions must happen quickly. Heather Douglas argues for a similar view. The more a branch of science affects people (environmental science and medicine immediately come to mind), the more it is affected by the asymmetry in cost of errors—which means it should further include pragmatic reasons-for-belief. In addition, sciences that affect people are also those where one cannot dawdle in deciding, meaning that Douglas's conclusion concurs with ours.16 But what about other sciences? One could say that when a science in itself has minimal impact on our lives (leaving aside technical applications), this question of nonepistemic reasons being involved in how one decides what to believe is out of place. In fields like paleontology or quantum physics, there may sometimes be reasons to believe some claim or to adopt some model or hypothesis, and then at other times no reason to believe or adopt them—forcing us to remain in a state of ignorance until additional evidence is provided. The most fundamental sciences take place in the face of eternity, whereas jurors and environmental or health scientists have to act now or tomorrow at the latest. Even though science is not the focus of this chapter, I will take the time to criticize this hypothesis, apparently grounded in common sense. It would be true, indeed, if there were only one fundamental scientific problem to solve in the world; but unfortunately, there are many. Thus, in the real world, scientists have to decide—collectively and in general—how to allocate their cognitive and temporal resources in regard to these various problems. And the cognitive resources and efforts they put toward a problem will depend upon the nature of the other problems to solve, as well as how they assess their importance. This places science, once again, in a position similar to that of jurors. So, let's think again about the trial. After Pistorius is accused of homicide, how many testimonies should be required before a judgment is made?
A priori, I would argue that if one questions many witnesses, the testimonies will either ultimately converge on one and the same narrative (wherein a verdict of "guilty" would be announced), or not converge at all (resulting in a verdict of "not guilty," for reasons we discussed above). So, when should we stop admitting witnesses and testimonies? Similarly, if a scientist is researching a particular question, how do they know when they have enough data to form a judgment? As an example, let's consider pesticides and GMOs. Suppose, as is the case, that we have "longitudinal studies" of one product—namely, studies that look at farmers who are exposed to a given pesticide and then compare their medical histories to those of unexposed neighbors regarding various pathologies (leukemia, cancer, etc.). Suppose now that I have three such studies on three populations, whose result is that there is no significant difference in pathologies between those who were exposed and those who were not. Should the scientist conducting the studies then conclude that the product is safe? Or should they ask for additional longitudinal studies? And given that they will want to minimize biases in their statistics, how different should these new studies be? Of course, this is environmental science, which has many impacts on human life. As a result, the fact that a sound justification depends upon the proper allocation of resources, which itself depends upon ethical or political values, is not surprising. However, this situation exemplifies the idea that allocating cognitive resources to gather data is always costly because resources are limited—since longitudinal studies done for this product cannot also be done for a different product, thereby weakening our capacity to assess this second product. And yet this question of the finite allocation of cognitive resources also holds for the most foundational and nonimpacting sciences, even if it is far less intuitive. For example, let's consider fundamental physics. Imagine that there is a specific particle called the "kozon" that should exist according to our increasingly likely theory of fundamental forces and interactions but that we haven't yet detected. We have many reasons (coherence, explanatory power, mathematical tractability, etc.)
to believe in this theory, and so we design instruments to find this kozon.17 Now let's suppose we don't find it; and suppose it turns out that another, more expensive and more complex instrument that collides particles is necessary to do so. Which choice should we make? We'll be kind to ourselves: thanks to a large donation from a bored multibillionaire with too much money, there is funding for our new device; but it fails again! And then again. And again. What then do we do if the experiment to find the kozon fails every time? Should we continue until we finally find it? Or should we weaken our confidence in the theory, since the predictions have not yet been proven true? This would, however, bear problematic consequences; because, given that this theory is fundamental, many other theories rely on it and would thus be weakened or disqualified without it. With infinite time and resources, we could of course keep making instruments until the kozon is found, never seriously giving up on the theory unless a better one were developed. But this is not a realistic prospect. In the real world, the decision to gather data and to ultimately assess whether we have sound reasons to support a theory depends upon whether we have a strong enough argument to mobilize resources for the inquiry (instruments, data gatherers, etc.). These reasons are of course primarily scientific, since they concern why priorities are given to certain scientific issues rather than others. In our kozon example, for instance, how should the gross amount of scientific resources be distributed between the kozon inquiry and other tests regarding physical theories in other fields? But there are other reasons for these priorities, including economic and political ones. In the end, this means that even in those sciences whose object has virtually no impact on our lives, it is difficult to define a threshold for a good reason to believe in theories or hypotheses without involving practical considerations.

Cause

Thus, to sum up this long development, it is not obvious that there is a clear universal criterion for what constitutes a "good reason" for believing a proposition, or a good response to the question "Why do you believe this proposition?"—whether in court, in science, or elsewhere.
Any threshold beneath which a reason-for-belief—a justification, in other words—is poor would seem arbitrary if the practical context is not taken into account.18 If we now examine "why?" as a question concerning the reason behind something coming into existence, we find that the concept of "cause" encompasses a good part of the possible responses to this question. The natural sciences often discover causes, and I will pause over this in the next chapter, but the concept of cause is much wider than its scientific usage: we must use it as much in our daily language as in juridical, political, and moral contexts. For example, it is rare for an argument not to have a sentence where one of the participants yells "Because of you the cat got out! / the bathroom is flooded! / our vacation is ruined!"; politicians fall over themselves to say that their policies are the cause of newfound prosperity, or that those of their opponents are the cause of higher unemployment; teachers during recess try to find out who caused a fight. And we have already seen how any trial presupposes being able to identify a cause. But for one and the same thing, admissible causes can be multiple. For example, the cause of death for Pistorius's fiancée was four bullets from a fired gun; but it is just as correct to say that it was a successful attempted murder now that Pistorius has finally been declared guilty. To look at a less grisly example, the cause of the Maures fire was as much a lit cigarette as it was the thoughtlessness of the person who flicked it into the brush while out walking. The necessity for identifying causes in order to respond to the question of why things are as they are—which seems fundamental to the very existence of a rational subject—must apparently be ready to adapt itself to a plurality of different responses, which could in the long run create a problematic sense of relativism. Indeed, if there is a multitude of valid responses to one and the same question, and they depend on points of view, why, for example, are we making such a fuss about climatologists declaring that industrial activity is causing global warming? What happens then when we respond to the question "Why did the fire happen?" by identifying a cause?
The pine grove is on fire, the firefighters find a cigarette butt among the charred brush, it is very hot this summer, and so we say, "The forest caught fire because someone tossed their still-lit cigarette." The cause is found, and the cause is understood. But if we look closer, we see that things are more complex. Here again, the world of the courts can enlighten us: a major concern during a trial indeed consists in deciding who caused what, and an implicit concept of causality is drawn upon.19 Let's then imagine that our negligent hiker was identified by video surveillance cameras or drones (with surveillance being so advanced, one can speculate that even wild nature will be subjected to it in the near future) and the police arrest him at the resort where he is staying. In front of the court, he is accused of arson. Generally speaking, various questions arise during a trial: identifying the committed acts (shooting at somebody, stealing an object, communicating secret information); qualifying them (homicide, theft, accident, insider trading)—which involves determining whether the act was voluntary or not—and then deciding on a penalty (but let's leave aside this last aspect of things). Let's say that nothing indicates that the action in my example was done intentionally. In response to the prosecutor accusing the hiker of having caused a major fire, the defense lawyer could perhaps claim: the forest was extremely dry, the wind was powerful, the brush was so abundant that the smallest flame could have spread far and wide. In other words, on that particular day, the forest did not need much at all to catch fire. The causal contribution of the casually flicked cigarette is thus not as great as it would be in a context where the forest is less flammable. In such conditions of extreme susceptibility to fire, can we be truly sure that there was not something else besides the cigarette that could have set the forest on fire? Insurers are familiar with these kinds of disputes regarding the causal contributions of different factors. Imagine that your house gets robbed. You did not have a reinforced door, or your door was not locked; the burglars broke in and stole your jewels and your Dalí engravings. Of course, they are the cause of the burglary; but after all, anyone could have opened your door with a kick or a credit card just like one sees in the movies.
Your negligence toward locking up is thus a plausible enough cause for the robbery; or in any case, it largely contributed to this sad turn of events. What, then, is the right response to the question “why?” in all these cases? From a psychological point of view, it has recently been shown that attributions of causality are hard to separate from moral or ethical considerations. If Virginie leaves for vacation and her plant dies, one would say that it died because it did not have enough water; if, however, she had


asked me to water the plant and I had not done so, people would say that I was mostly the cause of the plant’s death—even though, in regard to the physical process leading to its dehydration, there is no difference whatsoever between the two situations. This now famous psychological test developed by the philosopher Joshua Knobe indicates that we include norms and values in our identification of causes.20 But psychology is not the whole story. In general, the way in which we attribute causality (which is what psychologists investigate21) does not determine what causality must be in itself (which is the concern of philosophers); just as the way in which we count—by usually favoring multiples of 10 and 5 (because of the number of fingers on our hands, probably)—does not determine the structure of arithmetic, for which prime numbers are the most important, and not 10s or 5s.22 Leaving psychology aside, a major lesson concerning the question “Why did such an event take place?” emerges from our reflections about burglars and fires: there is not an unequivocal response, but rather distinct responses that must be ordered in terms of their “causal weight.” Thus, in terms of what started the fire, the cigarette would not have set the forest ablaze without the dryness, the wind, or the oxygen in the air: in other words, answering “why?” means indicating a specific factor that we call “the cause” among the whole group of causal factors that were necessary for or favorable toward the fire’s occurrence. In any given context, there are always protocols for identifying this factor. For example, instead of the air’s oxygen or the wood’s flammability, we refer to the cigarette as the “cause” because the cigarette was quasi-concurrent with the start of the fire; and probably because it is also an element over which we, as humans, have control—whereas we do not have control over the existence of oxygen.
At first glance, the cause that acts as a response to a “why?” question is thus an event or fact without which the focal event would not have taken place and which additionally fulfills the two conditions of quasi-concurrence and human control that I just mentioned. In consequence, we can thus identify the difference between causes and conditions. The oxygen in the air and the absence of humidity in the brush


where the cigarette fell are among the conditions of the “fire” event; the act of flicking the lit cigarette is the cause of it; and the requisites of control and quasi-concurrence indicated above allow us to draw a line between cause and condition. It follows that “why?” generally requires a response in terms of cause rather than conditions, as is indicated by the fact that in our example, no one says, “The forest caught fire because there was oxygen.” Further analysis could highlight the relative character of the “control” requisite. After all, whether a fact is controllable or not depends on the humans who consider it, on the state of society, etc. In the end, it is perhaps not so objective. This is the conclusion that Bertrand Russell draws in a 1913 article that finishes by arguing that the very concept of cause is nonsense.23 In it, he claims that we privilege certain conditions among the infinite number of potential ones that exist for an event and call them “causes”; but that from the perspective of the nature of things, so to speak, this distinction is null and void. In its barest sense then, we have here a pragmatist conception of causality. Within this context, the disputes between insurers and policy holders about the cause of an event are in the end discussions that do not settle the pure facts of the matter; instead, they involve a debate between antagonistic normative criteria. Thus, when an insurance agent tells me that the cause of my window breaking is not hail falling but the fragility of the window’s glass, they are devaluing the norm of “quasi-concurrence” mentioned above—since the fragility of the glass precedes its breaking—thereby favoring the criterion of control—as there is of course a human control over the fragility of the glass. But as for myself, being the victim in this case, I would do the opposite.
(Let’s note that this intervention of the normative is initially different from the “Knobe effect” cited above; even without this effect, the analysis concerning the consequences of the cause-condition difference still seems valid.) Russell radically concludes that if science aims toward the understanding, explanation, or objective description of the world, the concept of cause thus has no place in it—as it is a residue of a bygone era, like the notion of monarchy in modern politics. But without going as far as Russell’s extremism regarding causality, we can take note here of the pluralism of the


concept: there are many possible and not necessarily compatible responses to the question “Why this event?” that are all legitimately formulable in terms of “cause.”

Billy the Kid and Contrast Classes

Our usage of the category of causality seems so important in daily speech—for insurers, courts, newspapers, domestic disputes, etc.—that it may be good to quickly pause over it before focusing on science in the following chapter. To discuss Russell’s extreme argument, let’s look more closely at the very form of the question “why?” Since the 1970s, many philosophers of science have highlighted the fact that these types of questions often include implicit clauses of the kind “Why X rather than Y?” We call these “contrast classes,” and they are crucial for understanding how different responses to “Why was there a fire?” can coexist. Elliott Sober tells a joke to illustrate this that probably comes from Hilary Putnam: “Why did Billy the Kid rob banks?” “Because that’s where the money was.” The punchline works because it responds to a question in the form of “Why did he rob banks rather than stores?” instead of that which a judge would implicitly ask: “Why did he rob banks rather than working?” In the first case, the contrast class relates to banks vs. grocery stores, while in the second it relates to stealing vs. working. In the two cases, the given response is correct. Often, we give opposing answers to one same apparent question “why this thing?” because we understand different implicit contrast classes.
To go back to our example of the fire, we can now see that the opposition between the two possible responses to “Why did the Maures fire occur?” can be cleared up in the following way: the response “because someone flicked their lit cigarette” would respond to “Why was there a fire that day rather than another day?”; while the claim that “the forest was excessively flammable that month” would respond to the question “Why was there a fire that month rather than no fire at all?” (Indeed, in regard to the second case, we should highlight the fact that even if hikers often carelessly toss their cigarettes, nothing happens in other seasons such as winter or autumn; while conversely, in that particular month of July, if another hiker had come one day earlier and had himself tossed a lit cigarette, the forest would have very probably caught fire. I’ll come back to this idea in chapter 5, when I’ll analyze the apparent inevitability of some events.) These contrast classes are not just purely philosophical whims; they are often defined by dialogue contexts. For example, Billy the Kid could indeed respond to his accomplice—since he is already situated within a universe where one steals—that he robs banks because that’s where the money is, since the implied opposition between robbing banks and robbing grocery stores or hardware stores is contextually clear: in any case, we are here to rob someone. On the other hand, with a psychologist or a judge, the implicit opposition when discussing bank robbing becomes “to steal or to work hard?”, which means that the contrast class becomes stealing vs. working in turn. Thus, according to the definition of contrast classes—which are often relative to the social contexts of dialogue—one same “why?” can give rise to different but entirely legitimate responses.24

Nietzsche and the Confusion of Cause and Effect

This examination of the difficulties of the concept of cause shows to what degree the debates created by our question “why?” are based on confusions about how one understands its object—not only just between reason-for-belief and cause but also in regard to causality itself. However, when confusion envelops both cause and reason-for-belief, the consequences can be metaphysically important. According to Nietzsche, many of our most entrenched metaphysical beliefs arise from this confusion of justification—which is often an effect—with the cause itself.
Entitled the “Four Great Errors,” a chapter in Twilight of the Idols pithily demonstrates this in a paragraph about “the error of confusing cause and consequence.” In fact, if one can mistake cause for effect, it is precisely because both of them can



be reasons-­for-­belief and because this ambiguity is intrinsic to the question “why?” Nietzsche writes: There is no error more dangerous than confusing the effect with the cause: I call it the genuine corruption of reason. Nevertheless, this error is one of humanity’s oldest and most contemporary customs: it has even been made sacred among us, it bears the name of “religion” and “morality.” Every statement formulated by religion and morality contains it; priests and moral lawgivers are the ones who originated this corruption of reason.—­Let me take an example. Everyone knows the book by the famous Cornaro where he promotes his skimpy diet as a prescription for a long, happy—­and virtuous—­life . . . The reason: confusing the effect with the cause. The honorable Italian saw in his diet the cause of his long life, whereas in fact, the prerequisites for his long life—­extraordinary metabolic slowness, low expenditure of energy—­were the cause of his skimpy diet. He was not at liberty to eat a little or a lot, his frugality was not “freely willed”: he got sick if he ate more. But for anyone who’s not a cold fish, it not only does good but also is necessary to eat properly.

The philosopher uses this argument against religion in general, which promotes believing in God as the source of a good life—­when in fact, for its promoters, it is simply the effect of having a character that by nature wants to believe in God and which induces a certain form of contentment within these conditions. In regard to the text, the error targeted by Nietzsche can be formulated as an implicit misunderstanding of two responses to the same question: “Why is a radical diet good for a long life?” If asked, an ideologue might respond, “Because Cornaro lived for a very long time eating that way.” Yet if this is a correct response to the question “Why do you think that a radical diet is good?”, it provides us with no information about the causes behind Cornaro’s long life; and in a more general manner, on the causes of having a long life for those who follow this diet. But the ideologue will still want to make this draconian diet into an irrefutable cause for longevity.



Nietzsche’s text exposes the magnitude of the confusions between cause and effect. One can certainly be surprised by how people can confuse the two, given that the category of causality is so important for how one makes sense of experience, and in particular for how one interacts with things—since, to be able to manipulate something, it is necessary to identify what is likely to have a causal effect on this thing and thus distinguish cause from effect. But the ambiguity of the question “why?”, as has been analyzed in this chapter, may suggest an explanation for this confusion: cause and effect are equally likely to form a response to “why (do you believe) X?” In this measure, they may be interchangeable; and our speech may easily tend to take one for the other with sometimes harmless consequences—and sometimes massive ones when it is a question of religious or moral beliefs such as Nietzsche portrays them.

To Sum Up

This chapter showed the essential distinction between the reasons-for-belief in a proposition and the causes of the thing claimed by this proposition. These are the first two distinct responses to a why-question. Their relationships are complex: the effect of something can easily and legitimately be a justification for believing in this thing; but the knowledge of the cause of this thing is also a justification of this belief. In this sense, “why?” is ambiguous and one can easily and wrongly mistake the justification of some belief about a thing for the explanation of this thing. Often benign, this kind of confusion can, however, have major effects in moral and religious domains. A preliminary characterization of the “reason-for-belief” and of “cause” has also been given.
We have seen that defining a “good” reason to believe in a proposition is often very difficult in the absence of any consideration of the practical consequences of such a belief; and that the notion of the “cause of an event”—which is unavoidable in ordinary speech—poses grave philosophical problems if one wants to isolate “the” cause of an event or infallibly distinguish cause from conditions without taking pragmatic factors into account.

2 Why Do Things Fall When We Let Them Go?

“Because of gravity,” as Isaac Newton showed us. But “la chute des graves”—this lovely expression from the sixteenth century for the fall of heavy bodies, whose now obsolete terminology can still be seen in the nouns “gravity” and “gravitation” (graves, or “heavy bodies”)—to this day still remains one of the defining images of physics. Who does not know the popular myth of Newton being struck by a free-falling apple? Thanks to this apple, he finally understood this most fundamental of phenomena, sharing his discovery of “universal attraction” with mankind in the formula F = GMm/r². With it, the force exerted by a body of mass M on a body of mass m from a distance of r can be calculated, with G representing the universal constant of gravitation—a fundamental value that is a defining feature of our universe.

Stones and the Four Causes

By all accounts, the will to know proceeds from astonishment; but unlike a total solar eclipse, a falling body is not a surprising mystery. On the contrary, it is entirely regular. Yet it is much more natural to be surprised


by unusual phenomena like eclipses than ordinary phenomena like falling bodies or the succession of night into day and day into night. Many cultures invented divinities to explain these eclipses that shocked, frightened, or surprised them; but very few imagined a god of falling bodies—to which they were so accustomed that they did not even notice them. But the reason for eclipses is ultimately the same as that of the succession of night and day: the movement of celestial bodies, which itself is based on the Newtonian law of attraction and how it explains why things fall when we let them go. For the physicist, understanding the ordinary, the habitual, and the frequent thus allows us to account for the frightening and the singular. It was thus necessary to ask “Why do things fall?” and to have Newton’s response to understand a broad range of much more bizarre phenomena occurring at every level of the universe. Treatises about the history of science are full of analyses of the successive responses to this question. I find it interesting here because it is a good example of a scientific explanation—in other words, a response that science gives to a why-question. It was also a good example for Aristotle—to whom we owe one of the first “physics,” meaning a systematic conceptual understanding of nature; and for whom physical science was in principle the science of movement, in the sense of spatial motion (like in modern physics) as well as growth or alteration. For Aristotle, unlike his teacher and best enemy Plato, the moving world is subject to science. Everything moves around us: whether in regard to space and its falling bodies; or just to its states—such as alteration, aging, and growth. Aristotle’s physics asks why this all is, both in general and in regard to certain particular bodies; and his response consists in looking for causes. He goes on to argue that stones, and “heavy bodies” in general, have a natural tendency to fall.
And in this sense, a natural movement—like that of a stone or an apple that falls from a tree—is different from a “violent” movement imposed by force (like me taking an apple and throwing it, for example). The details of his theory do not interest us here except in highlighting that modern science begins with the principle of inertia famously put forth by Galileo, which can otherwise be described as the idea that bodies


themselves have no intrinsic tendencies; if they move, it is because a force is being applied to them. However, in his theory, Aristotle introduced a reflection on the very idea of cause that is worth pausing over because it is one of the first analyses of the plurality of the question “why?” It will go on to form the framework for thought concerning causality in philosophy and, later, science.1 Aristotle distinguishes four types of major causes, which are moreover four complementary responses to “Why this thing?”: the material, the formal, the efficient, and the final. Let’s look at an example that clearly illustrates his idea. We are standing in front of a bronze statue that represents an athlete. The “cause” designates why this statue is what it is; and for Aristotle, this cause is quadruple: its “material” cause is the bronze of the statue that forms its matter; the “formal” cause, or its shape, is the figure of the athlete that it represents; the work of the sculptor, whose hands and chisel extracted this figure from bronze, is what later came to be labeled the “efficient” cause (in other words, the process through which the statue is concretely produced); and last, its goal: to glorify, for example, Olympic sport—­w ith this goal being the reason why the sculptor began his work—­is the “final cause.” The total explanation of a phenomenon thus requires presenting these four causes. The material cause is often passed over in silence; it is also often difficult to distinguish the final cause from the formal one. But whatever the case, these four dimensions define how something like our statue came to be the thing that it is. 
Of course, in the history of scientific thought, material and formal causes are eclipsed by the tension between efficient cause—in other words, the mechanical process of the statue’s production that temporally precedes the statue—and the final cause, which is achieved after this production but which ideally or conceptually exists before production (such as in the sculptor’s plans for his project). For Aristotle, as is shown in statements like “Nature does nothing in vain” and “Nature abhors a vacuum,” nature on the whole has goals and resembles something like a supreme agent. The art of the sculptor is just a pedagogical illustration of the four causes; but it’s a perfectly legitimate illustration since, for Aristotle, art imitates nature.


But modern science and its principle of inertia will ultimately rid nature of these goals, as we shall now see. It should be noted that Aristotelian analysis clearly distinguishes two aspects of “why?”—namely, that which precedes production and the final goal; or, in simpler terms, the “pourquoi?” (which explains why this thing is here) and the “pour quoi?” (which explains for what purpose this thing exists). For Aristotle, because artifice is an extension of nature, these two aspects are found everywhere and are always complementary; but for us since Galileo, such complementarity is not so obvious. However, awareness of this distinction remains an asset as it is fundamental to our language and adds to the ambiguities that I detailed in the previous chapter. If we adopt this Aristotelian language, modern Galilean-Cartesian science thus banished final causes in order to work only with efficient ones. As such, there are no longer violent movements in opposition to natural ones because objects no longer have an inherent natural tendency to fall. In reality, a cause, namely gravitational attraction, provokes an apple to fall in the same way that my hurling it into the air is the cause for its temporary flight.

The Truth about “Why” and “How”

We often say that science deals with how and not why. If we associate why with the final cause, this is clearly true. There is no place in science to ask why, for example, God or nature made it so that objects do not remain suspended in the air; or what purpose it serves that they fall if we let them go. But it would be false to conclude from this that scientists do not ask why; on the contrary, if we can now say that “things fall because of universal gravitation,” it is because someone asked why and not how they do so.
Additionally, English-speaking philosophers of science often use the expression “why-question” to categorize a good deal of scientific questions (as is seen, for example, with Wesley Salmon and his Four Decades of Scientific Explanation [1989]). Nevertheless, as the falling bodies example will illustrate, this distinction between how and why corresponds to a real distinction in science—namely, the difference between what we sometimes call a phenomenological model and a mechanistic model, and whose respective objects I will refer to here as pattern and process. The strength of Newtonian theory indeed consisted in offering an explanation for two phenomena that were perceived as being independent: falling bodies and planetary motion—with the first being rectilinear and forming a part of our daily world, which Aristotle called the sublunary; and the second being at the scale of the universe and cyclical. Of course, it was Galileo who launched modern science by abolishing the boundary that Aristotle had erected between our “sublunary” world—where movement is sometimes irregular or troubled by violence—and the astral world, which is cyclical and perfect. But it was Newton who showed that one same dynamic linked them together, governed by one same cause: gravitation. But how do we know this? Before Newton, Galileo and Kepler had established mathematical laws for these two phenomena: falling bodies (whose law shows that the distance traveled by a body in free fall is proportional to the square of the elapsed time); and planetary motion (whose law shows that the planets travel along elliptical orbits around the sun in such a way that, in equal intervals of time, the line joining the planet to the sun covers the same area; this is Kepler’s second law). These laws describe how objects fall and how planets turn, but do not say why. By afterwards showing that these two laws arise if one imagines a gravitational force operating between two bodies of mass M and m according to the equation for universal gravitation, Newton thus responded to the question “why?” And so this one same cause (or one same process) of gravitation gives us a reason for these two laws that describe trajectories (or patterns).
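Newton’s point that a single process underwrites two very different patterns can be checked numerically. The sketch below is my own minimal illustration, not anything from the text: it computes the surface acceleration g = GM/r² from the gravitation formula (using standard textbook values for G and the Earth’s mass and radius), recovers Galileo’s square law for free fall, and then integrates a small orbit under the same inverse-square force to verify Kepler’s second law, the constancy of the area swept per unit time.

```python
# One cause (inverse-square gravitation), two patterns
# (Galileo's law of fall, Kepler's second law).

G = 6.674e-11        # universal gravitational constant (m^3 kg^-1 s^-2)
M_earth = 5.972e24   # mass of the Earth (kg)
R_earth = 6.371e6    # mean radius of the Earth (m)

# Pattern 1: near the surface, F = GMm/r^2 yields a constant acceleration g,
# so the distance fallen grows with the square of the time (Galileo's law).
g = G * M_earth / R_earth**2          # comes out near 9.8 m/s^2

def fall_distance(t):
    return 0.5 * g * t**2

ratio = fall_distance(2.0) / fall_distance(1.0)   # square law: doubling t quadruples d

# Pattern 2: a planet under the same inverse-square force sweeps equal areas
# in equal times. Integrate with velocity Verlet, in units where GM_sun = 1.
def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

x, y, vx, vy = 1.0, 0.0, 0.0, 1.2     # a bound, elliptical orbit
ax, ay = accel(x, y)
dt = 0.001
areal_velocities = []
for _ in range(5000):
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax2, ay2 = accel(x, y)
    vx += 0.5 * (ax + ax2) * dt
    vy += 0.5 * (ay + ay2) * dt
    ax, ay = ax2, ay2
    areal_velocities.append(0.5 * abs(x * vy - y * vx))  # area swept per unit time

spread = max(areal_velocities) - min(areal_velocities)   # ~0 for a central force
```

Run as is, the fall distances obey the square law exactly and the areal velocity stays constant to within numerical error: two patterns that look nothing alike, one process behind both.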
In the case of these two how-questions—“How do objects fall?” and “How do the planets turn?”—each response marked a major advance in the natural sciences, and they could be later joined under one same “why?”2 The strategy of first finding patterns in nature or the social world and then asking “why?” is characteristic of scientific activity in general. Evolutionary biology and ecology constantly give us examples of this way of


thinking. When we examine the different species in a tropical forest or when we estimate the relationship between the surface area of an ecosystem and the number of species it houses, the patterns we see in these two cases will be somewhat constant globally and we will then seek out why it is this way instead of another. Community ecology is the discipline that studies biodiversity—the reasons why a community (namely, a set of species that live together in a territory) is made up of these particular species and what follows from this. Finding a pattern, such as the distribution of species abundances—or a ranking of these abundances—is thus the first crucial step in studying these communities. Many theories have been advanced to make sense of this pattern, which is often a so-called lognormal distribution, where most species are poorly or very poorly abundant and only a few are very, or extremely, abundant. Ecologists now often debate the merits of a theory in which natural selection mostly accounts for diversity by fitting each species into their own proper niche, and the “neutral theory” proposed by Stephen Hubbell in 2001 in which biodiversity is mostly due to neutral processes, namely stochastic (randomly determined) ones in which natural selection plays no part (see chapter 7). Both accounts compete to make sense of particular species abundance distributions, and of the reason why these patterns are often lognormal. Other branches of ecology, such as biogeography, are also concerned with finding patterns in biodiversity at larger scales. For example, take the so-called “species-area law,” according to which the number of species in an ecosystem covaries with the size of this ecosystem; it gained much support among theorists and received a formal and groundbreaking treatment in the seminal book by Robert MacArthur and E. O. Wilson, The Theory of Island Biogeography (1967).
In this text, several simple models are presented in order to derive these laws from a small set of variables that describe an abstract situation where mainland species colonize several “islands” (an abstract term denoting, in addition to actual islands, any place that is separated from the mainland by any geographical entity that reduces gene flow: water, mountains, desert, etc.) that are distinctly remote from the mainland. Once again, we see that ecology is mostly about finding


patterns, describing generalities concerning these patterns, and forging models that try to explain them. But the explanation for falling bodies teaches us another fundamental lesson about the nature of modern science: it is all about laws, or universal regularities in a mathematical form. Patterns and processes alike follow laws: the law of areas or the law of falling bodies in regard to patterns, and the Newtonian formula for gravitation in regard to processes. Even ecologists deal with general patterns that they sometimes call “laws,” even though these “laws” seem less nomothetic, as philosophers of science say (namely, lawlike), than the ones addressed by physicists (since they suffer more exceptions, don’t always have a mathematical form, etc.).3 We are now far away from the Aristotelian cosmos: for Aristotle, our sublunary world was not easily mathematizable because only celestial bodies had the necessary regularity to allow for mathematical abstraction. This is why he was always centrally concerned with identifying sets of causes, which are often linked to the nature of the things themselves—­the tendency for things to fall, for example—­rather than describing laws that govern these causes. Galileo’s famous phrase that “nature is a book written in the language of mathematics” symbolizes this turning point where nature became explainable through laws that can be stated in the form of equations, matrices, or probability distributions (among other mathematical formulas). This radical rupture affected how one could respond to the question “Why do things fall?”: it was no longer a matter of determining a quadruple causality that explained both the nature of things and the purpose of nature itself, but rather of indicating the causes that could produce events and phenomena according to certain mathematical laws. 
And in classical Newtonian physics, a cause is always a force, since an acceleration is by definition the consequence of a force (this is Newton’s “Second Law of Motion” or the “fundamental principle of mechanics”): to ask “why?”, then, means to determine what forces are responsible for an event or phenomenon and how they are interrelated. Physicists now list four fundamental forces—three of which operate at very close range, and whose effect is felt on the scale of elementary particles or atoms (electromagnetic force, weak nuclear force, and strong nuclear force); and one of which operates on a grand scale (gravity). The behavior of the entire universe could be ultimately explained through their interactions. The essential notions at play here are metaphysically delicate. We know that the very idea of “law”—in terms of a formula that rules an entire aspect of the world—was doubtlessly forged in a theological context where laws were decrees from God that inextricably governed both the universe and mankind. This context would explain why the same word “law” concerns both the laws of nature (which are indifferent to men) and political and juridical laws (which are human conventions)—even if this origin of the concept has been lost; and even if post-Galilean, mathematical, universal, and ineluctable natural law no longer has much to do with the civil code or criminal law, which is always historically situated and often transgressed. Whoever decides to try to explain the meaning of the notion of law, however, confronts sizable problems: how is a natural law distinguished from a simple generality, such as “all mountains are smaller than 10,000 meters” or “all Australians like to surf”? Everyone sees that the law that light always moves in a straight line bears greater depth than these two correct generalities; but what does “depth” mean here?4 And to make matters worse, even if we were to figure out what a law of nature actually is, David Hume—in his Treatise of Human Nature (1739)—brought to light a radical problem linked to how one checks hypotheses about the laws of nature, now known to all philosophy students as the “Problem of Induction”: how can I validate a general claim along the lines of “all bread is nutritious” since I have not tested, and cannot test, all the particular elements that make up this generality (in this case, all bread in the past, present, and future)?
For the moment, let’s put these problems aside and remember that the scientific explanations of falling bodies operate within the framework of the very general idea that the universe is ruled by laws and has no room for final causes.



The Classical Theory of Scientific Explanation and Its Discontents

When the branch of philosophy that we now call the “philosophy of science” became a discipline in its own right, researchers largely devoted themselves to understanding the nature of scientific explanation. Carl Hempel, a German philosopher who spent most of his life in exile in the United States, put forth a strong idea that made use of this solidarity between the idea of law and modern science that I just mentioned. For him, to explain an event meant showing that it had to take place; and this demonstration had a very simple form: that of deduction from laws. To grasp the importance of this idea, let’s remember that deduction is a simple and infallible logical operation: from a set of premises, one draws a conclusion that is already contained within these premises. Thus, from the statements “It rains every Monday” and “Today is Monday,” I can comfortably deduce that “It’s raining.” The truth of the premises is maintained within the conclusion. After Hume, we always distinguish induction from deduction; because, as we have seen, induction does not possess this conservational property. Indeed, induction consists in jumping from the consideration of just a few cases to that of all the cases within one same genre. To use one of Hempel’s examples, if I saw a large number of white swans, I would induce that swans are white. Yet even if “Swan A is white,” “Swan B is white,” “Swan C is white,” etc., are true statements, “All swans are white” is not necessarily true because I have not seen them all—and, moreover, there do in fact exist black swans. And, should one invoke the existence of laws explaining why almost all swans are white, the difficulty still exists, since, if science indeed relies on induction, some inductions would have been necessary to establish these laws, and they would face the same objection.
Granted, it is plausible that induction plays a role in scientific discovery and imagination; however, philosophers are suspicious of induction when it comes to accounting for the veridical power of scientific explanation and its epistemological superiority over other forms of discourse—for example, myth or religion—and they thus prefer deduction. This is the case for Hempel.5 For him, the scientific explanation of why things fall consists of a deduction based on the statement of a law (in this instance, universal gravitation) and of facts ("We have an apple with mass m, and the earth has a mass M . . ."). From this, called "the explanans," he deduces that the apple and the earth are attracted to each other in accordance with the Newtonian formula, and that the apple thus falls in keeping with the Galilean law of falling bodies (which was the thing to be explained, or the explanandum). Our "why?" has thus found its scientific response: a deduction, which places objects under the jurisdiction of the law of attraction.

Admittedly, the problem then reverts to questioning the nature of the law itself. But Hempel is quite liberal on this point: as soon as something has the shape of a law—in other words, the form of a strongly corroborated general statement—a scientific explanation is possible that can respond to the question "why?" The laws that govern the patterns we detect, as well as the laws reigning over the processes that produce these patterns, can therefore equally serve as grounds for scientific explanations. Hempel calls this the "Deductive-Nomological Model" of scientific explanation. Thus, by invoking his law of falling bodies, Galileo could explain why an apple touches the ground where and when it does. This of course leaves us hungry for more "why," but it remains an explanation; the Newtonian formula will itself provide a more profound explanation, since it explains the mechanics behind this law.
Science thus progresses by discovering more general and more fundamental laws that explain the laws already used to explain the world—more fundamental precisely in the sense that a law like that of universal gravitation tells us why another one holds (in this case, the law of falling bodies; or, for planets, Kepler's second law). As Hempel admits, many explanations are of course causal; but they are explanations by virtue of how they instantiate the deductive-nomological schema "law; particular conditions => phenomenon to explain"—with the particularity that the law in question is a causal law like the Newtonian formula for universal gravitation.


Hempel’s view of explanation is remarkably flexible. It accounts for the fact that explanations may be given for laws that are themselves explanatory as well as for the fact that explanations are often a search for cause; and in addition to those cases where one of the laws in the explanation is causal, it also accounts for something that some philosophers saw as an objection to the relevance of the notion of “explanation” in science—­ namely, the pervasive use of models. Ever since Auguste Comte and his Cours de philosophie positive (A Course of Positive Philosophy, 1856), positivists of various brands have indeed often been skeptical of causal explanations. The physicist and philosopher Pierre Duhem later explained how theories can be underdetermined and formulated the idea of what is now known as the “Duhem-­Quine thesis”—­according to which one cannot test only a single hypothesis, but must always consider a whole set of hypotheses that ontologically include general claims as well as physical theories involved in the conception of testing instruments. In his book La théorie physique: Son objet, sa structure (The Aim and Structure of Physical Theory, 1920) he also claimed that the common view that science explains the world is wrong. He instead argued that science mostly describes the world, and that advancing the best descriptions (some would now say: models) is the point of the various scientific theories. By “explanation,” however, Duhem was thinking of the ultimate causes of the world; namely, things pertaining to a deep metaphysical nature that for him was opaque or empty and which made the very idea of causal explanation irrelevant. 
While his viewpoint constitutes the core of an antirealist view of science (since our world's explanantia—such as genes, electrons, quarks, etc.—would then have to be interpreted as its best descriptors, as what optimally organizes our descriptions and unifies the various empirical laws we have identified, rather than as objects existing out there), it is not by itself a refutation of Hempel's view of explanations, as we shall see. Let's consider a more recent formulation of Duhem's antirealism, such as John von Neumann's claim that "science doesn't tell the truth, science doesn't explain the world, science mainly makes models." Von Neumann, a major contributor to mathematical set theory, quantum physics, and economics


(he initiated game theory with his Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern in 1953⁶), supported the popular pragmatist idea that scientists only build representations ("models") of the systems in which they are interested; and that these representations don't tell us why the systems function as they do, instead mirroring how they operate by capturing one or two interesting aspects of a particular system. This interest, of course, depends upon our projects and tastes; and thus models have to be judged according to various objectives—for instance, the epistemic goals considered by Levins above: predicting the outcome of an epidemic, capturing a general dynamic that is likely to be followed by analogous systems, identifying targets of a specific action, etc.7

Models and Explanations

Indeed, most descriptions of what scientists actually do would agree with von Neumann's idea: scientists build models of various kinds that are tailored to a specific question and embedded in a particular project. This is why one and the same system, and one and the same phenomenon, may be studied by distinct models, without any of the latter being viewed as "the true model." But a Hempelian philosopher could easily contend that these activities still represent forms of explanation. For example, let's look at a classical model in evolutionary biology; namely, the Fisher-Wright model. It is drawn from the seminal papers by Ronald Fisher and Sewall Wright, two of the founders of population genetics—the discipline that describes the process of gene frequency change in biological populations. In it, a population of organisms of a species (imagine butterflies in a forest, or a field of strawberries) is a pool of genes. More precisely, it is modeled as a set of genotypes; and each genotype is characterized at one or more loci—a locus being a position on a chromosome that can be filled by several alleles, which are different versions of a gene.
Each combination of alleles—each genotype—can correspond to one of two phenotypes. As Gregor Mendel famously demonstrated, one can study peas, consider the character "color," and hypothesize that two alleles at one locus condition the color green or


yellow. Write these two alleles X and x. Now, the population has a given size N; each allele has an initial frequency, p or 1–p. From the second generation on, the distribution of the three genotypes XX, Xx, and xx is given by the Hardy-Weinberg formula, which states that if the frequency of allele X is p, the next generation will contain the genotypes in the proportions p² XX, 2p(1–p) Xx, and (1–p)² xx. With each succeeding generation, these frequencies will stay at equilibrium. But if the population size N is small, there is a high chance that the population will undergo a stochastic effect called "genetic drift," by which one allele is lost by the population through a kind of sampling error. This sampling error comes from a principle of probability theory: the smaller the sample, the higher the chances that outcomes will not reflect what is most probable. This is derived from the law of large numbers, which states that the frequencies of outcomes are more likely to reflect their underlying chances when the sample is large. Yet if we assume instead that N is large and that natural selection plays a role—which means that the organisms' chances of reproduction differ according to which of the genotypes under focus (XX, Xx, or xx) they carry—then the distribution will follow a transition law that respects the probabilities of genotype reproduction defined by these reproductive chances (technically called "fitness" and written Wij).

Any Hempelian philosopher can see this model as an explanation: the explanans is constituted by facts, such as the initial frequency of the alleles, the fitness of the genotypes, and the size of the population; the laws are the law of large numbers, which, as we know, accounts for the stochastic process of drift, and the principle of natural selection, according to which genotypes that have higher fitness reproduce more and therefore leave more descendants in the next generation. Our Hempelian will emphasize two things.
First, the laws involved in the explanans are various; they include laws of statistics, which are a priori laws of mathematics.8 Second, this explanation is not of a single fact: whatever the initial values of the variables N, p, and Wij, one can forge a similar explanation by filling the "fact" category of the explanans with these values. Thus a model such as the Fisher-Wright model functions as an explanatory matrix—that is, as a matrix likely to provide explanations of a system for a wide range of initial


facts that can characterize this system; and it can explain a large range of phenomena—namely, all the possible generic outcomes of the dynamics modeled by the Fisher-Wright model: loss of an allele; alleles reaching fixation; higher-fitness alleles getting lost because of genetic drift, or higher-fitness alleles going to fixation in the population (the expected result of natural selection when a genotype has the highest fitness and the population is large); and an equilibrium between heterozygous and homozygous genotypes when the heterozygote has the highest fitness but cannot be the only genotype in the population, since heterozygotes always yield a quarter of each type of homozygote in reproduction.9

Critiques of the Classical Deductive-Nomological Model of Explanation: The Return of Causality in Science, and the Theory of Possible Worlds

Even if someone holds a model-based view of science and uses it to support an opposition to scientific realism—namely, to the claim that science tells, or is oriented toward telling, how the world really is—the deductive-nomological view of explanation conceived by Hempel can still account for this modeling practice. Yet, although seductive, the Hempelian model of explanation raised several major problems whose formulation pushed a good number of philosophers of science away from Hempel's positivism, suggesting good old-fashioned causality in its place as the main response to a "why?" kind of question. Indeed, if scientific explanation is a deduction, as Hempel thinks, then it is unfortunately going to be symmetrical. In what way? Let's imagine a flag waving in the sun. The flagpole projects a shadow. What size is this shadow?
If we follow Hempel's line of thought, I would explain this length (Argument A) by considering the length of the pole, the laws of optics, and the position of the sun (the light source), and would then apply basic trigonometric formulas that allow me to deduce the length of the shadow from these premises. However, as Wesley Salmon—one of the philosophers who brought the notion of causality back to center stage—observed, I can


do this exact same process in reverse: that is, deduce the length of the pole by starting with the length of the pole's shadow and the laws of optics, and by applying the same trigonometric formulas in reverse (Argument B). Yet if Argument A explains the length of the shadow, this is not the case for Argument B, because no one would claim that the length of the shadow explains the length of the pole! The Hempelian conception of explanation as deductive and nomological has no way of registering this difference. Thus, true scientific explanation is asymmetrical, while the deductive arguments that, according to Hempel, make up an explanation are symmetrical.

We find here the same difficulty that the previous chapter showed us in detail: the confusion between two why-questions. In this case, we have "Why is the shadow two meters long at 9 a.m.?" and "Why do I know that the flagpole is two meters long at 9 a.m.?" In other words, a distinction between a "why?" that demands the reason for something and a justificatory "why?" The former "why?", focused as it is on the ratio essendi, is often a search for causes, as we have seen. Understanding what a scientific explanation actually is thus involves highlighting the distinction between cause and reason-to-believe once again. And likewise, the notion of causality allows us to understand the asymmetry of explanation. If the length of the flagpole explains that of the shadow and not the opposite, it is because the pole causes its shadow, while the shadow causes neither the pole nor its length. It is indisputable that causality is intrinsically asymmetrical—meaning that scientific explanation's asymmetry can be very easily understood if explanations are causal explanations. As we can see, despite what Russell was cited as claiming in the previous chapter, it is very difficult to expel the notion of causality from modern science.
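Salmon's point can be made concrete with a few lines of arithmetic. In the sketch below (a minimal illustration; the pole height and the sun's elevation are invented figures), the same law and the same trigonometry license the deduction in both directions, even though only one direction tracks causation:

```python
import math

# Hypothetical figures for the illustration.
pole_height = 2.0        # meters
sun_elevation = 45.0     # degrees of the sun above the horizon

# Argument A: deduce the shadow's length from the pole's height
# and the position of the light source.
shadow = pole_height / math.tan(math.radians(sun_elevation))

# Argument B: the same law, run in reverse, "deduces" the pole's
# height from the shadow's length.
recovered_pole = shadow * math.tan(math.radians(sun_elevation))

print(shadow)          # length of the shadow in meters
print(recovered_pole)  # recovers the pole's height (up to rounding)
```

Both computations are equally valid deductions; nothing in the formal apparatus marks Argument A as explanatory and Argument B as merely evidential. That asymmetry has to come from somewhere else, namely causation.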
It is necessary for us then to look back at what "cause" actually means—and to look beyond its Aristotelian four-way division, which showed us different types of causes without, however, putting forth a determination of the concept of cause itself in its generality. This determination has kept modern philosophers busy, to the point that the "philosophy of causation" has practically become a specialty in the profession—and has then almost divided into subspecialties ("probabilistic causation," "manipulationist causation," etc.). We fortunately do not need to go into these subtleties. It is just a question here of clarifying what we actually mean when we respond to the question "Why X?" with "Y is the cause of X," and of determining how this constitutes a scientific explanation of X.

Wesley Salmon developed an alternative to the Hempelian theory of explanation in which he argues that the world is defined overall by what he calls "causal processes"—changes that move across space-time while respecting a law of conservation regarding an essential magnitude (quantity of motion, energy, or information, for example). "Causal interactions"—encounters between causal processes—are added to this. Whereas most philosophers interested in causation used to propose analyses of "A causes B" that asked what A and B should be (are they events? facts? properties? variables?), Salmon reversed the problem and started with causation itself as a primitive, calling it a "causal process." For him, these causal processes are somehow the elementary bricks of the world (instead of properties, facts, or events—which are usually seen as the basic bricks, while causation is a glue holding some of them together).10

Salmon's ideas rely heavily on quantum physics, which he thinks the classical Hempelian theory of explanation would not be able to account for. A major reason for his rejection is the fact that quantum physics is supposedly indeterministic;11 thus, knowing that a system—for example, a radioactive atom of uranium—has a 90 percent chance of disintegrating after a time T is ultimately all we know about such disintegration. For Salmon, in this case, the atom's behavior at T, whether it disintegrates or not, is wholly explained by uncovering this probability value.
This contradicts two correlated assumptions usually made by philosophers who study explanations: high-probability values are more explanatory than low-probability values, and "an explanation of A cannot be an explanation of non-A." Here, establishing the probability of disintegration indeed explains at the same time A ("the atom has disintegrated") and non-A ("the atom has not disintegrated").
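A toy simulation can dramatize this. Assuming, purely for illustration, a 90 percent chance of disintegration by time T, one and the same probabilistic law covers the atoms that decay and the atoms that do not:

```python
import random

random.seed(1)       # fixed seed so the illustrative run is reproducible
P_DECAY = 0.9        # hypothetical probability of disintegration by time T

atoms = 10_000
decayed = sum(random.random() < P_DECAY for _ in range(atoms))
frac_decayed = decayed / atoms

# The same probability value covers both outcomes:
print(frac_decayed)      # close to 0.9: atoms that disintegrated (A)
print(1 - frac_decayed)  # close to 0.1: atoms that did not (non-A)
```

On Salmon's view, citing the probability value is the whole explanation, for the decayed and the undecayed atoms alike.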


In these conditions, to explain a phenomenon is to locate it within the "causal structure of the world"; by this phrase Salmon means those processes and causal interactions—the foundational bricks of causality—whose defining characteristic is to satisfy the principles of conservation that I just mentioned. Placing phenomena within this structure made up of processes and interactions allows one to determine the probabilities they confer on various facts, which ultimately explains key events as well as their absence.

A Legacy of Salmon: The New Mechanists

Even though quantum physics inspired Salmon himself, his idea that to "explain" means to "place a system within the causal structure of the world" has been massively influential in philosophy through what is now called the "new mechanical philosophy," which is more oriented toward biology and the other nonphysical sciences than toward fundamental physics.12 The precise ontological views that Salmon held—namely, those of causal processes and interactions, which are well suited for physics—are no longer as central to scholarly discussion as they once were. As such, I won't focus too much on this general account of explanation—which gained major traction in the 2000s after the paper "Thinking about Mechanisms" was published by Peter Machamer, Lindley Darden, and Carl Craver in Philosophy of Science in 2000—but it is still worth mentioning for our purposes here. The core of the argument is that an explanation of a phenomenon puts forth a mechanism, made up of entities performing various activities that are organized in a precise way, such that the output of this mechanism is the explanandum. This account very accurately captures how molecular biology operates.
For instance, the explanation of gene regulation introduced by François Jacob, Jacques Monod, and André Lwoff in 1961,13 falls exactly in line with this idea, the explanation consisting in exhibiting a subtle mechanism made up of genes whose activities either metabolize lactose or inhibit other genes, and which ultimately yields an exact, established pattern of outputs under specific conditions. Also, as Craver insisted in Explaining the Brain, mechanisms are explanandum-dependent and situated at a given level: the "entities" in a mechanism can themselves become the objects of an explanation that pinpoints a mechanism composed of lower-level entities, through which the activity of the focal entity is conducted. For example, in molecular biology one can propose a physical mechanism that would explain the inhibiting behavior of the genes involved in the lactose operon. In addition, as Stuart Glennan emphasizes, mechanisms are not exactly objects of study themselves.14 Rather, scientists reconstitute a model-mechanism, which means that identifying a mechanism consists in designing a satisfactory model which, by modeling the mechanism, can give us a reason why the explanandum happens.15

Enter the Possible Worlds

Salmon's particular ontology matters little here with regard to his general idea of explanation, which downplays Hempel's insistence on the laws of nature in favor of an emphasis on causes, inherited from the critique of the symmetry of Hempelian-like explanations. It essentially represents one large family of visions of causality, in which "A causes B" is a relationship of "production" that satisfies physical conditions (contiguity, locality, etc.).16 It constitutes a general way of understanding the statement "A causes B," and it stands in contrast with another family of theories of causality—whose most illustrious representative is without a doubt David Lewis, a major metaphysician of the second half of the twentieth century.
His major work, On the Plurality of Worlds, is concerned with what Leibniz called "possible worlds." Lewis articulates a modern version of the Leibnizian theory of possible worlds that is based on a thesis called "modal realism," according to which all possible worlds exist.17 As such, in spite of the temporal distance, Lewis is very closely aligned with Leibniz; and so it is quite natural to consider his theories if we take a look at problems that preoccupied Leibniz: causes vs. reasons, "Why is there something instead of nothing?," or the relationships between the necessary and the contingent.


For us, Lewis’s major contribution is an important theory about “counterfactual” statements. What he means by this are statements that—­unlike “factual” ones—­discuss what would have happened if an event in the world had been different. For example, “If the goalie had not dropped the ball, it would not have fallen at the feet of the striker for Real Madrid.” Yet in an article with the austere title of “Causation,” Lewis shows that “A causes B” precisely envelops affirmations like the one I just mentioned—­namely, that “if A had not taken place, B would not have happened.” Following Leibniz’s basic intuition about this notion, a world is a collection of facts, things, and events that are mutually compatible. We are in the “actual” world, but we can conceive of “possible” worlds—­that is, worlds where we change certain events or things in regard to the actual world, making sure that these changes are compatible with everything else in place, and providing supplemental changes should this not be the case. Thus, we can think of a possible world where the continental plates of North America and Asia were never connected, and therefore where our early human ancestors never set foot in America; or we could imagine a world where Lenin lived to be 78 years old: Stalin probably never ruled the USSR, Trotsky had an important governmental role and was not assassinated, etc. In our case, Lewis’s argument is interesting because he proposes a semantic analysis of counterfactuals in terms of possible worlds, which forces an explanation of the concept of causality. “If there had not been a passage between America and Eurasia” indeed means: “Let’s consider a world similar to our own, where the sea between America and Eurasia never closed and see what it’s like.” An assertion about what was caused by the proximity of America to Eurasia is thus an assertion about what would differ in the possible worlds where America and Eurasia never made contact. 
The counterfactual conception of causality that Lewis is defending illustrates the second large family of theories of causality (besides the one put forth by Salmon and mentioned above) that are sometimes called "difference-making theories"—since they all try to construct the meaning of causality based on the idea that "A causes B" actually means that A makes a difference in regard to B.


Thus, when I say that (p) "a mutation of the PAH18 gene causes the disease phenylketonuria"—which is characterized by developmental problems affecting growth (and which can be treated by following a specific diet)—I am apparently saying that if an individual does not have this genetic mutation, she does not have phenylketonuria. This particular analysis of a causal statement seems convincing. It also allows us to understand the main difficulty with the counterfactual conception of causality. Indeed, whoever has this mutation but follows a diet low in phenylalanine will not develop phenylketonuria. It is thus not exactly because they have the mutation that persons suffering from phenylketonuria have the disease. To make sense of the causal assertion p, it is necessary then to understand that when we examine the possible worlds in which we imagine that the person in question does not have the genetic mutation, we are considering only those where the person's diet—as well as all other factors likely to impinge on phenylketonuria—is no different from what it is in our actual world. If we imagine a universe made up of all possible worlds, to say that "the mutation of the PAH gene causes phenylketonuria" means that we first consider only those worlds in which the individuals change nothing about their diet or anything else except for their genetic mutation, and then see that in this group of worlds they do not have phenylketonuria. This set of possible worlds is closer to our own than those where the individual not only does not have the genetic mutation but also has a different diet, practices a different religion, and lives elsewhere.
Our causal assertion concerns exactly such a set: what would happen in a world where the individual neither has the mutation nor follows the same diet is not relevant to the meaning of the claim "Had he not suffered the PAH genetic mutation, he would not have had phenylketonuria," which constitutes the causal claim in question.19

Lewis's counterfactual conception of causality thus requires us to define how a possible world can be more or less close to our own—what we may call a metric for possible worlds. What does this distance mean? A sort of measure of similarity: a world where Lenin lived only one more day is closer to our own than one where he lived fifty more years. A


world where America never touched Eurasia is farther from us than one where Lenin lived to be 70. And a world where the gravitational constant G is different from its current value is farther away from our world still. In a general manner, it seems that worlds in which we change the laws of nature are more distant from our world than those where only certain events are modified. But these few intuitive elements for understanding distances between possible worlds do not amount to a rigorous and general definition of distance; and it is not clear that all possible worlds can be situated with regard to each other. For instance, even though worlds whose laws of nature differ from those of our world w are admittedly farther than worlds where mostly only the facts are different, it is still tricky to compare two worlds w′ and w″ that have different laws of nature: would a world w″ where the law of gravitation changes because of the value of the gravitational constant G (supposedly an intrinsic property of our universe, like the speed of light) be more different from w than a world w′ where this law changes because its formulation changes (e.g., F = Gmm′/r⁸ instead of Gmm′/r²)?

Even differences between worlds that differ only in facts are hard to evaluate. Intuitively, we would say that the farther back in the past a fact f occurs, the higher the chances that modifying f into f″ would constitute a world w″ more different from our world w than the world w′ constituted by modifying a fact f′ that occurred much more recently than f. Why? Because the accumulated consequences of the older change f″ are much more numerous than the consequences of f′, and thus the world w″ has diverged from w much more than w′ has.
However, this intuition is not always correct: for instance, a change in the position of one lake at the time of the Ediacara fauna (in the Ediacaran period, some 635 million years ago) would produce at the present time a possible world much less different from ours than a world w″ defined by a small change in the trajectory of the Chicxulub asteroid, which wiped out the reptilian dinosaurs at the end of the Cretaceous (much more recently) and prepared the way for large mammals like us (see chapter 7).20 Therefore, constituting an overall metric for possible worlds seems an


almost impossible task. However, most often (as in our example of phenylketonuria) one only needs to evaluate the distance between a small set of possible worlds and our own, not the distances among all possible worlds—which means that the plausible absence of a generalized metric for all possible worlds is not necessarily a problem when one analyzes the notion of causality in counterfactual terms.21

Everything Else Being Equal?

This counterfactual notion is not alone in the family of those visions of causality understood through the notion of "difference making." A major variant of the counterfactual view, now quite popular among philosophers, is the one proposed by James Woodward in Making Things Happen, often dubbed the "manipulationist view" of causation. Inspired by the formal treatment of causation by computer scientists such as Judea Pearl,22 who formalized statistics-based causal inference with diagrams that can be implemented in algorithms, Woodward interprets "A causes B" as "intervening on variable A changes the value of variable B." This matches the practice of many scientists when they intend to discover causal relations, since they often use targeted interventions that change the value of a variable: for instance, eliminating a gene in a gene network can be seen as changing the value of the variable "presence of the gene" from 1 to 0. Tweaking a variable's value in a model amounts to experimentally checking a counterfactual statement of the form "what if the value of X were different?"

But another (and older) very important way of understanding causality as "difference making" consists of thinking in terms of probabilities. When I say that drought is a cause of forest fires, we can understand this sentence as an assertion that a forest fire is more probable in times of drought than in normal times.
Phrased in a more general way, the cause is defined by that which increases the probability of the effect, everything else being equal. This last clause, ceteris paribus in Latin, is crucial: for example, if smoking were to be severely forbidden and punished during a period of


drought, it is plausible that the probability of there being a fire would not increase—even though science indicates that droughts cause fires.23 Still, such an understanding of causality as "what makes a difference" poses a number of problems, starting with this reference to ceteris paribus. The reference extends the already-mentioned problem of determining a metric for possible worlds: the clause "everything else being equal" describes a group of things that we hold fixed in our minds while imagining the modification of one specific thing, so that we characterize a subset of the possible worlds, a subset in which this thing alone varies from world to world. Yet the notion of causality as an increase in probability is important because it fits a common way of detecting causality—namely, the use of statistics and thus of probabilities (probabilities are the flip side of statistics: if something is statistically dominant, it is more probable than its alternatives). However, one can still ask whether the increase in the probability of the effect is causality itself, or simply an aspect of causality through which it is recognized and by which it can be measured or inferred.

We should also note that this understanding of causality as "making a difference" is more general than that of causality as "production"—which I mentioned above in regard to Wesley Salmon, with his "causal processes" and "causal interactions." Indeed, if the world were such that there were no laws of conservation, there would be no causal relation as production, and therefore no causal processes along the lines of Salmon.
We could still nevertheless make counterfactual claims, and thus respond to different "why?"s by appealing to counterfactual causes or, more generally, "difference makers."

Mathematical Reasons and the Mathematics of Reasons

The idea of causal explanation—whether interpreted in terms that are counterfactual or "difference making," or even those that fall in line with Salmon's process-based version—does not exhaust the different ways of responding to "why?" that we find in science.


Up to now, we have only talked about the empirical sciences. The case of mathematics is something else, though "why?" is just as central a question there as it is in the natural sciences. "Why do the perpendicular bisectors of a triangle intersect at the center of the circle circumscribed around the triangle?" we ask junior high school students. "Why is the square root of 2 an irrational number?" asked the ancient Greeks as they faced the "crisis of irrationality" that this discovery would ultimately cause. We respond to these questions through demonstrations (which state the reasons why "the square root of 2 is not rational" is true); hence we are in no way dealing with efficient or final causes, since nothing produces the fact that the square root of 2 is an irrational number. Granted, there is a process that led to the utterance of the Pythagorean theorem by Pythagoras; but it is not this process that makes the Pythagorean theorem true (in the same way that Marie Curie discovered radium, but did not herself produce it). But couldn't a mathematical fact such as the Pythagorean theorem have a cause in the sense of "counterfactual causation" rather than in the sense of production, since I have distinguished these two meanings? The answer is no, as we will easily see. Empirical facts happen in a range of possible worlds (a very large range, if they are fundamental laws of nature). This is why we can think of causation as a counterfactual relation: in some possible worlds these facts do not happen, and considering this allows us to understand why they actually happen. Mathematical facts, by contrast, obtain in all possible worlds. So the reason why they hold is not of the same nature as the reason why empirical facts happen—since it cannot be formulated in terms of counterfactual statements (namely, statements ending with ". . . and they would not be the case").
No one could, for instance, claim that "if X were not the case, then the Pythagorean theorem would not be the case," since this latter proposition is true in no possible world.24 The response to "why?" in mathematics is thus not a cause. Could it belong to the order of justification? At first sight, it seems so. Empirical facts have causes; and the reasons to believe that these empirical facts occur are often different from the causes of these facts (remember


this: smoke is my reason for believing there is a fire out there . . .). Now, with a mathematical fact like "the square root of 2 is irrational," what makes it true should also be my reason to believe that it is true, since nothing other than the reason why the square root of 2 is irrational constitutes a better reason to hold this statement as true. The possible gap between a reason for the fact and a reason to believe in the fact, proper to empirical facts, apparently does not exist in the case of mathematics. However, things are more complicated. A very good reason to believe a mathematical proposition such as "there are no positive integers x, y, and z such that x^n + y^n = z^n (when n > 2)" is that a Fields Medal–winning mathematician tells me it is the case. Such mathematicians are reliable, and this reliability of the knowledgeable is, by the way, the principle underlying education: it is rational to believe what teachers teach us. There is no chance that I could demonstrate this theorem myself (it was first formulated by Pierre de Fermat in the seventeenth century, but the demonstration proved horribly difficult, running to hundreds of pages, and was only achieved by Andrew Wiles in the 1990s). Yet this is not the reason why Fermat's theorem is true. Hence, the answer to "Why Fermat's theorem?" or, more generally, "Why this mathematical fact?" is no more a justification than it is a cause. Thus, in mathematics, "why?" demands neither a cause nor what I have called a "reason-to-believe" (or justification). It instead concerns a "reason for truth," which is different from a reason why I believe or must believe a true proposition. We can, however, doubt that in mathematics the response to "Why p?" is an authentic explanation.
After all, there is no recourse to a cause or to the laws of nature—two things which, according to the theories of explanation that we have put forth thus far, fundamentally characterize a scientific explanation. For this reason, until the last decade philosophers used the word "explanation" only when talking about the empirical sciences.25 However, this case of the mathematical "why?" indicates another aspect of scientific explanation, one which until recently had gone relatively unnoticed in the philosophy of science.


I just argued that the "why?" in the mathematical disciplines always asks for a "reason for mathematical facts." But the converse is not meaningless either: there can sometimes exist a "mathematical reason for facts." Recognizing this would confer a new dimension on scientific explanation—one that goes beyond the causality of Wesley Salmon and his many emulators, or Carl Hempel's subsumption under natural laws. This is what we are now going to try. Let's imagine two ice cream vendors on the edge of a long beach. Where are they going to set themselves up? Let's also imagine that they are perfectly equal: they sell the same kind of ice cream, with the same flavors, the same cones, and the same toppings; and the beachgoers are evenly distributed along the beach. Each vendor of course wants to attract as many customers as possible. On the fundamental presupposition that each of them is a rational agent, we can claim that they want to maximize their clientele. If at the outset they stand at opposite ends of the beach, and if we suppose that customers go to whichever vendor is closest, they will each attract an equal number of them. But if one of them—say, the one on the left—moves toward the middle, he will attract all the clients to his left (since he remains the one closest to them) as well as half of those between his competitor and himself, thereby seizing an advantage. The competitor on the right will then follow suit and begin advancing toward the left. In the end, the two vendors end up side by side at the middle of the beach. This fable is often introduced in economics to present the "Nash equilibrium," named after the extraordinary mathematician played by Russell Crowe in the well-known film A Beautiful Mind. There are thus situations in which no one can increase his payoff by unilaterally changing position: anyone who moved alone would only harm himself, and so everyone stays put.
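The fable can be simulated. In the minimal sketch below (my own toy model, not from the text: a beach of 101 spots, customers walking to the nearest vendor, ties split evenly), each vendor in turn moves to the position that maximizes his share of customers; starting from opposite ends, both converge to the middle:

```python
# Toy simulation of the ice cream vendor fable (Hotelling's beach).
def share(q, p, n=101):
    """Customers served by a vendor at position q against a rival at p."""
    s = 0.0
    for c in range(n):
        dq, dp = abs(c - q), abs(c - p)
        if dq < dp:
            s += 1
        elif dq == dp:
            s += 0.5  # equidistant customers split evenly
    return s

def best_response(p, n=101):
    """The position that maximizes one's share against a rival at p."""
    return max(range(n), key=lambda q: share(q, p, n))

a, b = 0, 100  # start at opposite ends of the beach
for _ in range(200):  # alternate best responses until neither wants to move
    na = best_response(b)
    nb = best_response(na)
    if (na, nb) == (a, b):
        break
    a, b = na, nb

print(a, b)  # 50 50: both vendors end up side by side at the middle
```

The leapfrogging the text describes is visible in the intermediate steps: each best response shaves one position off the rival's side until neither vendor can gain by moving, which is exactly the equilibrium condition.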
This explains why in Paris all of the furniture stores are concentrated in one street (le faubourg Saint-Antoine), why all of the music shops are in Pigalle, and why many Japanese restaurants are found on Rue Sainte-Anne near the Louvre. But beyond economics, this also explains why the platforms of major political



parties tend to draw closer together at election time; or why successful French variety, pop, or hip-­hop songs sound similar to each other. The important thing for us here regarding equilibrium is that the reason why the ice cream vendors find themselves at the middle of the beach is the fact that this position is the middle (because in the middle, neither of the two can increase his profits without harming the other and restarting the game of musical chairs in reverse, or perhaps even also harming himself). Up to a certain point, the explanation thus does not call for causes or movements that produce something. We could certainly explain the position of the vendors by saying, “This one went left, that one went right; the first said this, the second noticed that, etc.” But in the end, other movements and the resulting set of new displacements would lead to the same result anyway. This is indeed part of the very definition of an equilibrium—­in the sense that, following a classic example, if a marble rolls down a hill into a valley (in this case quite a large marble to prevent anything stopping it indefinitely in its tracks) it will have the tendency to keep heading toward that valley no matter how its course deviates; and that the trajectory by which it arrives is not important in understanding where it eventually ends up. Whatever its trajectory, the marble will ultimately land in the valley. A Nash equilibrium is an equilibrium, and thus behaves like this valley: the set of movements involved in arriving at it are not necessary to explain what is happening. “Why is the marble in the valley? Why are piano stores all located on the same street? And why are all the ice cream vendors at the middle of the beach?” The response to these questions bypasses all mention of the set of movements that caused the exact position of the marble or the vendors. 
Their movements, however, could have taken different shapes—and yet, assuming that the vendors are rational (which is the basic assumption of economics), the marbles and vendors would have ended up exactly where they are. Such a response provides a kind of mathematical reason for a fact. The property of being an equilibrium is indeed mathematically defined,



just like the middle of the beach is the endpoint for the ice cream vendors because it has the mathematical property of cutting the beach into two equal parts. This response is thus relatively indifferent to the fluctuations and vagaries of the causes that make the vendors and marbles end up where they do. The "why?" is thus no longer that of causes, at least in the sense of a sequence of events that produces something.26 Scientific explanation is no longer causal here. Instead, we could call it structural, since mathematical structures come into play when complex situations are at issue.

A Personal Take on Structural Explanations

In the last decade, several philosophers (such as Alan Baker, Robert Batterman, and Marc Lange27) have advanced views of explanation that center on those explanations that apparently do not rely on causality, using distinct labels to characterize them and differing in the range of cases they consider to be noncausal. Others have discussed noncausal explanations more generally.28 Lange, in Because without a Cause, proposed an extensive account of these explanations centered on the scope of their modal force, which connects to his argument that explanations also exist in mathematics (not just proofs or demonstrations). I proposed that "structural explanations" are those in which mathematics plays an explanatory and not merely a representational role.
By this, I meant that whereas in many explanations the mathematics is there to describe a causal process (for instance, a mathematical function describes an input-output relationship), with the process itself accounting for the explanandum, in some cases the mathematical properties themselves account for the explanandum and are therefore explanatory in their own right.29 For example, in many cases of equilibrium, a mathematical property known as a "fixed-point theorem" is appealed to, and this theorem accounts for the existence of an equilibrium in such systems—giving us, so to speak, the reason for the existence of the equilibrium. Indeed, John Nash's original paper that introduced



Nash equilibria in economics relied on Kakutani's fixed-point theorem, a generalization of Brouwer's theorem, whose simplest version states that any continuous function mapping an interval into itself has a point x0 such that f(x0) = x0—namely, a fixed point.30 Hence, supposing that the system has a transition function f (fulfilling the conditions of a fixed-point theorem) such that the state x is always followed by the state y = f(x), if the system reaches the point x0 it will remain there indefinitely, since f constantly assigns x0 itself as the image of x0. The causal processes going on in the system (exchanges, consumption, and production, in the case of economics) cannot explain why there is an equilibrium unless this fixed-point theorem is taken into account. This explanatory indifference to causal processes gives such structural explanations a remarkable property—namely, that they can apply to very different domains (as we have seen with Nash equilibria and how they apply to furniture stores and politicians alike). We often call this the genericity of explanations. Some phenomena instantiate behavioral or functional patterns that are generic precisely in that they appear in different ontological regions, ranging from social groups to brains, or from ecosystems to the internet. Standing ovations, like the kind witnessed after a performance when the crowd harmoniously stands up, occur in any domain where elementary entities spontaneously self-synchronize with respect to a specific behavior. Theoreticians of what has sometimes been called the "science of complexity" claim that complex systems are driven by very generic processes that may underpin systems of very different natures.
Scattered across various disciplines and nations, these people (including the ecologist Robert May, the meteorologist Edward Lorenz, the physicist David Ruelle, and the biologist Stuart Kauffman, among many others) provided tools and models for approaching the generic properties of complex systems, some of which we will now consider.31
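The fixed-point mechanism can be made concrete with a toy transition function (my own example, not Nash's construction): the cosine function maps the interval [0, 1] into itself, so a fixed point x0 with cos(x0) = x0 is guaranteed; and since cosine is also a contraction there, simply iterating the dynamics converges to it.

```python
import math

# Toy fixed-point dynamics: f = cos maps [0, 1] into itself, so the
# one-dimensional fixed-point theorem guarantees some x0 with f(x0) = x0.
# Because f is also a contraction on this interval, iterating the
# transition function x -> f(x) converges to x0; once there, the system stays.
def f(x):
    return math.cos(x)

x = 0.0
for _ in range(100):
    x = f(x)

print(round(x, 6))           # 0.739085, the fixed point of cos
print(abs(f(x) - x) < 1e-9)  # True: f sends x0 back to itself
```

Whatever trajectory the iteration takes, it lands on the same x0; the causal detail of the steps is explanatorily idle, just as in the marble-and-valley case.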



Sadness in Social Media, Epidemics, and "Why?" without Causes

The genericity of structural explanations justifies the digressions I have allowed myself, ranging from ice cream vendors to Japanese restaurants, in this chapter on falling bodies: in certain cases, structural—or "noncausal," as others would say, notwithstanding theoretical differences—explanations respond to the question "why?" in an identical fashion for natural facts such as falling bodies and for social and economic ones like the ice cream vendors. I am now going to give another detailed example of these structural explanations, in which one does not need to detect a cause or know a law of nature in order to respond to a why-question—an example that illustrates an important part of scientific theorization today and pertains heavily to the science of generic complex systems. Focusing on it will make clear what "accounting for the explanandum" means when it comes to some mathematical properties, and thereby exemplify what "structural explanations" are. We call a set of connected entities—genes, neurons, people, molecules, consumers, cells, etc.—a "network," with each connection representing a potential interaction. We call the mathematical object that represents a network a "graph"; its entities are "nodes" and its relations are "edges." At a time when the notion of networks is omnipresent in science, especially with the rise of big data,32 graph theory provides a good number of structural explanations like those we just discussed. Here is a question we have been seeing recently: "Why does spending time on Facebook make us sad?" Many studies have shown the effect: most users believe that others have more friends than they do, and as a result feel unpopular. Yet the response to this question is very simple and is based on graph theory.
In a network, if we randomly examine a node (let's say an individual named Kevin Bacon33) and consider everyone who is connected to him (his "friends" on Facebook), a simple mathematical theorem tells us that the average number of connections of Kevin's friends will be higher than the number of Kevin's


connections. Thus, the average number of friends-of-friends is higher than that of direct friends; this fact of course leaps out from the screen and makes most people feel somewhat depressed. This simple approach in no way considers the nature of social interactions, the way in which people come to call each other friends, and so on—in other words, all the causes behind the construction of a Facebook network. It only draws consequences from a theorem of graph theory. During some past vaccination campaigns, physicians vaccinated the contacts of the contacts of randomly chosen people; this strategy is justified for exactly the same reason: by proceeding in this way, we tend to reach the best-connected individuals and thus maximize the number of people we effectively protect. Such a case presents what I have introduced as "topological explanation"; by this concept, I mean that we are confronted with a structural explanation in which the mathematical properties involved are topological properties.34 As topology is often defined as the science of what remains invariant under continuous transformations of abstract spaces, the structure of a network is legitimately called its topology, and many instances of topological explanations are found in those branches of science where networks are studied. This makes these explanations very common in the sciences of complex systems, since such systems, given the large number of entities and interactions they encompass, often need to be modeled as networks.

Zooming In on Topological Explanations

Saying that one topologically explains facts and properties can be spelled out more precisely with another example. Let's consider an ecosystem in the way theoretical ecologists do (as mentioned in the previous paragraphs). A long-standing question raised by these ecologists concerns the relationship between diversity and stability. Initially, they had the intuition that the more diverse a community is, the more stable it will be.
However, sophisticated mathematical modeling done by Robert May in 1974 showed that diversity per se does not beget stability.35 Mathematically speaking, the more species you add to an ecosystem, the less chance there is that it will continue to follow the same dynamics and conserve the same approximate species abundances. Thus, the question arose of which kinds of diversity are likely to promote stability (and which kind of stability), since we still have many examples of a positive diversity-stability correlation—which suggests that some particular kinds of diversity yield stability, but not diversity as such. This may sound like a purely theoretical question, but it is not: the reasons for the stability of ecosystems or communities are a key subject of investigation for ecosystem management and the preservation of natural environments in general; so if plain species diversity is not a vector of stability, we need to know what the alternative is, so that we can target those actions which are best for the environment. Among the many theoretical approaches to this question, some involve topological explanations.36 Suppose a community is such that its trophic network is scale-free—meaning that there are very few species preying on many species; a large number of species that prey on one species; and, between these two extremes, several species that prey on a few other species. Rigorously speaking, this means that the degree of the nodes—i.e., the number of connections each node has—is approximately distributed along a power law (a few nodes of degree 10^n, ten times more nodes of degree 10^(n−1), and so on, down to 10^n nodes of degree 1). Now suppose that one species randomly goes extinct. It is far more likely that this species is poorly connected, given that the number of highly connected species is thousands of times lower than the number of poorly connected ones. Thus, the probability that the functioning of the system will be disturbed by this extinction is very low, and the ecological community is stable.
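The robustness argument can be checked with back-of-the-envelope arithmetic. The sketch below builds a degree distribution on the power-law scheme just described (the specific numbers, one node of degree 1,000, ten of degree 100, and so on, are purely illustrative):

```python
# Power-law degree distribution: a few hubs, ten times more nodes
# at each step down in degree (numbers chosen for illustration).
degrees = [1000] * 1 + [100] * 10 + [10] * 100 + [1] * 1000
n = len(degrees)

# A random extinction hits each node with equal probability, so the
# chance of losing a hub (degree >= 100) is about one percent...
p_hub_lost = sum(1 for d in degrees if d >= 100) / n
print(round(p_hub_lost, 3))  # 0.01

# ...even though those few hubs carry half of all the connections:
hub_share = sum(d for d in degrees if d >= 100) / sum(degrees)
print(hub_share)  # 0.5
```

A random extinction almost always removes a peripheral species, leaving the heavily connected core of the network, and hence its functioning, intact.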
This reasoning is focused on trophic networks; but as we know, we can abstract away from predation and talk about ecological networks in general, where each interaction is an edge in the network—­be it predatory, competitive, or mutualistic.37 Of course, I skipped many details here and this is just a brief example. But it is interesting to examine the high genericity of the explanation. The



internet, for instance, has an opaque topology, but we know that there are major hubs (such as the websites of Facebook, YouTube, Wikipedia, and Google), significant hubs (such as newspapers, social networks, and dating websites), and a myriad of sites that are poorly connected to other sites. The same kind of topology can also be seen with airlines (where talk of hubs is common). Some researchers have argued that topological explanations valid for ecosystems also hold for the financial economy.38 A topological explanation can hence be formally described in these terms. First, there is a relationship between the ecological system (ecosystem, community, etc.) and a mathematical object (the graph) through its associated ecological network. Second, the network has a specific topological property (here: being scale-free). This property entails that a random extinction is unlikely to have major consequences for the network. Last, this latter property corresponds to the ecological property of stability, which means that the topological property accounts for the stability of the network. Thus, as I said, this mathematical property plays an explanatory and not merely a representational role in the explanation. In our example, if the ecological community's network were not scale-free, it would not have this property of stability; thus, in addition to the component of "entailment" (namely, the derivation of the low probability of disturbances from the scale-freeness of the network), such an explanation also includes a counterfactual component. This exemplifies the logic of structural explanations in general (where the mathematical property can be of any sort—topological, algebraic, statistical, etc.) that we discussed in the previous paragraph.
As we can see, the nature of the causal interactions occurring in the system is not explanatorily relevant; therefore, systems of very different ontological natures, undergoing very different causal processes (such as ecosystems, the internet, or the financial economy), will receive the same topological—or, more generally, structural—explanation of some of their properties or outputs.



Structures and Optima

Thus, "why?" does not always call for a cause as its possible scientific response. In numerous cases similar to that of "Facebook depression," the response instead consists of a mathematical reason for the fact. And on such a basis, this explanation turns out to be entirely generic, resolving a group of why-questions that seem heterogeneous at first glance because they concern things of different natures: epidemics or Facebook friends, ecosystems or financial systems, economies or the human brain. Let's extend these insights beyond topological explanations. In addition to graph theory, another set of mathematical reasons reveals something of what Leibniz glimpsed when crafting his theory of possible worlds: considerations of optimality. Mathematically speaking, an optimum is an extremum, namely, an extreme value—whether maximal or minimal—attained by a function that describes the behavior of a system. For Leibniz, the best is defined by the greatest possible quantity of "reality," with reality being purely positive (like, for example, light or heat) and the negative being by definition the absence of the positive and thus something unreal (like darkness or cold). The world that exists is the best of all possible worlds—that is, the one that maximizes the quantity of reality—because God chooses it in the same way that economists describe how we choose our own actions: by selecting those that maximize our "utility." Being an extremum is a mathematical property. But when I say that a ray of light arrives at a particular place because it moved in a straight line, I am also referring to the existence of a minimum: the light beam took the shortest path between two points.
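This least-path idea can be checked numerically. In the sketch below (with endpoints A and B invented for illustration), light travels from A to B by bouncing off a mirror lying along the x-axis; minimizing the total path length over the bounce point recovers the law of reflection, namely equal angles of incidence and reflection:

```python
import math

# Fermat-style least-path principle: among all bounce points (x, 0) on the
# mirror, the actual ray from A to B takes the one minimizing path length.
A, B = (0.0, 1.0), (4.0, 2.0)  # invented endpoints above the mirror

def path_length(x):
    return math.hypot(x - A[0], A[1]) + math.hypot(B[0] - x, B[1])

# Ternary search for the minimum of this convex function on [0, 4]:
lo, hi = 0.0, 4.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if path_length(m1) < path_length(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

print(round(x, 4))  # 1.3333: the optimal bounce point
# Equal tangents of the two angles, i.e., incidence equals reflection:
print(round(x / A[1], 3), round((B[0] - x) / B[1], 3))  # 1.333 1.333
```

The extremum alone fixes where the ray bounces; no further causal detail about the light's journey is needed to answer the why-question.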
For Leibniz, a scientist can legitimately rely on such extremum considerations to explain natural phenomena, since the ultimate reason for things is the principle that God always chooses what is best, which itself involves a reference to an extremum. Modern scientists can no longer invoke divine justification, but references to extremum principles remain constant: the principle of least action—which we owe to Maupertuis, and from which the idea of the


rectilinear propagation of light was derived—the maximization of entropy, and the minimization of free energy are all central principles of physics. In each instance, the existence of an extremum is explanatory: it is because a certain strategy or trajectory is an extremum of a certain mathematical function describing a behavior that the system adopts it. For Leibniz, just as one can explain the marble's ending up in the valley both by describing its particular trajectory and by pointing to the fact that the valley exists (and that all future marbles will ultimately wind up there), explanations that follow the details of cause and effect coexist with explanations that highlight the principle of optimality, which guides the order of things: "I even find that several effects of nature can be demonstrated doubly, that is, by considering first the efficient cause and then by considering the final cause, making use, for example, of God's decree always to produce his effect by the easiest and most determinate ways, as I have shown elsewhere in accounting for the rules of catoptrics [reflection] and dioptrics [refraction]."39 Modern scientists could say that the optimum no longer has an ultimate theological justification but that it allows us to explain phenomena more easily through mathematics. This is precisely the principle of the "economy of thought" so dear to the physicist and philosopher Ernst Mach: there is no need to scrutinize every detail or to over-demonstrate in order to have a solution. Even if we ultimately know that there are a multitude of true causes and effects acting together in the world, the idea of an optimum simply allows us to predict phenomena more easily.
When one holds an instrumentalist or pragmatist view of science—according to which science does not aim to uncover reality but rather to build useful models that may support efficient predictions or any of our other goals (as we saw in von Neumann's dictum above)—a Machian conception of optimality reasoning is plausible. But what about the scientific realist's view? And here is an even more general question: if we can no longer fall back on God as the ultimate cause of everything, are explanations governed by mathematical reasons such as optima guaranteed? Or are they just a sometimes lucky heuristic?


In regard to Leibniz, we have been discussing optimum-type mathematical reasons. But there are others; for example, we had to invoke the idea of equilibria to explain the story of the ice cream vendors. In all possible worlds, the fact that a situation is an equilibrium implies that a system that has attained it will maintain it, or will have a long-term tendency to attain it. Likewise, a good part of the models that scientists build—and not just those pertaining to economics—aim to find points of equilibrium in different systems. In this case, one could say that the invocation of an equilibrium, and thus of a mathematical reason for certain facts, is itself a good explanation; and that it does not in fact need a divine warrant to be considered more than a heuristic—in other words, to count as a legitimate explanation. Here, even the scientific realist would probably be okay with this answer.

A Pluralism of Explanations?

We can now see that even within the natural sciences, the question "why?" is answered in terms of causes, which coexist with another range of responses formulated in terms of mathematical reasons (or "structures," in my wording). This could be an even more radical reading of the Galilean aphorism according to which nature is a book written in mathematical language: not only are the laws of nature formulable in mathematical language, but mathematical properties themselves can, under certain conditions, be a good explanation—and can even provide a generic explanation by which phenomena of apparently very different ontological genres prove to be identical. But throughout this section, we have seen the tension between "efficient causes" and "final causes" regularly reappear—leading up to the strange persistence of the Leibnizian principle of a God-like "choice of the best" within modern physics and its principles of maximum and minimum.
We are thus far from having resolved the enigma of the question "why?" by simply declaring that only efficient causes are a worthy response to "why?" (as certain simplified versions of the Scientific Revolution tend


to claim).40 As Aristotle already knew, the linguistic universe of cause remains very heterogeneous, and the universe of explanation is even more so. It is thus time to focus more on action and choice—and on the way we can respond to the question "Why are you doing that?"

To Sum Up

This chapter, essentially devoted to the ways in which we can scientifically respond to "why?", showed that many scientific questions are why-questions. Those that ask "how?" can be extended into a "why?" question: "How do the planets turn?" (the pattern of Kepler's laws of motion) becomes "Why do they turn?" (the process of Newtonian gravitation). They can also be reformulated in terms of "why?": "How does water boil?" means "Why does water boil at 100 degrees Celsius?"—with this "why?" being of the order of efficient cause, and never that of final cause (goal). Classical philosophy of science tried to define scientific explanation while avoiding all reference to causality, favoring deduction over induction as more logically satisfying. But here we are confronted with the problem that an explanation—if it is a deduction—is symmetrical (as is justification), whereas a true explanation is asymmetrical. Causality accounts for this asymmetry—cause produces effect, and not the other way around—so that explaining could very well mean "finding the cause," provided we have a clear notion of causality. Two families of conceptions of causality were then mentioned that provide an explication of what a causal utterance is: difference-making, mostly illustrated by counterfactuals, and causality conceived of as process.
But in reality, many scientific responses to "why?" are not causes; they are "structural explanations," which instead invoke mathematical reasons—as we saw in the economic theories of equilibrium, in the ecological or social models of certain networks relying on topology, or again in certain considerations of optimality, whose justification becomes problematic if, unlike Leibniz, we no longer accept in science a divine origin of the world.


3 Why Did Mickey Mouse Open the Fridge?

"Because he was thirsty and knew that there was orange juice there." Even a young child understands this when watching the humanoid mouse walk toward the refrigerator. But how is this possible?

Beliefs and Desires

Other people interest us, there is no denying it. They intrigue us, attract us, annoy us, worry us, amaze us . . . the list of verbs is long. And very often, our interest revolves around this simple question: "Why are they doing that?" Sometimes, we ask them; other times, we know why without even having to think about it. Other times still, we construct a response based on what we see in their behavior and their speech. Providing a response to this basic question is crucial: if I know why my friend Luca goes to Porte de Saint-Cloud every other Saturday—in this instance, to watch Paris Saint-Germain soccer matches—I also know that he would be happy if I gave him a Kylian Mbappé (PSG's world-class striker) team shirt for his birthday. And knowing why tons of people drive off in their cars on the first Saturday in July to head south on the highway—as they leave for vacation—allows me to avoid traffic jams by choosing another date to travel. Knowing how to respond to the question "Why are they doing that?" helps us in general to predict the behavior of others, a decisive faculty in our social life.

This capacity both to ask "Why are they doing that?" and to be able to respond to it is surprising—especially since it is practiced at all times and is available to us at a very young age, as is illustrated by the young child watching Mickey open the refrigerator to help himself to a glass of orange juice. In our "why?" grammar, desire, will, need, and intention are always accessible responses. They are generally applied to humans; nevertheless, we sometimes attribute them to animals—the dog nibbles at my shoe because he wants to go out—and often see how children include everything that moves within this domain: the sun is going to set because it is tired, the sea got angry and threw waves at my sandcastle to destroy it. But let's set these examples aside and focus on fully developed humans.

This type of "because" constitutes precisely a "final cause" in the sense of Aristotle: the response to "Why did Mickey open the fridge?"—namely, to drink the juice—concludes Mickey's act. Unlike a "because" that indicates the causes of an event, here the cause seems to come after the action. Or rather, the cause exists first before the action in the representation that the agent makes of it, and then in reality after the action.1 In the case of human actions, we thus respond to "why?" with a word that contains a goal: desire, will, intention, etc.; these terms always designate attitudes in which I relate to something that is not there, but in the mode of what should be (for me).
To want a glass of orange juice means both not having orange juice and determining a course of action so that it comes into my possession. This kind of idea about orange juice makes up the reason for Mickey's action, his reason for opening the fridge.

Mickey Mouse is certainly a trivial example. But the same type of structure for questions like "Why are they doing that?" "For what purpose?" allows us to understand the actions of people in a more general manner. In June 1940, General de Gaulle exiled himself in London. Without a response to this question, such a departure could appear as him fleeing. If we would like to argue that General de Gaulle was not running away, it is thus necessary to say why he left—namely, because his goal was to continue the struggle against the Nazis, and because he believed (rightly) that this was no longer possible on French territory. De Gaulle's case is much more interesting than Mickey's because without knowing de Gaulle's goal and his reason for going to London, we would wrongly describe his action. A correct description of an action is thus difficult to separate from a knowledge of the why of this action.

Historians often seek to determine such whys from the perspective of the agent themselves, since—to continue our example—it is a question of reconstructing de Gaulle's motivations even if one does not share them. To understand why de Gaulle acted as he did, the historian transports himself in some way into the general's person and says: "If I were de Gaulle, what beliefs and what goals would I need to have in order to act as de Gaulle actually acted?"

In a more general manner, a narrative makes its subject more understandable for us by indicating these reasons for action. And to these, we also join reasons for events—most often otherwise called causes—which make everything that is going on around us understandable. Of course, not all events and all actions are explained in a given narration; just a few of them are enough so that the narrated thing is not completely opaque to us. Thus, a history book about the Resistance will say that Jean Moulin was captured by the Nazis even if we do not know exactly why; he was denounced, but we do not know by whom. These "why?"s without response are often like blanks in the text; and as long as the story is not entirely filled with them, it remains understandable.
From this point of view, the narrative of a historian is hardly different from the narrative of a novelist. The texture of the narrative in both cases is based on the same thing: actions, and a set of reasons for those actions (the agent's beliefs and goals); events, and explanations of those events.2 The preceding chapter discussed "why?" in the context of scientific explanations; here, we will talk about "why?" and its vital role in narratives of all kinds—from Tolstoy novels to the peasant folklore of medieval Gévaudan to the narratives that we make about our own lives.

Folk Psychology, Suspicion, and Artifacts

"Why?" can thus receive in response not only (1) a reason-for-belief or (2) a cause—and more broadly, a scientific explanation that gives an objective reason which might not be causal—but also (3) a reason-for-action, a goal, a "what for." This is the third and last type of "because," concluding the list of categories defining what we have called the grammar of why. This reason-for is thus a state of the world that does not exist yet but that one wants to realize: orange juice in Mickey's stomach, France liberated from the Nazis, etc. It motivates the action in a way similar to that in which the reason-for-belief justifies a belief. It explains the actions of others but also our own actions, since we must explicate it when we want to provide reasons to ourselves or to others—even if they remain implicit most of the time. Through the action of the agent, this idea of a certain situation to realize produces (sometimes) the state of the world that we aim toward. When the realization demands very few favorable conditions—like Mickey's orange juice, for example—then the final cause that is the reason-for-action will explain the realized state of the world.

Notice, then, that reasons-for-action make sense of an action whether the goal is achieved or not, and that this sets them apart from causes or other reasons for events. I could tell why someone rents a sailboat, crosses the Caribbean seas, and dives hundreds of times around a small desert island by saying that they are hunting for treasure in a forgotten sunken ship, even though they never come back with a treasure.
Unlike causes, which explain things or events that follow them as effects, goals, ends, intentions, and all kinds of reasons-for-action make sense of actions even when the end does not actually follow from the means. When its realization demands a great number of auxiliary causes, the goal is not always achieved, and it thus does not alone constitute a good explanation of


the current state of the world: thus, the goal of Christopher Columbus—to find a short maritime passage to the Indies—is not a "final cause" of his presence in India since he precisely never went there. What explains Columbus being in America is the combination of his goal and the fact that his initial geographic beliefs were wrong—namely, that India was situated across from Europe. Sometimes, the goal is achieved, so that it appears as the preferred explanation of the phenomenon; and when it is not, we have to consider the gap between the state of the world and the agent's beliefs. For instance, our treasure hunter falsely believed that there was a treasure, or that the treasure was reachable.

A reason-for-action—in other words, a response to "Why are they doing that?" or "Why must I do this?"—thus includes a goal as well as a belief that justifies choosing such means for such a purpose. And this is how we understand other people: by attributing beliefs to them, and by supposing that they have desires. Psychologists call this spontaneous capacity to understand the actions of others and ourselves through the dual concepts of belief and desire "folk psychology"; some even think that it already existed in the mental toolkit of the first Homo sapiens or of earlier hominids. Sometimes, the possibility of this understanding is simple: I infer the beliefs of others because they are doubtless the same as mine, since we live in the same environment. Of course, I am speaking here of basic beliefs: for example, if I were living in Mickey's house, I would believe that the orange juice is in the refrigerator because that is where cold drinks generally are, and I would thus imagine that Mickey—being an intelligent mouse—would put it there as well.
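The belief/desire schema just described can be caricatured in a few lines of code: an action becomes intelligible once it pairs with a desire and a supporting belief, and stays opaque otherwise. This is only a toy sketch of the schema; every name and string in it is my own illustration, not the book's:

```python
# A toy "folk psychology" explainer: an action is made intelligible by
# pairing a desire (a goal) with a belief about the means to reach it.
# All identifiers and strings below are illustrative inventions.

def explain(action, desires, beliefs):
    """Answer 'Why did they do that?' in belief/desire terms."""
    for goal, means in desires.items():
        if action == means and goal in beliefs:
            return f"because they wanted {goal} and believed that {beliefs[goal]}"
    return "opaque: no belief/desire pair fits this action"

mickey_desires = {"orange juice": "open the fridge"}   # goal -> chosen means
mickey_beliefs = {"orange juice": "there was orange juice in the fridge"}

print(explain("open the fridge", mickey_desires, mickey_beliefs))
# → because they wanted orange juice and believed that there was orange juice in the fridge
```

The point of the toy is purely structural: the "because" it returns cites a goal and a belief, not an efficient cause.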
At other times, I know the desires of others because they have expressed them, or because their behavior allows me to easily infer them: the act of drinking water implies a desire to drink—that one is thirsty, in other words. Sometimes, the context helps me to interpret these desires: someone raises their hand on Fifth Avenue right next to the road; I "see" them hailing a taxi because this gesture here is code for this kind of request. It is almost as if they were yelling out "I want a taxi!" But other times, the goal is difficult to infer: "What are they after?" "What do they want from me?"—to bring up again those questions that the subject worries about regarding the Other, according to Lacan and psychoanalysis.

Intentions target something and always have consequences—with some of these being predictable, and others less so. While it is rational to avoid ascribing the intention of launching a worldwide pandemic to the poor soul who (according to one of the narratives about the origins of COVID-19) ate a contaminated pangolin, since no rational worldview could have predicted this outcome, it is rational to say that someone wandering in the woods with a rifle and a hunting outfit intends to kill animals. This holds even if no animals are killed (because he's a terrible shot) or if a human is unintentionally killed (again because he's a terrible shot). There is an extended moral and legal lexicon for talking about states of affairs that are our goals versus states of affairs that are related to them and that are not unpredictable consequences (such as unintended but tolerated situations, situations for which we are unwillingly responsible and to which we contributed or that we facilitated, etc.). This very rich language allows us to express all the nuances between "my intention is X" and "I had no intention of causing or doing Y." Elizabeth Anscombe wrote an important book called Intention (1957) about the reasonings that underlie our intentions and the way we can infer them—given that they stand in between the avowed goals of someone and the set of states of affairs that could be connected to them causally and more or less predictably. It is also something that is debated in courts, when the responsibility of someone is controversially involved in some form of harm experienced by someone else.3

But when it comes to figures from the past, inferring goals and desires proves to be far less simple. Much of the context for an action is composed of other agents and their implicit social norms.
When we do not know them, as is the case when dealing with history, we are sometimes barely able to say why a certain person did a certain thing; this is why historians use documents, witness accounts, and all the archival data that they can gather to answer this question.

Could the structure of this response to action-based why-questions—as it is formulated by historians and ordinary people alike when confronted with the actions of others and themselves, and which is thus based on beliefs and desires—be a conventional construction allowing people to discuss among themselves but ultimately covering something far more profound that is kept hidden? Such an idea of a hidden truth is tempting for our contemporaries, marked as they are by those thinkers whose focus was "suspicion." Grouped together under this label at the end of the nineteenth century, Nietzsche, Marx, and Freud indeed argued, in extremely diverse ways, that humans are relatively absent or foreign to themselves, and that, in particular, we do not know what makes us act the way we do. For such thinkers, our reasons are always rationalizations that come after the fact, and we are blind to our actual motivations.4

However, the structure itself of "Why are they doing that?" is not affected by these philosophical suspicions. Let's look at the psychoanalytic explanations that Freud puts forth. The subject has forgotten their umbrella at the store: Freud would claim this is a parapraxis—that the act of forgetting manifests an unconscious desire to see again the attractive grocery clerk who seduced the subject without them being aware of it. But still in this case, when asked, "Why did they forget their umbrella?" we would respond by saying, "Because they had the unconscious desire to see the attractive clerk again." In other words, the explanation goes back once again to a desire—even if the subject isn't aware of it. Generally speaking, the masters of suspicion certainly did away with the naivete of believing that the desires that motivate individuals are always the desires that they are aware of having.
For Marx, the social situation of subjects defines beliefs and interests that orient their actions toward the preservation of their social privileges, which forms the constant object of their desires; for Nietzsche, the individual is made up of more elementary wills that struggle against each other until one imposes its agenda—which the subject will attribute to themselves as their own and which will then be rationalized. But the very grammar of "Why are they doing that?" is still the same: it is a matter of identifying desires (whether known or unknown), and of indicating the beliefs within the framework in which these desires will be tentatively realized.


We must also note that this goal or reason-for is attributed not only to people and their actions but also to the things they make. Aristotle already distinguished two different meanings of making: to perform an action (praxis in Greek, which gave us "practice")—like running, where the action does not have any independence outside of the agent; and to make something in the sense of manufacturing it (poiesis in Greek, which gave us "poetry")—like a bridge, a cathedral, or a painting, where the manufactured object exists independently from its maker. The "why" behind the actions we perform is, as we have seen, a reason-for-action, a goal that has been distilled into belief and desire. Let's now imagine that we are in the remains of a Neolithic campsite and that we find a pointed and slender metallic object that has obviously been crafted by someone. We would probably say that it is most likely an arrow, or a weapon of some kind, that was used for hunting or for self-defense against enemies. The "why" of this object—"Why is it here?" "Why does it have this shape?"—would also receive a response in terms of goals and intentions. And certainly not those of the object itself, but those of its creator: to hunt, to make war. In spite of its apparent difference, "poetic" (or, we could say, technical) activity is thus explained by the same reference to a goal as practical actions. The reason why these technical objects exist is this "what for"—their usefulness. It is helpful to remember here that Aristotle's favorite example for illustrating final cause is an artistic production: a statue.

Understanding or Explaining

We often correctly insist on the difference between first-person and third-person knowledge: we have direct access to our own interior mental states, beliefs, and desires, and only indirect access to the mental states of others. I infer the latter on the basis of their behavior, their words, and my own past behavior.
But from the point of view of the question "why?", whether in regard to ourselves or others, the structure of the response is the same, invoking goals—desires, wills, etc.—and beliefs.

This type of response, on the other hand, is very different from the explanations of an event by causes, which do not allow any reference to intentions, goals, or wills. At the end of the nineteenth century, when the philosopher Wilhelm Dilthey wanted to understand the specificity of the nascent human sciences—sociology, linguistics, psychology—he developed the conceptual distinction between "explaining" and "understanding." Explaining, as we saw in the preceding chapter in regard to the natural sciences, consists of showing a cause or referring to a law of nature. Understanding pertains more to the approach of the human sciences and consists in reexamining reality from the point of view of the humans whom we are studying, in order to see things from their perspective and discern why they behave the way that they do. Explaining is in the third person, understanding in the first—in the sense that the interpreter positions himself in the place of this first person. Understanding is precisely the comprehension of reasons-for-action. Max Weber, one of the founding fathers of modern sociology, conceived the idea of an ideal-type: a type of individual with typical beliefs, desires, and social habits, whom we can understand, and whose existence in turn allows us to explain the social facts of his time. Think of the Protestant capitalist entrepreneur, or the religious ascetic leader, two famous ideal-types that are supposed to allow us to make sense of capitalism or of certain moments of Christian expansion.5

The sense that the explaining/understanding framework maps onto that of the natural sciences/human sciences, however, is much weaker now than it was a century ago—because in economics or sociology, as much as in cognitive science, we have explanations in the apparent form of laws that can hardly be formally distinguished from explanations in mechanics or biology.
For instance, Joshua Epstein (with Robert Axtell) in Growing Artificial Societies (1996) builds simulations of social phenomena in the form of "agent-based models": computer simulations in which individuals behave according to simple rules determining their actions on the basis of what their neighbors do. (Think of a standing ovation, in which my standing up or not depends upon how many people around me do so.) He intends to artificially simulate existing social patterns, which would therefore provide us with an idea of the individual-based mechanisms yielding these patterns.
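The standing-ovation rule can be sketched as a minimal threshold model in this spirit. Epstein's actual models are richer; the deterministic, evenly spread thresholds and the global (rather than local) "neighborhood" below are my simplifying assumptions, not his code:

```python
# Minimal agent-based sketch of a standing ovation: agent i stands up
# once the fraction of the audience already standing reaches its
# personal threshold. Thresholds and update rule are illustrative only.

def standing_ovation(n=100, rounds=100):
    thresholds = [i / n for i in range(n)]     # agent 0 needs no prompting
    standing = [t == 0.0 for t in thresholds]  # one spontaneous enthusiast
    for _ in range(rounds):
        frac = sum(standing) / n               # what each agent "sees"
        standing = [s or frac >= t
                    for s, t in zip(standing, thresholds)]
    return sum(standing)                       # how many end up standing

print(standing_ovation())
```

With one enthusiast and evenly spread thresholds, each round tips exactly one more agent, so a full ovation cascades from purely individual rules; the collective pattern is explained by an individual-level mechanism, which is the point of the approach.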


In the same vein, the population biologist Peter Turchin launched several years ago the project of a "cliodynamics": a study of human history based on long-timescale quantitative data, in order to extract from them recurrent historical patterns—concerning, for instance, the growth and decline of civilizations, the fluctuating intensity of warfare, etc.—and to model plausible mechanisms that account for them.6 Clearly these novel approaches to the social sciences do not share an ideal of understanding why people behave as they do; rather, they focus on explaining social dynamics and patterns.

Beyond sociology, this doubt about explaining vs. understanding as the mark of the difference between the natural and the human sciences becomes even more serious given that parts of sociology and psychology are now extensions of the natural sciences thanks to Darwinian biology and neurology; indeed, Darwin's books The Descent of Man (1871) and The Expression of the Emotions in Man and Animals (1872) gave rise to research programs that aim at explaining human beliefs, institutions, and behaviors from an evolutionary and often selectionist perspective.7 But for the human sciences, this understanding/explaining distinction still indicates that the knowledge of "why" often risks being heterogeneous to what happens in the natural sciences.8

Even though we maintain that human sciences like psychology or sociology "understand," they still do not "justify" (although it is in fashion to think otherwise nowadays). They are certainly focused on discovering an agent's reasons for action; but determining that an agent had a specific reason-for-action doesn't mean that it was a good reason. Understood within this context, "justifying" concerns a moral justification along the lines of "the agent was right to do that." But having a reason to do something does not make it reasonable, does not mean that one is right to do so—with right and wrong being understood here in relation to a certain, often moral, norm.
Justifying, in the sense of the moral justification of actions, is a different practice from that of responding to the question "Why are you doing that?" Al-Qaeda practices mass terrorism because it wants to liberate Middle Eastern land from the American military, but such a reason does not ipso facto prove that it is good to do so.


Causes or Reasons?

Goals, when considered as reasons for action, and the efficient causes of natural science are thus two types of competing and incompatible explanations. The philosopher Fred Dretske subtitled his book Explaining Behavior—now a classic in the philosophy of mind—Reasons in a World of Causes.9 This phrase encapsulates the metaphysical problem within questions like "Why did Mickey open the fridge?" Nature is a totality of events connected by causal relationships that are possibly subsumable under natural laws; as Spinoza said, since human beings are natural things, "man is not an empire within an empire" (Ethics, III, Preface). From this angle, can the reasons for action that we express in terms of a goal (intention, desire, will, etc.) be the real explanations of actions? After all, if the human being is a natural being, and if nature is entirely governed by causes, why should we add a level of "why?" that is expressed in terms of beliefs and desires?

Within the framework of reflecting on the relationships between mind and body, the American philosopher Jaegwon Kim formulated the idea of a "causal closure of physics": any physical event (the door of the refrigerator opens while Mickey's hand is on the handle) has a physical cause (the movements of Mickey's muscles, the neuronal motion that initiated them, Mickey's brain state when he thinks "orange juice in the fridge, I'm thirsty"), and this physical cause is sufficient to produce such an effect.10 From the existence of this closure, we deduce that reasons for action cause nothing, since all the causes are already given by the various physical brain and bodily states. These reasons are certainly there, and are apparently causes, but the true causal explanation of the actions is not within their domain.
Certain philosophers take this to an extreme: they eliminate the level of reasons-for-action altogether and are called "eliminativists." Following their trailblazer Patricia Churchland (the author of Neurophilosophy [1986]), they think that the language of beliefs and desires is as unscientific as the sentence "the sun rises and sets": it certainly helps us understand each other but refers to nothing real in itself. Here, the real explanation of our actions would thus be the causal relationships taking place between the neurons of our brains, and then those between these same neurons and the exterior environment. In the end, Mickey has the impression that he decided to drink orange juice and de Gaulle had the impression that he chose the Resistance; but these are in fact the consequences of small events that took place between their synapses, which were themselves consequences of events within their brains, of events existing outside of them, and in the outer world. Or rather: such choices really just are these small events.

Thus, if the world is indeed a compact universe whose cement is causality, is there space for "reasons-for-action" in this world of causes? Or are goals and beliefs only a useful fiction that allows us to communicate with each other while not having any real consistency? I do not want to go into this gigantic philosophical question, which ultimately involves the question of human freedom. There is a whole subdiscipline of philosophy concerned with "free will," in which "compatibilists" fiercely fight hardcore materialists, the former assuming that humans are free to choose even though the world is a material world, while the latter claim that science proves materialism, hence determinism, hence that no free will exists. Modern-day freedom fighters repeat classical debates with which Spinoza and Leibniz were acquainted. Spinoza indeed famously compared the much-praised freedom of humans to a stone that, endowed with the faculty of thought, would say "I'm free to roll down the mountain," whereas it is only undergoing the force of gravity with no way to opt out. Freedom would only be the ignorance of the determinations that move us, a stance that has been constantly repeated from Spinoza to Bourdieu and other contemporary sociologists.

A Short Insight into Free Will

The issue of free will parallels here the question of the ontological consistency of mental states debated by Churchland, Kim, Dretske, and others.
The causal closure of physics, as Kim says, entails the epiphenomenality of mental states as causes, and therefore—if one assumes that to be implies having some effect upon other beings, a quite widely shared assumption about being—the inexistence of mental states. Similarly, determination by exclusively physical causes prevents any self-determination of the agent, hence free will as it is usually defined.11 With Kim, Spinoza wins. Physicalism is the current name of this materialism supported by the causal closure thesis, and justified by modern science, in which no "nonphysical" cause (no nonefficient cause, as indicated in chapter 2) is accepted.

However, I doubt that the a posteriori argument from physics or biology, of which many free will unbelievers (often coming from the exact sciences) think highly, is conclusive. Such an argument would go something like this: our best science shows that nature is made of matter and that nothing else exists; matter is inert, its motion is determined by other matter, and so on to infinity; thus, nothing moves unless some form of matter determines it; and thus free will is impossible. But, as has been repeatedly argued by philosophers who have seriously investigated the foundations and conditions of the natural sciences, determinism is an a priori condition of science. As we saw, Kant formulated this principle in his Second Analogy of Experience: "Everything that happens presupposes something that it follows in accordance with a rule" (Transcendental Analytic, Critique of Pure Reason). Claude Bernard, a nineteenth-century French physiologist whose work on experimental physiology was groundbreaking and highly influential, reflected on scientific methodology in a widely read book called An Introduction to the Study of Experimental Medicine (1865).
He explains that "determinism" should be the principle of science because it means that the same causes produce the same effects, and that no experiment is conclusive unless one assumes such a determinism—for without it, an experiment can show that B has followed from some intervention on A, but cannot show that this intervention causes B, since nothing prevents another identical experiment from displaying C as following A. Experimental physiology (and by extension experimental science in general) assumes determinism, whether explicitly or not. Thus, no experimental scientist can prove without logical circularity that determinism is true, since they have already presupposed it.


This circularity argument has been put forth several times over the past two centuries. It concludes that no one can scientifically prove that free will does not exist—but also that no science can prove the existence of free will, which is excluded ab initio by its methodology. Thus, it may just be that the question of whether free will exists is poorly constructed—along the same lines as a child asking, "Do aliens exist?" A Wittgensteinian would say that, given the a priori impossibility of justifying one or another answer within a scientific context, it is indeed a misleading question. Personally, I would stand with this skeptical stance here. But it does not prevent us from thinking about reasons and causes—and, setting aside any ontological questions concerning freedom, from wondering how reasons-to-act can coexist with causes as legitimate answers to why-questions within a rational discursive framework.

A Plea for the Reality of Reasons-for-Action

From my viewpoint, in order to argue for the irreducibility of reasons-for-action, we should point out that our reasons-for-action do not follow the same logic as causality—since a goal does not explain along the same lines as a cause, which means that the latter cannot immediately render the former illegitimate. A reason-for-action does not entail that the action follows from this reason as an effect follows from a cause, since the reason often consists in the intended effect of the action, meaning that the action actually causes the realization of its reason. Serena Williams (a transcendent player who has her eye on retirement as I write this) practices day and night in order to regularly win Wimbledon tournaments; her training puts her in a position to win, and winning Wimbledon is her reason for doing it; but training day and night is not an effect of her winning Wimbledon, which is still her reason for training so intensively.
And after all, even if she does not win Wimbledon, "winning Wimbledon" is still her reason for practicing day and night. By contrast, if the supposed neuron-level cause of her practicing day and night is absent (those causes that eliminativists think are the only real efficient reasons on Earth), she won't practice day and night. Therefore, a reason and a cause of the same event are not competing hypothesized causes, where one would eliminate the other if it were proven true.

In addition, in the previous chapters we saw that causality was one thing while the justification of beliefs was grammatically something else, even though it is also expressed with "because." The reason-for-action is then a third thing, and it should therefore be no more eliminable by the findings of natural causality than justification was. Just as there are relationships between facts that form the justification of a belief, there are relationships between facts that are described by the term "reasons-for-action"—relationships of motivation: "If Mickey has the goal of drinking orange juice, he will open the refrigerator" states such a relationship of motivation. To understand that the relationship of motivation described here is real—and as real as a relationship of justification—there is thus no need to suggest that a goal is a special entity that ideally exists "next to" the world and has concrete effects in the natural world. The goal is not something that exists alongside the causes that are "in the brain" (neural states, etc.), and it does not represent an alternative cause of the drinking of the orange juice.

I won't settle this century-long debate about reasons and causes here; but the heterogeneity among the three types of "reasons" as answers to why-questions is a strong argument against seeing causes as alternatives to reasons in the explanation of behavior. Eliminativism, and even the supposedly obvious "causal closure of physics," deny the pluralism of reasons I have defended here by seeing the complex grammar of "why?" as a competition for the unique proper explanation of an action.
In doing so, one commits a metaphysical confusion that runs parallel to the much more classical conflation of causes with intentions—which gives rise to theological worldviews. Moreover, in the previous chapter we also saw that causality is far from exhausting the grammar of scientific explanation. It follows that even if the expression "reasons in a world of causes" is striking and captures the singularity of human action, it forgets the very complexity of scientific explanation itself. Within nature, there are not only causes capable of explaining things: the world of science is not just a "world of causes"; it also includes reasons that are structural explanations of facts and even of generic patterns. And this insufficiency of cause as a response to a why-question suggests that reasons-for-action cannot be eliminated by causality when they offer another answer to the question "why?" Briefly: if some explanations in the natural sciences are not causal, then pointing to reasons as explanations of an action is not at all illegitimate. In fact, it can be a bona fide explanation even though it does not mention any causes, such as the neural correlate of the relevant mental state of the agent. But notice that this commits us to no ontological stance that would contradict physicalism, nor does it corroborate physicalism. This parallels the case of structural explanations—from whose existence it would be erroneous to infer a commitment to some Platonism about the existence of mathematical objects.

Economy as a Theory of Action

But why are we able to explain so well the actions of Mickey, General de Gaulle, characters from movies and novels, and our friends and loved ones? Because we suppose that they are likely to have beliefs and goals, unlike clouds or mountains. Economists have an interesting way of understanding this massive assumption, because it introduces the notion of "rationality." Let's immediately note that "economics" here does not mean the study of falling interest rates or of the impact of labor laws on the GNP—but rather what has, since neoclassical economics, been a theory of individual choices, one easily transformed into a theory of action, since acting is ultimately choosing to do one thing instead of another.
This theory of rational choice lies at the foundation of economic models of the actions of agents (microeconomics) but is studied mainly by a small group of economists. (It is now also being developed quite differently by some authors who claim to be closer to psychology—we sometimes speak of behavioral economics or economic psychology.)12 It is interesting to us here, first, because it offers a rigorous and detailed approach to the reasons-for-action that answer certain why-questions; and second, because it allows us to draw certain fine distinctions concerning rationality and reasons—particularly in how it distinguishes a minimal sense of the rational agent's reason from a richer one. To explain the choices of agents, microeconomists posit two types of entities—preferences and utilities. Preferences are simply the way in which one ranks the objects of one's actions from best to worst; utilities are a somewhat quantitative measure of these preferences.13 The utility of something often has the property of decreasing after a certain consumed quantity: if dark chocolate is my favorite food, the first square from a bar will bring a higher utility—in terms of satisfaction, well-being, etc.—than the rest of it; but the 63rd square will doubtless have a lower utility than the first one. The utility brought by item n+1 as compared to item n of the same kind is called marginal utility; and as a rule, the marginal utility of a given object decreases. We can then assume that agents always have to choose between different combinations of a particular set of goods (for example, "chocolate only"; or "67% chocolate, 15% caramel, and 18% strawberries"; and so on). Each of these "baskets of goods" has a utility for the agent that we can calculate. For the economist, the human subject maximizes this utility: they always make the choice that brings them the greatest utility. We thus have here a strong explanatory principle for human behavior. Of course, it is not easily applicable: in a precise case, it supposes that we know the preferences of a subject; yet in general, we know them only through the choices they make (this is called "revealed preference")—which raises an obvious problem of circularity.
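The microeconomic apparatus just described (baskets of goods, diminishing marginal utility, and choice as the maximization of utility) can be made concrete in a few lines of code. This is only an illustrative sketch: the square-root utility function and the preference weights below are invented for the example, since the text specifies no formula.

```python
import math

# Hypothetical preference weights: dark chocolate is the favorite good.
WEIGHTS = {"chocolate": 3.0, "caramel": 1.5, "strawberries": 1.0}

def utility(basket):
    """Total utility of a basket of goods.

    The square root builds in diminishing marginal utility: each extra
    square of chocolate brings less satisfaction than the one before.
    """
    return sum(w * math.sqrt(basket.get(good, 0.0)) for good, w in WEIGHTS.items())

def marginal_utility(good, n):
    """Utility added by the n-th unit of a good, on top of n - 1 units."""
    return utility({good: n}) - utility({good: n - 1})

# The marginal utility of chocolate decreases square after square.
mus = [marginal_utility("chocolate", n) for n in range(1, 5)]
assert all(earlier > later for earlier, later in zip(mus, mus[1:]))

# The rational agent picks the basket with the highest utility.
baskets = [
    {"chocolate": 10.0},                                      # chocolate only
    {"chocolate": 6.7, "caramel": 1.5, "strawberries": 1.8},  # a mixed basket
]
best = max(baskets, key=utility)
```

Note the result: even though chocolate carries by far the largest weight, diminishing marginal utility makes the mixed basket beat the chocolate-only one, which is exactly why the 63rd square matters less than the first.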
We should also note that "goods" are not necessarily material: one can talk about the utility of a symphony, or that of solving a geometry problem (for certain subjects, this is very high). Such generality allows for the near-universal applicability of the theory. Another point worth highlighting is that the monetary value of things does not directly intervene here; it is one aspect of utilities, but the determination of its weight depends precisely on the subject's preferences (for some, money is almost everything; for others, almost nothing; with every variation in between). In this sense, economics is a very powerful means of modeling—if not explaining—behaviors and choices. If the subject chose to live somewhere close to three chocolate makers, it is because dark chocolate was their preference among all possible goods. Preferences can be reformulated in terms of goals and beliefs. For example, if Mickey's goal is to drink, and to drink orange juice in particular, his preferences are ordered at that moment in such a way that drinks are the preferred things and orange juice the most preferred of all—at least among the drinks he thinks are in the refrigerator. Conversely, given the preferences of an individual, their desires will be directed toward the choices that bring them the most utility, i.e., what they prefer. Compared with folk psychology, the economist's formulation allows us to model choices and actions more quantitatively, as well as to draw finer distinctions. In particular, it takes into account the fact that the satisfaction of a desire is exposed to fatigue—as in the example of the 63rd chocolate square. Speaking in terms of preferences and utility thus allows us to respond in a systematic way to "Why are they doing that?" while revealing several important aspects of what an action and an agent are. Indeed, given some beliefs about the world, for preferences and utilities to determine a choice, only one condition is required: that the individual systematically choose the largest utility. This is what economists call rationality. Such rationality requires at least one condition so that a maximum utility is guaranteed—namely, what is called the "transitivity of preferences": if I prefer chocolate to coffee, and coffee to strawberries, I must then prefer chocolate to strawberries.
This may seem trivial, but when this clause is not respected we simply cannot determine what maximizes utility.14 Some will say that humans are often irrational because their preferences are not transitive. The matter is more complex than it seems. For example, let's imagine that Albert prefers chicken to eggs, and prefers eggs to fried fish, but prefers fried fish to chicken: a clear violation of the transitivity of preferences, and an apparent case of irrationality. But if, in fact, the eggs in the first preference are prepared as an omelet while in the second they are hard-boiled, then it is no longer a question of three compared items but of four, and the question of transitivity no longer arises. Alternatively, when someone prefers fried fish to omelets, omelets to burgers, but burgers to fried fish, we can refine the description and say that it is one particular burger that is preferred, and that others are less appreciated. Finally, if the items are exactly the same, it is still possible to say that the compared items, when the agent fails to satisfy the transitivity of preferences, are temporally situated. In that case, we are no longer talking about a "burger," but about a "burger at 12:17"—which means that there is no issue of transitivity, since it is perfectly rational to have placed a higher value on a "burger at 12:17" but to ultimately prefer "fish at 12:33" to a "burger at 1:12." And given that no actual series of choices takes place instantaneously, this strategy always works. So it is often easy to show that cases presented as transgressions of the transitivity of preferences can be redescribed as standard cases. We must insist on this point because it leads to a thesis that is somewhat trivial but important: if we can constantly ask and often discover why people do what they do, it is because we assume them to be rational. Only this supposition allows us to move from actions to motives—to say that someone broke several eggs to make an omelet because omelets were at the top of their list of preferences.
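Transitivity, and the way a cyclical preference pattern blocks utility maximization, can likewise be checked mechanically. A small sketch, using the chapter's own examples encoded as a strict preference relation, where the pair (a, b) means "a is preferred to b":

```python
def is_transitive(prefs):
    """Whenever a > b and b > c hold, the relation must also contain a > c."""
    return all((a, c) in prefs
               for (a, b) in prefs
               for (b2, c) in prefs
               if b == b2)

def top_choices(items, prefs):
    """Options preferred to every other option: what a maximizer would pick."""
    return [x for x in items if all((x, y) in prefs for y in items if y != x)]

# Chocolate > coffee, coffee > strawberries, chocolate > strawberries.
tidy = {("chocolate", "coffee"), ("coffee", "strawberries"),
        ("chocolate", "strawberries")}

# Albert's apparent preferences: chicken > eggs > fried fish > chicken.
cyclic = {("chicken", "eggs"), ("eggs", "fried fish"),
          ("fried fish", "chicken")}

assert is_transitive(tidy)
assert not is_transitive(cyclic)

# With transitive preferences a best option exists; with the cycle, none does.
assert top_choices(["chocolate", "coffee", "strawberries"], tidy) == ["chocolate"]
assert top_choices(["chicken", "eggs", "fried fish"], cyclic) == []
```

The redescription strategy discussed above corresponds to enlarging the set of items: once "eggs" splits into "omelet" and "hard-boiled eggs," the three offending pairs no longer chain together, and the cycle dissolves.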
Yet this robustness of the transitivity of preferences, which can most of the time be saved by redescribing the basket of choices, poses an epistemic problem: there may be no way to show that an agent is irrational, which deprives the concept of rationality of any empirical content (since no empirical data can inform an attribution of rationality). This makes the principle of rationality an a priori principle, like the principle of inertia—both share this impossibility of being disproven while playing a structuring role for science.15



The Rationality of Economists and The Three Musketeers

Historians who examine documents from the past to understand who did what and why implicitly make this supposition of rationality. Up to a point, it allows us to imagine that people in the past acted much as we do, and that if we reconstruct the nature of their preferences—which may be very foreign to us—we can understand their reasons-for-action. The principle of human rationality is at the root of our ability to understand others, to reconstruct the past, and, more generally, to follow a narrative. For this type of understanding, it plays the same role as Leibniz's principle of reason ("everything has a reason"), which we encountered earlier when discussing the justification of beliefs and the explanation of events. Conversely, if we did not presuppose it, and thus could not conclude from our knowledge of someone's preferences or utilities that they would aim at a particular goal (or rather that they performed a particular action with this goal in mind), we could neither explain nor predict the actions of others—which would make our social lives much more problematic. More broadly, since we could no longer make sense of our actions based on our goals or act on the basis of our preferences, we simply would not be able to understand ourselves. Of course, what we have been dealing with so far is a minimal form of rationality, which includes the transitivity of preferences. The nature of the preferences themselves, or of the utilities, has not been discussed; and it is entirely possible to have preferences that many would deem unreasonable. For example, the Nobel Prize–winning economist Gary Becker's work on addiction considers certain subjects to be rational when they prefer shooting heroin to anything else and explain their behavior on this basis.16 Such a minimal rationality calls for three remarks:

■ Beyond this rationality, we can distinguish a rationality of higher degree that concerns the preferences themselves. It is generally considered more reasonable, as a preference, to seek satisfaction by learning a musical instrument rather than by consuming heroin (there are numerous criteria for this—sanitary, social, etc.). It is worth noting that this kind of rationality (probably best classified as a moral or political rationality) is less formal than the principle of minimal rationality.

■ In this way, we better understand the difference between understanding and justifying an act (indicated above). Understanding means reconstructing the set of preferences of the agent—as well as their beliefs—so that the act effectively appears as the maximization of the agent's utility. These preferences themselves can nevertheless be judged more or less good in terms of external moral criteria, or even in light of the agent's later preferences. When the minimal rationality we have been discussing is supposed, identifying an agent's preferences means characterizing their reasons, while justifying means evaluating these preferences as more or less reasonable—in other words, showing that the agent had good reasons. The latter involves a rationality of higher degree (sometimes called "practical," in Kant's terminology) concerning goals or preferences.

■ In the end, this duality of degrees implies that the question "Why are they doing that?" can receive a dual response. It becomes, first, a question of reconstructing beliefs and desires (in ordinary language) or beliefs and preferences (in the language of economists): "Mickey took the orange juice from the fridge because his ultimate preference was for orange juice rather than water, milk, or Coke." But one can always then ask why these preferences were what they were. Here, another order of considerations comes into play: beyond the agent, they concern the agent's society and the historical world to which the agent belongs. In what sense?

Let's open The Three Musketeers. At the start, d'Artagnan comes across one musketeer, then a second, and then a third (Athos, Porthos, and Aramis)—each of whom in turn behaves in a way that d'Artagnan considers offensive. He challenges each of them separately to a duel in a nearby meadow and confronts them. Just as the first duel is beginning, the king's guards arrive and try to arrest them for illegal dueling; a fight ensues, and thus begins a long friendship between our four musketeers. The novel's famous opening is crystal clear: d'Artagnan instigates the duels because he considers himself insulted (belief) and wants to defend his honor (desire, intention). Among his preferences, defending his honor is placed above his health and well-being. We thus understand d'Artagnan because we perceive his desires and preferences, even if these seem absurd or unseemly to us. If I ask why d'Artagnan provoked a duel, I arrive at the notion of honor—of which I can then ask why it had so much importance for these people. This is a question about the formation of the musketeers' preferences. It lets us see that preferences are always historically anchored. If we have difficulty understanding the past, if its customs seem strange or unreasonable or even crazy ("What a bizarre phenomenon dueling was!" "What possessed those Aztecs to perform human sacrifices?"), it is not because the people were irrational—since a minimal rationality must be supposed in order to give meaning to their actions—but rather because their preferences seem unreasonable in terms of second-degree, or practical, reason. Yet people do not choose their preferences the way a child chooses a treat or two in a giant candy shop. If d'Artagnan demands a duel, it is because duels were at the time a solution to conflicts of honor—even though, in the novel, they have been forbidden by order of the king; and more profoundly, it is because honor was considered an essential virtue that had to be defended at all costs, and one that was lost precisely by refusing to defend it.
The very essence of duels conveys the idea that one values certain things even more than one's own life, which means that one proves one's honor by defending it at the risk of death. Today, duels are no longer even in the register of possible preferences; and honor now vies for priority among many other common preferences (well-being, personal fulfillment, human dignity, health, etc.). The fact that duels were socially valued to the point of becoming a norm at the time of the musketeers thus reveals that for many—as for d'Artagnan—defending one's honor stood at the summit of personal preferences. This story illustrates a more general idea: if we want to reconstruct the preferences of agents, and thus say why they have the particular goals they pursue, it is necessary to identify which preferences are available and socially valued in their world. On this point, the question "Why are they doing that?" leads us to sociology. I am by no means claiming that historical sociology fully explains the goals of agents by their social group and historical background; I am simply saying that it allows us to understand the horizon of possible preferences within which the agent will "choose" their own—with these preferences then explaining their choices and actions. Today, d'Artagnan would not fight a duel, because duels are no longer available at the preference store and honor is no longer a dominant social norm. Maybe he would sue for defamation; maybe he would launch a wave of hurtful messages against Aramis and Porthos on Twitter; maybe in the end he would simply go home thinking of other things, like planning a squash match with friends or scheduling a yoga session. In any case, The Three Musketeers would not exist.

The Infinite Regression of Means and Ends

In this chapter, we have been deliberately vague about terms: desire, will, and intention are different things, but all have an intentional structure—that is, a structure that aims toward a nonexistent state of the world (a goal), whose representation and valorization are a reason that the agent acts as they do.17 The causes of events respond to one kind of why-question, while goals, desires, and intentions respond to a different kind of question that could be formulated as "What for?" (Pour quoi?). These two responses to different "why?" questions present an important structural analogy: they define concept pairs.
On the one hand, we have cause and effect; on the other, the goal and its means. The cause causes the effect; the means produce the goal—but in addition, the goal is what motivates the means to be implemented. This second clause radically distinguishes the goal/means pair from the cause/effect pair.18 Pursuing this parallel, we see that the goal, just like the cause, gives rise to possible iterations. Indeed, in both cases, when someone answers "because" to the question "why?", we can always keep asking "why?" The cause of the fire was lightning, the cause of the lightning was the state of electricity at that particular moment in the clouds, the cause of the state of electricity was a previous meteorological state, and so on, perhaps to infinity. We also saw above, in the language of economists, how the structure of the reason-for divides into two distinct levels, two types of rationality, and two why-questions—one concerning what maximizes utility, the other concerning the formation of preferences. But if we now leave this language aside and consider our daily language and its references to goals and means, it is clear that a goal can always be the means to another goal, and so on. For Mbappé, dribbling the ball is a means of scoring a goal. Scoring a goal is a means of winning the match. Winning the match is a means of winning the World Cup. Winning the World Cup is a means of being among the best players in the world. And so on. Like the cause, the reason-for as a response to "Why are they doing that?" can be iterated indefinitely, thanks to a purpose's reversibility, which allows it to become a means toward a broader end. But do we really mean "infinite"? Or does this regression stop at some point? Kant gave the name "end in itself" to those goals that are ends from every point of view and are never a means to anything else.19 For him, only a subject capable of following a moral law given by their own reason—a human, in other words—would be such an end in itself.
Before him, Aristotle—especially in his Nicomachean Ethics—suggested an analysis that is probably closer to what I am describing here. We have intentions and desires that—once they have been satisfied—become means to other ends in turn; but ultimately these goals are in the service of our own fulfillment, or what we sometimes call (along with Aristotle) "happiness."20 Happiness in itself is supposed to be universal, but the subjective determination of happiness varies, just as preferences vary according to subjects. The fact that all our different goals are ordered in the pursuit of happiness constitutes the ultimate term of the regression from means to ends that we have been discussing—a regression which is therefore not infinite. The language of economists on the subject is almost identical: after all, what is the maximization of utility, which guides our actions and our choices, if not another name for "doing what makes us happy" (even if this happiness comes at the cost of our well-being, our health, our comfort, our reputation, etc.—since the preferences of the agent, like those of the heroin addict, can conflict with these same values)? Perhaps the term "happiness" is inappropriate here, especially if we think of all those cases where utility is maximized but the agent looks very unhappy from the outside. Indeed, one of the lessons of psychoanalysis consists in showing that the "unconscious desires" that push us can be relatively indifferent to what is generally called happiness. Freud devoted his influential text Beyond the Pleasure Principle (1920) to his discovery that what he called the "pleasure principle" (Eros), which is ultimately driven by erotic gratification, does not govern all of human psychic life. He went on to make a startling argument: one of the drives that governs humans is that of succeeding in dying one's own death, which he calls the "death drive" (Thanatos).
But let’s not go further into these difficult subjects and instead highlight this simple result: questions along the lines of “Why are they doing that?” must receive answers in terms of reasons-­for, or goals—­which are ultimately linked to each other and articulated in one final goal, which varies from individual to individual, and whose general concept is not uncontroversial.



Why? or What For?: Intentional Bias

I emphasized earlier the parallel between causes and reasons-for, or goals; I also indicated that two different "why?"s were at play. Nevertheless, the confusion of the one with the other—of "why?" with "what for?"—occurs regularly in many diverse contexts. The denunciation of this confusion is even a recurring trope in philosophy. Confusing "cause" and "reason-for" amounts to responding to the question "why?" with an intention instead of a cause, and thus to supposing that there is an agent at work who is responsible for the phenomenon that interests us. This confusion is present in many religious notions—such as all those concerned with atonement and the retribution of actions by God. Saying that someone has fallen ill because they sinned essentially explains their sickness as a punishment, and thus as the intention of a divine agent (since by definition a punishment implies someone with a will to punish). Psychologists sometimes call this tendency to look for intentions where causes explain phenomena "intentional bias." And if we look at the psychological development of the child, particularly from the perspective of Jean Piaget, we know that small children quite generally respond to "why?" with intentions and agents. The Swiss psychologist also spoke of an "animistic stage"—in the sense that animism is a religion that attributes desires and goals to natural entities (plants, animals, etc.).21 Becoming an adult thus involves progressively discerning the two registers of response to "why?" and the occasions on which one (cause, if not scientific explanation) or the other (intention, goal) must be invoked. Intentional bias illustrates the difficulty of separating these two registers of "why?"—even in adulthood.
The very idea of God in Judeo-Christian religions, just as much as the gods of polytheistic religions, proceeds from this kind of confusion in the grammatical registers of the question "why?" In the Old Testament, God is explicitly represented as a moral agent who is often angry, or who sometimes simply wants to put his followers through trials in order to test
their degree of faith—­as is the case with poor Job, who has to suffer the death of his children, the loss of his home, and physical deformation so that God can verify that he believes in God more than anything and despite everything. “Why did Job’s crops go up in smoke?” would require an answer along the lines of “Because God wanted to test Job.” Whether it is Poseidon vindictively pursuing Odysseus with tempests in order to prevent him from returning home, or Zeus unleashing hurricanes and lightning, the confusion between intentions and natural causes that presides over their invention is striking. The critique of “final causes” is an essential part of the general critique of religion and morality in Spinoza’s Ethics. In the famous pages of Book One’s Appendix, he rejects all forms of explanation by intention on the basis of an ontology in which everything is effect and cause—­flowing necessarily from what he calls “substance,” or the reality of nature. He further maintains that the prejudice of most humans in favor of final causes creates the main moral concepts that structure their lives. He writes: Now all the prejudices which I intend to mention here turn on this one point, the widespread belief among men that all things in Nature are like themselves in acting with an end in view. Indeed, they hold it as certain that God himself directs everything to a fixed end; for they say that God has made everything for man’s sake and has made man so that he should worship God.22

He continues: Further, since they find within themselves and outside themselves a considerable number of means very convenient for the pursuit of their own advantage—­as, for instance, eyes for seeing, teeth for chewing, cereals and living creatures for food, the sun for giving light, the sea for breeding fish—­the result is that they look on all the things of Nature as means to their own advantage. And realising that these were found, not produced by them, they came to believe that there is someone else who produced these means for their use.23

W hy Did Mickey Mouse Open the Fridge?


The very usage of the concept of goal and its corollary, means, ends up occupying, in the elementary psychology of humans, all forms of relationships that they experience—often instead of the link between cause and effect.

For looking on things as means, they could not believe them to be self-created, but on the analogy of the means which they are accustomed to produce for themselves, they were bound to conclude that there was some governor or governors of Nature, endowed with human freedom, who have attended to all their needs and made everything for their use. And having no information on the subject, they also had to estimate the character of these rulers by their own, and so they asserted that the gods direct everything for man's use so that they may bind men to them and be held in the highest honour by them. So it came about that every individual devised different methods of worshipping God as he thought fit in order that God should love him beyond others and direct the whole of Nature so as to serve his blind cupidity and insatiable greed.24

Spinoza's analysis goes on to enrich the denunciation of what we have been calling a grammatical confusion by providing an anthropological explanation of this confusion and its consequences. We move from intentional bias (what Spinoza calls a "prejudice," i.e., a preference for the means/goal explanatory pair) to "superstition" as a grounding force in ordinary perceptions of the world.

Thus it was that this misconception developed into superstition and became deep-rooted in the minds of men, and it was for this reason that every man strove most earnestly to understand and to explain the final causes of all things.25

But this confusion denounced by Spinoza goes far beyond religion, since it gives rise to a set of very general moral concepts that are supposed to explain everything.

When men became convinced that everything that is created is created on their behalf, they were bound to consider as the most important quality in every individual thing that which was most useful to them, and to regard as of the highest excellence all those things by which they were most benefited. Hence they came to form these abstract notions to explain the natures of things:—Good, Bad, Order, Confusion, Hot, Cold, Beauty, Ugliness; and since they believe that they are free, the following abstract notions came into being:—Praise, Blame, Right, Wrong.26

This generality of superstition is not specific to the distant time when the Dutch philosopher was writing—an age we often think of as untouched by the rays of the Enlightenment. Indeed, it is not uncommon to read that "the markets are afraid" as an explanation for worrying developments in stock indexes, or that "capitalism is aiming at world domination." Yet these entities—whether markets or capitalism—are not agents endowed with intentions. They are above all names for a number of complex and intertwined causal processes. One may say that these are only metaphors; but when they persist and settle within our discourse, they show that the confusion—the elementary grammatical error—diagnosed by Spinoza had a bright future ahead of it, even in the secularized West. In other words, there is a guiding thread that runs from the child (and sometimes adult) who calls the door on which they have just banged their head "mean," to the faithful who ask God to bring them success and prosperity, to the economic journalist worrying that the markets are upset at the indictment of the head of a major corporation; and this thread could be described as covering up the language of causes with the language of intentions. In the following chapters, we will explore how intention and causality can still, despite their differences, be articulated in subtle ways, and can sometimes produce legitimate knowledge—especially in the case of biology.



To Sum Up

This chapter focused on the third meaning of "why?" (pourquoi?), which could be rephrased here as "what for?" (pour quoi?)—i.e., in the sense of for what purpose or intention. Formulated in ordinary psychology in terms of belief and desire, or in a more sophisticated and operational way by microeconomists in terms of preferences, utilities, and beliefs, goal-based explanation applies specifically to human agents. It usually comes in the form of an attribution of beliefs and desires. It requires a principle of minimal rationality, which supports the understanding of oneself, of others, and of the past—as well as all forms of narrativity. But this minimal rationality says nothing about the rationality of the agent's preferences; and understanding those around us, as well as our predecessors, is made possible by the proper identification of preferences that may seem opaque or bad at first glance. This language of goals, or reasons-for-action, contrasts with the language of causes, and combining the two is problematic. When we confuse them, we often produce absurd explanations.





4 Why Do Triceratops Have Horns?

"To defend themselves from the T. rex, of course!" A fairly spontaneous response like this should intrigue us, because it sounds as though triceratops had goals in their lives. Yet we have just seen that goals and reasons-for relate only to actions taken by humans. This was not the case for triceratops, and it is doubtful that they were rational agents. Yet this language is generalized across living beings: the peacock's tail is there to attract females; the skin tissue between the legs of the flying squirrel is precisely used to fly. And even in regard to plants, it is said that sunflowers seek out the sun, and that flowers attract insects to their stamens so that they cover themselves in pollen and thus fertilize other plants. Discourse about purpose, utility, and more technically "function" (eyelashes have the function of protecting the eyes, kidneys have the function of eliminating toxins) is consubstantial with biology. But is it a good answer to questions about flower stamens or the horns of the triceratops?



Animals, Plants, and Machines

In regard to why-questions, it seems that living things do not behave like the rest of nature. The paradox did not fail to preoccupy thinkers from the Scientific Revolution such as Descartes—the very people who theorized the restriction of science to efficient causes, and thus excluded final causes from "natural philosophy" (as the science of nature was called back then). Descartes is famous among French high school students for having developed the so-called "animal machine" concept, according to which animals follow the same laws of mechanics as machines and nothing else. Usually, students are somewhat shocked by this: "My cat is not a machine, he understands me and has emotions!"; "My dog feels guilty when he does something stupid!" There is nothing to suggest that Descartes was less sensitive than these teenagers, but his argument responds to a central conceptual need: reconciling the illegitimacy of final causes in science with the appearance of goal-directed behavior seen in animals. Since machines are also oriented toward a goal (that which is programmed by their designer) while clearly not having a will or intention of their own, they provide a good model for thinking about animals.

A simple example is enough for us to understand the Cartesian line of thought. Traditional thinkers, following the original driving forces of physiology and medicine (the Greeks Aristotle and Hippocrates, and the Roman Galen), believed that animals contained a specific internal heat source that was essentially different from any form of physical heat, and that it came directly from their immaterial soul and allowed them to move.
Descartes disputed the fact that this fire had anything exceptional to it: Thus, I say, when you reflect on how these functions follow completely naturally in this machine solely from the disposition of the organs, no more nor less than those of a clock or other automaton from its counterweights and wheels, then it is not necessary to conceive on this account any other vegetative soul, nor sensitive one, nor any other principle of



motion and life, than its blood and animal spirits, agitated by the heat of the continually burning fire in the heart, and which is of the same nature as those fires found in inanimate bodies. (Descartes, Oeuvres Complètes, AT XI, 202, my emphasis)

Descartes is often presented as a foil to those who argue that biology is a truly special science that cannot be understood by physics and chemistry alone—these "vitalists" or more generally "anti-reductionists" (since they did not want to reduce life to matter, even if they often thought that the idea of an immaterial vital principle was excessive). But Descartes actually believed in two related ideas. The first, in connection with metaphysical reasons, was that everything in living bodies operated under the laws of mechanics (which he moreover helped to develop). But for Descartes, secondarily, there existed a fundamental ontological difference between extension—which is the essence of what we call matter—and thought. There is an infinite difference between these two things, which were moreover for him the only two types of things that existed in the universe. As humans have both a soul and body, these things are somehow united in us, but just how remains a great mystery. Animals do not think because they have no language (their not having a language being our reason for not ascribing thoughts to them); they are therefore entirely matter and thus entirely governed by the laws of mechanics.1 But the machines we make are a good model for understanding animals. With these machines, Descartes formulated a fundamental heuristic which modern biologists still discuss: the analogy with the artifact.

In fact, each era has constructed a biology that resembles its techniques for explaining the world.2 Thus, for Herman Boerhaave—a major physiologist from the early eighteenth century—the universe of mechanical machines provided biologists with their lexicon: "We find in the body supports, columns, beams, bastions, levers, corners, integuments, presses, bellows, filters, channels, troughs, reservoirs.
The ability to perform movements by means of these instruments is called function; it is only by mechanical laws that these movements are made, and it is only by these laws


that they can be explained" (Institutions of Medicine, I, 121). In the nineteenth century, organisms were regularly considered through the lens of electrical machines, with a major interest in the role of the nervous system. This was the case with Claude Bernard—whose Introduction to the Study of Experimental Medicine (1865) became, as we know, the vade mecum for the empirical method in science. In the twentieth century, all of molecular biology and genetics bore witness to the power of computing, with most of its notions (code, program, instruction, memory, etc.) coming from the world of computers—which was in full development at the time of the discovery of DNA (1953).

This analogy with machines, or technical artifacts, indeed allows one to inquire about the nature of any structure, since each structure generally serves a purpose within a greater machine. The Swiss scientist Albrecht von Haller, whose Elements of Physiology (1755) is one of the classics of eighteenth-century physiology, declared that the physiologist's intellectual task was "anatomical deduction": to deduce a structure's function from its description. In fact, physiologists have rarely been genuine iatromechanists—the weird name given to those who truly subscribed to Descartes's mechanistic ideas. Using this analogy is one thing (and an extremely common thing at that: who has never explained to a child asking "why do we have to eat?" that we are like cars and need fuel in order to run?), but justifying it is something else entirely. The goals of a machine don't pose any problems for us since we know that it was designed by someone: as such, they are the goals of the designer. But no engineer constructs living beings; and the fact that they grow by themselves constitutes, at least according to Aristotle, the difference between living things and technical objects.
How then can we support this analogy with machines if not as a heuristic—­an aid to understanding and discovery—­that provides an often easy way to identify functions? But in this case, two big problems arise. First, why must this heuristic be limited to living beings? For example, we could clearly say that during the water



cycle the heat of the sun tries to evaporate water from the oceans and that the earth has the function of absorbing rainwater and spreading it in groundwater. However, in our earlier examples, we saw that this language of utility and function seems to provide a fairly legitimate answer to the question "why?" in the case of the triceratops but rarely elsewhere in physics. Second, if the machine is only an analogy, can we really talk about the function of an organism's structure? The function is what the designer has planned for the machine and its parts; thus, in the absence of a designer, there would exist no function strictly speaking, thereby leaving only things that we humans see in our own functional way.

For Descartes, the justification of the machine analogy goes back to theology: God is the engineer who built animals and plants, like man with his machines. And his own system provides a proof of the existence of God—prior (logically) to the consideration of animals.3 Even if Descartes's particular arguments for the existence and veracity of God are not universally shared, this strategy—which consists in assuming that some divine architect has determined that every part plays a role in an animal or a plant—is common for classical rationalists.4 Leibniz was one of the first to use the term "organism" to designate living beings or in any case the special form of arrangement that they represent;5 he distinguished between "artificial machines" made by man and "natural machines" made by God—with the former resulting from a finite intelligence and the latter from an infinite one and therefore infinitely organized. Every part of an animal or a plant is in reality a small machine whose parts are organized ad infinitum, as is illustrated by the following famous sentence from his Monadology: "Each portion of matter may be conceived of as a garden full of plants, and as a pond full of fishes.
But each branch of the plant, each member of the animal, each drop of its humors, is also such a garden or such a pond” (§67).



God and Russian Dolls

Writing at the time when the first microscopic observations were being made by the Dutch scientists Leeuwenhoek and Swammerdam, Leibniz developed a metaphysical theory that appears strange to us but which makes it possible to understand the fecundity of the machine analogy: the so-called preformation theory; and more precisely, the version of it that is called the "preexistence of germs."6 Since Aristotle, the formation of living beings had always been something quite mysterious. Observation under the microscope of the spermatozoa present in seminal fluid inspired several thinkers of the time—Leibniz being foremost among them—with the idea that these "animalcules" are the future organism in miniature; that, like Russian dolls, these animalcules contain their own gametes in an even smaller version; and that the gametes of those even smaller animalcules have in them a still smaller version, and so on ad infinitum for all of their descendants. God thus built all living beings at the origin of the world, endowing all their structures with their functions so that these living miniatures could develop throughout the whole history of the world without needing to add anything essential to the process that God had initially laid out. The theory of the preexistence of germs perfectly justifies seeing animals as machines made by God, and thus endowed with many functions that all aim toward allowing them to exist and to reproduce (since it is necessary for the miniatures of their offspring, and the offspring of the offspring contained within them, to develop in turn). But many scholars ended up rejecting this dominant idea at the beginning of the eighteenth century, with the best known among them being Buffon and Maupertuis.7 According to them, the structures of living things do not preexist within their embryogenesis and are instead constructed through a regulated exchange with the physico-chemical environment.
This second theory is called epigenesis. We may note that the opposition of "epigenesis vs. preformationism"—in other words, the opposition between the idea of a preexisting form and the idea of the construction of form through the


interplay of various forces (both chemical and physical) during their development—is a constant one in biology, just like the one mentioned earlier between mechanists and vitalists.8 But preformationism is not as ridiculous as it may appear. Of course, no one believes in this Russian-doll theory anymore; and our best theory about how an organism develops involves genes and genomes. However, the most reductionist version of this theory—which was in vogue until the 2000s—described genes as a parentally inherited program or set of instructions present in the zygote that directed development toward adulthood. As such, it was a modern version of the preformationism of Leibniz and his contemporaries, where "shape" is replaced by "information."

Be that as it may, in the absence of a God who created the living (possibly in the form of Russian dolls at the beginning of time), the justification of the machine-like analogy is no longer self-evident. Are we really allowed to say that the function of the triceratops's horns is self-defense? Likewise, are we able to maintain that the goal of an eagle tearing through the sky toward a defenseless sheep is to capture and then eat it? Or that the dog that chews on our shoes or brings us its leash wants to go out for a walk? In our modern post-Galilean, post-Newtonian, post-Kantian science—a science in which God guarantees nothing—how is it that we can truly use these action verbs (hunting, gathering, fleeing, supplying, etc.) to respond to why-questions pertaining to the living world?

Darwin, Natural Selection, and Adaptation

In the end, wouldn't all of these statements and answers that invoke the function of wings or the kidneys, the courting strategy of the peacock or the hunting strategies of the cheetah, simply be opportunities to confuse causes (those that are necessarily physical) with goals or intentions—something which Spinoza so forcefully denounced?
This is not the case, because we are all Darwinians, in the sense of the often-quoted aphorism from the Russian biologist Dobzhansky that "nothing in biology makes sense except in the light of evolution." But how could Darwin and the


Darwinians dispel our concerns here about a triceratops's horns, or the pistils of a flower, or how hawks hunt? To better understand this, let's refer to the Reverend William Paley. Darwin was a great admirer of Paley's treatise on natural theology from 1802, Natural Theology or Evidences of the Existence and Attributes of the Deity. A natural theology is a theory that infers the indispensability of the existence of God from the complexities of the world—or what Kant calls "physico-theological proof." In nineteenth-century England, a place and time full of naturalists and bird connoisseurs, these wonders of the world generally belonged to the universe of the living. Paley thus knew how to very accurately describe the subtle mechanisms by which, for example, orchids ensure that the insects that feed off of them will also fertilize them. Yet if we look back at animal-machines and anatomical deduction, we see the same correlation of justification and explanation that was discussed back in the first chapter. Physiologists can infer from the shape of a wing that it is used to fly—and thus explain it—because they know that birds are machines built by God, and that He endowed their structures with specific functions. Conversely, natural theologians like Paley are justified in their belief in God because the observations that wings are used to fly, that the ball joint is used to rotate the knee, and that the eyes of vertebrates allow them to see (in other words, that everything in the living world seems to have a function or a utility) indicate an intelligent and benevolent cause—namely, a God.9 Of course, we did not need to wait for Darwin to see criticisms of this. Without being fierce materialists, many thinkers questioned these arguments of natural theology.
In particular, in his Critique of Judgment from 1790 (one of the few major philosophical texts devoted to life before Darwin10), Kant advanced an interesting distinction between what he calls "relative" purposiveness and "internal" purposiveness (§63). When we say that grasshoppers are food for starlings, or that horse droppings are food for flies, we are establishing a relationship of purpose and utility between two different species. But ultimately, Kant says, these relationships can be arbitrarily drawn between any species and thus have no final


cognitive value. Indeed, they hardly say more than Marcel Pagnol's joke that plane trees were made to provide shade for pétanque (bocce) players, or Bernardin de Saint-Pierre's apparently serious argument that melons had been divided into slices in advance by God so that they could be eaten en famille.11 These are the very same confusions that Spinoza criticized in the Appendix which was quoted earlier at length. But "internal purpose," according to Kant, is attributed only in relation to a given organism and its parts: a part, or an organ, seems to be there for a specific purpose. This is indeed a "final cause," since the effect explains the cause: for example, the mating success of a peacock explains why its caudal appendage is multicolored. But unlike relative purposiveness, it is very difficult for biologists to do without these kinds of statements—these functional attributions that make up a good part of physiology. For Paley, these functions prove the existence of God. More fundamentally, they indicate in a general way what he calls "contrivances"—a term describing multiple parts that appear to be forced to fit together for one same purpose. To take up a classic example from Kant in a 1763 text called The Only Possible Argument in Support of a Demonstration of the Existence of God, the eye is composed of hundreds of separate parts (lens, retina, muscles, nerves, cornea, etc.) and vision would be impossible if any of them were even slightly different. It is therefore difficult to think that this is all due to chance; for if we imagine being given all of these pieces and assembling them at random—regardless of whether vision would be possible or not—the probability of having a functional eye in the end would be almost zero.
The contrivances appearing throughout living nature thus indicate that the parts of living things are "made for" something; and Paley, like all natural theologians, infers a divine intelligence from this since chance seems to be once again excluded. Moreover, these contrivances can be more or less complex; and obviously, with increasing complexity (like with the eye, for example), the inference toward divine intelligence only becomes stronger.12 This adjustment is not only internal to the organism between constituent parts and organs; it also takes place between the organ itself and its


environment. Thus, among all possible colors, the polar bear is as white as the surrounding ice, which allows it to more easily stalk its prey. Cheetahs and lions are the color of the savannah. The dolphin and the shark—creatures which move rapidly under water—have similar hydrodynamic shapes even if they are not closely related (one is a mammal, the other an "elasmobranch" fish). This is what is traditionally called adaptation, in the sense that organisms are adapted to their environment. And adaptation is sometimes amazing: researchers recently understood that fish living in the Antarctic Ocean, where water is as cold as –2°C (salt keeps the water from freezing), are able to prevent ice crystals from forming in their blood (which would mean certain death) by producing a protein that is similar to the antifreeze we put in our car engines. The same theological reasoning as that behind "contrivances" would be applied by Paley and his contemporaries in regard to the polar bear. Among all the possible colors, the polar bear is white, which allows it to hunt more easily. How can we possibly explain this perfect coincidence when any other color had the same possibility of being produced? Here again, we are thinking in terms of final causes: the pairing of the bear and its outer appearance—an effect of it being white—is imagined to be the cause of its color: it is white in order to hunt. Physiologists identify and dissect functions; seventeenth- and eighteenth-century naturalists like Linnaeus, Buffon, Jussieu, and Réaumur study the adaptation of organisms; while theologians are amazed by these things and use them to prove the existence of God. As was mentioned earlier, functions and adaptations seem profoundly finalist, or "teleological," as philosophers have pointed out repeatedly since Kant.
Darwin, in his 1859 Origin of Species—which established that species descend from one another—presented a solution to this enigma of biological finalism that did not resort to God and did not offend the scientific understanding of causality, as we will soon see. It is often said that Darwinian evolutionism caused a scandal because it included man among the animals; Freud famously spoke of this as a "narcissistic wound."13 Without denying certain resistances to the idea, we


must note that Darwin’s genuine novelty was the argument that natural selection (which he was the first to discover, along with the less well-­ known figure Alfred Russell Wallace) was responsible for the evolution of species—­the very idea of an evolution having already been conceived of by figures ranging from Lamarck to Darwin’s own grandfather Erasmus, and already had several followers. However, evolution could still be considered the result of a divine plan of improvement, thereby keeping it within the realms of acceptability for those with a theological mindset. Natural selection, on the other hand, broke definitively with the shadow of God in living nature. Indeed, with Darwin, the process he called “natural selection” explained both adaptation and the functions of organisms without having to resort to any form of divine intelligence while also avoiding the crutch of pure chance as an expository measure. How can this be? The “Paramount Power of Natural Selection”14 In order to figure out the connection between natural selection and adaptation, let’s imagine a population of a certain species, rabbits for example. All rabbits are slightly different from each other, as well as being slightly different from their parents; this is what we call variation. Rabbits produce offspring that look more like them than other rabbits but that do not copy them; this is called heredity. Now let’s consider a particular property of rabbits like running speed. This influences the chances of survival for rabbits—­because the faster they run, the less likely they will be eaten by foxes. Supposing that running speed is heritable (which is indeed the case15), rabbits who run fast will live longer and therefore produce more offspring, who will in turn run faster than other rabbits in their particular generation (because of heritability). 
Thus, in generation after generation, the proportion of fast rabbits increases in the population; and since a few of the young rabbits can be expected to eventually run faster than their parents, the average speed (as well as number) of these rabbits will also tend to increase. When someone says that natural selection explains the running speed of rabbits, they are talking precisely about this process. Rabbits are adapted to a world where foxes chase them; thus, natural selection explains adaptation. This is the major Darwinian argument about adaptation—and a key aspect of purposiveness in biology.

Let's note three things here. First, why would this running speed not increase infinitely? To answer this, it should be highlighted that the main frame of reference for competition for running speed is not that of the rabbit and the fox, but that among the rabbits themselves. As a joke much loved by biologists goes: "Two scientists are in the jungle. A tiger comes. One says, 'A tiger! Let's run!' The other replies, 'It's useless, he can run faster than us.' The first one responds, 'But I don't care about running faster than the tiger! I care about running faster than you!'" And since a rabbit that runs much faster than the foxes expends more energy than one who simply runs a little faster than them, it will have less energy than the latter to, for instance, produce offspring—thus leaving behind fewer offspring to fill the population with its genes. This same reasoning tells us that through natural selection the speed of rabbits has a tendency to stabilize at a rate that is slightly higher than that of foxes.16

Second, natural selection is fundamentally a population-level process, and not an individual-level one. As Darwinians say, individuals never evolve! They develop—meaning that they go from an egg stage to an adult stage. Only species evolve.

Last, simple variations (from one generation to another; for example, from adult rabbits to their offspring) are not spontaneously directed toward the traits that would be best for the organism. Our parental rabbits do not always produce rabbits that run faster and faster; they just produce rabbits that look like them more than others.
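The selective dynamics just described (blind variation, heredity, and differential reproduction, with a stabilizing energy cost) can be made concrete in a toy simulation. This sketch is mine, not the author's; every number in it (population size, fox speed, mutation size, cost factor) is an arbitrary assumption chosen only to illustrate the logic of the rabbit example.

```python
import math
import random

def evolve_speed(pop_size=300, generations=60, fox_speed=10.0):
    """Toy model of heritable running speed under predation.

    Fitness combines the chance of escaping the fox with an energy
    cost of speed, so the mean speed rises and then stabilizes a
    little above the fox's speed. All values are illustrative.
    """
    random.seed(42)  # fixed seed so the sketch is reproducible
    # Initial population: speeds scattered around 8 (variation).
    population = [random.gauss(8.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: survival odds rise with speed relative to the
        # fox, but faster rabbits pay an energy cost in reproduction.
        weights = [
            (1.0 / (1.0 + 2.0 ** (fox_speed - s))) * math.exp(-0.2 * s)
            for s in population
        ]
        parents = random.choices(population, weights=weights, k=pop_size)
        # Heredity plus blind variation: offspring resemble parents,
        # with small undirected deviations in either direction.
        population = [p + random.gauss(0.0, 0.3) for p in parents]
    return sum(population) / pop_size
```

With these assumed parameters, the population's mean speed climbs from around 8 and levels off near the fox's speed: beyond that point, escaping better barely pays while the energy cost keeps growing, which echoes the point that rabbits compete mainly against each other rather than against the fox.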
If variation, on the contrary, produced rabbits that run faster than foxes generation after generation, the "rabbit" species would not need natural selection to achieve a running speed that is adapted to foxes. Variation is random—or, as biologists say, "blind." Despite this random quality (random in the sense that nature does nothing for the good of organisms), if given enough time, and enough variations, the process of natural selection ends up producing the adaptation of organisms to their environment.17 A current major controversy in


the area of evolutionary biology consists in discussing the extent to which some variation processes can be seen as Lamarckian, in the sense that they feature processes through which new variants tend to be better adapted on their own.18 This is an ongoing scientific controversy, which does not crucially affect my cartography of "why?" or the way in which adaptation answers a "why?"

When adaptations are complex, the accumulation of selections will account for them. It is the selection of those variations that are always the best from each generation over a long period of time that allows for the production of an organ as sophisticated as the eye—to which Darwin devotes a passage purely with the intention of refuting the classic argument of those who believe that the complexity and the functionality of life require an "intelligent designer." Likewise, Darwin's argument was reinforced by an article in 1989 using a computer simulation to demonstrate that the production of a vertebrate's eye from small, simple cells was plausible, as long as those cells were slightly more sensitive to variations of light than other cells and enough developmental time was allowed.19 Paley's "contrivances," which were unattainable through chance and its aimless processes alone, are thus explained through cumulative natural selection. And Darwin went on to devote an entire book to orchids in 1862 (On the Various Contrivances by Which British and Foreign Orchids Are Fertilised by Insects)—flowers whose different reproductive systems present countless examples of such complex arrangements.

Even though natural selection is the major explanans of many features of the biological world (including adaptation, as well as species diversity—since a multiplicity of environments yields an even larger multiplicity of adapted species), there are still major controversies about such explanations. First, what does "explain" actually mean?
And to what extent does selection explain? Biologists used to conceive of natural selection as a force or a cause. Thus, philosophers often inherited the view that selection is a force—for instance, Elliott Sober, in The Nature of Selection (1984), which was a milestone in the philosophy of evolutionary biology. Actually, the first modern evolutionary biologists themselves (who created the so-called Modern Synthesis—see below) oscillated between "forces," "causes" (e.g.,


J. B. S. Haldane, The Causes of Evolution, 1932), and "factors."20 More recently, the so-called "statisticalist interpretation of natural selection," put forth by philosophers Mohan Matthen, Tim Lewens, André Ariew, and Denis Walsh, contested the claim that selection is a cause. They contend that what is causal are the multiple interactions such as eating, dying from parasitic disease, and reproducing; the "force of selection" is constituted by the aggregate of these interactions. However, nothing exists in addition to these causes that would explain the changes in traits within a population.21 A second dispute concerns what selection explains, even if it's not a cause proprio sensu: the diffusion of a trait (which is the minimalist characterization of its effect), the spreading of an allele, or the arrival of a particular trait in a population.22 These debates bear important consequences for the practice of science itself. For instance, if selection is not a cause of the emergence of novel adaptations, then their causes should be sought at the level of individual developmental and physiological processes—a position systematically developed by Denis Walsh recently.23 But such controversies do not directly matter for the current issue of deciding to what extent answers to why-questions about biology may legitimately include intentional or teleological wording.

Natural Selection and Biological Functions

The Modern Synthesis unified the Darwinian theory of evolution with Mendelian genetics (Darwin did not know about genes, whose laws of transmission were suggested by Gregor Mendel in 1866 and whose material reality, DNA, was identified by Rosalind Franklin, James Watson, and Francis Crick in 1953).
The natural selection process conceived of by Darwin thus received an explanation in genetic terms; evolution is shaped by changes in gene frequencies in populations, mostly under the effect of natural selection.24 But the substance of the Darwinian argument—at least with respect to the explanation that it provides for biological adaptation—does not change. Granted, philosophers and biologists disagree about how exactly this allelic frequency change should be understood: is it evolution itself, or an effect of


evolution, or its cause?25 I won’t delve into this problem here; it encompasses metaphysical issues about the proper theory of causation, as well as the biological question of whether selection targets genes or organisms. Natural selection, the explanation of the sometimes complex adaptations of organisms, ultimately justifies these familiar yet scientifically shocking teleological statements that rely on purpose in the study of life. To fully understand it, we must remember that the very notion of biological “function” often has an explanatory use: to say that “the function of the kidneys is the elimination of toxins” explains why one has kidneys. It is even for this very reason that Spinoza would see a departure here from good scientific method since there is an apparent final cause: the effect of the kidney (the elimination of toxins) explains its cause, namely the existence of the kidney. But we Darwinians have a scientifically acceptable reformulation of this. Indeed, the vertebrate kidney exists because it has been favored by cumulative natural selection—­exactly like the running speed of rabbits or the structure of the eye—­and the reason it was selected was that it allowed for the elimination of toxins. To say that a feature, a part, a behavior, or an organ has a function X is not an appeal to divine intention; it is a simple affirmation that the effect X is the reason for which this trait was selected as it gave its bearers a selective advantage (i.e., a longer life than others, more opportunities for reproduction, etc.). And this proposition reformulates the final cause in terms that are much more acceptable as they are based on a long-­term scenario involving the history of the population of the first vertebrates—­among which the kidney evolved. 
This idea—initially formulated by Larry Wright and called the "etiological view of functions"—makes "the function of the kidneys" something perfectly legitimate scientifically, since it encompasses a causal explanation of the kidneys.26 Therefore, like all causes, functions are ontologically robust; they are part of the "furniture of the world," in the sense of Russell's old phrase. And functions encompass norms—since when something has a function (like the kidney or a tire), it can also malfunction while still continuing to have its function: flat tires, diseased kidneys. Karen Neander and Ruth Millikan developed this realist view of functions (realist in the sense that functions exist in real-life terms) by emphasizing that norms set by functions exist in nature—which in turn allows us to posit epistemic norms (such as "truth") and to therefore naturalize epistemic concepts such as intentionality.27 This is called the teleosemantic program, which was embraced by Fred Dretske, Daniel Dennett, and Ruth Millikan among others.28

Some philosophers disagree with this interpretation of functional statements. For these "causal-role theorists," "the function of the kidney is to eliminate toxins" does not explain the presence of the kidneys, but rather the evacuation process by which an organism gets rid of harmful substances. The question answered by functions is not "why do we have kidneys?" but "why are we able to get rid of these substances?"—which in turn is part of the general question "why are we able to remain alive despite our intake of toxins?," which is ultimately a sub-question of "why are we able to stay alive?" Different why-questions therefore license a different understanding of the concept of function: namely, a function is the causal role of kidneys within the process of toxin evacuation. More generally this means: "The function y of X is its contribution to a certain general process Y, proper to a general system S to which X belongs." This implies that whatever system S is chosen, there will be another function of X. Therefore, "functions" depend upon the choice of an explanatory system S, and are not something that exists in the world, as functions do within the context of etiological theories. This lack of realism implies that functional norms are only in the eye of the beholder, thereby preventing philosophers from using this account of functions to naturalize norms (as the competing account of functions does).29 I won't explicate these theoretical differences in making sense of biological functions any further.
The debates are still raging among philosophers; but it suffices to say here that function harbors both realist and less realist accounts, and has a legitimate status within biology.30 This debate—which relies on distinct ways of understanding the explanation required for questions like "why do we have kidneys?"—already indicates that there exist several ways of answering the question "why?" in biology.

Together with this duality of the question "why?" embedded within functional ascription, consider someone asking, "Why is my hair growing?" Two types of response are possible, and both are correct. The first would list all the physical processes through which scalp cells secrete keratin that then becomes hair. The second would indicate that hair is a defining trait for mammals, and that it was selected because it offers protection against variations in temperature. Ernst Mayr, one of the architects of the Modern Synthesis, theorized this difference by speaking of proximate causes and ultimate causes.31 The first response to "why?" discusses proximate causes—that is, causes that are specific to the existence of an individual of the given species; it states efficient causes in exactly the same way as chemistry does. The second response concerns ancestral populations of the species in question, and is "ultimate" in this sense; it essentially discusses evolution by natural selection. I used the example of the triceratops earlier because the first type of response is almost impossible, since we do not know much about the physiology of the triceratops; this leaves us with ultimate causes (evolution), and there we can provide an answer. It is also in this kind of answer that living beings—from the scientific point of view—present interesting characteristics, such as functions and adaptations, which are not essentially found in physics or chemistry. Notice that the former way of understanding "why" allows one to talk about "functions" as causal contributions to a mechanism (for example, the function of secreting follicles in growing hair), and thus as "causal role" functions instead of "etiological" ones, which instead fit with the latter kind of why. Hence, discussions concerning function in biology can be divided into two meanings that correspond to the two kinds of causes as defined by Mayr.
Interestingly, the first answer encapsulates the mechanism of hair growth, while the second addresses its function—in the sense of the etiological account that was just sketched out. Mayr claims that only the second one is proper to what's alive, and epistemologically characterizes biology as a whole: evolution by natural selection is what distinguishes the living from pure matter. The mechanisms through which (etiological) functions are implemented are, at all levels, complex physicochemical systems. This latter claim is often challenged, especially by philosophers who argue that molecular biology is different from physics and chemistry because it features emergent properties. Others (e.g., Laland et al. 2011)32 contest that the ultimate/proximate distinction is accurate, because some phenomena occurring in the lifetime of an individual organism—such as its development (namely, the process through which it goes from zygote stage to adult stage)—have an evolutionary relevance and thus pertain to both ultimate and proximate causes. But in any case, this distinction has an interest in classifying explanations, and therefore the meanings of "why?" If it proves to be misguided, an additional layer of complexity would be brought into our cartography of "why?"33

Functions, Artifacts, Agents

Let's summarize: if there is a population of individuals who have heritable properties and who are variable in regard to these properties, if this variation is not directed toward what is adaptive, and if these properties give their carriers a higher or lower chance of survival and reproduction, then evolution through natural selection will take place in that population.34 And insofar as living beings result from natural selection, they will be adapted to their environment. This means that there is no need to presuppose an infinitely intelligent engineer in order to see physiological functions in an organism's structure. The webbed foot of the duck has the function of water propulsion. This can be easily explained through the hypothesis of a selection of bigger and bigger interstitial tissue between the toes, as compared with the feet of the ancestors of ducks who lived outside of water.
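The three conditions in this summary (heritable variation, undirected variation, differential survival and reproduction) can be sketched in a toy simulation. The two-type setup and the fitness values below are illustrative assumptions, not a model of any real population; the point is only that heritable variation in fitness suffices to drive frequency change.

```python
# Toy illustration (not a model of any real species): two heritable
# types whose carriers differ in survival/reproduction. The fitness
# values 1.1 and 1.0 are arbitrary, assumed for the sake of example.

def next_generation(p, w_a=1.1, w_b=1.0):
    """Deterministic one-generation update of the frequency p of type A."""
    mean_fitness = p * w_a + (1 - p) * w_b
    return p * w_a / mean_fitness

def evolve(p0, generations, w_a=1.1, w_b=1.0):
    """Iterate selection for a number of generations."""
    p = p0
    for _ in range(generations):
        p = next_generation(p, w_a, w_b)
    return p

if __name__ == "__main__":
    # Starting from rarity, the fitter type approaches fixation.
    print(round(evolve(0.01, 200), 3))
```

With equal fitnesses the frequency never moves, which mirrors the text's point: without fitness differences among heritable variants, there is no selection and hence no adaptation to explain.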
The features of organisms are thus both close to what an engineer would create—the wings of an eagle resemble those of an airplane, the shape of a dolphin resembles that of a submarine—and somewhat different, because, as was pointed out by François Jacob and Stephen Jay Gould among others, natural selection must always start from a previous state.35 It is a "tinkerer" (Jacob), managing with whatever is at hand, and not an engineer, who can procure the best materials that may be required. This is why, for example, the eye of vertebrates is not perfect: it has a "blind spot"—namely, a point that is entirely unreceptive to light because the optic nerve occupies it, a result of the ancestral state from which this organ evolved. There is no doubt that an engineer with access to all of the materials they would have wanted or needed would have done differently and better.36

Thus, one answers the question "why do eagles have claws?" by invoking their function and ultimately natural selection. However, the same response in a slightly modified way is also suitable for questions about apparently dysfunctional (or, as biologists say, "maladaptive") traits—for example, "Why do we sometimes swallow the wrong way?" (Answer: Because the trachea and the esophagus lead to the same cavity—since, in order to "build" a trachea, natural selection had to graft it onto the already existing esophagus of aquatic vertebrates.) But the explanatory capacity of natural selection does not limit itself to making rational this analogy with machines that physiologists have used since Galen; it can also justify an intentionalist or finalistic language in which goals are attributed to organisms without consciousness or intellect.

Let's look at birds, for example; their clutch size was one of the major themes of what is now called "behavioral ecology"—the study of the behaviors and traits of organisms based on the assumption that they result from natural selection.37 For most species, this size varies little: each season, they lay around four to five eggs per nest. In the 1940s, the biologist David Lack performed in-depth studies of many bird species in England (great tits, starlings, etc.) and accumulated data in this way.
He then wondered why the number of eggs was what it was.38 One would imagine that natural selection would increase the number of eggs—since the more there are, the more descendants there may be. But in fact, if there are too many eggs, the competition between them becomes too high and the chances of survival suffer for all of them. In the end, the clutch results from a trade-off between these two tendencies—with the added consideration that a female bird that exhausts itself too much over the course of a season is at a disadvantage with other females that "reserve" their strength for the following season. This ultimately suggests that there is an advantage in the strategy of not laying too many eggs even if they all survive, and the size of the clutch stabilizes at around 4–5 eggs.

Everything thus happens as if the bird were trying to maximize its number of healthy offspring over the course of multiple seasons—just as we saw with rational agents and how they maximize their utility, and how therein lies the matrix of responses to the question "Why are they doing that?" Natural selection thus allows us to consider that poorly intelligent organisms like many birds (and even those devoid of cognitive abilities, like insects or plants) behave like rational agents. However, instead of maximizing their utility, the former maximize, roughly speaking, their number of offspring—or, in the language of biologists, their "fitness." More generally, if animals seem to make decisions insofar as their decision-making mechanism has been forged by natural selection, it is legitimate to think that the decisions they make can be modeled as the choices of a rational agent maximizing a utility defined by its "fitness." This takes place without the animal in question having any awareness of it, since natural selection has "built" it that way.

Thus, life itself is a domain where causes surprisingly look like purposes because of natural selection. But they are not, since we generally do not attribute cognitive capacities to dinosaurs, and certainly do not do so with plants, amoebas, or mollusks. And if one objects that only conscious beings can perform the complex cognitive operations required to execute rational decisions, they should be reminded that our brain constantly conducts very sophisticated cognitive tasks of which we are unaware.39 Thus, the process of selection mimics the intentions of a cognitive agent at the level of a population across multiple generations, and then individuals in this population appear as cognitive agents.
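Lack's trade-off can be given a toy quantitative form: suppose each additional egg lowers every chick's survival probability, so that expected fledglings first rise and then fall with clutch size. The linear survival decline and its rate below are invented for illustration; nothing in them comes from Lack's actual data.

```python
# Illustrative clutch-size trade-off: fitness = clutch size x per-chick
# survival, where survival declines with clutch size. The decline rate
# (0.12 per egg) is an invented parameter chosen for illustration.

def surviving_offspring(clutch_size, decline=0.12):
    """Expected fledglings if each extra egg cuts every chick's survival."""
    survival = max(0.0, 1.0 - decline * clutch_size)
    return clutch_size * survival

def best_clutch(max_size=10, decline=0.12):
    """Clutch size maximizing expected fledglings, as if the bird 'chose' it."""
    return max(range(1, max_size + 1),
               key=lambda n: surviving_offspring(n, decline))

if __name__ == "__main__":
    print(best_clutch())  # an intermediate optimum, not the maximum clutch
```

Under these made-up numbers the optimum lands at four eggs, in the observed 4–5 range: the bird behaves "as if" it solved this maximization problem, though the solving was done by selection across generations.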
Samir Okasha, in Agents and Goals in Evolution, recently examined the following two ascriptions of goals in evolution: organisms as intentional cognitive agents, and selection itself as an optimizing agent.40 He argues that while the latter faces many theoretical issues, the former appears justified to the extent that a "community of purposes" between parts of the organism can be asserted. But the parallelism between maximizing fitness as a criterion for selection in choosing phenotypes, and maximizing utility as a criterion for rational agents in choosing options, instantiates a deep affinity between natural selection and rationality. Beyond the etiological account of function, which relies on a reference to natural selection, we see here (as Maynard Smith famously wrote in his seminal book on evolutionary game theory41) that there is a close connaturality between economics and biological evolution—which ultimately allows us, to some extent, to treat biological why-questions as intentional why-questions.42

The evolutionary viewpoint here supports an account of animals as somehow rational beings. This has been strongly and independently advocated by Susan Hurley, who inquired about the "space of reasons" possibly inhabited by animals; she claims that it is the "space of action," not of the "conceptualized inference" used in epistemic frameworks.43 Therefore, she argues that animals can be rational in the sense that they act for reasons, even though they lack conceptual abilities (and therefore can't represent their reasons). Their rationality is "context-bound": they can base their actions upon transitive preferences about particular things in a context—for instance, mating—but can't generalize this transitivity across all contexts.44 The idea that animals have some practical rationality appears as a point of convergence between Hurley's philosophical analysis of rationality and an examination of the assumptions of behavioral ecology.

Organisms as Social Agents: The Extent of Altruism

Nevertheless, by providing a genetic understanding of heredity, modern biology introduced an additional subtlety. Sometimes, like bad economic agents, organisms act in ways that are contrary to their interests and in seeming contradiction with what we just claimed. Why do they do this? Why are most bees sterile?
Why do male praying mantises let themselves be eaten? Why do vervet monkeys yell out loudly to warn the group if they see a predator (at great risk to themselves) instead of discreetly fleeing? The explanation lies in a somewhat esoteric formula called Hamilton's Rule, which is a mantra for biologists interested in cooperation and is expressed as rb − c > 0. In this inequality, c is the cost for the individual, b is the benefit for another (the queen, for example), and r is a coefficient that expresses genetic relatedness. The whole point here consists in taking into account not just the reproduction of the individual but also the impact that their actions have on those who are closely related to them—since there is a good chance that these relatives will pass on the same genes as the individual would (or, at least, more of these same genes than unrelated individuals would). This is called kin selection, a concept introduced by William Hamilton in 1963 and perhaps the most significant advance in biology since natural selection.45

Thus, by being sterile while working to help the queen reproduce, the bee will leave more of her genes behind than if she produced her own offspring (because in the unusual bee kinship system, a bee's sister is genetically closer to a bee than a potential daughter would be—even though "how close" depends upon which species of bee is considered, as there are hundreds of bee species and their kinship systems often differ). In the living world, everything thus happens as if organisms were aiming toward maximizing their so-called "inclusive" fitness, which is defined in the Hamiltonian inequality by the relationship of rb and c (i.e., the genes left to the next generation by themselves and by their close relatives, linked by the degree r, that they help to reproduce). When someone asks "Why are bees sterile?," it is thus legitimate to answer that it helps the queen to procreate as much as possible—although bees obviously do not have cognitive abilities that would allow them to explicitly make such a plan. One can thus think of organisms as economic agents trying to maximize their inclusive fitness even if they are as brainless as an oyster or a plane tree.
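Hamilton's Rule is easy to sanity-check numerically. The relatedness values below are the standard ones (r = 0.75 between full sisters under the honeybee's haplodiploid genetics, versus r = 0.5 to a daughter), but the cost and benefit figures are invented for illustration.

```python
# Hamilton's Rule: an altruistic behavior is favored when r*b - c > 0.
# The costs and benefits below are made-up illustrative numbers of
# offspring-equivalents, not measured quantities.

def favored_by_kin_selection(r, b, c):
    """True if helping is selected for under Hamilton's Rule."""
    return r * b - c > 0

if __name__ == "__main__":
    # A sterile worker forgoes c = 2 offspring of her own, but her help
    # lets the queen raise b = 4 extra full sisters, each related by r = 0.75.
    print(favored_by_kin_selection(0.75, 4, 2))  # sterility can pay
    # The same sacrifice for an unrelated individual (r = 0) cannot.
    print(favored_by_kin_selection(0.0, 4, 2))
```

The asymmetry between the two calls is the whole logic of kin selection: identical behavior, identical cost, but only the gene-sharing recipient makes the inequality come out positive.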
Biologist Alan Grafen recently defended the legitimacy of seeing organisms as "maximizing agents" on the basis of a formal correspondence or "isomorphism" between models in behavioral ecology (which focuses on the evolution of the fittest phenotype) and population genetics models (which capture the dynamics of allele frequencies).46 Such correspondences do not always hold; but in principle, they would provide behavioral ecology models (framed in terms of optimal responses to environmental demands) with a guarantee that they explain the result of an actual process of evolution by natural selection rather than offering mere speculation about phenotypes. Those isomorphisms would therefore amount to a mathematical proof of a correspondence between natural selection as a process involving gene change, and adaptation as a concept concerning organisms.

Granted, the quest for a mathematical justification of the idea that natural selection in principle creates organismal design—as long as all circumstances are satisfied—is as old as Ronald Fisher's work in population genetics. In his Genetical Theory of Natural Selection (1930), Fisher proved a "Fundamental Theorem of Natural Selection," which states that the fitness increase due to natural selection between two generations is always positive; and thus, selection by nature increases fitness.47 In practice, fitness may not increase if the environment deteriorates enough at the same time to counterbalance it—for example, when the selected traits themselves change the environment in ways that harm their bearers' capacities for flourishing.48 Fisher's theorem was met with much opposition.49 By integrating kin selection, Grafen's view takes into account those cases where selection is frequency-dependent, which can appear as improving the prospects of Fisher's project.50

Notwithstanding the ultimate limitations of using it as a justification, the maximizing-agent analogy brings our attention to agency as another aspect of the conflation of two meanings of "why": the cause and the intention. Determining to what extent the "maximizing agent" is an analogy remains an open question. Some philosophers have adopted a very different perspective from the Darwinian view put forth by Grafen or Okasha, and argued that agency is a major ontological category required for understanding what organisms are and do.
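The simplest case of the relation Fisher generalized can be checked numerically: in a haploid population with fixed fitnesses, one round of selection raises mean fitness by exactly the variance in fitness divided by the mean (Fisher's full theorem concerns additive genetic variance, so this is only a minimal sketch). The toy frequencies and fitness values below are invented.

```python
# Haploid toy check of the variance relation behind Fisher's theorem:
# after one round of selection, mean fitness rises by Var(w) / mean(w).
# Frequencies and fitness values are arbitrary illustrative choices.

def select(freqs, fits):
    """One generation of selection: reweight each type by its fitness."""
    mean_w = sum(p * w for p, w in zip(freqs, fits))
    return [p * w / mean_w for p, w in zip(freqs, fits)]

def mean_fitness(freqs, fits):
    return sum(p * w for p, w in zip(freqs, fits))

if __name__ == "__main__":
    freqs, fits = [0.2, 0.5, 0.3], [1.0, 1.2, 0.8]
    before = mean_fitness(freqs, fits)
    after = mean_fitness(select(freqs, fits), fits)
    variance = sum(p * (w - before) ** 2 for p, w in zip(freqs, fits))
    # The identity: increase in mean fitness == variance / mean fitness
    print(abs((after - before) - variance / before) < 1e-9)
```

Since a variance is never negative, mean fitness cannot decrease here, which is the "always positive" claim of the text; and when the environment shifts the fitness values themselves between generations, the guarantee lapses, matching the caveat above.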
Walsh’s Organisms, Agency and Evolution (mentioned earlier in this chapter) develops a view that is far more realist regarding agency than what Grafen proposes, since for him agency is more than an analogy. If organisms are indeed agents—­which would result in the ontological category of agency being far more widely W hy Do Triceratops Have Horns?


distributed—­major ethical consequences would follow. The jury is still out on the exact ontology required for the use of agency concepts in biology.51 Limits to Purposes and Agency in Biology The illusion of final causes that Spinoza denounced therefore has no place in biology, even though biologists constantly use apparent final causes and would find it very difficult to formulate their speech differently. However, natural selection and Hamilton’s Rule (which states that, considering an allele, adding the payoff it provides to its bearer to the one it provides to the others, mitigated by their degree of relatedness to the former, determines its being selected when it is positive) underlie a legitimate language in which both the analogies of the machine and of the economic agent are authorized and thus allow science to say the why of phenomena—­ even though the proper metaphysical interpretation of such agency is still disputed. It should not be inferred, however, that final causes are completely authorized in biology and that living beings are therefore here for some purpose. The German poet Angelus Silesius said as much three centuries ago: “The Rose is without why.” How can we understand it? How can this rose—­which I look at, whose perfume I smell, whose variety may have received the name of a famous actress or tennis player (as is common among thousands of existing breeds of roses)—­be without a why? On the contrary, it seems like we know very well why roses exist: we can reconstruct the ancestors of the species and their evolutionary history; for the same reasons, we know why roses have thorns (protection against fragrance-­ attracted herbivores) and why petals are arranged the way that they are (pollination). The life cycle of the rose, the way it reproduces, and the way its flowers bloom and wither annually are all well-­explained through evolution by natural selection. 
These traits can be compared with those of conifers like fir trees, whose leaves do not fall because the high-mountain environment where they live imposes very different selective pressures on plants. In the case of our rose, let's attribute some of its specificities to its gardeners—who, through techniques such as hybridization, stem cuttings, and artificial selection, created the rose that they had been wanting. Finally, in a broad sense that is becoming increasingly precise, we can also explain how a rose seed develops into a rose given the right environmental conditions by tracing how the seed's genome responds to environmental stimulation by triggering a certain number of embryogenetic processes (cell divisions and differentiations, morphogenesis, etc.). According to the two registers of ultimate causes and proximate causes that Ernst Mayr distinguished (respectively represented here by evolution and development), we thus have a substantial set of answers to the question "Why this rose?" So was this Silesius just a crank?

In reality, Silesius's warum deals with something else. Roses bring happiness: some sing odes about them, and others want to share this joy and offer them as gifts. This has reached a point where they have become a symbol of love—or rather of all the variants of the feelings of love, namely passion (red) and tenderness (pink). That a flower can bring so much happiness thus suggests that it is there precisely for such a reason—whether to sing the glory of the Creator, as generations of theologians maintained; to satisfy our aesthetic appetites; or to help people seduce each other, if we believe books of popularized evolutionary psychology. However, this intuition must be resisted, which is what Angelus Silesius understood: the rose is there for nothing, without aims or whys. The fact that it blooms and enchants us simply comes as a free extra, a supplement to the long evolutionary and developmental process that led it to our eyes and nostrils but that was directed by no external goals.

But Wait . . . Why Life?

We have just seen that the living world authorizes a more flexible application of final causes thanks to Darwin's theory of natural selection.
Rival ideas, which are sometimes called "intelligent design," have become unnecessary in accounting for the functions and purposes of animals and plants. But as a modern Paleyan might retort, if final causes for living beings belong solely to a purely natural order of things that is without intentions or goals—like we see in physics and chemistry—why does life exist? Is this life, where purposes and functions are only natural properties without an intentional agent to explain them, not the result of a project or goal similar to that of a divine creator? Although natural selection may explain the horns of the triceratops, the wings of birds, or the size of clutches, it can hardly explain life. After all, even though natural selection presupposes traits that are heritable through reproduction, "before" life there was no reproduction and thus no natural selection! And regarding the basic facts of living—for example, the fact that a cell reproduces and harbors a metabolism that requires a plethora of coordinated and simultaneous molecular operations—are they not so complex that their appearance simply by chance is unthinkable? And despite the vastness of the known universe—with billions of galaxies hosting billions and billions of stars, many of which are accompanied by exoplanets whose known number has multiplied twentyfold in recent years—we have not found any sign of life on any other planet (although our ability to detect life on exoplanets remains limited). Doesn't this Fermi paradox, as we call it, prove that the unique life seen on this planet was infinitely improbable? And doesn't this improbability require an explanation? Honestly, the Paleyan scores a few points here.

In his book from 1971, Chance and Necessity, which is an essential text for anyone interested in biological theory, the molecular biologist and Nobel laureate Jacques Monod concludes that the two fundamental theoretical advances of the past century—namely, molecular biology and the modern theory of evolution—come up against two (for now) unknowable things: the emergence of thought in the brain, and the origin of life.
Decades later, all of our research programs into the origin of life—whether by reproducing it in silico or in vitro, or by looking for it on exoplanets or in the multicolored hot springs at the bottom of the ocean—have not yielded a consensus. The few extant Darwinian theories concerning the evolution of prebiotic molecules have not been unanimously accepted.52 Should we thus interpret the question "Why life?" as "What is life for?," given the absence of scientific theories that incontestably explain its origin?

But the Paleyan asks his question poorly here. We know a great deal about the horns of a triceratops, or why a duck has webbed feet; we know much less about the exact nature of life. Without going into too much detail, I will simply indicate that even the cardinal properties of metabolism and reproduction are perhaps poor characterizations of life: mules, which are sterile and thus do not reproduce, appear as being very much alive; and viruses—strands of DNA or RNA that some say are alive—have no metabolism of their own (it is activated by entering a host cell). Additionally, if all life on Earth is composed of molecules of carbon, oxygen, hydrogen, and nitrogen, does this make them necessary conditions for life? Similarly, since genes are made up of DNA, should we expect a different form of life to also involve this molecule as the foundation of its heredity? There is a whole scientific field called "Artificial Life" in which researchers look for what life in general would be like if it were independent from the contingent fact that it is implemented in these C, H, O, and N–based molecules. Artificial Life operates mostly on in silico models but also includes research done on "protocells," which are chemistry-based.53

The problem here is that there is no agreed-upon definition of life. Moreover, some Darwinians think that just as a "cow" is defined as "any individual that descends from the first cow," the living is defined as "any individual born from the first living cell"—which effectively excludes the possibility of "life" on a different planet.
Others adopt the ideas of François Jacob, who wrote the following famous sentence in his groundbreaking book The Logic of Life: "We no longer question life in laboratories," implying that questions about the nature of life were no longer a matter of true science and were instead purely metaphysical rambling.54 Granted, molecular biology superseded all speculations about the nature of life, restricting scientific inquiries to explanations concerning the interactions between DNA, RNA, mRNA, and other molecular vehicles. This bears a striking similarity to the way psychology emerged as a science in the nineteenth century by focusing on the dispositions and operations of the mind while getting rid of the notion of a "soul." The fact that "bio" figures in the word biology does not mean that life exists, any more than "psyche" remaining in the name psychology proves the existence of a psyche (a term generally translated as "soul").

Some philosophers subscribe to this idea, arguing like Edouard Machery that the question of the definition of life is poorly constructed.55 Others like Christophe Malaterre argue that the question of the nature of life as deciding between life and nonlife is ill defined; we rather have a set of marks of life that may be found together, but often are not satisfied at the same time—along the lines of Wittgenstein's idea of a "family resemblance," which he used to make sense of concepts like "game" that can't be defined by an appeal to necessary and sufficient conditions.56 (This view is probably the closest to the perspective I will argue for now.) In any case, these issues are fiercely debated.

For our purposes, I would just like to highlight that occurrences of life in the universe strongly depend on what we mean by the word "life." As a result, the very question of the frequency of life in the universe—and from there, that of the probability of life and its common or unique character—is not just a matter of fact but instead depends on conceptual suppositions concerning the very notion of life, suppositions that for the moment cannot be confirmed or invalidated since they condition the only facts that are likely to test them. I call this problem "definitional fragility," and it has its roots in the philosophical issue of the reference of terms. Let's give an example by taking a common object like a motorcycle.
I can give several definitions of a motorcycle: a motorized vehicle with two wheels; a two-wheeled vehicle with a combustion chamber larger than 50 cc; a two-wheeled or three-wheeled motorized vehicle; a motorized vehicle with saddle and handlebars; etc. Each vehicle that is considered a motorcycle may be a little different from the others. For example, three-wheeled scooters and motorized bicycles may count as motorcycles according to certain definitions but not according to others. Some may even consider the Renault Twizy—an unusual vehicle with four wheels and a convertible hood—to be one. But overall, the cases that vary according to different definitions are quite rare when compared with the mass of things that are universally considered to be motorcycles. In this sense, the concept of the motorcycle is definitionally robust. On the other hand, where "life" is concerned, modifying the characteristics involved in its definition greatly changes what we will consider living or not.

Let's explain this in more detail. One often distinguishes between two families of definitions of life, each emphasizing one of two salient properties of most organisms we know: metabolism, and inheritance/evolution/information (these three things go hand in hand: evolution assumes inheritance and relies on gene changes, which in turn embody information, in some sense).57 This partition is often used to classify projects about the origins of life, depending upon whether they have to account first for metabolism (the circular process that handles external materials—turning them into the organism itself while keeping its major physiological parameters stable) or for evolution.58 If metabolism is used as the essential definition, viruses (or the internet) may not seem alive, but ecosystems or Earth could; if evolution is used as the key notion in the definition, then viruses and the internet might be alive59—but not Earth60 or termite mounds, which nevertheless regenerate themselves in such a way that the idea is not so absurd for some biologists.61 This is definitional fragility: changes in the definition excessively affect the extension of the concept—that is, the range of things to which we ascribe the defined property (in this case, life).

Definitional fragility here entails a major consequence: it is not possible to robustly estimate the extension of "life" in the universe. Is it really extremely scarce, as Monod asserted? If this question can't be answered, then the claim that life is extremely improbable makes no sense either.
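Definitional fragility can itself be given a small computational sketch: encode rival definitions as predicates and measure how much their extensions disagree over a set of cases. The two simplified predicates and the toy "cases" below are invented stand-ins for the metabolism-first and evolution-first families of definitions just described.

```python
# Toy sketch of "definitional fragility": how much does the extension
# (the set of things counted as alive) shift as the definition shifts?
# Both the case list and the two one-feature definitions are invented
# simplifications for illustration only.

CASES = {
    #  name:        (has_metabolism, evolves_by_selection)
    "bacterium":    (True,  True),
    "mule":         (True,  True),   # sterile individually, yet metabolizing
    "virus":        (False, True),   # no metabolism outside a host cell
    "ecosystem":    (True,  False),  # metabolism-like cycling, no heredity
    "rock":         (False, False),
}

DEFINITIONS = {
    "metabolism-first": lambda metabolism, evolves: metabolism,
    "evolution-first":  lambda metabolism, evolves: evolves,
}

def extension(defn):
    """Set of cases counted as alive under a given definition."""
    return {name for name, feats in CASES.items() if defn(*feats)}

def disagreement():
    """Cases classified differently by the two definitions."""
    a = extension(DEFINITIONS["metabolism-first"])
    b = extension(DEFINITIONS["evolution-first"])
    return a ^ b  # symmetric difference: the contested borderline cases

if __name__ == "__main__":
    print(sorted(disagreement()))
```

With a fragile concept like "life," the disagreement set is large relative to the agreed core; with a robust concept like "motorcycle," the same construction would yield only a few contested oddities against a large common extension.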
To this extent, all explanations of life that appeal to design on the grounds that a phenomenon extremely unlikely under the laws of nature cannot otherwise be explained are unsound, since they explain an unestablished fact. Under these conditions, our Paleyan—who argues that there is an intelligent design behind not only living forms but also behind life itself—is not able to assert unconditional premises in support of his inference of a creative intelligence.


Additionally, definitional fragility raises conceptual issues for exobiology, since this discipline assumes some idea of life in order to recognize living things when we are exposed to them. It raises similar issues for Artificial Life, because there are many options for research programs that aim at fabricating life in general (since the reference point "life in general" appears fragile). In the end, "Why life?" now seems like a poor question; and by extension, questions along the lines of "Why X?"—where X presents an elevated degree of definitional fragility—no longer count as legitimate why-questions.

To Sum Up . . .

Living beings are adapted to their environment: why are they adapted? We explain why they have their parts by assigning functions to them, which places them in the domain of final causes. Is this possible when nonhuman nature has no intentions or purposes? Can this be explained through a simple analogy with machines, or a metaphor? The concept of natural selection justifies this way of describing nature: a function is a selected effect. It explains the adaptation of organisms to their environment. It legitimizes our use of the analogy with machines, as well as the metaphor of the rational agent applied to mindless creatures, even though some would confer a higher metaphysical weight on this biological agency. On the other hand, to say that life on Earth has a purpose or a function simply does not make sense.


5 Why Did World War I Happen?

“Because Archduke Franz Ferdinand was assassinated in Sarajevo on June 28, 1914.” High school history books generally answer with something like this. However, there is a problem with this response despite the subject being so well known: many historians highlight the fact that the war would have happened regardless of this event. Does this mean that it is really the right answer?

Historians and Their Quest for Causes

World War I perfectly captures the essence of what a historical event is: occurring at the scale of several entire societies, it had major consequences that reverberated throughout the rest of the twentieth century. It therefore typically represents an object of study for historians, and the question that opened this chapter has motivated many of them. Philosophically speaking, it is a singular fact, and one can reasonably think that history is primarily concerned with singular events (the French Revolution, the collapse of the Soviet Union, etc.)—even if a better familiarity with historians will let one see that they often go on from there to study more general things, such as "How do revolutions happen?"; or to analyze the long-term course of a subject like "the evolution of the West's relationship to death."1

A singular fact is distinguished from general or universal facts like "a stone will fall if you drop it"; asking "why?" about the former will thus largely differ from asking "Why do stones fall?", whose answer allowed us to study the scientific way of responding to the question "why?" From the outset, the answer to "Why a singular fact?" can hardly be a deductive-nomological explanation showing how the fact follows from the laws of nature. Even if such an explanation existed—showing, for example, how the initial conditions of the universe and the laws of nature led to World War I as a logical consequence—it would doubtlessly be too complex for us to understand, or even to be stated. Of course, the principle of rationality—which is not exactly a law but still appears as a universal statement—is involved in all understanding of human action; but even if it plays the role of a law in deductions based on historical facts, in which, for example, the preferences of the agents would play the role of initial conditions, far too many other facts that are independent of human rationality come into play and prevent us from being able to formulate a good nomological explanation.2

It is thus more natural to assume that "Why did World War I happen?" will receive an answer based on certain causes, accompanied by mentions of the beliefs and desires of certain actors. Is the assassination of Archduke Franz Ferdinand on June 28, 1914, such a cause? The arrest of the conspirators and then the discovery of the involvement of the Serbian government resulted in an ultimatum issued by Austria-Hungary (whose emperor was an uncle of the murdered archduke) to Serbia. War quickly ensued; and the strong alliances between the many surrounding countries plunged Europe into a conflict that was initially specific to Serbia and Austria-Hungary.
It seems fair enough to say that the assassination caused World War I. With different alliances, however—if, for example, Serbia had had the same allies as Austria-Hungary—the war would have been different or nonexistent. It is thus easy to see that the assassination of Franz Ferdinand would not necessarily have led to a world war on its own. On the contrary, a whole precise geopolitical configuration was needed to create such massive and disastrous consequences. As such, the Sarajevo assassination could not have been the cause of a world war without the existence of certain geopolitical conditions provided by the era.

This example illustrates a major distinction that is very useful if we want to account for both the fact that World War I happened because Franz Ferdinand was murdered, and the fact that if the world had been different, the murder would not have had this consequence. This distinction is that between "triggering causes" and "structuring causes," which Fred Dretske introduced in his book Explaining Behavior (discussed in chapter 3). The latter represent those conditions that allow the former to have the effects that they do. Thus, the spark created by the firing pin that ignites the gunpowder tucked inside the shell casing of a bullet—which is then projected at great speed and distance—is a triggering cause; the design of the gun itself is a structuring cause. Without the structure of the gun—if, for example, the barrel were a hollow sphere or a very short pipe—there might be a pop or an explosion, but a bullet would not be successfully fired. The question of knowing which causes are important does not make much sense in this respect: without triggering causes, nothing happens; without structuring causes, the triggering ones do not trigger anything. And so with our question about why World War I began, we must respond by indicating the two orders of cause: the attack itself and the geopolitical structure of Europe on the eve of the war—the trigger and the structure.

The Return of Possible Worlds: Contingency, Destiny, and Martina Navratilova

But once this is stated, things become a little complicated: the assassination itself, in this context, turns out to be a rather anecdotal cause.
At the time, the major European powers had traversed so many profound mutual antagonisms—ranging from territorial issues (especially in regard to their various colonies around the world) to economic concerns to historical bad blood (such as an underlying French resentment toward Germany)—that the Sarajevo shooting was, as the French say, la goutte d'eau qui fait déborder le vase ("the drop of water that makes the vase overflow"). Yet this implies that if the assassination had never taken place, a similar event would have created the same catastrophic consequences. In this sense, if we recall the definition of causality in the counterfactual sense from chapter 2—"if there had not been A, there would not have been B"—the attack was not even a cause of the war, since this definition is not satisfied. If the attack had not happened, there would still have been a war, because we would have found another pretext to butcher each other. And if we use the terms from above, the structuring cause was thus so conducive to war that the triggering cause was no longer a cause at all, since almost anything else could have played that role.

To understand this apparent paradox, we must look into the metaphysics of events and facts.3 Metaphysicians distinguish between "fine-grained" and "coarse-grained" events.4 What do they mean by this? World War I fits at least two descriptions: "the war of 1914" (after all, a war between the same protagonists starting a little before or a little after the real date would still be "World War I"), and "the war started on July 28, 1914, by Austria-Hungary's declaration of war." These are both the same event, of course. But the second description is defined with a much higher degree of finesse, since it is distinguished from all other wars that might have been started at different times—while the former is differentiated only from peace in 1914. This first event is "coarser," covering a much broader portion of time, and potentially corresponding to many "fine-grained" events. We can go on to imagine still coarser-grained events (like "war in Europe") that enfold the coarse-grained event in question among other similar events.
These differences somehow overlap with the differences between contrast classes found in chapter 1, when we dealt with the semantics of causality. Here the fine-grained event "a world war started on July 28, 1914" would contrast in some inquiries with other fine-grained events such as "a world war started on September 1, 1914," so that the Sarajevo assassination, cause of the fine-grained event, answers "Why World War I at this moment rather than later?" In turn, the coarse-grained event "World War I in 1914" contrasts with coarse-grained events such as "no war in 1914," and a cause of such a coarse-grained event (namely, the geopolitical situation that rendered it almost inexorable) would answer "Why a war in 1914 rather than peace?" And the most general why-question here would be "Why war in general?", and would bear on a very coarse-grained event (namely, "a war"). This question could be an issue addressed by social scientists rather than historians.

Thus, the Sarajevo assassination itself in no way causes the coarse-grained event "First World War in 1914." Whatever may have ultimately been the trigger, it was essentially caused by the structure of European geopolitics. For its part, the fine-grained event "a world war started on July 28, 1914" was indeed caused by the shooting in Sarajevo; if it had not taken place, the war would presumably have erupted, but on another day, and thus this specific fine-grained event would not have occurred.

I am aware that all of this may sound like byzantine or scholarly subtleties. But the stakes are high. Grammatically speaking (in the sense of our "why?" grammar), "Why this war?"—or more generally "Why such a singular event?"—can receive different answers depending on the granularity of the event, whether it is considered fine or coarse. From a metaphysical stance, we are dealing with what is called inexorability. The 1914 war such as it took place (the fine-grained event) seemed to have been a case of bad luck: if Gavrilo Princip had missed his target, the Kaiser would not have wanted to punish Serbia, the infernal machine of alliances would not have been set into motion, and there would ultimately have been no such war! In line with Pascal's aphorism about Cleopatra's nose—which claims that if it had been shorter the course of world history would have changed—the twentieth century would have been entirely transformed if the trajectory of a tiny piece of metal had deviated by one centimeter. Certainly.
We are after all in the realm of the absolutely arbitrary and the totally contingent. On the other hand, the "coarse" event—namely, the 1914 war in general—seemed inevitable; anything could have ignited it. There are thus degrees of necessity, which make events seem increasingly necessary as their grain becomes coarser.


On a much larger scale, this world war resembles what happens in our own lives—not to say that life is war (although this may depend on who you talk to), but rather that it is sometimes difficult to defend oneself from a dizzying sense of contingency: "If I hadn't missed my train, I wouldn't have met that Russian on the next one; he wouldn't have lent me his chess set; I wouldn't have learned how to play chess; and I wouldn't have become Bobby Fischer." (It is possible I made this story up, but it illustrates my point well.) But other times we feel that in one way or another our lives would not have been fundamentally different if certain important encounters had not taken place: "If I had taken the train as planned, I would have arrived at Freiburg im Breisgau; at the central station, I would have seen Iranians playing chess; because of my clear fascination with the pawns as they slowly moved across the chessboard, one of them would have asked me to play," and you can guess the rest.

This feeling of necessity corresponds to what is sometimes called the "nature" of a being—a nature that is assumed to express itself very similarly in a broad range of different situations. Some philosophers talk equally of an essence. For the phenomenologist Edmund Husserl, the essence of a thing is seen when we imagine different changes affecting this thing.5 The changes that transform it into a thing that we no longer recognize thus indicate, through the negative, what makes up its essence. A black swan is still a swan; a featherless swan remains one too. But a scaled swan that lives underwater contradicts a swan's essence. Likewise, the nature of a thing is what determines that its behavior remains relatively identical across a wide range of possible situations.

Let's look at another, and more interesting, example: Martina Navratilova, who once dominated the world of women's tennis. If we were asked "Why did Martina train so many hours a day?
And why did she keep competing? And why did she keep winning?", the answer "it was in her nature" may seem empty, but it expresses a necessity (namely, her being the best in the world) that is quite distinct from contingent events such as winning this or that tournament against this or that player. Contingency ("It came close to . . .") and necessity ("Whatever happens, this would have taken place") thus go hand in hand here. And to understand it, as well as the meaning of "because of its nature," it will be useful to look at this notion of possible worlds that we owe to Leibniz and David Lewis.

A necessary event in this framework—namely, one that cannot not take place—is therefore an event that occurs in all possible worlds, or rather in all those that share the same natural laws: stones fall if they are dropped, the Earth's sky is blue. An event that would take place in a very wide range of possible worlds similar to ours would not be necessary strictly speaking, but would inspire a fairly legitimate impression of necessity that we could call inexorability. A contingent event is an event that takes place in our world (the so-called "actual" world) and perhaps in a very small number of possible worlds: the jacket I am wearing today is blue, Donald Trump became president of the United States.

More precisely, inexorability means some kind of uniformity across close possible worlds with respect to coarse-grained events (for example, all the possible worlds close to the actual world in which a "first world war" happened). If we know that an inexorable set of events is taking place, we also know that in almost any of the close possible worlds it is also taking place; whereas a contingent event is such that it may or may not happen in a close possible world (see fig. 1). There are ways of providing formal analyses of this notion that would use the mathematical theory of measure (Lebesgue-Borel) and state criteria to formalize the meanings of "almost any" and "close possible world." This is not the place to delve into such technicalities. It should suffice to say that this measure theory is also the basis for the formalization of probability theory; one can therefore see how these considerations about modalities—possibility, inexorability, and contingency—will stand in accordance with probability calculus.
In fact, they are more general, and may even hold when the prerequisites for applying probability calculus are not met.6
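To give a rough computational feel for this, here is a toy Monte Carlo sketch of my own devising—the "worlds," thresholds, and parameters are invented, not drawn from the text. We sample perturbed worlds near an actual one and estimate how often a coarse-grained event versus a fine-grained event occurs in them; frequencies near 1 play the role of inexorability, frequencies near 0 that of contingency.

```python
import random

# A "world" is a pair (tension, trigger_day). The coarse-grained event
# "a war breaks out" occurs whenever geopolitical tension exceeds a
# threshold; the fine-grained event also fixes the exact trigger date.
# All numbers here are illustrative assumptions.

def coarse_event(world):
    tension, day = world
    return tension > 0.5                  # a war, whatever the trigger

def fine_event(world):
    tension, day = world
    return tension > 0.5 and day == 180   # a war on this exact day

def frequency(event, actual, radius=0.2, n=10_000, seed=1):
    """Fraction of sampled nearby worlds in which `event` occurs—a crude
    stand-in for the measure of 'almost any close possible world'."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        tension = actual[0] + rng.uniform(-radius, radius)
        day = actual[1] + rng.randint(-30, 30)
        if event((tension, day)):
            hits += 1
    return hits / n

actual = (0.9, 180)  # high tension; war actually triggered on day 180
print(frequency(coarse_event, actual))  # near 1: inexorable
print(frequency(fine_event, actual))    # near 0: contingent
```

In this toy, the coarse-grained event occurs in every sampled nearby world, while the fine-grained event occurs in only a tiny fraction of them—mirroring the contrast between "a war in 1914" and "a war started on this exact day."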






Figure 1. A representation of the inexorability of an event X occurring in the actual world. The plane stands for a universe of possible worlds around the actual world W; each world is a dot in the space. In the darker inner oval, worlds where the focal event X occurs; in the lighter outer oval, worlds where X does not occur. Around W, the set of worlds is darker almost everywhere.



Inexorability, Natures, Essences, and Kinds

As we saw with Martina, inexorability may ground the concepts of "nature" or "essence." Granted, many philosophers since Frege have dismissed the notion of essence. Assuming that the essence of something X (in the sense of an answer to the question "what is it?") really exists bears a heavy metaphysical weight. Some would reject it as a hidden form of idealism: in addition to the particular thing X, the essentialist philosopher posits an idea that exists by itself and that is instantiated by this thing. In any case, the fate of "essences" depends upon one's understanding of the status of the question "what is X?", and therefore engages questions concerning the reference of terms (here, the name X). But by talking here about natures, and the nature of particulars (events, things, persons), I am not committing to any theory about essences: "natures" refer to a specific structure of the set of possible worlds within which something X is referred to. It might be that some natures are also what is meant in certain philosophical accounts by "essences"; but if one denies any essentialism, "nature" is still a term that can be used.

Moreover, there exists a major debate in philosophy about the meaning of "natural kinds" existing in the universe, and how we can identify them (that is, a property P is a natural kind if it captures an ontological feature of the world—like "being a stone" or "being gold"—and not just a conventional assemblage—like "being-a-truck-or-a-garden").7 If some properties are natural kinds, then they should relate to general natures in the way that is sketched here; namely, they should feature the inexorability-style structure of possible worlds I highlighted. But I leave the exact account of "natural kindness" open within this particular framework. My account is also neutral regarding biology: the "nature" of Martina Navratilova is not necessarily some set of biological properties, or her DNA, or her genes plus a few other factors that are supposedly less bound to environmental vagaries. Many nonbiological things could be robust across possible worlds; and the question of what constitutes such a nature in each case and for each particular is an empirical one, which can be addressed through our knowledge of natural and social regularities.

Back to Inexorability and Contingency

So what does my portrayal of contingency in Martina Navratilova's life ultimately convey? Her winning Wimbledon in 1982 belongs to the actual world and to very few possible worlds (as soon as we modify certain events connected to our world that pertain to this event, it no longer takes place).
But if we now look at the set of possible worlds that are similar to our own—where Martina Navratilova has a similar origin and education, where there are tennis tournaments, where the rules of tennis competition and rankings are unchanged, and where the pool of other tennis players is the same—she will end up atop the world tennis rankings for a similar amount of time, because she will have won many other tournaments (often against Chris Evert Lloyd in finals or semifinals). The feeling of contingency thus concerns the realization of a fine-grained event (e.g., winning the French Open in 1985 after a hotly contested final against Chris Evert Lloyd) in our unique actual world; the feeling of inexorability concerns the fact that the coarse-grained event of remaining a top player over such a long period of time—which contains the particular fine-grained event in question among other possible end results—is realized in most of the possible worlds that are close to our own.

The two feelings coexist because they translate two distinct orders of metaphysical modality, and because they concern two different structures of the set of possible worlds. Regarding contingency, we focus more on fine-grained events—which means that the pattern of inexorability described above, tied to the corresponding coarse-grained event, no longer holds. Thus, in the next chapter, we will be discussing a very different situation, where contingency and inexorability are completely disconnected—a situation that will ultimately give rise to the birth of the metaphysical idols that were presented at the beginning of the book.

To Sum Up

Explaining a singular event implies identifying the triggering causes and the structuring causes. The event may be fine-grained or coarse-grained, and the causes will not be the same. A coarse-grained event that would be identical in a large zone of possible worlds, due to the constraining power of structuring causes, is inexorable. A very contingent fine-grained event takes place in a very small zone of possible worlds surrounding our own, but can realize a coarse-grained event that will itself be inexorable.


6 Why Did Napoleon Lose at Waterloo?

“Because Maréchal Grouchy, whom he was expecting on the western front, arrived late due to a poor transmission of orders, which allowed Blücher's Prussian army to safely come to the assistance of Wellington's British forces that were already engaged against the French empire's army.” This is the common answer summed up in a few words, an answer which admits a fundamental property of intentions and goals: namely, that they can fail. Napoleon, one of the greatest military strategists in history, had devised a remarkable battle plan; but this single detail about Grouchy's delay (as well as some others that we will go into) thwarted it, and Napoleon lost. Any historical narrative must come to terms with this possibility, and thus retranscribe and reconstruct intentions in order to measure their effectiveness—their rate of success, so to speak—in the world. Napoleon's defeat can be understood by the arrival of Blücher's reinforcements for Wellington, as well as by the overly important role that Grouchy's troops had within Napoleon's own battle plans.



Chance, Causes, and Plans

We must note that there is an asymmetry here in the answers to "why?" When an intention is realized, it seems to be a sufficient answer unto itself. But when it fails, what happens is not thereby explained; on the contrary, it is necessary to resort to explanatory causes (in this case, namely, to ask what events could have undone the Corsican emperor's plans). After the case of Sarajevo from the previous chapter, we now find ourselves in another entanglement of the necessary and the contingent within a narrative. Grouchy's delay is paradigmatically contingent: it might not have happened, if we trust the various arguments about the different forces and strategies at play presented by historians; the problem (for Napoleon), however, is that it did. The contingent seems to have no "why" from the point of view of reasons-for-action; instead, it features a poor connection between intentions and sets of natural causes.

In a more general sense, the case is as follows: a near infinity of events happens every second; among these events, we notice Grouchy's delay because it ran counter to Napoleon's plans and incurred terrible consequences for the empire's army. It was bad luck for the emperor. To call something "chance" or "luck" thus does not mean that there is no cause (after all, everything has causes), but rather that this supposedly random event has a strong link with what interests us (Napoleon's battle plans) while not having causes that have a direct physical or logical connection with it. This is why, when commenting on Aristotle, Antoine-Augustin Cournot—a major contributor to the early philosophical understanding of probabilities—viewed chance as an encounter of independent causal series.1 Of course, this very independence is not absolute or metaphysical, since all causal series go back to the origin of the universe as their common beginning; but it remains in any case an independence within the time scale that concerns us here. Better still, according to this definition, causal series encounter series of reasons-for-action without a logical connection. Here, independence is thus metaphysical, because reasons-for and causes are heterogeneous: reasons-for are ideas (beliefs, intentions, etc.); causes are facts or events in the world. Thus, these two things can't be mixed, and their independence means something more than the lack of a direct causal process that connects them. To say that "chance caused Napoleon to lose" is obviously a misnomer, since chance is neither the name of a force nor the name of an event. But such an expression does mean something—namely, this precise complex structure that intermingles multiple causes; intentions; and the expectations of whoever is telling the story and describing the events, and who is thereby selecting certain events and certain intentions as being significant rather than others.2

The Map and the Territory of Waterloo

Of course, the defeat of Napoleon is complex, and Grouchy's delayed arrival is only one aspect of it (otherwise, the many historians of the period would be out of a job). Specialists enumerate many other facts that contributed to it; and one of them in particular seems to be a perfect example of the role of contingency in the fabric of history. The Battle of Waterloo took place on the plain beneath Mont-Saint-Jean, and Wellington's forces were positioned on the mountain near a farm of the same name. Another farm, called La Haye Sainte, was situated one kilometer to the south. The map used by the French was of poor quality, and the Mont-Saint-Jean farm was mistakenly indicated as being where La Haye Sainte was—resulting in major consequences. With the extra distance created by this error, the French cannonballs that were aimed at the British soldiers systematically fell short, since they had a margin of error that was less than a kilometer. We can thus see how a single detail—a minimal cartographical error—played an enormous role in Wellington's resistance to the empire's army, and thus in the final victory of the British and Prussian forces.
We can imagine how different the world would have been if Napoleon had won at Waterloo, continued his second conquest of Europe, and perhaps forced the Prussians to surrender and the British to agree to an armistice. This cartographical mistake is a typical contingent fact: a reproduction mistake is a detail, and there exist many possible worlds that are very close to our own in which everything is exactly the same at the moment when Napoleon entered into battle except for the map. In all of these possible worlds, the rest of the battle would have turned out differently. This contingent fact thus had disproportionate consequences.

Formally speaking, the defeat at Waterloo thus differs from the case of World War I. The latter enveloped a certain form of necessity, or what we earlier called "inexorability." In the case of Waterloo, things are clearly not the same. In numerous worlds that are very close to our own, the map of Mont-Saint-Jean was correct, and Napoleon won instead of losing. In others, the information reached Grouchy in time, and the emperor won again. In a general manner, the "structuring causes"—namely, the situation of the belligerents—are not as restrictive as they are in the case of World War I. Changing the details of certain events—the map, the information given to Grouchy—would possibly change the outcome of the battle; but this is something that is difficult to estimate. In the end, while World War I—with its particular circumstances—presents a clear case of inexorability, this is not the case with Napoleon's defeat at Waterloo.

If we want to specify this difference, we see that the group of structuring causes of World War I (alliances, conflicts of interest, etc.) contributed to making war imminent, meaning that anything could have ultimately triggered it. In the case of Waterloo, there are as many causes conducive to Napoleon's defeat as there are conducive to his victory. If we modify the state of the world in our minds to imagine other possible worlds, then—depending on whether we address one series of causes or another—we will end up with possible worlds where Napoleon wins and others where he loses.
We will not end up with a set of possible worlds close to our own where the outcomes of the battle are almost all similar to each other—as is the case with World War I, which is what makes that fact inexorable. Let's be clear: in the two cases (Napoleon's defeat at Waterloo and World War I), there are contingent events that play the role of being the (or one of the) triggering cause(s). But the articulation between the contingency of these causes and the necessity of the event itself is very different. For World War I, there is an inexorability at play that can be conceived of within the framework of possible worlds. With Waterloo, this is not the case. There is a contingency in the event of the "defeat at Waterloo" itself—a contingency in the "coarse-grained" event (to use the categories from the previous chapter, where the coarse-grained event "1914 world war" was inexorable while the fine-grained event was contingent).

Mapping the Structure of Contingency

Contingency and inexorability therefore correspond to two distinct structures of the set of possible worlds around ours. When event X occurs, its being contingent means that X happens in many possible worlds close to ours—but not in all (or almost all) of them. If we had a closer look at those two subsets of possible worlds—the ones in which X happens and the ones in which it doesn't—we would see that they are intertwined, and that it would be impossible to single out a small homogeneous subset of worlds close to ours where X happens in all of them (see fig. 2). This is a fact about the topology of possible worlds: "contingency" refers to a pattern where changing one detail in a nearby possible world W' (like Napoleon's map, for example) may preclude the occurrence of X in W'; but changing another detail in W' would lead to a world W" in which X ultimately happens, and thus we have a topology of the set of possible worlds where X-worlds and non-X worlds alternate indefinitely. In any region of the possible worlds close to the actual world W, there is at least one possible world in which X does not occur. (Notice that among the events we also include the intentions of the agents—for instance, Napoleon's desire to dominate Europe. Even though intentions sometimes don't lead to events that resemble them, they are crucial in determining what our actual world is, and which possible worlds are close to it.
In other words, a world in which Napoleon decided to become a ballet dancer is very far from our own.) If one wanted to develop this view, degrees of contingency would need to be defined. The highest degree would be a total intermixing between X and non-X worlds, such that there is a non-X world in between any two X worlds. The concept of fractals would describe such sets.3 This would mean that the size of the largest homogeneous region of non-X worlds would define the degree of contingency. The lowest contingency would be defined as pure inexorability; inversely, and according to the "size" of the zone of possible worlds where the event in question constantly takes place, inexorability would come in different degrees.4

Figure 2. Structure of contingency. In the set of possible worlds, W is the actual world where an event E happens. Each pixel represents a possible world. Black pixels are worlds where E happens, light pixels are worlds where it doesn't. Whichever neighborhood of W one considers, there are light and black pixels, that is, worlds where E happens or doesn't happen.

We thus see the differences between two types of modalities: real contingency (Waterloo) and inexorability (World War I). This difference corresponds to two very different structures in the universe of possible worlds around the actual world when we consider the event in question. In cases that are logically or causally similar to World War I, we discussed the notion of "nature"—understood as an affirmation of what seems constant across different possible worlds in spite of dissimilar potential triggering causes, representing a subtle articulation between contingency (triggering causes) and necessity (structuring causes). Yet in a case like the battle of Waterloo, this notion of nature and this subtle articulation prove to be irrelevant.

Contingency, Inexorability, and the Formalisms of Complex Systems Science

What is relevant, however, is the fact that this characterization of contingency can map onto a partition of systems in terms of their predictability, which one could infer from an examination of classes of models of physical systems. Since the 1980s, it has been common to talk about unpredictability in regard to those systems that are labeled "complex." I won't delve into the complexities of the meaning of "complex." My point here is that predictability is an objective property of systems; that it is distinct from determinism; and that different kinds of predictability map onto my partition between degrees of contingency and degrees of inexorability as defined in terms of possible worlds. Considering only deterministic systems (namely, those systems in which the state at instant t is wholly determined by the state at the instant t – dt just before it), we can't state that the behavior of the system at a future time T is always predictable, even though the common intuition about determinism is that it renders systems wholly predictable. Why? Because of a property that has been crucial in the study of complex systems, named "extreme sensitivity to initial conditions" (ESIC).5 Determinism, in principle, states that once the laws and the initial conditions of a system are given, its behavior is wholly determined until it achieves a final state—which is often an equilibrium state.
To say that the system is not sensitive to initial conditions means that given initial conditions I—after which the system reaches a behavior F at time T—a small modification di of I will change the trajectory only slightly: the behavior at T will be F + dF, with dF small. Thus, the system is predictable, because someone who knows I with a small error margin di will be able to predict the final behavior F, since the error margin dF on F will be small. But what if this condition does not hold? In this case, if the initial conditions change from I to I + di, the final state can change from F to a state F' which is very different from F. This extreme sensitivity constitutes a problem for any prediction, for the following reason. Any determination of the initial conditions I is a physical measurement, and thus a finite measurement with error margin di. In other words, it's actually an interval of values (granted, a small interval, [I – di, I + di]).6 But if there is sensitivity to initial conditions, then the final state reached by the system at time T can be anywhere between F and F', which are very different from each other; and because—from the viewpoint of our measurement—I and I + di are not discernible, we can't decide what the final state of the system will be with a decent error margin. This ESIC may happen in apparently very simple systems—as the French mathematician Henri Poincaré already demonstrated in 1890 with the "three-body problem."7 Think of the moon, the earth, and the sun. They follow trajectories wholly determined by the law of gravitation. Given their current situation, Poincaré showed that ESIC holds; and that over a very long time (in the range of billions of years), one will not be able to decide whether the moon will still orbit the earth or not. This historical example offers us two additional lessons. First, the unpredictability only applies with respect to a certain time range. The three-body problem is fortunately very predictable in all time periods of interest to us; and by the time the moon's trajectory may diverge, humankind will probably have been gone for millions of years.
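The sensitivity itself is easy to exhibit numerically. As a sketch of my own (not drawn from the text), the logistic map can stand in for the three bodies, since it needs only one variable while remaining fully deterministic and, for the parameter value used here, chaotic:

```python
# Sketch (not from the text): extreme sensitivity to initial conditions (ESIC)
# in the logistic map x_{n+1} = r * x_n * (1 - x_n), a fully deterministic
# system that is chaotic for r = 4.

def trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

I, di = 0.2, 1e-9          # initial conditions I, measurement error margin di
a = trajectory(I)
b = trajectory(I + di)     # indistinguishable from I by any finite measurement

gap_start = abs(a[1] - b[1])
gap_late = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
print(f"gap after 1 step: {gap_start:.1e}")          # still on the order of di
print(f"largest gap after 30+ steps: {gap_late:.1e}")  # no longer small
```

Both runs follow the same deterministic law, and only the ninth decimal of the initial condition differs; yet after a few dozen steps the two trajectories are uncorrelated. It is finite-precision measurement, not indeterminism, that blocks prediction.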
Second, Poincaré's mathematical demonstration explains why this may occur: in all differential equations describing actual physical systems, there are terms one neglects when solving these equations8 (for instance, the action of the gravitation of the moon upon the trajectory of the earth impacting the earth's relation to the sun, which modifies the sun's gravitational force, and then the trajectory of the earth, and then the effect of the moon at the next instant). Those are indeed nonlinear terms. Ordinarily, they are so small that we can neglect them with no impact on our predictive abilities. However, they may make a difference to the trajectory of the system when two initial conditions are extremely close. With time, these differences accumulate and may lead to a divergence between the system starting at conditions I and the system starting at conditions I + di—even though the equation dictates, mathematically, the position of the three bodies at each instant. Yet the prediction—which is always based on physical measurements, which are themselves finite—can't discriminate between these diverging trajectories over a very long time lapse. This situation formally defines contingency. It shows that there are two elements in the concept of contingency: logically, the fact that an event would be different or would not have taken place if another event had not taken place (something captured by the phrase "contingent upon"); and epistemologically, the fact that, with respect to our cognitive abilities, the system (or the world) could be very different—due to the fact that the system, even when wholly determined, starts with conditions that can't be measured with infinite precision, and is thus ESIC. Yet this has another consequence. It may happen not only that, given close but distinct initial conditions, the system ends up in the same state, but also that, given very different initial conditions, the system still ends up in approximately the same state. It's easy to think of an example: picture a bowl. Have a ball roll from the top to the bottom of the bowl: wherever you drop it, it will still end up at the same point. States like this are called "attractors." The spaces of interest may not be made up of real bowls, but can instead be an abstract space in which each state of the system is represented by a point whose coordinates are the values of the main parameters describing the system (e.g., position and speed, or pressure, temperature, and energy).
In such a space (called "phase space"), the behavior of the system appears as a trajectory. An attractor is a shape in this space around which the system evolves, such that the system ultimately tends toward this shape whatever its initial state. Obviously, if there is an attractor for the behavior of a system—at least given a range of initial conditions (which is called the "basin of attraction")—then the trajectory is predictable. If there is an attractor, then the system faces an inexorable destiny, since its state will end up "in" the attractor wherever it starts. Contingency and inexorability map onto these two kinds of dynamics: those which are unpredictable because of ESIC and those which are predictable because of attractors. The sets of possible worlds in which an event X occurs, when it comes to assessing the degree of inexorability of X, correspond to the basin of attraction defined by the occurrence of event X taken as the attractor of many processes occurring in the world.

Figure 3. Three kinds of deterministic systems. To the left of the arrows: range of possible initial states; to their right: values of final states. (a) Predictable determinism: the range of final states varies smoothly with the values of initial states. (b) Attractors: whatever the value of the initial state, the system ends up in the attractor state. (c) Unpredictable system (ESIC): very small variations in the initial states lead to very distinct final states (represented by distinct shapes). The attractor case (b) is a case of convergence of various causal paths and corresponds to an inexorable outcome; unlike (b), (a), namely predictable determinism, is a system type whose final states are contingent upon initial states. The (c) system type is radically contingent.

There is a correspondence between metaphysical concepts of necessity and formal notions in nonlinear dynamics that parallels what we previously saw when discussing ordinary goals and beliefs, and the formalized rational choice theory used by economists. Measuring the degree of inexorability of an event would therefore correspond to measuring the size of the basin of attraction. I will leave out the details of this account, but still wanted to show how the semantics of possible worlds and the formalism of complex systems provide concordant, coherent tools for making sense of the intuitive notions of contingency and inexorability.

Epistemology: Knowing about Contingency

Nevertheless, a question is left unresolved: how do we know if an event belongs to the "Waterloo" type or the "World War I" type? In other words, how do we know what is happening in these possible worlds that are close to our own? We must distinguish here what happens in these possible worlds (what we will call counterfacts) and our access to these possible worlds. The fact that we may not be able to know what happens there is distinct from the fact that things are determined there in a certain way or not. A world where Princip shoots at Archduke Franz-Ferdinand and misses is determined at least in part: this miss results in many possible consequences, which will define just as many possible worlds. Yet up to a certain point we know certain things about counterfacts; and it is precisely because of this that we can make causal inferences, which suppose that we are able to tell what would have happened if the putative cause were not there. For example, we know what would happen in the worlds where Princip missed Franz-Ferdinand.
Because of our awareness of certain laws of nature, as well as of the economic and geopolitical regularities of the time, we can claim that Europe would have exploded into war regardless of whether his aim was true or not. Briefly, our knowledge of the laws of nature and social regularities allows us to know things about the possible worlds surrounding our own (but not everything, of course); and it is this knowledge that justifies the argument that World War I was inexorable, as well as many other things. It also justifies our correctly maintaining that a given event is contingent, because we see through the same method that there exist many possible worlds around our own where one or two prior events did not happen and this event did not take place. The contrast between the contingent defeat at Waterloo and the inexorability of World War I is a metaphysical contrast between two structural types for the sets of possible worlds under focus when dealing with each event. In these two cases, by considering each of these events, we know enough about sociology, politics, economics, and nature to say to which structure each of them belongs. Of course, I deliberately chose these examples; yet in many cases, we cannot know what would happen in a large variety of possible worlds because we know too little about laws, regularities, etc. Consequently, for any given event, we will often not know whether it is contingent or inexorable; but this is a question of our knowledge and not of the event itself—of which it is reasonable to think, metaphysically, that it belongs to one of the two types, since counterfacts are determined in one way or another. We just don't have access to them. This epistemic opacity, however, implies that an apparently contingent event can in reality be inexorable, since we only have a limited and ultimately erroneous knowledge of the group of regularities that underlie it. It could therefore be that certain "Waterloo" events prove to be, metaphysically, "World War I" events. Hence the temptation to wrongly think that all events are in reality "inexorable," and to reify or personify that inexorability. This can then lead to what we are going to consider now—an entirely illusory ideal construction that we will call a metaphysical idol, and which will play a conceptual role that is analogous to that played by the idea of the nature of a thing.

The idea of destiny is one of the figures of this idol. What is called the destiny of an individual is realized in events, even if these are not included in the individual's plans—and a fortiori when they are not. Someone could, for example, say that it was Napoleon's destiny to lose at Waterloo because he had conquered too much, because he had been too ambitious and too greedy. This argument would be supported by a kind of moral vision of the world that is somewhat similar to the Hindu idea of karma—according to which, in basic terms, humans ultimately pay for their actions in a following life; and which can also be found in the biblical aphorism that is well known to all the characters in the cowboy comic Lucky Luke: "who lives by the gun dies by the gun." With the idea of destiny, we thus create a short circuit between causes and reasons in the independent series of causes that Cournot conceived of. The poorly reproduced map of the Mont-Saint-Jean farm was of course a contingent fact; but in the light of this notion of destiny, it becomes the thing by which Napoleon's destiny was realized. Unlike the case we discussed in the previous chapter, we are not dealing here with the expression of an event that would be roughly the same in all the possible worlds close to our own. On the contrary, this event is contingent (in other words, in very similar possible worlds, the event would sometimes be different)—a fact that the idea of destiny aims to erase. When talking about destiny, I do not want the reader to imagine an obscure mystical or religious idea, even if it is integrated into a great number of myths and stories from all cultures. The simple practice of narrativity—of telling stories—produces the device from which the notion of destiny emerges. I will explain this in a few words, and will then turn toward the myth of Oedipus and psychoanalysis—two examples of how a metaphysical idol is constructed.

Chekhov's Gun and Destiny

Imagine you are at the movies. The camera follows a young man; he is walking in a crowd while a woman with slightly red hair, wearing a white fake fur coat, advances toward him from the opposite direction. The man sees her; and at the moment where they begin to cross the street that separates them, he smiles at her.
Because of this, he does not see the car bearing down on him from his left, which hits its brakes too late. The man barely has enough time to throw himself backwards, falling hard on the pavement. The people around him are of course very frightened by this, the young woman included—who approaches him and helps him to get up, now smiling back at him. And then she leaves to go about her business, as he does as well after a deep breath. Voilà. After this scene, you expect something. These two will surely meet again, share a life story, fall in love, have children and terrible fights and go through crises; perhaps they will die together, or save the world, or spend all their time in cafes having irritating and depressing conversations while listening to Tindersticks playing in the background and smoking many cigarettes (especially if the film is French and dates from before 2000). Everything depends on the film's genre, where it comes from, and its director. Your expectations here are fundamentally linked with narrativity—with our ability to listen to, read, tell, and love stories.9 In a narrative, things necessarily happen in one way or another because of how the different causes and effects are linked together. The structuralists, following the Russians who laid the groundwork for these ideas (such as Vladimir Propp and his Morphology of the Folktale, 1928), maintained that there are major articulations in all folktales, and even in all narratives. They tried to extract them from the immense multiplicity of existing stories and novels; more recently, scholars have referred to the "narrative arc" when discussing these general structures of narration (which are now taught in film schools to young people mostly aspiring to write screenplays for Netflix or HBO).10 However, I would like to talk about something that is much simpler and more primitive than that—something that Chekhov expressed best in the idea now referred to as "Chekhov's gun." This idea is simple: in a play, if a gun is mounted on a wall in act 1, it must be fired by act 5.11 What does this mean?
In the real world, each event, like each intention, will have innumerable causes and innumerable effects. Only certain ones among them interest us—namely, those that we control and those which influence our lives. If a narrative were forced to account for all the causes and effects of all events, it would be impossible to follow: infinite, too dense, with no principle of hierarchy between the different elements. On the contrary, the causes and effects that are generally presented are those which have causes and effects that are directly relevant to the major events and characters of the story. The infinite clutter of events thus becomes manageable. For this reason, details are rarely given at random in a narrative; and this is what is illustrated in Chekhov's idea. The criteria for event selection in narration thus become the opposite of the notion of efficient causality: events are there because of the effect they have on important things and characters in the story (such is the case with Chekhov's gun), while other events that are effects of internal events in the story but don't impact relevant people are excluded from the narration. This is the basis for what I will call the "principle of causal saturation" in narration. What does this mean? In our daily lives, we cross paths with hundreds of people every day; we step on their toes or we smile at them or we steal their bags; and then we never see them again and nothing happens. Or perhaps something will indeed happen as a result for one of them: his foot needs a bandage; he goes to the pharmacy and meets a teacher who is there to buy medicine for her sick dog; later they have a child together. But we will never know this. In the film that I was imagining, the young man will see the young woman again—simply because, like in those children's maze games where you have to help the mouse find the right path to the cheese among the many different tangled routes, we have to start at the end to construct the story (in this case, that of the young man). Indeed, among the many different encounters of this young man that same day, the film focuses on this one in particular because he will later share a story with this woman—even if, in the life that the film recounts, it is not because he will share a life story with her that he meets her. From this, a good portion of the events that we see in the film—events that are present in the narration—are full of important effects for the agents in the story.
This is what I call "causal saturation"; and this is what differs from reality, where events lead to a myriad of effects that affect millions of individuals in billions of possible situations that all belong within the same causal framework. More precisely, causal saturation can be defined as follows: given two successive events in the narration, A and B, most of the effects of A that are causally very far away from B will not figure in the narration, along with many events that are roughly contemporary with A that don't connect causally with B (whether as obstacles, facilitators, accelerators, or in any other way of modifying the nature or the time and location of B). As a consequence, the causal space of the narration is saturated, meaning that most of the causal chains therein are related to A or B; while in the "real" world, these chains are much more diversified in terms of the events/facts they relate. Now, if I introduce other events C, D, and E, they will be selected for their connection with the causal pathways already there connecting A and B. And if I add F and G, which are not events but people, the same principle of event selection holds, and the same effect of saturation emerges and is reinforced (see fig. 4). This causal saturation thus produces in the reader's or spectator's mind the effect of destiny: the events seem to have the function of leading toward a certain conclusion—a very understandable effect, since they were chosen in the narration specifically for this. This leads us to the formulation of a very simple hypothesis: the causal saturation belonging to the structure of narration—whether it is found in fairy tales, novels, or a narrative that we make about our own stories or history in general—creates a feeling of destiny. In an imaginary tragic play cowritten by Dostoevsky and Chekhov, the gun which Dimitrius ultimately uses to kill Svidrigailov is a tool of destiny: "It was there for that purpose." More generally, for the playwright and spectator alike, the feeling will form that all the facts of the world—and in particular the actions of Svidrigailov himself—contributed to his violent death.
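As a toy formalization of this selection principle (my own construction, not the author's), one can picture the world as a causal graph and narration as a filter: only the links lying on chains that connect the story's focal events are kept, and everything else, though causally real, drops out.

```python
# Sketch (my construction, not the author's): "causal saturation" as a filter
# on a causal graph. The world contains many causal links; the narration keeps
# only those lying on chains connecting the focal events.

world = {  # cause -> effects, in a small hypothetical world
    "A": ["B", "stranger_sneezes"],
    "glance": ["A"],
    "B": ["C"],
    "stranger_sneezes": ["pharmacy_visit"],
    "pharmacy_visit": [],
    "C": [],
}

def reachable(graph, start):
    """All events causally downstream of `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def narrate(graph, focal):
    """Keep only events that lie on causal chains among the focal events."""
    focal = set(focal)
    downstream = set()
    for f in focal:
        downstream |= reachable(graph, f)
    upstream = {n for n in graph if reachable(graph, n) & focal} | focal
    relevant = focal | (downstream & upstream)
    return {c: [e for e in effects if e in relevant]
            for c, effects in graph.items() if c in relevant}

story = narrate(world, focal=["glance", "A", "B"])
print(story)
# The stranger's sneeze and the pharmacy visit, causally real, drop out
# of the narration: they touch no chain connecting the focal events.
```

The narrated graph is saturated in the author's sense: every remaining link relates the focal events to one another, while the "real" world's stray chains have vanished.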
Briefly, this is what is conceptually happening here when "destiny" or kindred notions are introduced: within the narrative, the Waterloo-type situation (in which we can conceive of nearby possible worlds that indifferently include Napoleon's victory or defeat) becomes a World War I–type situation (in which all possible worlds close to our own include the eruption of this war). Indeed, if we exclusively consider the causes and effects that are present in the narration, and if we respect the constraint of causal saturation in all of these worlds, the possible alternative worlds constructed exclusively on this basis become very similar to each other with respect to the central characters in the narration and what happens to them.

Figure 4. Representation of the structure of causal saturation proper to narrativity. A, B, C, D, E are events or facts; F, G are people. Arrows indicate causal connections. Plain lines are relations that are told in the narration; dotted lines represent causal relations that do not take place in the narration, and that connect all events and characters together, including A, B, . . . G. The black square is what is in the narrative. It appears full of causal relations between A, B, . . . G, namely, plain black arrows, with no represented causal line telling anything about other events and people.

Why so? Let's consider again our romantic chance encounter between the red-haired woman and the distracted young man, and suppose a possible world close to the world W that they inhabit. (W is defined on the basis of the requisites for narration, as sketched above.) Because of the structure of narrativity, most of the causes that could exist and have led to their not seeing each other again won't be there, since by definition they would lead to effects and assume causes that are not in W. If a world W' must be close to W, then it will very probably not include such causes; therefore, the two young persons will meet again. And this reasoning applies to all the key events in the narration; as a result, most of the worlds close to W will be such that the characters meet again, go through the several stages of the nascent love story, and end up in the final situations chosen by the screenwriters of our movie. Thus, the impression of inexorability—inexorability being by definition the constancy of an event through a very large set of possible worlds close to our own. Through the effect of causal saturation that is unique to storytelling, Napoleon "had" to lose in the same way that Martina Navratilova had to end up as one of the most important tennis players in history. It was their destiny. In his major opus Temps et récit,12 Paul Ricoeur insisted on the affinities between writing history and writing fiction.13 In addition to all the affinities he listed, I want to emphasize the fact that causal saturation, as a key feature of narrativity, impinges on any attempt to tell a story, whether the events of the story actually occurred or not. This is one reason why written history looks like stories; and perhaps more strikingly, why our own way of telling our story to ourselves—and thus our own self-understanding—often appears to be shaped through this causal structure. In addition to how we have seen that two meanings of "why?" emerge in two sorts of discourse—namely, justifying (one's action or one's belief) and explaining (the world)—and thus two sorts of linguistic practice, it appears that a major figure of the confusion between the meanings of why (as we have seen with the notion of destiny) stems from a third major linguistic activity among humans: telling stories, or narrativity.
And because we tell ourselves the story of our own lives and constantly use narration to understand our own lives and the lives of others (both living and dead),14 the notion of destiny (or kindred notions) will always be available for the better understanding of ourselves and others.15



Oedipus and Failures

Of all the contributions of Freudian psychoanalysis, the use of the mythical figure of Oedipus is perhaps the most famous: after all, who has not heard of the Oedipal complex? But beyond illustrating the Freudian theory of everyone's ambivalent attachment to the two parental figures through the myth of Oedipus, psychoanalysis also reactivated a major idea concerning Greek myths—namely, that of a hero's destiny. Generally speaking, destiny is a way of aligning a chain of contingent events so that they become the manifestation of a profound necessity linked to the essence of a person. In particular, the random events that thwart an individual's plans prove to be the mainsprings of the realization of their destiny. The story of Oedipus illustrates this perfectly. At his birth, an oracle predicts to his parents—the royal couple of Thebes—that he will kill his father and marry his mother. The credulous parents decide to kill their newborn son through exposure in the countryside. Thanks to a shepherd who takes pity on the wailing child, Oedipus survives and reaches adulthood, but decides to visit an oracle when someone tells him that he is a bastard. After learning about his terrible destiny and fleeing from the people whom he believes to be his parents, Oedipus roams out into the countryside, randomly killing an old man who refuses to get out of his way on the road (times were rough back then and people got worked up easily, especially behind the wheel). He then arrives at Thebes, where the famous Sphinx has taken control of the city and created a terrible epidemic as punishment for the assassination of King Laius—who of course turns out to be the rude old man who had refused to budge on the road. He learns that whoever can solve the famous riddle of the Sphinx will free the city from its curse and marry the now-widowed queen to become king. Oedipus comes up with the answer that everyone now knows and marries Jocasta, who is none other than his mother.
Sophocles's legendary play Oedipus Rex retraces the detective-like investigation that Oedipus then undertakes to find and punish his predecessor's murderer—an inquiry that the prophet Tiresias advises against, as it will lead him to discover the terrible secret about the death of his father and his own true identity. A foundational myth in Greek culture, Oedipus is far more than what I have described in these few sentences; but what is interesting for us in our particular context is simply that the original prediction concerning Oedipus comes true in spite of the plans that were specifically conceived to prevent it from happening. The chance encounter with Laius on the road is the means by which the oracle's vision is realized, since it is from that point that everything unravels. This, combined with Oedipus's intelligence in resolving the riddle of the Sphinx, leads to his downfall—that is, to his destiny. The randomness presented here is not simply random; on the contrary, it is the manifestation of a destiny laid out by the oracle, and thus of a necessity that is in some way superior. Once again, the situation is different from the case of World War I, where the chance assassination of the archduke in Sarajevo was simultaneously the cause of the war as it took place and a manifestation of the inexorability of this war in general. Here, if Oedipus had not met Laius, it would have been very unlikely that he would have ended up killing his father and marrying his mother. But the idea of destiny—the idea, conveyed by myth, that everything is converging toward the realization of a particular end—makes the story of Oedipus in some way similar to that of World War I, as it evokes the same impression of inexorability. Psychoanalysis does not just cite the story of Oedipus. Under the name "unconscious desire," it constructs an avatar of the mythical notion of destiny. Let's look at what Freud tells us about what we today more casually call a "failure neurosis."16 A patient had been appointed to the highest position in his professional hierarchy—something that he had wanted for a very long time.
In the three months that followed, he got sick repeatedly; lost valuable personal objects; quarreled with his team; argued incessantly with his wife; and ultimately got fired and divorced. Freud interprets the case: the man in question was having a very difficult time accepting the success that he had been seeking for so long; deep within himself, "unconsciously," he thought that he did not deserve it, that he was usurping this position. Or rather, he unconsciously felt that he had gone beyond the level of his father, thereby engendering a feeling of guilt about having transgressed the natural hierarchy between father and son. The fall down the stairs and the virus caught in autumn are certainly coincidences; but ultimately, they are the means by which this desire to be punished—to spoil the sense of achievement that comes with success, to pay off the debt, to sanction the filial transgression—is realized. As in the case of destiny, acknowledging an unconscious desire combines a group of random events, tying them together in a way where they appear as the manifestation of a consistent and meaningful desire. Destiny is thus not fixed by the gods or chanted by an oracle. On the contrary, it exists in the form of unconscious desires that analysts can decipher by listening to the dreams, associations, and words of the patient. But as for its structure, it is the same as the destiny found within myths. Since psychoanalysis developed its founding theories by keeping a watchful eye on these myths (Oedipus, Electra, Orestes, etc.), and since myths are a canonical form of narrativity, it is not surprising that a new avatar of destiny was produced—one whose source depended on the very nature of narrativity. Let's go back to Napoleon and Waterloo. From the perspective of Oedipus or of Freud (a bit more modern), the poorly rendered map was, at a certain level, purely bad luck. But on another level, it realized Napoleon's unconscious desire to see himself punished for his hubris. An original sin according to Ancient Greek wisdom, hubris makes mortals believe themselves capable of crossing the boundaries that nature has imposed on them. Hubris is always ultimately the reason for one's terrible destiny, because all forms of hubris must be atoned for.
Here, psychoanalysis is essentially a figure—one that is certainly culturally powerful—of the notion of destiny.17 Because of their essential affinity with the nature of storytelling, novels and narratives can be viewed as so many opportunities to think of existence as destiny—following the very precise articulations of contingency and necessity that have been under discussion since the preceding chapter. Within them, contingency is manipulated into becoming a necessity.


But if, as René Girard said, there exists “fictional truth,” this is because successful novels allow us to see a dialectic between inexorability and contingency in individual lives. In the following chapter, we will consider another, very different figure of this feeling of inexorability according to which chance does not exist.

To Sum Up

Narration intertwines natural causalities and intentions—both successful and failed. Some events are contingent because it is difficult to isolate an area of possible worlds close to our own where they would inexorably take place. Under various guises, the idea of destiny transfigures this contingency into a sort of inexorability; many ideas and narratives, as ancient as myth or as modern as psychoanalysis, give substance to this concept. By inducing the specific organization of causality within a narrative that I have named “causal saturation,” the very nature of narrativity contributes toward inducing the idea of destiny—an idea to which we are well disposed by our deep familiarity with narratives, since we tell our lives to ourselves and others throughout the course of our lives.





7 Why Were There American Soldiers on the 15:17 Train from Amsterdam to Paris on August 21, 2015?

On that day, the Thalys Amsterdam–Paris train was the scene of an attempted attack so spectacular in nature that none other than Clint Eastwood made a film based on it featuring some of the actual people who were there that day. A 26-year-old Moroccan man boarded the train with a Kalashnikov and some automatic pistols with the aim of massacring the other passengers. We later learned that he was a radical Islamic terrorist with links to the ISIS group responsible for both the Bataclan attack in Paris and afterwards the coordinated bombings in Brussels. Almost immediately after he began his assault, he was brought down by three weaponless American passengers who were returning from vacation—two of whom were in the military. As it is difficult to picture civilians overpowering an individual armed with multiple handguns, we can easily imagine the terrible toll that this attack would have taken if these young men had not been there. But what a coincidence! Right at the moment when a terrorist boards a train to commit his crime—something that is thankfully quite rare—there are soldiers in the same train car. It seems to be pure chance; but couldn’t there be another reason for their presence than this banal story about going home after a vacation?

Defining Conspiracy Theories

Of course, as was the case after the September 11 attacks, speculations immediately appeared about a possible coup, the actual role of the Moroccan, the event being staged, etc. The leitmotif of all conspiracy theories can ultimately be summed up by the two following sentences: “It is not by chance that . . .” and “Who benefits from the crime?” As such, the identity cards left behind by the Kouachi brothers—the notorious assassins behind the Charlie Hebdo massacre on January 7, 2015—were not forgotten out of negligence; just as the unusual presence of an extra police officer at the Federal Building in Oklahoma City right when Timothy McVeigh was leaving after having positioned his bomb (this attack remaining the most deadly instance of domestic terrorism in the history of the United States) was not merely a coincidence. At the behest of the Charlie Hebdo attack’s mysterious sponsors, the documents were intentionally left behind in order to leave a trail for the police. And these sponsors are necessarily the ultimate beneficiaries of this horrific act: not the terrorists—who will spend the rest of their lives in prison, or who are already dead—but the enemies of Islam in France, or the partisans of war in the Middle East. After all, as September 11 showed the world, these kinds of events can potentially accelerate military intervention.
No matter how one defines them, conspiracy theorists stand out through their extreme attention to details that don’t “match up” with the “official version.”1 A strange reflection in a rearview mirror in the video footage of the Kouachi escape; or, to use one of the most famous conspiracy theories, the position of the flag planted in the lunar ground by Neil Armstrong as we see it in the photos of the 1969 moon landing (“there is no wind on the moon and so the flag cannot be floating like this”)—such details allow them to cast doubt on what everyone thinks they know: that the Kouachis were responsible for the Charlie Hebdo massacre, that we landed on the moon. This taste for detail must intrigue us somehow; after all, this is precisely how the character Sherlock Holmes operates. Where a normal person would see the suicide of a desperate painter, the great detective would notice photos of the deceased showing that he was left-handed—and that the revolver at the scene of the crime was being held in his right hand. “Staged,” the detective would say, rushing out to find the murderer. This is how many detective novels have been structured ever since the presumed invention of the genre in the nineteenth century by Edgar Allan Poe with his extraordinary tales about Dupin—a figurehead for almost all the murder mysteries that continue to enthrall us.2 The official version is never correct, and we must be suspicious of coincidences to understand what is really happening in the world around us. But how does a conspiracy theory differ from a police inquiry? What distinguishes the “9/11 truther” who refuses to accept the standard narrative about Al-Qaeda’s attack on the World Trade Center and the Pentagon from Hercule Poirot or Philip Marlowe? To answer this, we must first agree on what “conspiracy theory” means. While we generally agree that the idea of the Illuminati is pure fantasy, or that the theory according to which Paul McCartney died in 1966 and was replaced by a doppelganger (a theory called “Fake Paul” by the true believers, and which is undeniably my favorite one of all) is the stuff of crackpots—in other words, while we know how to intuitively recognize a conspiracy theory—it is very difficult to produce a set of general criteria by which one can decide whether an explanation of a given event is a “conspiracy theory” or not, and thus to define this type of theory in itself.
The history of intelligence services and even of democratic governments is actually paved with hundreds of more or less successful conspiracies, whether discovered yet or not: the CIA plots to overthrow the government of Salvador Allende, among many others; the British and American lies about weapons of mass destruction to justify the invasion of Iraq; etc. After all, is it so unreasonable to see conspiracies everywhere? And conversely, when—after having highlighted the secret involvement of biotechnology giants in the financing of studies on the safety of GMOs—someone argues that these products are indeed poison, are they giving themselves up to the world of conspiracy theories? Or are they practicing a healthy criticism of science that is a thousand miles away from the Illuminati, 9/11 truthers, and those who think they know the truth behind the Kennedy assassination?3 One useful definition of conspiracy theories argues that they characteristically assume an unnecessary hypothesis concerning the involvement of a group of responsible agents who remain hidden while malevolent acts are carried out.4 However, since one cannot define what makes a hypothesis “necessary,” this definition is hardly operational. For example, our “truther” will rightly point out that it is necessary to invoke the actions of the CIA when trying to account for facts that are inexplicable without it, particularly those “disturbing coincidences” which the “official version” doesn’t bother to explain: how is it possible that members of the American military were returning from vacation on the Thalys train between Brussels and Paris on the day of the attempted massacre? If we let conspiracy theorists ask why-questions, they will quickly assume that the involvement of conspirators is a necessary explanatory hypothesis. To this extent, conspiracy theories can be recognized by the fact that they tend to ask why-questions that are not necessary for the ongoing investigation. But in fact, why are they unnecessary?
Negligible Details, Weird Coincidences, and Reverend Paley

These why-questions that motivate conspiracy theorists in discrediting the “official” story are generally of two different orders: troubling coincidences (like the American soldiers on board the train or the extra police officer in front of the Oklahoma City Federal Building) and inexplicable details (like the identity cards of the Kouachi brothers or the unfurled American flag on the moon). These two types are not unrelated, since in both cases the official narrative does not try to provide answers. I will now lay out an argument that conspiracy theorists are wrong to demand that facts be more coherent, consistent, and rational than they can actually be. As a consequence, their hyper-skepticism regarding the “official version” is combined with a hyper-rationalism regarding the state of the world. What does this mean? To begin, try to narrate the day before the day before yesterday to a friend, and then to another, and then to another. It is quite possible that certain details will shift between the three different narratives and that the versions will not be entirely identical: Marilyn’s dress, initially red, will become green; the restaurant where you had dinner will change names from Basilic to Estragon; and so on. We are dealing with more here than the reliability of memory. When scientists measure something, they know that there can be fluctuations. The measuring devices, the observation conditions, the thermodynamic fluctuations of the air—a whole group of things implies that there will be “noise” in the data (as computer scientists and physicists say)—which means that what interests us, the “signal,” is not clearly given. For example, Kepler’s analysis of Tycho Brahe’s observations of the positions of the planets allowed him to detect elliptical orbits; however, not all of the measured positions fell exactly on an ellipse. If we had instead tried to account for every movement of the planets through each measured position (which would not trace out an ellipse), Newton’s laws (which do indeed predict an elliptical orbit) would never have been discovered. To know how to distinguish the signal from the noise, and not to treat all the data as signal, is thus as essential in science as it is in everyday life. In other words, it is reasonable to expect mismatched details in a narrative and in data; and therefore unreasonable for a conspiracy theorist to imagine that everything coheres.
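The signal-versus-noise point can be made concrete with a small numerical sketch (a hypothetical illustration with made-up numbers, not anything from Kepler's or Brahe's actual data): each individual measurement misses the true value by some unexplained amount, and it would be a mistake to demand a reason for each deviation, yet the signal is recoverable from the ensemble.

```python
import random

random.seed(42)

TRUE_VALUE = 10.0  # the "signal" we are trying to measure

# 1,000 noisy readings: each one is the true value plus Gaussian "noise".
measurements = [TRUE_VALUE + random.gauss(0, 1.0) for _ in range(1000)]

# Each individual reading deviates from the true value by some amount
# that has no interesting explanation of its own ...
first_errors = [m - TRUE_VALUE for m in measurements[:5]]

# ... but averaging the ensemble recovers the signal to high precision.
estimate = sum(measurements) / len(measurements)

print(first_errors)  # five arbitrary-looking deviations, one per reading
print(estimate)      # close to 10.0
```

Demanding a dedicated cause for each residual, rather than writing the scatter off as noise, is the numerical analogue of the conspiracist's hyper-rationalism about details.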
This is where the nonnecessary character of a conspiracy theory’s hypothesis becomes demonstrable: when one tries to explain inconsistent details and needs conspirators to do so, one is making an unnecessary hypothesis precisely because it is not necessary to explain these details in the first place; on the contrary, it is rational to simply ignore them. Understanding the rationality of the world does not mean that its smallest details must be rational. Hegel illustrated this point by imagining a natural philosopher who wanted there to be a reason why the exact number of parrot species is what it is—when of course this number is entirely contingent in regard to reason, and there is no rational explanation for why there are 52 of them instead of 53.5 And so, we have a criterion for determining conspiracy theories and a reason for finding them irrational. This criterion primarily concerns the notion of chance. In what way? To say that everything tallies up means that the causal series at play merge or conspire (in the strictest sense of the word), which runs contrary to the Cournotian definition of chance given earlier. And this assumption also governs conspiratorial attitudes regarding those “disturbing coincidences” mentioned above. To make a conspiracy theory, one must find a reason why the supposed Moroccan terrorist and the three Americans were in the same train car. Hence, “the four of them were part of a plan.” With this link, we go beyond Cournot’s notion of chance, where causal series are independent from each other—ending up in a situation exactly like that of Oedipus, whose idea of destiny made it so that a random encounter with Laius contained a direct link to the fact that he was subject to a terrible fate as the son of Laius and Jocasta. The divergent series of causes can indeed be united if the question is transformed from the causal “Why were there American soldiers on the train?” into “There are American soldiers on the train!
What for?” To which one can of course respond: “To stage a fake attack.” The conspiracy theorist thus mobilizes reasoning similar to that of Reverend Paley when he was confronted with the complex adaptations proliferating within the living world: such an assembly of parts was so improbable that it could not have been there by chance, meaning that a divine design presided over its construction (at least according to him). Similarly, it was so unlikely that American soldiers would cross paths with the terrorist that it simply could not have been by chance; and thus, they were part of a plan. Asking who benefited from the crime will reveal a beneficiary, who thereby appears as the instigator of the plan. The conspiracy theorist of course commits here a logical imprudence in moving from the existence of a beneficiary—the group that objectively profits from the Thalys attack—to the assertion that this beneficiary intended to reap this particular benefit. Like losses, benefits are often accidental, meaning that they are not wanted by the agents. Benefit and intention are two distinct concepts, even if both envelop the idea that something or some event is good for someone. Let’s note a small difference from Paley’s case. The human mind is not very gifted in the art of calculating probabilities. Cognitive science has shown that we have a tendency to overestimate minuscule probabilities (for example, we are afraid of airplane accidents and sharks even though the frequency of deaths due to either is infinitesimal) and to underestimate higher probabilities (for example, species extinctions or car accidents). Now suppose that you are at a dinner with twenty other guests. One of them was born on February 28, exactly like you. What a surprise! You see an “incredible coincidence,” in the sense of the realization of an extremely unlikely event—even though, mathematically speaking, the probability that someone has the same birthday as you in a group of n people quickly grows as n increases. Chance itself is thus not an easily managed idea. In other words, the inference that “American soldiers could not be there by chance, and thus they were part of a plan” (“by chance” taken in the sense that “it was very unlikely that they would be there”) is based on a false premise: it could even be that the probability of their being in the train with the terrorist is not as low as one might think. It is thus often necessary to accept contingency—to accept that things can be there or happen by chance, in the sense of independent causal series whose probability of crossing paths is weak but not zero.
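The birthday arithmetic is easy to check (a minimal sketch, assuming 365 equally likely birthdays and ignoring leap years; the 20-guest dinner is the text's example): the probability that at least one of n guests shares your birthday is 1 − (364/365)^n, which grows steadily with n, and the probability that some pair among the guests shares a birthday grows far faster than intuition suggests.

```python
def p_shares_yours(n: int) -> float:
    """Probability that at least one of n other guests shares YOUR birthday."""
    return 1 - (364 / 365) ** n

def p_some_pair(n: int) -> float:
    """Probability that SOME pair among n guests shares a birthday
    (the classic "birthday paradox", much higher than the figure above)."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

for n in (10, 20, 50):
    print(n, round(p_shares_yours(n), 3), round(p_some_pair(n), 3))
```

With 23 guests, the some-pair probability already exceeds one half, which is exactly the kind of "incredible coincidence" our probability intuitions mishandle.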
By definition, low probability events can occur, and this is nothing surprising. But in spite of this, dissatisfied with the various possible enumerations of different causal series, the conspiracy theorist looks for a “why?” along the lines of a “what for?” He (and it is generally a “he”) thus imagines the existence of a plan that generally corresponds to his own prejudices—antisemitism, anti-Islam, anticommunism, hatred (sometimes justified) of agricultural and pharmaceutical lobbies, etc. While it is indeed legitimate to think that everything has a cause, it is not always necessary to ask “why?” about something that appears to be an extraordinary coincidence in order to form a response that sounds more satisfying than “these three soldiers were on vacation in Amsterdam, a charming city; they were returning home; the train company had assigned them their particular seats.” But precisely, why is this response not enough for certain people? When people are not satisfied by the ordinary explanation of why American soldiers were present on the Thalys train on August 21, 2015—namely, that they were returning home from vacation—it is because they are actually expecting a certain proportionality between cause and effect. Whenever an effect is massive (like a massacre in a train) we all have the tendency to more easily accept a “big” cause than a series of bland events—an attested bias in how we handle the concept of causality, which we find at work in many conspiracy theories. Such is the motivation behind the reactions to the deaths of JFK and Lady Diana, as well as to the arrest of IMF leader Dominique Strauss-Kahn, among many other examples (in regard to the latter, a French poll taken just after his arrest showed that a majority of French people believed it to be a set-up): that an event which affects powerful people or the order of the universe could be caused by a simple armed fanatic, a drunk chauffeur, or the unrestrained sexual desires of someone well on his way to becoming president of a large country is shocking to our implicit sense of causal harmony. This inability to accept such a disproportion between cause and effect ultimately leads us to seek out more sizable and powerfully situated causes, such as a CIA conspiracy, the Queen of England, or President Sarkozy.
If certain people think that something is missing in the investigation of the Thalys attack and in all the explanations of it that they read, it is because the only cause that would satisfy them would be one equal in importance to the effect which intrigues them. The dissatisfaction when confronted with a series of explanatory causes that we discussed earlier can thus also take the form of a disappointed expectation concerning the harmony between a cause and its effect.

Give Chance a Chance

Actually, “chance” deserves a far richer exposition than what I have offered up until now. The notion of chance is not very consistent: it aggregates several meanings that can lead to distinct rigorous or formal expressions, and this also explains why it’s hard to defend a strong stance about chance and why-questions. Let’s first note that there are many words in many languages used to talk about chance, which are partly synonymous. For instance, in English, randomness, luck (which means something positive but implies chance), and fortune are all good examples. Scholars of Aristotle know the distinction he makes between tuchè and automaton, two terms whose translation depends on the translator and their view of Aristotle’s ontology. In French, one can say hasard, chance, or aléatoire, as well as the old word heur (which is the root of bonheur and malheur and could be translated as “luck”). Let’s try to make sense of this linguistic plurality. When I say “I ran into her by chance,” what do I mean? First, it does not mean that there is no cause. Her standing at the bus stop and my going to the same bus stop both have causes and reasons (in the sense of reasons-for-action). But the two sets of reasons are not connected. Thus, when the conspiracy theorist says, “There was a cop waiting for him there, and this was not just by chance,” he clearly means that there should be a connection between the causal history of the presence of the cop (including his reasons for being there) and the reasons why the terrorist came out of the Oklahoma City federal building on that day. Now let’s again consider the chance meeting with my friend. Suppose I learn that she comes every day at the same time to catch a bus: in this case, I would not be able to say so easily that I saw her “by chance.” Why?
Because this habit of hers made it more probable that I would meet her there.



Thus, besides a lack of apparent reason, we see that there is a meaning of “by chance” that envelops the notion of “low probability,” since raising the probability of meeting my friend makes me less likely to say it’s by chance. This does not mean that there is no determinism in the world: for instance, if I say “it randomly snowed in April,” I don’t deny that there is a determinism underlying the weather; it’s just that it’s very unlikely to snow in April, and so I ascribe this unlikely event to chance (or Prince). And this holds for the conspiracist example: he says that the supposedly low probability of the cop being there is not so low, because—given that the cop had the intention to be there (in order to perform the false flag operation)—the probability of his ultimately being there was in fact very high. Yet this does not exhaust the meaning of chance, whose relationship with probabilities is much more complex. In fact, chance does not pertain only to low probabilities but also to probabilities nearing 1/2. How so? Let’s imagine a tennis match between two major tennis players (say, Martina Navratilova and Chris Evert Lloyd) in the final of some Grand Slam tournament. The score is one set apiece, and it is 6–6 in the third set. The decisive point is hard fought, and finally the ball hit by Evert Lloyd clips the top of the net and slowly falls to its base on Navratilova’s side. At the moment of contact with the net, there is a subjective probability of 1/2 that it will come down on either side. Some would correctly say that luck made Lloyd win the match. Clearly, there is a determinism for all physical motions; and so the ball, given the state of the world at that precise moment, had to fall on Navratilova’s side. From the viewpoint of the equations of physics, the probability is 1, since, if one commits to determinism, anything that actually happens must have a probability of 1. But we still say that it’s luck, or chance.
Chance therefore can also apply to situations where the subjective probability is 1/2 (for instance because, as here, we have no reason to say that the ball falls on one side rather than the other). The intuition is that if the game were replayed—or more generally, if we consider a possible world immediately close to the actual one—the ball would often fall on Lloyd’s side.
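One way to picture this "replay" intuition is a toy deterministic model (entirely hypothetical: the sine rule and the numbers stand in for real tennis physics): each initial condition fixes the outcome completely, yet across a spread of initial conditions indistinguishable to any player, the ball comes down on either side about half the time.

```python
import math
import random

random.seed(0)

def side_of_net(v: float) -> str:
    """Fully deterministic toy rule: the landing side is fixed by the ball
    speed v, but depends on it so sensitively that nearby speeds can give
    opposite outcomes."""
    return "Navratilova" if math.sin(10_000 * v) > 0 else "Evert Lloyd"

# "Replay" the point in nearby possible worlds: the same nominal speed,
# perturbed by an amount far below anything a player could control.
v0 = 23.0  # nominal ball speed, arbitrary units
replays = [side_of_net(v0 + random.uniform(-1e-3, 1e-3)) for _ in range(10_000)]

share = replays.count("Navratilova") / len(replays)
print(share)  # roughly 1/2: deterministic in each world, ~50/50 across worlds
```

In any one "world" the probability is 1, as the text says; the subjective 1/2 shows up only when we measure the outcome over the set of nearby possible worlds.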



This case of chance should be distinguished from cases where certain scientists or philosophers claim that there is actual indeterminism—namely, that an event is not determined by the instantaneous state of the world just before the event. I won’t take sides on this issue; the question underlying the whole debate touches upon the interpretation of probability concepts, which are notoriously intricate and subtle.6 In general, quantum physics provides most of the cases of purported indeterminism. In physics, the notion of half-life means that a sample made of certain radioactive atoms such as U235—which decay spontaneously—will lose half of its atoms after a period T. This T characterizes the species of atom. However, in regard to one particular atom, it is not possible to say when exactly it will decay and emit a particle; and this impossibility does not depend upon our knowledge but is instead radically grounded in the nature of the atom (at least according to many physicists). Here, there is no reason for a given atom to decay at one moment rather than another. Thus, this is a case of pure objective chance, unlike all my previous examples, where chance concerns our appreciation of events which in themselves have reasons and causes. We should also note that ascribing chance requires us to specifically characterize the events in question: “this atom decaying at 5:46 a.m.” is a chance event; but the quantity of atoms left after T—and thus the fact that after T a certain number of atoms have decayed—is fully determined. And this holds not only for quantum phenomena but also for ordinary ones: suppose that Serena Williams, ranked number one in the world, beats some player ranked fifty-fourth by a final score of 6–3, 5–7, 6–2. She lost the second set because of the same net scenario described above.
One would say that losing this set, and hence the final score of the match, was bad luck and thus due to chance; but the final result—Serena’s win—is not chance (given her ranking and that of her adversary, Serena’s victory was expected). This means that chance can answer a question about why the final score was what it was; but the respective skills of Serena and her competitor are the true explanation of the overall outcome of the match. Chance or no chance: it depends upon precisely what one considers.
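The half-life point a few paragraphs up — no saying when any single atom will decay, yet the aggregate after T is (almost) fixed — can be mimicked with a small simulation (a sketch with made-up numbers, in which seeded pseudo-randomness stands in for quantum chance):

```python
import random

random.seed(1)

HALF_LIFE = 100  # in time steps; the per-step survival chance is set to match
p_survive_step = 0.5 ** (1 / HALF_LIFE)

N = 20_000  # initial number of atoms in the sample
decay_times = []
for _ in range(N):
    t = 0
    while random.random() < p_survive_step:  # the atom survives this step
        t += 1
    decay_times.append(t)  # the step at which this atom finally decayed

remaining_at_T = sum(1 for t in decay_times if t >= HALF_LIFE)
print(remaining_at_T / N)                  # ≈ 0.5: the aggregate is stable
print(min(decay_times), max(decay_times))  # individual times scatter wildly
```

No run of the loop tells us *why* a given atom decayed at step 3 rather than step 300; yet the fraction surviving past T barely moves from one half, which is exactly the contrast the text draws between the chancy individual event and the determined aggregate.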



As a result of this short inquiry into our ways of talking, we see that “chance” covers two conflicting kinds of events: low probability events and half-probability events. This indicates a lack of internal consistency in the notion. To some extent there is, however, a strong connection between this weakly robust notion of chance and the ideas I considered above—such as the notion of structuring versus triggering causes, and the idea of contingency. Contingency—when understood in terms of possible worlds and the size of subsets of the set of possible worlds that include or do not include a focal event—matches those notions of chance explained either in terms of low probability or of half-probability, since these two measures can be mapped onto measures of subsets of possible worlds. And when I think in terms of structuring causes, I can translate chance talk into causal talk in the following way: saying that top-ranked Serena won against our imaginary fifty-fourth-ranked player means that there were structuring causes, mirrored by the WTA ranking, such that Serena had to win. But the score itself must be explained by an appeal to triggering causes (for example, during the second set, Serena felt a little tired, or stressed by the prospect of winning her nth Grand Slam title and breaking Martina Navratilova’s record; or any other small event that would trigger her slightly less efficient first serve and the almost equal number of games won by each player in this set). In contrast, think of Serena playing against me: not only would the structuring causes explain her victory; those same causes, which include our relative skills in tennis, would also explain the final score (a crushing and swift 6–0, 6–0 defeat).
Chance explanations refer to the need to place triggering causes in the foreground of an explanation, and indicate a specific setting of possible worlds that features some degree of contingency. And there is even a way to integrate these chance explanations into science, which I will now illustrate with evolutionary biology. We saw in chapter 4 that the intensity of natural selection also depends upon the size of the population where it occurs, and is inversely proportional to this size. The population geneticist Sewall Wright in 1932 (cited in that chapter) coined the term “random genetic drift” to name this effect, which is in essence an effect of “stochasticity” (or chance): the smaller the population, the higher the chances that the traits that should be selected (due to the higher chances of reproduction they confer on their bearers) are not selected. This occurs exactly as in a coin toss: if you toss a coin three times, you may well end up with “heads” all three times; but if you toss it 1,000,000 times, you will almost certainly end up with two nearly equal series of heads and tails. This is called the law of large numbers, a law of probability theory demonstrated in the eighteenth century by Bernoulli. The objective chance of falling on heads or on tails is 1/2,7 but this does not determine the respective frequencies in small series: random genetic drift is about these latter frequencies. One should also notice here that random genetic drift comes in two modes. When two traits with distinct chances to reproduce (named fitnesses, and written w and w′) are different, drift means that the trait with w  4 and any polynomial P of degree n, there is no general solution to P(x) = 0, as is the case with those general solutions for ax^2 + bx + c = 0 that one learns in high school. It is a fundamental theorem for algebra in general. Regarding Gödel’s incompleteness theorem, see later in chapter 9 (and note 8 below).

2. Porphyry edited and published the work of his teacher, the major Neoplatonic philosopher Plotinus, who advocated an extreme version of Plato’s transcendence of Ideas. Later, Porphyry authored a treatise on universals that is a seminal text for the question of the existence of references of general terms that pervades the history of metaphysics.

3. Here we meet the discussion of me as a poached egg by David Lewis; I am just alluding to it. My question on the limits of “why” does not need an elaborated theory of essences and counterfactuals.

4. The philosophical notion of “stereotype” has been put forth by Hilary Putnam in the context of his theory of reference and his critique of the idea that the meaning of a word is a set of properties that are somehow in the mind of the speaker; “stereotypes” are bundles of properties that allow us to recognize things and name them; they are socially elaborated. See Putnam, Mind, Language and Reality: Philosophical Papers (Cambridge: Cambridge University Press, 1975).

5. In Death: Perspectives from the Philosophy of Biology (London: Palgrave, 2022), I extensively analyze the evolutionary theories that provide an answer to “why we die.” The theory I just mentioned was presented by the Nobel Prize–winning immunologist Sir Peter Medawar; see his The Uniqueness of the Individual (London: Routledge, 1957). Another influential version was developed by George Williams, who showed that genes that are detrimental to organisms late in life might nonetheless be favored over genes that extend lifespan, when they also provide an advantage in terms of reproduction earlier on; Williams, “Pleiotropy, Natural Selection, and the Evolution of Senescence,” Evolution 11, no. 4 (1957): 398–411. This is called antagonistic pleiotropy; and I have explored the epistemology of the validation of these two hypotheses in detail.

6.
Of course, I know that many, if not most, historians of science after Thomas Kuhn (The Structure of Scientific Revolutions, 1962) reject the idea that science advances along a progressive and continuous pathway that reaches better and better theories. For my purpose, it is enough to notice that—­even though there might be no absolute progress in physics—­the metaphysical views of Aristotle, Kant, and Quine can’t be ordered in the same pseudo-­progressive way that textbooks use when presenting physical theories. 7. The questions concerning the nature of real numbers represent a deep issue in mathematics and the philosophy thereof. Richard Dedekind famously proposed a view of how real numbers are constructed in Was sind und sollen die Zahlen? [What are numbers and what should they be?] (1909), a landmark text that sees real numbers as the limits of infinite series—­which he calls “cuts.” It is one among many competing views of this question, which is here left aside. 8. A useful and clear presentation of this history, including set theory and
Gödel’s disturbing results, is provided by Morris Kline, Mathematics: The Loss of Certainty (New York: Oxford University Press, 1980). 9. Arithmetics is defined here by the Zermelo-­Fraenkel axioms (after the two mathematicians who developed them); but there are alternative formulations of this set of axioms—­especially the one provided by Bertrand Russell and Alfred North Whitehead in their seminal Principia Mathematica (1921), as well as an earlier one from the Italian mathematician Giuseppe Peano. 10. Of course, adding some axioms could allow the demonstration of the consistency of arithmetic; however, this new arithmetic would itself have to be proven consistent, which would require other axioms, and so on infinitely. Especially, one can take an undecidable proposition as such axiom: then this undecidable proposition will of course be true, but, according to the incompleteness theorem, in this new system of arithmetic, other undecidable propositions will necessarily arise. 11. An interesting conception of this relation between Newton’s and Einstein’s physics, the former being a particular case of the latter, is given by Thomas Nickles in “Two Concepts of Intertheoretic Reduction,” Journal of Philosophy 70 (1973): 181–­201. Nickles originally interprets this relation as one of the major kinds of reduction in science. 12. Craig Callender, What Makes Time Special? (New York: Oxford University Press, 2017), and Callender, ed., Handbook of the Philosophy of Time (Oxford: Oxford University Press, 2018), are good starting points for addressing this question. One can also see Sam Baron and Kristie Miller, An Introduction to the Philosophy of Time (Cambridge: Polity Press, 2018), for a solution that sees time as nonfundamental. 13. W. V. Quine, “Main Trends in Recent Philosophy: Two Dogmas of Empiricism,” Philosophical Review 60, no. 1 (1951): 20–­43. 14. 
As I mentioned in the introduction, I leave aside issues concerning the relations between semantics and pragmatics, especially the thesis that pragmatics does not ultimately differ from semantics, so that the meaning of sentences is always wholly determined by context. On such a view, for any sentence there exists a context of interlocution within which it means something for the speakers. In this all-pragmatics account, all the questions that appear to us as limits to "why?" can eventually receive an answer. 15. With all the caveats this word carries in science: truths are falsifiable and provisional; scientists may commit to distinct epistemic goals, hence to distinct criteria of truth; hypotheses, as Popper argued, can never be confirmed but only falsified; and so on. But metaphysics in principle can't hope for this
kind of truth (in the sense of a corroborated statement that is provisionally accepted close to unanimously; think of climate change or evolution, for instance). 16. This discussion prolongs the examination of the role of epistemic values in science that I developed in chapter 2 on the basis of Levins's classic paper on the strategy of model building. A key element in all these discussions is the fact that, even though no empirical evidence can be invoked to settle the question, there are still arguments to justify the privileged role of one value over the others.
Conclusion: Why "Why"?
1. For Frege, as he explains in his classic 1892 paper "Über Begriff und Gegenstand," translated as "On Concept and Object," Mind 60, no. 238 (1951): 168–80, the concept "horse" in "Jolly Jumper is a horse" constitutes a function applied to something X, such that the value of this function is "true" when X is Jolly Jumper or Bucephalus, and "false" when X is Socrates or Mount Fuji, or anything else that is not a horse. The function "Horse(X)," as I write it, can be applied to anything, whether X exists or not; formal logic uses a separate sign, the existential quantifier, to express that X exists. 2. The logic of "ground" has recently come to the fore among logicians and philosophers of logic, especially after seminal papers by Kit Fine, for example, "Some Puzzles of Ground," Notre Dame Journal of Formal Logic 51, no. 1 (2010): 97–118. For a formal treatment, see Francesca Poggiolesi, "On Defining the Notion of Complete and Immediate Formal Grounding," Synthese 193, no. 10 (2016); for a set of approaches, see Fabrice Correia and Benjamin Schnieder, eds., Metaphysical Grounding: Understanding the Structure of Reality (Cambridge: Cambridge University Press, 2012); and for a recent overview of the topic, see Michael Raven, ed., The Routledge Handbook of Metaphysical Grounding (New York: Routledge, 2020).
In this conclusion, I make explicit that the overall argument of this book can be seen as a contribution to the current renewed reflection on "ground" (I say "renewed" because ground is obviously what philosophers like Leibniz were after), a reflection that has addressed my question of the plurality of "reason" from another perspective; see, for instance, Selim Berker, "The Unity of Grounding," Mind 127, no. 507 (2018): 729–77. 3. Jean-Luc Marion authored an important commentary on the scientific methodology set out by Descartes in his treatise Regulae ad directionem ingenii; see Sur l'Ontologie Grise de Descartes: Science Cartésienne et Savoir Aristotélicien dans les Regulae (Paris: Vrin, 1975), translated as Descartes's Grey Ontology: Cartesian Science and Aristotelian Thought in the Regulae (South Bend, IN: St. Augustine's Press, 2022). Among an abundant literature on Cartesian mathematics and science, one can
read the classic work by Jules Vuillemin, Mathématiques et Métaphysique chez Descartes (Paris: Presses Universitaires de France, 1960), or, more recently, Daniel Garber, Descartes' Metaphysical Physics (Chicago: University of Chicago Press, 1992). 4. I developed this idea in "Natural Sciences," in The Cambridge History of Philosophy in the Nineteenth Century, 1790–1870, ed. Allen W. Wood and Songsuk Susan Hahn, 201–38 (Cambridge: Cambridge University Press, 2011). 5. This thesis, defended by Paul Horwich, is called deflationism. It implies that nothing more than p is asserted when someone says "p is true," which empties the concept of truth of any substantial content (hence the name). Deflationism is a major concern for epistemologists; see Paul Horwich, Truth (Oxford: Clarendon Press, 1998). However, most of my arguments do not need to rely on deflationism, so I will not discuss this claim directly here. 6. Spinoza uses this phrase pervasively in his Ethics, while Descartes uses it in his replies to the Second Objections to the Meditations. 7. Many scholarly works have been devoted to the idea of "causa sive ratio" and to this proposition 2.7 by Spinoza. Among them, see Vincent Carraud, Causa sive ratio: la raison de la cause, de Suarez à Leibniz (Paris: PUF, 2006). 8. "Dialectics" for Kant means the logic of the illusions into which reason inevitably falls because of its striving toward the "unconditioned." He uses the word in a very specific way, different both from Plato's understanding of dialectics, which has to do with dialogue, and from Hegel's (and then Marx's) conception of dialectics as a process through which something, or some meaning, is negated, transformed, and reasserted. Commentators have emphasized the way Kant's idea was understood and transformed by Hegel, as well as the roots of Kant's concept, but this does not concern us here. 9.
Among a voluminous literature, one can read Victoria Wike, Kant's Antinomies of Reason: Their Origin and Their Resolution (Lanham, MD: University Press of America, 1982), and Michelle Grier, Kant's Doctrine of Transcendental Illusion (Cambridge: Cambridge University Press, 2001), on the transcendental dialectic and these questions. Regarding psychology, notice that Kant here has in mind what he calls "rational psychology," a doctrine mostly devoted to these questions, which disappeared with the rise of the empirical psychology we know today. The question of the materiality of the mind, discussed by current philosophers of mind under the name "physicalism," nonetheless sometimes evokes the debates in which rational psychologists discussed the immateriality of the soul, debates Kant took to be meaningless. 10. On this resistance, see Tania Lombrozo, Andrew Shtulman, and Michael
Weisberg, "The Intelligent Design Controversy: Lessons from Psychology and Education," Trends in Cognitive Sciences 10, no. 2 (2006): 56–57. 11. In Schopenhauer's first work, written before his masterpiece The World as Will and Representation, namely his dissertation On the Fourfold Root of the Principle of Sufficient Reason. 12. Kant breaks with this major metaphysical assumption: one of his major precritical works, known to be an important milestone on his path toward the Critiques and the critical philosophy in general, is the Attempt to Introduce the Concept of Negative Magnitudes into Philosophy (1763), where he argues that negativity is not a mere lack of reality. The consequences of this view are numerous; among them stands, obviously, the impossibility of an optimistic justification of the existence of the world such as the one Leibniz favors. 13. This exteriority between the subject of the proposition (Mbappé) and what is asserted about it (the trajectory of the ball) defines what Kant calls a "synthetic judgment," as in his question "How are synthetic a priori judgments possible?," the fundamental question of his transcendental philosophy. 14. Lewis, Philosophical Papers, vol. II (Oxford: Oxford University Press, 1986), ix. The Humean is clearly deflationist, skeptical of everything that seems to go beyond events happening here and now; the Aristotelian is more inflationist, with an ontology admitting much more than events. The present book stands between the two. 15. Philosophers of science since at least Quine and Duhem have doubted that any experiment suffices to decide between rival theories; nonetheless, the case of rival theories in relation to experience is obviously different in philosophy, since it is often hard even to imagine what the world would look like under a different metaphysics. 16. See, for example, Leibniz's Correspondence with Arnauld and his Discours de Métaphysique (1686).
On Leibniz’s ideas of individuality and complete notions, see Hide Ishiguro, Leibniz’s Philosophy of Logic and Language (Cambridge: Cambridge University Press, 1972); and among recent work see Justin Smith, Divine Machines: Leibniz and the Sciences of Life (Princeton: Princeton University Press, 2011).