Casualties of Causality

This book offers a critique of the present status of the concept of causality in the social sciences.


English · 109 pages · 2022


Table of contents:
Acknowledgement
Contents
Chapter 1: The Causality Syndrome
The Causality Syndrome Unpacked: System 1 and System 2
The Institutionalization of The Causality Syndrome
Create More Space
References
Chapter 2: Twenty-five Questions
Q1. Is Causality a Useful Concept in Social Science?
Q2. Is Causation the Most Important and Honorable Task in the Social Sciences?
Q3. Are All Great Social Scientists Famous for Their Causal Analysis?
Q4. Is Causality Only One Thing?
Q5. Can You Only Ask One Type of Question About Causality?
Q6. Is Methodology Prior to Paradigms?
Q7. Do Methodological Rules Precede Scientific Practice?
Q8. Is Scientific Progress a Result of Compliance with Methodological Rules?
Q9. Will Social Science Cleanse Itself of Ideology and Normativity, if it Restricts Itself to Causal Analysis?
Q10. Does Causation Always Require a Counterfactual?
Q11. If You Compare Two Groups, Is it then Less Important What the Comparison Is About?
Q12. Does Reciprocal Causality Mean You Have Not Nailed Genuine Causality?
Q13. Does the Quality of a Study Depend on Its Place in a Hierarchy of Evidence?
Q14. Is the Randomized Controlled Trial a Clincher, and All Other Kinds of Studies Just Vouchers?
Q15. Is a Study Better, the More Control You Have over the Situation?
Q16. Will Causal Knowledge Accumulate Over Time?
Q17. Does the Evidence Hierarchy Only Produce Knowledge?
Q18. Are the Rules for Causal Inference the Same Regardless of the Practical Situation?
Q19. Can You Sell Your Study by Pretending that Its Design Is Better than it Actually Is?
Q20. Is Your Career in Jeopardy, if You Do Not Comply with The Causality Syndrome?
Q21. Are People Primarily Interested in Outcomes?
Q22. Do We Spend Most of Our Lives Thinking About the Causal Net Effect of X on Y?
Q23. If We Focus on Demonstrable Social Impact, Will We then Maximize the Impact of Social Science?
Q24. Are Methods Ways to Find Out About Things, But Not Ways to Influence Things?
Q25. Is the Time Right for Causal Studies?
References
Chapter 3: Casualties of Causality and Paths to the Future
Casualties of Causality
Three Paths to the Future
Final Word
References
Index

Casualties of Causality

Peter Dahler-Larsen
Political Science, University of Copenhagen
Copenhagen, Denmark

ISBN 978-3-031-18245-7    ISBN 978-3-031-18246-4 (eBook)
https://doi.org/10.1007/978-3-031-18246-4

© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: Pattern © John Rawsterne/patternhead.com

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Acknowledgement

I want to thank Casper Sylvest, a fine colleague and friend, for our collaboration on an article in Danish on a similar topic (Dahler-Larsen & Sylvest, 2013). Without the standpoint that we developed together, the elaborations and further perspectives depicted in the present manuscript would not have been possible.


Contents

1 The Causality Syndrome
2 Twenty-five Questions
3 Casualties of Causality and Paths to the Future
Index


CHAPTER 1

The Causality Syndrome

Abstract  The present dominance of The Causality Syndrome has considerable downsides and presents a challenge to social science. This syndrome consists of a belief in causal studies as more important than other studies, a narrow definition of causality, and formulaic rules of thumb regarding how to make causal claims. The concept of causality as such is not the problem. When social scientists discuss causality in a careful way, there are multiple schools of thought and controversy between the positions. However, when fast and less reflexive thinking reigns, then monism, usurpation, simplification and institutionalization lead to the present dominance of The Causality Syndrome.

Keywords  Causality • The Causality Syndrome • Evidence • Evidence hierarchy • Monism

We believe ourselves to understand things first when we have reduced them to what we do not understand and cannot understand—to causality, axioms, God, character.
—Georg Simmel, cited in Swedberg and Reich (2010)

What is your independent variable?
—A colleague talking to Professor Zimbardo, the leader of the Stanford prison experiment



A large part of social science is currently on the wrong track. It is suffering from a particular set of ideas which create intellectual paralysis. These ideas are about causality, how to understand causality, and how to design studies aimed at producing causal claims. My concern is that a configuration of ideas and practices related to aspirations about causal claims now occupies an undeservedly central position in social science. I call this configuration of ideas The Causality Syndrome. A syndrome is a set of interrelated problems and symptoms. Causality itself (whatever it might mean) is not the problem. It is the interrelatedness of multiple ideas, conceptions, propensities, and myths which together creates the syndrome.

Consider the following examples. In a recent randomized controlled field trial, face masks were distributed to an intervention group to prevent participants from contracting corona. The study claimed to study the effects upon bearers of the mask. The numbers of corona cases in the intervention group and the control group were compared. No one, not the authors of the study, not the journalists, discussed whether the control group benefitted from interaction with the intervention group, if the two groups lived in the same social settings. Instead, the term "randomization" was used as a magic wand to ward off critique and suggest that the study proved the value of face masks "once and for all." It is assumed that everybody knows that RCTs are found on the "top shelf" of the "evidence hierarchy". Therefore, if that research design is used, all critique is eliminated, and findings allegedly hold true "once and for all."

But the study is born with an inherent problem. Although it claims to study the effects of masks on bearers of masks, it uses people without masks as a control group. If these people interact with people with masks, they also benefit from the masks that members of the intervention group are wearing. Masks may also remind people to keep social distance. To determine the effect of the intervention, there is no other way than to compare the results for the intervention group and the control group. The quality of this design therefore rests not only on randomization, but also on keeping the control group uncontaminated by the intervention until the study reaches its conclusion. To say that the study focuses on the effects of masks on mask-bearers is misleading for the readers of the study, and perhaps even for the researchers themselves, because it conceals how the mask-less members of the control group have already benefitted from masks. If something good is happening in the intervention group, but something good is also happening in the control group, it appears as if there is no effect at all. Unsurprisingly, the study concluded that there was a small but insignificant effect of masks.

However, we live in an environment where people pound the drum of randomization loudly in the hope that all threats to validity are scared away. "Once and for all." What really happens is that the noise from that drum prevents a more subtle and serious discussion from taking place. This is how The Causality Syndrome works.

Another study looked at municipalities which were locked down because of a high incidence of corona cases. To estimate the effect of the lock-down on the number of corona cases, researchers compared the locked-down municipalities with neighboring municipalities in the same region. They argued that since the lock-down was not a result of the municipalities' own decision, the study design (a natural experiment) was as good as a real randomized controlled trial. What they did not explain is that the closing of a municipal border has implications for citizens on both sides of the border. The sick people in municipality A cannot get out, so they cannot infect the healthy people in municipality B. In a similar vein, the healthy people in municipality B cannot get into municipality A, where the infected people live. If you isolate A from B, you also isolate B from A. Logic leads us to expect that a lock-down has consequences for neighboring municipalities, too. However, the phrase "almost as good as a randomized controlled trial" was used to enhance the alleged quality of the study, at the same time as attention was diverted away from an important logical problem, which is that the "intervention" affected both the intervention group and the control group. Unsurprisingly, it was demonstrated that the intervention was not effective. Here, The Causality Syndrome operated again. Randomization was used as a veil to disguise a fundamental logical problem, even though there was no randomization in the study!

Finally, consider a foundation opening a new website with information about new initiatives in social policy. A journalist is hired to communicate about new studies and new findings. The journalist (who is not trained in social science) wants to communicate clearly and directly to readers about the quality of any new study presented on the website. A little shorthand scheme is developed so that any study gets a green, orange or red light, depending on the "quality" of its causal design. It is assumed that it is easy to rank all studies based only on their design features, that all studies are causal studies, and that the quality ranking can be done without any particular insight or training in methodology.
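The arithmetic of such contamination is easy to sketch. The following toy simulation is my own illustration with invented rates, not data from the trial described above; it only shows the direction of the bias:

```python
import numpy as np

# Toy illustration of control-group contamination (all rates invented).
rng = np.random.default_rng(0)
n = 5000                     # participants per arm

p_base = 0.020               # infection risk if nobody around you masks
mask_protects = 0.50         # assumption: masks halve the wearer's own risk
spillover = 0.30             # assumption: controls who mix with masked people
                             # also see their risk cut by 30%

p_treated = p_base * (1 - mask_protects)        # 0.010
p_control_ideal = p_base                        # what the design pretends
p_control_actual = p_base * (1 - spillover)     # 0.014, after contamination

treated = rng.binomial(1, p_treated, n)
control = rng.binomial(1, p_control_actual, n)

estimated = control.mean() - treated.mean()     # what the trial reports
true_vs_no_masks = p_control_ideal - p_treated  # what it claims to measure
print(f"estimated risk reduction: {estimated:.4f}")
print(f"risk reduction against an uncontaminated control: {true_vs_no_masks:.4f}")
```

Under these invented numbers the reported difference is less than half of the contrast the study claims to measure, and with a few thousand participants per arm it can easily fail to reach statistical significance. Randomization is intact throughout; it is the contaminated comparison, not the random assignment, that produces the "no effect" conclusion. The same logic applies, with the signs rearranged, to the lock-down example.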


Again, these incidents are made possible by ideas anchored in what I call The Causality Syndrome. The Causality Syndrome supports little simplified slogans like "You can easily determine the quality of a study if you know how its design is placed in the hierarchy of evidence." "Randomization eliminates all threats to validity." "An RCT clinches all discussions." "If you don't have a control group, you can't say anything." "If you are doing a descriptive study, it will never get published."

One of the casualties of The Causality Syndrome is that knowledge that does not come in causal form is under-prioritized. Problems are constituted as objects of study depending on how well they lend themselves to a particular kind of causal analysis. A superficial reference to particular methodological rules is used as symbolic protection even against logical problems inherent in the causal studies themselves. In the worst instances, one of the casualties of The Causality Syndrome is truth itself. The desire for the optimal causal design makes researchers omit details which stand in the way of these designs, even if it leads to misrepresentation of findings.

In a broader perspective, The Causality Syndrome positions human beings as machines which transform independent variables into dependent variables. Methodologies aimed at producing causal claims influence what to study and how to think about it. The methodological cart is now pulling the substantive horse. A world view is at stake. The understanding of what it means to be human is at stake. Research integrity is at stake. The casualties of causality are considerable.

* * *

You are reading a Palgrave Pivot. This special format is longer than an article and shorter than a book. I present my argument in an essay, which is more succinct, pointillistic and provocative than what you find in a usual academic book. You can expect a text with fewer references per page than a journal article, which also gives me space for a more personal voice.

* * *

When I claim that a large part of social science is on the wrong track due to its fascination with causal analysis, I do not advocate an anti-scientific position. The ambition is rather to save social science from scientism, by which I mean an exaggerated self-confidence and trust in one aspect of science (methodology) as a judge presiding over all forms of knowledge. Scientism is a glue that makes The Causality Syndrome stick to social science.

Do I not fear anti-scientific sentiments, fake news, and conspiracy theories? Yes, these are truly problematic. But their roots lie in the structure of media, political ideologies, social frustration, psychodynamics, and many other things (Hendricks & Vestergaard, 2017). The problem is not that science does not have enough influence in society. It would not help to make scientism king. A war between authoritative, indisputable fact on the one hand and fake news on the other is too stupid, and not a war into which I shall be sending my sons and daughters. Too many silly casualties would come out of that. I am not going to march with a T-shirt saying "defend science." I am also not going to arrange a cookie sale to fund it. What is the next thing I should do in defense of science? Folk dance? It is not going to happen.

If we want to defend scientific thinking (not scientism), we must acknowledge that science also includes a well-reflected critical view on the sciences, including the role and legitimacy of the social sciences in their present form. If we defend scientific thinking, we defend doubt and controversy. With Vattimo, I think scientific thinking is best when it is weak, when it recognizes its own historicity, its limitations in relation to all practices and forms of knowledge at play in human existence. We do not have to buy a methodology before we start thinking. We do not have to legislate in advance about which of our practices should help us most in a given situation; we can find that out when we are in that situation. If that situation is our life, we have to negotiate what role social science should play in our existence, even if social science in its present form pretends that this is not a relevant issue.

Luckily, philosophy (in the vein of Vattimo), the sociology of knowledge and even science and technology studies are now showing that social science, too, is contingent upon hundreds of historical, institutional, and financial circumstances. Behind every scientific insight is a hinterland of practicalities and narratives (Law, 2004). Of course, something so fragile and contingent should not be our self-appointed king. At the same time, it should be recognized that fragile and contingent knowledge is usually what we have (Latour, 2003). Today, critical reflection about social science is a part of social science itself. A critical reflection in the social sciences, and one that questions the role of causality, is only dangerous for scientism. For the rest of us, it is a blessing.

* * *

A critique of scientism is not new. Aristotle already paved the way, arguing that there is a kind of practical wisdom, phronesis, which is different from technical knowledge. The critique of modern scientism is articulated by people like Heidegger, Merleau-Ponty, Bachelard, Luckmann, Feyerabend and others. Simply put, I believe they argue that existence is complicated, and science has grasped so little of what it means to be a human being. And science is not founded on a God's eye perspective; it is instead a fragile and capricious endeavor which rests as much upon meaning-making and narrative as the rest of our activities, situated as we are in society, culture, institutions, and history. I could also have mentioned Foucault and the idea that knowledge is not just knowledge. It is also power. But I prefer Vattimo for his historicization of knowledge, and Becker for his critical discussion of empiricism and, in evaluation studies, Schwandt for his revival of the notion of practice. The weight of these intellectual giants together fills me with awe.

My modest contribution is to point out how much scientism permeates social science today through the notion of causality and how much the worship of causality fills the everyday lives of many researchers in social science. It influences how they think, how they make an argument, how they choose a topic for their studies, how they get funded, how they publish, and how they think they make a contribution. The aim of this essay is to unpack the undergirding ideas about causality and the casualties that follow. I wish someone had written a text like this and presented it to me during my younger years, when I struggled with how to make a contribution that was clearly not "appropriate" in a research environment dominated by notions of causality (and their casualties). So, now that I, against the odds, enjoy the privilege of tenure at a good university (!), I dedicate this text to younger people, in the hope that they find their own way of dealing with ideas about causality which are still playing a large role, and increasingly often a dominant one.

In the core of the book, I will discuss twenty-five questions about causality and its status in social science. I ask these questions not on the level of philosophy of science, but rather on the level of discussions of problems and methods that social scientists encounter in their daily life. I also offer my answer to each of these questions. You do not have to agree with each and every one of them. It is my hope, however, that by considering these questions, we can redefine our positions vis-à-vis The Causality Syndrome. The Causality Syndrome in fact reflects scientism, not science.

* * *

My essay may sound like a broadside. So let me be clear about what I am not attacking.

I am not attacking particular researchers. With very few exceptions, most of us are doing our best under the circumstances. Many of my good intellectual friends are solid, clever and honest people who just happen to be interested in unpacking causal mysteries.

I am also not attacking particular studies or research projects. In the following, I shall therefore refer more to types of studies (without references) than to specific studies (with references). I apologize for this deviation from conventional academic practice. My motivation is to discuss principled questions, not to get entangled in methodological hair-splitting over individual researchers' work. That would really derail the whole discussion.

I am not attacking particular paradigms as defined in the philosophy of science textbooks. Only in textbooks do paradigms live as if they rule the world. In practice, paradigms do not exist as independent and well-defined masters who control researchers. At best, paradigms are merely small parts of the mental models used by researchers to make decisions about their research in their daily lives (Greene, 2007). Mental models also include personal experiences and inclinations, gut feelings, career concerns, and lots of practical concerns about time, money, and recognition. In the complex, situated reality of everyday social science, I am more worried about taken-for-granted ideas and institutional pressures than I am about formal paradigms. One of the most beautiful texts about how to be humble in social science in general, and in causal analysis in particular, was written by folks whose "paradigm" I usually do not consider my own (Cook & Campbell, 1986). I am full of respect for their careful thinking within their own paradigm. The paradigm itself, if it even exists, is not the problem.

The concept of causality itself is also not a big problem, as long as we allow it to be interpreted carefully, critically and openly. We should be able to deal with that in social science. Time and space are also problematic. And democracy, nation, gender, community, value, the political, equality, identity, race, class, history, alienation, status, risk, order, crisis, management, participation and sustainability are all essentially contested concepts, so we should have training enough to deal with that. A word becomes a concept as soon as it carries social-historical baggage, as soon as it carries experiences and expectations, as soon as there are struggles about its meanings and implications (Koselleck, 2007). So, I have no problem with the word causality, as long as we remember to treat it as an essentially contested concept (Koselleck, 2007). We should not treat causality as a methodological antecedent to our work. Instead, we should treat it as something contested, something that people can legitimately disagree about before, during, and after the research process.

So, what is the problem? It can best be explained by The Story of the Magic Porridge Pot.

Once there was a good little girl who lived alone with her mother in a small house at the edge of a little village near a big forest. They were poor and had nothing left to eat. One day the little girl went into the forest to look for mushrooms for dinner. In the forest she met a kind old woman who gave her a little cooking pot. The old woman told the girl that it was a magic pot that would cook porridge whenever it heard the words 'Cook, little pot, cook'. When there was enough porridge in the pot, the words 'Enough, little pot, thank you' would make the pot stop cooking porridge. The little girl took the magic pot home. After this she and her mother were no longer hungry, because they ate porridge as often as they wished. One day the little girl went out for the day to visit her grandmother in the next village. When she had gone, her mother decided she wanted a bowl of porridge. She said, 'Cook, little pot, cook', and the pot started to cook porridge. The pot filled with porridge and the woman wanted to stop it cooking any more porridge, but she had forgotten the words. She said, 'No more now, little pot', but it kept making porridge. She said, 'That's it, little pot, stop', but it kept making porridge. It made so much that the porridge started to overflow from the pot and spill onto the stove. 'Stop it!' she cried, but the porridge overflowed onto the floor and filled the kitchen. 'No more porridge! Stop!' cried the woman, but it did not stop. Porridge poured out into the street and into the next house. Then it poured through every street in the village and no one knew how to stop it. People came with buckets and pots to scoop up the porridge, but as fast as they did, more porridge filled the streets. At last, the little girl came back into the village and shouted 'Enough, little pot, thank you!' The pot stopped cooking. But anyone who wanted to get across the village that day had to eat their way there.1

1. https://www.kidcyber.com.au/the-magic-porridge-pot

The problem is not the porridge itself. Originally, it served a purpose. The problem is that the formula to stop cooking porridge has been forgotten. In a similar vein, a large part of mainstream social science as it is found in practice has forgotten how to say "Enough, little causality, thank you." Now everybody who wants to get across the department of social science has to eat their way through the causal porridge. You also have to deal with the overflow from the causality pot if you get into consultancy, philanthropy, evaluation, social work, and more. But this is just a metaphor of the situation.

* * *

The Causality Syndrome Unpacked: System 1 and System 2

The porridge metaphor has clear limitations. A more subtle analytical perspective is needed. Of course, there are many scholars in social science who carry out causal analysis with the utmost attention to detail. They have full respect for the complexities involved in causal inference, given the complexity of the world. So let me pay respect to their work and still problematize The Causality Syndrome.

To do so, I distinguish between two ways of thinking, System 1 and System 2. This distinction is very much inspired by Kahneman's analysis of thinking, fast and slow. System 1 operates more or less automatically and quickly, with little or no effort. System 2 allocates attention to effortful mental activities, including complex considerations. Operations of System 2 are usually associated with the subjective experiences of agency, choice, and concentration. Kahneman invites us to think of the two systems as agents with their individual abilities, limitations, and functions (Kahneman, 2011: 20–21). In the way I apply System 1 and System 2 to the argument in this book (for which Kahneman cannot be held responsible), I think of these two systems as ways of thinking about causality. The different ways of thinking are found in different social circles and have different institutional foundations.


The Causality Syndrome obeys the logic of System 1. Quick decisions about the quality of a research proposal or the credibility of findings are made based on simple heuristics such as the evidence hierarchy or rules like "without a control group, you cannot say anything" or "a causal study is better than a descriptive study". Many serious social scientists engage in causal thinking in ways that are much more similar to System 2. Their contributions are characterized by pluralism and controversy. Since they involve eternal and unresolved problems, and since my real concern and focus relates to The Causality Syndrome in System 1, I shall give only a brief sketch of the contours of causality as represented in System 2.

The debate about causality in social science takes place between different schools of thought (Sandahl & Petersson, 2016). One school of thought claims that causation in social science is similar to the identification of natural laws. This view leads to several problems. For example, identification of a "law" may just represent an empirical regularity, not necessarily an understanding of the nature of the underlying causal link. Furthermore, in social science, each of these "laws" is likely to be verifiable only as a statistical association, not a deterministic consequence. By implication, it is important to study the contextual conditions under which the "law" holds true or not. If we are to understand the law-like relationships between many social variables, and their contextual contingencies, we need an astronomical number of laws.

Another school of thought defines causality in terms of probabilities, i.e., the likelihood that Y will change in a particular way as a result of a change in X. This school of thought is also marred by problems. First, it is necessary to exempt a number of instances that are otherwise covered by the definition, such as the probability that if today is Wednesday, then tomorrow will be Thursday, which is normally thought of as a result of a convention, not a causal link. Furthermore, people can compensate for the effects of a variable on another variable, e.g. by doing less exercise when they stop smoking, which may annihilate the effect of smoking on heart disease (Sandahl & Petersson, 2016: 49). If I am told that research has shown that my genes affect my political preferences, I will change these preferences simply to disprove the theory. The apologetic researcher would then argue about "the situation as it was before you reacted." The determination of the causal effect therefore very much depends on definitions of situations and how the scheme of probabilities is set up. And again, determining the probability under a given set of circumstances is not the same as understanding the link between the cause and the effect.

Building on the idea that it is the variation in X that allows us to talk about the effects of X, some see counterfactuals as keys to causation. According to this school of thought, a situation with non-X is essential to determining the effect of X on Y. An ambiguity rests with whether a "counterfactual" has to be established empirically (in which case it is actually not contrary to facts) or whether a thought experiment suffices. However, in both situations, it is critically important to determine exactly which situation with non-X we are talking about. If, for example, Bob admits to having shot Joe, but argues that it made no difference since, had he not done it, his brother Bill would have shot Joe, we would usually not accept that as a legitimate standard of comparison. We would still hold Bob accountable for causing Joe's death. With this example, we are back to a situation where the exact definition of situations plays an important role for causal thinking. Causation does not rest only on empirical work. The exact way in which the counterfactual is constructed or designed (in real life or in our minds) plays a decisive role.

The next school of thought assumes that manipulability is crucial for causality. It is our ability to manipulate X that allows us to see variations in Y that are due to our change in X. This school of thought is said to restrict our causal analyses to situations where manipulation is possible (Sandahl & Petersson, 2016: 87). This is unfortunate for philosophical, political, and practical reasons. There are situations where we believe in a cause (such as the sun providing light and warmth) even if we cannot manipulate it. It also seems strongly counterintuitive that if we cannot stop a volcano from erupting, then the eruption cannot be the cause of our death. We can also be restricted in the manipulation of a cause for political, institutional or practical reasons. For example, if we are in prison, we may not be able to choose our own food, but we still know that food influences nutrition. The rule of manipulability seems counterintuitive. Some therefore argue that it is enough to conduct a thought experiment where X is manipulated. However, how do we know which "causes" to manipulate, unless we already have a sense of causality (Sandahl & Petersson, 2016: 85)? If our understanding of causality hinges on our imagination, then we can become mental slaves if we live in a socially constructed reality that we think is naturally given. In that case it is really problematic to let our concept of causality depend on whether we believe we can change things. (On the other hand, it shows that causal thinking does not cleanse itself of ideology, contrary to what is often assumed.)

A further peculiarity of both the counterfactual model and the manipulability model is that our acceptance of a given phenomenon P in a given situation as causal seems to rest on whether we can find a (manipulable) counterfactual in another situation. But the P in the situation remains the same. It is remarkable that we think of the same situation in two different ways depending on what happens in another situation.

An entirely different school of thought takes another position. Whereas the models of causality focusing on probability, counterfactuals and manipulability are all variance-based, since they insist on variation in X as critical, if not defining, for the understanding of X as the cause of something, this school of thought is process-based. A process-based approach looks at how X influences Y, at the factors which need to be in place for that to happen, and at the traces that X leaves along the way. A key notion here might be generative mechanisms. Without an understanding of the underlying mechanisms which "trigger" the relation between X and Y, a mere look at statistical associations between the two may be misleading (Pawson & Tilley, 1997). Control of X does not help you if X needs to interact with contextual mechanisms to produce an effect. By implication, the process-based camp is in opposition to the idea of an evidence hierarchy where the RCT is positioned at the top. Not surprisingly, the process-oriented position is also not free from discussion. For example, it is unclear whether "mechanism" refers to what really works or to our theory about what works. It is also unclear whether there is a limited number of mechanisms in this world, and whether it is metaphysically problematic to assume that they are "underlying" or "stand behind" what is observable (Sandahl & Petersson, 2016: 61–74). Even if the belief in "generative mechanisms" claims to be based on a "realist" (not a constructivist) point of view, this realist observer seems to be in a very peculiar, privileged position, since he or she can "see" the "underlying" mechanisms that others cannot see. Ambiguity remains as to what exactly constitutes proof of a mechanism.

The list of schools of thought is not complete. Some might ask, for instance, how the "qualitative" researchers position themselves. I see at least two major positions among the qualitative researchers. Some are "interpretivists", others are just "qualitative" (Schwartz-Shea & Yanow, 2012).


The first group, often informed by interpretive social science, hermeneutics, phenomenology or poststructuralism, believes that a study of the human construction of meaning is simply incompatible with the idea of causal regularities (Bevir & Blakely, 2018). This is because human sense-making is situated, historical, reflexive, and modifiable in a way that does not comply with ahistorical regularities. Some insist that the construction of meaning is not an epiphenomenon: it is not a "cause", not an "effect"; it is instead constitutive of social reality (Castoriadis, 1997; Geertz, 1973). Symbolic systems, cultures, languages and concepts structure reality, but these systems are also situated in history (Koselleck, 2007). Causality is not needed.

Another camp among the qualitative researchers argues that it is meaningful to talk about causal links and that causality is too important to be left to others (Maxwell, 2004). More often than not, when qualitative researchers engage in causal analysis, they adopt a process-oriented and/or mechanism-oriented approach.

Let this quick and incomplete overview of positions regarding causality suffice for the time being. I wish to make some cross-cutting remarks. Notice how several of the presented schools of thought define causality in terms of the operations needed to detect causality. Some of them border on circularity. Of course, if we define causality with reference to, say, counterfactuals, then it is no surprise that designs with a counterfactual give privileged access to causation. Something that is difficult to define is being defined in terms of certain ways of knowing about it. One way of saying this is that the epistemological level helps define the ontological level.

We are at a crossroads here where we can take different positions on the positions described above. One position-taking on positions is simply to say that only one school of thought has got it right at both the epistemological and the ontological level. Then it becomes a monist position. This is the one position that knows what causality is and how to capture it. Another way to take positions on positions is to say that the very multiplicity of positions reflects a basic ontological uncertainty. Clever people disagree about it. We can still pragmatically use some of the devices at the epistemological level which are introduced by different schools of thought, even if we do not definitively know what causality is on the ontological level.
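The circularity point can be made concrete with a bit of notation. The following is the standard potential-outcomes formalization used in the counterfactual literature, not the author's own formalism:

```latex
% Potential outcomes for unit i: Y_i(1) if exposed to X, Y_i(0) if not.
% The counterfactual school defines the unit-level causal effect as
\[ \tau_i = Y_i(1) - Y_i(0), \]
% but for any given unit only one of the two outcomes is ever observed.
% What a comparative design actually delivers is a group contrast,
\[ \hat{\tau} = \bar{Y}_{\mathrm{treated}} - \bar{Y}_{\mathrm{control}}, \]
% which stands in for the average of the \tau_i only under the extra
% assumption that the control group mimics the treated group's missing
% counterfactual state.
```

The definition thus rests on a quantity that is unobservable by construction, which is why it slides so easily into being defined by the operations (control groups, randomization) meant to approximate it. The Bob-and-Bill example is, in this notation, a dispute over which state of the world should count as Y_i(0).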


We find ourselves in situations where we have to make sense of the world, but we do not have access to god-like knowledge about how the world is organized from the beginning. Instead, we invent certain ideas and operations that allow us to make sense of the world. We use basic concepts such as time and space and causality as sense-making devices. We try to convince others, but also sometimes to learn from them. In order to get to something that people will accept as a causal claim, we can use devices such as "laws", "counterfactuals" or "mechanisms", but none of these constitute guarantees about the "real" organization of the world. They are just incomplete and fragile ways in which we try to make our interpretations of the world a bit more solid and acceptable.

A parable comes to mind. Let's compare schools of thought regarding causality with political ideologies. One way of identifying with a political ideology is to believe that all of society should be modelled after that ideology as the only one (a monist belief). Another view is that several ideologies in a society are legitimate. From situation to situation we can make reasonable political decisions that take into account the views represented by the different ideologies. We can respect an ideology and make concessions to it without allowing it to become totalitarian. In a similar vein, we can deal with schools of thought regarding causality. We can call such a position on positions "pragmatic." It does not mean "opportunistic", it does not mean that your peers will let you get away with anything; it just means that your argument and reasoning are situated. The situation sets boundaries for what you can do, see, and claim. Causal claims are not echoes of God's voice; they are merely attempts to make sense of the world under imperfect circumstances.

In practice, I see many researchers in social science having a preference for one school of thought, but they also have a sense of the situation they are in. The situation offers opportunities and restrictions. Sometimes researchers combine a variance-based approach with a theoretical attention to mechanisms, which only strengthens their claim in the situation at hand (Clarke et al., 2014). Sometimes researchers have their starting point in mechanisms, but is the difference between the presence and the absence of a mechanism in fact not a form of variance? Would it not be possible in a large data set to look at variations which theoretically represent the presence or absence of alleged mechanisms (Dahler-Larsen et al., 2020)? A toy numerical version of this point follows the list of distinctions below. Sometimes researchers propose a practice such as contribution analysis, which combines a causal story with checks and balances seemingly borrowed from different schools of thought (Mayne, 2011). Clever causal analysts also use critical thinking and everyday experiences in dialogue with their systematic methods.

We saw the same phenomenon when we reviewed the different positions. We often found that when we follow the logic of one of them, it seems to capture either too much or not enough of what we mean by real causality. For example, we capture "too much" with the notion of probability if we include that Thursday follows Wednesday with an extremely high probability, yet there is no causal effect. We capture "not enough" if we believe in manipulability, and still think that a volcano causes death even if we do not have the power to stop it. So, we in fact use our informal knowledge to discuss, adjust and calibrate our formal definitions of causality. Surprisingly often, even advanced philosophers use everyday examples to convince us that a given scientific school of thought does not perfectly capture what we mean by causality. Researchers do the same in practice. They ask themselves whether they have forgotten an important factor. They ask themselves an extra set of questions if their findings are counterintuitive. They can ask whether independent methods confirm a finding, whether different research groups reach the same conclusions, whether there is theoretical knowledge about mechanisms analogous to what they believe, and whether findings are reproducible across a range of conditions (Clarke et al., 2014). In other words, prescriptions for empirical operations to detect causality do not stand alone. We also use thinking, reasoning, and mental models, and we draw on experiences.

* * *

As said, my account of positions has been schematic. However, it suffices to illustrate the deep-seated pluralism in the debate about causality. Please also notice the many distinctions already made:

–– causal versus constitutive approaches
–– variance-oriented versus process-oriented approaches
–– schools of thought focusing on law/probability/counterfactual/manipulability
–– quantitative/qualitative/interpretive positions
–– positions versus positions on positions
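As promised above, here is a toy numerical version of the variance/mechanism bridge. The data and numbers are invented for illustration; this is my sketch, not a reconstruction of any study cited in the text:

```python
import numpy as np

# X "works" only where a contextual mechanism M is present.
rng = np.random.default_rng(1)
n = 20000
x = rng.integers(0, 2, n)               # treatment (0/1)
m = rng.integers(0, 2, n)               # mechanism present? (0/1)
y = 1.0 * x * m + rng.normal(0, 1, n)   # effect of X exists only when m == 1

# The pooled "net effect of X on Y" averages over contexts:
pooled = y[x == 1].mean() - y[x == 0].mean()

# Stratifying on the mechanism recovers the process story:
eff_without_m = y[(x == 1) & (m == 0)].mean() - y[(x == 0) & (m == 0)].mean()
eff_with_m = y[(x == 1) & (m == 1)].mean() - y[(x == 0) & (m == 1)].mean()

print(f"pooled effect:          {pooled:.2f}")         # about 0.5
print(f"effect where M absent:  {eff_without_m:.2f}")  # about 0.0
print(f"effect where M present: {eff_with_m:.2f}")     # about 1.0
```

A perfectly controlled manipulation of X yields the pooled number, which is true but uninformative about how the effect comes about; coding the mechanism as a variable makes the process question answerable with variance-based tools. This is only a bridge-building sketch, not a claim about any particular study.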


These distinctions are far from overlapping. One distinction does not easily map onto another. The debate is not easy to summarize and remains inconclusive on several accounts. There are multiple schools of thought. Controversy continues, in and between different schools of thought. We can learn from a school of thought without accepting its monist claims. In addition, we can use experiences and critical thinking to adjust our formal models of causality. Methodology is a player in the game, but not king of the world. The creation of knowledge is dynamic and contested. It comes from many sources. This is as it should be in the social sciences.

My problem is not with causality as such, if we allow ourselves the time, energy, effort and deliberation required to discuss it using System 2, where we pay attention to multiple views, nuances and controversies. However, to an increasing extent, ideas about causality manifest themselves in the exact opposite way. When The Causality Syndrome reigns, System 1 makes ideas about causality simple, prescriptive, and formulaic. As if the debate were already over and not needed. How did that happen?

The Institutionalization of The Causality Syndrome

My argument is that monism, usurpation, simplification, and institutionalization have contributed to the present status of The Causality Syndrome. These phenomena provide the conditions of possibility for System 1 to take over (Fig. 1.1).

[Fig. 1.1  Factors contributing to The Causality Syndrome: monism, usurpation, simplification, and institutionalization]

By monism I mean a claim that causality is only one thing, not many. Perhaps surprisingly, a clear definition of this one thing is often not offered, except in very operational terms, such as "the variations in Y which follow from variations in X, all other things being equal", or something to that effect. As you can see, this attempt to define causality is often closely linked to the structure of observations needed to draw causal conclusions, and these observations can usually only be made by controlling some variations in X and then observing the following variations in Y. A good design is one that keeps all other things equal to the extent possible. If the very definition of causality refers to particular designs needed to create the kind of observations which are necessary to draw conclusions about causality (an operational definition), then it also becomes almost impossible to suggest that maybe there could be other ways to find out about causality. It is logically problematic, however, because it presents an ontological foundation for subsequent operations, which are then justified with reference to this "firm" foundation. If those who argue in favor of a particular set of study designs are already sitting on the very definition of causality, then of course, by implication, they will already be correct in saying that only their favored designs allow conclusions about causality. It is in this particular sense that I talk about a "narrow" understanding of causality. Monist closure is achieved by delegating the ontological definition to the epistemological level and then legitimizing the epistemological moves with reference to ontology.

However, as suggested earlier, it is possible to have some sympathy for a particular school of thought, and perhaps benefit from its epistemological recommendations, without necessarily subscribing to that school of thought in a monist way. I can let others talk in a monist way, but I do not have to grant them a monopoly. For example, a good causal researcher can supplement an experimental design where X is manipulated with a theoretical interest in the mechanism which produces the effect, and many would believe that this combination only strengthens his or her causal argument. Causal thinking does not by definition rest on monism, but of course, under some conditions, a claim to monism articulated by a particular school of thought (on the philosophical level) can be used as support for a sort of practical monism in a given social setting. For example: "Here, we only fund RCT-based studies." Those who make that claim may, of course, find some support in one of the principled, philosophical positions regarding causality. However, if we take into account the pluralism and the controversies found in general in the debate about causality in social science (briefly outlined above), monism does not follow by necessity. On the contrary.

Furthermore, uniformity in funding does not follow by necessity from monism in one school of thought. That would require an extra institutional step, which I call usurpation. By this I refer to a particular type of thinking taking a privileged social or institutional role regardless of the research question and topic under consideration. Usurpation comes in two variations. In the first variation, a special priority is allotted to causal questions above other kinds of research questions. The idea is that it is a more important and noble task to answer these questions rather than other questions. For example, "descriptive studies" are seen merely as a steppingstone towards the "real work", which is "explanatory studies."

An early precondition for the popularity of the concept of causality lies in envy towards the natural sciences and technology. Given the magic and the wonders that the natural sciences, technology, and medicine have produced, and how they have ameliorated the living conditions of millions, some believe that the more the social sciences become like the natural sciences, the greater will be their success. Perhaps no idea is better suited than causality to model the social world as if it basically consisted only of regularities to be studied as if they were physical laws. This idea is as old as the social sciences themselves, as illustrated, e.g., by Auguste Comte's (1798–1857) attempt to define sociology. With varying intensity, the idea has remained in the social sciences ever since. So, there is a certain, almost classical, pressure to articulate all questions as if they were causal questions. That is the first variation of usurpation: causal thinking claiming superiority over all of social science.

The second variation takes place when one kind of causality, one kind of causal question and one way of answering this question define a model for all engagements with causality.


For example, if we ask whether the intervention X would work for us in our context, then some think that the primary question which must be answered first is whether there is evidence based on a randomized trial (in another context) which shows that X leads to more Y than non-X. Some think that an affirmative answer to that question clinches the debate about whether we should install X in our context, too. Nancy Cartwright, however, argues that there is a long road from “there” to “here”. The question of whether X would work here is fundamentally different and requires attention to a number of supporting factors not covered by an RCT study done elsewhere (Cartwright & Hardie, 2012). Stern et  al. (2012) argue that we can ask several types of questions about causality. We should reserve different approaches for different types of causal questions. For example, to detect risk and side effects, approaches are needed which are not at the “higher level” of evidence where the RCT is found (Osimani, 2014). The third process which contributes to making The Causality Syndrome truly catastrophic is a compression of methodological rules about causation into simplified scripts such as “Without a control group you cannot say anything.” Please pay attention to the exact articulation of this formulaic script. One thing is to say that that a control group is useful. Another is to say that it is necessary if you wish to produce a causal statement (here monism comes in, often supported by usurpation). But the script goes further. It actually says that you “cannot say anything”, the assumption apparently being that if you cannot contribute to causation (as the usurper defines it), then you should remain silent. In other words, we have a problem with several roots. Monism is one of the problematic factors, but it does not necessarily lead to The Causality Syndrome, were it not for the additional effect of usurpation and simplification. Other simple scripts are: “The quality of a study simply depends entirely on whether its design is found on the top shelf or not.” And: “A randomized trial simply clinches the discussion once and for all.” Notice how the ideas are both simplified and sharpened by totalizing qualifiers such as “entirely” and “once and for all.” The metaphorical imagery of the “top shelf” is also illustrative. Certain causal studies rest on a “the top shelf”, even if you haven’t read the study, even if you do not know what the problem is, and even if you have never thought about who constructed these funny “shelves” and who put which studies on which shelves. Personally, I never thought that putting things

20 

P. DAHLER-LARSEN

on shelves is a good metaphor for the contested and dynamic nature of social science inquiry. To understand how compressed and simplified ideas about causality become so compressed, and yet believed by so many, it is important to attend not to the philosophy of science, but to institutional theory, which pays attention to the mechanisms that make certain ideas flourish in the present socio-historical climate in particular ways. * * * The very nature of the knowledge about causality and how to find it changes as it is born anew in new social settings. As social science walks into many domains of social life not the least in the form of Mode-II research (applied research often produced in the context of its application) (Gibbons et al., 1994), many people in many contexts become involved in planning, funding, and using social research. This happens in policy analysis, evaluation studies, implementation studies, studies of social problems, innovation, public health etc. For example, funders, administrators and journalists are interested in research. They need ways to gauge the quality of research studies, but do not have the time and training to go into depth with all the potential issues involved in such assessment. The criteria for judging knowledge production in mode-II are partly defined by the multiple actors involved, but they rely on common scripts to do so. Institutional theory helps make some sense of this phenomenon. Institutional theory is occupied with how legitimacy and credibility rests on particular social norms. Institutional theory broadly explains how ideas that appear to be consistent with modernity, science, rationality, and individualism have a greater chance of being diffused than other ideas (Strang & Meyer, 1993). I have suggested previously that pointing to certain aspects of a scientific study (such as “randomization”) is sometimes used as a magic wand to symbolically enhance the quality of a study and to ward off critique. It is therefore a win-win situation if researchers who are keen on causal studies can influence the institutional landscape around them so that a belief is created in support of a clear formula for the assessment of causal studies (and where non-causal studies are simply not relevant to the same degree). The complexity of the institutional landscape around research only increases the need for social science to appear neutral. Again, presenting a

1  THE CAUSALITY SYNDROME 

21

causal study as something which merely lives up to short-hand rules about how to prove that X causes Y is one way of trying to solve that problem. The idea of causation as guiding motif in the social sciences has become particularly useful as a rhetorical bulwark against waves of politization. One such wave was the revolts at the universities following 1968. Following that, outspoken intellectuals on the left have made social science equivalent to a particular political orientation (sociology equals socialism, if you will). In most recent years, fewer but equally outspoken advocates on the extreme right have used their academic chairs to advance their ideological views. At the same time, we have also seen a new interest in activism in social science, in being woke, etc, as well as an intense critique of same. Causal studies neatly produce a rhetorical defense against alleged ideological capture of social science. “We are impartial”. “We are simply studying whether A causes B.” Causality lends itself easily to a rhetoric that is useful to protect allegedly neutral social science. It provides an attractive defense under social conditions where social science and knowledge production in general are “production forces” in a “fragile” society (Stehr, 2001; Beck et al., 1994). At the same time, the Causality Syndrome is also undergirded by more specific institutions and specific pillars such as rules, money, and norms. In some policy areas in some countries, government agencies want to finance applied research only if it lives up to standards consistent with The Causality Syndrome. For example, many philanthropic foundations want to finance good research. To convince various stakeholders that the funded research is “rigorous”, “robust”, and “trustworthy”, they subscribe to The Causality Syndrome and help finance it in practice. Compared to other sources of income for research, these foundations actually exert a substantial influence on what social research focuses on and how it is done. In addition, some public policies use funding gained from private foundations as a performance indicator. On that basis, the impact of private foundations on research is enhanced through public money, too. You can also find particular ideas about causality in review boards, think tanks, and knowledge centers. Some knowledge centers are founded with a mission to promote a particular kind of causal analysis (such as RCTs and reviews of RCTs). This makes them fundamentally different from other institutions who are searching for the truth or institutions founded to solve particular social problems. According to Cartwright (2022: 173), there is a risk that an institution whose identity rests on a particular

22 

P. DAHLER-LARSEN

definition of causality and a particular approach such as RCTs will “lack intellectual humility”. Ideas about causality also appear in editorial offices of scientific journals and among their reviewers. Some academic gatekeepers such as advisors and educators also use this observation to further motivate young scholars to live up to the standards defined by the Causality Syndrome. Although there is a kernel of truth undergirding this practice, since some journals do prefer some types of studies, advise from seniors may exaggerate the socialization effect of the Causality Syndrome on younger scholars. Nobody in fact knows for sure what the future brings in terms of criteria for publications and promotions, but those who claim to know it are often listened to, which to some extend helps define that same future (Dahler-­ Larsen, 2022). The media world functions as an amplifier of the Causality Syndrome. Media portray the credibility of research as a matter of all-or-nothing. New research findings are oversold as promising definitive results “once and for all” and as “something that totally changes or understanding” of this and that, whereas research that is criticized becomes “not trustworthy”, “paid for by big pharma”, or otherwise “flawed” if not marred by deadly problems such as “lack of integrity” or “problematic ethics.” Media rarely leave sufficient space for careful consideration of pros and cons and weighing of the evidence. Instead, they deepen the expectation gap between what the public expects and what can be delivered (Power, 1997). To be placed on the right side of these dichotomies, some researchers position their study as something that “once and for all answers the question about the effect of X on Y.” The Manichean debates in social media make the problem worse. It is therefore tempting for many to revert to a set of rules of thumb, which can be used to gauge the quality of research beyond dispute. Even in what could be expected to be relatively sophisticated platforms for scientific debate in social media, I find very limited attention to a careful deliberation of criteria for what constitutes a good study, causal or not. In practice, of course, these disputes continue, but as long as the Causality Syndrome functions as a benchmark, and something that people throw at each other in heated debates, its institutional basis is only further enhanced rather than reflected upon. Even students have a responsibility for maintenance of The Causality Syndrome. If their neo-positivist teacher says “it is really important to have a control group”, then some students write in their notebook that “If I

If our educational institutions tell students that a good student performance is one that lives up to clear, pre-determined objectives, it is not surprising if some students exaggerate the formulaic character of prescriptions in research methodology.

In sum, many processes and mechanisms among many groups of people provide support to The Causality Syndrome. In addition, diffusible ideas need to be simple short-hand formulas that are easy to remember. We live in busy times. Decisions are made quickly (Rosa, 2013). There is an intense time pressure on practitioners in areas where social science is planned, funded, communicated, digested, and used. Weiss and Bucuvalas (1980) already showed that while decision makers rarely have time to read research studies, they do use quick heuristics to ascertain the importance and relevance of research findings. Today, the discrepancy between the amount of research being produced and the capacity practical decision makers have to digest what might be relevant for them is increasing every day. There is a genuine need for simple heuristics to determine what "the hot stuff" is and "what the evidence says". The criteria for what passes as legitimate social science in a given institutional field may have relatively little to do with how social science or philosophy of science discusses methodology. When heuristics are institutionalized as scripts, they are used routinely without much reflection. A heuristic may work because of its imagery and smooth rhetorical qualities, not because of its foundation in philosophy. Short-hand scripts that appear to be consistent with modernity, science, impartiality, and rationality have a good chance of becoming institutionalized under modernity (Strang & Meyer, 1993), but when they are also easy to articulate and easy to remember, then they are much more likely also to become widely diffused. This is exactly why Kahneman's distinction between System 1 and System 2 is useful for our purpose. The scripts prescribing certain formulaic notions of causality are simplified for a reason. They are used in System 1 for quick assessment and decision-making carried out by people who do not have time to understand the complex, controversial, and concentration-demanding debates about causality which take place in System 2. Heuristics in System 1 are usually associated with a feeling of ease and good mood, whereas System 2 requires energy and leads to intellectual depletion.

The Causality Syndrome answers an institutional need for heuristics to help decision-makers make easy and quick decisions with which they are comfortable because they resonate with ideas about the "top shelf" in research. In this sense, the Causality Syndrome resonates both with broader social norms in society and with specific mechanisms known to enhance diffusion. What gets the upper hand in institutional logics is the short-hand version of causality.

In the preliminary theory of The Causality Syndrome as an institutional phenomenon suggested in Figure 1 and elaborated above, dynamism and interaction between the components and their institutionalization play a crucial role. For example, a focus on causality is one thing, but a focus on causality in combination with a monist definition of causality makes the problem worse. Diffusion of ideas would be less of a problem if it did not happen in combination with usurpation and simplification. There are also incentives and reinforcing feedback mechanisms. It contributes to the institutionalization of certain beliefs that those who subscribe to these beliefs are rewarded. In other words, while institutionalization is an important underlying driver, the interaction between the components suggested in Figure 1 enhances their total effect.

The Causality Syndrome leaves social science scholars in universities at a critical juncture. If social science has its own autonomy as an ideal, how does it respond to the institutional pressures in support of The Causality Syndrome? These pressures operate through institutional pillars such as money, rules, and norms. With the increasing interaction between universities and society, with Mode II research, and with increased interest in social science with impact in society, social science needs to reconsider the rules of the game in which it engages. The institutional pressures in favor of the Causality Syndrome are, of course, not entirely external to universities. Universities host not only academic gatekeepers who promote the Causality Syndrome (and some who comply with it hesitantly); the contemporary university also houses specialists in the interaction between research and society, such as funding rainmakers and communication wizards. In some situations, to the extent that these specialists also get a sense of the demands of The Causality Syndrome, they become the internal institutional advocates for its ideas and values. These specialists may have limited understandings of the philosophy of science, but they know what difference it makes if a study from their institution makes it to the front page of the national media. If someone tells these specialists that there is a recipe that without further consideration separates the studies on "the top shelf" from other studies, they become easy victims of the Causality Syndrome, too.

When funders, funding specialists, and communication specialists join forces, a bandwagon effect is socially constructed, which reinforces The Causality Syndrome. To understand how we have come to use quick and simplistic heuristics (System 1) to think about such a complicated phenomenon as causality, which really deserves all we can offer of deliberation, afterthought, and debate (System 2), we need to look at collective decision-making and institutional factors (Cartwright, 2022).² In this situation, how social science deals with The Causality Syndrome is a critical challenge. Acceptance is reinforcement. Reflection is needed. We need to slow down System 1 heuristics. We need to create more space.

² An anonymous reviewer did not accept my argument about simplification (which is enhanced by institutionalization). He or she argued that the key underlying problem was the monism of neo-positivist researchers "who do not know about other understandings of causation, or do not accept them, and therefore engage in strong 'my way or the highway' behavior vis-à-vis the students they teach, and the colleagues who engage in other forms of research." I do acknowledge that this monism contributes to The Causality Syndrome. However, it is a part of the problem which is difficult to fix. In a democracy, neo-positivists have a right to speak. Furthermore, "monism" is relative. Some people would call my view "monist", but I should still be allowed to teach and publish. Therefore, in my view, although monism is problematic, the real problem begins when a monist view is allowed to usurp positions of power so that one school of thought defines research quality on behalf of all schools of thought. Furthermore, institutionalized and mnemonic simplification further adds to the problem. The idea of using a control group is respectable (although not good when used as a monist idea), but claiming that "if you don't have a control group, you can't say anything" adds insult to injury. So, all in all, I think it is fair to think of monism, usurpation, simplification, and institutionalization as factors which all contribute to The Causality Syndrome. It is good to have many explanatory factors, because that gives us more to work with.

Create More Space

The theoretical model suggested above is an ideal-typical one. It should not be confused with reality, which is more complex, diverse, context-dependent, and full of contradictions. The model describes four interacting components (monism, usurpation, simplification, and institutionalization), but maybe not all of them are present to the same degree in a given situation. They are also not equally problematic for all people in all situations.

As a consequence, the ideas that embody causality come in different configurations and combinations empirically. There are different counter-arguments in different research environments. Therefore, there are many fronts in the debate, not just one. Some people reject the notion of causality as such, for example on the basis of hermeneutics, phenomenology, or poststructuralist studies of discourse. Others reject only narrow versions of causality. They seek to broaden the concept of causality (Kurki, 2006; Stern et al., 2012). Several advocates of qualitative methods also think that they can in fact contribute to causal analyses, and that the issue is too important to leave to the quantitative people (Maxwell, 2004). Qualitative researchers are, for example, interested in "how things work", so they often use other recipes, for example a process-oriented approach to causality rather than a variance-oriented one (Pawson & Tilley, 1997). There are also strong and responsible researchers who believe in the notion of causation and who are as worried as I am about the simplistic conclusions presented in publicized studies in the media. With these researchers I also share a skepticism about the idea that one causal study with a world-class design will answer a given causal question "once and for all." I respect the cleverness with which they use methodological reasoning to maintain a critical discussion about problematic studies. However, while they maintain high hopes about the importance and usefulness of good (not simplistic) causal studies and their generalizability across contexts, I am much less optimistic about that, since I believe that all studies are context-dependent and thus somewhat problematic. But as said, we do agree on some things, and we are both worried when corners are cut in causal studies. So, some see a need for better causal studies at the same time as others want to broaden the notion of causality and others want to totally reject it. Remember, the distinctions characterizing the many standpoints on causality do not map neatly onto one another. There are, indeed, many fronts in the debate.

There are also things going on between the lines. Among the quantitative folks, there are both purist and liberal views as to how close a study needs to be to a true randomized experiment to produce trustworthy or relevant causal claims. Some quantitative folks talk about "correlational studies" (which is a derogatory term in comparison with "real" causal studies), but they may sometimes accept that very similar studies produce "antecedents" and "predictors" which are valuable because they are (tacitly assumed) to be underlying causes of outcomes.

From a purist perspective, that argument is technically not permitted, but it is difficult to criticize if you do not mention the c-word explicitly. So, technically, some mobilize all the taken-for-granted imagery related to causality, but they do not subscribe officially to the concept, so they exempt themselves and their friends from the methodological rules which would otherwise be applicable. We all "know" that correlation allows prediction, but prediction is only effective if there is real causality underneath it. Yet if we keep a distance between what we "say" and what we "know", nobody can catch us.

The complexity of these many fronts, and sometimes even rhetorical moves across the fronts and within camps, makes it obsolete and unproductive to tackle the problem with causality as a battle between two camps over one front (as we did in the old qualitative-quantitative "paradigm war"). Instead, my ambition is to create more spaces: More space for diverse funding practices not linked to simple scripts for causal studies. More space for careful attention to the links between how research questions are articulated and how they are answered. More space for people who hold different beliefs about what causality is and how to find it. More space for researchers who are not doing causal studies.

* * *

Which specific ideas are at play in your country, your field, your department, your subdiscipline, your mental models? I cannot know that with any degree of specificity. You have to find that out. If you want to "generalize" my observations, it is really up to you (Stake, 2000). I trust you can take from the book what you need.

I offer my help to you in the following way. I will present 25 questions. I have chosen them because I have come across them in the literature and in my practice as a teacher, evaluator, or reviewer, and because I think that if we answer them cleverly, we can overcome the Causality Syndrome. Each of these questions boils an aspect of the larger problematique related to The Causality Syndrome down to a specific issue. In real life, people have different answers to each of the questions. The specific combination of ideas that one buys into in a given situation produces a particular stance regarding causality and a particular set of casualties. But people are not tied to all ideas with tight ropes. While some ideas are like deep convictions (which human beings cling to like mammals cling to something that is soft, warm, and nourishing) (Morin, 1990), other ideas are more like challenges, crossroads, or attractive nuisances that we may relate to, but not embrace.

So, some people are strongly attached to, say, particular answers to questions 1, 2, 3, 4, and 5, while others subscribe to only 1 and 2. Some believe that there is only one way that most of the questions can be answered, never giving them any thought. The more frequently the latter happens, the more traction The Causality Syndrome will get. A strong combination of many of the ideas has the power to produce an ideology of causality inflicting many casualties. On the other hand, subscription to one or a few ideas, a more reflexive stance toward others, and a blunt rejection of the really bad ideas produce more spaces for nuanced and sensible positions, which is exactly the endeavor I wish to support with this book project. Only quick, thoughtless, and affirmative answers to all twenty-five questions would worry me. They would only reproduce The Causality Syndrome as we already know it.

I separate the twenty-five questions from each other not only to be empirically flexible towards the reader's situation, but also to show that sometimes a relatively good idea might lead to an almost similar neighboring idea which is in fact not nearly as good. This is especially what happens when methodological rules intended for complex practical decision-making in fact become reduced to short-hand scripts which are used indiscriminately. Some would argue that fewer than twenty-five questions would be enough, but it is important to separate ideas which sound almost identical but have different practical implications. You may say yes to Q14 and no to Q15 even if they may sound identical. They are not. I offer you a surgical knife to separate the Causality Syndrome into small parts that can be analyzed separately. That is why I wish to break the analysis of The Causality Syndrome down into twenty-five pieces. I offer my answer to each of the questions. Maybe you want to clarify your position. It does not have to be exactly the same as mine.

Let me warn you. Among the twenty-five questions are some really surprising ones. I am deliberately provocative in the way I articulate them. This is because I believe some institutional practices are undergirded by beliefs which are highly problematic, but which survive only because they are usually not articulated. They become less defensible when they are articulated very succinctly. I am particularly interested in ideas that function as a mediatory myth (Abravanel, 1983). A mediatory myth builds an apparently logical bridge between a value or norm that is widely accepted and a practice that is not.

The coolest example I know of a mediatory myth surfaced in David Frost's interviews with Nixon (replayed in the movie with the unsurprising name Frost/Nixon). Frost presses Nixon to admit that he broke the law during the Watergate affair and the subsequent cover-up, and Nixon responds: When the president does it, it is not illegal. Once a mediatory myth is expressed so clearly, it is easy to see that it only works for Nixon. The rest of us know that it does not really work, because the law in fact also applies to the president. To expose a mediatory myth is not to attack a straw man. Nixon in fact acted as if he believed that he was exempt from legislation. Whether he actually articulated the dictum ("when the president does it, it is not illegal") or whether someone did it for him is less important. For analytical purposes, the critical step is to explicate those beliefs which justify a particular set of actions and practices. The analysis only becomes more powerful, and perhaps provocative, if the person in focus does not normally articulate that idea or only expresses it under pressure, like Nixon did when faced with Frost's interview technique. If I succeed in showing this, I am not attacking a straw man merely because no sane person would openly advocate the ideas at hand. They don't have to. They just have to take them for granted, like Nixon did. Once we articulate questions explicitly, reflection is enhanced, and if people choose to answer them differently after reflecting upon them, it is a good thing. As we weed out the bad ideas, we create more space between the remaining acceptable, but still uneven, ideas.

References

Abravanel, H. (1983). Mediatory Myths in the Service of Organizational Ideology. In L. R. Pondy, P. J. Frost, G. Morgan, & T. C. Dandridge (Eds.), Organizational Symbolism: Vol. 1. Monographs in Organizational Behavior and Industrial Relations (pp. 273–293). JAI Press.
Beck, U., Giddens, A., & Lash, S. (1994). Reflexive Modernization: Politics, Tradition and Aesthetics in the Modern Social Order. Stanford University Press.
Bevir, M., & Blakely, J. (2018). Interpretive Social Science: An Anti-Naturalist Approach. Oxford University Press.
Cartwright, N. (2022). A Philosopher Looks at Science. Cambridge University Press.
Cartwright, N., & Hardie, J. (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford University Press.
Castoriadis, C. (1997). World in Fragments: Writings on Politics, Society, Psychoanalysis, and the Imagination. Stanford University Press.

Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2014). Mechanisms and the Evidence Hierarchy. Topoi, 33, 339–360.
Cook, T. D., & Campbell, D. T. (1986). The Causal Assumptions of Quasi-Experimental Practice: The Origins of Quasi-Experimental Practice. Synthese, 68(1), 141–180.
Dahler-Larsen, P. (2022). Your Brother's Gatekeeper: How Effects of Evaluation Machineries in Research Are Sometimes Enhanced. In E. Forsberg, L. Geschwind, S. Levander, & W. Wermke (Eds.), Peer Review in an Era of Evaluation: Understanding the Practice of Gatekeeping in Academia (pp. 127–146). Palgrave.
Dahler-Larsen, P., Sundby, A., & Boodhoo, A. (2020). How and How Well Do Workplace Assessments Work? Using Contextual Variations in a Theory-Based Evaluation with a Large N. Evaluation.
Geertz, C. (1973). The Interpretation of Cultures. Basic Books.
Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M. (1994). The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. Sage.
Greene, J. C. (2007). Mixed Methods in Social Inquiry. John Wiley & Sons.
Hendricks, V. F., & Vestergaard, M. (2017). Fake News: Når virkeligheden taber. Gyldendal.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Koselleck, R. (2007). Begreber, tid og erfaring. Hans Reitzels Forlag.
Kurki, M. (2006). Causes of a Divided Discipline: Rethinking the Concept of Cause in International Relations Theory. Review of International Studies, 32(2), 189–216.
Latour, B. (2003). Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern. Critical Inquiry, 30(2), 225–248.
Law, J. (2004). After Method: Mess in Social Science Research. Routledge.
Maxwell, J. A. (2004). Using Qualitative Methods for Causal Explanation. Field Methods, 16(3), 243–264.
Mayne, J. (2011). Addressing Cause and Effect in Simple and Complex Settings through Contribution Analysis. In R. Schwartz, K. Forss, & M. Marra (Eds.), Evaluating the Complex. Transaction Publishers.
Morin, E. (1990). Kendskabet til Kundskaben: En erkendelsens antropologi. Ask.
Osimani, B. (2014). Hunting Side Effects and Explaining Them: Should We Reverse Evidence Hierarchies Upside Down? Topoi, 33, 295–312.
Pawson, R., & Tilley, N. (1997). Realistic Evaluation. Sage.
Power, M. (1997). The Audit Society. Oxford University Press.
Rosa, H. (2013). Social Acceleration: A New Theory of Modernity. Columbia University Press.

Sandahl, R., & Petersson, G. J. (2016). Kausalitet i filosofi, politik och utvärdering. Studentlitteratur.
Schwartz-Shea, P., & Yanow, D. (2012). Interpretive Research Design: Concepts and Processes. Routledge.
Stake, R. E. (2000). Case Studies. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of Qualitative Research (pp. 435–453). Sage.
Stehr, N. (2001). The Fragility of Modern Societies: Knowledge and Risk in the Information Age. Sage.
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the Range of Designs and Methods for Impact Evaluations. Report of a Study Commissioned by the Department for International Development, Working Paper 38. Department for International Development.
Strang, D., & Meyer, J. (1993). Institutional Conditions for Diffusion. Theory and Society, 22(4), 487–511.
Weiss, C. H., & Bucuvalas, M. J. (1980). Social Science Research and Decision Making. Columbia University Press.

CHAPTER 2

Twenty-five Questions

Abstract  This chapter presents twenty-five questions about causality, which the researcher in social science is likely to encounter in practice. Examples include: Is causation the most important and honorable task in the social sciences? Is causality only one thing? Will social science cleanse itself of ideology and normativity, if it restricts itself to causal analysis? Does causation always require a counterfactual? Can you sell your study by pretending that its design is better than it actually is? Well-reflected answers to these questions may lead to a revision of one’s position vis-à-vis The Causality Syndrome. Keywords  Causality • Causation • Counterfactuals • Control groups • Evidence • Evidence hierarchy • Monism

Q1. Is Causality a Useful Concept in Social Science?

At first glance, this question sounds relatively innocent, as long as causality is not defined strictly and circumscribed by binding methodological rules. Nevertheless, the idea that causality is useful stands at a major crossroads in social science where people take different paths. History explains why. When modern natural science expressed its findings in terms of general laws explaining regularities in physical phenomena, the emerging social sciences had to decide whether or not to cast their own ideals and practices in the same mold.

They founded sociology on the belief that members of a society are subject to a particular set of collective forces which can be scientifically described. Some, especially those who came to define themselves as "positivists", beginning with Auguste Comte (1798–1857), wished to describe these forces and their regular effects in causal terms. If the natural sciences find out that a ship immersed in water is subject to an upthrust equal to the weight of the fluid displaced (Archimedes' principle), then sociology can find out that variation in the rate of suicide across human communities is proportional to the degree of anomie in those communities, as Durkheim's classical study showed. Others argued that the social sciences and humanities should be concerned with fundamentally different objects of study and therefore should observe very different epistemological and methodological principles. Human beings construct meaning out of their experiences. They express themselves through language and other symbolic systems. Whereas the "Naturwissenschaften" aim at "Erklären" ("explanation"), the "Geisteswissenschaften" should focus on "Verstehen" ("understanding"), as suggested by Dilthey (1833–1911).

This bifurcation has cast shadows for more than a hundred years. It has provided a minimal kind of defense for qualitative studies, or more precisely interpretive studies (Schwartz-Shea & Yanow, 2012), in the social sciences. As a corollary, the notion of causality has been flatly rejected by some as not useful in social science (Lincoln & Guba, 1985; Bevir & Blakely, 2018). Perhaps paradoxically, some qualitative researchers (who would otherwise be very skeptical about binary distinctions) thus continue to build on Dilthey's binary distinction. Although it reserves a space for interpretive studies (which might otherwise have been put under even more pressure), the distinction has many weaknesses which become clearer and clearer over time. One interesting observation is that the study of meaning, signs, and symbols is also important in the natural sciences. For example, when a hawk is about to find a nest containing the offspring of the lapwing, the lapwing makes a funny little dance with its wing on the ground, pretending it is wounded, simply to distract the hawk. There is a whole field of biosemiotics which studies colors and behaviors among animals as parts of sign systems. A sharp distinction between the natural and the meaningful is, of course, also difficult to uphold in the social sciences. Luckmann (1970), a strong spokesman for interpretivism, argues that there really is no Chinese wall inside human beings which separates the body and the mind. Battles about where to place the boundary are not very productive.

Nevertheless, we see several disciplines such as psychology and public health being split into different camps depending on whether they subscribe more to the Naturwissenschaft ideal or the Verstehen ideal. In some psychology departments, the advocates of either ideal are placed on different floors in the same building, so that they can get the different kinds of work done without interfering with each other. Whether this is good for psychology and for the advance of knowledge is uncertain, but it testifies to how deep the consequences of Dilthey's distinction are, also today.

Many new approaches try to cross the abyss or neutralize it. Since Bateson's (1972) Steps to an Ecology of Mind, there has been an interest in complex systems which implies an elimination of a stark distinction between the social and the natural. Presumably this is an important step to overcome ecological crises. Also, Morin's (1990) idea of a scienza nuova, the new materialism, and Latour's interest in non-human subjects suggest that it is unproductive to split the world into one part where causality reigns unconditionally and another from which it is banished.

Some researchers in the social sciences also find that an all-or-nothing view of causality is unproductive, but for very practical reasons. Many of them are involved in Mode II knowledge production, i.e., knowledge produced in the context of its practical use, such as social work, public health, implementation studies, or evaluation. Furthermore, if they observe some unfortunate events, such as elderly people falling, or rockets falling down, they are concerned about why that might happen, even if they do not have a clear theory about what the predictors might be (Maxwell, 2004; Vaughan, 1996). They are also interested in developing and testing some interventions to ameliorate the problem. They are confronted with questions such as "Does our intervention work?" "Why or why not?" They are also interested in whether the intervention might have unforeseen side effects, even tentatively. Usually, researchers in these practical circumstances do not begin with a razor-sharp distinction between causal questions and non-causal questions. They do not throw out all causal questions and then answer the rest. Usually, they do the best they can with the questions practitioners raise and the evidence they can get, often taking into account whether it is possible to help these practitioners (or overcome their resistance to trust in the findings). They are clever enough not to begin with a version of the causal question which is impossible to answer under the circumstances, such as "What is the net effect of the intervention X on the outcome Y in a large representative sample?"

Since X already cost a lot of money or was imposed on them by law, they are more modest, but also more locally interested in whether X works reasonably well in their own community. Fortunately, it is much easier (but still difficult) to answer the latter question than the former. The point is not that practitioners thereby escape answering a causal question to the benefit of another type of question. In fact, if you really want hairsplitting, the question of whether X works here can be understood as a small contribution to the construction of knowledge about the extent to which it works more broadly. And if X demonstrably does not work here, then you can safely conclude that it does not work in all places, which might be good to know if someone is trying to sell X everywhere, or oversell X at an inflated price. Still, also at the local level, it is fair to ask whether X produces Y in cooperation with all the contextual factors in the situation at hand. Sometimes even a tentative set of observations is enough. For example, if you observe potentially dangerous side effects in addition to Y, you might take action even if you are not absolutely sure about the underlying causal link. You may also want to know how X fits into people's lives in that context and how they feel about that. So, there are many aspects of X that you want to know about. The questions that require a causal analysis in its strictest sense are a minority of all questions that one can potentially ask. And the question about the net effect of X on Y is, in turn, only one among those questions.

In practical circumstances, researchers twist and tweak the questions they are dealing with and negotiate the rigidity of the rules used to gauge the validity of the evidence provided to answer these questions. Whether or not these questions are defined by some divine authority as "causal questions" is not the starting point for the practical engagement. It is only the starting point for methodological purists who maintain a sharp distinction between causal questions and other questions. There are many ways to engage in questions like "does X work" and "what kind of difference does X make" which do not stand and fall with all the strict methodological rules about causality as conventionally defined. Out of fear of entrapment in these rules, it would be sad if we gave up everything that social science in general and qualitative inquiry in particular can contribute to all kinds of Mode II production of knowledge just because we did not wish to touch anything that was called "causality" (Maxwell, 2004).

I must agree with Maxwell here. So, I have no particular problem with an affirmative answer to Q1. However, my acceptance of the idea rests on strategic grounds, not on ontological ones. If I choose to engage in a research question with the term "causal" in it, I do not commit myself to a particular world view and an exterior set of methodological rules. I commit to a discussion with people around me interested in the same question about how we understand the question, what kind of evidence we think we need to answer the question, and what we think the outcome of our endeavors might be. If we have to discuss from scratch what the problem is, what the intervention is, what the outcome is, whether it is reasonable to use the term causality, and what we might mean by all these terms, so be it. This is part of doing, for example, evaluation in a world with other people in it (Podems, 2018). When I do evaluation, I often do not win those discussions. Sometimes my stakeholders, partners, or opponents change their view along the way, or I change mine. But we do not always have to decide in advance if all knowledge created along the way is either "causal" or "not relevant." For example, if I find out that all people in the local context detest X and will never adopt it as part of their everyday life, it is unlikely that X is going to have the causal effect it was officially expected to have. So, I contribute to the broader understanding of how X "fits in" or not, which is relevant also for the causal folks, even if the knowledge I produce is not designed to answer a "causal" question. I could imagine other studies ("descriptive" studies) contributing to an understanding of how X works and how it should be evaluated in a way that would seriously reduce the need for a "causal" study. In other situations, my clients or partners have already written in an application for funding that their intervention is likely to have this or that causal effect. In that case, I explain to them that I do not think it is fair that as soon as funding is secured, they suddenly turn interpretivists who question the ontological and methodological basis for all talk about effects. I do not think it helps to play the philosophy of science card here. A promise has been made between human beings. The grantees do not have to demonstrate that there was the promised causal effect. But they have to do their very best to find out if there is something that can reasonably be called a causal effect, if they have promised that there will be such a thing. If they say, "we know that, but we had to lie a little bit to get funded", I think that is highly problematic, as it helps support illusions created by The Causality Syndrome.

If I evaluate an intervention that has a demonstrable effect upon, say, the reduction of sexual harassment, I do not compromise my integrity if I talk about the “causal” effect of the intervention, but I insist that the role and meaning of causality is part of the discussion about how the intervention works, not above it. If I believe, strategically, that the concept is likely to be detrimental to our endeavors or it stands in the way of our collective insights, it is my intellectual duty to formulate the problem in another way. My answer to Q1 is yes, causality may be a highly useful concept, but it is not above dispute, it is sometimes overrated, and its usefulness is highly situational.

Q2. Is Causation the Most Important and Honorable Task in the Social Sciences?

Advocates of this idea usually think there is space for diversity in the social sciences. There can be quantitative and qualitative studies; there can be explanatory and descriptive studies. At first sight, this view sounds accommodating and inclusive. In Diltheyan terms, it means that Erklären and Verstehen can co-exist in the same social science department. Then comes the problem, which is that explanatory studies (which unsurprisingly means causal studies) are held to be more important, more distinguished, and more honorable than descriptive studies (the latter meaning case studies, interpretive studies, and more). This idea has two thousand consequences. One of them is that a hierarchical working relation is established where qualitative researchers do all the potato peeling and the cutting and slicing of the vegetables, and then the causal folks do the French cooking with these products. This division of labor is then called "pluralism" and "respect for each other's work." It also often means that qualitative work must justify its own existence in terms of how it contributes to another, finer category of research. Qualitative research must legitimize itself based on a terminology defined by others, including criteria for methodological quality (Becker, 1996). An example is the issue of inter-coder reliability in qualitative research. I never understood how you can expect two coders who use open coding to come up with exactly the same terms for new, hitherto un-coded phenomena. So, if you are interested in inter-coder reliability, you assume closed coding schemes, but most of the good qualitative work involves open coding. Furthermore, inter-coder reliability is particularly important if you subsequently want to count the frequencies of observations in each code and maybe do correlation analysis, perhaps on the way to causation. However, if you keep the meaning content of your codes intact, codes such as "flooded landscape", "tsunami", and "seems like natural disaster with water all over the place" have enough meaning in common to make inter-coder reliability a negligible problem in all important respects.
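To make the point concrete, here is a minimal sketch of my own (not from the book; the coder labels, the synonym mapping, and all numbers are invented) showing how a literal agreement statistic such as Cohen's kappa punishes open coding for differences in wording rather than differences in meaning:

    # Cohen's kappa treats labels as opaque strings, so open codes that
    # mean the same thing register as disagreement. All data invented.
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two coders labeling the same segments."""
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                       for c in set(labels_a) | set(labels_b))
        return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

    # Open coding: both coders see the same phenomenon, invent their own labels.
    coder_1 = ["flooded landscape", "flooded landscape", "interview refusal", "flooded landscape"]
    coder_2 = ["tsunami", "water all over the place", "interview refusal", "tsunami"]
    print(cohens_kappa(coder_1, coder_2))   # low: labels differ as strings

    # Once meaning content is compared, synonymous codes can be merged.
    merge = {"tsunami": "flooding", "water all over the place": "flooding",
             "flooded landscape": "flooding"}
    print(cohens_kappa([merge.get(c, c) for c in coder_1],
                       [merge.get(c, c) for c in coder_2]))  # near perfect

The first score is low only because the labels differ as strings; after the synonymous codes are merged on their meaning content, agreement is nearly perfect, which is why the "unreliability" of open coding can be a negligible problem in all important respects.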

It is a source of much frustration to qualitative researchers to have to explain what they do as if it were a low-quality version of quantitative methods, but it is one of the unfortunate consequences of the hierarchization of explanatory versus descriptive research (Becker, 1996).

Too often, the hierarchization of different kinds of research is simply and bluntly a hierarchization of social relations between researchers. An easy way to test this hypothesis is to travel to a department where the qualitative folks get as much external funding and international recognition as the quantitative folks. Then you will see that the relation between quantitative and qualitative research is basically social and institutional, not methodological. At the same time, however, it is of course also rhetorical and conceptual. To have a hierarchy, you need distinctions. But people who do discourse analyses of speeches of political leaders, or who observe diplomatic behavior, or who study why an organization sends up a rocket which falls down (Vaughan, 1996) do not apologize for "only" making a "descriptive" study. They never felt that if they wanted to do a very good study, then it subsequently needed an additional "explanatory" component. Therefore, if you hear someone talk about explanatory and descriptive studies and argue that in the larger division of labor in social science there is a place for all, then you should ask who defines these categories and who controls the division of labor. In the meantime, maybe we should attend to how people describe their own work rather than how everybody's work is described from only one perspective.

Q3. Are All Great Social Scientists Famous for Their Causal Analysis?

If causation were, by definition, the most important and honorable task in social science, we would expect that most of the great contributors to social science through the ages were famous because they did this one important and honorable thing: causation. Then consider Tocqueville, Tönnies, Durkheim, Simmel, Weber, Parsons, Habermas, Luhmann, Foucault, Bauman, Latour, Hirschman, Bellah, Becker, Kuhn, Geertz, Berger and Luckmann, Anderson, Goffman, Lakoff, Michels, Allison, Wenger, Bourdieu, Bateson, M. Mead, G. H. Mead, Schutz, Weick, Ostrom, Putnam, Sen, Dewey, Castoriadis, Yin, Glaser and Strauss, Boltanski and Chiapello, Rosanvallon, Miles and Huberman, Schneider and Ingram, Simon, Meyer, March, Vaughan, Soss, and Lamont.

I am sure others would have other names on their lists. Others would mention Butler and Haraway and many more. Nevertheless: None of the names on the list above are mostly famous for their contribution to causation. Even Durkheim, who provided a fine causal study of suicide, is not mostly famous for causation, but for his conceptualization of society as a reality sui generis. Tocqueville did not say that the equality found in America causes tyranny. He carefully considered the factors contributing to the tyranny of the majority versus those that contributed to freedom in the US, and argued that it would be up to Europe to choose its own path. Weber showed, among many other things, that the Protestant ethic paved the way for capitalism, but most people agree that it is wiser to see the link between the two as a complex historical elective affinity ("Wahlverwandtschaft") rather than as a causal link.

What all the great figures in social science offered were perspectives, conceptual distinctions, ideal types, characterizations, portraits, and ideas to think with (Nisbet, 1976). In some situations, of course, their ideas were transformed into formal hypotheses which were interpreted as stepping stones towards causal analysis. But in that endeavor, their ideas are sometimes basically transformed to fit into what is constituted as a new disciplinary imaginary (Goodstein, 2017). For example, the Wahlverwandtschaft that the classical German sociologists talked about was translated into correlation, which sends it in a statistical direction, in contrast to the meaning of the original term, which is closer to elective affinity, to choices made in the context of familial relations.

If causality should be placed at the center of social science, we would expect the great figures who defined social science to have done exactly that. Originally. They did not. Hypothesis falsified. If you think that causal studies constitute all the most prominent, respected, and cited contributions in social science then and now, you first have to ignore a ton of good work. Of course, one can argue that while the classics have provided all the great ideas, it is now up to us to test causal hypotheses empirically. But why should the rich sources of ideas in social science dry up just because it is now our turn?

If it is us, not social science itself, who want to place causality at the heart of social science, why do we not carefully consider the pros and cons, and ask whether it is just one thing, or many?

Q4. Is Causality Only One Thing?

If the social sciences must comply with a set of methodological rules about causality, it would be good if causality were defined clearly as one thing and one thing only. Given that ambition, it is symptomatic how difficult it is to derive such a definition. We usually can't see causality with our own eyes. We infer it from observations that we subject to certain rules. For example, we expect that if X causes Y, then X occurs chronologically before Y. There is also an empirical regularity so that when we find X, we also tend to find Y. Then we also like to exclude rival explanations, which is why we "control" for "third variables". And we expect some kind of logical account (maybe a theory) that explains why X would lead to Y. If we comply with these rules, we can say that we have established "causality." Some codify and formalize these rules and state that causality is a variation in the probability of certain values of Y given a variation in certain values of X (while observing the other rules I mentioned above). So the first thing to notice here is that we really do not see causality itself; we see a phenomenon that survives a test based on a general rule about particular structured and qualified observations. I have already mentioned that the link between Wednesdays and Thursdays lives up to many of the rules needed to make a causal claim, but still we do not think it is a causal link. It is a convention. Consider also the likelihood that if a word is a name, it begins with a capital letter. I am not sure where to draw the line, but at least some of the things that are just organized conventionally in this way must be exempt. But if we cannot put an end to our exemptions, we do not have a very good general rule.

Let us take a slightly more complicated example. It would probably not be difficult to show a causal relation between gender and length of hair in a sample of the Danish population in the 1950s. Check all the rules: chronology, correlation, control, an explanation. Got it, all under control. Then take a sample from the 1970s and the causal link disappears. Do the same study today, and the causal link between gender and length of hair magically returns (although with the addition that gender is no longer an obvious dichotomous variable). What have you learned? Would you not think more deeply about the nature of that link to understand how it can come and go under different social circumstances? And would you not have fooled yourself during the first study if you believed that length of hair was determined by biological gender? But people actually believed that in the fifties! (I read an anatomy book saying that you could recognize the male by a sharper, taller body and short hair, whereas the female had a shorter, more rounded body, and longer hair.)
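To see the trap in miniature, consider a small simulation of my own (not from any study mentioned here; the population proportions are invented): the very same inference rules that certify a "causal" link between sex and hair length in a 1950s-style sample find nothing in a 1970s-style sample, because what they certify is a convention, not a law.

    # Synthetic data only: the probabilities below are invented to mimic
    # era-specific hair conventions, not to estimate any real population.
    import random
    random.seed(1)

    def draw_sample(p_long_if_female, p_long_if_male, n=1000):
        """Simulate (is_female, has_long_hair) pairs under one era's convention."""
        data = []
        for _ in range(n):
            is_female = random.random() < 0.5
            p_long = p_long_if_female if is_female else p_long_if_male
            data.append((is_female, random.random() < p_long))
        return data

    def rate_difference(data):
        """The apparent 'effect': rate of long hair among women minus among men."""
        women = [hair for female, hair in data if female]
        men = [hair for female, hair in data if not female]
        return sum(women) / len(women) - sum(men) / len(men)

    # 1950s-style convention: hair length tracks sex almost perfectly.
    print(rate_difference(draw_sample(0.95, 0.05)))  # large, stable "effect"
    # 1970s-style convention: long hair is common regardless of sex.
    print(rate_difference(draw_sample(0.60, 0.55)))  # close to zero

Within the first sample, every conventional rule is satisfied (chronology, regularity, even an available "explanation"), yet the second sample shows that the regularity was a revisable social convention all along.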

So, is the notion of unitary causality not a conceptual trap that leads us directly to a conflation of natural laws and socially constructed regularities (the calendar, the spelling of names, and the choice of hairdos all being social conventions, although of different types)?

Then some people add another criterion. Something is only a cause if it is a manipulable variable. But that cannot be ontologically given, because it is up to people and to politics what can be changed and what cannot. A more meaningful version of that criterion is that we human beings have a particular interest in phenomena as "causes" when we think they can be changed with the result that we can improve our destiny (for example, the causes of cancer, the causes of social inequality, etc.). In some socio-political situations, slavery could not be abolished. It turned out later that in fact it could. Our epistemology hinges on our practices and interests. Therefore, attempts to define causality with a basis in an ontology beyond our reach are problematic.

Let me give another example. Evaluation based on a realist philosophy suggests that the world is organized in different layers and each layer is governed by a set of underlying mechanisms (Pawson & Tilley, 1997). A deeper layer of reality determines which mechanisms are triggered. That is why a superficial study of correlations in one layer of reality may be mischievous. However, if we attend to the mechanisms, we can unpack context-mechanism-outcome configurations which are central to understanding how interventions work. This is a key task in realist evaluation, but also the beginning of its problems. How exactly do we define a mechanism so that we can recognize it when we see it? For example, it can rest with motivations, preferences, inclinations, etc., among recipients of a given service. We can imagine a long list of such factors. Even with a very long list of motivations, we may miss that one day we build a dam to keep a village from being flooded, but the reason why the dam works is not that it keeps the water motivated to not enter the city. So, we need the propensity of water to not crawl over dams on the list, too. And gravity. The power of an ostensive definition (a definition based on pointing to things that are included) withers away if one can continue to point out many different things.

So, realist evaluators take a couple of steps backwards and say that a mechanism is really basically just a principle of explanation (Pawson & Tilley, 1997, pp. 64 and 68). I see pragmatism entering the scene. What needs to be explained, and how it can be done, is a matter of the kind of argument which can be unfolded under given social circumstances. That is fine. But it is almost the opposite of what the realist ontology claimed. A "mechanism" is no longer one thing only. It is many things, depending on what needs to be explained in the case at hand.

In practice, we use the term "cause" with different meanings (Kurki, 2006). In a nice summary of causal paradigms, Sandahl and Petersson (2016) argue that it is possible to emphasize regularity, probability, mechanisms, counter-facticity, or manipulability, but each of the paradigms leaves something out. We are left with choices. We can study causality. Or rather, forms of causality. Or we can study how people are motivated to do things, whether they find certain actions meaningful, how structural changes permit or forbid certain patterns of action; we can study inclinations, preferences, inducements, inhibitions, possibilities, choices, hesitations, how things work, how they are implemented, how they are changed, and many other things. We use different terminologies to describe these various phenomena because they are not one and the same thing to begin with. We can place them into confinement in one concept of causality, which turns out to be many-headed at closer inspection, or we can study them for what they are from the beginning.

Q5. Can You Only Ask One Type of Question About Causality?

That question is: How much change in Y is caused by a given change in X, all other things being equal? This is the question the research design from the "top shelf" is designed to answer.
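For concreteness, this single question is often formalized, in standard potential-outcomes notation (my addition here, not the book's), as an average effect:

    \mathrm{ATE} = \mathbb{E}[Y(1)] - \mathbb{E}[Y(0)] = \mathbb{E}[Y \mid X = 1] - \mathbb{E}[Y \mid X = 0] \quad \text{(under randomization)}

where Y(1) and Y(0) denote the outcomes a unit would show with and without the intervention X. Note that this formalization already presupposes the ceteris paribus framing, and that the transportability question below ("does it also lead to Y here?") is precisely not answerable from an average effect estimated "there".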

However, there are many other questions related to the notion of causality that one could be interested in, such as "if X leads to Y there, does it also lead to Y here?" (Cartwright, 2013), "how does X produce Y?", and "does X have side effects?" (Stern et al., 2012). If you want to make X work in a particular context, it is not enough to answer the one type of question that is most frequently advocated as the royal causal question. You would also be interested in knowing something about how to develop or improve X, how to implement X, how to monitor the implementation of X, how to more broadly consider the pros and cons of X itself as compared to the intended changes in Y, as well as potential side effects as suggested above, and you might begin to think of experiments with better alternatives to X, and see what happens then. In other words, in between the royal causal question and the non-causal ones there is a large gray zone of additional questions and helping questions which might be relevant, but which do not fall strictly on either side of an iron-cast dichotomy.

Q6. Is Methodology Prior to Paradigms?

It is a favorite idea among some methodologists that particular rules of "the scientific method" override all differences among paradigms. However, that is only true in their own normative methodology. It does not hold as a description of actual practices. Studies show that even in the experimental sciences and in labs, actual practices of scientific work (Latour, 1987) do not take place as described in the textbooks. There are strong elements of intuition, experience, critical thinking, and attempts to convince others through documentation and communication practices, but again, to talk of a unitary method is much too strong. Nevertheless, still reminiscent of an alleged unitary scientific method, some researchers in the social sciences claim that the rules of causal inference apply to all, regardless of subdiscipline and paradigm.

What is forgotten here is that before we get to causality, we need to get through a passage which defines a fundamental view of the world, a sort of cosmology that needs to be installed. More often than not, it is one where there are some independent entities (such as individuals) with particular properties (which we describe as variables). All other things being equal, we then hope, over time, to be able to isolate some "effects", that is, a change in some variables as a result of some "predictors." More often than not, we are interested in finding these effects among some of these units, isolated as they are from other units without these effects. We hope that the effects of smoking stay with the smokers and the effects of large cars stay with the owners of large cars. Or, more precisely, our analysis is made easier if real phenomena comply with the cosmology in which causal relations are easy to identify.

This cosmology resonates very much with a particularly modern world view characterized by linearity, componentiality, manipulability, etc. (Berger et al., 1974). Then, of course, we can find "new" and "interesting" "interactions", such as how non-smokers are affected by smoking or how the size of a car influences fatalities: the key point being that big cars are safe for their drivers, but absolutely not for the drivers of small cars when there is a collision. We can call those interactions "complex" (as compared to what the basic structure of that cosmology allows). My argument, however, is that causality, simple or complex, is possible only on the condition of the installation of a particular kind of world view. And this world view is surely not neutral to all schools of thought.

Let me give some examples of schools of thought in the social sciences which would not be permitted if all thinking were to comply with the cosmology of causality and if its corresponding rules of causal methodology were enacted as legislation applying to all social science. At first sight, these examples may appear fragmented and disparate. However, it is possible to think of them as related by deep undercurrents.

The first example has to do with meaning, a notion that is central to phenomenology, hermeneutics, and to interpretive studies in general (Frankl, 2008; Schutz, 1978; Castoriadis, 1997; Geertz, 1973). As an example, I like to think with Monet's paintings from during and after World War I. As an homage to the French soldiers who lost their lives, he painted weeping willows. Another of his favorite motifs, water lilies, grew into almost abstract depictions of color, shape, and light. Most people find these paintings stunningly beautiful. Nevertheless, all this extraordinary beauty on Monet's canvases is somehow a result of or a reaction to the horrors that took place in the trenches in WWI, and to a number of events in Monet's life. He lost his second wife in 1911, he lost a son, Jean, in 1914, his health was weakened, and he gradually lost his sight. His weeping willows and water lilies can be understood as a meaningful response to all that. They are also a culmination of the impressionism for which he had paved the way over several decades. It would be too simple to reduce the water lilies to a causal "effect" of WWI. They are, of course, also more than that. They are interpretation and life and grace and redemption and sorrow and beauty and a great gift to all who open their eyes to his work. There is this huge gap between the situation Monet was in and what he made out of this situation, a leap of creativity, which perhaps constitutes art. I can therefore hear an objection already: But paintings and water lilies are not part of social science. Right. But then comes my question. Why is it that we do not accept Monet's water lilies as a model for the construction of social life, but we more easily accept something like Y = f(X) as a mental model?

There were "exotic" sociologists like Georg Simmel who used artistic forms as metaphors for society, but the dominant metaphors remain the ones drawn from the natural sciences, physics, and biology, in century-old versions dominated by imageries of machines and organisms. In contrast, theories which emphasize the centrality of meaning (be it on the micro or the macro scale) usually insist that there is something autonomous and irreducible in meaning. An implication is that it is necessary to show some respect for the constructor of meaning (be it an individual, a culture, a society). This entity is productive and creative; it is not just part of a causal equation.

For example, Cornelius Castoriadis (1997) talks about a magma of social imaginary significations. Significations institute meanings, such as "God" or "equality" or "democratic deficit". They are imaginary in the sense that they are constructions without any physical correlate. They are social in the sense that they are constructed collectively. A society becomes the kind of society that it is through the institutionalization of particular meanings. Finally, Castoriadis provocatively calls all these meanings a magma, I believe, because he wants us not to think of them as a "structure" or a "system" or a "set". The internal relations between meanings are self-produced; they do not have to comply with any external form of regularity. This view of society is fundamentally at odds with the idea that social life should first and foremost be described in terms of causal relations. Not only do social imaginary significations have the capacity to break with existing regularities. In a very fundamental sense, imaginary significations reconfigure the very building blocks on which causalities may be built. There is a schism between the "objective" meaning of a category and how people subjectively fill it with meaning as they identify with that category (Nisbet, 1966; Becker, 1996). This is what happens at the moment with many of the conventional building blocks in sociology, such as class, ethnicity, and gender. It might well be, of course, that within a given social order there are some regularities that we can study as causal relations for a period of time in a given context, if we so desire. But these causalities are swimming in large oceans of social construction. If we leave causal methodology in a privileged, primary position, we will never be able to see the kinds of social change that take place in the magma of social imaginary significations.

2  TWENTY-FIVE QUESTIONS 

47

If we leave causal methodology in a privileged, primary position, we will never be able to see the kinds of social change that take place in the magma of social imaginary significations. When there is an eruption of meaningful magma which constitutes something fundamentally new, it would, by definition, not be a causal phenomenon. That is why some of my causally oriented friends say that if that should happen, they would just wait until the new event becomes a regularity. Then they can study it. If it is just one new unique event, they have nothing to study in a causal perspective. Without a number of cases, there is no "propensity" of Y to change as a result of changes in X. Another, even easier solution is of course to classify the apparently new event together with some more well-known phenomena, so that it belongs to a category with several cases. But then perhaps the new is deprived of its newness. In addition, there is the fundamental problem of how we classify a new phenomenon into existing categories, which is an interpretive problem that causal analysis cannot help us with.

So, if causal methodology is seen as primary, given, and prior to paradigms, then there is no space for people like Castoriadis. It is nonsense to say that of course Castoriadisians are welcome, they just have to comply with conventional methodological rules, because the whole point of their message is another world view, which by definition breaks the rules of the old cosmology. A kind of pluralism in paradigms is made impossible if a certain kind of methodology is established as prior to paradigms (Dahler-Larsen & Sylvest, 2013). Only a very reduced kind of pluralism is left if one methodology usurps that power.

I could have made a similar argument with complex systems and complexity theory. Here, too, I could have shown that there is a fundamental discrepancy between conventional causal methodology and the type of insights produced by the paradigm. Despite their many differences, a common denominator in interpretive studies, in the social construction of significations, and in complexity theory is the idea that social life is capable of producing something fundamentally new, something that refuses to be deduced from a set of pre-existing elements and conditions. So, these are examples of paradigmatic ideas which get squeezed out if causal cosmology takes the throne.

I hear an objection: If I do not accept the idea that there are methodological rules superior to all paradigms, is it really defensible to think that each paradigm instead defines its own rules? Would that not pave the way to relativism? Would it not undermine the credibility of social science as such? No.
Just because we reject the idea that methodology is master and paradigms are slaves does not mean that a reversal of the same roles is better. It is more useful to think of paradigms and methodologies as less hierarchically organized. They are sometimes partners and helpers, sometimes critical friends, sometimes adversaries. There are debates going on inside fields and subfields about methods, and there is some acceptance of the idea that some fields have distinct methodological criteria (Lamont, 2009), but there is also a lot of debate going on across fields. Just because there is no total consensus does not mean that people are not trying to convince each other. They also apply for some of the same funding. So, there is quite a lot of debate, some competition, and some institutional regulation. There is, in fact, an imperfect multitude of methods and perspectives, which are subject to some forms of imperfect regulation. Just because knowledge is socially constructed and there is a lack of consensus does not mean that there are no control mechanisms (Longino, 2002). There are many ways in which social scientists hold each other accountable (Lamont, 2009). None of these ways are perfect. But it would not be better if one of them, also imperfect, dominated all the others.

I take it as a fact that social science has many camps and schools of thought in it. I cannot see how it would be more productive and more peaceful if all camps had to comply with the rules defined by only one of the camps. In the meantime, most of the camps tend to think that everybody should accept some key ideas which, surprisingly enough, are in fact quite characteristic of their own camp. What is special about the camp with the most committed advocates of causal cosmology is that its members see reality as structured in such a way that their own methodological rules really deserve the throne and that all camps need to obey the king. After that structuration of their reality has happened, it appears that this methodology really is the key to how the world really operates. As they see it, that is. It is the institutional order on the way to structuring the philosophical order, not the other way around.

Q7. Do Methodological Rules Precede Scientific Practice?

What is the origin of methodological rules? Methodological rules are sometimes referred to as if they originate from a place that is not earthly, as if they are given or foundational. If they were, however, that would lead to a paradox. If we want science to search for truth, most believers in science would cherish academic freedom and the autonomy of science.
So, science has to define its own rules. Advocates of methodological rules will therefore have to acknowledge that these rules themselves also come from science, not from some external source. Science cannot be autonomous if it has no influence on its own rules.

How can we account for the origins of methodological rules? Some believe that, over time, the most experienced scientists accumulate and synthesize their experience into methodological textbooks. Subsequently, these textbooks are given a sort of superior status. However, perhaps textbooks are relatively boring retrospective summaries written long after new practices have been established. And perhaps they give a schematic or rosy picture of research practices. Sometimes a description of actual practices (Latour & Woolgar, 1986) may undermine a belief in the more "rosy" picture of scientific work. Yet, science works through critique of former scientific practices. In science, it is our job to appreciate descriptions of reality, including descriptions of our own work. So perhaps it is time to inquire more into practices than into textbook rules, more into descriptive methodology than prescriptive methodology.

Those who believe that methodological rules magically rise above practices forget how much scientific practice itself contributes to the definition of good science. Good studies become illustrative exemplars even if they never find their way into a methodological textbook. Therefore, studies which aim to check whether the literature in a given field actually lives up to its own methodological rules run into trouble, because there may not be a sufficient number of sophisticated works which are purely methodological and rule-oriented and which can be used as standards. And such works may, in the capacity of being prescriptive, have distanced themselves from actual practices. Instead, people in the field more often refer to a number of very good studies which are both substantial and methodological at the same time. But then, of course, the methodological rules do not exist independently of the good studies which illustrate them or redefine them.

Sometimes research follows rules. But sometimes research defines new rules. There is no guarantee that when a new finding is breaking through, the rules to support it are already in place. The chronological order may well be reversed. As Feyerabend (2010, p. 112) says, science is sometimes out of sync with itself, as ideas and hypotheses and rules do not develop synchronously. If that is correct, Q7 cannot be answered in the affirmative.

Q8. Is Scientific Progress a Result of Compliance with Methodological Rules?

Methodology derives its authority from being prescriptive for scientific practice. These prescriptions are trustworthy only if scientific progress is a result of following methodological rules. This is not always the case. As Feyerabend suggests, there can be scientific discovery which happens against the methodological rules of its time. A fine example is Ignaz Semmelweis's discovery of the importance of hygienic practices such as hand-washing in medical institutions. Semmelweis (1818–1865) was a doctor in the Allgemeines Krankenhaus in Vienna. He was concerned with the high mortality among women due to puerperal fever. It was more dangerous to give birth in his clinics than in a randomly chosen haystack. Semmelweis made systematic observations and noted that mortality was 10–20% in clinic 1 against 4% in clinic 2. Semmelweis wanted to know why. In clinic 1, the women were admitted for free in return for being subjects in the training of doctors. He tried to eliminate as many differences as possible between the two clinics without finding the cause of the difference in mortality rates, except that clinic 1 was staffed only by doctors, while clinic 2 had both midwives and doctors working in it.

One day, one of his friends, Jakob Kolletschka, died after inadvertently cutting his own finger during an autopsy and contracting an infection. An idea dawned on Semmelweis. Maybe some kind of "death material" had entered Jakob through his wound, and maybe the same material was the cause of the death of many women at the clinic, if the young doctors brought this material from the autopsy room into the birth clinic. He therefore ordered all doctors to carefully wash their hands between autopsies and births. He managed to reduce mortality to about 2%, and later 1%.

When he published his findings, it was difficult for doctors to believe that their own behaviors could cause death. Semmelweis's theory was also in conflict with the basic scientific rules of his day. First, his ideas about "death material" were speculative, as he was unable to document its existence. This was years before Pasteur pointed to the existence of germs, and before powerful microscopes. Second, even if death material existed, a cause so small that it was invisible could not lead to such a large effect as the death of many women. There had to be some kind of proportionality between the cause and the effect according to the good methodological rules of the time.

After moving to Budapest and experiencing intense conflicts with his colleagues in medicine, contracting syphilis, becoming mentally ill, and drinking too much, Semmelweis was left in an asylum, where he died at the age of 47. In retrospect, he made important discoveries and paved the way for modern hygienic practices in hospitals. But he broke the methodological rules of his time.

Some would then argue that if he had used the methodological rules of today, he would have been on safe ground. But for him, those would have been the methodological rules of the future. So, as a consequence, shall we today observe the methodological rules that will be valid in the future in order to make the best possible observations? Of course, we cannot do that without a time machine. We are stuck with the problem that there might be a tension between our object of study and our methodological rules. The latter are not "above" our practice. The objects and the practices and the rules are all at the same level. Paradigms, too. We are in a mess (Law, 2004), where we struggle with materials, objects, findings, assumptions, methods, paradigms, practicalities, and adversaries. We can only hope that all of these are somehow ordered logically and chronologically, but nobody does that for us. The good news, however, is that we sometimes find something which is true and interesting even if we do not first have all the methodological rules in place to back it up.

Q9. Will Social Science Cleanse Itself of Ideology and Normativity, if it Restricts Itself to Causal Analysis?

The history of social science is connected with ideologies and with attention to pressing social problems. It is a standing issue how social science distances itself from or engages with issues of ideology and normativity. The methodological rules related to causation offer an attractive escape from all these discussions. If these rules are seen as the embodiment of science itself and as indisputable, and if a given study lives up to the rules, then the researcher has built a strong defense against accusations of prejudice, normativity, and ideology. The intention is to produce something that counts as evidence. However, evidence is not just data. Instead, data becomes evidence through the force of a larger set of arguments in which data and methodology are embedded (Schwandt, 2002). It is this larger set of arguments which should be reviewed for ideology and normativity.

An example: A group of organizational psychologists find that certain psychological exercises among firefighters are instrumental in reducing stress. They carry out controlled experiments and confirm that they have found a genuine causal link. They argue that their finding is important because the work role of a firefighter is stressful and dangerous, and everybody knows that someone has to do the job, the salary is low, and it is impossible to reduce the working hours and the workload when there is a fire.

Now, pay attention. Everybody knows what? Is it not under the present social and organizational circumstances that it is impossible to change the workload and other conditions of work? The researchers may well have observed all methodological rules in their own study, so perhaps nothing is technically wrong with the causal link they found. Nevertheless, the researchers recommend an individual cure for a structural problem. Their proposed intervention gains in importance because the researchers argue that for the time being it is not possible to find alternative ways to ameliorate the working conditions of the firefighters. However, that is a truth which is socially constructed and remains in force only so long as the present conditions of work for firefighters do not change. It might be, under the circumstances, that the researchers are trying to help the firefighters, and perhaps the psychological exercises are even the best choice, again, under the circumstances. However, the best choice is a strategic (and partly normative) choice in a given situation. The function of methodological rules here is to "science-wash" the proposed intervention in retrospect.

Another example comes from psychological experiments that identify particular cognitive heuristics, say, concerning risk aversion in human beings. The experiments are cool and there is nothing wrong with the technical implementation. Then the researchers ask why human beings have such heuristics in their brain. The researchers say, without taking off their scientific hats, that these heuristics were developed as an evolutionary advantage already when we were cavemen. Then my question is whether they carried out a controlled experiment where we were cavemen in the intervention group and not in the control group. No, they did not. This whole narrative about evolution is just made up. Most likely, they don't know more about evolution than you and I. So, although their experimental design is excellent world-class stuff from the top shelf, the set of arguments they use to give their findings meaning as "evidence" lacks a solid basis, to put it mildly. It is, however, the conclusion that we do this or that because we were cavemen that makes it to the headline news.

It is fashionable at the moment to ascribe the way human beings function to biology that is "hardwired" into us because of evolutionary pressure. In other decades, it was all due to socialization. The methodological rules of causation do not protect against interpreting findings from the most technically sophisticated experiments with reference to the most unsophisticated and sometimes prejudiced explanatory models. The experimenters do not and cannot subject the softer parts of their chain of arguments to the methodological rules that they otherwise use to legitimize their activity as "scientific." In the same vein, differences between men and women are now also being legitimized with reference to when "we" were "cavemen" (sic!). While the methodological rules of causal studies are believed to be a bulwark against normativity, they in fact distract attention from otherwise serious conceptual and normative issues.

A classical example is the Stanford prison experiment in 1971. It continues to be debated (Geggel, 2018). Twenty-four males were instructed to be prisoners and guards at an artificial prison arranged by the leader of the experiment, Professor Zimbardo. The experiment was called off after a few days, when guards exhibited a number of oppressive behaviors and several prisoners displayed extreme emotional reactions. According to Zimbardo himself, the experiment illustrates how a few situational characteristics (even if fictitious and manipulatively constructed) can induce unethical human behavior, a lesson parallel to what can be learned from the Milgram experiment, where a "scientist" asked subjects to inflict pain on a third person (who was really an actor).

Others argue that Zimbardo himself violated today's standards of good practice, because he did not inform the participants properly, he did not explain clearly that they were free to withdraw from the experiment at any time, he could not predict what would happen, he did not protect the participants from harm, and he did not debrief the participants properly. Others argue that the abusive behaviors among the guards were in fact produced by Zimbardo himself, because the guards were trying to please him, although that in itself does not take the unethical part out of the equation. In the Milgram experiment, it was also a "scientist" who asked more or less normal people to inflict pain on others. I do not think that pain caused by scientists is less real than other kinds of pain.

The most radically remarkable question, however, came from one of Zimbardo's colleagues, who happened to pass by in the basement of Stanford University, where the experiment took place. The colleague asked: "What is your independent variable?"
As if the most serious problem with people abusing each other is that the mastermind behind it all is using a confused experimental design. Still today, try searching for "the most serious flaw in the Stanford prison experiment", and the internet will tell you that Zimbardo did not use a control group so that he could manipulate his "independent variable", perhaps because he did not conceptualize that variable clearly enough.

Methodological rules about causation are, time and again, used to distract attention from normative issues. Maybe the colleague who asked the ironically illustrative question to Zimbardo also thinks that he bought himself out of an ethical dilemma by asking that question. I don't think he did. Better questions would have been: "Zimbardo, what is going on? Do you see people mistreating each other as I do? How do you justify your experiment, all things considered?"

Q10. Does Causation Always Require a Counterfactual?

The affirmative answer suggests that if you want to attribute the phenomenon Y causally to X, you must have a situation where X occurs and one where it does not. (The latter meaning "a situation where X is not a fact", thus "counterfactual".) The idea is that without variation in X, correct causal inference is impossible. (Of course, the two situations should be as similar as possible in all other respects, but let us get back to that.)

This idea has become a dictum in causal methodology, but it is based on a number of simplified assumptions which turn it into a half-truth, at best. It comes together with a mish-mash of short-hand dictums which basically add confusion. For example, one says that "you cannot say anything about causality based on a single case." Technically, this is not true. For example, if someone says that X causally produces Y in all instances, you can falsify that statement if you have one case where X occurs but not Y. This situation is very common in practice. It should not be discarded as merely a philosopher's sophistry and rambling about black swans.

In practice, it very often happens that we purchase X based on a promise that it will deliver Y (whether X is a coffee machine, an investment in our pension, or a school reform). We are often interested in knowing in the "single case", which is actually our case, whether X actually produces Y. Knowing that X does not deliver Y in the case at hand has serious consequences for producers, investors, consumers, policymakers, etc.
For many of us, it is more important to know whether the causal link between X and Y exists in our case than whether it exists generally. Or to be even more precise: we are interested in knowing if X is helpful to achieve Y, whereas the "causality" involved may be of limited interest, as long as X actually works. If X demonstrably does not work in the case at hand in a way that produces Y, it simply cannot "cause" Y. It matters less what would have happened without X. The logic we use when we evaluate X and its contribution to Y in a given situation is usually not a variance-oriented one (since we do not have access to variation in X). Instead, we use a process-oriented approach, focusing on the necessary or sufficient mechanisms through which X may lead to Y, the traces of these mechanisms, and the specificity with which these traces can be attributed to X itself. We also search for witnesses to these processes, etc. In the process-oriented mode, we are much less dependent on the "counterfactual".

This is also true in a criminal case where we want to know if X murdered Y. The logic used to determine whether that happened is congruent with the logic of establishing a causal link between X and Y in a single case. We have legal rules describing how good the evidence needs to be (for example "beyond reasonable doubt") before we can convict X. But the lack of a situation where X does not occur empirically is not detrimental to our argument. In the courtroom, it happens every day that we draw these kinds of conclusions without documentation of a counterfactual situation.

Unless, of course, we dig a little bit deeper into the meaning of the term "counterfactual." It literally means something that is contrary to facts, something that does not exist. In that sense, the counterfactual actually might be represented in the courtroom, for example when the prosecutor says: "Were it not for the evil act of Mr. X, then the victim would have been alive and smiling today." Here, the counterfactual is a logical and rhetorical device implicated in a larger argument. But it is not part of any "methodological design" which has produced "empirical data."

By implication, the "counterfactual" is something that should compel us only when used as part of a convincing argument; it does not have a metaphysical or foundational status. As a consequence, we should remind ourselves how easily we can become victims of some rhetorical construction of a counterfactual. We should also be aware that those who require counterfactuals sometimes rhetorically produce these counterfactuals themselves. Were it not for my terrible childhood, I would have become a world-famous rock star.
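
The single-case falsification point above can be made concrete with a minimal sketch (in Python, with purely hypothetical cases). One observed case with X present and Y absent refutes the universal claim "X produces Y in all instances", and no counterfactual comparison is involved:

```python
# Hypothetical observed cases; no counterfactual or control group anywhere.
cases = [
    {"x_present": True,  "y_present": True},
    {"x_present": True,  "y_present": False},  # the single refuting case
    {"x_present": False, "y_present": False},
]

# The universal claim "X produces Y in all instances" survives only if
# Y is present in every case where X is present.
claim_survives = all(c["y_present"] for c in cases if c["x_present"])
print(claim_survives)  # False: one case was enough to falsify the claim
```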

Thus, the meaning of the term "counterfactual" is slippery. The original purpose might have been to issue a warning against too hasty attributions of causal responsibility to X in cases where there is no variation in X. A more perfect reasoning would occur if we had X and not-X in the very same situation. But that is impossible. So, we use the term "counterfactual" to remind ourselves of the impossibility of perfect causal inference. The very term says that perfection can only be achieved "contrary to facts." But that meaning of the term seems to be forgotten. Instead, people seem to establish the "counterfactual" as part of a controlled design, where they manipulate X so that they can compare X to not-X in situations that are as similar as possible. However, whereas the term "counterfactual" was originally meant as a spearhead of critical thinking, it has now become an institutionalized short-hand device which aims at an automatic separation of good from bad research. Good research is when you have X and "a counterfactual." Bad research occurs in all other situations. By implication, if you have only one case with X, then you are stuck with "you cannot say anything about causality based on a single case." In a further compression of half-truths and short-hand ideas, some reduce this dictum to "you cannot say anything based on a single case" (as if the notion "about causality" can be skipped without consequences, since we all know that causality is the only important thing to talk about anyway).

As a consequence, some students in political science say, for instance, that they cannot answer the question "Did the war in Iraq result in the identification and demolition of weapons of mass destruction?" because they do not have access to a similar case where there was no war in Iraq. They do not pay attention to the exact question, which can in fact be answered without reference to causal terminology, and also without a counterfactual. The question is about the accomplishment of a goal, not about causality.

The belief that it is impossible to say anything based on a single case is detrimental to good practice (Flyvbjerg, 2006). In fact, much can be said, so "small N" is a really poor descriptor of how many things you study. There is already a multitude of good case studies in the social sciences, the quality of which does not hinge on "counterfactuals" (Flyvbjerg, 2006; Vaughan, 1996; Allison, 1969).

Not all studies are causal studies. Some causal questions can be answered without a counterfactual. And it is important to be really cautious with what is meant by counterfactual.
Some researchers who argue that counterfactuals are extremely important can live with a rhetorical counterfactual that they produce themselves. It is fair to say that if you pose a causal question, and if you subscribe unconditionally to a variance-oriented approach to causality, then empirical variation in your independent variable is critical in order to determine the effect of X on Y. Apart from that, many other truncated rules of thumb about the absolute necessity of counterfactuals can be suspended until you see the situation at hand and gauge the possibilities for answering the questions asked in that situation.

Q11. If You Compare Two Groups, Is it then Less Important What the Comparison Is About?

The meanings of many classical background variables, such as gender, ethnicity, and age, have come under contestation in recent years. We can begin to unpack the meanings of such background variables by exploring whether or not they make sense to people, and how people identify with them or not (Becker, 1996). Sociohistorically, some variables change from being merely categories of people to becoming meaningful and powerful identifications through a process called "cathexis", which makes a category "warm" and meaningful (Nisbet, 1966). If we want to know what a variable "does" in causal terms, we need to know what it "means."

As a corollary, it is important to pay close attention to the variable allegedly describing the difference between an intervention group and a control group. The difference between the two, one with X and one without X, may be a misleading abstraction if there is no careful attention to what X represents. The meaning of a variable is not self-evident.

For example, if you want to determine the effects of accreditation of hospitals, it matters a lot what kind of unaccredited hospitals are in the comparison group. In a global sample, many of them might be hospitals where there is a lack of soap and other fundamental things. The comparison will then support the conclusion that accreditation makes a lot of difference (as you cannot achieve accreditation if your hospital is short of these fundamentals). However, if a comparison group is found in countries where no hospital suffers from this problem, then the difference between the accredited hospitals and the rest may be negligible. It is therefore almost meaningless to say that accreditation makes a difference without stating very explicitly what the alternative to accreditation is like.
You should not use "accreditation" as a broad concept if what you have studied is a specific difference between two groups of hospitals. If you want the benefits of a variance-based approach, you must operate with specific variables.

Some would argue that this problem would disappear if only the allocation to the two groups were subject to randomization, because randomization would annihilate all other predictors of poor hospital quality. However, a technically better distribution of subjects into groups does not relieve one of the responsibility to carefully determine wherein the exact difference between the two groups lies. This problem remains pertinent also in randomized trials.

For example, you want to determine the effects of health checks, so you randomize some citizens into a group that receives a mandatory biannual health check and another group which does not. However, in a free country with free medical services, many citizens take voluntary health checks whenever they like. So, your comparison is not the one you intended—between health checks and no health checks. It is between some implementation of what you think is mandatory health checks and some voluntary health checks taken when people feel like having one. In that exact comparison, the "mandatory" health check does not perform well.

In another study, you send a serious, authoritative reminder to people who should pay debts to public authorities and a funny reminder to another, similar group. More often than not, however, it is the researcher who defines "funny", and little may be known about whether it was funny for the research subjects. "Fun" is, after all, relative to time and place and target group. The notion of the researcher "manipulating" the independent variable is a trap that leads one to believe that the meaning of the variable for the subjects is also under control.

Another example is one where obese men were allowed to use the training facilities belonging to Glasgow Rangers. They performed better than a control group. It was concluded that a successful health intervention for men is one where they are allowed to use the training facilities of a famous football club. Yet, we do not know if the effect rests with "famous", with "football", with "Glasgow Rangers", with occasionally meeting famous players, or with something else.
Attention to what happened in the control group might help, but if we know they went to "Ghosttown FC", there is no escape from an interpretation of what constitutes the difference between the intervention group and the control group. There is probably more than one important difference between Glasgow Rangers and Ghosttown FC.

Let us say that a research group wants to know the effect of risk-based inspections of workplaces by the occupational health and safety authority. So, they find a population of workplaces subject to risk-based inspection. Then they use a regression discontinuity design to find workplaces just above and just below the threshold value on assessed risk, leaving the former in the intervention group and the latter in the control group. The clever part about that design is that the pre-calculated risk is very close to the same in both groups, and the design is as good as a randomized one, as it can be assumed that there is no systematic difference between these groups, since they score (almost) the same on the variable used to allocate them into the two groups. Even sophisticated researchers can be found to argue that this design guarantees a close-to-perfect assessment of the effect of risk-based inspection upon, for example, such outcomes as accidents, injuries, turnover, sick leave, or other outcome measures you might find relevant.

All of that is correct except one thing. The exact difference between the two groups is not that one of them was exposed to risk-based inspection and the other was not. Both groups were in fact subject to some kind of risk assessment (where they achieved almost the same score). The real difference between the two groups in the study is that (within the pool of all workplaces subject to risk assessment) only one of the groups got an inspection. Once the decision is made to do an inspection, there is absolutely nothing risk-based about the inspection itself. It is simply and bluntly just an inspection. You cannot test the effect of a risk-based inspection if in fact you are just doing an inspection. Then you can only test the effect of an inspection.

The confusion comes from daily language, where "risk-based inspection" (which technically applies to how workplaces within a population are selected on the basis of "risk" rather than only randomly or periodically) is also used in practice to denote the individual inspection. The inspection could also be called many other things. If the inspectors were driving yellow cars, you could call it "the yellow car inspection".
Yet, if you think your study thereby becomes a test of "yellow car inspection" as compared to no inspection, you confuse yourself and others, because you have not one but two variations between your intervention group and your control group (the variation between yellow car or not, and the variation between inspection or not). This compromises the idea of your design, because if you want to eliminate all other factors but one, it is not enough to eliminate all factors but two. My recommendation is to take out the yellow car part and then talk about only one variation at a time: inspection versus no inspection. The same goes for risk-based inspection. If you in fact want to compare inspection and no inspection across your groups, do not claim that your study design is a great step towards knowing the effects of risk-based inspection. We should have gotten a clue in the first place, because what is tested actually sounds like two things at the same time: something about risk-basing, and something about inspection. But there should be only one difference between the two groups. The more parsimonious the description of that difference, the better. That would give us a clearer and purer take on the variable which represents the difference.

In turn, if your claim is that risk-based inspection is better than normal inspection, why not compare one group which is subject to risk-based inspection with another group that is subject to inspection without a previous risk assessment? And if you want, randomize all workplaces into these two groups before you start doing anything else. If this is your strategy, you should also clarify your language, as what you are testing is then your risk-basing. You are not testing inspection itself, because you inspect both of your groups.

The underlying problem in this example is not the design itself. The underlying problem is to think that the design in itself eliminates all threats to validity. A given design is better understood as a specific defense against a particular set of threats to validity. In the example just mentioned, the design with a comparison between an intervention group and a control group does not protect one against a lack of attention to what is actually being compared. I would have said "the devil is in the detail" were it not for the fact that the link between the way you pose a question and the way you seek to answer it is crucial in all research and no mere "detail". The exaggerated belief in some causal designs often draws attention away from the crucial link between the design of research questions and the design of studies. In the example with risk-based inspection, the fortress built to defend the study is strong. The whole force of The Causality Syndrome is called in to provide support. Critics will be blamed for not understanding the beauty of the regression discontinuity design, even if, in the case at hand, this otherwise beautiful design is coupled incorrectly to a research question that is not properly answered by the research design used.
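
To make the allocation logic concrete, here is a minimal sketch in Python (the threshold, bandwidth, and variable names are hypothetical illustrations, not taken from any actual study). It mimics a regression discontinuity allocation: every workplace has already received a risk score, and only the position relative to the threshold decides who gets inspected. As the comments note, the realized contrast is therefore inspection versus no inspection among near-identical risk scores:

```python
import random

THRESHOLD = 0.70   # hypothetical cut-off on the assessed-risk score
BANDWIDTH = 0.05   # how close to the cut-off a workplace must be to enter the study

# Every workplace in the population gets a risk assessment.
workplaces = [{"id": i, "risk": random.random()} for i in range(1000)]

# Regression discontinuity allocation: compare only units near the threshold.
inspected = [w for w in workplaces
             if THRESHOLD <= w["risk"] < THRESHOLD + BANDWIDTH]
not_inspected = [w for w in workplaces
                 if THRESHOLD - BANDWIDTH <= w["risk"] < THRESHOLD]

# Note what the design has actually produced: both groups were risk-assessed
# and score almost the same; the only realized difference is the inspection.
# The tested contrast is "inspection vs no inspection", not "risk-based
# inspection vs no risk-based inspection".
print(len(inspected), "inspected;", len(not_inspected), "not inspected")
```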

An experiment with a comparison group can be technically sophisticated and backed up by The Causality Syndrome and interpretively naïve at the same time. It makes little sense to say that X is backed up by evidence from a randomized controlled trial unless there is a careful account of the exact meaning of the difference between X and non-X in that trial. You should carefully attend to which variable describes the difference between your groups, and even if you know the difference rests with a variable called "X", you are not off the hook until the meaning of X has been elaborated and the exact constitution of the difference between the groups has been accounted for. On this point, I am sure that most of my good friends in causal analysis would agree. Where we differ is that they might argue that in the examples given above, the best rules of causal analysis have not been adhered to, whereas I argue that the overall legitimacy of The Causality Syndrome is a vehicle for half-baked rules of thumb, which in fact help sustain such problematic practices.

Q12. Does Reciprocal Causality Mean You Have Not Nailed Genuine Causality?

One of the threats to a valid causal claim is that even if X is correlated with Y after control for other factors, it may be wrong to conclude that X caused Y, because it may in fact be that Y already caused X. For example, some conclude that flexible and inclusive forms of management lead to commercial success, but they forget to check whether commercial success allowed these hip forms of management to occur in the first place. In another example, people who lack social support are more prone to become stressed, but maybe they also fail to maintain social relations because they are so stressed. Maybe obese people do not exercise because their body is a heavy burden for them, but maybe their heavy weight comes from a lack of exercise.

There is a tendency among causal methodologists to state that if you only have correlational data from one point in time, you cannot draw valid conclusions about causality. To the extent that the underlying problem is reciprocal causality, the way to eliminate that threat is to design a rigorous prospective study where the cause is kept under control initially, so that consequences can be observed over time.

It is, of course, logically correct that some misunderstandings of correlational data can be eliminated by a prospective study in some situations. However, even if it is successfully established under controlled conditions that when X comes first, it really does influence Y, that does not mean that in real life Y does not work back onto X. If there is a real reciprocal causal link between X and Y, it is sad if we abstain from acting upon either of them because causal methodologists tell us not to believe in any causal link until we have safely established it as a uni-directional link. In that case, methodological rules would be stricter than they have to be. One can probably lose weight to make exercise feasible, or exercise to lose weight, and presumably both of these strategies might be useful. It should not be automatically assumed that if we face something that might be a reciprocal causal link, then there is some fault in our thinking which requires us to postpone our actions until a proper design has corrected our mistake.
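
A small simulation may illustrate why a single cross-sectional correlation cannot settle the matter (a sketch in Python with made-up coefficients, not a claim about any real data). Three worlds are simulated: one where only X drives Y, one where only Y drives X, and one with reciprocal influence. The snapshot correlation is positive in all three, and the two one-directional worlds are indistinguishable from each other:

```python
import random

def snapshot_correlation(beta_x_to_y, beta_y_to_x, n=5000, steps=30):
    """Simulate n units in which, each period, X nudges Y and Y nudges X;
    return the cross-sectional correlation in a single final snapshot."""
    xs, ys = [], []
    for _ in range(n):
        x, y = random.gauss(0, 1), random.gauss(0, 1)
        for _ in range(steps):
            x, y = (0.6 * x + beta_y_to_x * y + random.gauss(0, 0.3),
                    0.6 * y + beta_x_to_y * x + random.gauss(0, 0.3))
        xs.append(x)
        ys.append(y)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(xs, ys)) / n
    var_x = sum((a - mean_x) ** 2 for a in xs) / n
    var_y = sum((b - mean_y) ** 2 for b in ys) / n
    return cov / (var_x * var_y) ** 0.5

# The two one-directional worlds produce the same correlation as each other,
# so the snapshot cannot reveal which variable drives which.
print("X -> Y only:", round(snapshot_correlation(0.3, 0.0), 2))
print("Y -> X only:", round(snapshot_correlation(0.0, 0.3), 2))
print("reciprocal: ", round(snapshot_correlation(0.3, 0.3), 2))
```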

Q13. Does the Quality of a Study Depend on Its Place in a Hierarchy of Evidence?

In several of the ideas presented earlier, we have touched upon the role of experimentation and randomization. These and other features of research designs are compressed into a very popular representation: a rank order of research designs. The idea is simply that all research designs can be ranked on an ordinal scale according to the strength of the evidence for causal claims they are likely to produce. According to this cognitive device, designs with a control group are better than designs without one, experimental designs with control groups are better than non-experimental designs, randomized experiments are better than other experimental designs, and syntheses of randomized trials are better than single randomized trials. The basic idea is that the higher the rank of a given design, the better it performs in terms of eliminating all explanations of variations in outcomes (Y) that might compete with the intervention (X), such as extraneous variables, maturation, selection bias, and more. This hierarchy is usually depicted graphically as a pyramid with different layers. This depiction easily lends itself to metaphors like "research from the top shelf."

A first reservation is, of course, that it is too often assumed that all studies are studies of causality.
Advocates of the idea of causation have of course already won half of the battle if they succeed in convincing others that causation is so important that we can let the rules of causation stand for the rules of all scientific work. And, as a corollary, that we can use the evidence hierarchy to assess the quality of all kinds of research.

Another reservation is that the quality of a study does not depend on its design alone. For example, we have already seen that careful attention to what is being compared across an intervention group and a control group is of critical importance. Just having the two groups in the design does not in itself do the job. More generally, evidence is based on a whole network of interconnected arguments not only about a design, but also about how it helps answer a research question, why that is important, and what the implications are.

A third reservation is that research designs never work in isolation. They are always part of social reality. All research hinges on a large hinterland of structures, problems, resources, and interactions (Law, 2004). In practice, not all situations lend themselves to a randomized controlled study. Sometimes we are given facts without access to manipulation of an "independent" variable. Most of what is known in astronomy, geology, evolutionary biology, ethnography, archeology, and forensic medicine is based on some careful examination of facts, but the materiality of the topic and the long history of events forbid experimentation and manipulation of independent variables. Nevertheless, in these domains, we know a lot of things also about what we perceive as causes, such as "cause of death" in forensic medicine. But we did not first manipulate potential causes of death.

In social science, the feasibility of a randomized controlled study is not evenly distributed across all topics, fields, and situations. For example, the larger the entities studied, the more difficult it is to organize randomization. We never see a randomized study of planets or nations. Organizations are also fairly difficult to distribute into different groups, not to mention keeping them there for longer periods of time (Robson et al., 2007). You need something that can be put under control. More often than not, individuals are used as research subjects. Furthermore, what is manipulated is usually a limited, short-term intervention which can be represented in a single variable and, again, controlled. This can be a medical intervention, a pill, a piece of information, an incentive, or something like that. We rarely use a change from capitalism to communism, a social reconfiguration of gender roles, or a global climate crisis as an intervention in a randomized controlled trial.

So, if researchers use the existing hierarchy of evidence as a guide to plan their studies, we can predict the consequences for their choice of topic and perspective. There will be a focus on micro phenomena at the expense of macro phenomena and a focus on single variables rather than on complex interventions. There will also be a strong focus on factors influencing the behavior of individuals (inspired by, for example, psychology or micro-economics). Educational research will focus on teaching methods, not on social structures and the role of education in society. In medicine, there will be a strong focus on medication of individuals and individual choices regarding food, smoking, and exercise at the expense of how housing, work, social structures, city planning, and the structuration of daily life contribute to health and well-being. In stress research, there will be a strong focus on individual coping mechanisms, not on the neoliberal economy as a precondition for stress. Although these inferences are speculative, they seem to me to correspond surprisingly well with how experimentalists actually choose topics in research. If the "quality" of a study is determined first, using the "evidence hierarchy", independently of the research topic, it has severe consequences for the choice of topics. Some topics will be overstudied (such as twins!) and some will be understudied (such as social structures and long-term macro change in society) because of the methodological prescriptions per se.

A related consequence is that researchers who study phenomena that do not lend themselves to randomized studies or other forms of study near the top of the hierarchy of evidence will place themselves "not on the top shelf", and the knowledge they produce will be considered inferior. They run the risk of being portrayed as not smart enough, as not having figured out what the game is about, which is to do research on the top shelf.

The downgrading of knowledge created at the lower levels of the established evidence hierarchy can have unfortunate social consequences. For example, the discovery of side effects usually begins with unsystematic observation of cases (Osimani, 2014). We do not wish to move forward with a controlled, randomized study of Thalidomide to make sure that it has side effects. We are grateful that research designs at lower levels of the evidence hierarchy have helped us stop this drug. Therefore, it is recommended by some to break down the conventional hierarchy of evidence. Instead, different kinds of evidence could be acknowledged to handle different aspects of practice, even in medicine (Clarke et al., 2014; Osimani, 2014; Ogilvie et al., 2020).

In the social sciences, Becker (2017) also turns the concept of evidence around. His notion of accuracy has to do with how well research depicts the perspective and the lived experience of people under study. Becker's perspective could one day be used to develop a totally alternative hierarchy of forms of research. At the bottom of this hierarchy, you would find designs which depict people as mindless robots subject to natural laws. You would also find syntheses of these laws across time, place, and context. Somewhere in the middle, you would find studies which give people a little bit of freedom of expression (e.g. surveys). At the top of the hierarchy, you would find studies carefully attending to the perspectives of people, embedded as they are in their historical, institutional, cultural, and topographical contexts. This alternative hierarchy could be an interesting and perhaps provocative heuristic device. At least it would demonstrate that there is not just one hierarchy of forms of research. At the same time, it would of course be a problematic idea if it continued to assume that research designs constitute a meaningful basis for categorization. And clearly, creating an alternative hierarchy does not help us get rid of the notion of hierarchy as such.

In fact, the whole notion of evidence is contested, and researchers make different choices about it. Some want to expand the concept so that it includes, for example, Becker's notion of accuracy. In principle, it can encompass everything that backs up a claim (be it a legal claim, a descriptive claim, or other kinds of claims). Others want to discard the notion of evidence as such because it has been captured by the ideology of causation and it is deemed too difficult to steal it back. This crossroads is best understood as a strategic and communicative one; it is not about the deeper meaning inherent in the very concept of evidence.

Regardless of one's strategic position on the notion of evidence, a genuine problem is that The Causality Syndrome has already privileged a certain cosmology, and in relation to that cosmology, not everything in the real world fits in. So, with a belief in the hierarchy of evidence, there will be things that we just do not study. This has real consequences.

An alternative, of course, is to say that all kinds of research should begin on a level playing field until a relevant topic has been identified. Then the best approach is chosen given the circumstances and the nature of the topic. In many cases, designs at the top of the hierarchy of evidence (such as randomized controlled trials) will not be possible. Forms of research impossible under the circumstances lose the right to be the best choice.
Instead, the best approaches will give the best possible answers to given questions about specific topics under the given circumstances in real-world situations.

What we see here is a conflict between two principles: fidelity to method versus fidelity to phenomenon (Schwandt, 2002). With fidelity to method, one commits first to a particular understanding of methods. With fidelity to phenomenon, one seeks to understand a particular phenomenon as it occurs in real life and then adapts the methodological approach accordingly. Fidelity to phenomenon would imply that the quality of a method is context-dependent, rather than given a priori. It might be possible that relatively "inferior" designs could lead to new insights, or that a "superior" design might not be particularly productive, for example if it stands in the way of insights. Again, the argument about what is "inferior" or "superior" should be understood in relation to the situated phenomenon at hand.

Even if one takes a sort of middle position, saying that both fidelity to method and fidelity to phenomenon have some merit and some pros and cons, it would be a great leap forward if advocates of causal methodology would concede that the choice of method as prescribed by the evidence hierarchy has consequences for the choice of topic and for our understanding of the world. And for real people. For example, if we are talking about the discovery of serious side effects of public interventions, the consequences of downgrading evidence based on observations not "from the top shelf" can be serious.

Q14. Is the Randomized Controlled Trial a Clincher, and All Other Kinds of Studies Just Vouchers?

Nancy Cartwright (2007) distinguishes between research contributions which vouch for a certain answer to a scientific question and those which clinch a particular answer. Vouching means "given the evidence at hand, it really seems like this answer is correct, all things considered. But we do not know for sure". Clinching means "given this evidence, the answer is correct, and there is nothing more to discuss." As part of The Causality Syndrome, research designs from the "top shelf" (those with experimentation and randomization) are believed to be coterminous with clinching, while all other forms of research, at best, can only vouch.

This is a great mistake.
There are situations, of course, where "descriptive" studies can clinch an answer to a research question beyond any reasonable doubt, within a given set of assumptions and rules of logic. Once we prove that this person said this or that, or did this or that, there is not always something "soft" about it. We can also use documents to ascertain certain facts. If there is anything in the world that we can "clinch", our ability to do so is not reduced just because we do not have experimentation and randomization with us in the situation. So descriptive studies can, at least sometimes, clinch things.

Some would argue that I misunderstand the intentions behind the evidence hierarchy here, since what the RCT on top of the hierarchy is meant to "clinch" is only the answer to causal questions, not all other kinds of questions. Nevertheless, my reading is intentional, to the extent that many tend to think that the "clinching" pertains to all research questions, not just causal ones. However, even when it comes to causal questions, it is arguable that single-case studies may, in some situations, clinch the answer. This is clear in the total absence of a desired outcome. We also let the suspected murderer go free if the supposedly dead person is found alive. We clinch the question of guilt without a control group.

In a similar vein, studies with experimentation and randomization do not always, by definition, clinch a given answer to a given causal question. Yet, we can read official documents stating that "randomization secures that all known and unknown sources of error are equally distributed" (Sundhedsstyrelsen, 2018: 82). No, there is no such guarantee. If an intervention group is allocated to hospital A and the control group to hospital B, and there is radioactive radiation in the waiting room in hospital A, randomization in itself does not eliminate this source of error, known or not. Randomization may help with many problems, but it is not a magic wand. It should not be used as a ritual protecting against all evil. Clinching and vouching are not connected to the ranking of research designs in any 1:1 way, and the research design is only a small part of a larger set of elements which, when put together, may vouch for or clinch anything.
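
The point that randomization equalizes groups only in expectation, not in any single draw, can be shown with a small simulation (a sketch in Python with made-up numbers). Forty subjects, of whom roughly 30% carry some unmeasured background trait, are randomly split into two groups; in any one randomization, the share carrying the trait can differ substantially between the groups:

```python
import random

def one_randomization(n=40, trait_rate=0.3):
    """Randomly split n subjects into two equal groups; return the absolute
    between-group difference in the share carrying a background trait."""
    has_trait = [random.random() < trait_rate for _ in range(n)]
    order = list(range(n))
    random.shuffle(order)
    group_a, group_b = order[: n // 2], order[n // 2:]
    share_a = sum(has_trait[i] for i in group_a) / len(group_a)
    share_b = sum(has_trait[i] for i in group_b) / len(group_b)
    return abs(share_a - share_b)

draws = [one_randomization() for _ in range(10_000)]
print(f"average between-group gap in the trait: {sum(draws) / len(draws):.1%}")
print(f"draws with a gap above 20 percentage points: "
      f"{sum(d > 0.20 for d in draws) / len(draws):.1%}")
```

In repeated draws the gap averages out, which is the sense in which randomization "works"; but any single trial is one draw, and that one draw carries no guarantee.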

Q15. Is a Study Better, the More Control You Have over the Situation?

It is no coincidence that the royal design in the hierarchy of evidence is called the controlled experiment.
hierarchy of evidence is the degree of control that the researcher has over the research situation. This is both a methodological message and a meta-message about being in the world: If you want to be a researcher, you should seek control of things. You should not study uncontrollable and unpredictable things. Science does not like surprises. Interestingly enough, however, many scientific insights have come out of coincidence and serendipity, like Alexander Fleming looking at some bacteria in a petri bowl forgotten near a window. The bacteria attacked by mold made him think. He then isolated the active part of the mold and discovered penicillin. The history of science is full of situations where insights are not a result of control in the first place (although, of course, subsequent experimentation may help to verify the alleged causal link). It is a problem if students believe that control of a situation is a precondition for good research. As a learning exercise, I like to ask my students how little control of a situation they can live with and still be able to do research. For example, if you are imprisoned, can you still carry out a study? After some thought, most students would say that it would in fact be a wonderful opportunity for prison research (except for the other disadvantages of that situation). But they would like to have some control over a pencil and some paper. Okay! Permission granted! In a broader sociopolitical perspective, it is problematic to equate social research with control, because social control is not equally distributed. Large bureaucracies and commercial interests have great advantages here, all other things being equal. Maybe these interests are not always the best partners for research, or at least, as a minimum, they should not be the only or the most preferred partners for social science. Sometimes it would be insightful to do research from the perspective of those who do not have much control over things. One of the most precious kinds of freedom that researchers have is that they can “case a study” so that it exemplifies a fresh perspective on an otherwise well-known phenomenon. Studies do not have to comply with a “realist” or “naturalist” view where there already exists classes of things into which a study has to fit from the beginning. Instead, a researcher can explore the meaning of a concept in new ways by “casing” a study in an explorative, refreshing and perhaps provocative way (Soss, 2018). This is just one example of how a researcher can contribute to new ways of thinking even in a situation where one has very little control over how events unfold.


In the training of actors, performers are sometimes asked to play with a handicap (such as not being able to speak or not knowing what happens next). Some artists use materials found in nature as a starting point for sculptures. Not having total control over things might enhance creativity, at least as one dimension of life. Also in research.

Q16. Will Causal Knowledge Accumulate Over Time?

The hope that knowledge accumulates over time serves as a justification for the hierarchy of evidence (with its explicit emphasis on meta-analyses or syntheses of randomized controlled studies at the very top of the pyramid). The accomplishment of this overarching goal faces severe limitations. Causal studies are usually designed to identify the effects of one variable at a time. (The purpose of the randomized controlled experiment is to isolate the effect of X.) However, in most practical situations, the successful achievement of Y hinges not only on X, but also on a long list of helpers and conditional variables, which vary from one context to another. The consequence is that just because we know that X works there, we do not know that it also works here (Cartwright, 2013). Following Cartwright, advocates of the evidence hierarchy probably overstate how much valid and generalizable causal knowledge we are going to achieve in the long run.

An even deeper philosophical reservation is that if the world were created once and for all, we might accumulate a more perfect knowledge over time. But if the world is under constant reconstruction, then the X's and the Y's and their potential relations are also being redefined, and then there is no guarantee that we will actually catch up with the world. Then generalizations do not accumulate, but decay (Stake, 2000). This does not mean, of course, that there are no advantages in trying to do some of the "catching up" regarding causal links. If the world is dynamic at the same time as our knowledge-construction is dynamic, we should have more humble expectations. We should accept lack of knowledge as part of the game, and think more strategically and pragmatically about methodological rules. Rules may help us along the way, or they may not, but they definitely do not secure a road towards perfection.


The question about accumulation of causal knowledge over time has surfaced in the debate over the so-called replication crisis. Especially in psychology and medicine, it has turned out that new replications of classical studies are often not able to reproduce the original causal effects. To explain this, some have tried to identify fraud in earlier studies. Others have argued that systematic biases in publication practices lead to an overestimation of causal effects in published studies (which are then, by definition, difficult to replicate). It is obviously fine if reasonable measures are put in place to remedy fraud and publication biases. However, still others believe that the replication crisis is a symptom not of biases, but of a problematic paradigm. Why should research findings about social phenomena be the same across time and place? Do new populations in new contexts not construct their own new worlds? Replication is only a meaningful criterion under the assumption that the world is created once and for all and remains stable. With interaction, endogenous effects, and systemic effects in a non-linear world, it is not a scandal that it is impossible to reproduce all earlier studies. However, it is maybe a scandal that methodologists continue to believe in a linear cosmology with fully reproducible regularities (Wallot & Kelty-Stephen, 2018).

What is really interesting is also the intense, if not furious, character of the debate about the replication crisis, and its very emotional ramifications for researchers. The debate has shown that people invest themselves in methodological issues in a way that best resembles a religious war. If they shifted their paradigms so that lack of replication of old studies would no longer be a threat to their fundamental beliefs and their whole identity, it would be easier for them to live with the fact that causal knowledge does not, in fact, accumulate very much in this world. The replication crisis is, in my view, exactly an illustration of this point.

Q17. Does the Evidence Hierarchy Only Produce Knowledge?

The existing hierarchy of evidence does not only produce knowledge. It also produces ignorance in those instances where randomized controlled trials have not been carried out or are not possible. Some treat "evidence" not as an ordinal variable, but as a dichotomy. They insert a "cut point" in the hierarchy just below the randomized controlled trial. Everything below the cut point is considered "not solid knowledge." And even knowledge produced with designs above the cut point is considered "inconclusive" if the study does not produce significant results. In addition, in situations where a randomized trial is impossible, we will (falsely) conclude that we know nothing. The belief that randomized controlled trials clinch things is often coupled with the belief that other studies do not clinch anything, and, inconsistently, with the belief that nothing is therefore vouched for either.

For example, it is argued that face masks have not been proven to be effective weapons against the corona virus. There have been very few large-scale randomized trials. Then, in the Spring of 2020, a large Danish experiment was carried out. It showed that slightly fewer people among the mask bearers were infected, but the effect was not strong enough to be significant. One of the caveats was that the experiment was carried out in a situation where incidence was low, so maybe the situation was not the best to test the effectiveness of masks. On the other hand, if there is a serious risk, you would usually, for ethical reasons, not ask any research subjects not to wear masks. Shortly after, restrictions required wearing masks in public spaces and public transportation in many countries, so it would be legally, ethically and practically impossible to replicate the experiment. There would be no control group. So, we are not likely to ever know the effect of masks if we think of "evidence" in a dichotomous way with randomization as the key criterion.

I am not sure that the inventors of the evidence hierarchy really meant evidence to be conceived in a dichotomous way. After all, they constructed a rank order, like steps on a ladder. We actually do know something about the permeability of different kinds of materials with respect to germs and aerosols. We know how long aerosols stay in the air and how far they travel. We also know that people tend to keep a distance from a person wearing a mask. And much more. It is simply too stark to say that we know nothing if we do not have a randomized experiment with and without people wearing masks. We also never had such an experiment with parachutes, but we trust and use parachutes anyway. Usually with success.

A similar discussion is emerging about vaccine passports in some countries. Some argue that we should not introduce a vaccine passport unless there is evidence to support it. But you cannot test it on an intervention group if you are not allowed to introduce it. In this case, the lack of "evidence" is produced using the evidence hierarchy as a taken-for-granted standard, and the lack of evidence is used as a political argument against introducing what we do not have evidence about.

A similar effect is found in the case of legislation. For example, if a law exists in all of the EU, it is difficult to find a control group not subject to the law which is reasonably similar to our intervention group. By implication, there is no strong evidence that can be used to evaluate the legislation, neither positively nor negatively. And there is also no evidence to support policy change. So, instead of using what we know, the "lack of evidence" is used to support the status quo (Smismans, 2003; Dahler-Larsen et al., 2020).

In practice, one function of the evidence hierarchy is to keep some knowledge out. While the intention behind the hierarchy of evidence might have been to motivate the construction of better knowledge, one of its actual consequences (in combination with short-hand and compressed simplifications) is in fact to produce situations where we allegedly "know nothing" even if we actually do know, or could have known, quite a few things. In addition, the institutionally constructed lack of evidence often has political or commercial ramifications. For this reason, some industries have interests in "manufacturing uncertainty" (Michaels & Monforton, 2005). Agnotology is the term for the scientific study of the making and unmaking of ignorance (Proctor & Schiebinger, 2008). Perhaps paradoxically, the evidence hierarchy is sometimes used not to assess the quality of studies on a finely graded scale, but simply to state that anything below a given cut point is "uncertain" and something we simply do not know, even though there are in fact many things we know which have never been tested through, say, randomized experiments.
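Returning to the Danish mask trial for a moment: the "low incidence" caveat can be made concrete with a rough power calculation. The sketch below is my own illustration with invented numbers (a 20% relative risk reduction and 3,000 participants per arm), using the standard normal approximation for a two-proportion test, not the actual trial parameters.

```python
from scipy.stats import norm

def power_two_prop(p0, rr, n, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test.

    p0: infection risk in the control arm
    rr: relative risk in the intervention arm (0.8 = 20% reduction)
    n:  participants per arm
    """
    p1 = rr * p0
    pbar = (p0 + p1) / 2
    se = (2 * pbar * (1 - pbar) / n) ** 0.5  # pooled standard error
    z_crit = norm.ppf(1 - alpha / 2)
    shift = (p0 - p1) / se
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

# Same relative effect and sample size, different background incidence:
for p0 in (0.02, 0.05, 0.10):
    print(f"incidence {p0:.0%}: power ~ {power_two_prop(p0, 0.8, 3000):.2f}")
```

With incidence at 2%, the chance of reaching significance is small even if masks genuinely work; at 10%, the very same trial would most likely detect the effect. "No significant effect" in a low-incidence setting is thus weak ground for the claim that masks do not work.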

Q18. Are the Rules for Causal Inference the Same Regardless of the Practical Situation?

It might well be that advocates of causal methodology believe that the rules ought to be the same in all situations. In practice, the rules vary, as does their application. In some political situations, pretty clear evidence does not inform political decisions. In other situations, even a relatively weak study is enough to introduce a new policy. Advocates of causal methodology would probably say that this observation only testifies to the fact that policy ought to be more evidence-based.


I could also imagine many situations where a risk that is considered more serious ethically or politically is given more weight than a more likely risk considered less serious. In other words, we accept different levels of uncertainty about the occurrence of a given event (and thus different statistical significance levels) depending on how serious the consequences of the event are. There are sometimes political reasons for weighing evidence in different ways, even weighing the same outcome in different ways depending on what its causes are. For example, I can imagine that it will be very difficult for politicians if a few people die from blood clots as a result of an anti-covid vaccination program, but politicians can live with, say, twice as many deaths from corona itself. Why? Because politicians will not be blamed for the outcome of corona, but they will be blamed for the outcome of vaccination programs. Decisions about risks like these can be practically, ethically, or politically motivated over and above what a causal analysis shows. Advocates of causal analysis would argue that the politicians should at least be well informed so they know what they are doing, and uncertainties should also be reported (Pielke, 2007). Still, the rules for causal analysis do not translate directly into decision-making. There are good reasons for this. There are also bad reasons, as when the rigor with which causal rules are applied depends on political convenience only.

More controversial, however, is that the rules are also used more or less rigorously among social researchers. One day, for example, some call a study "correlational", which is a slightly derogatory term meaning "not from the top shelf." Another day, a very similar study is "interesting" because it identifies potential "antecedents" and "predictors" of particular outcomes, as one assumes there is an underlying causal link between these variables and the outcomes. As long as these causal links are assumed, but the c-word itself is not mentioned, it is suddenly a very promising study. However, its design is more or less the same as the one which was deemed "correlational" the other day. At the same time as advocates of the Causality Syndrome make research designs the centerpiece of all assessments of research, they are also in command of a clever language that allows them to exempt some studies or some researchers from a too strict implementation of the rules they otherwise advocate. If you are considered "in the know", you also know that there are rules, but there are also ways to get around them.

Another example of very "flexible" use of the rules of causation has to do with shadow controls (the argument about what would have happened in the counterfactual case). Some argue that without a real control group, you cannot make a causal inference. Others celebrate shadow controls (which here bring "counterfactual" back to its naked meaning: something that does not exist). Somewhat surprisingly, the use of self-constructed shadow controls then becomes a virtue testifying to a person's deep understanding of the nature of causation.

Even among advocates of the same type of causal thinking and belief in the same strong designs, there are different thresholds for how much control we should have over particular extraneous variables. For example, we know that lifestyle factors such as smoking, drinking, and eating junk food causally lead to coronary disease and a range of other health problems. So, if we want to study the impact of the work environment on these outcomes, we must control for lifestyle factors. In the first place, however, nobody spoke up to insist on controlling for work factors before drawing conclusions about the effect of lifestyle on health. The requirements for the control of these different factors remain asymmetrical to this day. Advocates of causal methodologies may be technically correct in saying that we need to control for lifestyle before we determine the causal link between work and health. There is just a conspicuous lack of institutional expectations about also controlling for work factors before we conclude about lifestyle factors. There are many ways in which antecedent variables and control variables can be introduced or not, ways significance tests are carried out, and ways that the form of associations between variables is specified. There are some very good practices, and some bad ones. And a lot of unreported grey area in between.

There are also different rules for when we accept a cause as a final cause. For example, it was believed for many years that stress caused ulcers. Then it appeared that bacteria called Helicobacter pylori were responsible for ulcers. Research on the effects of stress on ulcers almost stopped. It is known, however, that stress can make the symptoms worse. And one might also ask whether a person with stress is more susceptible to attack from Helicobacter pylori. But perhaps we do not feel we collectively need to know more about that, now that we have identified the "final" cause of ulcers. The right time to stop searching for causes behind the otherwise accepted "final" causes is not dictated by methodological rules themselves. When we learn that boys get lower grades than girls in Danish gymnasiums, and research shows that they do not work as hard as girls, we can conclude that the boys themselves are to blame for the poor performance, or we can inquire into the "culture" that constructs such weird gender roles.

While believers in evidence tend to operate with a stark distinction between scientific rules on the one hand and practical/political concerns on the other, the two domains are in fact much more intertwined and interacting. The "rules" do not remain the same.

Q19. Can You Sell Your Study by Pretending that Its Design Is Better than It Actually Is?

For example, you have done a randomized study of two groups of people in Denmark. One has been asked to wear face masks, and the other has been asked not to. Both groups were found through contact with a large supermarket chain, which let its employees and customers be enrolled as research subjects. The study is reported in the capacity of a randomized study, which it truly is. However, one usually expects the randomized controlled study also to be rigorously controlled. One important form of control is to keep the intervention group and the control group separate so that no effects of the intervention contaminate the control group. This kind of control is easier to achieve in a laboratory or a hospital setting than out in the field.

In the study at hand, I assume that some employees who are members of the control group interact with members of the intervention group. This is very fortunate for the control group. Their risk of being infected is reduced, because the face mask on a member of the intervention group not only prevents virus from coming in, but also from getting out. Furthermore, the very fact of seeing a person wearing a mask reminds you of the need to keep a distance and also to remember that corona still presents us with a general risk. So several benefits of masks reach people not wearing masks. That increases the general, genuine effect of masks but reduces the effect of masks as measured in terms of a difference between the intervention group and the control group. These are the kinds of things that may happen in randomized field trials. But these extra effects are very difficult to quantify. One might say, then, that "we only measure the effect of wearing a mask" on the mask-bearer. But the results of wearing a mask are then measured against the results for people without masks who have nevertheless also benefitted from the masks that they did not wear. The alleged "effect" on mask bearers is not only different from the population-wide effect of mask-wearing; it is also miscalculated for the very same reason.
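The logic of this contamination can be shown in a toy simulation (my own illustration; all risks below are invented): the spillover makes the intervention look weaker on paper precisely because it also protects the unmasked.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000  # participants per arm (illustrative)

# Invented infection risks over the study period:
p_control_isolated = 0.020  # control risk with no contact with mask wearers
p_masked = 0.010            # risk for mask wearers
spillover = 0.25            # share of control risk removed by mixing with mask wearers

# Ideal trial: the two arms never meet
ctrl_ideal = rng.random(n) < p_control_isolated
masked = rng.random(n) < p_masked

# Field trial: the control group also benefits from others' masks
ctrl_field = rng.random(n) < p_control_isolated * (1 - spillover)

print("measured effect, ideal trial:", round(ctrl_ideal.mean() - masked.mean(), 4))
print("measured effect, field trial:", round(ctrl_field.mean() - masked.mean(), 4))
# The field trial shows a smaller mask effect even though total infections
# across both arms are lower -- the benefit leaks into the comparison group.
```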


It is convenient to say that this is a randomized trial measuring the effect of wearing a mask, and then not report that this is actually not a controlled trial; it is a randomized field trial with limited control. The study provides a limited, if not faulty, measure of the real effects of mask-wearing. The true problem here is not the limitations of the study. It is the lack of reporting of these limitations, which leaves the reader with the impression that things have been more under control than they actually have, suggesting that the study is placed higher in the hierarchy of evidence than it actually deserves.

Another example is a study of municipalities which were subject to lockdown as a result of local corona outbreaks. The study found that there was almost no difference between the subsequent number of incidences in these municipalities and the neighboring municipalities, which were not closed. It was argued that although this was a natural experiment, it was as good as a randomized one. Again, the issue of interaction between the groups in real life ("contamination") was not described sufficiently. On a normal day, 40% of the Danish work force cross a municipal border. Craftsmen, salesmen, shoppers, and children in divorced families also cross these borders on a regular basis. But not during lockdown. When a municipal border is closed, it is closed both ways. People cannot get out of the locked-down municipality, but people outside also cannot get in. So of course, people in the neighboring municipalities are also affected by the lockdown. When these interactions are not reported, it amounts to pretending that more things are under control in the study than they actually are. If these facts were reported, that would lower the perceived quality of the study in terms of its rank in the hierarchy of evidence. This is in sharp contrast with a rule we otherwise cherish, which is that there should be enough transparency to let the reader determine the quality of the study and the trustworthiness of the findings. Here, the ambition to score well in the hierarchy of evidence turns out to be counter-productive in reality.

One of the saddest effects of the hierarchy of evidence is the motivation it creates for researchers to not report some features of their studies and the realities in which they took place. What is left out may be critical information that influences not only the assessment of the perceived quality of the study but also the validity of the findings. When there is a conflict between the truth and a highly appreciated research design, I much prefer the truth. However, the alleged hierarchy of evidence, and the whole institutionalized Causality Syndrome, provide a motivation to prioritize the acclaimed research design over the truth.


Sadly, I also find myself falling into the trap of the allegedly perfect but in reality not so perfect design. I was a member of a review committee. We do our best, but we have to review many applications in a short time. There is much pressure on the human capacity to assess information and evaluate it in these situations. In one example, the applicants wanted to do a randomized controlled trial with a promising intervention, and there was a lot of background information that seemed to be OK. They had also made arrangements with the folks they wanted in the control group. I made a lot of notes, including "RCT", and much more, and went to the meeting. Most of the members thought it was a neat project. Then one member of the committee said that you cannot randomize if you already have arrangements with the people whom you want to be the control group. Of course, that was right. Luckily, we recalibrated our collective assessment. I am still thinking of the extent to which other applications contain (or omit!) little nuggets of truth which contradict the alleged design. Was the applicant aware of this contradiction before we found it? Most of all, I remain forever embarrassed about my own mistake. I fell victim to the same little logical script I am warning others about: if they claim it is an RCT, then we put it in the RCT category even if that claim flies in the face of truth. I have promised to do what I can to never make that mistake again.

The larger problem, however, is how widespread this robot-like categorization of research designs is in real life. Our control mechanisms, including my own, are apparently not fine-grained enough. An important element of the problem is that the evidence hierarchy, one-dimensional as it is, has become a dominant tool in the assessment of the quality of studies. At the same time, we put more pressure on our control systems than we have to, because of the incentives felt by many researchers to let their designs "creep" upwards in the evidence hierarchy, often in unnoticed ways. This effect is further enhanced by the belief that our careers are threatened if we do not comply with the Causality Syndrome. Without the Causality Syndrome as an institutional amplifier of all of this, we could probably assess research more coolly and carefully, and we would not have to cut through "evidence-washed" descriptions of research designs before we could read about the research designs that people really intend to use.


Q20. Is Your Career in Jeopardy, if You Do Not Comply with The Causality Syndrome?

Will you not get funded, published, and promoted? Researchers in social science, especially the younger researchers without tenure, are under pressure. The argument here is not that this pressure is much worse than the pressures on nurses, teachers, bus drivers, or personnel in fast food restaurants. So, researchers are not particularly underprivileged. However, the pressure from lack of tenure and tough competition for grants are real threats which influence the way that many researchers structure their research. They also live in times where bibliometric measures and citation scores play a stronger role than ever before, at least in general terms (not necessarily in the individual case).

Young researchers live in environments where it is not easy to predict the future. They may, for psychological reasons, ask for advice and guidelines to reduce their uncertainty about the future. Therefore, they might be easy victims of someone who tells them what they need to do to secure their career. Some older colleagues, leaders of research groups and of PhD schools, and other significant persons are likely to be asked for guidance, and some of these would like to serve as oracles regardless of whether they know exactly what the future brings. Usually none of us does, but some leaders or semi-leaders may of course have a bit of real influence on hiring and promotion, so there are reasons why their advice is attended to, again under great general uncertainty.

Let us say that some of these local oracles are strong advocates of causal methodology and of the hierarchy of evidence as it is conventionally known. They will then advise their young and untenured colleagues to produce causal studies as defined by the hierarchy of evidence. This advice can be motivated by a real belief in the value of these studies and/or by a belief that this is the best way to get published, to be funded, and to get tenure. It is not necessary to be philosophically convinced about the causal epistemology to be subject to the force exercised by advocates of the Causality Syndrome. All that is needed is to acknowledge the real effects of this institutionalized syndrome. Furthermore, short-hand and compressed versions of what constitutes a good causal study permeate some research foundations and other sources of funding as more people come to believe in The Causality Syndrome. Politicians and others may buy into the same type of thinking, believing that we all need to know what works. So, advice that is based on a half-truth may in fact turn out to be institutionally enforced and become more true through a self-fulfilling prophecy.

The hierarchy of evidence has some wonderful properties as an institutionalized idea. It sounds rational and scientific. It reduces complex matters to a simple formula. All you need to do is to choose research from the top shelf. This is easy to understand even if you do not know much about research. However, while The Causality Syndrome is institutionalized and powerful, it does not rule the whole world. At the same time, there is also a rich underbrush of alternatives. There are strong and respected journals which publish all other kinds of studies, too. Publication in particular journals is a golden path to recognition and tenure in some academic environments, but we also see strong tendencies in other directions, such as an increased focus on open access, on the use of social media, and on alternative forms of communication. The present publication model, where journals are terribly expensive for university libraries even though academics deliver the inputs to journals and review and edit the content for free, is not tenable. In some academic environments, there is an emphasis on external collaboration with stakeholders and on the social impact of social science. In still others, innovation and funding are of utmost importance. Some young researchers try to do a bit of everything as a form of insurance against unpredictable forms of evaluation.

In some circles, there is a set of defined criteria for hiring and promotion, but when a committee of experts gets together to review a large pile of applications, they use their own judgment, being experts, and calibrate it with other experts. It is difficult to tie expert evaluators to fixed institutionalized criteria. So, the link between internal and external evaluation criteria is not always robust. It always matters who sits on the committee. All this means that if you are a good and strong young researcher, and you work like a horse, you have a good chance of a fine career even if you do not comply with a given piece of advice about how to design your studies and how to publish. The idea that the competition is only about one thing, and that this one thing is defined in advance, is a myth that makes the competition even worse. In other words, at least a part of the power of The Causality Syndrome rests on bluff: Its advocates pretend to know that research will be more and more dominated by belief in this syndrome in the future.

No guarantees can be issued about how to manage a career, however, since the future is uncertain, and there is large variation from institution to institution and from one situation to another.
It appears that the standards can be higher one day and lower another day. Someone does not get hired because they are terrible to work with. A promising but not excellent researcher may be hired because his or her teaching skills are badly needed at a specific department. Etcetera. The world is not totally fair. But it is also not totally unfair. You should in general not expect too much fairness if you consider science as a vocation (Weber, 2004).

I emphasize the lack of guarantees. So, do I recommend taking a chance? To some extent, yes. Research, like art, and like life, means taking some chances. You are also taking a chance if you follow the prescriptions of existing causal methodology, because they are now so mainstream that if you think they will also be mainstream ten years from now, you may be taking a terrible chance, also with your own motivation. Admittedly, there is an element of objective social construction in the institutionalization of The Causality Syndrome at the moment. It is to a large extent backed up by money and power. The institutionalization is crucial to my argument, however. The implication is that it is not the original or most thoughtful and nuanced version that is institutionalized. It is often a compressed, simplified recipe for research. To comply with this recipe is also risky. People who follow the simple, script-like version may be motivated in ways that are different from what motivates original, nuanced, and thoughtful explorers of causation. The former may also cut other corners. Parts of The Causality Syndrome may also be breaking down because of its internal contradictions. If a lot of the same kind of research is done as a result of the same prescriptions, originality may wither away. The replication crisis may be another source of concern which may undermine general optimistic belief in the future of The Causality Syndrome.

Sure, if you don't follow the prescriptions from the Causality Syndrome, your future may be unknown. But if you follow these prescriptions, your future may also be unknown. Do what you think deserves to be done. Do what you think is interesting. Do something that you think deserves your motivation in the next four decades.

Q21. Are People Primarily Interested in Outcomes?

Most advocates of causal methodology assume that there is great interest in research that informs us about how to get to particular outcomes (meaning causal effects). Their underlying justification is that at the end of the day, people (decision makers, practitioners, professionals, all of us) are primarily interested in outcomes more than anything else in life.

First, however, it must be made clear that there is no given definition of what counts as outcomes. My students often struggle with this. When a textbook talks about inputs, processes, outputs, and outcomes as if these terms are self-evident, my students cannot always figure out which terms correspond to which things in real life. This frustrates them, as their underlying expectation is that these terms map real things in a given way. As if God made a list: Toothbrushing = process. No caries = short-term outcome. Quality of life = long-term outcome. You can look everything up and see whether it is one or the other.

When I teach, a problem already occurs with my cup of coffee. It is an outcome, because I reward myself with coffee after, say, the first lesson. It is also an input, because it energizes me for the next round. The outcome of that input is better teaching, better grades for my students, better ultimate job performance for my students in public administration, and therefore happier, healthier, and richer citizens in the areas where my students work. I know this because I heard one day that everything in the world is input, process, and output until it affects the lives of citizens; then it is an outcome. But of course my students are not citizens. They are students. And then that rule of thumb also breaks down one day, when students evaluate the learning outcome of my course: they never involve any real citizens in that evaluation, but it is called an "outcome" evaluation nevertheless.

Inputs, processes, outputs, and outcomes are in fact relative concepts. We are just making them up and assuming some relation between them. Philosophically speaking, they are defined in relation to systems, but the boundary of a system depends on the observer's perspective (Bateson, 1972). For a bureaucrat, a budget, the larger the better, is an outcome worth fighting for. For others, it is only a means to an end. For some, compliance with the law is a fine outcome of democratic socialization; for others it is, again, a means to an end. For some, a wedding is an outcome of a long process of courtship and planning; for others it is an insignificant legal event (a process). For some, a fine exam is an outcome of energy spent in school. For others: Non scholae sed vitae discimus (we learn not for school, but for life). So "outcomes" are not reflections of a given underlying order of things. Instead, they are a way to organize things conceptually and to think of some of them as worth achieving.


Next, then, if we remain interested in outcomes and accept that they are a result of social definitions, there still seems to be an underlying idea that these "outcomes", however defined, are separate from some "effort" or "activity" or "process". In other words, the very deep idea is that we engage in life in some kind of instrumental way, and we are supposed to care less about what goes on "along the way" if we are able to "achieve" some "end goal". This is the kind of philosophy of life that the causal methodology resonates with. We live for the outcomes. Let me give two examples which modify this idea.

The first example has to do with the "outcome" (as the term is usually used) of treatment of a medical condition called a frozen shoulder. This condition causes pain in your shoulder and limits its movement. One of the consequences is lack of sleep, because you wake up with strong pain if you accidentally lie on that shoulder. The condition occurs for unknown reasons and goes away by itself over a period of one to three years. Care may include physiotherapy. However, whether or not you receive physiotherapy, the condition will last the same amount of time, and your pain will go away regardless of whether you do your exercises or not. We know this from randomized experiments. What the randomized experiments can reveal is the difference in pain across two situations. If the pain is the same, there is no measurable variation in "outcome". So, in a variable-oriented perspective, pain is not a relevant outcome. No matter what you do, the pain is gone in the end. But there is still a lot of pain along the way.

If you look at it from a theory-based perspective, you could argue that the duration of the condition and the ultimate amount of pain are irrelevant outcomes, since theory and experience tell us that these variables cannot be influenced by treatment. So, as a patient, I am told not to have expectations about reduced pain as an outcome. However, the physiotherapist argues that physiotherapy is good for my muscle strength and also for my shoulder mobility in the long run. So, there is my potential motivation. However, just because the professionals tell me I should focus on that outcome and not on pain, there is no way to not think of the pain in my daily life. So, I talk a lot with my physiotherapist about the pain, and how all the muscles around the painful joint also start to hurt, because they are too tense, as they constantly try to hold the painful shoulder in place and avoid any sudden moves. But that part of the pain the physiotherapist can relieve with a bit of massage. We also negotiate which exercises I can reasonably do with respect for my pain, and pain tells me when I do too much or do it in the wrong way. So, even if "reduced pain" is out of the equation as an official medical outcome, it remains in my mind. I struggle with it every day. It has to be negotiated as part of the treatment program. It is only irrelevant for people trying to build "evidence" about "outcomes". I don't care if it counts as an "outcome" or not. I cannot ignore it. It is part of my life. I have to deal with it.

Another example concerns a houseowner who sees a person fall down with a heart attack on the pavement in front of the house. The houseowner provides heart massage and calls an ambulance, which quickly takes the patient to the hospital. After some days, the houseowner gets really curious about whether the person's life was saved. What was the outcome of the intervention? A philosopher hears the story and says that there is no need to know the answer to that question. The houseowner has already done the right thing, which is a success in itself. If the houseowner accepts the view of the philosopher, the idea that people are really only interested in "outcomes" does not hold.

The sociologist Simmel explained that authors can hold four views about a book that they write. Some think it is finished when it is planned. Others think it is finished when it is written. Still others, when it is printed. The fourth group needs to read the reviews. Simmel thinks that the best authors are those who think the book is finished when it is written. There is a beauty in the process itself. It does not depend on any product or outcome.

Philosophers might distinguish between a utilitarian ethic on the one hand and an ethic of duty (or of care) on the other. Only the former would be strongly focused on outcomes. The latter would pay attention to the "means" as much as to the "ends", to "interventions" and "processes" as much as to "outcomes", or they would not accept having the practical world in which they live atomized into these artificially constructed categories (Biesta, 2010). They would therefore have strong reservations against affirmative answers to Q21.

Q22. Do We Spend Most of Our Lives Thinking About the Causal Net Effect of X on Y?

Why might we be interested in the net causal effect of X on Y? Because we want Y, and therefore we want to find out whether or not we should purchase, install, or promote X to accomplish our goal Y.


Can most of the decisions in our life be modelled after a similar pattern: Should we choose X to accomplish Y? This is indeed an underlying assumption among many advocates of causal thinking. However, there are many situations where the contours of our life situations do not fit that model. In many situations we cannot choose or not choose X; we have to live with the X we are given, and we must make the best of it in our life. Most of us do not choose our native language or our body, but we do the best we can with the language and the body we have been thrown into (to use a Heideggerian term for our basic "thrownness" into a life situation). Most of us also do not choose whom we love or fall in love with. When we decide to write a book, we do not always feel that we rationally choose a topic. We feel that the topic chooses us. It is not a choice that looks like a consideration of whether X causally leads to Y. Some of the commitments we have in life are against our own best interests, but they are commitments nevertheless (Sayer, 2011: 27).

Even policy or legislation is something that we have to live with most of the time. Even if we know that X might be an effective way to achieve Y, we may abstain from X because we find it abusive or unethical (Biesta, 2010). For example, we could reduce inequality among children if we removed them from home at an early age, but it would be unethical and unthinkable for many reasons. So, X is many normative things for us at the same time. We consider the political pros and cons of X itself, the controversy and ambiguity of Y itself, and many other things. Practical wisdom includes thinking of the many aspects of ends, not just their instrumental effects (Sayer, 2011: 80). We rarely choose only one variable at a time.

Here, the term "we" may be misleading. It is often relatively few people in a polity who decide, hopefully in a democratic way, whether it is meaningful to adopt the policy X to achieve the goal Y. Usually, however, many more people will have to live with the implementation of X and find out how it might fit into their local context. For the sake of the argument, let us say that knowledge supporting the decision about whether X should be adopted as a policy in education is perceived as the key question for 100 key decision makers. Then there are subsequently, say, 100,000 teachers who do not have a similar choice as soon as X is adopted as part of national legislation. For them, it is more useful to know how they can modify and deal with X so that it fits into their local context, and maybe helps enhance Y or perhaps other additional goals that teachers may have. The causal knowledge about whether X leads to Y in isolation from all other issues only models a decision that relatively few people are involved in, if any. The pressing and "fuzzy" questions that a lot of people care about, such as how to implement X into their complicated context and what it might mean to people, are not the questions that the prescribed causal methodology helps to answer, not for most of the people, not most of the time.

The most straightforward causal questions may be most useful in situations where there is a clearly defined problem and where differences in subjectivity and normativity matter relatively little to the definition of a desirable output. Most of the time, however, we find ourselves in situations where we have many commitments that reach far beyond a free choice of X (or not) to achieve Y. As Sayer argues, it is a kind of scholastic fallacy in social science to suspend the kind of broad normative, evaluative being which otherwise characterizes how people live in the world that is theirs (Sayer, 2011: 15). We also have many activities and forms of expression in our lives where X is something that we cherish, something we find worthwhile, gracious, merciful, beautiful, or enjoyable even if it has no particular effect on anything.

Hannah Arendt (1950), for example, distinguishes between labor, work, and action. Labor is what is necessary to survive. Work is the instrumental accomplishment of something. (I would think this is the domain where we are interested in whether X is useful to achieve Y.) But finally, there is action. This is where we express ourselves as human beings, where we construct a meaningful life in interaction with others. It is in this context that we may discover that we have something in common in society, that we are subject to the same collective forces. This discovery, as Dewey helpfully points out, makes it relevant and possible for us to engage in collective action to influence our own destiny. As such, in a democratic and collective light, we can discuss such things as "X" and whether we want to introduce it to achieve "Y", if that is seen as a valued goal, and if we can make a convincing argument that X would help us do that, etcetera. But our thrownness into an existence as multi-related and already-situated evaluative beings is given before we encounter "X" and "Y", and we are rarely just faced with the choice of whether we should choose X to achieve Y as an isolated decision. It is the non-isolatedness, the contextual embeddedness of X and Y, and the interconnectivity of human beings which more predominantly characterize our precarious lives.


So if we subscribe to The Causality Syndrome, we ask for a kind of knowledge structured after a model that does not resemble the structure of our existence in the world.

Q23. If We Focus on Demonstrable Social Impact, Will We then Maximize the Impact of Social Science?

There is increasing interest in demonstrating the impact of social science, for example its impact on policymaking. In this endeavor, the conventional causal type of thinking is also applied to social science itself. Here the input, X, is a given research finding, and Y, the outcome, is a given political decision.

There is an interesting discrepancy, however, between this model on the one hand and, on the other hand, contemporary models describing participation, deliberation, and citizen involvement as forms of democratic innovation. These participatory models also involve more intense forms of interaction between researchers, citizens, and other stakeholders. These models are motivated by a desire to make technological innovation more responsive and responsible (Stilgoe et al., 2013), and/or to renew the quality of democratic processes. However, the more innovative, participatory, deliberative, and democratic the process, the more difficult it will be to document an impact of social science through the conventional unidirectional causal model, which also assumes that X is definite and isolated, so that Y can be unequivocally attributed to X and nothing but X.

If researchers want to live up to that type of causal attribution to get credit for the impact of their results, they will not reveal that X is really not an isolated finding, but a stream of research. They will also not admit that they did not individually produce X, since the proper X might be better understood as the product of a research team or, even worse, a debate between several groups. They will also not explain that what is used is not X itself but rather some adapted, modified, implemented, and evaluated version of X that cannot be attributed to the researcher in isolation, but more precisely to an interactive configuration of stakeholders. Those who hold the keys to this process (even if it is collective in nature) may be as decisive for the outcome as the academic finding itself. Finally, Y turns out to be not just a decision, but a multi-faceted set of social outcomes. Various political stakeholders may even define these outcomes in different ways to justify them to their constituencies. If all this is the case, a fair attribution of honor and praise for a particular kind of social change will be starkly different from something modelled as a causal attribution of Y to X. In fact, it may impede cooperation among stakeholders if researchers causally attribute social outcomes only to their own findings and ultimately themselves. Such a focus on "use" of findings may also be detrimental to the broader purposes of social betterment (however defined), as researchers and evaluators who aim for demonstrable and short-term use may focus on small issues and easily implementable managerial details rather than larger policy change (Henry, 2000). In contrast, if researchers work truly cooperatively with others, they become unable to take credit for the "causal" "effect" of "their" work.

More broadly, social science does not only produce "findings" which influence "decisions"; it also plays a broader cultural role (Biesta, 2010), as it helps us interpret social life, redefine problems, and discuss value issues. Most of the key concepts we use to orient ourselves in the social world are already normatively loaded (Sayer, 2011: 7), so social science cannot set a foot down without stepping into something normative. It would be a shame to search for the "outcome" of this activity only in terms of discrete political decisions. This is particularly pertinent when social science is used to justify normatively controversial decisions. So there is really no way in which the impact of science can be reasonably boiled down to something modelled as the causal effect of X on Y.

Q24. Are Methods Ways to Find Out About Things, But Not Ways to Influence Things?

In performance-oriented approaches to education, it is common practice to publish grades and test results as indicators of school quality. However, researchers argue that the raw grades and test scores say more about the social background of the pupils than about school quality. So, to isolate the effect of teaching on the performance of the pupils, it is necessary to control for socio-economic factors, ethnicity, the proportion of single-parent families, and more. Researchers therefore propose that these factors be published along with the test results; the data would then reveal the raw scores as well as the added value of the school, which is the difference between the actual score and the predicted score based on a regression analysis where social background variables are taken into account. In terms of proper causal analysis, nothing is wrong with this logical argument. However, many parents feel that these statistics would put an improper negative social label on them, since, as one of them said, "it is perfectly possible to be a single parent and a good parent at the same time." The point of this example is that a causal analysis is not only an analysis; it is also an intervention in a reality to which some people might react.
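For readers who want the value-added logic spelled out, here is a minimal sketch with invented numbers (not the researchers' actual model or data): regress school scores on background variables and read each school's "added value" off the residual.

```python
import numpy as np

# Invented data, one row per school: parental income (index), share of
# single-parent families; purely illustrative.
background = np.array([
    [1.2, 0.10],
    [0.8, 0.25],
    [1.0, 0.15],
    [0.6, 0.30],
    [1.4, 0.05],
])
scores = np.array([7.4, 6.1, 6.9, 6.3, 7.6])  # mean grades per school

# Ordinary least squares: scores = a + b1*income + b2*single_parent
X = np.column_stack([np.ones(len(scores)), background])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

value_added = scores - X @ coef  # positive: school beats its prediction
for raw, va in zip(scores, value_added):
    print(f"raw score {raw:.1f}, value added {va:+.2f}")
```

Note that publishing `background` alongside `scores` is exactly what the parents object to: the model's covariates become public labels.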


This point is perhaps even more pertinent when it comes to experiments. While experiments are seen as powerful tools in causal analysis, especially when they allow controlled manipulation of independent variables, the performative and constitutive aspects of experiments are often not attended to. What if, just for the sake of discussion, we turn the argument upside down, so that the effects of experiments on social life become central issues, and the knowledge produced is really only a step towards a particular kind of structuration of social relations?

Let me give an example: What do students learn when they carry out an experiment? Many contemporary experiments involve the manipulation of the independent variable X. Until recently, manipulation was associated with negative normative overtones, but now it is seen as merely a technical aspect of the experimental design. Experiments are usually used to determine how people respond to a hidden variable that they are not aware of or cannot talk openly about (such as a racist bias or their willingness to cheat in a test). But students may take more away from the experiment than that. Students may learn that, in general, if you want to learn about people, you do not tell them about your true intentions. You manipulate them through the control of hidden variables.

I do not deny that experiments sometimes produce extremely interesting findings. However, I am worried about the larger influence on social life, more specifically a shift from listening to what people are trying to say to a focus on whether underlying variables can be manipulated. I am worried that my own students in the long run will pay relatively little attention to people as normative-evaluative persons situated in cultural and institutional settings (Schwandt, 2002). Instead, there will be a focus on experiments which test whether people's behavior can be changed by controlling causal variables. The researcher is in the control room, but not interacting with the subjects. If people meet other people with this distribution of roles, I fear for the long-term consequences for such qualities of social relations as trust, respect, and openness. In their future jobs, will my students deal with socio-political problems, normative issues, and problems of human interaction using the controlled experiment as an exemplar? Will they search for one variable that they can manipulate behind the backs of their fellow human beings? And will they use The Causality Syndrome to back up a belief that what they do is justified as good and normal scientific practice?

Q25. Is the Time Right for Causal Studies?

All social research is situated in time and place. All social research is a product of its time and also a response to its time. Reflexivity in social science means being aware of this embeddedness of social science in social time and circumstance. A well-reflected social science is one that knows where it is in time. It is a social science which responds by being aware of its own historicity. Readings to support this perspective include Vattimo (2004), Castoriadis (1997), Koselleck (2010), and more. So, the question is: Is the time right for causal studies?

Paradoxically, the present interest in causation is reminiscent of the kinds of belief in causality and experimentalism found in the 1960s and 1970s. However, this earlier wave was soon criticized for being overly rationalistic, for being too optimistic about how far results would replicate and accumulate, too optimistic about how scientific findings translate into political decisions, too focused on outcomes and not enough on processes and implementation, too elitist in its general orientation, and not paying enough critical attention to the role of science in society. It was also said to ignore the perspectives and daily lives of citizens. In that light, the present interest in causal studies seems like a remake of what happened, however without any lessons learned from the intense critique that was raised almost from the beginning 50 years ago. In the meantime, we have seen the linguistic turn, the cultural turn, the practical turn, and a booming interest in complex systems and complexity in social science, and again, the interest in causal methodology seems relatively unaffected by these fairly fundamental theoretical upheavals.

We live in times that many experience as quickly changing, in societies that are highly complex, where crises are common, and where not only existing social orders, but the very notion of reality is under attack (Vattimo, 2004). In times like these, it seems paradoxical to focus so strongly on methodological rules to help us determine the net causal effect of one variable on another, of X on Y. It also seems paradoxical to believe that by analyzing many other variables like X one at a time, these different analyses will add up and give us a more precise picture of how the world hangs together.

Nevertheless, this kind of causal thinking may ironically be a response characteristic of our time, although perhaps not the most adequate one. Social scientists can become engaged in the technical details of research designs much in the same way as they engage in a game: If I manipulate X, what will happen? And the causal methodology promises that if the rules of the game are followed, the result will be as scientific as it ever gets. If funders, journal editors, and university managers believe the same thing, and nobody can be accused of being normative or political in their approach, because this is research from the top shelf, that might be sufficient for them, even if the discrepancy between the world as we experience it and the world as it is described in causal studies increases day by day.

Sometimes a causal statement about whether X causes Y may be a small and limited part of a larger configuration of interacting factors. An example: Some politicians in a country want to decentralize educational opportunities from the larger cities to smaller cities in more peripheral areas. Critics then argue that the quality of these new educational programs will deteriorate because there is a more limited supply of academic teachers in the smaller cities. They could well demonstrate a causal link between "center-periphery" and "supply of academic workforce." X causes Y. What they do not include in this causal link is that the concentration of education and jobs in larger cities over the years has created a vicious circle, which produces more and more concentration and makes it more difficult for academics to thrive outside the big cities (there are fewer career options, fewer attractive partners, and if you have a spouse, he or she cannot find a job, etcetera). However, the vicious circle is self-reproducing (that is what vicious circles do). It also has other side effects: congested cities, rising housing prices, increased commuting time, and more. At a certain point in time, some politicians will say: Let us break the vicious circle. Let us move some educational institutions out into the country. Critics will then argue that this policy takes place "against the evidence." The sad part of this story is that "evidence" is apparently unable to capture the larger complexity, which some politicians perhaps sense intuitively: Sometimes, to act immediately in an attempt to change a large, complex system, it is necessary to be "desperate" or "political" or "intuitive." Systemic change does not always presuppose knowledge about whether an isolated X causes Y.

Before the Iron Curtain broke down, we saw protesters in the street claiming "Wir sind das Volk" ("We are the people"), we saw helpless and confused bureaucrats, we saw Gorbachev refusing to use force to protect the Soviet Union, and suddenly a whole empire collapsed from within.

Before the Iron Curtain fell, we saw protesters in the streets claiming “Wir sind das Volk” (“We are the people”), we saw helpless and confused bureaucrats, we saw Gorbachev refusing to use force to protect the Soviet Union, and suddenly a whole empire collapsed from within. I cannot remember even one causal analyst predicting that this would happen based on a logic similar to how X causes Y in conventional causal analysis. Today, it is difficult to imagine that our best response to the climate crisis and to war should be based on firm experimental knowledge coming from one study after another, each looking at the isolated effect of one variable on another.

If The Causality Syndrome reigns, and all we can say in social science is whether X causes Y, we will make ourselves less relevant, unable to understand why policymaking is so “irrational” and how complex and unpredictable the world is. We will find ourselves trying to model the real world after a reductionist world view in which all we can say is whether X can be demonstrated to influence Y according to specific rules that we have made up. A commitment to The Causality Syndrome runs counter to a well-reflected engagement in contemporary social problems in all their complexity. The time is not right for further subscription to The Causality Syndrome.

References

Allison, G. T. (1969). Conceptual Models and the Cuban Missile Crisis. The American Political Science Review, 63(3), 689–718.
Arendt, H. (1950). The Human Condition. University of Chicago Press.
Bateson, G. (1972). Steps to an Ecology of Mind. Ballantine Books.
Becker, H. S. (1996). The Epistemology of Qualitative Research. In R. Jessor, A. Colby, & R. Schweder (Eds.), Essays on Ethnography and Human Development. University of Chicago Press.
Becker, H. S. (2017). Evidence. The University of Chicago Press.
Berger, P. L., Berger, B., & Kellner, H. (1974). The Homeless Mind: Modernization and Consciousness. Vintage Books.
Bevir, M., & Blakely, J. (2018). Interpretive Social Science: An Anti-Naturalist Approach. Oxford University Press.
Biesta, G. J. J. (2010). Good Education in an Age of Measurement: Ethics, Politics, Democracy. Paradigm Publishers.
Cartwright, N. (2007). Are RCTs the Gold Standard? BioSocieties, 2, 11–20.
Cartwright, N. (2013). Knowing What We Are Talking About: Why Evidence Doesn’t Always Travel. Evidence & Policy: A Journal of Research, Debate and Practice, 9(1), 97–112.
Castoriadis, C. (1997). World in Fragments: Writings on Politics, Society, Psychoanalysis, and the Imagination. Stanford University Press.
Clarke, B., Gillies, D., Illari, P., Russo, F., & Williamson, J. (2014). Mechanisms and the Evidence Hierarchy. Topoi, 33, 339–360.
Dahler-Larsen, P., Sundby, A., & Boodhoo, A. (2020). How and How Well Do Workplace Assessments Work? Using Contextual Variations in a Theory-based Evaluation with a Large N. Evaluation: The International Journal.
Dahler-Larsen, P., & Sylvest, C. (2013). Hvilken pluralisme? Betragtninger om det kausale design og definitionen af god samfundsvidenskab [Which pluralism? Reflections on causal design and the definition of good social science]. Politik, 16(2), 59–68.
Feyerabend, P. (2010). Against Method (4th ed.). Verso Books.
Flyvbjerg, B. (2006). Five Misunderstandings About Case-Study Research. Qualitative Inquiry, 12(2), 219–245.
Frankl, V. E. (2008). Man’s Search for Meaning: The Classic Tribute to Hope from the Holocaust. Ebury Publishing.
Geertz, C. (1973). The Interpretation of Cultures. Basic Books.
Geggel, L. (2018). One of Psychology’s Most Famous Experiments Was Deeply Flawed. Livescience.com. Retrieved August 12, 2021, from https://www.livescience.com/62832-stanford-prison-experiment-flawed.html
Goodstein, E. S. (2017). Georg Simmel and the Disciplinary Imaginary. Stanford University Press.
Henry, G. T. (2000). Why Not Use? In V. J. Caracelli & H. Preskill (Eds.), New Directions for Evaluation (pp. 85–98). Jossey-Bass Publishers.
Koselleck, R. (2010). “Erfahrungsraum” und “Erwartungshorizont”: Zwei historische Kategorien [“Space of experience” and “horizon of expectation”: Two historical categories]. In Vergangene Zukunft: Zur Semantik geschichtlicher Zeiten. Suhrkamp Verlag.
Kurki, M. (2006). Causes of a Divided Discipline: Rethinking the Concept of Cause in International Relations Theory. Review of International Studies, 32(2), 189–216.
Lamont, M. (2009). How Professors Think: Inside the Curious World of Academic Judgment. Harvard University Press.
Latour, B. (1987). Science in Action: How to Follow Scientists and Engineers Through Society. Open University Press.
Latour, B., & Woolgar, S. (1986). Laboratory Life: The Construction of Scientific Facts. Princeton University Press.
Law, J. (2004). After Method: Mess in Social Science Research. Routledge.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic Inquiry. Sage Publications.
Longino, H. E. (2002). The Fate of Knowledge. Princeton University Press.
Luckmann, T. (1970). On the Boundaries of the Social World. In M. Natanson (Ed.), Phenomenology and Social Reality. Springer.
Maxwell, J. A. (2004). Using Qualitative Methods for Causal Explanation. Field Methods, 16(3), 243–264.
Michaels, D., & Monforton, C. (2005). Manufacturing Uncertainty: Contested Science and the Protection of the Public’s Health and Environment. American Journal of Public Health, 95, 39–48. https://doi.org/10.2105/AJPH.2004.043059
Morin, E. (1990). Kendskabet til Kundskaben: En erkendelsens antropologi [Knowledge of knowledge: An anthropology of cognition]. Ask.
Nisbet, R. (1966). The Social Bond. Knopf.
Nisbet, R. (1976). Sociology as an Art Form. Oxford University Press.
Ogilvie, D., Adams, J., Bauman, A., Gregg, E. W., Panter, J., Siegel, K. R., Wareham, N. J., & White, M. (2020). Using Natural Experimental Studies to Guide Public Health Action: Turning the Evidence-based Medicine Paradigm on Its Head. Journal of Epidemiology and Community Health, 74(2), 203–208.
Osimani, B. (2014). Hunting Side Effects and Explaining Them: Should We Reverse Evidence Hierarchies Upside Down? Topoi, 33, 295–312.
Pawson, R., & Tilley, N. (1997). Realistic Evaluation. Sage.
Pielke, R. A., Jr. (2007). The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge University Press.
Podems, D. (2018). Being an Evaluator: Your Practical Guide to Evaluation. Guilford Press.
Proctor, R. N., & Schiebinger, L. (Eds.). (2008). Agnotology: The Making and Unmaking of Ignorance. Stanford University Press. https://philarchive.org/archive/PROATMv1
Robson, L., Clarke, J., Cullen, K., Bielecky, A., Severin, C., Bigelow, P., Irvin, E., Culyer, A., & Mahood, Q. (2007). The Effectiveness of Occupational Health and Safety Management System Interventions: A Systematic Review. Safety Science, 45, 329–353.
Sayer, A. (2011). Why Things Matter to People: Social Science, Values and Ethical Life. Cambridge University Press.
Schutz, A. (1978). Phenomenology and the Social Sciences. In T. Luckmann (Ed.), Phenomenology and Sociology: Selected Readings (pp. 119–141). Penguin Books.
Schwandt, T. A. (2002). Evaluation Practice Reconsidered. Peter Lang.
Schwartz-Shea, P., & Yanow, D. (2012). Interpretive Research Design: Concepts and Processes. Routledge.
Smismans, S. (2003). Towards a New Community Strategy on Health and Safety at Work? Caught in the Institutional Web of Soft Procedures. International Journal of Comparative Labour Law and Industrial Relations, 19(1), 55–83.
Soss, J. (2018). On Casing a Study Versus Studying a Case. Qualitative and Multi-Method Research, 16(1), 21–27.
Stake, R. E. (2000). Case Studies. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of Qualitative Research (pp. 435–453). Sage.
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the Range of Designs and Methods for Impact Evaluations: Report of a Study Commissioned by the Department for International Development (Working Paper 38). Department for International Development.
Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a Framework for Responsible Innovation. Research Policy, 42, 1568–1580.
Sundhedsstyrelsen. (2018). Evidens for livsstilsinterventioner til børn og voksne med svær overvægt: En litteraturgennemgang [Evidence for lifestyle interventions for children and adults with severe obesity: A literature review]. Sundhedsstyrelsen.
Vattimo, G. (2004). Nihilism and Emancipation: Ethics, Politics, Law. Columbia University Press.
Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. University of Chicago Press.
Wallot, S., & Kelty-Stephen, D. G. (2018). Interaction-Dominant Causation in Mind and Brain, and Its Implication for Questions of Generalization and Replication. Minds and Machines, 28, 353–374.
Weber, M. (2004). Science as Vocation. In D. Owen & T. B. Strong (Eds.), The Vocation Lectures (pp. 1–31). Hackett Publishing.

CHAPTER 3

Casualties of Causality and Paths to the Future

Abstract  To live up to the formulaic standards of The Causality Syndrome, studies are sometimes reported as better than they actually are. The casualties of The Causality Syndrome include a lack of truthfulness about details that constitute threats to the validity of causal claims. Another problem is a lack of attention to what exactly constitutes the difference between an intervention group and a control group. In a broader perspective, too strong a focus on causation may bias the selection of topics for research. The chapter suggests three paths to the future.

Keywords  Causality • Casualties of causality • Codification • Evidence hierarchy • Lack of evidence

Casualties of Causality

The word causality is not the problem. The problem is how the word has become embedded in the larger set of institutionalized imaginaries that I call The Causality Syndrome, and how it has captured the minds of many. Metaphorically, the porridge is not the problem; the problem is how the porridge overflows the pot and disturbs life in the whole village.

When a given set of methodological rules is no longer understood as a helpful tool for a very specific type of question, but instead as a general set of rules for social science, the casualties are considerable. The number of casualties increases further because these rules are not understood in their entirety and specificity, and because they are often not used to support careful judgment about the methodological quality of social science studies, but are instead applied as short-hand scripts to plan, fund, and evaluate social science. My list of casualties of causality includes the following.

1. The saddest and perhaps most detrimental casualty of causality in its present version is that some studies are presented as having a better design than they actually have. Observations that would allow a genuine assessment of the quality of a study are omitted, so that research appears to come from “the top shelf” more often than it really does. This phenomenon not only undermines trust in science; it also undermines one of the most noble motivations among researchers, namely the desire to report on reality as it is, no matter what the consequences are. It is one of the most important casualties of causality that the methodological rules of causation are sometimes given higher priority than the truth. The Causality Syndrome bears part of the responsibility for this situation. An inherent tragedy is that The Causality Syndrome, which allegedly supports an objective search for truth through celebrated research designs, in fact often leads to a lack of truth about the very implementation of those same research designs. Many factors may help ameliorate this problem: a stronger focus on research integrity, and a reduction of publication pressure. However, I also specifically suggest that if the rhetoric about the celebrated “top shelf” were changed into something more neutral (where different designs are seen as appropriate in different situations), the incentives to present one’s design as “better” than it really is would be reduced considerably.

2. Furthermore, the belief in a research design as an almost magic token in itself sometimes leads to a lack of attention to precisely what an intervention is, how “X” is defined, and what exactly distinguishes X in an intervention group from non-X in a control group. Belief in the research design as a distinct and separate source of quality may distract attention from the importance of meticulous coordination between the research question and the design chosen to answer that question. When it comes to this coordination, I trust that most advanced causal analysts share my view. Again, my proposal is that a rhetoric which does not emphasize hierarchies and “shelves” would be more helpful than one which presents the research design itself as a source of quality in a study.


3. The Causality Syndrome produces a sort of schizophrenic relation to things that we know with some degree of certainty but that we are unable to prove given the dominant rules of causal analysis. Especially in their very compressed and institutionalized form, these rules make people confused about whether the absence of evidence for an intervention constitutes evidence against making the intervention. This confusion occurs particularly often when the short-hand version of the rules seems to imply that, in a given case, there must be a dichotomous choice between evidence and no evidence. In their compressed, institutionalized, short-hand version, the rules allow no time to explain that there is in fact no evidence to back up most of the things we do in life, but that this does not mean we are completely deprived of knowledge. When a much too sharp distinction between evidence and no evidence is institutionalized, a lot of ignorance is produced at the same time. The scientism inherent in the rules for causation does not acknowledge that some ignorance is part of life and part of the basis of science (there is always something we do not know). The dichotomous understanding of evidence leaves us with strangely mutilated discussions about whether we are allowed to act in situations where our actions are not backed up by evidence. Of course we are. We cannot live without doing so. I wake up every morning without considering whether there is evidence to back up my choice of the Danish language in my everyday life. With a too sharp distinction between “evidence” and “ignorance”, we fail to see nuances and different qualities in forms of knowledge not accepted as evidence. It is falsely assumed that if we do not have a study that clinches a particular research question, then we have no knowledge that vouches for anything. For example, during the corona crisis, the lack of knowledge was not only due to the akrasia of the common man and uncertainty about what to do. The leaders in society did not know what to do either. Although evidence (however defined) was slowly building up, there were many instances in which it would have been unwise to reject a particular policy just because it was not backed up by evidence. One example is the policy on face masks. The argument that face masks should not be worn because there is no evidence in their favor is really just a flawed deduction, inflicted on ourselves as a result of our subscription to dominant rules of causation that lack relevance in the situation at hand. It is no scandal that there is no evidence in favor of face masks, because it is practically impossible and unethical to carry out a full-blown experiment in which a control group without masks is exposed to risk during a pandemic. The intuitive sense among politicians may have been a better guideline than the alleged “lack of evidence.” It should be made clearer that wherever there is a “lack of evidence”, there is often still some knowledge, and it is up to us to choose; a small numerical illustration of this point follows after this list. The most serious advocates of causation in social science are clear about this, but that is not how the rules are used when The Causality Syndrome reigns. When The Causality Syndrome is used as a weapon, all initiatives not backed up by “evidence” (even where this evidence will never be produced) are portrayed as “irrational” and “not supported.” Advocates of The Causality Syndrome have failed to consider how they contribute to this boomerang effect, this backlash against what is reasonable but not “scientific” in their narrow sense.

4. In many disciplines and subdisciplines, such as archeology, geology, geometry, history, astronomy, mathematics, and forensic medicine, we have relatively solid knowledge that has never been based on experiments or randomization. Again, it seems of little use to downgrade the many forms of knowledge in these areas just because they are not the result of research from the “top shelf”. It is also sad that when experimental designs are supplemented with qualitative studies, some tend to think that the quality of the study is poorer than if the experimental design had been carried out in isolation. The way forward here is to acknowledge the potential contribution of different forms of knowledge answering different kinds of questions under different kinds of circumstances, and to acknowledge that we sometimes understand a problem better if we are able to see it from more than one angle at the same time. The assessment required to do so, however, is not easy to codify (Lamont, 2009). The Causality Syndrome is, in my view, responsible for too much and too simple codification at the moment.

5. Another consequence of The Causality Syndrome is that researchers frame many studies as causal studies even when they are not. This move does not improve the quality of research, and it makes some proposals difficult to assess. Especially in transdisciplinary panels, a study that buys halfway into causal terminology may be discarded because the advocates of The Causality Syndrome see its imperfections, whereas advocates of other schools of thought would have preferred a kind of study that was more pure and true to its own paradigmatic assumptions. Such a study would have to steer clear of terms such as “effect” in order not to be misunderstood. There is no guarantee, of course, that such a study would be funded, but the assessment would be cleaner and more transparent if studies that are not born as causal studies did not imitate the requirements of The Causality Syndrome.

6. A focus on the rules of methodology, particularly in relation to causation, may bias the selection of topics in the social sciences. There will be an over-focus on issues that lend themselves to causal questions at the expense of other questions. For example, there will be an under-emphasis on studying major, world-changing events until these events can be categorized into a regularly occurring set of similar events whose net effect can be determined (which may never happen). Or, in a larger, systemic configuration of factors, only the “causal” relation between X and Y is considered. As a consequence, a large part of social science is only able to deliver fragmented and partial analyses of many of the complex social problems that face us today. The way out of this quagmire is to take more nuanced positions towards The Causality Syndrome.
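To return to the distinction raised under point 3: the difference between “no evidence” and “evidence of absence” can be stated in elementary Bayesian terms. The small Python sketch below is purely illustrative, and all probabilities are invented for the sake of the arithmetic. It shows that the mere absence of a trial carries a likelihood ratio of one and therefore leaves prior knowledge untouched, whereas a well-powered null result genuinely counts against an intervention.

    # Hypothetical illustration: 'no evidence' is not 'evidence of absence'.
    # All numbers are invented for the arithmetic, not empirical estimates.

    def posterior(prior, p_data_if_works, p_data_if_not):
        """Bayes' rule in odds form."""
        odds = (prior / (1 - prior)) * (p_data_if_works / p_data_if_not)
        return odds / (1 + odds)

    prior = 0.70  # prior credence that an intervention (e.g. face masks) works

    # Case 1: no trial exists. That fact is equally likely whether or not the
    # intervention works, so the likelihood ratio is 1 and the prior is unchanged.
    print(posterior(prior, 0.5, 0.5))  # -> 0.70

    # Case 2: a well-powered trial reports a null result, which is far more
    # likely if the intervention does not work. Credence drops accordingly.
    print(posterior(prior, 0.2, 0.9))  # -> roughly 0.34

On this arithmetic, the face-mask case falls under the first scenario: since the decisive trial cannot ethically be run, the “lack of evidence” leaves whatever prior knowledge we have exactly where it stood.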

Three Paths to the Future

I imagine three paths to the future. The first path builds on existing notions of causality, including the use of randomized experiments as drivers of causal inference, but without any ritualistic belief in this approach. This would imply:

–– Being more meticulous about the link between a research question and what is actually compared in the intervention group versus the control group
–– Removing all incentives to let a research design “creep up” to higher levels in the “evidence hierarchy” than it actually deserves
–– Recognizing that the quality of a study rests not on its design alone, but on how the design interacts with the problem at hand, the research question, and the threats to validity as they play out in the practical situation
–– Being more meticulous about the circumstances under which research actually takes place, including the differences between the purity of the design and the often messy practice
–– Reducing expectations about the generalizability of causal findings from one context to another
–– Acknowledging that along with a design to support a strong causal claim, we can usually become wiser if we add qualitative or observational knowledge, e.g. about implementation problems, social resistance to interventions, and side effects, regardless of whether these different components in a larger project are “well coordinated” from the beginning or not (Greene, 2007)

The second path embraces the concept of causality but widens and complicates its meanings. Causality is not one thing only. On this path, it is acknowledged that different approaches to causal studies have different pros and cons (Sandahl & Petersson, 2016). This second path welcomes process tracing, qualitative comparative analysis, systems analysis, and more. More controversially, it also acknowledges that the concept of causality is epistemologically, not just ontologically, grounded. It is a tool constructed in the minds of humans so that they can make sense of their world, but it is an imperfect tool in the hands of imperfect humans. It is also not stable over time and across contexts; the concept has its own cultural history (Kern, 2004). Therefore, an important question about causality is pragmatic: Which of our present notions of causality help us understand which things in which ways? For example, in the absence of a planet B that we can use as a control group, what kind of causal knowledge can we produce that can help us with our present global problems?

The third path means reestablishing the value of studies without causal claims. These studies include inquiries into how social reality is constituted (not caused), for example with a focus on institutions, processes, concepts, typifications, values, norms, and the perspectives of the involved actors, as well as their interconnections with each other and with materiality. Many have studied society and societizing without causal vocabulary. The time has come to remind ourselves that it can be legitimate to do so, without excuses.

The happy absence of causal terminology should be celebrated also in very specific situations. For example, if a philanthropic foundation wants to support a mentor program, and an evaluation shows that participants feel that they benefit a lot from their participation, this evaluation does not need to be positioned as a poorly conducted impact study based on “subjective views” and a “lack of control group”, not to mention “randomization.”

The foundation is simply free to state that it wants to support a program whose participants feel that they benefit a lot from participating. The foundation pays for the program. It does not have to tie itself to impact terminology unless it wants to do so. In another practical example, students may say that they want to study the causes of harassment. What they really want to do, however, is to understand how and why, in some situations, some people feel that certain psychological and ethical lines are crossed in a way that has strong affective consequences. An inquiry into the meaning of these lines in specific situations and for specific people can be fully meaningful without any reference to causes. In fact, a causal study would be unproductive if it did not first pay attention to the fact that harassment is a complex social, psychological, and cultural phenomenon. So, this third path to the future simply paves more of the way for studies not cast in causal terminology. Without excuses.

On all three paths, evidence should be understood as the result of a network of arguments, not as something produced by a research design in isolation (Schwandt, 2015).

Final Word

I have presented The Causality Syndrome as an ideal type, an analytical construct. I do not expect all readers to buy into every argument in this book, nor to accept its gloomy perspectives as a description of their concrete reality. In some contexts, countries, fields, subfields, institutions, or research groups, the problems that The Causality Syndrome presents are pressing; in others, it is less dominant. If you find the problems following from The Causality Syndrome less pressing than I do, that is a good thing. There are many reasonable nuances and positions in this larger controversy. If you, dear reader, have taken a somewhat more reflected standpoint on just some of the 25 questions discussed in this book, thereby cultivating your own response to The Causality Syndrome, or to the version of it you encounter in your academic life, the book has accomplished its objective.

By the way, the book did not causally produce that outcome. That is not how books work. I wrote a book. You read it. Something happened. We did something together. Thank you.


References

Greene, J. C. (2007). Mixed Methods in Social Inquiry. John Wiley & Sons.
Kern, S. (2004). A Cultural History of Causality. Princeton University Press.
Lamont, M. (2009). How Professors Think: Inside the Curious World of Academic Judgment. Harvard University Press.
Sandahl, R., & Petersson, G. J. (2016). Kausalitet i filosofi, politik och utvärdering [Causality in philosophy, politics, and evaluation]. Studentlitteratur.
Schwandt, T. A. (2015). Evaluation Foundations Revisited: Cultivating a Life of the Mind for Practice. Stanford University Press.

Index

Note: Page numbers followed by ‘n’ refer to notes.

A
Accreditation, 57, 58
Accumulation, 70
Accuracy, 65
Agnotology, 72

C
Casualties, 95–101
Cathexis, 57
The Causality Syndrome, 2–29
Clinching, 66, 67
Codification, 98
Complexity theory, 47
Constitutive, 13, 15
Controversy, 5, 10, 16
Counterfactual, 11–15, 54–57, 74

D
Descriptive studies, 37–39, 66, 67
Dichotomy, 44, 70

E
Evidence, 35–37, 51, 52, 55, 61–73, 75–79, 83, 90

F
Fidelity to method, 66
Fidelity to phenomenon, 66
Fraud, 70

G
Gender, 41, 42, 46, 57, 63, 75
Generalization, generalizations, 69
Generative mechanism, 12

H
Hermeneutics, 13, 26
Hierarchization, 39
Hierarchy of evidence, 62–70, 72, 76, 78, 79
Hinterland, 5
Historicity, 5

I
Impact, 74, 79, 86–87
Institutionalization, 16–25, 25n2
Intellectual humility, 22
Interpretive social science, 13

L
Lack of evidence, 98
Linearity, 45

M
Manipulability, 11, 12, 15
Mechanism, 12–14, 17, 20, 23, 24
Mediatory myth, 28, 29
Mess, 51
Methodological criteria, 48
Methodological quality, 38
Monism, 16, 18, 19, 25, 25n2
Monist, 13, 14, 16, 17, 24, 25n2

N
Normativity, 51–54, 85

O
Observation, 34, 36, 38, 41, 50, 51, 64, 66, 72
Outcome, outcomes, 35, 37, 42, 59, 62, 67, 73, 74, 80–83, 86, 87, 89

P
Paradigm, 7, 27, 43–48, 51, 70
Phenomenology, 13, 26
Pluralism, 10, 15, 18
Position on positions, 14
Poststructuralism, 13
Pragmatism, 43
Probability, 10, 12, 15
Process-based, 12
Publication bias, 70

Q
Qualitative studies, 98

R
Randomization, 58, 62, 63, 66, 67, 71
Randomized controlled field trial, 2
Randomized controlled trial (RCT), 2, 4, 12, 18, 19, 21, 22
Reciprocal causality, 61–62
Reliability, 38, 39
Research design, research designs, 43, 60, 62–67, 73, 76, 77, 90

S
Scholastic fallacy, 85
Scientism, 4–7
Self-fulfilling prophecy, 79
Simplification, 16, 19, 24, 25, 25n2
Small N, 56
Social imaginary significations, 46, 47
Societizing, 100
Subdiscipline, 44
System 1, 9–16, 23, 25
System 2, 9–16, 23, 25

T
Thrownness, 84, 85

U
Usurpation, 16, 18, 19, 24, 25, 25n2

V
Variance-based, 12, 14
Verstehen, 34, 35, 38
Vouching, 66, 67

W
Wahlverwandtschaft, 40