Science as a Quest for Truth
Science as a Quest for Truth: The Interpretation Lab

By Bengt Kristensson Uggla

Translated by Stephen Donovan

This book first published 2024

Cambridge Scholars Publishing
Lady Stephenson Library, Newcastle upon Tyne, NE6 2PA, UK

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Copyright © 2024 by Bengt Kristensson Uggla

All rights for this book reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN (10): 1-5275-3445-6
ISBN (13): 978-1-5275-3445-2
In memory of Paul Ricoeur (1913-2005)
TABLE OF CONTENTS
Chapter 1: Science? Science!
  Questions about science, scientific questions
  Science is taken for granted
  The rise of cognitive nationalism
  Science is under attack!
  Can science even be defined?
  Science raises questions and destabilizes what seems obvious
  Science is a project
  Lack of consensus—does it have to be a problem?
  The diversity of science—is unity possible?
  Philosophy of science—how this book is organized

Part I: The Discovery: Science has a History

Chapter 2: The Adventure of Knowledge and Voyages of Discovery
  The story of the “new” world
  The nineteenth century’s desire for a narrative
  The university torn between church and nation-state
  The myth of the irreconcilable divide between science and religion
  A journey to natural selection
  Journeys of education inspire adventures of knowledge
  Shifting perspectives and intellectual agility
  Modern science emerged with positivism
  In what sense was Columbus nevertheless a hero?
  The art of divvying up the world scientifically

Chapter 3: Anachronisms in the Philosophy of Science
  What does history mean for science?
  The dark historical chapters of science
  Our relationship to the past is defined by anachronisms
  Can we even speak of science, scientists, and a scientific revolution?
  Secular anachronisms
  The conjuring trick of scientific accounts
  The necessity of philosophy of science
  We need a new science narrative!

Chapter 4: Theories and Practices, Texts and Contexts
  Theory and practice in the knowledge arena
  Human action making science
  Empirical data cannot be “collected”
  “Reading” and “writing” the world
  Texts and contexts
  The context of the text: who cooked Adam Smith’s dinner?
  Science is always already embedded in society—as society is in science
  A Copernican turn transforms the world
  The significance of theory and “tacit” knowledge for discoveries

Part II: A Scientific Revolution

Chapter 5: Discovering/Inventing the World
  “The greatest miracle in the history of the world”
  Researchers need money
  The invention of discovery—the birth of modern science
  Quasi novello Columbus
  When the times are a-changing—from East to West
  Discovering/inventing a “new” world
  Mapping the world is never innocent
  Is it even possible to represent the world?
  Renaissance imitations: discoveries or inventions?
  The lasting echo of antiquity and the Middle Ages

Chapter 6: Technology Opens Up—and Mathematics Explains—New Worlds
  Science is preceded by technology
  Technology opens up new worlds
  When “new” mathematical explanations were drawn from the past
  Sources of mathematization: accounting, art, architecture
  The revolt of the mathematicians against the philosophers
  A more complex picture
  The culmination of the Scientific Revolution: Isaac Newton!
  A world governed by laws
  Scholars also have unsuspected qualities
  The institutionalization of science
  Instrumental reason—when reason itself becomes technology

Chapter 7: Scientific Polarization and Hubris
  When philosophy came to Sweden
  Inventing an ultimate foundation for knowledge—when nothing is certain
  Historical preconditions for skepticism and fundamentalism
  An epistemological turn—and a lingering Cartesian anxiety
  How the world was divided up in Descartes’ day: subject and object
  Centering and decentering the subject
  Empiricism as a settling of accounts with scholasticism
  Deduction/induction—a wildly exaggerated either-or
  How is knowledge even possible? The difficult art of maintaining a balance
  The synthesis collapses—and positivism sets the agenda
  Scientific dreams
  The Humboldt brothers and the two scientific projects

Part III: Conflicts of Interpretation: The Everyday Practice of Science

Chapter 8: Positivism—and its Critics—Define Contemporary Science
  Positivism as dominant science narrative
  A “new” positivism: science requires verification
  The unity of science—necessary, but only as an ideal
  Post-positivistic corrections: falsifiability and paradigms
  Semmelweis and the hypothetico-deductive method
  Critical rationalism: gradual progress through falsifiability
  Scientific development by means of Copernican turns
  The paradigmatic significance of paradigms for science
  Paradigms and discourses: practices and perception
  Constructivism and its limitations
Chapter 9: The Two Cultures: Humanism and Naturalism
  An even more “positive” philosophy: the necessity of phenomenology
  . . . although phenomenology is still not enough
  Two separate cultures of scientific knowledge
  Can people be studied using “scientific methods”?
  Humanism or anti-humanism?
  The anthropological deficit in contemporary science
  Words or numbers, qualitative or quantitative methods?
  Continental and analytical philosophy
  Phenomenology and naturalism
  Reductions—not reductionism

Chapter 10: Laboratories of Interpretation: Understanding and Explaining
  The hermeneutic experience: miracle and restriction
  Misunderstandings about hermeneutics
  The hermeneutics of understanding: two parallel ways of being scientific
  Cognitive balkanization in the modern university
  Critical hermeneutics
  Explaining in order to understand better
  Integrative humanities transcending disciplinary boundaries
  A meeting of interpretations from “within” and “without”
  “Without divergent opinions, it is impossible to discover the truth”
  From libraries to laboratories
  Science in an age of hermeneutics: laboratories of interpretation

Part IV: Science has a Future

Chapter 11: Globalization and Knowledge Society
  The dialectics of discovery and invention
  The progress—and metamorphosis—of science
  The science of globalization and the globalization of science
  Digitalization: myths and challenges
  Once again: science contextualized
  When the fate of the nation-state also became the fate of science
  The competition state as scientific challenge
  The future of knowledge society?
Chapter 12: Beyond Relativism and Objectivism: Striving for Truth
  Relativism, cognitive horizontalization, and post-truth
  Objectivism and the new formalism
  Science must content itself with evidence, instead of possessing the Truth
  Collegiality and peer-review ultimately define science
  The implied ethos of science
  What does it mean that science has a future?
  Scientific festivities are indispensable!

Acknowledgement, 2019 (Swedish Edition)
Acknowledgement, 2023 (English Edition)
Notes
Bibliography and References
Index of Individuals
CHAPTER 1

SCIENCE? SCIENCE!
This is a book about science, a common enough term that you have heard many times. Today, science is not just a fundamental concept in the academic world; it is an activity that is discussed in hushed tones and associated with great expectations, in both society and business. Yet there is something rather odd about this phenomenon. Even though in all these various contexts science is referred to as something self-evident, nobody seems able to explain properly what it is. And while there may be a broad consensus about the profound importance of science, even the academic world seems unable to agree about how exactly to define this thing we call science. If you ask researchers what science really is, you are likely to get several entirely different—often contradictory—answers. The uninitiated could be forgiven for thinking that this must compromise science in some way, but in fact this state of affairs is just as it should be and not at all as perverse as it first sounds. Although sinister forces are currently trying to exploit this lack of consensus in an attempt to undermine the legitimacy of science, there is no real cause for concern—science is by its very nature a world of conflicting interpretations, which must be treated in a way that is as innovative and creative as it is rigorous and responsible.

When a book like this one, which was written in Swedish, is to be translated into English, it becomes particularly obvious that the word “science” itself causes some problems. At the start of the twentieth century, the world of science, education, and research was heavily dominated by Germany. Accordingly, both our understanding of science and the way the scientific community is organized in our part of the world—in Sweden and the Nordic countries in general—have been profoundly shaped by the Germanic tradition. And the German word for science, Wissenschaft, includes every conceivable discipline, from astrophysics and computer science to marketing and theology. Indeed, this concept of science as a matter of course even includes the humanities. In the post-war era, however, the academic vocabulary in Sweden and the other Nordic countries changed dramatically, following a larger geopolitical shift from German to English as the dominant language for the
international dissemination of research findings. As a result, English now rules as the new lingua franca of global scientific communication. This Englishization has effected an increased homogenization and Americanization of contemporary scientific practice. Within the Swedish scientific community, it has thus become increasingly routine to use the English word “science” without acknowledging that it refers more narrowly to what is known in Swedish as naturvetenskap, the natural sciences, rather than vetenskap, knowledge in the larger, Germanic sense described above. This conceptual praxis has caused a boomerang effect: when vetenskap is translated as “science,” which is then reimported into Swedish as a synonym of vetenskap, the result has been the emergence of an ideal of scientific knowledge predicated on the natural sciences as the defining framework of all scientific disciplines (or, perhaps we might say, the further entrenchment of an already dominant view of science as defined by the natural sciences).

For English speakers, it seems obvious that chemistry and physics should count as science and that only exceptionally can the rubric be applied to history, art, or theology, disciplines that are usually called arts subjects. The division in English between science and arts is rendered in German as Naturwissenschaften (the natural sciences) and Geisteswissenschaften (literally, the “spiritual” subjects, i.e. the arts and humanities). However, to adopt such a concept of science in place of the broader sense of Wissenschaft as all rigorous disciplinary thinking not only risks undermining the legitimacy and prestige of large swathes of the social sciences, theology, the humanities, and the artistic disciplines; it will likely also undermine their financial basis and diminish their academic influence.

In other words, there is nothing innocent about how the word science now increasingly defines what constitutes disciplinary thinking. It is a term that ushers in seismic changes in our conception of what counts as disciplinary thinking—and all this without any serious discussion having taken place. Yet allowing the word science to displace this broader notion of disciplinary thinking, as now happens when academic literature is unreflectingly translated from English into Swedish (as well as other languages) and then used in teaching and research, will have profoundly devastating long-term effects. The vast majority of books about science translated into Swedish from English (and French) are dedicated not to disciplinary thinking in this wider sense but to the natural sciences. As a result, every article or book in which we encounter the word science has the effect of jeopardizing what we understand as rigorous, disciplinary thinking, just as every article published in an English-language international journal will ultimately reinforce the landslide that is now sweeping away our
understanding of such thinking. These complex and ironic consequences of internationalization today pose extraordinary challenges for anyone wishing to find an appropriate vocabulary for disciplinary thinking and to understand the history of science.
Questions about science, scientific questions

Science. Savour the word—science. What does it make you think of? Does it feel positive, expectant, pleasurable, laboured, or even ominous? What exactly is science? What can we expect from science? What is the difference between scientific knowledge and unscientific knowledge? What or who decides where the limits of science lie? Is there only one kind of science, Science in the singular, with a capital S, or should we use the plural form and refer to sciences? If so, how do these various sciences relate to each other? Is there a hierarchy of sciences, in which some are more important than others and govern them, or are all sciences equal? And by the way, how long has science been around? When did all this begin? Has there always been science? And can we count on science continuing to exist in the future—or might that which we call science in fact be a transitory phenomenon, associated with a particular historical era that may soon come to an end?

Most people would probably agree that science is closely allied to research, but can we conceive of education, too, as laying claim to being scientific? Moreover, is science something that universities and other institutions of higher education have a monopoly on, or might science also conceivably be found elsewhere? Should we try to make society and our world in general more scientific? Is truth the exclusive preserve of science? Are there limits to what science can explain? Can science tell us how to live our lives, govern our societies, and lead organizations? Can science be wrong? Is science always good? Should we expect from science only progress and life-affirming, humane results? If science does indeed cause problems, can that same science help us to rectify them? Does science make us happier, wiser, and better people? Can we have too much science? The questions pile up.

Why does science raise so many questions? In the first place, we need to remember that science in general, and research in particular, take a special interest in that which we do not know. And in order to investigate the things we do not know, we need to develop our ability to devise more informed questions. Is it, in fact, this very ability to generate informed questions—including questions about things so seemingly obvious that asking them destabilizes the basis of our self-understanding—that defines successful scientific work?
Science is taken for granted

It is no coincidence that science has come to be taken for granted in a society which is not merely defined by scientific thinking, but which has also been materially developed in accordance with scientific findings. Today, most major public institutions, companies, and other organizations claim to be based on science. Society in general is permeated by science and has thus also largely been created by science. Within this societal context, each of us risks being excluded and relegated to a kind of cognitive isolation whenever we are accused of being “unscientific.”

Ultimately, we are vesting our trust in science every time we take medicine, travel by air, leave our children at day care, read historical accounts, drive over a bridge, and enter a new building. Science has made possible the building of factories, transport systems, and business processes while also enabling the practical development of our kitchens, bathrooms, and other features of our homes, not to mention our dietary and exercise routines. It is thanks to science that we can now easily cure diseases which only a couple of generations ago would have carried a death sentence; that we have knowledge about ancient civilizations and other galaxies; that we can confidently and reliably differentiate between various kinds of information; and that we can use computers and smartphones powered by digital information technology to do our jobs and communicate daily with friends and relatives. Not least, we also use science to examine and investigate the unwanted impact of all this upon us, our societies, and our environment.

It seems, in other words, that science is pretty much everywhere. Research is being conducted on stem cells, human rights, waste collection, the Scriptures, football, black holes, healthcare, and verb forms. Indeed, science raises questions about every aspect of our existence. Sooner or later, anyone who is involved in science will thus come up against a cognitive infrastructure that sustains modern society in its entirety.

More than just an academic concern, science has become a crucial question in society at large, something that is particularly clear in the case of the issues raised by opposition to vaccination policies. To some extent, those who oppose vaccination are doing exactly what the Enlightenment motto urges us to do as responsible adults: think for yourselves and trust in reason. But what are we to think when that capacity for independent critical thinking seems to result in an exaggerated skepticism towards established scientific findings and, indeed, a hostility towards the evidence that research has produced? Should we view this new opposition to vaccination as the expression of an anti-modern skepticism towards scientific thinking—or should it instead be seen as a result of the complications that accompany the
processing of scientific evidence? The measure of uncertainty associated with both the assessment of a vaccine’s actual protection and its possible side-effects reveals the profound challenges now faced by science. And there is a real danger that the public’s faith in science is being undermined in an era of cognitive polarization, when it can seem as if the only choice lies between adamant objectivism and gratuitous relativism.

One of the more ironic consequences of the victory of science is that the dissemination of knowledge throughout society, in tandem with a digital information system lacking the quality control that hierarchies of knowledge provide, has resulted in a horizontalization of all points of view. In the process, science has increasingly been reduced to merely one voice among others. The knowledge landscape has also been opened up to alternative sources of knowledge. We are now encountering knowledge claims in the most unexpected places, across a spectrum that ranges from tradition and religion to anecdotal evidence and scientific findings that deviate from the consensus. Vaccine opponents are merely one example of how the question of scientificity, including how much trust should be accorded to scientific findings, is no longer merely a matter of debate among academics. We find similar challenges in pop-science health literature, among politicians who make unsupported references to research as an unambiguous source of truth, and on websites that offer cast-iron assurances about diet on the basis of quasi-scientific arguments that have not been subjected to any kind of independent fact-checking by experts.

Nevertheless, we continue to associate science primarily with universities and other institutions of higher education. Today, the number of people professionally engaged in science in this institutionally limited sense vastly exceeds the total number of scientists in the entire preceding history of humanity. Not infrequently, academic institutions are among the largest employers in a particular region. In Sweden, for instance, universities are the largest state employer, giving work to more than 75,000 people, approximately 30% of the entire state workforce. To this figure can be added the 400,000 students currently in higher education as well as all those who have previously taken a university course. In Finland, the equivalent figures are 65,000 employees and 300,000 students. It is estimated that 41.9% of Sweden’s population and 44.3% of Finland’s have a university education. In Britain there are about 440,000 employees and 2,380,000 students, and in the U.S. 3,600,000 employees and more than 20,000,000 students. If we also include other science-based activities as well as all those situations in which scientific work has created knowledge and research opportunities, extending to the obscurest corners of society,
the number of people who can be said to have a close connection to science rises even more dramatically.

In an age when science affects almost everything and everyone, it can no longer be thought of as an exclusively academic concern. In fact, science has undergone a profound change in recent years: from having been an external driving force of social development (in the days when academic institutions enjoyed a monopoly on scientific knowledge that was intended to be exported to society at large), science has now become an internal force that effectively permeates all of society.1 Our society has been made scientific, even as science itself has been normalized and integrated into the professions, commodities, services, and functions of our entire societal infrastructure. Because science has in this way effectively become a concern for all citizens, the task can no longer simply be to transfer scientific knowledge from academia to society. In place of this one-sided knowledge transfer, the collaborative task of universities is now to create reciprocal relationships with qualified actors in society at large. This omnipresence of science in contemporary society may also explain why it has become increasingly difficult to offer a simple definition of what science is and where it is to be found.
The rise of cognitive nationalism

To a significant degree, the conditions for science in the present day are determined by the state, which during the past two centuries has come to exert a predominant influence upon the financing—and the management and control—of academic institutions. The modern university, which only saw the light of day in the nineteenth century, was designed as an integrated element of the national project, and for this reason it was frequently taken for granted that nationalism formed part of its cultural framework. However, despite the historically strong ties of mutual dependence that bind together science and the nation state, academics in countries like Sweden have often had a markedly complicated relationship to the nation. A sharp contrast is offered in this respect by Finland, where the national anthem is routinely sung at academic gatherings and where the University of Helsinki (in the new capital that was established in the nineteenth century) has had a key, nation-building function, in the same way as Åbo Akademi University (in Turku) has been of decisive importance for Finland’s Swedish-speaking minority. In Sweden, no such consensus exists around the singing of the national anthem—although it is possible to discern a clear vein of nationalism in university practices and structures, despite the veneer of current academic anti-nationalism (read: internationalism).
With the hyper-competition ushered in by globalization, the importance of knowledge for a nation’s competitiveness and favourable economic growth has unexpectedly given rise to a cognitive nationalism, which is now a feature of much of the political rhetoric around schools and universities, further education, and research. Indeed, the nationalism which characterizes this area is seemingly matched only by that of organized sports. And yet there is an inherent tension here, one that is all too rarely acknowledged but that sooner or later will need to be dealt with: on the one hand, the nation state expects a return upon its investments, in accordance with its goal of being internationally competitive with regard to knowledge production; on the other, science is formed by its own transnational origins, universal claims, and boundary-crossing collegiality.

Scientific activity in universities and institutes of higher education is now entirely reliant upon financial support from the nation state, which in turn expects a return upon its substantial investments. The rhetoric around education policy and research strategy is thus saturated with nationalism. But in this situation, it must be asked whether universities exist only to benefit their own country. Universities occupy a realm of universal knowledge, which in today’s globalized world means not just competition but also transnational mobility and co-operation across national borders. The complications associated with the cognitive nationalism that now increasingly determines our expectations of science are by necessity becoming more acute in an era increasingly riven by the tension between nationalism and globalism. In all likelihood this tension will be one of the greatest challenges faced by scientific projects in the future. The strategy that has hitherto prevented the university from also being torn apart is the formula for success known as the combination of global excellence and local participation.2 But the question remains: how can excellence and participation be held together in the long term? Indeed, do we not also need both excellence at a local level and global participation?
Science is under attack!

Science commands enormous prestige in our era. It is expected to contribute to greater competitiveness, increased quality of life, better schools, social inclusion, the extension of democracy, and concrete strategies for preventing climate change. Our traditional knowledge institutions, particularly universities and institutions of higher education, typically rank highly in surveys of public trust. The Nobel Prize has likewise helped to confer upon science an aura of prestige and nobility that could hardly have been predicted from a historical perspective.
Even so, in recent years science has begun to be increasingly called into question. However, this development has taken the form not of the general skepticism towards science that has always existed, but rather of a growing tendency to openly disregard, criticize, or simply dismiss scientific findings. Of course, there is nothing new about critics of science, those people who mistrust scientific findings and for various reasons do not even want to be persuaded of science’s admirable qualities. The history of contempt for science is as long as that of science itself. Popular suspicion of scientific elites has always been around, and this is, to some extent, explicable in terms of the gap, which has grown continually since the Scientific Revolution, between our immediate experience of the world in everyday life and the abstract forms of knowledge which scientists have for centuries developed with the help of advanced instruments and analyses. Sometimes, it can be a healthy reflex—and, indeed, scientifically warranted—to be critical of science, as, for example, when we recall that the practice of lobotomization was consecrated by the Nobel Prize in 1949, or that Paolo Macchiarini, a surgeon at Karolinska University Hospital and the Karolinska Institute who jeopardized patients’ health by implanting unsafe artificial tracheae, had been part of a program that had received major funding to carry out cutting-edge scientific research.

Nor is there anything new about external actors trying to constrain or improperly direct science. Often this has taken the form of powerful interests which have tried to affect, and sometimes even to silence, scientific activity that they regard as a threat or simply as unwelcome. In the past these were often religious authorities, but more recently it has become increasingly common for corporations and political actors to influence science in various ways. There are countless examples, ranging from the tobacco industry’s deliberate falsification of research findings and attempts to bribe corrupt researchers, to the efforts of authoritarian regimes to use threats, violence, firings, and the withdrawal of funding as a means of curtailing and eliminating unwanted scientific voices and research findings.

Today, we are also hearing new kinds of voices, as when politicians—unsupported by any arguments—make direct threats to impose cutbacks upon academic activity, publicly call scientific findings into question, and reflexively dismiss unpalatable scientific results as fake science. We now suddenly find ourselves in a situation where science is being fundamentally questioned. One index of the changed status of science today may be the way that former President Donald Trump not only ignored but vehemently called into question the legitimacy of climate change science and its status as more than merely another perspective on the state of the world—at the same time as Pope Francis is underscoring the necessity of listening to
science and climate research when developing a strategic plan and policies for the future.

Among the various factors that must be considered when trying to understand how such a situation could arise, one in particular stands out, namely how traditional mass media, social media, and the new geography of digital information are steadily transforming the fundamental premises of scientific work. This is not merely a matter of the “internal” conditions for scientific work—connected to publishing networks and evaluative mechanisms—but also of the fact that its “external” relationship to society in general is undergoing a transformation.

Media of various kinds have always played a key role for science and the university. History offers numerous instances both of mutual support and of malicious competitiveness between academic work and media. The rise of modern printing technology made possible a new world of information whose improved forms of communication set in motion unexpected changes in the global power structure. The printed word was also an important precondition for the emergence of new forms of teaching, as an adjunct to lecturing, as well as for the development of new merit-based systems for scholars based on publication. Over time, there has emerged an entire publishing industry that takes the freely produced material generated by tax-funded researchers and then sells it back to that same academic sector at a hefty profit. Academic activities have also long been closely intertwined with the textbook publishing industry in a complex network that primarily operates at the level of the individual. To this can be added an entire accreditation and ranking industry, not to mention the sprawling consultancy industry that has been drawn into the burgeoning—and lucrative—recruitment and staff-development activities of university HR departments. These parties have become increasingly prone to allowing economic factors to determine the priorities of scientific communication. In order to disseminate the research results that they themselves have financed, national research policy directives have compelled this industry to adopt new forms of open access—whose only result has been the creation of publishing fees! Science has always had far more to do with money than introductory science textbooks have been willing to acknowledge, and today we can see how science and the market are being brought together in new ways. If philosophy of science does not take these economic realities into account, science will lack a realistic understanding of the conditions for its own activity.

Digital information technology is currently in the process of fundamentally changing the conditions for science and scientific communication, and we can only guess at the long-term consequences of how new information technology
is effectively shifting the centre of gravity from text to (moving) image, including the latter’s interactive potential. Not only do IT departments, suppliers of both hardware and software, and the general development of the internet now determine the external conditions for science; they also direct and control the production of academic knowledge at the internal micro-level.

Does the global horizontalization that has followed in the wake of this new digital information system amount to a levelling of the playing field, or even a process of democratization? Will it eventually become difficult, if not impossible, to sustain academia’s current knowledge hierarchies and vertical criteria of quality? If all statements about reality are seen as circulating at the same level, and if scientific knowledge, regardless of the evidence, is in danger of drowning under a tsunami of information (and disinformation), what, then, is truth? And what happens to science in a world in which anyone can lay claim to their “own” truth? These are the kinds of urgent challenges confronting everyone involved in science today.
Can science even be defined?

Even though science today seems to permeate modern society completely, forming a crucial element of every aspect of life, from therapy and toothbrushes to the planning of rail networks and telecommunications, there seems to be no consensus about what counts as science, nor any simple definition of what science is. Every definition offered so far has its limitations. For instance, if we use a “narrow” definition of science, characterized by a generalized reliance upon mathematical calculability, not only most of the humanities and social sciences but also many other disciplines will be excluded as scientific domains. If the criterion is that science requires experimentation, these same disciplinary areas are likewise largely excluded—and so, too, are fields such as astronomy, which for obvious reasons is limited to observation. If we instead rely on a “broad” definition which understands science as activities that seek to achieve representations of reality (setting aside the fact that many established academic disciplines would question the very possibility of representing reality), it is difficult to see why scientific activity should not also extend to landscape painting and photography. For that matter, plenty of sciences do not even work directly with empirical material. While there are those who might argue that scientific knowledge presupposes some kind of verifiability, there are others who contend that, in the absence of any possibility of such verification, we must settle for a definition of science as restricting itself to
questions that allow of falsification. Yet such a definition would in turn exclude a large part of the research that constitutes the humanities and theology. If we instead proceed from the idea that scientific knowledge involves a meta-reflection, such that a scholar can give an account of the theoretical status of their own thinking, it will be difficult to include much of the specialized research in the natural sciences that is conducted in the form of massive collective projects—even though such activity, ironically enough, belongs to that part of academia which is often accorded greatest scientific prestige.
Science raises questions and destabilizes what seems obvious

In the face of these contradictory conceptions of science—and in a time of growing criticism and skepticism about science—it is important to remember that science itself is very much at home with these kinds of questions. Perhaps the severest objections and most far-reaching critiques of science have been formulated by science itself; historically, most critics of science have themselves been scientists.3 Indeed, the fact that so many divergent opinions and differing intellectual traditions have locked horns within the phenomenon that we call science is itself constitutive of science; it likewise says something important about what science is and how scientific activity works. In fact, this relationship might be described as a key element in science’s continual progress.

However, science is sometimes conflated with absolute certainty, as if curiosity no longer existed, which almost gives the impression that science is nothing more than a monotonous defence of some already established knowledge. But science is a project, a continual process, and its primary focus of interest is upon what we do not know—or, more precisely, what we do not yet know. To be sure, it is both necessary and meaningful to safeguard knowledge, scientific advances, and new findings, yet any science that contents itself with what it knows, anxiously defending its prior achievements, has no real future: “Whereas science is changeable, knowledge is a passive state.”4

In fact, the emergence of modern science resulted not from a scientific revolution characterized by a “fetishization” of the truth but from what has been called a revolution of ignorance. In his magisterial account of human history, Yuval Noah Harari has underscored that what makes modern science truly singular and forward-looking is precisely the way in which it “openly admits collective ignorance regarding the most important questions.”5 In other words, scientific activity presupposes a
willingness to admit that we do not, in fact, know everything—in combination with an accompanying desire to use empirical observation and mathematical calculation to develop new knowledge in ways that allow us not only to formulate theories but also to acquire new abilities. To differing degrees, this focus on new knowledge also holds true for the entire spectrum of social and human sciences, which involve the development of new understanding, often with the help of new sources, explanatory models, and theoretical apparatuses. It is likely this readiness to admit to ignorance—ignoramus, that “we do not know”—which has given modern science its dynamic and deeply revolutionary character. Research does not concern itself solely with its achievements but is primarily interested in what we do not yet know (at a universal level)—in the same way as education is focused upon acquiring something that students (at the individual level) do not yet know.
Science is a project

How, then, can we have knowledge about something we do not know? The royal road that science takes in order to meet this challenge involves the art of posing questions, highly qualified questions that make it possible to open up and investigate new worlds of knowledge. The alternative would be to become locked in a vicious circle, with questions being transformed into answers and answers being used as questions. Nor is simply not knowing enough in itself—that, too, is something we need to know. Hans-Georg Gadamer expresses this insight, which has been articulated in various ways in the history of philosophy, in the following terse formulation: “In order to be able to ask, one must want to know, and that means knowing that one does not know.”6

Science can never be complete. It is not only a matter of investigating an already defined area of knowledge, step by step, so as to be finally done with it. On the contrary, the more we know, the more we realize how much we have yet to learn. When, in 1869, Dmitri Mendeleev drew up the periodic table using the sixty-three elements then known, his real stroke of genius was what he did not include, the “gaps” in the table that he left in order to indicate the existence of as yet undiscovered elements. The dream of discovering the smallest constituent element of matter, the indivisible atom, has likewise shown itself to be an illusion. Within the atomic nucleus lies a microcosm of infinity—which is the counterpart of the macrocosm that similarly appears to be infinite and at the same time expanding. The universe is not a finite box to be exhaustively investigated but both greater and smaller than was believed until quite recently—and infinity extends into every
dimension.7 This reality also highlights the fundamental conditions for other disciplines in their investigations of the world. The more we know, the more we realize how much we (still) do not know.

Science is a project that—in varying degrees—seeks to investigate, explore, discover, and reshape reality. Scientific thinking is for this reason continually moving back and forward across the border between the actual and the possible. This is the underlying meaning of the idea that science must be critical. Indeed, scientific “criticism” does not involve some kind of general dissatisfaction or sweeping negativity, using science as a cover for ideological aims. Criticism is, rather, in Immanuel Kant’s sense, about the art of thinking differently, i.e., the ability to turn the world upside down. As Friedrich Hegel insisted, we should make it a condition that everyone who wants to practice philosophy should be able to stand on their head! And so, even if science has become an unquestioned part of modern society, it is also continually calling into question much of what that society treats as self-evident. This alternation between establishing and calling into question that which we treat as given is what propels science forward with such extraordinary force. In these turbulent times, the future of science is also closely bound up with the ability to balance the variously stabilizing and destabilizing functions of science.
Lack of consensus—does it have to be a problem?

But the argument could also be made that there are plenty of people who are in no doubt as to what constitutes reality and who are more than willing to offer a clear and unambiguous definition of science. Typically, such claims come from people based outside of academia, but they can sometimes also be heard from within scientific institutions, particularly when scientific findings are being presented in the news media. Our habit as researchers of taking at face value the categorical statements of other researchers does, however, tend to diminish the closer we get to our own area of expertise. When I listen to news reports about scientific discoveries in fields about which I know very little, such as astrophysics or biology, I am far less critical and far more inclined to treat them as established facts than when the issue involves social science, the humanities, philosophy, or theology, where I am often more skeptical, consider the alternatives, and take a problematizing approach. The pedagogical challenge associated with academic teaching sometimes forces us to simplify issues, such that a researcher can experience the simplification or reduction of complexity as a difficult balancing act between truth and untruth.
Unilateral scientific claims are academically problematic for the very reason that the “gods” of academia appear to be surprisingly local. What is declared to be science can vary considerably between different scientific milieux. There is rarely much consensus across time and space: discoveries and orthodoxies vary wildly in different academic contexts. Thus the Swedish poet Gustaf Fröding offered his own version of an old saying: “What is true in Berlin and Jena is no more than a bad joke in Heidelberg.”8 Yet it is not entirely clear how Fröding’s quip is to be understood—as a call for some vague relativism in which anything can be said to be true, or as a declaration that there should in fact be a consensus among members of the scientific community? Personally, I would like to see it as a description of the predicament of science, but also as a starting-point for the kind of argument and dissent that is the lifeblood of science. Science is defined by the way that its practitioners seek the truth by systematically and responsibly relating their perspectives to each other.9 Indeed, the idea of attaining full consensus is entirely alien to scientific thinking, since it would mean that the journey towards full knowledge had come to an end. Thus, Carl-Göran Heidegren offers the following provocative twist on Fröding’s dictum: “So long as what is true in Heidelberg is a lie in Jena, there is hope for academic freedom, but once what is true in Uppsala is also true in Lund, Stockholm, and Gothenburg, it will be threatened.”10

The modern view of science has had an almost neurotic relationship with this lack of consensus, and this cognitive neurosis has been exacerbated at a time when many people claim to see the emergence of a “post-truth” era. But the question is whether this lack of agreement is not actually both a fundamental condition of science and a daily experience for those working in research communities. Institutions of scientific knowledge are defined, inter alia, by the way that advanced seminars and doctoral defences are treated as laboratories of interpretation, which, like research itself, are organized as a mode of conflict-based communication in the sense that they effectively institutionalize conflict. Seminars and doctoral defences take as their starting point the fact that it is possible to adopt different perspectives and opposing viewpoints—in combination with the insight that the collegial challenge of laboriously bringing these views into dialogue is a process that seeks, so far as possible, to be open to critique, something which also presupposes a large measure of self-criticism if any progress is to be made. It is for this reason that we need to adopt a more affirmative approach and a more appreciative attitude towards differences of opinion within science. Given all this, it is no coincidence that I have chosen “Conflicts of Interpretation: The Everyday Practice of Science” as the title of the third section of this book. The capacity for navigating the complex mechanisms that
define science’s laboratories of interpretation can only be learned by personally participating in these practices. It is part of the essence of science that it ultimately resists being compartmentalized and summarized in a definition. And yet this presupposition should never lead us to abandon the quest for truth. So long as a critical and self-critical debate can be kept alive, there is hope for science.

At the same time, there are sinister forces in the present that have increasingly begun to exploit the constitutive vulnerability that goes with the lack of complete consensus in science. From the fact that researchers are not in complete agreement, they draw the (premature) conclusion that anyone can claim anything they want with the same degree of legitimacy. Today, even world leaders make references to “alternative facts” and hold up the way that scientific fields are characterized by disagreement and conflict as evidence that there is no real difference between research findings based on evidence and peer review, and frivolous opinions or outright lies—as if “all cats are grey in the dark.”

Research on climate change shows very clearly the complex challenges which science today must deal with. The United Nations Intergovernmental Panel on Climate Change (IPCC) does not conduct its own research but rather evaluates the state of research regarding human impact on climate. On the basis of peer-reviewed reports, it has thus been able to state with certainty that human activity is indeed a contributory factor in the current heating of the planet. Some measure of uncertainty is inevitable when drawing up an overview of the current state of research at this aggregated level and when compiling a scientific basis for risk assessment that is exhaustive, objective, open, and transparent; the result is always open to debate, just as any individual researcher may take a divergent view. But none of this changes the overall picture: describing the state of research in different ways can never mean describing it any way you like.

Far from being the result of influence from relativistic positions within philosophy of science, such parasitic exploitation of the uncertainty that is constitutive of scientific practice, whereby that uncertainty is transformed from a virtue into an argument against science, should be understood as a consequence of the uncompromisingly positivistic view of science that the field has itself fostered in order to increase its own prestige and legitimacy. Indeed, this situation may well have arisen precisely because excessive self-confidence has led to a situation where science “oversteps the boundaries of its scientific competence.”11 Yet the uncertainty, vulnerability, and variability that are the fundamental conditions for scientific work, and which are a precondition for its successful development, have often been glossed over
in textbooks. This may also have resulted from the predominance of a simplistic theoretical understanding of science, such that it has become impossible to do justice to science as a series of practices of interpretation—beyond both the bastion of absolute certitude and the swamp of arbitrariness.
The diversity of science—is unity possible? Two hundred and fifty years ago, the philosopher Immanuel Kant (1724– 1804) began teaching at the university in his hometown of Köningsberg (now Kaliningrad). During his first term, he delivered lectures on logic, metaphysics, natural science, and mathematics. Although this teaching load seems almost impossible by today’s standards, it did not prevent him in subsequent from expanding his subject range to include physical geography, ethics, and mechanics—and, later, during his forty-year teaching career, adding even subjects such as pedagogy, mineralogy, and theology.12 No-one in a university today could hope to attain such cognitive scope. But there are exceptions, of course. In the early 1900s, Karl Jaspers (1883– 1969) managed to have his university professorship re-categorized from psychiatry to philosophy, and later in the century the polymath Michael Polanyi (1891–1976) made an almost fantastic academic journey from high distinction in chemistry to the social sciences and then philosophy; tellingly, he never received proper disciplinary recognition, even though his intellectual influence greatly surpassed that of most of his colleagues.13 In the last few centuries, the dominant logic of scientific development has been characterized by differentiation. Specialization has continued apace and given rise to many new disciplines as well as increasing professionalization (with its accompanying academic posts and careers). This has proven an extraordinarily successful strategy for the development of knowledge. However, the downside of this development has been a tendency towards epistemological unilateralism, disciplinary dogmatism, and a lack of overarching perspectives—with the result that the university has been transformed into a “diversity”: It would be difficult to describe the university as a coherent and homogeneous world. What we name university is, rather, a more or less intricately connected network of concepts, ideas, structures, and local frameworks. […] A university is therefore not a unified whole. Rather, it is a diversity of different parts.14
In the wake of this increasingly extreme specialization, there has also emerged a need for balancing acts in the form of hybridizations and investments in interdisciplinary platforms. With the passage of time, this
Science? Science!
17
combination of differentiation and hybridization, which has emerged as a defining feature of the contemporary university, has come to present a more fundamental challenge: how can science remain coherent as a joint project? The question of how to hold together the multiplicity of knowledge traditions that have in our era been gathered under the single institutional umbrella of the university (an institution recently dubbed both a diversity and a multiversity) is becoming increasingly urgent. In this situation, what does the shared economic interest in maintaining the legitimacy needed to justify funding mean for the university’s services and activities? Or is the contemporary university ultimately only held together by central services such as the heating of premises, or by staff dissatisfaction about inadequate parking?15

Whilst acknowledging the considerable epistemological diversity embedded in the concept of science, I have chosen in what follows to refer throughout to science in the singular. My aim in using “Science as a Quest for Truth” as the title of this book has been to highlight its universalist ambition of presenting a unified theory of science that does not restrict itself to a single scientific field. Other accounts have too often been hampered by an ambition to legitimize a particular discipline or scientific field. It may be that the territorial metaphors that so profoundly define the contemporary intellectual landscape have encouraged us to believe in the possible existence of clearly delineated areas of knowledge over which we might assert our control. In an age of hermeneutics, however, there is a growing awareness that this is to be considered a question of different aspects of a single reality.16 By seeking instead to characterize science as a joint project, my aim is to counter the epistemological fragmentation of our time, which also ultimately risks ending in arbitrary relativism. Science is one project.

Defining science as a quest for truth—rather than a declaration of something completed or a defence of some conclusive truth—also makes it possible to safeguard the diversity that characterizes science, both historically and in the present. By realizing that different disciplines and epistemologies in fact offer different perspectives on, and differing interpretations of, a single reality, we will be better able to safeguard the diversity of perspectives. However, in order to prevent academic “balkanization,” a disintegration into discrete cognitive territories, these points of view must be continually related to each other, actively and critically, within the framework of one overarching quest for truth. Uncontested science is clearly an impossibility—and a dangerous one at that if anyone genuinely believes they have achieved it (as has been the case with attempts to reduce all science to physics). At the same time, anyone who takes an interest in science is necessarily part of a project that is searching for truth.
Engaging in scientific activity is thus more than simply a matter of learning rules and passing on a fixed body of knowledge; it is also about taking part in an adventure—the adventure of knowledge!17 It is time for us to get rid of those stale ideas about conforming to or administering a preassigned system of knowledge. Imagine instead that we are about to embark on a journey of discovery. Put on your pith helmet, or whatever it is that you associate with adventure in a new landscape, and let curiosity be your guide as you enter the unknown. But do not be naïve—use your judgement and try to subject your own ideas regularly to the sharpest criticism you can find, so that your knowledge can be enhanced by exchange with others. Stand upon the shoulders of the giants of the past and absorb their knowledge. At the same time, leave your prejudices behind, along with those things you thought you were sure of, so that you can remain open to the unexpected and the surprising—discovery itself!
Philosophy of science—how this book is organized

What can we expect from theories of science? Although I am convinced that questions about philosophy of science are extraordinarily important, particularly at a time when science and the conditions for science are changing quickly and being challenged, I recommend not holding expectations that are too high. There is no theory of science that can serve as a manual for scientific work. In the absence of such a guide, the most important thing is to acquire scientific awareness—something that is primarily cultivated by taking part in scientific practices such as seminars, doctoral defences, and other peer-review practices. As a practice of interpretation, science aims to sustain—creatively and innovatively as well as critically and responsibly—the adventure in knowledge which the scientific project represents.

This book consists of four sections that collectively set out a theory of what science is. Part One seeks to understand what it means for science to have a history, a history that modern science nonetheless tends to suppress (even when it has been assimilated in such a way as to have also been eliminated). Part Two seeks to rectify that account by devoting three chapters to the so-called Scientific Revolution, with a view to liberating it from its many anachronisms and recovering many of the preconditions for modern science. Part Three outlines the underlying structures of the knowledge practices and interpretation labs that define the everyday practice of science. Finally, Part Four returns us to the temporality of science, albeit this time by starting from questions about the future of science and the challenges which that future presents to science today.
Those who wish to learn more about science need to seriously consider its historical and social contexts. Nonetheless, in foregrounding the historical logic of development that underpins the emergence of modern science, my primary ambition here has not been to write a history of science in any strict sense. Instead, I have tried to offer a qualified introduction to science that is characterized by a profound historical contextualization of the practices upon which science is founded. The chronology of my account is somewhat mixed in that I alternate between genealogical perspectives (which use history in order to take their starting point from a later point in time and thereby understand history “backwards”) and genetic perspectives (which follow and chronicle historical developments “forwards”). In so doing, I wish to problematize the sense of timelessness that has often resulted from science’s anachronistic relationship to its own history.

By contextualizing science in various ways, I hope to detach the theoretical framework for our understanding of science from the phony innocence that is a frequent hallmark of accounts by philosophers of science—and that gives the impression that all that is needed when practising science is to learn how to administer a predefined body of knowledge. However, without historical perspective and social context, philosophy of science runs the risk of becoming superficial and lopsided—an unproblematic smorgasbord from which scholars can pick and choose at will. Such a path leads quickly to the kind of stereotypes that make the precepts of theory of science seem like mere choices between “boxes” of abstract extremes that lack any real basis in concrete scientific practices.
PART I THE DISCOVERY: SCIENCE HAS A HISTORY
CHAPTER 2 THE ADVENTURE OF KNOWLEDGE AND VOYAGES OF DISCOVERY
There are some stories that almost everyone knows and that are so fascinating that we cannot help returning to them time and again. One such involves the legendary hero Christopher Columbus (1451–1506), who undertook a journey of discovery in 1492 that defied the ignorance of his contemporaries. In sailing across the Atlantic with his three ships, he both discovered America and proved that the earth was round—two epoch-defining achievements.18

The story of Columbus had such an extraordinary impact in large part because his journey had the effect of suddenly bringing together two parallel worlds, worlds that would eventually be connected. But the enduring fascination of the story surely also has much to do with the fact that this Caribbean maritime adventure contains so many of the intriguing details that are the hallmark of a good story. Columbus’ feat has furthermore become the paradigm for an epoch-defining achievement. Its solitary hero undertakes a voyage of discovery that results in the discovery of a new and hitherto unknown world. The story draws its particular dramatic force from the sharp contrast between the well-informed and single-minded Columbus on the one hand and, on the other, his ignorant, superstitious, and fearful medieval contemporaries, who were convinced that those on board were doomed to perish because they would fall off the edge of the earth’s flat disc. Academics readily identify with the hero of this adventure, a man who astounded the world by demonstrating with a grand flourish something that should have been self-evident: the world is round! It is presumably for this reason that the story has always been such a favourite anecdote in the classroom.
The story of the “new” world

Over time, Columbus’ feat has become something of a shared item of currency, an integrated part of our cultural and scientific history, and a story that continues to be retold to schoolchildren and that virtually everyone has
heard. The only problem with this heroic narrative is that it is literally too good to be true. In fact, Columbus did not discover America—for the simple reason that he never properly understood where he had ended up. In all likelihood, no explorer in history has ever been more wrong about his actual location: Columbus thought he had reached the coast of China; he was actually in the Caribbean. Columbus never discovered America, and he was not even on the route to the “new” world (about which he knew nothing, a point that is important to note if his actions are to be understood). The purpose of his journey was to find a new way to get to an “old” world—Asia.

Let us recall the historical context of Columbus’ maritime adventure. After the fall of Constantinople in 1453, the Ottoman Empire had taken control over the land routes to the east. This had increased the expense and difficulty of the vital long-distance trade route, which was a fundamental condition for the economic boom of the late-medieval period and the flowering of Renaissance culture. When economic conditions worsened in the second half of the fifteenth century, many people were taken by the idea of trying to find an alternative route—a sea route—to the Asian markets. The Portuguese financed Bartholomeus Dias’ project of sailing around the Cape of Good Hope, and in 1498 Vasco da Gama finally reached India. By that point, however, Columbus, after several failed bids, had managed to secure financial support from the Spanish regents Isabella and Ferdinand that would allow him to sail west and instead take the “short cut” across the Atlantic. He made his first journey in 1492, after which he made a further three journeys, but even on his deathbed in 1506 he remained convinced that he had reached the east Asian archipelago off Canton (today Guangzhou). All that remained was to find the mainland and navigate a few more islands, and there would be the shining cities and fabulous riches which Marco Polo had seen with his own eyes during his (overland) journeys more than two hundred years earlier.

Columbus had reached the Caribbean, but because he was convinced that he had made his way to “India” (a designation which at that time served as an umbrella term for Asia), he called the people he met “Indians.” Only later, several years into the new century, did another seafarer, Amerigo Vespucci, begin to suspect that Columbus had in fact achieved something far greater than he himself suspected: the “discovery” of a continent unknown to Europeans. Only when the first maps came to be drawn showing the eastern regions of the as yet unnamed new continent did Martin Waldseemüller coin the term America—after Amerigo, not Columbia after Columbus—but even this was entirely wrong, since Amerigo was not among the first to make the journey. In a sense, though, it was entirely correct, since Columbus never grasped the scale of his achievement.
Columbus’ complete geographical confusion also goes some way to explaining why this voyager with his lust for gold became increasingly frustrated and desperate with every journey he made. Still unaware of the broader perspective, he himself set in motion the long and bloody history of brutal exploitation, war, genocide, slavery, and colonialism which followed in the wake of his discovery. It is this far darker history of Columbus, with its focus on his role as conqueror and explorer, that has been retold in many other parts of the world, particularly in Central America.19

Columbus’ difficulty in fully grasping what he had found brings us face to face with what might be described as one of the greatest riddles of science: how is it even possible to discover something new? Our knowledge never begins entirely at the beginning. Our understanding always builds on assumptions, conscious as well as unconscious. In order to understand something, we need already to have understood it. Understanding presupposes preunderstanding. This mysterious relationship, which means we can only truly understand something that we have already understood, is a recurrent theme in the history of philosophy. Of course, it is never a good thing to be trapped in one’s prejudices, but sometimes we see only the problems associated with having prejudices and forget that understanding never starts from a blank slate: we would be unable to understand anything at all without some kind of preunderstanding (which also includes prejudices).

What is more, the fundamental preconditions for being able to discover new worlds and develop knowledge remind us of the productive importance that inventions, such as theories and models, have for advancing knowledge. Columbus lacked theories that might have allowed him to raise his gaze above the immediate horizon of his reason and so realize where he had landed. What he did have by way of theories only made him blind to the new world that was opening up before him. Theoretical frameworks can help us to discover the new, but there will always be a risk that our understanding helps to eliminate novelty by making what is new conform to what is already well-known and familiar—precisely because of the ambition of making it comprehensible. In so doing, however, we also miss the defining hallmark of modern science: discovery.
The nineteenth century: the desire for a narrative

The notion that Columbus astounded his ignorant contemporaries by proving that the world was round is fundamentally wrong. At that time, it was simply not the case that most people thought the world was as flat as a pancake. On the contrary, it had long been known that the world was a sphere. And Columbus himself was far from being the modern, rational, and
scientifically minded hero who, in the classic versions of the story, stands in stark contrast to his hidebound age. His way of thinking was thoroughly medieval: he was a Christian businessman driven by a deadly hunger for God and gold. When others later came to view his expedition as having opened up a “new” world, what really made it so eye-opening—if not actually scandalous—was that this feat, which had utterly transformed geography, had been achieved by simple sailors, fortune-hunters, and traders lacking any higher education—not by erudite professors at one of Europe’s many universities, which had successfully spread across the continent during the previous centuries. In other words, there would seem to be a flaw in the standard account of the well-informed Columbus and his foolish medieval contemporaries.

Where did the narrative about the medieval view of a flat earth come from, then? How did Columbus, against the odds and contrary to the facts, emerge as a superhero of modernity and scientific thinking? And how is it that the story of Columbus lives on in our culture, where it is often recounted in the classroom and, indeed, repeated so often as to be taken for granted? Questions like these take us directly to the historical context in which modern science was born.

The fact is that the story of Columbus and his medieval contemporaries’ fear of sailing over the edge of the earth was entirely unknown prior to 1800. Only in the nineteenth century did the myth of the flat world of the Middle Ages first appear, and it did so in the best-selling adventure book A History of the Life and Voyages of Christopher Columbus (1828). Its author, Washington Irving, used the flat-earth myth to create a dramatic backdrop with which to accentuate Columbus’ role as a stirring hero. For the same reason, he also invented Columbus’ foolish medieval contemporaries, who were stupid enough to believe that they were in danger of falling off the edge of the world. In other words, this narrative is barely two hundred years old, and during its first decades it was mostly just an adventure story. As a purportedly historical account, the story is younger still—not even a century and a half. Only in 1870 did the story of Columbus and the flat world begin to be offered as historically factual, after which it became enshrined as a scientific fact.

Thus, the real drama of the story took place four hundred years after the historical journey of the fifteenth century—in the late 1800s, to be precise—at which point the Columbus narrative suddenly came to be considered a story about science and an asset in the university’s struggle for independence from church authorities and religious institutions—a turbulent period in which science had to fight hard in order to maintain and defend its autonomy from a reactionary church. Because leading figures in that church had direct authority over academic appointments,
for example, there was an acute need for a narrative that could illustrate how detached religion was from reality—in contrast to secular science. This notion of the Middle Ages, as the embodiment of a world steeped in religion and dominated by a church synonymous with popular ignorance, offered a perfect contrast to that of a progressive, secular modernity founded on science. Under these circumstances, the story of Columbus proved highly useful as an explanatory myth since it portrayed religion as regressive and as a general obstacle to scientific development. It was also by means of this sharp distinction between religion and science that modern science developed its own self-understanding. For this reason, religion continually appears as the main adversary in the dominant narrative which science, even in our own era, has continued to tell about itself.
The university torn between church and nation-state

Today, the notion of science inevitably makes one think of the university, as if it were a given that the two should be associated with each other. In fact, the relationship between the university and science has a far more complicated history than we tend to imagine. Contrary to what many people think, modern science did not originate within the intellectual milieu of the university. The university was devised in accordance with very different ideals, and for this reason it was long an opponent of what we today associate with science.

To understand the circumstances under which the story of Columbus and the flat world of the Middle Ages gained currency, it is necessary to remember that the older universities, like many of our traditional institutions of knowledge, were primarily founded under the auspices of the church. The dominant model in historiography has been that the university literally grew up in the shadow of Europe’s cathedrals, as the cathedral schools which had been established in the wake of the Carolingian renaissance in the ninth century were developed into universities. An alternative model, less commonly adopted but successfully so in the cases of Oxford and Cambridge, was that universities emerged through a joining together of colleges, which had in turn originated in the intellectual culture of the monasteries (such that all a university really required was a refectory, dormitory, library, and chapel). In both cases, however, it was the church which had established, shaped, and maintained the university, and which long exerted an influence over its activities.

In our time, when the university is often regarded as the very model of science and secular knowledge, as well as being associated with powerful expectations about things like national prosperity and economic growth, it may come as both a surprise and a provocation to learn that the traditional
institutions of scientific knowledge share these deep ecclesiastical roots—and, what is more, that they are part of a history that extends all the way back into a shadowy past of Arabian madrasas and other teaching institutions in Iraq, Egypt, and Andalusia. Indeed, it could be said that the university, despite its many rapid changes of costume, should ultimately still be regarded as a secularized religious institution. One need look no further than an academic ceremony, with all its fascinating and curious rituals, or the university’s organizational and examination structures, whose numerous titles and concepts have their roots in the medieval world, for this ecclesiastical prehistory to become highly visible.

In a society characterized by enormous university systems in which higher education accounts for the highest number of state employees, it is hard to grasp that from the beginning the university was an elite association (in fact, originally a guild) of teachers and students. Until relatively recently, engaging in research and academic activities was only possible for those of independent means or with a secondary income. Professors supported themselves or financed their work through external activities and property, while so-called prebendary houses had responsibility for providing food and other assistance to the holders of such posts. Well into the modern era, universities were also dominated by the faculty of theology and the needs of the church—bishops also performed the function of vice-chancellors, with authority over appointments, in both Uppsala and Lund until the mid-twentieth century.

Universities remained small for a long time. Only a couple of generations ago, the faculty of a local university consisted of a handful of professors and a student body taken from the social elite, supported by a minimal administration. Our default notion of the university is probably defined to a large degree by the splendid university buildings to be found in Lund and Uppsala, for example, whose neoclassical architecture evokes classical antiquity, the Middle Ages, and the Renaissance. However, these grand edifices were only built in 1882 (Lund) and 1887 (Uppsala). For their ostentatious university library buildings, the chronology is the reverse: those in Uppsala are from the 1840s and those in Lund from the early 1900s. If we set aside these buildings and instead put ourselves in the shadow of the cathedral and recall that both professors and students enjoyed far greater mobility than they do today, being able to travel between seats of learning across the continent, we will have a more accurate picture of the activities of the university across a longer historical perspective. Such a history also highlights the fact that the university is composed not of buildings, but of people. It is quite literally a group of people who came together in a guild—on the initiative either of the students, as in Bologna, or of the teachers, as
in Paris—with a view to coordinating their work, guaranteeing and controlling its quality, and safeguarding their common interests.

However, these universities were not engaged in science in the modern sense of the word. Nor did they conduct research, since there was essentially no interest in discovering anything new; rather, the focus was on passing on something old. Medieval universities were pure teaching institutions whose primary concern was maintaining continuity with tradition. For this reason, the scientific revolutions of the sixteenth and seventeenth centuries did not originate in universities—rather, they came from outside such knowledge institutions, of which they were strongly critical. In order to find any science in the university, one has to go forward in time to the late eighteenth century, and only in the 1800s did science and the university really begin to march in step. It was also at this time that the term science began to take on the meaning which it has today, to be used more frequently, and to spread.

In general, it was during the nineteenth century that the incipient and fragile notion of science underwent a process of institutionalization that led to a greater differentiation of scientific disciplines. At the same time, the university, with its origins in medieval theological and teleological thought, finally began to incorporate the new, empirically oriented natural sciences that had emerged from the scientific revolutions of the sixteenth and seventeenth centuries and that would soon come to dominate the institutions of higher education, something which in turn served to accelerate their fundamental reorganization. It was not until the nineteenth century, and particularly the second half of the century, that universities definitively assumed their current identity as bastions of scientific thinking and that modern science became a truly constituent and integrated element of the university. This institutionalization of science became evident not only in the number of new buildings, but also in the rapid expansion of scientific activity. In Sweden, this was made concretely manifest in the establishment of a series of new academic centres of learning, including the Karolinska Institute and KTH Royal Institute of Technology, followed by local university initiatives in Stockholm and Gothenburg and, from the 1960s, in a host of other regional institutes of higher education.

Considering the long history of the university, it may be useful to remember how much of what we today associate with ancient university traditions is itself a nineteenth-century invention. It was during the second half of the nineteenth century and the first half of the twentieth that the basic pattern emerged for the scientific institutions and academic professions of our era. This period also saw the emergence of the nation-state, whose administration desperately needed qualified personnel, and the advent of an industrial revolution that required engineers and administrators
for a burgeoning industrial sector. Both these demands in turn served to weaken the dominance of theology and the humanities, as the university began to throw off the yoke of aristocracy and church and lay claim to social autonomy and intellectual independence.

With hindsight, then, we can see that what was really at stake was never some kind of full academic autonomy (if it had been, where would the financing and legitimacy have come from?). In reality, this was about society’s predominant institution of knowledge moving from the church’s sphere of interest to that of the nation-state. This process had already been under way for a long time, and it would be many years before its consequences were fully grasped, but the logic of the development was unmistakeable. In retrospect, this would prove to be a stroke of fortune, not least materially, in that the university thereby became integrated into the fast-growing administrative sector of the nation-state and, eventually, a given element of the welfare state, with the ability to offer permanent positions, stable working conditions, and guaranteed incomes for researchers and teachers. In the second half of the nineteenth century, the educational needs of the state grew dramatically, tertiary education expanded, and the numbers of students and employees increased exponentially. Hey presto, a swathe of job opportunities materialized!

Admittedly, the state had not invented the idea of the university, but there can be no doubt that, beginning in the nineteenth century, the burgeoning nation-state shaped, financed, and set its mark upon what was to become the modern university. The subsequent emergence of modern science mostly took place within the framework of the university, which had itself been shaped by the logic and needs of the nation-state. From a historical perspective, and even in the present, it has never been inevitable that the university, with its roots in medieval scholasticism, should become the intellectual home of—and be seen as having a monopoly on—science.
The myth of the irreconcilable divide between science and religion

It was after the universities had begun to expand and develop into autonomous institutions, powerful enough to threaten the church’s power over knowledge and control of people’s consciousness, that the myth of Columbus and his ignorant medieval contemporaries took shape, was widely disseminated, and became so significant. When the universities were secularized, literally in the sense that they steadily moved from the church’s sphere of interest into the domain of the state, the church authorities were often reluctant to cede their power over academic appointments and, with
it, their influence upon research, the expansion of knowledge, and people’s consciousness. As a result, conflicts emerged. In the late nineteenth century, the parameters of science and the status of academic institutions were still being negotiated, and scientists continued to have an insecure financial status and an ambiguous professional identity. At this time, therefore, many academics had strong, and sometimes also good, personal reasons to be hostile to the church, since the latter frequently put obstacles in the way of the growing scientific autonomy that the university was seeking to achieve with regard to academic appointments, research priorities, and teaching curricula. Because powerful forces in the church tried to influence the university, there could be a heavy personal price to pay for being involved in the effort to secure scientific independence from interference by religious censorship.

The memory of these often-bitter struggles, which has been handed down and retold within modern science, has created a need for explanatory models and arguments against the church’s assertion of authority. The Columbus story made it possible to characterize these representatives of religion as opponents of science and reactionary despots on the wrong side of history. The narrative recast them as the belated representatives of medieval superstition, who stood in the way of a glorious future built on scientific thinking. This background makes it easier for us to understand why historians of science have often exaggerated the opposition—and underestimated the close ties—between science and religion.

It is as if the memory of the universities’ prolonged struggle and complicated process of liberation from the church has lived on in science’s own DNA. For this reason, academic institutions to this day have tended instinctively to recoil in alarm at every encounter with religion, as if it were a matter of something deeply threatening and problematic. Religion has remained science’s primary adversary, an actor that is automatically presumed to constrain freedom of thought and the conditions for the successful development of knowledge. As a result of the conflict between church and university, which profoundly affected the period between 1850 and 1950, the dominant narrative about science has also been shaped and coloured by the sinister backdrop of a persistent confrontation with religion.

And yet we often forget two circumstances that are crucial for an understanding of this relationship: on the one hand, that there was a third party involved, such that the real issue was a tug-of-war over science which ultimately ended with the university being drawn into the ambit of the nation-state; and, on the other, that the relationship between science and religion during the Scientific Revolution of the sixteenth and seventeenth centuries was never truly problematic. These relationships present a serious
challenge to our conception of science. If the scientific project that we see in embryonic form in the Scientific Revolution of the seventeenth century is not underpinned by a narrative defined by the struggle between science and religion, then what narrative of modern science should we actually be telling? What was the essence of the scientific project that eventually came to be the undergirding of the modern university and whose future hung in the balance in a tug-of-war between the respective spheres of influence of church and nation-state?
A journey to natural selection

It was only in the latter half of the nineteenth century that serious conflicts broke out between religion and science, above all in the wake of On the Origin of Species (1859), in which Charles Darwin (1809–1882) presented his ground-breaking theories of evolution. Darwin’s account of species as creatures that changed over time, in a process of randomly generated biological diversity and natural selection resulting from an intra-species struggle for survival, offered no carefully ordered beginning and no inherent meaning or preordained course of development for the world and humanity. In many regards, the book came to mark an unbridgeable divide between science and religion. But even though Darwin’s theories, in more or less modified form, have become established science with immense credibility, his ideas about evolution were initially far from self-evident. How does it work when someone discovers something new and opens up a new field of knowledge?

Revealingly, Charles Darwin, this scientific superhero, was never employed at a university. Nor did he need to teach or apply for research grants, since he was a wealthy man able to live off a large inheritance, which he also invested extremely shrewdly. Like Columbus, Darwin embarked upon a long, life-altering journey. Shortly after Christmas 1831, at the age of twenty-two, having abandoned both a medical degree at Edinburgh and theology studies at Cambridge, he left Portsmouth in the south of England aboard the HMS Beagle. Despite his aversion to sea travel, this was to be the start of a five-year adventure that would take him around the world. Darwin, who had become deeply averse to academic study, would subsequently regard this journey as “strictly speaking, the only education I have received that I regard as meaningful.”20 In order to create new knowledge and acquire an education, journeys of discovery are required.

Unlike Columbus, Darwin fully grasped where his journey had taken him. Even so, it would be many years before he could explain what it was that he had discovered during his long travels, and
several more years would pass before he finally published his ideas. The revolutionary theories that he presented were based in part upon his observations of birds, reptiles, and geological formations in the Galapagos Islands off the coast of Ecuador (which he reached after four years at sea), but it would also require theories and models and many years of intense thought before he was finally able to develop convincing arguments for his idea that the world, far from having a fixed and preordained order, was in fact in a state of permanent development—evolution—without any higher purpose.

Darwin’s scientific discovery shook the world and forced people to think in new ways. His idea that species were in fact mutable and in a state of continual change within the parameters of a violent struggle for survival—natural selection—was incompatible with a literal reading of the biblical creation story and difficult to reconcile with the church’s beliefs and general conception of life: “For a pious naturalist, it felt wrong to imagine that a benevolent and omnipotent divine being had created all these forms of life only to condemn them to death.”21 Yet we often forget that these theories were not only a provocation for theology and the church but that they also marked a radical departure from, on the one hand, the Aristotelian idea of a (teleologically) meaningfully arranged world and the foundations for a humanistic conception of life, and, on the other, the newly dominant, mechanical worldview that had achieved a widespread currency in the wake of the Scientific Revolution of the seventeenth century. In this way, natural selection presented a challenge to a wide range of positions. It was science at its best. The world would never be the same again!

How was this feat possible? As we have seen, Darwin himself was convinced of the necessity of making a journey. Without his journey around the world, his discoveries would not have been possible. He was therefore convinced that travelling was of decisive importance for the development of science and knowledge:

In my view nothing is more beneficial for a young natural researcher than travel in foreign lands. The joy one feels at making new discoveries and the chance to truly make a contribution are supremely stimulating.22
My own view is that this insight has a general applicability: it is by travelling that we discover new worlds. In travelling, Darwin also learned something that is completely decisive for making progress in science and research: the necessity of hard, disciplined, and purposive work. Darwin had discovered a new reality during his great journey, but in order to impose order upon his empirical material he also needed something else: theories
and models. In order to grow and develop, knowledge needs more than just the experiences gained by travelling; it needs a laborious effort of thought involving theories, models, metaphors, and narratives that can make us “see something as something” and thereby help us to distinguish contours and new configurations in the world. Discoveries require inventions, among which theories are the most important.

Where exactly do theories come from? Sometimes we pick them out of the vast reservoirs of history, and sometimes they come into existence through a lively imagination. Typically, however, theories “wander” among different fields of knowledge. It is important to note that natural selection, which functioned as an utterly decisive model for giving meaning to Darwin’s observations about relations in nature, was derived from another, emerging academic discipline: that of economic science. In October 1838, the young Darwin, mostly for his own amusement, read a book that was to influence his thinking profoundly: An Essay on the Principle of Population by the British economist and cleric Thomas Malthus (1766–1834). In it, Malthus considered the effects of the overproduction of population alongside the “terrible correctives of the redundance of mankind” which periodically ensued.23 Malthus argued that population numbers would outstrip food supplies if the former were not kept in check by wars, famines, and epidemics. Should the population increase in the absence of a corresponding increase in productivity, the result would be food shortages, which in turn would lead to a reduction in the population. Eureka! Finally, Darwin had found a conceptual model that he could use: a theory that could help him to understand and explain his empirical data.

It is important to remember that Darwin’s ground-breaking theory of natural selection as continual competition and struggle was inspired by the writings of an economist. With the passage of time, this figure of thought would continue its “journey,” now bolstered by the considerable scientific prestige of the natural sciences, and return to economics, where it would form the underpinning of Herbert Spencer’s Social Darwinism—and in this new setting confer scientific legitimacy upon the idea that individuals, groups, organizations, and nations should, in the interests of productivity, be allowed to compete with each other unhindered. We see here an excellent example of how a theory that originated in economics acquired greater scientific prestige after having made a “detour” through the paradigm-forming natural sciences. Yet the fact is that Malthus’ theory suited Darwin unusually well, in part because his own scientific grammar was so thoroughly saturated with the metaphors of capitalism. In order to understand and explain how nature operated, he simply made it into a market.
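The logic Darwin borrowed can be stated compactly. Malthus’ own formulation contrasts a population that, unchecked, grows geometrically with a food supply that grows at best arithmetically. Schematically (a formalization added here for clarity, not part of the original text; the symbols are purely illustrative):

$$P(t) = P_0\,r^{t} \quad (r > 1), \qquad F(t) = F_0 + c\,t \quad (c > 0)$$

Because geometric growth eventually overtakes any arithmetic increase, a point must come at which $P(t) > F(t)$ unless “checks” such as war, famine, and epidemic reduce the population. It was this structural surplus of individuals over available resources that Darwin transposed onto populations of organisms.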
This way of finding a language for reality is not innocent, of course, but this is not a reason for questioning his ideas so long as they do not have the effect of directly distorting the natural world’s modus operandi: “Darwin did indeed see that natural world through capitalist spectacles, but spectacles often help us to see things more clearly.”24 Theories are never innocent, and it is important to be aware of what they actually do with reality. Sometimes it can also be helpful to know where they come from. Scientific discoveries require inventions in the form of theories and models. These make visible certain aspects of reality, but they also conceal others—thus, Darwin’s “spectacles” become dubious only if we think that nature is nothing like a marketplace. Theories and models have a general tendency to travel, from researcher to researcher and from discipline to discipline, and when they are translated into new contexts, they often acquire new and unexpected meanings. Without either historical perspective or philosophical awareness, we easily become blind to the limitations of theories, and as a result we are unable to responsibly safeguard opportunities for new discoveries.
Journeys of education inspire adventures of knowledge

Seldom, if ever, do knowledge and science emerge from an empty, hermetically sealed space. In the case of Darwin’s key insight that animals are not part of a harmonious order—whether organic or mechanical, underpinned by scripture or scientific models—but in fact seem to “fear” each other, he had also taken inspiration from his boyhood hero, Alexander von Humboldt (1769–1859), a researcher who was one of history’s greatest natural scientists. In 1799, during a temporary lull in the chaos that the French Revolution and the Napoleonic Wars had unleashed in Europe, Humboldt had finally embarked upon what was to be a five-year voyage of discovery in Latin America, an adventure that would fundamentally change his life and thinking. Humboldt had inherited a large fortune from his mother and thus, like Darwin, was a man of substantial means, something which at that time was a condition for being able to engage in research. Unlike Darwin, however, he also brought with him a large number of scientific instruments, which enabled him to make observations and take measurements so as to explain and understand the things he encountered during his eventful journey. His considerable strengths included a capacity for wonder at nature: “Humboldt ‘read’ plants as others did books,” writes Andrea Wulf in her major biography of Alexander von Humboldt.25
Like few others, Humboldt was able to retain vast bodies of knowledge from entirely different worlds—cultures of knowledge that would later exhibit a clear tendency to drift apart into what we will later be discussing in terms of “the two cultures.” Humboldt’s interest in knowledge, which extended far beyond the principles of simple classification that had become a powerful and dominant method in science since Carl Linnaeus, equipped him for the intellectual task of imagining the earth as a vast, single organism. As he saw it, the world was a tapestry in which everything was joined together, albeit violently so, since it was characterized by an unrelenting struggle in which every creature was fighting for its own survival: “What hourly carnage in the magnificent calm picture of Tropical forests,”26 noted Darwin in the margin of his well-thumbed copy of Humboldt’s 1816 account of his journey to South America, as he made his own journey there, many years later, aboard the HMS Beagle. Humboldt’s ability to combine discoveries and inventions seems in many ways to have served as a paradigm for the modern science that had begun to take shape during his lifetime.

When he did finally return home from his long journey, he brought with him a new worldview. He now saw connections everywhere. But they did not form a harmonious order in which God, from the beginning of creation, had assigned every species a particular function, nor was it a world whose parts were integrated like a machine. Humboldt did not view nature instrumentally, as if it were merely a resource for humanity’s needs. Instead, as Wulf concludes, in looking beyond mere details, classificatory issues, and mechanical connections to the entire ecosystem, Humboldt laid the foundations of modern environmental thinking.
Shifting perspectives and intellectual agility

As we have seen, the story of Columbus heroically challenging his contemporaries’ belief in a flat earth, a story which only took shape in the nineteenth century, is contradicted by the fact that virtually no (educated) person of his day held such a belief. Instead, the Columbus story reveals a great deal about how modern science has understood itself—and also about us and present-day science. Even the very word discovery may be deeply problematic in the sense that it offers a distorted perspective in which one party unilaterally discovers another. The very discourse of “journeys of discovery” attests to an ethnocentrism in which a Eurocentric gaze beholds the other whom it meets in other parts of the world, and whose name and significance it likewise presumes to have the right to define—as if the histories of other continents only began after they had received a visit from
Europeans! The perspective of Europeans was formed by a “tunnel of time” that extended back to the Garden of Eden or the Golden Age of classical antiquity: “Everything important that ever happened to humans happened in that tunnel.”27 But from the point of view of those who were already living on the continent where Columbus found himself—without really grasping where he had ended up—this had nothing to do with “discovery.” In light of the brutal history that ensued, it would be more accurate to define this first phase as the “conquest” or “invasion” of America. And even the name America itself is problematic, since it derives, as has already been noted, from that of a European: Amerigo Vespucci. What was it, then: a discovery or a conquest?

The question has a general applicability. The ability to make this kind of shift of perspective is, in fact, crucial not only for understanding modern science but also for becoming aware that the Scientific Revolution, the Enlightenment, and the declarations of universal human rights issued by the French and American Revolutions had another side: they were formed by a brutal colonial history underpinned by a racist worldview. It was simply not the case that colonialism came along first and that science was only later exported to the colonies. On the contrary, the emergence of modern science and the colonial expansion of Europe after 1492 were completely intertwined phenomena, two sides of the same historical reality. Nor was it the case that scientific advances sought to remedy underdevelopment, because the same scientific resources that colonial powers used to develop their own societies were simultaneously used to prevent development from taking place elsewhere.28 This reminds us of how closely the birth of modernity and the early history of industrialism are bound up with the slave trade triangle in which a stream of slaves, cotton, and manufactured goods for two hundred years bound Africa, North America, and western Europe together. As this grim background indicates, it could be argued that, from a non-European perspective, modernity began with slavery. Modernity or slavery, civilization or barbarism? It all depends on your perspective.

When considering Columbus, Humboldt, and Darwin together, it is striking how necessary the journey is for science to be able to make discoveries and develop new knowledge, and likewise how crucial it is to have the capacity to shift perspective and use different theoretical frames to structure one’s experiences. The development of knowledge and science presupposes not only “external,” physical journeys but also “inner” journeys and the development of an intellectual and cognitive agility. Both Humboldt and Darwin seem also to have had an exceptionally well-developed ability for combining a fine sense for local details with an extraordinary capacity for “zooming out” to integrate global comparisons, such that their
investigations succeed brilliantly in combining perspectives from within and without: This flexibility of perspective allowed them both to understand the world in a completely new way. It was telescopic and microscopic, sweepingly panoramic and down to cellular levels, and moving in time from the distant geological past to the future economy of native populations.29
Humboldt may not have discovered a new continent (like Columbus) or uncovered the laws of nature (like Newton), but Wulf is right in saying that he discovered a new way of looking at the world. This reminds us of the importance of inventions for making new discoveries. Discoveries require journeys, both physical and cognitive. But an adventure of knowledge must also be combined with meticulousness, endurance, theoretical awareness, and rigorous judgement as well as bravery, curiosity, a taste for the unexpected, and the ability to think differently.

Part of the dynamism and flexibility of science is its ability to shift scale. What we call research contains a spectrum of perspectives that include the planetary systems of astronomy with their infinite spaces, large data sets whose anonymous statistics lay claim to represent human behaviour, the study of individuals at the biographical level, and a microcosm of cells and chemical substances. It is entirely legitimate to make use of different levels of analysis from this variety of scales, but it is essential always to remain aware of the scale being used and how it relates to the other scales that exist in scientific work. It is easy to forget that a change in scale does not simply mean “magnifying” or “reducing” the world—a different scale entails a change in the informational content itself. There is nothing innocent about shifting scale, as Blaise Pascal once elegantly and concisely put it:

Diversity. . . . A town or a landscape from afar off is a town and a landscape, but as one approaches it becomes houses, trees, tiles, leaves, grass, ants, ants’ legs, and so on ad infinitum. All that is comprehended in the word “landscape.”30
An everyday example of the importance of changes in scale can be seen when one uses GPS at sea: the archipelago on the screen can seem to be neatly defined, but a change in scale to a magnified view suddenly reveals a set of shoals that at the coarser scale had seemed entirely harmless for the boat. Science is entirely dependent upon variations in scale. Because different academic disciplines work at different scales, it is essential to understand that variations in scale also change the informational content of
scientific investigations. The possibility of variations in scale within scientific activity presupposes intellectual agility in research and teaching, such that we always remain aware of the scale at which we are operating and how it relates to other scales in the study of reality. At the macro-level, the world can appear anonymous and deterministic, but a change of scale to the individual level typically reveals people to whom we wish to ascribe a very different degree of agency—and if we then continue to the micro-level, this degree of agency can in turn seem to disappear again. The question is not which of these levels best describes the human condition. Rather, it is in the very ability to vary the scale that what truly makes us human is made visible. In order to understand human beings, it is therefore necessary to find a correlation between these different perspectives and interpretations. It is for this reason that the everyday practice of science is so riven by conflicts of interpretation.31
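The point that a change of scale alters informational content, rather than merely shrinking or enlarging the picture, can be made concrete with a small computational sketch (an illustration added here, not drawn from the original text; the grid and depth values are invented for the purpose). Coarsening the resolution of a sea chart, as a GPS plotter does when zoomed out, can make a dangerous shoal vanish:

    # A minimal sketch of how coarsening a sea chart's resolution erases
    # information: a shoal visible at the fine scale disappears in the average.
    # (Hypothetical depth values, invented purely for illustration.)
    import numpy as np

    # Fine-scale grid of sea depths in metres around an imagined archipelago.
    depths = np.array([
        [22.0, 21.5, 23.0, 24.0],
        [20.0,  0.5, 22.5, 23.5],  # a 0.5 m shoal: dangerous for a boat
        [21.0, 22.0, 24.0, 25.0],
        [23.0, 24.0, 25.5, 26.0],
    ])

    # Zoomed in: every cell is retained, so the hazard is visible.
    print(f"Fine scale, shallowest point: {depths.min():.1f} m")   # 0.5 m

    # Zoomed out: the whole area collapses into a single averaged cell,
    # and the shoal is no longer part of the informational content at all.
    print(f"Coarse scale, averaged depth: {depths.mean():.1f} m")  # ~21.7 m

The coarse value is not a blurrier version of the fine one; it answers a different question, which is why the scale in use must always be made explicit.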
Modern science emerged with positivism

As we have seen, the story of Columbus and his foolish medieval contemporaries—formulated as a conflict between, on the one hand, a modern secular hero equipped with superior, scientific rationality, and, on the other, superstitious religion that blinded people to the reality of the world being a sphere—has very little basis in historical reality. Nevertheless, this image of an irreconcilable conflict between science and religion has often been exaggerated; it is certainly not correct, particularly not with respect to the sixteenth and seventeenth centuries. Rather, this narrative originates in hostilities that in fact only appeared several centuries later. The real drama in the story took place in the nineteenth century, particularly in the era after 1850, extending into the early 1900s. It was during this period, as it became necessary to mobilize resources in the defence of science’s institutional autonomy and cognitive freedom against a church that sought to control people and limit freedom of thought, that the Columbus story, about events that supposedly took place three or four hundred years previously, emerged. For this reason, it could be said that the modern view of science, which continues to the present, only acquired its principal features as late as the nineteenth and twentieth centuries, when latter-day ideals and challenges were projected back onto the Scientific Revolution of the sixteenth and seventeenth centuries.

The story of Columbus and the flat earth is not an isolated phenomenon; it emerged in a very specific historical context and derives its strength from a larger narrative, that of secularization. Traditionally, the concept of secularization has been associated either with a more limited process in
which property, functions, and power were transferred from church to society, state, and crown, or with religion’s gradual loss of ground in society, even to the degree that it was entirely abandoned. Yet the fact is that this secularization narrative is far less about religion than it is about science, since its focus is on how science sets the terms of social development in general. In the nineteenth century, it had become increasingly evident that science played a crucial role in industrialization, growth, and prosperity, as well as in the development of modern society more generally in terms of progress. Such was the insight of Auguste Comte (1798–1857), who is usually considered one of the founders of sociology, but who was also one of the first to formulate the overarching science narrative that is now termed positivism.

Comte’s main area of expertise was in what would later be called social science. In the very beginning, he used the term “social physics” to refer to his new science focused on society, but in order to differentiate himself from competing intellectual projects of the time he soon adopted the term sociology. He had taken the word “positivism” itself from Henri de Saint-Simon, a political philosopher for whom he had long worked as a secretary. Because of the temporary dissolution of the educational institutions in the wake of the French Revolution, Comte himself was never able to pursue higher education formally in a way that would have qualified him for academic positions. As a result, he was obliged to earn his living from other kinds of work and donations. Despite this, Comte was able to develop positivism and invest this concept with grand expectations for a new science—and a new society.

A social and political radical in the turbulent period of the early 1800s, Comte was critical of church, state, and capitalism. In this situation, he portrayed humanity’s powerful drive to develop knowledge as the organizing principle by which individuals navigated their world. When Comte devised the narrative that would assign science a key role in successful social development, his perspective was dominated by his opposition to religion. From the perspective of philosophy of science, this critique of religion was a reckoning with the tendency to use anthropomorphisms in order to understand how the world works, that is to say, the attempt to explain events in nature and society in terms of intentions, as if they were the expression of human actions. When these intentions are transferred to various gods (polytheism), the unitary will of a godhead (monotheism), or speculative concepts and abstract laws (metaphysics), our understanding of both nature and society is effectively clouded. Using positivism’s grand narrative, Comte sketched out the history of humankind as progressing irreversibly and in linear fashion from a theological-religious
stage (defined by authoritarian, anthropomorphic thinking that explains nature in terms of divine intervention), via a metaphysical-philosophical stage (in which a troubled society looks to the abstract principles, ideas, and forces of speculative thinking in order to account for phenomena and change), to a positive-scientific stage (in which the world is explained in and of itself, which is to say, on the basis of its own laws, knowledge of which can only be acquired by carefully investigating the data supplied by experience). At that point, Comte recognized history as coming to an end—it was not possible to imagine a continued development, because humanity had entered a phase of transparency by virtue of achieving a rational society organized according to science.

As already noted, there is a close connection between positivism and the emergence of the social sciences. Given that the social sciences were also made possible by the positivist narrative, which relates the victory of science and the decline of religion, it is hardly surprising that sociologists, political scientists, and economists have traditionally shown a highly limited interest in religion when trying to understand and explain modern society. In its general form, this narrative conventionally relegates religion to its special role of “enemy number one”—while implicitly assuming that it is only a matter of time before religion is suppressed and disappears for good. They likewise see themselves as having identified a necessary relationship between the modernization that has science as its driving force and the secularization that involves both the transfer of various functions from church to society and the waning influence of religion over people’s consciousness. Consequently, they have staged an absolute opposition between religion and science, superstition and rationality, history and future, which continues to be reproduced. It is thus upon these distinctions that modern science has built—and continues to build—its own self-understanding.

In the light of this clear antagonism towards religion, it is a somewhat curious fact that Comte himself chose to represent the organization of science using models taken from religion. Ironically, his perspective has several similarities with those of powerful religious movements of his day, including puritanism and pietism, such that his own positivism can almost seem like a kind of cognitive puritanism and pietism. While the concept of religion that Comte embeds in his narrative is characterized as superstitious and metaphysical, and even presented as pretending to be absolute truth, his own alternative is no less defined by orthodoxy and similar claims to absolute truth. As a result, it perhaps comes as no surprise to learn that, later in life, he founded a church of positivism in which worship of God was
replaced by worship of humanity and in which saints were replaced by distinguished scientists.

In any event, modern science in the real sense of the word arose in tandem with positivism, and it is positivism that has set the agenda for modern science. It might be said that Comte’s book A General View of Positivism (first published in French in 1844) presented a narrative that would become the creation myth of modern science, a narrative that would both actualize and suppress philosophy’s critical thinking (which had, of course, destabilized society by inspiring the disruptive events of the French Revolution) as well as religious myths (whose metaphysics merely obscured the physics of the world). The future belonged to science alone.

Although positivism is generally characterized by a foundation in realism that can be described as hostile to theory, Comte’s variety of positivism focused primarily on causal relations. For him, mere fact-gathering had no value whatsoever if one did not inductively identify in the empirical data a generalizable relationship that could be incorporated into a larger structure of laws. With a view to preventing arbitrariness and fragmentation, Comte directed his primary interest towards the logical structure of science. Since the 1800s, a great many similar narratives have been advanced, all seeking to identify discrete, law-governed phases of development in history, economics, psychology, pedagogy, and religion—and all of them dominated by the idea of development. The way modern science understands itself is strongly influenced by, and ultimately impossible to dissociate from, the notion of progress, as might be expected from the way that science is focused upon the production of new knowledge by means of systematic methods.

For Comte, science was a single entity. He saw a clear continuity between all disciplines and ascribed a paradigmatic role to the most successful of them (i.e. the natural sciences). This is a key idea, to which we will return many times in this account, in that it fundamentally set the agenda for the theoretical discussion of science. Yet social scientists’ love for the natural sciences has rarely, if ever, been returned. Physics, the science prized most highly by positivism, would show an almost complete lack of interest in positivism. In the words of Bengt Hansson, whose studies of positivism have deeply influenced my own view: “What positivism calls ‘physics’ is not what normal physicists engage in but rather the idealized image that positivists themselves have created of what physics ought to concern itself with.”32 It is, however, strategically decisive for the way one views positivism’s version of science as a single entity whether one recognizes it as something
that actually can be realized (or, indeed, already has been realized) or treats it as part of a quest for truth in which different knowledge projects always need to be related to each other, in a process defined by conflict and mutual correction of the other’s position. Interestingly, Bengt Hansson underscores the fact that Comte nonetheless exhibited an open-minded approach towards science (as something that needs to be continually revised and, in one of his greatest insights, as both generalizable and testable) that contrasted sharply with the assurance and infallibility which had previously dominated the understanding of knowledge.33 Putting positivism in its historical context also allows us to understand Comte’s expectation that science would not only contribute to prosperity but also recreate social order and intellectual stability after the chaos which philosophical speculation had unleashed in the French Revolution.

Comte was also a pioneer with regard to writing the history of science. The narrative of positivism showed (appropriately enough) that Comte himself had history on his side. Both the mists of medieval religion and the systematic doubt that Comte associated with the free and critical thinking of the metaphysical stage, which in his own time had inspired a climate of revolution, would die away of their own accord.34 And because science had been assigned such a key role in creating a positively integrated society, it was obvious that authority over scientific institutions should be transferred from the church to the state. Herein lay the outline of a program for modern science.
In what sense was Columbus nevertheless a hero? We began this chapter with the established narrative of the heroic Christopher Columbus. We then proceeded to deconstruct that narrative and steadily reveal a far more complex historical reality. Columbus made a journey whose world-historical importance he himself never fully grasped—because he lacked a theoretical perspective and the other discoveries that might have raised him above his immediate experiences and conferred meaning upon the reality he had “discovered” (without, in fact, having discovered it). When Alexander von Humboldt later set sail for South America, not only did he take with him a technical apparatus of inventions that radically extended his ability to investigate the world, but he also possessed a rare ability to move between the poles of the highly concrete and the deeply theoretical that we recognize as hallmarks of modern science. Darwin took his starting point in Humboldt’s discoveries and theories when he made his own journey around the world. When he found himself unable to make sense of the natural phenomena that he
encountered on that journey, he drew on theoretical frameworks from another, emerging discipline: economic science. While Humboldt and Darwin are considered heroes of science, Columbus was an adventurer who has no real place in the history of science. However, even if he was not the heroic advocate of reason that the nineteenth-century story casts him as, does this mean that he had no significance at all for science? Does this story have anything else to teach us? We will have reason to return to Columbus in subsequent chapters, but for now let me mention some of the things that we can learn from his achievement (despite his ignorance as to its scope).

In the first place, the Columbus story tells us something important about the necessity of travelling and exploring the world in order to develop new knowledge. Science demands curiosity and agility, enterprise, and boldness.

Second, the story teaches us about the decisive importance of theories for the logic of discovery. Sensory experiences or “pure” empirical data are not enough. Discovery also requires the kind of inventions and models that we call theories if we are to be able both to understand where we have arrived and to explain what we have found there.

Third, the Columbus story teaches us something about the necessity of putting things into their historical context and the importance of historicizing even historical accounts. What at first glance seems like a story set in the late 1400s in fact turns out to be a drama set in the nineteenth century, almost four centuries later. In other words, to understand what science is, it is necessary to know something about its history, particularly the interplay between the scientific revolution of the sixteenth and seventeenth centuries and the consolidation of modern science in the nineteenth and twentieth, when its ideals were systematically projected back onto the earlier period.

Fourth, it is important to note that the story of Columbus and the flat earth is part of a larger narrative about progress, a narrative that follows the emphasis on development and progress that characterizes positivism’s own narrative of science as having the future as its horizon of expectation.

Fifth, the Columbus narrative shows how closely the advent of modern science is intertwined with the necessity of maintaining a clear line of demarcation with religion. This may explain why religion has long been a marginal theme within the social sciences, but also why the scientific community continues to be so fixated upon religion as the greatest threat to scientific development. Even today, there is an almost alarmist reaction to any encounter between science and religion—a red flag—despite the fact that science (in our part of the world) is rarely, if ever, threatened or constrained by religion. A far greater threat is posed by the (authoritarian,
but sometimes also democratic) political regimes and economic interests (in the private as well as the public sector) which often seek to direct, control, and occasionally even eliminate scientific activity. In order to understand why universities, while today informed almost entirely by the grammar of economics and organized along narrowly financial lines, still regard religion as a major threat to academic freedom, we need to recall the degree to which history, even now, continues to make itself felt and to influence the impulses and self-understanding of science. The fact that the social legitimation of science and research today is primarily a matter of economics can be seen everywhere, from the expectations of state and society that they deliver a return on investment in terms of growth and competitiveness, to the way that most scientists, to a startling degree, are willing to adapt their research focus in accordance with the prevailing opportunities for funding. Despite this, it nevertheless remains routine to gloss over the role of economics in the literature on the philosophy of science.

Sixth, the Columbus story can teach us something about how seemingly foolhardy acts can sometimes have a productive value for scientific progress (as they can for entrepreneurship). Many of the most ground-breaking research findings, from Isaac Newton to Marie Curie, Albert Einstein, and Arvid Carlsson, were initially met with skepticism by the scientific establishment. It is important to stress here that, in science, no-one can ever fully grasp all the forces at work: there are always additional factors that have not (yet) been identified, “continents” that no-one has yet discovered. The fact that a solitary, uneducated fortune-seeker at the end of the fifteenth century could achieve something greater than all the university scholars of an entire continent is a historical lesson that remains relevant today. It is true that science demands a rigorous approach to the development of knowledge, but it also produces orthodoxies. Without adventures in knowledge and scientific voyages of discovery, we are unlikely ever to encounter new “continents”—and, as a result, will have nothing to explore and rigorously investigate.
The art of divvying up the world scientifically Columbus was trapped in a conception of the world that prevented him from discovering the “new” continent which he had reached. Quite simply, he lacked the theories that would have made it possible for him to rise above his immediate circumstances and glimpse reality with the perspective of some distance. In its need for theories, science opens itself to the communicative milieux and the collegial scrutiny that underpins its search for truth. In the process, it reveals a world that we can share with each other
in the form of a scientific world. But it also confirms the fundamental precondition for the creation of meaning, that it is only by dividing the world up that we can share it with each other.35 The fact that we need distinctions to be able to orient ourselves in reality, and for the world to cohere for us in a meaningful way, also holds true for the scientific world. However, this also teaches us that there is no innocent way of sharing the world with each other. There is a real art to making distinctions in such a way that the world does not, in the absence of meaning, disintegrate into incomprehensible chaos, while also taking care that, in our search for coherence in reality, we do not assign too much meaning to the world.

The successful development of concepts is to a large extent a matter of making shrewd distinctions. The concepts we use to describe the world scientifically are largely generated by such distinctions. In the worst case, these can become rigid binary dichotomies, whose conceptual terms are mutually exclusive, resulting in an ossified understanding of reality. Our orientation in the world will be immeasurably better if these distinctions are allowed to operate as dialectical terms, so that they can stand as interdependent poles whose interaction can serve as the motor of an investigative process. For science, the dialectic between discovering and inventing has a strategically decisive role, and in this way, we become participants in a world that is vital and dynamic and that engages us in a process of continual renewal.

Within science, there is always a discussion about which fundamental distinctions are best suited for understanding—and intervening in—the world. These discussions periodically develop into controversies and academic disputes or, at worst, outright conflicts. I would like at this point to highlight some of the most common conceptual distinctions within science, which the following chapters will return to and define in greater detail:

• Ontological/epistemological. One fundamental distinction in the scientific world concerns the difference between ontology, or the fundamental assumptions we make about the nature of the world, and epistemology, which restricts its interest to questions about the possibility of knowledge about reality. There is a significant difference between the epistemological claim that “our knowledge of the world is merely a social construction” and the ontological claim that “the world is itself a social construction.” It also makes a great difference whether we are referring in these terms to a political party (which few would deny is a social construction) or the Himalayas (which can hardly be reduced entirely to a social construction). If we limit ourselves to epistemological claims,
constructionism becomes slightly less controversial, but if we make ontological claims about how things “really” are, the debate can get more heated. It can become harder to deal with the uncertainty that easily results from claiming that not only organizations and conventions but even phenomena such as the economy or love are all “merely” social constructions. Historically, interest in ontology and epistemology has not been equally distributed. It is usually argued that Western philosophy took an “epistemological turn” with Descartes in the 1600s, with the consequence that ontological issues were set aside and questions relating to the theory of knowledge remained the focus of philosophy and science for several centuries. Eventually, however, many thinkers came to argue that we are always already a part of, and interwoven with, the world which we seek to understand using our knowledge. As a result, the twentieth century saw a resurgence of interest in existential conditions, something that once again reminds us of the ontological preconditions for knowledge.

• Qualitative/quantitative. Another key distinction that has often divided modern theory of science is the difference between qualitative and quantitative methods. Large parts of the process of change that we call the Scientific Revolution, but that in reality was a natural science revolution, involved a shift from an older, Aristotelian, and teleological tradition, which was primarily interested in the meaning of existence and purposive qualities, to a Galilean tradition, which instead looked for mathematical explanations that might be quantified in terms of causal relations. For example, if we want to understand why a particular population is attracted to a certain political ideology, we can investigate the ways in which they express their preferences (by means of qualitative in-depth interviews that can then be analysed in terms of different explanatory models), but we could also proceed on the basis of statistics on, say, class in relation to ethnicity, in order to see if there are any significant correlations. Putting one’s trust in language in a qualitative spirit versus focusing on figures in a quantitative spirit has at various times served to define a deep divide between different researchers and disciplines. Today, however, most people recognize that these different methodological traditions can be usefully combined.

• Induction/deduction. Anyone who joins a scientific community quickly finds themselves confronted by a choice between an inductive and a deductive approach. Induction means that one, so to speak, works “upwards” and moves from the particular to the general. It is
a matter of being guided by an empirical epistemological interest, which is usually also combined with an ambition to identify some kind of conformity to a law. Deduction means that one instead proceeds on the basis of different theories, hypotheses, and axioms, whose durability is then tested against empirical reality. In the former (induction), we are dealing with empirical methods; in the latter (deduction), it is the logic of reason itself that governs, so that the movement is from the general to the particular when one proceeds from established theories and then tries to understand, explain, and provide evidential ground for those starting points. However, if the focus on empirical data is counterposed to the interest in theory, as if it were an either-or choice, there is a risk of science being short-circuited. To what degree is it possible or even desirable to engage with empirical data without any kind of theoretical framework? What are the risks in proceeding on the basis of models and narratives whose empirical material has been selected in advance? Underneath this opposition between inductionists and deductionists, we can hear an echo of the recurrent battles between empiricists and rationalists—and, still further back in time, the dispute over universals between (conceptual) realists and nominalists.

• Verifiability/falsifiability. An important component in the positivist narrative of science, which will play a key role in this book, relates to the tension between those who hold that scientific knowledge must be empirically verifiable and those who argue that what instead defines science is its being formulated in a way that can in principle be falsified. This distinction has a strategically decisive function in that it in some sense represents an inner tension within the positivist understanding of science, even as it also articulates a kind of cognitive anxiety that has played a major role in propelling science forward.

• Empirical/analytical. Within science, it can often be difficult to differentiate between material and method, i.e., between what one is investigating and how one intends to “go about” investigating it. Since empirical data is almost always the result of (more or less consciously) methodical work, it also raises the question of how we can establish either an empirical level using observation and description (which relates to “what” something is) or an analytical level (which is instead tied up with questions about “how” to understand and explain the phenomenon being investigated but also associated with “why” a phenomenon occurs).
• Implicit/explicit. In some sense, the transition from the empirical level to the analytical level involves a movement from what is implicit to what is explicit. During the last fifty years, we have seen a growing awareness of the limits of analysis and formalization, that is to say, of how much of our knowledge is “silent” (tacit) and can only be implied, and how much can in fact be explicitly articulated and can therefore be an object of analysis. This relationship between implicit and explicit is also central to any understanding of how scientific discoveries are made and, in addition, crucial for the development of various theories of learning.

These are the kinds of scientific distinctions that enable us to orient ourselves in the world and to investigate reality. However, using them to investigate new worlds requires a dynamic interplay between the different elements of these distinctions—when they become rigid and fixed into binary dichotomies, the adventure of knowledge grinds to a halt and is replaced by stereotyping. If we instead allow these distinctions to implode into a single reality, it becomes difficult to discern the outlines of the world and, more generally, to orient ourselves in it. It is only by dividing the world up that we can share it with each other.

Science is a deeply serious project. Scientific results are simultaneously provisional and binding, and scientific activity therefore presupposes the kind of agility that is to be found in a fundamentally dialectical approach. Without this interplay, there would be no journey of knowledge nor any scientific discoveries! Science is a difficult art because it involves managing our need for provisional arrangements and preliminary interruptions, so that phases of stabilization do not eradicate the conditions for the continual destabilization of our knowledge which unceasingly advances the adventure of knowledge and the scientific project.
CHAPTER 3

ANACHRONISMS IN THE PHILOSOPHY OF SCIENCE
Let us return to the question posed in the introductory chapter: What exactly is science? The answer to this question is not immediately apparent. Most of us would as a matter of course probably include physics, chemistry, and biology among the scientific disciplines but would be less willing—indeed, would perhaps refuse—to accept that history, theology, and music should also be included within the domain of science. And yet disciplines like history and theology are far older and had an assured place within the university for more than half a millennium before subjects such as physics and chemistry found their way into academia. Music was also long regarded as closely allied to mathematics—so why do we accord it so lowly a scientific status when that of mathematics is so high? How did it come about that we rank the different disciplines and their respective scientific status so differently? In order to find an answer, we need to turn back to history and learn more about the historical process that gave rise to science—because the current grammar of science has in a profound way been shaped by the history of science.
What does history mean for science? There is a striking absence of historical perspective in both the everyday practice of science and the various accounts of philosophy of science. As philosopher of science, Samir Okasha, notes laconically: “In today’s schools and universities, science is taught in a largely ahistorical way.”36 This lack of historical perspective—in which I also include accounts that present different theorists of science as a set of abstract and decontextualized positions—has potentially devastating implications for our understanding of what science is and how science really works. This is a serious issue because our inability and unwillingness to address the history of science risks lending impetus to a cluster of scientistic, dogmatic, and fundamentalist positions, which are currently being reinforced by the anxiety felt by many
people facing the profound cognitive challenges posed by a globalized relativism. In order to develop a more balanced position, beyond both objectivism’s claims to absolute knowledge and relativism’s resigned arbitrariness, we need to take seriously the fact that science has a history. In other words, we need to carefully situate our understanding of science within a historical context. It is essential to know something about history if we are to be able to say anything about the claims, conditions, and possibilities of science in today’s world.

This kind of historical contextualization presents a major challenge to the conventional view of science and its theoretical status because it immediately shows that science is a phenomenon that has changed considerably over time. This insight can provoke anxiety as well as questions. Can we really rely on a science that is continually changing? Does it mean that, as Heraclitus asserted, panta rhei, everything flows? What are the consequences of this insight into the mutability of history for scientific truth claims? Matters are further complicated by the considerable challenges that face anyone trying to write a history of science, which is often little more than a narcissistic mirror that confirms the prevailing view of science at that point in time. Whenever science has told its own history, it has all too rarely taken the form of critical investigations or narratives that might challenge how we think about ourselves—ideas that shake us as scientists by also highlighting the discontinuities and ruptures in the history of scientific development.
The dark historical chapters of science Mad scientists and geniuses with psychopathic traits are familiar figures in literary history and popular culture. In fictional protagonists such as Victor Frankenstein, Dr Strangelove, and Mr Freeze, science serves as the unique embodiment of the most destructive and demonic qualities of humankind. The apparently innocent statement that science has a history also reveals the dark history of science and the issue of its contributions to military technology and its role in wars of extermination. It may not be pretty, but the fact remains, that science and war have historically often marched in lockstep. The First World War evolved into “a war of chemistry,” the Second World War into “a war of physics,” and with the rise of Big Science in the postwar era, science has been almost entirely integrated into what has become known as the military-industrial complex, which has to a great extent dominated research funding during both the Cold War and what is sometimes called the “Hot Peace.”
And yet this scientifically enabled war is really only one side of the story, since war itself has also been an extremely important engine of scientific development. In reminding ourselves of science’s importance to war, we therefore also need to acknowledge that war has had an even greater importance for science’s own development. It is a matter of fact that the Second World War brought about massive advances in a wide range of fields of knowledge, from vehicular and energy technologies to psychology and organizational theory. This brings us back to fundamental questions regarding the complex relationship between theory and practice in the logic of scientific development.

Something similar holds true for ecological degradation: scientific development has been a prime factor in the destruction of the environment—at the same time as science represents a necessary component of any strategy that seeks to address our environmental problems. Other dark histories also lurk in the shadows of science, histories relating to colonialism, racism, and forcible sterilization in the name of science, not to mention the Holocaust, an industrially and administratively perfected system that would not have been possible without the advances made by modern science. This forces us to ask serious questions about the extent to which science is responsible for the development of war machines, industrialized genocide, and environmental threats and climate change. These are questions to which we will return.
Our relationship to the past is inescapably bound up with anachronisms. But the real question is whether this does not also hold true, in a very particular way, for the history of science. The concept of anachronism refers to narratives that do not fully accord with chronology, accounts that ascribe to people, communities, or events some quality that did not actually exist at the time in question. Anachronisms can arise either by introducing into the narrative something that lies even further back in time (analepsis), or by inscribing into a previous historical situation something from the present moment, or at least from a later time (prolepsis). It is probably not a coincidence that the latter kind of anachronism is the most common, since we have a strong tendency to incorporate things we ourselves take for granted into our accounts of other historical contexts, even when these did not yet exist. For instance, anyone who claims that “the United States is facing its worst drought in a thousand years” needs to remember that the United States to which they refer has a far shorter history, one that is barely a quarter of the timeframe
invoked. It might be assumed that this habit is innocent, but the example of the Columbus narrative, with all its dark sides, forces us to ask whether such an anachronism is not in fact in danger of exterminating the indigenous population for a second time by excising their fate from America’s collective memory.

One area where anachronisms are a particularly frequent occurrence is national historiography, since this kind of account invariably has its starting point in the current borders of the nation state, even if these had not been established at the time in question. For example, in accounts of Swedish history, it is not unusual for writers to ignore those parts that today form part of its neighboring country Finland, despite the fact that for more than six hundred years these were integrated parts of a joint kingdom—even as those same writers do not hesitate to include the southern province of Skåne in their historical accounts of Sweden, including those long periods of time when it formed part of Denmark, and despite the fact that the “Swedish” history of this territory is only half as long as that of Finland. Merely to ask a question like “When did Sweden conquer Finland?” is to find oneself in a web of anachronisms, since neither Finland nor Sweden existed when the “Kingdom of Svea” gradually emerged as a geographical, national, and historical reality, making it difficult to speak of any kind of “conquest.”37

National anachronisms also haunt the historiography of the university. When highlighting the existence of “international” relations earlier in history, it is often pointed out that as early as the thirteenth and fourteenth centuries a considerable number of “Swedish” students were already studying at “foreign” universities in Bologna, Paris, and Oxford—despite the fact that it makes no sense to speak of “Swedish” students, “international” relations, or “foreign” universities at that time. By the same token, it might be asked why Swedes do not automatically include the prominent scientist Anders Chydenius (1729–1803)—an intellectual giant from the province of Ostrobothnia, now in western Finland, whose liberal ideas anticipated Adam Smith by several decades—in Swedish as well as Finnish histories of science, given that he was active before the separation of the realms in 1809 and was even elected to the parliament in Stockholm.38

Confronted by a mass university that currently seems in danger of losing its educational ideals of Bildung, commentators in contemporary debates about higher education have frequently invoked Humboldt University in hopeful albeit somewhat nostalgic terms, something that has given rise to what might be called “Humboldt anachronisms.” The fact is, however, that this designation for the University of Berlin was a post hoc invention of the DDR era: “Humboldt’s university” was an unknown concept for all of the
nineteenth century and well into the twentieth. It was only when this extraordinarily successful university celebrated its centenary in 1910, creating an urgent need for a grand history, that Wilhelm von Humboldt (1767–1835) was picked out as—and indeed became—the founding father of the modern university in Germany. But in contrast to this Berlin-centered origin story and its importance for a Prussian mythology of the German nation, it was actually at the so-called reform universities of Halle and Göttingen that the real innovation of work routines had taken place, in the form of seminars and laboratories. The ideas that characterized Humboldt’s draft programme for a new university in the capital were also largely drawn from other thinkers of the day. Whereas in the 1990s the anachronistic enthusiasm for Humboldt University in contemporary debates about the university was really about finding a way to counter a threatening shift within higher education, the same enthusiasm around 1910 reflected the pre-eminence which Berlin’s university had attained.39

Another example of anachronism is the way in which public rhetoric in our time unhesitatingly associates science with democracy. We easily forget that a large portion of the scientific community fought against democracy with all its might for as long as it could. Viewed from a historical perspective, the university belongs to a group of institutions that did not themselves adopt democratic principles until very late. To this can be added the entrenched resistance towards admitting women into science. Without questioning the good intentions underlying current efforts to associate science with democracy, we need to recognize that science’s legitimacy is undermined by the use of such anachronisms, which effectively conceal the fact that the university and science have often had an ambivalent, when not openly hostile, attitude towards the democratic project.

Anyone who takes an interest in philosophy of science is also immediately confronted by a plethora of anachronistic statements. These generally take the form of ascribing to science a far longer history than is reasonable. Yes, science has a long history—and yet not! The history of the human capacity for creating knowledge does indeed extend into the dim past, but if what we have in mind is a culture of knowledge that in some measure resembles that which we today associate with science, then this history is considerably shorter than we usually imagine.
Can we even speak of science, scientists, and a Scientific Revolution? It is not possible to engage with history without using anachronisms, but we must be aware of them and approach them critically. The question is
whether the scientific account of its own origins is so full of anachronisms as to cast doubt upon the very possibility of finding a language adequate to the task of relating the history of science. The challenges are already evident in the very use of the term science. Since its practitioners have now been “speaking English” for more than half a century, scientific prose is today dominated by the English term—in contrast to the German Wissenschaft, as discussed earlier—which comes from the Latin word scientia, a noun that basically means “knowledge” and that derives from the verb scire, “to know.” The historical fact that the word science nonetheless only came into use in the eighteenth century, not coming into general currency until the 1830s, is both surprising and provocative for the many people today who imagine science as a more or less timeless phenomenon. Science, in the strict sense of the word, as a concept whose meaning we recognize, thus seems to have a far shorter history than we typically imagine—even if its long prehistory must always be borne in mind. It is both correct and reasonable to occasionally set out a “long” history of science, one that includes root systems extending across centuries and millennia, but we nevertheless need to be aware that these are largely anachronisms—and that the result is often that contemporary notions of science are reinforced by being ascribed an impressive history.

Yet the way in which we use the concept of science in such a history makes a very great difference to the status that we accord scientific work, both today and in a historical perspective. The very title of the most important publication in the history of science, Isaac Newton’s famous study of mechanics from 1687, The Mathematical Principles of Natural Philosophy [Philosophiæ naturalis principia mathematica], often known simply as Principia, shows how even at that time it was common practice to speak of “natural philosophy”—or even simply “philosophy”—when referring to what we would now instinctively call the natural sciences. Although using the concept “natural philosophy” instead of “natural sciences” in today’s world carries very different connotations and a broader understanding of what constitutes science, this terminology also reminds us that, historically speaking, the various sciences (in the sense of domains of knowledge) emerged through a process of being gradually detached from philosophy. The concept of natural philosophy thereby provides a clear connecting link back to philosophy, a discipline that has otherwise tended to be held at arm’s length and even dismissed from a scientific standpoint. An anachronistic use of the concept of science similarly conceals this broader context, thereby acquiring an obvious legitimizing function in relation to contemporary science and its uncompromising view of its own origins.
Other anachronisms become apparent in the widespread tendency to describe Aristotle (384–322 BCE) as “the first great scientist” or to refer to Copernicus, Galileo, and Newton as pioneering “scientists”—even though the concept of scientist did not exist in their day and was only coined, much later, by William Whewell in a review article in 1834.40 How, then, should we regard the countless individuals who are described as scientists in science’s own story of origin, despite the fact that, prior to the mid-1830s, no-one ever thought of, or referred to, themselves as such? Clearly, a critical scrutiny of the numerous anachronisms in science’s account of itself will ultimately require us to re-evaluate large swathes of the history of science. As we have already noted, well into the 1870s it remained unclear what constituted a scientist, while the professional identity of those who identified themselves as such was to remain fluid for a surprisingly long period of time, because of the absence of any agreed-upon protocols for education, career path, appointments, and means of support. But what happens to our understanding of Aristotle when we describe him as the first great “scientist”? How does the fact that Galileo saw himself as a philosopher affect our view of what it means to be a scientist today? And how should we deal with the gendered assumptions that have historically been made in relation to the word scientist?41 Does our use of gendered categories mean that we are concealing—or reminding ourselves of—the fact that women have historically only been allowed to play an extremely marginal role in science or, embarrassingly, that reforms intended to promote gender equality only really gained momentum in the second half of the twentieth century?42

Similar challenges accompany the third dominant concept after “science” and “scientist”: the Scientific Revolution. Steven Shapin opens his book The Scientific Revolution, now a minor classic, with a formulation that is as ingenious as it is provocative: “There was no such thing as the Scientific Revolution, and this is a book about it.”43 Shapin radically questions the image of the Scientific Revolution as a coherent event in which “everything changed,” as well as the notion that this event was responsible for a visible rupture in human thought. Instead, he insists that the concept has a far more recent provenance. Before Alexandre Koyré brought the term to the attention of a broader public in 1939, it was extremely unusual to refer to the Scientific Revolution, and the term only really entered general currency in the latter half of the twentieth century. None of the heroes of the Scientific Revolution, from Copernicus and Brahe to Galileo and Darwin, had heard of the “Scientific Revolution” in which they themselves would subsequently be accorded leading parts in their capacity as “scientists.”
The concept of revolution is itself complicated and relatively recent. Coupled with our linear, mono-directional understanding of time, our modern understanding of a revolution as a radical, irreversible new order poses a number of problems, since, during the period that we (anachronistically) refer to as the Scientific Revolution, the word “revolution” in fact denoted a periodically recurrent cycle, such as the circular motions of heavenly bodies akin to the “revolutions” around the sun invoked in the title of Copernicus’ 1543 work De revolutionibus orbium coelestium. In other words, the term “revolution” at that time had a meaning diametrically opposed to its later use, including how we understand it today. In science, talk of revolutions originated with the philosophers of the French Enlightenment in the eighteenth century, who used it to shake off the legacy of the ancien régime. In fact, it was within the discourse of science that the word revolution was first used more systematically in the sense of an epochal and irreversible change. Only later did it come to refer to political events such as the American, French, and Russian revolutions, before then “travelling back” to science in the form of the twentieth-century concept of the Scientific Revolution.

Shapin nonetheless calls into question the entire notion that there existed some discrete and coherent entity called “science” in the seventeenth century which subsequently underwent a “revolutionary” transformation. Nor is it meaningful to refer to the existence at this time of a “scientific method” that was coherent, universally accepted, and effectively used.44

This raises very serious questions. Is there a connection between the fact that the history of science is seemingly one of those areas where anachronism is especially prevalent, and the fact that our present moment is dominated by a dogmatic view of science that effectively suppresses historical change and complexity? Has the sheer number of anachronisms functioned as a deviously effective way of representing science as something neutral, thereby legitimizing a view of science as the timeless manifestation of truth? If so, it may not be an exaggeration to say that there must be an element of contrivance in the positivistic narrative of science. This grand narrative of science’s triumphal progress embodies an approach to history that foregrounds one version of history—even while eliminating its critical potential by reducing it to a mirror that merely reflects the dominant ideas of the age. It doesn’t sound very “scientific”!
Secular anachronisms Representations of the Scientific Revolution have been dominated by the secularization narrative and it has typically presented a caricature of religion and science as two distinct positions, locked in irreconcilable conflict. It would nonetheless be an anachronistic error to imagine that the geniuses of the Renaissance and the heroes of the Scientific Revolution were modern, secularized individualists. Rather, these talented people were firmly rooted in a world in which religion was a given. Yet this fact is almost invariably suppressed. Copernicus was employed as a canon in a cathedral when he made his ground-breaking observations and calculations. Kepler was studying theology with a view to becoming a priest before he was drawn into a career in astronomy. Galileo and Newton were pious men who felt at home in a Christian worldview. One could go on. And yet the entire history of science is permeated by an epic narrative of a conflict between “religious” and “scientific” actors, as if it involved two different groups of people facing off. Yet this was definitely not the case. Steven Shapin makes no bones about the fact that there were conflicts between how natural philosophy viewed the world and the interests of religious institutions, but he emphasizes that these never involved a principle of necessity: “There was no such thing as a necessary seventeenthcentury conflict between science and religion.”45 Lawrence M. Principe likewise underscores that during the sixteenth and seventeenth centuries, and even later, there was no clearly demarcated group of scientific practitioners fighting to free themselves from religious repression—for the simple reason that no such separate camps existed. As he adds: Popular tales of repression and conflict are at best oversimplified or exaggerated, and at worst folkloristic fabrications […] Rather, the investigators of nature were themselves religious people, and many ecclesiastics were themselves investigators of nature.46
The way we tell the history of science would seem to be awash with anachronisms. Yet there is surely no more mythologized and misunderstood episode in the history of science than the epic battle between Galileo Galilei (1564–1642) and the Roman Catholic Church. By stereotyping Galileo as the solitary (modern) scientist confronted by a medieval Inquisition, we obscure the fact that there were also intellectual, political, and personal motives behind this particular conflict, and that Galileo in reality had both supporters and opponents within as well as without the church hierarchy. The fact that the Church made neither geocentrism nor Aristotelianism into
dogma confirms the idea that this conflict should more properly be understood as an internal struggle between different church factions.47 When Galileo published his book Dialogue Concerning the Two Chief World Systems (1632), in which the relationship between a geocentric and a heliocentric worldview is discussed in the form of a dialogue, he enjoyed the support of the pontiff himself, Urban VIII, who was, moreover, an old friend and admirer of Galileo. But because Galileo both refused to follow his protector’s advice to present the issue as being not yet resolved and (presumably unintentionally) made the pope’s position look foolish by putting his words in the mouth of Simplicio, a protagonist assigned the role of defender of the geocentric worldview, the master of mechanics fell into disfavor and his book was put on the Index.

Yet the Church was never actually opposed in principle to the new science. On the contrary, the new scientific ideas were developed and disseminated within the church and by its cadres, thanks in particular to the efforts of the Jesuit schools, which were often the first to teach the new insights.48 What is more, Nicholas V, Sixtus IV, and Pius II were true Renaissance popes, who to a large degree embodied the ideals of humanism. In this context it should perhaps also be added that several of the leading intellectual figures in the new science were either Jewish or of Jewish heritage. Eventually there also appeared the occasional atheist, but the modern notion that Renaissance humanists, university scholars, and the heroes of the Scientific Revolution were secularists and anti-religious is a fallacy, part of an anachronistic understanding of history that is founded, in turn, upon a binary view of the religious and the secular.

If we turn to Pico della Mirandola, it is striking how self-evident the biblical frame of reference was for his understanding of human dignity—to the point that he had God himself announce the “founding” of humanism!49 Humanists like Pico, it is true, often criticized the Church’s abuse of reason and power, but they did not reject Christianity. Rather, we might follow Hegel’s precept that religion should be seen as part of the history of reason and that modern science de facto emerged from a religious prehistory—even if we are thereby lapsing into anachronism (our concept of religion is itself very modern, dating only from the era of secularization). But there never was an out-and-out conflict between religion and science during the Scientific Revolution. What did happen was a series of conflicts between how a fledgling natural philosophy viewed the world and the interests of religious institutions. Only much later did an irreconcilable conflict of principles arise. As Principe clarifies: “The modern error comes from a confusion with so-called secular humanism, an invention of the 20th century that has no counterpart in the early modern period.”50
The fact that scientific investigations were often prompted by religious motives requires us to dissociate ourselves from a long series of common and persistent misunderstandings and prejudices about the “modern” and “secular” character of the Scientific Revolution. As Principe provocatively explains in his book on the Scientific Revolution:

First, virtually everyone in Europe, certainly every scientific thinker mentioned in this book, was a believing and practicing Christian. The notion that scientific study, modern or otherwise, requires an atheistic […] viewpoint is a 20th-century myth proposed by those who wish science itself to be a religion (usually with themselves as its priestly hierarchy). Second, for early moderns, the doctrines of Christianity were not opinions or personal choices. They had the status of natural or historical facts […] Thus theological ideas played a major part in scientific study and speculation—not as external “influences”, but rather as serious and integral parts of the world the philosopher was studying.51
There are thus good reasons for questioning whether people during this period really were as secularized as the standard narrative would have us believe. Whatever the case, it is very clear that those who have written the history of science have generally underestimated the scientists’ own identity as Christians, even as they have exaggerated the difficulties supposedly associated with practicing science while holding religious convictions (something that seems not to be an issue for those Nobel Prize winners in our own era who hold religious convictions). On the contrary, the dominant view recognized the study of “the book of nature” as a religious act, an act that was, moreover, driven by faith and comparable to studying the Bible. The metaphor of the Two Books, which became particularly influential in the wake of the Reformation, demonstrates, if anything, the existence of a positive connection. We might say that there were significant similarities between emergent Protestantism and the new science, in the way that both represented a settling of accounts with traditional dogmas and reliance upon authorities. The metaphor of the Two Books remained current until the nineteenth century, when the model was destabilized by a combined confrontation with Darwinism and a historical-critical view of the Bible, with the result that it came to seem, to thinkers like Friedrich Nietzsche, that God had not, in fact, written any “books” at all.
The conjuring trick of scientific accounts Science is an adventure, a journey made under the sign of the progress of knowledge. Yet the general routines associated with how scientific results are presented tend to conceal this journey. To the uninitiated, scientific dissertations, books, articles, and essays can easily give the impression that the development of science is a completely linear, planned, and predictable process. It is thus easy to fall into the trap of thinking that those in the past simply proceeded from some particular questions, chose a suitable body of material and an appropriate method, and then gradually undertook their investigations before finally arriving at a series of results and conclusions. Yet research rarely happens like this in real life. It is a far more complex process that only occasionally moves in linear fashion. Sometimes practitioners have an idea that they repeatedly adjust during the course of the journey. On other occasions, they are able to intuit the conclusion from an early stage, in which case the challenge instead lies in identifying the question to which these ideas offer an answer. In any event, the final presentation of research results says very little about the journey taken, because articles and books are generally written with the benefit of hindsight after the journey has been completed and with the “answer sheet” in hand. Textbooks on methodology can sometimes look like manuals, but in practice authors usually write the chapter on method last; only then do they know which way (“method”) they took. For this reason, experienced supervisors tend to encourage those writing academic essays and dissertations to write the first chapter last—since it is only from this point of view that they will see clearly how they have “gone about” the task. In confronting the conjuring trick that forms the basis for every academic text, we thus find ourselves once again confronted by the problem of anachronism: scientific accounts are usually compiled in a process that is the exact opposite of the research process itself. Following Søren Kierkegaard’s axiom that life is lived forwards but understood backwards, we might say that while science develops forwards, it is always understood backwards. In this way, scientific accounts can foster the illusion that the author already clearly saw the way forward, as if they had never been in any doubt about how to organize and write up the research process. It is probably also this conjuring trick that has given momentum and legitimacy to an unrelentingly positivistic conception of science shaped by the idea that there is a “sole path to knowledge.” But research is a history that can occasionally be quite complicated: its progression follows time’s arrow even if the narrative that presents the research results is written from the vantage point of the end. The same holds for an essay or a dissertation organized according
to these principles. The temporality that is expressed in almost every scientific publication thus provides yet another reason to guard against anachronism and to remain critical of how science tells its own history.
The necessity of philosophy of science We have seen that science’s account of itself is full of anachronisms— indeed, how could it be otherwise? No-one can free themselves entirely from their immediate circumstances and step outside of the prevailing ideas and categories of their time. How can we understand the past if not by using the understanding we possess in the present? And yet there is a crucial shift in meaning that occurs when the significances and referents of a term “travel” in time. But if philosophy of science in our own time is to avoid contributing to the disastrous production of science fundamentalists, it is therefore necessary to think historically—while at the same time remaining aware that we can never entirely free ourselves from anachronism when dealing with the past. Science will always have to position itself with regard to something beyond itself, something that holds true as much for the prehistory from which it emerged as it does for the societal context in which science finds itself. In sum, the practice of science cannot sever its ties to philosophy. Throughout history, philosophy has been the midwife to an array of scientific fields and brought numerous new disciplines into the light of day. As these disciplines began taking an increasingly empirical turn in the seventeenth century, in the wake of (what we have come to designate) the Scientific Revolution, their relation to philosophy also became increasingly complicated. Indeed, it could be said that modern science, with its emphasis upon empirical data, was born out of a break with philosophy (which is, of course, not bound to empiricism). Over time, this development has resulted in philosophy often having an uncertain scientific status within the modern university. In our era, however, it has become ever more apparent that the empirical sciences need philosophical reflection, both for their own self-understanding and in order to retain their transformative dynamism. “What is mathematics?” is no more a mathematical question than “What is physics?” and “What is biology?” can be thought of as physics or biology questions. Whatever the case, such questions are only rarely encountered by students of these subjects. Rather, they can be described as philosophical questions. On this view, philosophy stands out as the only discipline that has itself as its object of study and that continually calls into question the way it understands itself. This has prompted Lars Svendsen to conclude: “No
living philosophy should allow itself to be constrained by what philosophy has been."52 This statement can also help to explain why the question of what philosophy is has become one of the great questions and controversies within philosophy. While similar tendencies can be observed elsewhere, philosophy as a discipline is probably the most extreme case with regard to the lack of shared criteria for assessing scientific quality. There can be few other academic disciplines in which the same person can be praised by an expert evaluator as by far the best candidate for a post—at the same time as another evaluator finds them wholly unqualified for the post and even for philosophical work of any kind. There is virtually no equivalent in the natural sciences to a situation in which expert evaluators can rank the same applicant for a chair of philosophy in first, second, sixth, ninth, and last place, respectively!53

Although we may disagree about the uniqueness of philosophy's lack of consensus—similar phenomena do occur in theology and the humanities, for example—it must be asked whether it is at all possible to draw a hard and fast distinction between philosophy and non-philosophy, a distinction that is also historically very recent, having only become significant in the nineteenth century. In any case, it serves to remind us of the contribution that philosophy, by virtue of its restlessness, can make to scientific reflections which seek to make something productive out of the dynamic interaction between science and philosophy. Within the framework of this constellation, philosophy can, in fact, help to preserve the agility and rhythm that are hallmarks of scientific development. We will return repeatedly in this book to the way in which a vital scientific practice is defined by this kind of dynamic alternation between stabilization and destabilization, "normal science" and "scientific revolutions"—to invoke two concepts coined by Thomas Kuhn.54 Philosophy continually helps us to see the world in new ways, change our perspective, and re-examine things from new angles, with the result that we are often also forced to question our own positions. Given this background, we can readily understand why, in both the past and the present, there is such a profound lack of unity in philosophy specifically. And yet this reflective and interpretative attitude towards philosophy may also explain the surge in interest in philosophical questions in our own time. As Svendsen observes:

Philosophical questions arise when our concepts become confused, when ideas clash with each other, when we find ourselves with indispensable yet irreconcilable notions and no longer know how to relate to ourselves and the world. At that point we need to tidy up our thinking and our language.55
Although science, as we know it, has emerged from philosophy, the successive detaching of disciplines from philosophy has not infrequently been accompanied by a suppression of those disciplines' historical origins. But science cannot entirely sever its ties to philosophical reflection if it is to preserve its openness towards the unexpected and to remain versatile and dynamic. If science is to remain vital, in other words, it needs to be reminded of its own philosophical origins. To be precise, science needs ongoing philosophical reflection of a kind that calls into question the assumptions that everyday science often takes for granted.56 Philosophy reminds science that it has a history, that what is self-evident can always be questioned, but also that the project of knowledge, far from having a terminus, is an unending adventure!57 Without historical perspective and philosophical awareness, science risks becoming stuck in its own anachronisms.
We need a new science narrative!

Philosophical and scientific reflection additionally compels us to confront the lack of history that has often been a feature of our conception of science. Science needs instead to be presented as a multifaceted and successful project characterized by a large degree of variation across time and space. Adopting such a view of knowledge from the perspective of philosophy of action confirms the necessity of historicizing science and introducing philosophy of science by means of strong contextualization. Given all this, it may be asked what science is really about. What is the alternative narrative to the secularization account with all its anachronisms? There can be no doubt about the necessity of replacing the established narrative of science with an alternative. The (alternative) science narrative that I am arguing for in this book rests upon a fundamental historical perspective that can be formulated as follows:

Modern science, as we know it, first appeared in the nineteenth century, which was also the century of the emergent nation-state. At that time, a scientific project took shape which sought to pass on and preserve important elements of the knowledge culture that had begun to gather pace in the sixteenth and seventeenth centuries. At the same time, however, this modern science tended to project the intellectual ideals of its own era back on to this earlier period with such force that the latter eventually took on the aspect of a scientific revolution. Anyone wishing to understand modern science must therefore come to grips with science's account of its own origins and cope with the fact that the nineteenth century both discovered and invented what has come to be regarded as modern science.
When science strengthened its position during the twentieth century by means of extraordinary advances, eventually establishing the dominant cognitive infrastructure for society as a whole, this came at the price of a fundamental transformation of science itself. Yet the fact that society has been totally transformed by science in the last couple of centuries also means that any general philosophy of science will be a serviceable theory for explaining modern society in all its various incarnations.
CHAPTER 4

THEORIES AND PRACTICES, TEXTS AND CONTEXTS
When I took my first university courses in philosophy of science, we students were introduced to a series of abstract "boxes" containing different theoretical and methodological positions within philosophy of science. These were presented to us as alternatives, entirely lacking historical or social context, like a timeless smorgasbord of theories of science from which we might select. For reasons that were never made clear, we were then invited to decide which of these schools of thought we wished to align ourselves with, start using, and then represent. After making our choice and locating ourselves inside one of these boxes, we were immediately expected to identify with and remain loyal to our chosen philosophy of science, and to defend it against other competing alternatives. The choice itself—which was made entirely arbitrarily, as if it were a question of taste or sensibility—was never problematized. This curious combination of arbitrary choice and absolute loyalty gives visible expression to an approach to philosophy of science which is as frivolous as it is dangerous.
Theory and practice in the knowledge arena

Theory of science? Theory of science may be the designation for an academic discipline with considerable merits, but there is nonetheless something troubling about the term itself. As a label, theory of science can all too easily express and reinforce a unilaterally theoretical view of science, as if science were built upon a series of propositions. Such an approach makes it easy to draw the hasty conclusion that taking care of the theory will automatically allow the practice to function by itself. This view of knowledge, which counterposes theory and practice only to join them using a linear causal logic, has left deep cognitive traces in the intellectual history of the West, in which theory and philosophy of science can also easily become bogged down. Having discussed the relationship between sensory, empirical experience and theory, we now turn our attention from the
perspective of philosophy of action to the relationship between theory and practice. In the final instance, almost all of the challenges facing those interested in knowledge and the systematic construction of new knowledge that we call science relate to the issue of how to manage the complicated relationship between theory and practice.

Merely to use the word knowledge is to be transported almost at once back to the knowledge arena that was famously imagined by the classical philosopher Pythagoras (c.570–497 BCE).58 In doing so, we find ourselves in what might be called the "primal scene" of Western knowledge and, once there, we quickly find ourselves playing one of the roles that Pythagoras associates with the three different groups who come to this knowledge arena. First, those fighting on the arena floor, who are not acknowledged as having a capacity for creating knowledge on the grounds that they are too involved in practicalities. Second, the spectators, watching the action from a safe distance on the benches—the philosophers, the theoreticians. Here we find the real creators of knowledge. But Pythagoras also identifies a third group, who likewise move around on the benches: the salespeople, who are not considered capable of developing any form of knowledge because they are too absorbed in conducting their business. The roles are thus assigned, the curtain can go up, and the drama can begin. With such a point of departure, it is obvious that knowledge is something that can only be produced by those who assume the role of spectators in the "benches" of their particular milieu—which is to say, knowledge can only ever be theory in opposition to practice.
Human action making science

It can often be difficult to see that which is evident. Not infrequently, authors of recently completed academic essays or doctoral dissertations, when asked to respond to questions about their work, identify a series of crucial issues that were so obvious to them during the writing process that they did not find their way into the text. In similar fashion, the most important precondition for our being able to talk about knowledge at all, namely, human beings, often remains invisible in public discussion of knowledge. We easily forget that science is a practice that presupposes the existence of human beings and human action. For this reason, we must continually remind ourselves of something quite fundamental and obvious: there can be no knowledge without people; human action is necessarily present in all knowledge and science.59 In this vein, Stefan Svallfors describes a growing frustration at how the most important element in research—the human being, the body, the group, the team—has a tendency
to be overlooked in favor of rules and administrative routines. Instead, we are encouraged to regard research as a rule-governed process, an assumption with devastating implications: "The most important aspects of research are something we never teach."60 If knowledge is not possible without people, because it is a product of human action and interaction, then it ought not to be considered primarily as an exterior phenomenon, something that can be objectified, something that "is." This perspective may be both legitimate and necessary, but it is nonetheless not enough. Instead, knowledge needs to be understood primarily as something that people "do." It is often rightly observed that knowledge is built on facts. However, given the etymology of the word itself (in Latin, factum is the perfect passive participle of facere, to do), the question is whether it would not be more correct to say that scientific work produces facts. Pursuing this connection, Calvin O. Schrag connects the word "fact" with "fabric" in order to emphasize that facts are always part of a wider interpretative context: "the fabric is not something that is adventitiously added to the fact. It is […] constitutive of the presence of the fact."61 From this we can draw the conclusion that facts are, in a quite literal sense, "fabricated" and that the term is thus inescapably associated with human action and its social embeddedness. The expression "it is a fact" ought therefore to be understood as expressing how something became a fact, not merely as denoting an objectively existing fact.62

The concept of objectivity is often surrounded by a trouble-free and timeless aura in discussions of science and knowledge. Yet even objectivity has a history, one that is surprisingly brief—indeed, not even two hundred years old. It was not until the mid-nineteenth century that researchers began using a kind of mechanistic objectivity as a way to liberate their perspective from all human involvement. Previously, observers had represented reality in ways that sought to control the sheer variety of nature, its countless exceptions and monstrous anomalies, by concentrating upon that which was essential and rule-governed. The result consisted of the kind of stylized images that can still be found today in botany textbooks. When this perspective was replaced by mechanically produced images, it was as if nature "was speaking for itself," seemingly without any form of human intervention. Over time, however, complications revealed themselves, and the unsustainability of such an understanding of objectivity became apparent. When two photographs captured the same object, the resulting images might nonetheless differ radically. This naïve conception of objectivity was steadily undermined by a growing awareness of the active nature of image-making and was eventually replaced by a more robust strategy that
concerned itself instead with how human beings and their trained judgement could be taken into serious consideration.63

At a time when science, following the new formalism, is becoming increasingly abstract, impersonal, and standardized—as if research could be conducted according to a manual—even as the threat of fact-deniers, fact-resistance, fake science, and a "post-truth society" is generating a dynamic that leads people to expect science to deliver facts, proofs, and incontrovertible truths, there is an urgent need to recover a more action-oriented perspective within philosophy of science. In Sven-Eric Liedman's elegant formulation: "Knowledge is always tied to living creatures, but it is not confined within them."64 In short, our understanding of science needs to be contextualized.
Empirical data cannot be "collected"

In discussions of scientific method, commentators often refer carelessly to "collecting" empirical data. Methodology textbooks and dissertation chapters on methodology repeatedly invoke the naïve image of the scientist as someone who, like an explorer, "discovers" new territories, new facts, and new empirical data, which are, as it were, simply there waiting to be collected. In our time, the dream occasionally associated with grounded theory, that scientists generate theories inductively on the basis of their empirical data, has lost its appeal for good. In reality, the process is far more complicated, something that has caused the advocates of grounded theory to modify their original position. No-one ever "collects" empirical data and facts for an investigation. Empirical data is something one generates, and facts are the fruit of rigorous human action. In social science, which makes extensive use of interviews, it is particularly evident that researchers are themselves involved in creating the situations in which people say things (specifically, things they would not otherwise have said) and in which prior information and questions, above and beyond the immediate situation, become part of the elements actually defining and forming people's understanding and behavior. As Mats Alvesson observes in relation to the interview as a complex social event: "Without a theoretical understanding to support our critical judgment, any use of interview material risks naivety and leaves interpretations standing on shaky ground."65

Empirical data is not something that research simply stumbles across; rather, it is something that has to be established. As we saw in the story about Darwin, reflection, theory, and explanations are all needed if we are to identify something and to develop groundbreaking explanations and ways of understanding our findings that are both creative and robust. Without
inventions, few things can be discovered. Meaningful contexts are a precondition for establishing facts. In other words, it is not merely a question of portraying and representing some a priori reality. Science needs explanations and understanding. We ought, perhaps, to treat science as a verb and refer to science in terms of science-making. In order to underscore the active character of all scientific work and to foreground the fact that science creates its facts, Sten Andersson has ingeniously proposed that science should be reimagined as "created knowledge"; in the same vein he refers to science as "scientifically creating" its facts.66 Perhaps "creating" is rather too strong and gestural a term, which can give the impression that what is involved here is a process of free creation. Personally, I would prefer that we speak of generating empirical data and facts by means of a dynamic interaction between discovering and inventing the world.
"Reading" and "writing" the world

One way to engage with questions about how to understand the creative dimension of all scientific work, and to manage the continually shifting interface between ourselves and the world while also doing justice to the actively interpreting relation between researcher and material (regardless of whether it involves empirical data or texts), is to conceive of it in terms of "reading" and "writing." Figures and empirical data never "speak" of their own accord—any more than texts and books can "speak" for themselves. They require interpretation, which is to say a person who reads and creates meaning. We can therefore better understand what researchers are really doing when they generate and analyze empirical data by treating them in analogous fashion to what happens when we read and write, respectively.

When researchers "read" the world, they are performing an act that sets in motion a dynamic interaction which causes the text to begin "speaking." In the same way as we can read the same text in different ways, it is always possible to "read" empirical data—interviews, organizations, people, the world, and so forth—in different ways. The relationship between reader and text in the act of reading is by its very nature something of a paradox, since the act of reading can be understood both in terms of a selection with regard to the text's surplus of meaning (it goes without saying that we can never take in the whole world, but must instead always reduce its contents by means of our perception so that we can seize a few special moments) and in terms of a fulfillment in relation to a text, here regarded as something that is intrinsically incomplete and accordingly includes the act of reading as its own lack (we can never read anything without adding to it things that come from ourselves and that are in fact a precondition for being
able to read in the first place). The receptive action involved in reading and taking in the world therefore always contains within itself a productive aspect—just as the productive action of writing, if it is to be successful, always presupposes a capacity for reception. The insight that we both receive and actively give and do something, both in the reception of reading and in the production of writing, places considerable demands upon the communicative competence of those involved in "science-making." Even so, the fact that it is indeed possible to "read" the world in different ways should never be taken as justification for the idea that the world can be "read" any way one likes. Interpretation is not the same as "free" imagination or arbitrariness.

In scientific work, perhaps more than in other settings, "reading" the world is also required at some point to give way to "writing" the world, which is to say a performative act of creation by which new knowledge may be produced. Once again, we encounter the fact that science must straddle two horses: it involves both "reading" and "writing" the world—and in both cases action and reception are closely bound up with the creative act. Writing is an act of knowledge that presupposes selection and construction, with varying degrees of freedom as regards the use of imaginative capacity and fantasy. Writing that aspires to be scientific, however, can never be equivalent to "free" fantasy; what is involved, rather, is a "bounded" mode of creativity in which science-making can lay claim to be saying something about the world and reality that is both well-grounded and innovative. To this can be added the public and communicative aspect: what we call science is always associated with an open invitation to evaluation by peers.

Once again, then, just as empirical data cannot be "gathered," so, too, it is never possible to just "display" or "present" empirical material. Empirical data must instead be established and "put into writing" by means of performative, productive action. Given this background, it is extremely confusing to find that textbooks on philosophy of science so often refer to facts as if these are unmediated truths that make themselves directly available to our experience without any need for human intervention. Facts really do not "speak" for themselves; they are always "fabricated"—mediated, embedded, and interwoven with contexts of power and meaning. As scientific facts, they must also be connected to rigorous human actions within the framework of some kind of peer-review evaluation.67 Text and interpretation are interconnected as mutually defining concepts, which are ever-present within a framework of philosophy of action. For this reason, there is always a dimension of application when we develop an understanding: "Understanding here is always application," writes Hans-Georg Gadamer.68 Empirical data and facts do not "speak" for
themselves any more than texts and contexts do; they must be mediated by means of a process of interpretation. And interpretation is not something that is first introduced when one begins analyzing the material. Rather, it is continually present throughout the research process, from beginning to end, because people and human action are necessary and continually present in the knowledge-development interface between ourselves and the world. Empirical data and facts need larger contexts to have meaning. In order to "see something as something," we need metaphors, models, theories, and narratives, which science has to manage in a way that is both creative and responsible, which presupposes a hermeneutical awareness of the conditions and boundaries of interpretation. In other words, it is never a question either of "freeing" the imagination or of tearing down the edifice so that everything—or nothing—can be called science.
Texts and contexts

These arguments about the act of reading and the act of writing, recognized as paradigmatic stages in the knowledge process, remind us that, even if science ought, as far as possible, to try to liberate itself from prejudices, we can never liberate ourselves from all kinds of preconditions: our "science-making" journey never starts with a tabula rasa. Science requires people and proceeds upon already existing foundations. This is unavoidable; the key is to be fully aware of those existing foundations and to subject them to continual critical scrutiny. Although scientific work can be a relatively solitary activity for long periods of time, science itself is never something that people do entirely on their own. To be part of scientific work involves joining a tradition, learning routines, applying different methods, taking a stance on the results of previous investigations, and becoming a part of institutional practices, the most important of which is the process of mutual examination by peers. The minute someone begins working with science, they become a part of institutional relationships and structures beyond their total control. Indeed, the latter are among society's oldest institutions, deeply shaped by routines and with histories and values that are not always explicitly stated. There are always preconditions for science, but the most important and most fundamental of these often lie "within the walls." Given all this, it is somewhat strange that scientific practitioners do not devote more attention to their own history. Perhaps it is the result of an ideal of objectivity that encourages practitioners to present their work as emerging from timeless, self-evident principles. But it also has the ironic consequence that the past
continues to affect their work to an even greater degree, albeit without their realizing it. It is customary to present critical thinking and the maintaining of tradition as diametrically opposed, as if they were mutually exclusive alternatives. But it is crucial to remember that critique itself is our oldest tradition and that the maintaining of tradition is an active measure that presupposes, indeed requires, critical awareness. A tradition emerges, becomes visible, and takes shape by means of distanciation. Critical distance is in fact necessary in order for us to be able to identify a tradition in the first place. Living traditions also presuppose a continual and critical engagement with one's own heritage—continual change.69 The same goes for scientific traditions. They live on, not merely because researchers have been so diligent in strictly following the scientific regulations of their era, but also because of their phenomenal ability to creatively break the rules in order to achieve new and unanticipated discoveries and innovations.70

Science is the product of human action. If we further recall that it is one of the finest products of human culture, we will also be acknowledging that science does not exist in some hermetically sealed space but is, rather, embedded in power relations and various social forms. This in no way means that we and our texts should be constrained by these contexts. Belonging to a tradition, a culture, and a society always also implies the possibility of distanciation, thinking critically, and making oneself autonomous. Not forgetting one's origin also means adopting a critical approach towards that origin.
The context of the text: who cooked Adam Smith's dinner?

In working with science, it is important not only to read the text but also to attend to the context. Let us take a concrete example from the field of economics. The fundamental precondition for all economic science is the theory of "economic man"—homo economicus—which is to say, the idea that human behavior is reducible to the proposition that every individual, at every moment and in all situations, acts solely on the basis of self-interest. This economic anthropology is the basis for being able to treat human beings as predictable and as governed by rules, such that it thereafter becomes possible to make calculations and predictions about human behavior—at the same time as the theory serves to legitimate this perspective and practice. Its rationale is simple: when each of us behaves solely on the basis of self-interest, the overall outcome for everyone will also be optimal. What ensures this result is an "invisible hand" (in accordance with a notion of Providence that can only fully be understood in
the context of Calvinism in Scotland). In his study The Wealth of Nations (1776), Adam Smith, now often referred to as the father of economics, uses a graphic and slightly provocative formulation to illustrate this view of how economics works:

It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages.71
In order to understand this quotation, it is essential not merely to read the text but also to situate it within a larger context. In her acclaimed study Who Cooked Adam Smith's Dinner? A Story about Women and Economics (2012/2015), Katrine Marçal shows that in this instance Adam Smith was very far from giving us the whole picture. To sustain the view of economics that he advocates, he was forced to exclude a substantial fraction of the world's population and to conceal the material preconditions that made his own life possible. Marçal thus poses the key question in the very title of her book. The father of economic liberalism remained a bachelor and lived for most of his life with his mother, who put food on his table and generally looked after him. In light of this fact, Marçal asks: Was it really only self-interest that put food on Adam Smith's table, day after day? His mother's actions cannot easily be made to conform to the logic of economic theory. For this reason, her exclusion becomes a necessary and fundamental precondition of an economic theory based upon homo economicus: "Somebody has to be self-sacrificing, so he can be selfish. Somebody has to prepare that steak so Adam Smith can say their labour doesn't matter."72 Although the world is divided into two spheres of work—one traditionally masculine, the other traditionally feminine—when it comes to describing the fundamental conditions of economics, only one is ever taken seriously. Adam Smith was indeed writing about economic man.

As we widen our perspective from text to context by bringing Adam Smith's mother into the picture, it becomes clear that greed and fear, self-interest and rationality cannot be the sole forces driving human behavior. Marçal's analysis lays bare the gender hierarchy that is intrinsic to economic theory and that has found expression in the exclusion from economics of tasks traditionally performed by women (such as gardening, having and raising children, cooking, making and repairing clothes). But it also means that the modern view of economics requires the obscuring and exclusion of the "second sex"—women. Modern economics is thereby revealed as a deeply gendered theory, which can only be maintained by simultaneously
suppressing a "second economy." For male butchers, bakers, and brewers to be able to go to work, someone else—mothers, wives, and daughters—has to be available to take care of children, wash clothes, clean the house, and take responsibility for the smooth running of family life. For Adam Smith to be able to write a text like The Wealth of Nations, there had to be a context, one in which his mother cooked and served him dinner. It is therefore clear that his economic theory provides only half an answer to the fundamental question of how food gets onto our tables:

Since Adam Smith's time, the theory of economic man has hinged on someone else standing for care, thoughtfulness, and dependency. Economic man can stand for reason and freedom precisely because someone else stands for the opposite. The world can be said to be driven by self-interest because there's another world that is driven by something else. And these two worlds must be kept apart. The masculine by itself. The feminine by itself […] At the same time, what we call economics is always built on another story. Everything that is excluded so the economic man can be who he is.73
Marçal’s contextualization of Adam Smith’s text foregrounds some general conditions of scientific work: whenever we actualize something, we are also necessarily suppressing other things, often a great many. Perception is selection. Although we can never capture reality entirely, we should always try to give expression to a richer domain of reality through careful contextualizing, greater self-criticism, and acknowledgement of alternative perspectives and interpretations. In order for this to become science, we need also to weigh these interpretations against each other, in open and unprejudiced fashion, within the framework of a single-minded quest for truth. What are we to make of the fact that, well into our own era, half of humanity—women—has been excluded, not only from economic theory but from science in general? For practically its entire history, science has been presented as a history of men. Few women even receive a mention in the history of science. When Elena Cornaro, after years of independent study, succeeded in taking a doctorate in Padua in 1678, she was the first to give the lie to the notion that women could be kept out of both science and the university. Not until 1870 did women in Sweden gain the right to complete their high school leaving certificate, which was a condition for applying to university. In 1872, Betty Pettersson was the first women to be accepted as a student at a Swedish university. Ten years later, Ellen Fries was the first woman to gain a doctoral degree, and only in 1937 did Nanna Svartz become the country’s first female professor.
Notwithstanding Sweden's self-image as a modernizing trailblazer, its Nordic neighbors have often been more progressive with regard to equality and democracy as well as gender equality. One pioneer was Alma Söderhjelm, who held a post as professor of history at the reconstituted Åbo Akademi University from 1927 to 1937. Yet none of this changes the fact that women were only allowed to play a role in science at an extraordinarily late date. Thus, it is not only scientific theory that has been gendered. This impenetrable masculine dominance has made the history of science seem far more androcentric than Church history and the history of theology, which are usually invoked as cautionary examples in these contexts. It is a fact that science has been a history of, about, and for men—and that the shift towards greater visibility of women has only come about at a late stage. Very late. Embarrassingly late. It was only during the latter half of the twentieth century that women began to appear in science in any real numbers. Fortunately, this is in the process of changing, and it is becoming increasingly common for the history of science to be subjected to critical scrutiny from a gender perspective. New research is increasingly foregrounding the role played by female scientists: Marie Curie, Maria Goeppert-Mayer, Sofia Kovalevskaja, Ida Noddack, Rosalind Franklin, and so on. Today, women are making inroads into science across a broad front. We will perhaps soon reach a tipping point. Indeed, is it now just a matter of time before science comes to be dominated by women?

This actualizes a more general need to acknowledge how relations of dominance and subordination have been embodied in all our theories, as well as how they define gender, ethnicity, class, and age. The concept of intersectionality captures this analytical perspective, in which different parameters interact and are combined in any given situation.74 The great challenge in these kinds of complex analyses is to maintain the dynamic interplay between different aspects. When these ossify, such that one particular constellation becomes the new orthodoxy, then the dynamic must be recaptured.
Science is always already embedded in society—as society is present in science

For the scientific community, its relation to society at large has almost always been a controversial issue. And the same is often true today. Yet the question has usually been formulated in an abstract and overly idealistic fashion, as though we are free to choose the nature of this relationship. In fact, the necessity of bringing a social perspective to bear on science ought to be self-evident, given that science is today present almost everywhere.
The modern world is made of science and in many respects saturated by it. At a fundamental level, science has shaped and reshaped our society, and we use science continually in order to critically analyze and develop the same society. Even those who wish to tear down and literally terrorize our societies often exploit advanced scientific achievements for their destructive ends. In other words, in modern society we are always already integrated into a network of scientific practices.

Why do we nonetheless hesitate to direct at science itself—the scientific community—the same critical gaze that we use in our investigations of society beyond the walls of academia? The likely reason is that science has cultivated an image of itself, and told a story about itself, that to a large degree has been unilaterally theoretical and devoid of context. In other words, it is a consequence of science itself that we find it so difficult to use science to understand science's relation to society in general. And this holds particularly true for the social sciences. But how have we managed to close our eyes to this? If Steven Shapin is to be believed, it is because of our specific concept of science that science has come to understand itself in a way that allows us to rationalize away something that is self-evident:

There is as much society inside the scientist's laboratory, and internal to the development of scientific knowledge, as there is outside. And in fact the very distinction between the social and the political, on the one hand, and "scientific truth," on the other, is partly a cultural product of the period this book discusses.75
Likewise, it is in this context that we need to understand the efforts of Immanuel Wallerstein and his colleagues to reconstruct the social sciences. As they forcefully explain:

No scientist can ever be extracted from his/her physical and social context. Every measurement changes reality in the attempt to record it. Every conceptualization is based on philosophical commitments. In time, the widespread belief in a fictive neutrality has become itself a major obstacle to increasing the truth value of our findings.76
The difficulty in understanding science’s relation to society is thus closely bound up with the fact that our understanding of science has been shaped by academic institutions which from the very beginning have defined themselves by drawing a sharp distinction between society and science. However, the moment we bring a historical perspective to bear on the matter, it becomes far harder to claim to be representing a science that
somehow exists as a pure and context-free entity outside of society. With the passing of time, and some critical distance, it becomes obvious that science and those involved in scientific work are never merely isolated islands of innocent texts but are always embedded in social contexts. It is obvious that the Thirty Years' War and the Inquisition affected Descartes' life and philosophy, that political unrest in England defined the terms and possibilities (and threw a spanner into the works) of Bacon's activities, that the Glorious Revolution affected Newton's work on a new system of physics, that the climate of revolution in Europe had a profound impact on Comte's views on social science, and that the experiences of the First World War inspired Freud, Einstein, Wittgenstein, and Loos. In the same way as the Second World War represented a necessary horizon of understanding for a host of scientific and technological advances during the postwar period, the mobilization behind the "Star Wars" missile defense system of the Cold War was a precondition for the economic investments that resulted in digitalization, globalization, the network society, and so forth.

The very debate over the relationship between "science" and "society" shows how deeply we remain trapped within categories that have been produced by modern science itself, categories that we continue to reproduce. For this reason, it is important to know something about the transformation that took place between 1500 and 1700 and the many actors in the Scientific Revolution, while also considering the fact that they lived and worked in a milieu that was defined by the extremely volatile economic, political, and intellectual conditions of seventeenth-century Europe. It is not possible to understand this milieu, plagued by endless wars and major climate problems, without also considering the tremendous internal conflict within Western Christianity caused by the Reformation, the emergence of local markets, regional divisions of labor, the spread of global trading, and national and religious wars. Scientists do not live in isolation from the tumult that defines their era. Science has always been a social phenomenon.

In other words, science should be understood as historically situated and socially embedded, an activity that is also dependent upon institutional contexts and networks, economic realities, and financing, as well as technical instruments and communications. This is not to say that science can be reduced to an intellectual reflex response to a material reality, but that context both encompasses the conditions that make scientific innovations possible and sets limits to all thought and behavior. Science is developed by people who are subject to the vicissitudes of history and affected by factors such as the social technologies of the welfare state, the almost incredible developments in medicine, the educational system, the revolutions in communication and
transportation, sport, and physical education, and so on. A science that understands itself as pure theory renders this entire context invisible—and at the same time conceals the fact that there can be no science without people! For science, what matters is not only the text but the context.
A Copernican turn transforms the world

The Renaissance and the Scientific Revolution overlapped in time. Where the Renaissance took its point of departure in history, science started in space. Developments in astronomy set in motion the entire epochal process of innovation that has come to be known as the Scientific Revolution. Aristotelian cosmology, which had only been introduced to Europe in the twelfth century, was quickly adopted by the burgeoning universities and became part of the foundations of scholastic theology. Yet we should not forget that the idea that the earth lies at the centre of a finite, ball-shaped universe is actually very close to how we immediately experience the world in our everyday lives. Presumably, this proximity to our intuitive experience of an immediate and living world, as much as orthodoxy and scholasticism, is what made it so plausible, before it was called into question during the Scientific Revolution.

The importance of being able to change perspective, and the power that this conceptual shift has over our conceptions of reality, is closely associated with the name of Nicholas Copernicus. He has even lent his name to a phenomenon, the Copernican Revolution. Even so, there is an anachronistic tendency to make Copernicus more modern than he actually was. Judged by the standards and expectations of a researcher, the educational background of this astronomer can only be described as peculiar: after studying, among other things, law at Bologna and medicine at Padua, he completed a doctorate in canon law at the University of Ferrara in 1503. Despite his erudition, Copernicus had an unassuming personality and chose to spend his life under the protection of the church as canon of a cathedral on the Baltic coast of Northern Poland. Sometime before 1500, he made his momentous astronomical observations from a home-made tower that he erected himself. Yet everything about Copernicus' research career seems to have been protracted. Only appearing in 1543, the year of his death, his study De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres) called into question the geocentric worldview that had dominated human thought since Aristotle and Ptolemy. Instead, he advocated a heliocentric model with the sun at its center. These were ideas he had developed three decades previously, and it would take another fifty years,
until the turn of the next century, before they achieved more general currency. The transition from a geocentric to a heliocentric worldview marks the starting-point for a long series of reflections—from Giordano Bruno, who sought to demonstrate the infinite nature of the world, via Johannes Kepler's ideas that planetary orbits are in fact ellipses, and Galileo Galilei's introduction of instrumentation (a telescope with lenses) capable of providing some kind of evidence for this new worldview, and his development of a modern theory of motion, all the way up to Isaac Newton's synthesis, which presented a new cosmology and ontology governed by unchanging mathematical laws of nature.

The inspiration for his pioneering ideas had come to Copernicus during his years as a student in Italy, where he had been deeply impressed by Renaissance humanism. It was also in Italy that he had been recruited by the Pope to solve a calendrical problem involving the motions of the planets. Because the Julian calendar year was "too long," the ecclesiastical year had gradually fallen completely out of step with the seasons. The way in which Copernicus approached the task of finding a more effective astronomical system nonetheless confirms the impression that his thinking, far from being radically modern, remained deeply medieval. Far from moving "forwards" in time by inventing a new way of looking at the world, he followed contemporary practice in looking backwards—indeed, much further backwards—in time, to the thinkers who had preceded Claudius Ptolemy (c.90–170 CE). Copernicus' solution was typical for his era in that he did not try to devise a new theory; rather, the idea that he came up with came from an even older astronomical theory, namely the heliocentric model which he found in the writings of Philolaus, Ecphantus, Heraclides Ponticus, and Aristarchus of Samos.

The circumstances of Copernicus' contribution are thus deeply ironic. When the heliocentric worldview replaced the geocentric worldview, for the purpose of dissolving Ptolemy's distinction between earthly and heavenly bodies, what took place was a shift in which the new was actually older still. Copernicus was right but also wrong, and in this regard the Copernican shift is exemplary of the broader developmental logic of science itself, even if Copernicus should be seen as an early part (or perhaps simply a precursor) of the Scientific Revolution. Scientific development proceeds in a way that bears little resemblance to the stereotyped view of linear and cumulative progress, in which knowledge grows, successively and gradually, by means of accumulated research findings. Instead, it often involves a dialectical progression in which the interplay of opposites gives rise to numerous ironies. Copernicus' starting-point—that the movement of the planets above the moon necessarily describes a circle—is now considered both strange
and erroneous, but in his own time it was taken as self-evident. And yet it was only when Copernicus pursued this (faulty) starting point to its logical conclusion that the Ptolemaic geocentric system began to fall apart. In order to make the movements of the planets add up, so that they formed perfect circles, it had been necessary to imagine that their speed changed, and that they variously moved in reverse, came to a halt, and performed loops, resulting in a model so complicated that it eventually served to undermine the very legitimacy of the theory. In order to resolve the accumulating disparity, Copernicus was forced to think in a radically new way, and it was at this moment that an alternative premise, that the earth was not in fact the center of the universe, presented itself. Even though the idea that the planets move in a circular orbit was incorrect—Kepler would later show that they moved elliptically (he nonetheless clung to the notion of the so-called Harmony of the Spheres)—it was this assumption that drove Copernicus' model to bursting point and, ultimately, to collapse entirely and cease to work. But from this collapse emerged new possibilities, something entirely new, in the form of an impulse to test a radically different model.

Because the way science copes with its own history is so imbued with anachronisms, its development can in retrospect (on the basis of the post hoc narrative that is written with hindsight) seem completely obvious. But when we uncritically join these anachronisms to our understanding of science, we fool ourselves, because these ideas were never self-evident to those living at the time. Indeed, the new theory was also accompanied by major ambiguities and serious problems, which Copernicus' opponents were not slow to exploit. It is easy to forget that the heliocentric worldview runs contrary to how we talk about the world and our immediate sensory impression that the sun "rises" and "sets." The geocentric worldview naturally presented itself in an era when people still studied astronomy using the naked eye, without the help of technical instruments such as telescopes. If we also factor in that any new hypothesis requires a series of supporting hypotheses—in this case, relating to gravity and atmosphere—then the objections of Copernicus' contemporaries begin to seem less foolish. Copernicus' heliocentric worldview in fact gave rise to a long succession of entirely legitimate objections: Why does the wind not always blow in the same direction? Why are clouds and birds not continually moving eastwards? And how is it that objects not tied down, not least ourselves, remain fixed on the earth, rather than being brushed off its surface like water from a whetstone?77

Copernicus, as noted earlier, had already developed his theory of a heliocentric worldview by 1514, but the fact that his study only appeared in 1543 and received a mixed reception meant that the Copernican shift did
not really take place for almost another century. One might even go so far as to say that the real revolution only took place in the 1960s with the advent of Thomas Kuhn's ideas, when it came to stand as the paradigmatic example of a dominant concept in philosophy of science: the paradigm shift (to which we will return in Chapter 8). When Copernicus' book did finally appear, its foreword contained the somewhat curious qualification that "these hypotheses need not be true nor even probable." The fact that these words were written, not by Copernicus himself, but by a theologian colleague, his friend Andreas Osiander, only became widely known after 1609, thanks to the efforts of Kepler. The formulation can be seen as a gesture of censorship or a cowardly evasion, or as signaling a degree of humility in a situation in which the grounds for rejecting a two-thousand-year-old tradition remained far from self-evident. Or, as Per Strømholm acutely observes: "When, much later, the heliocentric theory established itself, it was not because it was obviously 'more commonsensical' or 'better' than the geocentric theory, but despite the fact that it was not."78 What we see here, then, is yet another example of how great leaps forward in science rarely happen as the result of dutiful adherence to established rules and modes of thought, but rather when someone breaks the rules in order to think in new and unexpected ways.
The significance of theory and "tacit" knowledge for discoveries

What can we learn, then, from Copernicus' achievements, and what do they teach us about science in general? The transition from a geocentric to a heliocentric worldview obviously resulted in a dramatic change in astronomical research. But the consequences of the Copernican turn for our conception of knowledge were equally far-reaching. In astronomy, the Copernican turn signaled the end of the dominance of sensory experience, demonstrating the necessity of distanciation and the importance of drawing on theoretical perspectives to understand the world. Moreover, these are the kinds of lessons, Michael Polanyi has argued, that we should draw from the Copernican turn. Polanyi also emphasizes the need to challenge the stereotyped notion that scientific work involves some purely objective observation of the world, unconstrained by any "pre-understanding"—a notion that, as he sees it, would entail quite absurd consequences:

if we decided to examine the universe objectively in the sense of paying equal attention to portions of equal mass, this would result in a lifelong
preoccupation with interstellar dust […] not in a thousand million lifetimes would the turn come to give man even a second's notice.79
Polanyi’s conclusion was that when scientists claim to be “merely” observing the universe on the basis of some kind of objectivity, they are merely paying lip service. Because we as human beings are limited to seeing and talking about the universe on the basis of a human perspective and using our language, it is essential to recognize that humans are necessarily always present in all knowledge creation. Polanyi does not mince his words: “Any attempt rigorously to eliminate our human perspective from our picture of the world must lead to absurdity.”80 According to Polanyi, one of Copernicus’ most important contributions to the history of science was that he gave priority to indirect relationships— a mode of understanding developed in the light of abstract theories, at the price of rejecting the immediate evidence of our senses. In other words, taking Copernicus seriously means recognizing the importance of developing a mode of understanding that distances us from our immediate everyday experience of the world (in which the sun does indeed appear to be “rising” and “setting”). Science is about disconnecting from oneself and stepping outside of oneself and one’s own concrete position. The Copernican turn presupposes a change of perspective, an epistemological shift. We are moving towards a criterion of objectivity in which the observer must rely upon the knowledge that emerges from theoretical perspectives rather than immediate sensory experience: “A theory is something other than myself.”81 The scientific development that hastened, and was hastened by, the Copernican turn has therefore led to a situation in which we are increasingly required to rely upon theoretical guidance, rather than the raw impressions of our senses, in order to interpret our experience and develop knowledge. Polanyi takes it as read that this was a question of using mathematical theories, but, interestingly, he also offers geographical maps as an analogy for how a theory works: “all theory may be regarded as a kind of map extended over space and time.”82 The fact that theories can be constructed without even making reference to a normal, immediate relationship to experience also means that Copernicus’ system is to be regarded as more theoretical than Ptolemy’s. This may seem provocative, but Polanyi draws the conclusion that becoming more objective requires us to set aside our immediate experience of the world and our concrete position in space. Instead, we must put our trust in theories that have been articulated with the expectation that they will connect us to reality. Polanyi worked for many years on his magnus opus, a work titled Personal Knowledge. Under no circumstances should the two words of that title be thought of as standing in opposition to each other, as if true
knowledge were necessarily impersonal. Like so many other philosophers and scientists in the twentieth century, Polanyi was instead building on Gestalt psychology by claiming that knowledge requires the active understanding, participation, and action of living people. For this reason, he also highlights the significance of what he refers to as "the personal participation of the knower in all kinds of understanding"—and continues: "But this does not make our understanding subjective. Comprehension is neither an arbitrary act nor a passive experience, but a responsible act claiming universal validity."83 In other words, scientific work is more than an automatic machine governed by methodological rules. Formalizing is entirely legitimate and necessary, but it is desirable and possible only to a degree. Not everything can be formalized: "we can know more than we can tell."84

Although Polanyi has become famous in recent years for his notion of "tacit knowledge," if we are to avoid mystification and any curious ideas about there being a special kind of "tacit" knowledge, we need to recognize that he is for the most part referring to a "tacit dimension" of knowledge—a formulation that also serves as the title of another of his books, The Tacit Dimension (1966). What Polanyi wanted to show with this concept was that modes of knowledge are never "pure" but always necessarily embedded in contexts and relationships: we should not focus solely upon what is explicit but also attend to what is implicit. In accordance with this "realistic" philosophy, reality exists independently of our knowledge of it, but we can only ever explore and encounter it. An oft-cited example of what Polanyi has in mind by the "tacit" dimension of knowledge is our ability to recognize a face, something that we can easily manage despite being unable to offer either a precise description of the face in question or a theory of facial appearances. In other words, there seems to be a dimension of knowledge that cannot be articulated in language, a "tacit" dimension to knowledge that does not allow itself to be simply or fully formalized. There are, in short, aspects to a text that are both between the lines and dependent on context. In a time like our own, which is defined by the new formalism, and which has been deeply shaped by a narrow-mindedly theoretical knowledge culture involving swathes of manuals and systems, Polanyi reminds us of the dangers in trying to completely depersonalize knowledge and establish a mode of knowledge without people.

This line of argument is also highly relevant to the problematic of discovery. Polanyi links the question of knowledge to "an act deeply committed to the conviction that there is something there to be discovered" and adds that this knowledge is personal.85 It is interesting to note that Columbus also makes an appearance in Part Three of The Tacit Dimension,
which is suggestively titled "A Society of Explorers." Polanyi here discusses Columbus' discovery of America in terms of a "surprising confirmation," holding it up as a paradigm for all scientific progress: "discoveries are made by pursuing possibilities suggested by existing knowledge."86 Polanyi argues that the same holds true of radically new discoveries: Max Planck may have developed his quantum theory on the basis of material that was essentially available to every physicist, but he alone saw, inscribed in it, a new order that transformed our outlook on humanity.

We have already touched on the fact that the complications surrounding the story of Columbus confront us with one of the great questions relating to modern science and knowledge: does science discover or invent the world?87 In this way, we are also forced to confront a strategic question that has been responsible for the polarizing tendency in philosophy of science to adopt one of two opposing positions: on the one hand, "empirical approaches," which tend to view science's functions in terms of "discovering" something that is already "out there"; and, on the other hand, "theoretical approaches," which tend to foreground the importance of inventions in order to actively organize an "exterior" reality. And yet, as we have already noted, reality cannot be an either-or matter. The big question is, rather, how these are mutually related.

Hermeneutics, to which we will return in more detail later in this study, can here clarify what is at stake in the relationship between discovery and invention if we take as our starting-point the dizzying sensation that can strike any reader of a text: what is actually contained in the text itself, and what are we actively adding to it—and, inevitably, what are we also erasing—in the very act of reading? A similarly vertiginous feeling can often affect the authors of academic essays or dissertations who find themselves wondering how empirical data is to be presented: what does this material actually contain, and what have I added to this material in my efforts to understand and explain it? These are ultimately questions of interpretation—how we can become familiar with the interface between ourselves and the world. Polanyi has applied this conceptual mode inspired by hermeneutics to philosophy of science by connecting facts and reading:

The facts are readings on the instruments of a particular observatory; readings from which we derive the data on which we base our computation and by which we check the results of such computations.88
Polanyi also approaches this issue in an interesting way by discussing the importance that the "tacit" dimension of knowledge has for our ability to see a problem and to make discoveries. He asks how it is even possible to identify a problem in the first place.
The effort to solve problems seems, on the face of it, absurd, since we either know what we are looking for, in which case there is no problem, or we have no idea what we are looking for, in which case there is no chance of finding anything. In order to highlight the paradoxical ability to discover something that is hidden, he invokes Plato's account of the challenge presented to us by Meno's Paradox: "For the Meno shows conclusively that if all knowledge is explicit, i.e. capable of being clearly stated, then we cannot know a problem or look for a solution."89 In other words, if it were possible to make all knowledge explicit, no discoveries would be possible. The logic of discovery itself presupposes the existence of a "tacit dimension" to knowledge; otherwise, there would simply be nothing to discover.

According to Polanyi, it is precisely this conception of a "tacit" dimension to knowledge that can help solve the paradox: in the tension between two impossible alternatives, which forces us to choose between meaninglessness and impossibility, we are given a glimpse of "the intimation of something hidden, which we may yet discover."90 For Polanyi, this brings us to the very heart of modern science: the conviction both that there exists something to be discovered beyond existing knowledge and that this can only come about through a process that cannot entirely be formalized. Objectivity, in other words, must be balanced by something else. Knowledge has a "tacit" dimension, which also needs to be taken seriously in scientific work. Science can only develop by straddling two horses.
PART II

A SCIENTIFIC REVOLUTION
CHAPTER 5

DISCOVERING/INVENTING THE WORLD
We can be precise about it: the Scientific Revolution began on a farm in the southern Swedish county of Skåne (at that time ruled by Denmark) on the evening of 11 November 1572. On that evening Tycho Brahe (1546–1601) was visiting a relative on the far side of the Øresund strait, in a region that today is part of Sweden but at that time belonged to the heartland of the large Danish empire whose capital lay on the other side of the strait. When the Danish nobleman looked up at the starry sky from the grounds of Herrevad, his uncle's estate, he made an observation that seemed to shake all his previous knowledge to its very foundations: in the constellation of Cassiopeia a new star had suddenly appeared!
"The greatest miracle in the history of the world"

It is often said that Tycho Brahe had the best eyes of his generation. In order to understand what was so astonishing about an observation made by a sharp-sighted astronomer on an ordinary autumn evening, we need to remind ourselves that, according to the prevailing notions of the time, the heavens were strictly divided in two: above, an eternal and unchanging superlunary space; below, the everyday and changeable sublunary world; and between them, like a dividing line, the moon. The sudden appearance above the moon of what seemed to be a completely new star, in a sphere of the universe which was assumed to be governed by perfect circular motions and eternal immutability, threatened to destroy the very foundations of astronomy, which had been well established since antiquity. Contradicting the ideas of Aristotle himself, Brahe's observations were nothing less than sensational.

Although others had also noticed and been amazed by the sudden change that could be seen in the night sky in the autumn of 1572, Tycho Brahe was alone in grasping its epochal consequences. He did not hesitate to call it "the greatest miracle in the history of the world," and by May the following year he had published De Nova Stella (On the New Star), a book that would make him world-famous almost overnight by virtue of its main thesis: the universe
is not immutable. An old world of established knowledge collapsed—and a new era began!

And yet, as so often in the history of science when someone proves to have been right, the truth of the matter proved far more complicated. For example, far from being a new star that had suddenly revealed itself in the heavens, this was light from a dying star, a supernova, which was in fact entering its final phase and for that reason shone so much more brightly. There were also other respects in which Brahe turns out to have been less modern and well-informed than is often assumed. Nonetheless, in Brahe we see for the first time the outline of a modern researcher, an early image of what, much later, would come to be called a scientist. There are strong reasons for arguing that the Scientific Revolution began in earnest with Brahe, not Copernicus.
Researchers need money

At the end of the sixteenth century, Denmark was a great power, whose sphere of influence extended from the Baltic Sea to the North Atlantic. The kingdom was ruled by Frederick II, a sovereign with a taste for the new era and its knowledge. In order to keep the twenty-six-year-old Brahe in Denmark once invitations began to flood in from every quarter, the king made him an offer he couldn't refuse: the strategically located island of Hven, out in the Øresund strait, where he might make his observations directly. He put virtually unlimited resources at Brahe's disposal. Almost two percent of the entire state budget was spent on the castle of Uraniborg and the research facility of Stjerneborg, both of which were built on the island along with a paper mill, a printing works, and an instrument workshop.

Supported by this almost unimaginable amount of money, Brahe and his hundred or so assistants were able to enjoy steady financing during the following years, which made possible an uninterrupted stream of new discoveries and the inventorying of thousands of stars in a catalogue whose measurements put all previous efforts in the shade. Their results were to stand for many years. In addition, he received further confirmation of the mutability of the heavens in the form of the comet which suddenly appeared in 1577.

In retrospect, we might well ask: what motivated Frederick to invest so heavily in a research facility out in the Danish straits? In order to understand the era we are dealing with, particularly its distance from what we recognize as the modern world, while also gaining an insight into the more general conditions for science in a historical perspective, it may be helpful to point out that what Frederick expected Brahe to deliver in return was, more than
anything else—horoscopes. Yes, you read that correctly! It may sound bizarre, but horoscopes were a particular specialty of Tycho's and something that a sovereign had a real use for, because he believed that they would provide him with information about impending changes and threats of various kinds. The fact that Frederick was not particularly interested in research for its own sake, but rather in its instrumental value, is not really so very different from our time, when the politicians who decide on research budgets are only rarely concerned with knowledge, preferring instead to focus, like Frederick, on the potential instrumental (read: economic) value that research and science can provide. Figuratively speaking, they, too, want "horoscopes"!

It may nonetheless seem strange for a researcher and "scientist" (yes, as we noted earlier, the concept did not exist at this time) to be occupied with astrology. To avoid becoming tangled in the anachronisms that often surround the Scientific Revolution, we should perhaps remind ourselves that we are dealing with an era when the world still seemed coherent: microcosm and macrocosm were joined in a single structure, and the whole world was seen as a divinely ordained harmony. The expectations that lay behind the newly awakened interest in mathematics were closely bound up with this notion that the world was coherent—and that "God was a mathematician!"91 It is perhaps not so strange, then, that people's interest in mathematics could derive as easily from the possibility of devising mathematically formulated laws of nature through precise observation as from the possibility of gaining an insight into the consciousness of God, or from magical and speculative ideas: numerology and astrology held out the promise of secret knowledge about the world, its nature and course of events.

The close, albeit for us strange and utterly alien, combination of astronomy and astrology rested precisely upon the conviction that the world was so profoundly integrated that stars and planets could actually determine the destinies of human beings. Astronomical knowledge was accordingly regarded as having a prognosticatory value, whose importance to a sovereign can easily be imagined. Horoscopes were thus a significant source of income for practically all investigators of natural phenomena during the Scientific Revolution.

What is more, the castle at Uraniborg had been fitted with kilns in its cellars for the purposes of alchemical experiments. The fact that Brahe, like many of his contemporaries, was also deeply interested in alchemy—which he referred to, revealingly, as "earthly astronomy," in the same way as astronomy was known as "heavenly chemistry"—reinforces the impression that he was not quite as modern as posterity tends to represent him. At the same time, from a historical perspective it is possible to understand the keen
interest taken in alchemy by Tycho Brahe and Francis Bacon as the expression of a forward-looking movement in the sense that they were interested, not only in quantitative movements but also in the possibility of qualitative transformations in nature. Even the great Isaac Newton placed great hopes in alchemy, which he regarded as an element of basic research that would hopefully one day help to explain gravity.

Let us get back to Hven and the beginning of the Scientific Revolution. Brahe was also part of an early developmental phase with regard to technology. His research activities on the island took place at a time when people had not yet begun to use telescopes with lenses and had instead to rely upon their own eyes. The optical instruments at their disposal made it possible to measure the positions of heavenly bodies by using angles with different measuring scales. But when Brahe and his assistants used the observatory's revolving steel quadrant, which had been sunk into the ground in order to stabilize the aim and exclude distracting light, they were still looking up at the night sky without the help of lenses. Brahe has been described as the "greatest figure of the pre-telescopic era."92 Once the telescope entered the toolbox of astronomy in the 1630s, many years after Brahe's death, it became increasingly difficult to mix astrology and occultism with research.

Researchers need financing and always have. However, the fact that research takes money, and that there has always been a very close connection between science and money, has often been deliberately overlooked, or at least discussed in hushed tones, in textbooks and the literature on the philosophy of science. But the fact is that researchers have almost always been very interested in money and, moreover, surprisingly easy to manipulate by financial means. Building a major research facility also requires large sums of money, and Brahe was fortunate in being able to enjoy substantial and consistent financing for an unusually long period of time.

The good times ended abruptly in 1588. Frederick II died, a regency took over and appointed a new chancellor, who cut off the financing of all research activities on the island of Hven. Those now in charge of the state finances preferred to invest in wars. The two-decade experiment was brusquely halted. Brahe left Hven, and by the end of the 1590s the entire research facility—which has been described (anachronistically) as something of "a centre for science in Europe"93—lay in ruins. We recognize the pattern all too well: how external factors, such as political decisions and changes in financing policy, directly impact upon the conditions for research and the direction it takes.
The decision to make a strategic U-turn, shut down the entire research program on Hven, and get rid of its director was greatly facilitated by the fact that Brahe's own private life was considered deeply immoral by the standards of the time. Research needs people, and trust is a precondition for people being able to act. When that trust is lacking, it becomes difficult to carry on, as Brahe learned to his cost. In order to understand what the real problem was, we need, once again, to think historically. The marriage of Brahe, a nobleman, to a commoner ran entirely contrary to the norms of the day and offered a ready pretext for those who wished to undermine people's faith in him. The yeoman peasants on Hven had become deeply resentful of Brahe, whom they saw as an unjust master. The great astronomer was undoubtedly highly arrogant, and it is a matter of record that he found peasants and colleagues troublesome. In every age, personal circumstances and a need for legitimacy have also affected the researcher's stock of trust, even if the concrete reasons for doubt or controversy change over time. The affair came to an end when supernova researcher Tycho Brahe left Denmark to seek his fortune elsewhere. As a result, he was to experience for a few years something of the insecurity that researchers down the ages have often had to struggle with.

In 1599, Brahe showed up in Prague with measuring instruments that he had brought with him from Hven. He was appointed Court Astronomer to Emperor Rudolf II, who also wanted to investigate the stars in order to be able to predict the future on earth. Brahe employed as his assistant the young Johannes Kepler (1571–1630), a brilliant mathematician who had studied at Tübingen to become a priest before teaching mathematics and astronomy at Graz and then, in February 1600, joining forces with Brahe at Benátky Castle outside Prague. For several important months they worked together.

It did not go well. Brahe's temperament was (still) unpleasant. Collaboration with him could not have been easy, but such is unfortunately often the case with researchers. Brahe subjected Kepler to great humiliation. Nor were they substantially in agreement with regard to the fundamental premises of astronomy. Once again, it may sound strange, but the fact is that Brahe did not accept Copernicus' revolutionary ideas. Deeply skeptical, he instead continued to work from the presupposition that the universe was centered upon the earth (even if he did develop a compromise model, a kind of "geo-heliocentric" worldview in which the sun circled the earth while the other planets, in turn, circled the sun). As noted earlier, Copernicus' ideas did not enter general circulation until after the turn of the century and were only generally accepted long after Brahe's death. By contrast, Kepler had already become a convinced Copernican, which led him to correct Brahe's ideas and change
his results so as to accord with a heliocentric worldview. Moreover, he finally put an end to the fixation with circles that was a holdover from antiquity. Where Brahe had insisted upon the circle as the perfect geometric figure, Kepler argued that the planets moved in ellipses. Yet even Kepler's theories had to wait for their breakthrough, which happened not in his lifetime but during the second half of the seventeenth century, largely thanks to Newton.

However, the collaboration between Brahe and Kepler would prove to be a stroke of fortune. In research terms, their differences made them highly complementary. Kepler was able to draw on Brahe's comprehensive inventory of stars and planets in the universe, recasting it within another worldview that corrected several of the errors in Brahe's thinking, of which the most important, as we have seen, related to geocentrism and his fixation with circles.
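A brief mathematical aside may make the contrast between circles and ellipses concrete (the equations are standard textbook material rather than part of the account above). In the modern formulation of Kepler's first law, a planet moves on an ellipse with the sun at one focus; in polar coordinates centered on the sun, the orbit is

$$ r(\theta) = \frac{a\,(1 - e^{2})}{1 + e \cos\theta}, \qquad 0 \le e < 1, $$

where $a$ is the semi-major axis and $e$ the eccentricity. The circle that Brahe insisted upon is simply the special case $e = 0$, for which $r = a$ at every point. Since planetary eccentricities are small (for Mars, $e \approx 0.093$), the two models differ only slightly, which helps explain why the ancient circle dogma survived for so long: famously, it was a discrepancy of a mere eight minutes of arc between Brahe's observations of Mars and the best circular model that forced Kepler to abandon the circle.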
The invention of discovery—the birth of modern science

My inspiration for opening this chapter with a precise statement, that the Scientific Revolution was inaugurated on 11 November 1572, is the historian of science David Wootton. Wootton adds that the decisive shift in the evolution of the thinking which began on that day came about in 1704, when Isaac Newton published his book Opticks, in which he replaced the Aristotelian notion of colors as qualities with the idea that colors are a phenomenon of refraction (and can be quantified).94

As I see it, there are extremely good reasons for highlighting Tycho Brahe's importance for the scientific breakthrough. The astronomical research environment which he created on Hven incorporated many of the components that we associate with modern scientific work: it had a research program, a community of experts, a stable financial basis, and, not least, a willingness to call into question long-established truths. Most important of all, however, and even more innovative, was the fact that Brahe signaled the appearance of something that would become completely crucial during the Scientific Revolution, something that was entirely new at the time and that would be decisive in the future: a focus on discoveries. Brahe marked a turning point in the history of human thought because he de facto invented—or simultaneously discovered and invented—discovery itself. No medieval natural philosopher had engaged with research in any real sense of the word, taken an interest in discoveries, or imagined that science could make progress. For modern science, the necessity of discoveries would become axiomatic.95

And so, even if Columbus cannot be considered a modern, scientifically thinking person, it turns out that there is an important link between his (unintentional) discovery of America and the focus upon discovering new
knowledge that has evolved into the beating heart of all scientific work. In his monumental account of the Scientific Revolution, Wootton returns repeatedly to Columbus and shows, somewhat surprisingly, a degree of understanding for his predicament: "Columbus discovered America, an unknown world, when he was trying to find a new route to a known world, China. Having discovered a new land, he had no word to describe what he had done."96 The difficulty of discovering/inventing discovery should not be underestimated. Language and reality are intertwined. Columbus' dilemma was not only the result of his lack of formal education; it also derived from the fact that he and his contemporaries lacked words for what he had experienced: what it meant to discover something new. Even the word "discovery" itself was not yet an established concept, either in English or in its Spanish or Latin equivalents. During the following century, however, the word "discovery" spread quickly into every European language in the wake of a book published in 1504 by Amerigo Vespucci in which he describes his journeys to the New World in the service of the Portuguese king (Letter to Piero Soderini) using the Italian term discoperio—discovery.97

But discoveries presuppose inventions. More important than gunpowder, the printing press, and the compass, to which we will come back shortly, were the linguistic innovations and theories, and, above all, discovery itself—discoperio. Discovery would in itself change the world in far more revolutionary fashion than any encounter with a new landmass. From this perspective, the Scientific Revolution of the sixteenth and seventeenth centuries more closely resembles a retrospective intellectual reconstruction. Science as we know it first appeared in the nineteenth and twentieth centuries. In other words, the Scientific Revolution was far less of an epistemological break than is often claimed. Yet Wootton nonetheless views the invention of discovery as something so decisive that he does not hesitate to describe it as the moment in which modern science was born. Despite all the objections that can be made about Columbus' exploits and significance, his journey would eventually acquire a crucial significance.98 It is usually an advantage to be well informed and well versed in tradition, but sometimes the most groundbreaking advances are made possible by the lack of prejudice that is the special preserve of the uninformed.

In the invention of discovery we are confronted by the embryo of modern science. In order to understand what was so radically new about discovery, we need to recall the profound feeling of nostalgia that predominated before modernity. This was an era entirely devoid of the expectation that the future might bring positive developments and a better life, an expectation that characterized the advent of modernity and has since become taken for
granted. Since history was instead largely treated as a process of decline, the best one could hope for was the restoration of an original golden era. The way forward thus lay backwards, as it were, via tradition. The most that could be hoped for was that history would simply repeat itself, so that those living might share in a fuller knowledge of the world and enjoy an ideal state which had nonetheless been lost in the past.

On this view, the discovery/invention of discovery represents one of the very first impulses that during the seventeenth and eighteenth centuries led people in the West, who had previously moved backwards into the future, to slowly turn about in order to begin moving forwards towards the future as their horizon of expectation. At issue here is a hope that the good life, far from involving a return to a past golden age, is something that lies in the future, understood here as something that remains to be created and which will surpass all that has come before. This new temporal regime rests on the idea of discovery, which is to say, irreversible events within the framework of a linear conception of time that lead to development and progress. Our modern view of science and technical development, together with all our notions of research, knowledge development, originality, and progress, is entirely dependent upon there being discoveries to make, that is to say, the possibility of new knowledge and innovations. In scientific research, it is a matter of discovering something new from a universal perspective; in science education, of discovering something new from the individual's perspective.
Quasi novello Columbus

Even if Columbus himself, as we have noted, never understood the implications of his achievement, his crossing of the Atlantic was nonetheless an extraordinary feat. His (unintended and unwitting) discovery of America was not the result of scientific work, admittedly, but it nonetheless soon acquired an unparalleled scientific importance by virtue of having led to the discovery of discovery itself—and, with it, the birth of modern science. In the following century, this revolutionary idea spread, first in geography and cartography and then to other areas. Columbus' journey was transformed into a kind of triumph of experience, continually opening up new worlds and serving as an imperative to those who wished to make discoveries of their own.

Francis Bacon (1561–1626) was one of the first to recognize the scientific importance of the discovery of America as well as the long-term implications of the fact that Columbus' achievement—sailing to a new world—had been made, not by scholastically trained university men but by
uncouth sailors with only an elementary education. He saw, too, that the logic behind the invention of discovery would revolutionize the logic of knowledge development entirely, by challenging the narrow division between gentlemen and craftsmen that had dominated hitherto but that could no longer be sustained. Bacon therefore created an entire philosophy around discovery. Instead of turning to Plato, Aristotle, or the authors of the Scriptures for knowledge, those interested in making discoveries and inventions should seek out sailors, businessmen, craftsmen, and engineers—people, Bacon argued, who had been tested by experience and who had acquired knowledge through contact with nature. In other words, creating genuinely new knowledge required gentlemen and craftsmen to work together, in the same way as theory and practice, truth and utility, were related to each other.99

Bacon realized that, by taking discovery as a starting point, it was possible to organize research along completely new lines, in accordance with the vision of a collective, systematic, and strictly regulated production of knowledge on the basis of empirical data that he set out in New Atlantis (1626). It was no coincidence that this book, which appeared posthumously, is divided into two sections. In the first, Bacon describes the impressive degree of prosperity awaiting visitors to the island of Atlantis; there then follows an explanation of where this prosperity comes from, namely the research and science that his book goes on to describe. In so doing, he opens up an entirely new perspective on knowledge, focused on progress, development, and discoveries. Although Bacon himself was admittedly neither a researcher nor a philosopher, as a senior minister in the service of the state he enjoyed for a time a unique position of power. This not only enabled him to develop a new kind of scientific policy but also gave him access to resources unavailable to any other individual during the Scientific Revolution, which, until intrigue led to his downfall, made it possible for him to put his ideas into practice.

When Bacon realized the tremendous consequences of Columbus' journey and then cast his eye across his own world, he was quick to call Galileo the "Columbus of astronomy," a quasi novello Columbus ("like a new Columbus").100 He also saw that the idea of knowledge development as having a universal horizon, in which new knowledge of new worlds took center stage, implied a dimension of competition, something that had also been an ever-present factor for the various expeditions on their voyages of discovery. Bacon then incorporated these insights about the importance of the race to be first into his understanding of scientific organization: the concept of discovery always implies some kind of competition.101 In so doing, he can truly be said to have opened the doors to a modern science in
which pride of place is given to scientific advances and groundbreaking publications about new discoveries, not the anonymous continuation of existing knowledge. Bacon's famous axiom that "knowledge is power" further moved the focus of science towards utility and away from an interest in abstract truths: knowledge is quite simply about making us more powerful so that we are capable of meeting new challenges. Echoes of Bacon's precepts "for the benefit and use of life" can even today be heard in the rationales given for the award of Nobel Prizes, which often rely on formulations about the usefulness of science. With the spread of a new conception of time, defined by new ideas about development, people also increasingly came to accept, not only that knowledge and ideas can be developed, as the limits of research move in response to new advances, but that they can be converted into intellectual and financial capital.

Bacon formulated many of the ideas that would later become important for modern science. But these remained for the most part impulses and thoughts that only much later would gain widespread currency and concretely change both the university and the world.
When the times are a-changing—from East to West

The adventurer Christopher Columbus was born in Genoa, a typical port town in Liguria, which at that time had successfully grown into one of Europe's largest cities. In 1453, when Columbus was two years old, the city's success story was threatened by the shock delivered to the entire world order by the fall of Constantinople. Although this event, one of the greatest catastrophes of the later Middle Ages, took place at the other end of the Mediterranean, it had a profound impact upon Genoa, whose economic success and vast wealth had been built on trade with the East, via its base in Galata, situated high up on the opposite side of the Golden Horn to Hagia Sofia in Constantinople.

For well over a thousand years, Constantinople had been the world's greatest city. To get an idea of its importance during this almost unimaginable length of time, we can compare the status of New York in the present (albeit over a far shorter duration, essentially the twentieth century) as a metropolis in which everything of consequence for our era is in some way concentrated and manifested. If we want to imagine the extraordinary importance, economic and cultural as well as religious and symbolic, of the fall of Constantinople for people at the time, we could do worse than compare it to the epochal importance of the terrorist attacks on the World Trade Center in 2001 for how our own era understands itself.
These are defining events that from the moment they occur are recognized as turning points in history.

It was anything but a coincidence that Constantinople had sprung up alongside the Bosporus, the strait between the Sea of Marmara and the Black Sea, a communication link that since time immemorial had served to link Persia, Syria, the Caucasus, India, and China in the East with Mediterranean Egypt, Athens, Rome, and latterly Venice in the West. Yet the city is actually part of a far longer history that involves three cities and three golden ages. Well before it became the second capital of the Roman Empire, founded by Emperor Constantine in the year 330, Constantinople had been a great metropolis, albeit with the name Byzantium. Eleven hundred years later, following the conquest of 1453, Constantinople was again transformed, this time into the capital city of the vast Ottoman Empire, ultimately becoming the city we know today as Istanbul.

For the staggering period of a thousand years, prior to the completion of St Peter's Cathedral in 1500, Hagia Sofia was the greatest church in Christianity and its most magnificent building, and when the western provinces of the Roman Empire collapsed during the era of mass migrations, much of the culture of classical antiquity found a safe haven within the city's walls. The cultural gap that had always existed, and that had widened in the centuries around the start of the so-called Christian era into a deep divide between the eastern and western parts of the Roman Empire, had also gradually reproduced itself in the ecclesiastical institutions that emerged after the fall of Rome in the fifth century. Over the years these tensions grew, and in 1054 they resulted in a breach between what would in time become the Orthodox and Catholic churches—the Eastern Church and the Western Church, respectively. The conflict became so intense that it should hardly come as a surprise to learn that several of the crusades launched from Western Europe to the Holy Land were easily tempted to make a detour to attack and plunder Constantinople, with the result that the East Roman Empire became progressively weaker.

At the same time, the Ottomans were conquering a growing number of Constantinople's neighboring territories. In the following centuries, the pressure on the city was steadily increased by wars and military sieges until the forces of Sultan Mehmet II finally breached its walls using a gigantic cannon in May 1453. Having allowed his troops to plunder the conquered city for three days, the young sultan rode directly to Hagia Sofia on the afternoon of Tuesday, 29 May, where he took the decision to convert the church into a mosque. By Friday, 1 June, Muslim midday prayers were already being held in the sacred edifice that for a millennium had been a centre of Christian culture.
Amid all this discontinuity, however, there was also continuity. The teachings of classical antiquity spread outwards from Constantinople, through the Ottoman Empire and into the Muslim world, and the city quickly became an Islamic centre and the natural capital of the Ottoman Empire. The expansion and victories of this empire brought increasing pressure to bear upon Christian Europe, resulting in wars and a persistent perception of threat during the following centuries and even into our own era. Compared to the Christianity of medieval Europe, however, the Ottomans had a relatively tolerant religious policy. In 1492, for example, the Sultan welcomed a large number of Sephardic Jews from the Iberian Peninsula, who settled in Thessalonica, Izmir, and Istanbul in their search for refuge from the Spanish and Portuguese crowns. In the sixteenth century, Constantinople's population was 10% Jewish, 32% Christian, and 58% Muslim.102

All of this also had a highly beneficial effect on Western Europe in that large numbers of Greek scholars fled westwards, bringing with them manuscripts and learning that would give a decisive impulse to Renaissance culture. This in turn was a key intellectual catalyst for the knowledge cultures of the fledgling universities that were springing up across Europe at this time.

For someone like Columbus, who had no more than an elementary education and who had grown up in a trading town whose entire prosperity was built upon the lucrative long-distance trade with Asia, the fall of Constantinople was nevertheless a complete catastrophe: the trading routes to the East were closed off, causing the collapse of Genoa's formerly lucrative enterprises. The times were changing, and those in the West were about to shift their horizon of expectations from East to West. A new era was at hand.
Discovering/inventing a "new" world

Europe in 1500 was a continent undergoing radical change. The period was characterized by a succession of transformative processes, which variously reinforced and chafed against each other. The voyages of discovery took place in tandem with the development of printing techniques, the Renaissance, the Reformation, urbanization, the first efforts by sovereigns at state-building, and the increased commercialization that followed on the interaction between local markets, regional divisions of labor, and long-distance trade. In short, the world was growing at breakneck speed—with the result that new perspectives began to challenge established ways of thinking and new mentalities began to tear down what had once been incontestable authorities.
Although at this point in time the concept of Europe was far from being a unifying notion for this region of the world, since identities were more likely to be formed by church and religion (indeed, it is reasonable to ask whether Europe is even a region of the world, given its fuzzy outer borders on both east and west and its uncertain identity to the north), it was now that Europeans began to travel and "discover" the larger world. A long succession of transformative processes, weaving in and out of each other in complex ways, ensured that the relatively obscure and underdeveloped Europe of the fourteenth century gradually established itself as the centre of the world, undergoing a development that would result, several hundred years later, in British and other European colonial powers establishing a global empire. But even the talk of "discoveries" is telling, since it shows how a Eurocentric gaze was the obvious point of departure for encounters with "new" worlds—as if the latter had not existed previously and the peoples who had lived there for millennia were only now being incorporated into a world-historical narrative formulated by Europeans.

The cumulative effect of these transformations around the year 1500 changed the world to such a fundamental degree that we can speak of it as an epochal shift. The "discovery" of the "new" world broadened people's horizons and exploded long-established notions about the size and makeup of the world. These perspectives were further broadened by the invention of a printing press with moveable type, which made communication on a large scale possible and created the conditions for the emergence of a shared collective imagining of the world. This in turn encouraged literacy and greater intellectual independence, even as it led to the profound convulsions within Western Christianity caused by Martin Luther's Reformation, which was led by the theological faculty of the University of Wittenberg. And when a new scientific project saw the light of day, it was the result of growing enthusiasm for classical learning in tandem with Renaissance humanism—even if people at this time did not yet refer to "science" and it did not involve "academic" culture under the aegis of the university, since the greater part of the new knowledge culture that emerged in relation to natural philosophy was in fact developed alongside the university as an institution, which since the later medieval period had been spreading across the continent like wildfire.

It is not possible to understand modern science without highlighting its close ties to modern capitalism. In historical terms, the ties between economics and science have long been far stronger than science likes to admit, and the triumphant progress of modern science cannot be understood without taking into consideration its intimate connection to economic interests and expectations. The fact is that science and
imperialism spread across the world hand in hand—they supported, reinforced, and were entirely reliant upon each other. But they were also characterized to a large degree by the same mentality: both were focused upon discovering and conquering new worlds, whether that involved physical or cognitive realms. It is no coincidence that Darwin made his epochal journey as part of an imperialist expedition, which traditionally included researchers as an integrated component of the colonial project. The fact is that virtually every significant military expedition of this period was accompanied by large numbers of scientists. This close connection between science's hunger for discoveries and the conqueror mentality of imperialism has prompted Yuval Noah Harari to underscore what he sees as the identifying feature of a shared mentality:

Both scientist and conqueror began by admitting ignorance—they said, "I don't know what's out there." They both felt compelled to go out and make new discoveries. And they both hoped the new knowledge thus acquired would make them masters of the world.103
Harari argues that there is a close connection between the new attitude among the numerous "explore-discover-expeditions" that were typical of the period around 1500 and the fact that cartographers in the fifteenth and sixteenth centuries fell out of the habit of inserting decorative motifs in the unknown territories on their maps and instead began marking these areas as large empty spaces. By acknowledging their ignorance in this way, they provided "one indication of the development of the scientific mindset, as well as of the European imperial drive."104

All of this serves to confirm the notion that Columbus' "discovery of America" should indeed be understood as the catalyst that ultimately led to the Scientific Revolution. Yet this is an ironic history, given that he was himself thoroughly "medieval"—both in his lack of interest in finding new knowledge about new worlds and in his refusal to admit to his ignorance.
Mapping the world is never innocent

As the world continued to grow at a rapid pace, decade after decade, such that its size and shape changed dramatically, the need for maps also grew exponentially. With its blank spaces, cartography was a powerful invention for those who wished to investigate and discover new worlds. The complications that arose as these maps became increasingly fashionable during the Scientific Revolution and the Renaissance culture of the same period also offer a crucial lesson about the fundamental conditions of the art of discovery/invention. An often-overlooked advantage of the information
technology revolution of this era was that the art of book printing made it vastly simpler to reproduce illustrations and images, such as maps, in comparison with the old manuscript techniques. This proved to be the material precondition for an explosive growth in the number of maps. The rapid development of cartography was hugely important in facilitating navigation during long journeys, but maps also revolutionized the way people thought about the size and complexity of the world. For an emergent capitalist economy in a world that was expanding by means of discovering ever more new territories, maps also became important instruments for the exploitation and subjugation of new continents. Mapping was thus not merely some innocent representation of reality—maps also ultimately transformed the world!

Maps have been used on many occasions to symbolize the scientific ambition of representing the world "positively" and have given expression to the conventional idea of an ordered image that neatly corresponds to the real world. But maps are even more important as a starting point for anyone wishing to reflect upon the complications that follow upon the desire to represent and upon the impossibility of conveying a correct image of the world. Maps are never merely a simple rendering of something that is "already out there" in reality. A map is the product of human actions; it is shaped by the historical conditions of its age, and it reveals as much about the person who is trying to represent the world as it does about the object of that representation. Maps are always inscribed within power relations and thus never innocent. The maps of power and the power of maps are inseparably intertwined.

Given this background, it could be argued that the traditional world map, which is most often drawn according to Mercator's projection, represents one of the most important documents of the Eurocentric era, that period of time when Europe was the center of and dominated the world, defining its hierarchies of values and norms. Indeed, it is only from a very particular perspective that the world can be beheld in such a way as to have Europe at the center and "the others" laid out as its periphery. It makes a striking contrast to Japanese maps, where Europe suddenly finds itself on the edge of the world, something that also makes it impossible to visualize, for example, the connections westward between Europe and America that are a precondition for notions such as the West. It is no exaggeration to say that the development of the world map is the herald of the European era, a period when Europe, proceeding from what it imagined as its central position, takes on the task of "discovering" the world. Yet something that at first glance looks like an objective gaze soon reveals itself as the expression of a Eurocentric gaze that looks out across the world
from its own special starting-point—Europe. In the process, such observers give priority to their own view, as can be seen with clarity in the fact that Europe is the only continent to have chosen its own name. All other continents have been named by Europeans. This privileged interpretative position makes Europe the center of a world order in which the rest of the globe exists as merely an adjunct to the only self-named continent. At issue here is a process of centering that simultaneously means a decentering: Europe locates itself at the center of the world by defining Asia, America, and the rest of the world as periphery. The world map offers us a world—and simultaneously suppresses other worlds. It helps us to see contexts and to orient ourselves in the world—but it also conceals other contexts. The world map is thus as much a source of disorientation as it is a means of orientation.

The map is not the world! The humorous cartoon in which tiny people drag around enormous atlases many times larger than themselves reminds us of the absurdity of having a map of the world in 1:1 scale. But the world is not flat, either, and has other proportions. Maps certainly provide us with an image of the world, but they are also inescapably selective and essentially misleading. The problem of maps thus also illustrates the complications and challenges associated with science's efforts to arrive at an exact representation of reality. The difficulties in achieving an exact rendering of reality are exacerbated and, indeed, perhaps doubly evident in the case of the world map, which involves the impossible challenge of representing a three-dimensional globe in two dimensions. In Mark Monmonier's acute but provocative formulation: "There's no escape from the cartographical paradox: to present a useful and truthful picture, an accurate map must tell white lies."105 Ironically, the technique used for printing books served to flatten out the world at the very moment in time when, it is conventionally (if erroneously) imagined, people had just begun to see the world as a sphere.
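Monmonier's "white lies" can be given a precise mathematical form (the following is a standard illustration rather than part of his own formulation). In the usual Mercator construction, a point with longitude $\lambda$ and latitude $\varphi$ on a globe of radius $R$ is mapped to the plane coordinates

$$ x = R\,(\lambda - \lambda_{0}), \qquad y = R \ln \tan\!\left(\frac{\pi}{4} + \frac{\varphi}{2}\right), $$

where $\lambda_{0}$ is the central meridian. The projection is conformal: it preserves angles, and hence compass bearings, which is exactly what made it so useful for navigation. The price is that the local scale grows as $\sec\varphi$, so that areas are inflated by a factor of $\sec^{2}\varphi$: 1 at the equator, 4 at 60° latitude, roughly 33 at 80°. This is why Greenland can appear comparable in size to Africa on a Mercator world map even though Africa is in fact about fourteen times larger. Every flat map must choose which truth to preserve (angles, areas, or distances), and that choice, as argued above, is never neutral.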
Is it even possible to represent the world?

The problem of representation became an acute challenge as Renaissance culture deepened its interest in portraying both the human subject and the world by means of the visual arts, sculpture, and architecture. Because Renaissance artists took inspiration from classical models, the classical remains in Rome became a key resource for the Italian Renaissance project. This desire to represent also resulted in important innovations, as when artists began successfully using vanishing-point perspective to manipulate the viewer's perception and create an optical illusion of reality. During the same period, advances in anatomy provided important information to artists and
sculptors as they sought to represent the human body. To a considerable degree, the Renaissance was an artistic project that fostered new thinking about the preconditions for representing the world and human beings. As the many biblical and religious themes of these artworks attest, and contrary to how it has sometimes been presented, this was never a purely secular project.

For long periods of time, the images produced by the Renaissance remained confined to a mimetic and idealizing paradigm, as can be seen clearly in the exquisite serenity of works by Giotto and Fra Angelico, in which the imitation of nature is an act of humility in the service of God. Even so, this did not necessarily make it a formulaic exercise, as is evident from the developments that took place in the latter half of the sixteenth century and the seventeenth century in the form of Mannerism and the Baroque, in which Renaissance artists' new skills and experiments with perfect form were superseded by something entirely different. The ideal of a natural perfect form now gave way to illusion, resulting in the exaggerated, contorted, distorted, and deceptive forms of Mannerism and the Baroque. The paintings of Michelangelo, El Greco, Rosso Fiorentino, and Caravaggio feature a curious tension between form and content, which is also visible early in the work of Dutch painters such as Hieronymus Bosch and Baroque artists such as Giuseppe Arcimboldo. The latter two are now regarded as precursors of Surrealism and modern art, which has subsequently problematized the fundamental basis for representation to such a degree that reality and representation have ultimately tended to part company.

The history of modern art has to a large degree also been a history of reckonings with realism, in which artists have experimented with the boundary between us and the world. With the passage of time, the pictorial element has been edged out by symbolic forms. In this way, art actualizes the entire question of perception, confronting us with the solid fact of selection in the interface between ourselves and the world, as the eleven million bits of sensory data which our senses continually receive are pared down to sixteen bits of informational content in less than half a second. The power of selection is as brutal as it is unavoidable.

The importance of art in problematizing perception and the unreliability of images in this interface between ourselves and the world becomes apparent when we trace the historical trajectory that runs from Michelangelo's "David" (which tries to portray the nude male form as faithfully as possible) to Impressionism (which tries to capture the moment without being constrained by the old myths) and the Cubism of Braque and Picasso (in which perception itself tends towards both the grotesque and the geometric). Over time, the avant-garde has also questioned whether art has
anything to do with beauty at all. If we extend the line further, all the way to the "ultimate" black paintings of Ad Reinhardt, the entire problematic of representation effectively goes back to the beginning.

As the many references to art in philosophy and science make clear, these art-historical developments of the last few hundred years have had a profound influence upon both fields. In the wake of Andy Warhol's "Brillo Box" installation, it became virtually impossible to differentiate art and non-art, since the artwork and the commercial product (in the form of a well-known brand of scouring pad) are identical in both format and form. Gestalt psychology has also been extremely important for the question of representation. Phenomena such as the so-called gestalt shift have raised our perceptual awareness of perspective and aspect as well as actualizing the importance of hermeneutics more generally. One of the few successful examples of the use of images in the history of philosophy was offered by Joseph Jastrow in the rabbit-duck illusion, subsequently made world-famous by Ludwig Wittgenstein, which shows how a single image can change from duck to rabbit and back again, depending on the viewer's perspective. How we see the image depends, in other words, on how we interpret it.106

The critique of representation is radicalized in non-figurative art, which fundamentally challenges our pictorial vision. And yet the implications of non-figurative art are quite ambiguous: either it can be incorporated into a utopian effort to reboot the problematic of representation, forcing us to start over from the beginning, or it can take the form of a quasi-scientific knowledge of the elements and building blocks of the universe. In both regards, art in our era has served to inspire and compete with both philosophy and science. This becomes even clearer when we recall that typography, as the form of the written word, has become the most dominant non-figurative "art."

The paradoxical aspect of discovering that images are by their nature illusory becomes evident when we compare words and images as they appear in painting itself, as René Magritte did in his painting "The Treachery of Images" (1928–1929), which depicts a pipe above an inscription that reads Ceci n'est pas une pipe (This is not a pipe)—which is actually true, although it may sound self-contradictory: the painting itself is not a pipe, obviously, but a painting of a pipe. In his book titled This Is Not a Pipe, Michel Foucault used Magritte's "pipe-painting" to show the necessity of problematizing representation's reliance upon the boundary between discovering and inventing.107 This entire problematization of the basis for representation and its challenges has also ultimately dispelled any notion that cartography can be innocent.
Renaissance imitations: discoveries or inventions?

The Renaissance is conventionally used as a rubric for the numerous intellectual currents that characterized the period centered around 1500, initially in Italy but then spreading to other geographical areas. The word "Renaissance" itself comes originally from the Italian rinascita, meaning "rebirth" (it has come down to us through the equivalent term in French), and was first used in the sixteenth century by Giorgio Vasari to designate what he saw as a contemporary rebirth of the true art of classical Italy after a long and dark age of barbarism. This notion of an intervening period of darkness—the Middle Ages—is perhaps one of the most important inventions of the Renaissance,108 and it gave rise to a tripartite historical schema of two light-filled golden ages (antiquity and the new era) separated by a dark and protracted middle age squeezed between them. Because the contemporary ideal at that time was light and order, the Middle Ages were necessarily imagined to be a world of chaos and misrule, and because the Renaissance was considered the embodiment of civilization, the Middle Ages must have been barbaric.

The fact is that the traditional image of the Renaissance—like the notion of modern science and its view of the Scientific Revolution—is largely a product of the nineteenth century, when it was created by writers like Jacob Burckhardt, Jules Michelet, and John Ruskin. The individualism and realism, the modern attitude, and the liberalism that these writers ascribed to the Renaissance were actually inventions, more a reflection of their own desires and ideals than historical realities that could be discovered in the Italian culture they harked back to. Today, it is abundantly clear that when these nineteenth-century thinkers represented Renaissance culture as a secularized modernity—despite the sheer prominence of religious motifs in its art, the divine sanction given to the birth of the individual, and the unavoidable fact that Renaissance people were Christians—they were underestimating its distance from modernity to precisely the same extent as they were overestimating its distance from the medieval world.

For Renaissance humanists, the key concept was imitation—not the discovery of something new. They were not slavish imitators, admittedly, but sought rather to appropriate and adapt models for their own purposes, though without any real expectations of being able to go beyond or surpass them.109 This sounds strange today because it contrasts so starkly with our own era's ideals of innovation and individual creativity. The notions of development and progress have now become so self-evident to us that we almost instinctively attribute these qualities to earlier periods in history.
However, doing so blinds us to the fact that the Renaissance was not the forward-looking project borne along by utopian energies that we often imagine it to have been. On the contrary, the Renaissance was characterized by a fundamental sense of nostalgia. Taking as their point of departure a perspective shaped by the myth of a golden age, people instead viewed history as a process of decline. All that could be hoped for was a renaissance, a rebirth of a vanished state of happiness—a restoration. More than that, nothing.

Once again, we need to remind ourselves of the tremendous pessimism that was a hallmark of the Renaissance era. In the shadow of the plague—which in 1347–1350, barely three generations earlier, had killed between a third and almost half of Europe’s population in the space of only four years—there were few who expected the future to be a golden age. Quite the reverse: it was feared that the end of the world was at hand. In chronological terms, humanity was also approaching the end of the five thousand years that the Bible was believed to have stipulated as the span of human history. Living in an era when geological shifts were causing an unusual number of large earthquakes, compounded by expensive and brutal wars, economic depression and famine, and political collapse and tyranny, it was perhaps only natural to think that the end of days was nigh. In other words, the Renaissance was not the progressive and creative future-oriented project that it is often represented as being, but an era defined by profound nostalgia.

At the same time, it is possible to discern an underlying ambivalence in the entire Renaissance project. Although the Renaissance was characterized by nostalgia and regarded imitation as an ideal, and was thus not driven by an ambition to be creative and constructive in the modern sense, the activities of Renaissance artists, ironically enough, nonetheless led to extraordinary innovations. It could be argued that if Renaissance culture was characterized by an explicitly nostalgic ambition while in practice achieving much that was new, the reverse is true for the idea of revolution: the French Revolution had a programmatic ambition to achieve something novel and truly unique, but in hindsight it is clear that this largely consisted of recycling historical patterns and neoclassical variants of classic models in architecture, politics, and so forth. The American Revolution, too, exhibits this same contradictoriness: it wanted to start from a tabula rasa, yet it was characterized by an obsession with the classical world and endless references to antiquity and the Renaissance.

Clearly, then, Renaissance culture cannot be reduced to mere nostalgia and imitation. The progress made during the Renaissance was in reality only possible because it rested on two different cultures. Only in this way
was it possible to achieve creative imitations while moving beyond the dichotomies that had hitherto relegated imitation and creativity, as well as production and reception, to clearly defined separate spaces. It was, in fact, in consequence of these ambiguities, and of the inner conflicts that people were forced to manage in a situation defined to a large extent by syncretism, that new cultural hybrids emerged during the Renaissance. This teaches us something about the way in which creativity and innovation rarely take place in a vacuum or as the result of freedom without conditions. If we instead pay attention to the dialectic between tradition and innovation that is the hallmark of the history of science, we will be better able to appreciate the irony in the fact that the Renaissance resulted in epochal innovations and advances despite seeking only to replicate a classical golden age and restore the lofty standards of a vanished era. In the same way as revolutions, in their orientation towards the future, have an unwitting tendency to repeat the past, so, too, did the thinkers of the Renaissance believe themselves to be nostalgically repeating the past—despite the fact that they were actually creating something genuinely new and thereby opening avenues that would lead to entirely new futures.

Our own era has seen an increasing focus upon the productive importance of art for scientific discovery. Such hopes involve less the possibility of art being able to represent and convey reality than its capacity to broaden and vary our perception: the art of seeing things differently, of investigating, and of creating and producing new realities. Extending this model, and recalling the importance of metaphor for creativity and innovation in science—how metaphors make us “see something as something”—we might say that art’s most important contribution is productive and creative, not reproductive and imitative. This also lends art a very different, strategically crucial significance for science, which, taking its point of departure in the dialectic of discovery and invention, is now increasingly focused upon innovation.110
The lasting echo of antiquity and the Middle Ages

Nothing ever begins entirely from the beginning, however. Nor does such an ambition characterize the present book—were it even possible—which instead takes its cue from some of the developmental trends of the last five hundred years that have led to the emergence of modern science. We have noted, time and again, how our thinking about the past is characterized by anachronisms, which smuggle the conditions of a later period into a long-vanished era when they simply did not exist. In itself, the concept of the Renaissance also highlights the nostalgic goal of reconnecting with—and “restoring”—a previous golden age. It is also customary to refer to (at least)
two previous Renaissances: the Carolingian Renaissance, which resulted in the creation of cathedral schools across ninth-century Europe; and the Renaissance of the twelfth century, which saw an explosion of creativity in technology, theology, music, art, education, architecture, law, and literature. Widening our perspective further, we might add to these the Arab Renaissance that took place between 700 and 1100, when scholarship flowered in the fields of astronomy, physics, medicine, optics, alchemy, mathematics, and engineering, creating intellectual currents which, when they eventually reached Europe, became important sources of inspiration for the Italian Renaissance.

The very concept of the Renaissance reminds us of the necessity of activating a larger historical perspective. The roots of modern science lie deep in the philosophy of the classical world and in the philosophy of the Middle Ages. Yet even this cannot truly be said to be a beginning. Our timeframe needs to be widened further still by reinscribing the prehistory of science within the historical phases of language and information technologies, from the importance of the alphabet and the Alexandrian library to the monastic refinement of intellectual disciplines for information management and on to Gutenberg and the art of printing.

Anyone wishing to trace the deeper historical roots of the scientific attitude, which first emerged in the seventeenth century and began to take shape as a modern field of knowledge in the nineteenth century, needs to grasp the importance of the Ionian School of natural philosophy. These pre-Socratic Greek philosophers, active in the sixth century BCE in Ionia, a region that forms part of modern Turkey, began to devise rational explanations for changes in the natural world instead of relying on myths and metaphysics. When they addressed cosmological questions about what the world was comprised of, how it had come into existence, and what its structure and larger context were, they relied upon mathematics as an instrument for describing the natural world (even though, alongside this analysis, they continued in practice to account for natural processes in terms of supernatural and mystical forces).

Many of the basic models that we use for understanding knowledge, which have been paradigmatic during the conflicts that have been a recurrent feature of our millennia-long intellectual history, originated in classical antiquity. One distinction that has been passed down through this history is Aristotle’s division of knowledge into theoretical and practical disciplines as a way of differentiating between things bound by laws that human beings cannot change and things that can be affected by human behavior. And whenever we ask questions about the most fundamental nature of matter, we are hearing an echo from the world of classical
philosophy. Thales of Miletus proceeded from the notion that it derived from a single constitutive substance, which he argued was water; for Anaxagoras, it was a mixture of basic elements. Admittedly, Democritus’ theory of atoms as the smallest components of matter did not gain much of a hearing in his own lifetime, but the theory returned in the seventeenth century and in the modern era has emerged as the dominant model for understanding the nature of reality. Greek philosophy culminated in Aristotle’s theory of the four causes of change, of which the final cause was regarded as the most important, a ranking that elevated teleological explanatory models above all others. The result was a conceptual model that gained widespread currency and assumed a dominant position within the medieval university. The break with Aristotle during the Scientific Revolution was justified on the grounds of his relative lack of interest in something that was becoming increasingly important: experimentation and mathematical laws. It could be said that a crucial turning-point in the breakthrough of modern science came when teleology was finally abandoned in favour of causality. Yet the influence of classical antiquity lived on, and during the past century teleological thinking has returned and had a renaissance of its own. It should likewise be remembered that the resources that made the breach with Aristotle possible were also largely derived from antiquity and its thinkers’ reflections upon the relationship between the various principles underpinning the diversity of the world.

It is also possible to recognize in classical thinking some other precursors of later conflicts within scientific theory, as can be seen articulated in the clashes between the “rationalist” thinking of Parmenides and Zeno and the “empirical” observations and interests of Democritus. We can likewise find classical models for the logic of scientific argumentation and an anthropocentric notion of human beings as the “measure of all things.” Classical philosophy also featured a skeptical tradition upon which later thinkers could build, as well as lines of thinking that anticipated both realism and relativism. The theology and temporal phenomenology of Augustine offered important intellectual models of the human soul and identity, which in the following centuries would inspire a philosophy of the subject and a humanistic reflection in which human beings (read: the soul) occupied the central position.

We should not underestimate the importance of medieval theology as an inspiration for modern scientific thinking. While philosophy (and what we will later call science) admittedly became a “servant” of theology during this period, it nevertheless continued to develop (although it should be noted that thinkers such as al-Farabi, Avicenna, and Averroës enjoyed considerably
greater autonomy in the Muslim world than was allowed in the Christian world at that time). Medieval scholasticism shaped the intellectual milieus of the church, the cathedral school, and the university. Ockham’s Razor (a rule of thumb which holds that one should avoid unduly complicating lines of argument and instead choose simpler explanations) represents an early precursor of, and key inspiration for, the analytical clarity and argumentative stringency that have become the ideals of modern scientific thinking.

The great medieval philosophical battles over the concept of universals—the clash between nominalists and (conceptual) realists—represent positions that to a large degree anticipate the problematization by modern scientific theory of the interface between ourselves and the world, together with various aspects of the function and role of language in the tension between empirical realism and rationalist anti-realism. In this clash over the nature of language and reality, the great intellectual battle of the Middle Ages between the advocates of the Via Antiqua (Thomas Aquinas and Duns Scotus) and the Via Moderna (William of Ockham), the need to think through what knowledge really is (epistemology) was connected to fundamental questions about the ultimate nature of existence (ontology). It is easy to spot how positions adopted at a far later date were inscribed into the basis of the so-called debate on universals, which turned on whether universal concepts are real or merely serve as names that create order—whether things and their existence are independent of human beings (and can thus be said to have the character of “discoveries”) or dependent upon them (and thus have the character of “inventions”). It reminds us that many of the underlying structures of the tension between deduction and induction, and for that matter rationalism and empiricism, had already been addressed and elaborated on during the Middle Ages.

Altogether, this supports the view that a scientifically minded person must necessarily also know something about classical antiquity and the Middle Ages. But I will leave to others the task of presenting a more detailed account of this history.
CHAPTER 6

TECHNOLOGY OPENS UP—AND MATHEMATICS EXPLAINS—NEW WORLDS
There can be no knowledge without human beings. Science requires people, and scientific progress needs talented people. Particularly in a time like the present, when we are completely saturated with information via the internet and digitalization, it can be tempting to overlook the fact that information cannot be transformed into knowledge without the active involvement of people. Yet this extraordinary information system also reminds us how technology has increasingly become an integrated component of all scientific work. In the past, too, various forms of technology and their applications have almost always been involved whenever scientific progress has been made.

There can be no knowledge without people—but people are not enough. In order for people to be able to make truly ground-breaking advances in science, they also need technology. Discoveries presuppose inventions, not merely cognitive inventions in the form of theories but also technological inventions of every imaginable kind. In light of this fact, it is perhaps not so strange that multi-talented geniuses should have abounded during the Renaissance and the Scientific Revolution—and that many of them were also “technicians” and “mechanics.”

We sometimes refer to “Renaissance men,” and usually we have in mind individuals like Leonardo da Vinci (1452–1519), a polymath who, seemingly effortlessly, combined being an outstanding artist with achieving extraordinary feats as an inventor, engineer, architect, natural scientist, and mathematician, even as he dabbled in music and philosophy. Leonardo created—and experimented with—a world in which now iconic paintings such as the Mona Lisa and The Last Supper sat alongside stunningly prescient models of parachutes, aircraft, and engines of war. It is no exaggeration to say that he was a Renaissance man, a true genius whose “name and fame will never be extinguished,” as Giorgio Vasari already predicted in the mid-sixteenth century.111
Science is preceded by technology

There is a long-established and prevailing tradition of narrowly identifying science with abstract theories in contrast to practices. Given the strong grip that this theory-practice model has had on Western thinking about knowledge, it is tempting to regard science as pure theory, which undoubtedly results in practices (and these can in turn have social effects) but whose true essence is assumed to be purely theoretical. This dichotomy between theory and practice has its equivalent in the dichotomy between science and technology. Such an understanding of technology is closely aligned with a Platonic view of science as a series of propositions, a conceptual and rational system wholly detached from social and material relations. And yet to proceed on the basis of such an idealistic and ahistorical understanding of science, as an abstract, incorporeal phenomenon without any situatedness whatsoever, is to ignore not only the problematic of perception but also the technological preconditions for science as well as its institutional situatedness. This one-sided focus on theory not only risks suppressing the importance of human action and concealing the complex relationship of theory and practice that is a hallmark of the development of knowledge; it also risks obscuring the fundamental importance of technology for science and the development of knowledge.

A longstanding and dominant view of technology as the product of prior scientific developments has often had the effect of reducing technology to applied science. Scientific theories come first, it is imagined, followed by technology and then technical applications. The significance of technology has thus become a purely instrumental question of how to make use of scientific advances. However, to understand the relationship between science and technology in this way is to close our eyes to the technology necessary for science, or what Don Ihde has called the instrumentation of science.112 If we view technology solely as the result of advances in theoretical science, a technical application that may admittedly have social consequences but only in accordance with the causal chain of science—technology—social impact, then science essentially becomes an abstraction, a series of propositions, a conceptual and rational system liberated from social contexts and material relations. The picture that emerges is of a disembodied science, lacking context and unconnected to either perception or technology. Yet this model of understanding science and technology ignores the fact that technology has often been an utterly crucial condition for scientific progress. Historically speaking, the development of science has been just as dependent on technology as the development of technology has been dependent on science. Indeed, it is no exaggeration to say that
“science owes more to the steam engine than the steam engine does to science.”113

Modern science differs from the “science” of antiquity in always appearing to be a phenomenon that is technologically and institutionally embedded. Instrumentation itself therefore plays a decisive role for the interface between ourselves and the world within modern science. And, as we have already noted, it is not primarily science that has given rise to technology—technology and the use of technology typically precede scientific development, historically as well as ontologically. If we are to take science seriously, then, we will have to account for the presence and productive significance of technology in scientific work.

Furthermore, technology and the use of technology have a centrifugal effect, as can be seen with particular clarity in the case of printing, which must be understood not as an isolated phenomenon but as something of decisive importance within a large number of different areas. Typically, however, the importance of Gutenberg’s invention is seen as confined to its capacity to reproduce text, with no consideration of the crucial significance of this information technology for the reproduction of things like images and diagrams, materials that had proven extremely unmanageable during the age of manual reproduction. With the rise of the printing press, it suddenly became possible to illustrate works of anatomy, botany, and zoology using images, maps, and diagrams.114 Printing was also of crucial importance for the development of a monetary economy, for the emergence of national languages in the space between local dialects and the universal language of Latin, for linear thinking in modern science, for pedagogy, and for the development of freedom of expression, newspapers, and a political public sphere—not to mention the fact that the printing trade itself became one of the fastest-growing industries in an emerging capitalist economy.

These insights into the centrifugal effects of printing, together with its transformative power in many different areas of society, should encourage us to think further about how our own era’s information technology revolution and the advent of the Internet Galaxy will fundamentally transform our society and the conditions for doing scientific work. We need to consider how the writing process for scholars today has been integrated into an information system by various kinds of software and the extraordinary opportunities for interaction with other scholars offered by the Internet; the ambiguous function of excessive information, offering broader perspectives but also creating confusion; the opportunities for copying other researchers’ work as well as for exposing plagiarism; the possibility of reaching a global audience and the concomitant danger of one’s publications simply drowning and disappearing in the flow;
and so forth. In other words, the omnipresence of information technology in the contemporary scientific world forces us to view technology as more than a result of science—an insight that is underscored by growing concerns about the effects of the digital information system, AI, and ChatGPT.

In the first half of the twentieth century, a new philosophy of technology emerged that reintegrated our understanding of science into the context of human action while also emphasizing the reciprocal interaction between technological and scientific development. In the same way as science and knowledge must be understood as always appearing first as practices before becoming theory, within the framework of a complex dialectic between the theoretical and practical dimensions of knowledge, so, too, has technology just as often been a precondition for scientific progress as it has itself developed with the help of science. In an era defined by technoscience—when science is to a very great degree embedded in technology and when we “read” the world through technology—it has become increasingly common to talk about science as driven by technology as well as technology as driven by science, which underscores how closely interwoven the two are.115

This ambition of contextualizing science has had a tremendous impact and has been elaborated by philosophers with widely differing perspectives, including Popper, Lakatos, Toulmin, Polanyi, Fleck, Kuhn, Ihde, Latour, and others. However, this focus on contexts should not detract from the researchers themselves or obscure the contributions of individual scientists. Rather, it means that recognition of these people needs to be situated within a social and historical context. Science is about human action, but human action does not take place in isolation or without technological mediation. Today, the actions of scientists, too, are almost invariably embedded in technology. In some fields of knowledge, scientific development takes place by means of interactions within huge knowledge organizations and networks, both physical and virtual, which presupposes the thoroughgoing involvement of various forms of technology. Technology and science go hand in hand. Modern scientific thinking is unimaginable without technology. Technological mediation is not limited to disciplines such as the natural sciences and technology but extends to the social sciences, the humanities, and theology.
Technology opens up new worlds

Modern scientists have often expressed a sense of awe and wonder at Leonardo da Vinci’s advanced technical sketches and mechanical models. Yet it is Galileo Galilei who, more than any other individual, has come to embody the importance of technology and instrumentation for broadening
our perception and making possible new discoveries. In retrospect, and in light of a history shaped by anachronisms, we easily forget that researchers such as Nicholas Copernicus and Tycho Brahe—who, despite having vastly different worldviews, were crucially important for what has come to be called the Scientific Revolution—had no access to advanced instruments such as telescopes. Instead, they had to content themselves with simple measuring devices when observing heavenly bodies and movements in the firmament. This placed immense demands upon the capacity of the naked eye. As Lawrence M. Principe remarks in his study of the Scientific Revolution: “Tycho had been the greatest naked-eye observer; he was also among the last.”116

Then Galileo Galilei came along. It was during a visit to Venice that Galileo got wind of a new Dutch invention that involved putting two polished lenses in a special relation to each other, thereby bringing the object far closer to the viewer—the telescope was born! The telescope radically extended the range of human perception, thus also making it possible to defend the Copernican worldview. This invention made possible, in turn, a rash of new discoveries. The telescope that Galileo installed at the University of Padua suddenly revealed the existence of mountains on the moon and dazzled observers with the moons of Jupiter. As a result, long-established truths could be called into question or dismissed out of hand—at the same time as it became possible to defend new ideas more effectively. The invention of binoculars, the telescope, and the microscope would fundamentally transform science’s ability to explore both macrocosm and microcosm—to enlarge the world and simultaneously to shrink it. Lennart Hultqvist speaks of “the cleverness of how just a few glass lenses, when combined correctly, can open hitherto invisible worlds, both immense and minuscule.”117

Science presupposes technological development and the use of techniques, just as discoveries presuppose inventions. As a matter of fact, neither the Renaissance nor the Reformation, nor even the Scientific Revolution, would have been possible without the technological inventions that came to light during this period. The use of various new instruments created previously unimagined opportunities for creating new knowledge.118 The many technical inventions that were made during these years not only included the telescope (making it possible to observe previously unknown worlds in the macrocosm) and the microscope (enabling the study of previously unknown worlds in the microcosm, such as insects, blood, and water) but were also of decisive importance in the development of new ship designs (allowing people to sail to other continents, which required in turn the use of technical
innovations such as the compass and maps) and the invention of the scalpel (enabling the examination of human and animal innards).119

The innovations that rapidly increased opportunities for investigating the world, unseating received knowledge, and opening new worlds, conceptual as well as physical, also included printing, which we have already referred to on more than one occasion. The fact is that Gutenberg’s discovery of the printing press with moveable type was of such all-encompassing importance that the past half-millennium has sometimes been described as the Gutenberg Galaxy. In order to understand the scale of the transformation that printing brought about both in the world and in human thought, we need to exert ourselves to see the ways in which the various processes that evolved during the sixteenth and seventeenth centuries interacted with and profoundly reinforced each other. It was not only that humanism’s revitalizing of interest in classical texts received an extra stimulus from this information technology revolution. There was also a close internal connection between the rise of humanism, the discovery of the printing press and moveable type, the discovery of the New World, and the various reformations within Christendom. These technological developments were accompanied by a flood of information about new worlds, new planets, new plants and animals, new peoples, and the new minerals and suchlike that resulted from the discovery of new continents.

Religious change and technological development also lie more closely together than we often imagine. A reformer like Martin Luther lived in a world that was anything but an information technology vacuum. On the contrary, he was a key figure in the contemporary information technology revolution by virtue of being one of the world’s best-selling authors in the 1520s. Luther himself was also clearly delighted by the opportunities that the new technology presented, and he was genuinely fascinated by how his books were printed and distributed on a mass scale. The information technology revolution was also a key reason why his Reformation could not easily be crushed. Ultimately, it was printing that saved Luther from suffering the same fate as Jan Hus, who a century earlier had sought to reform the Church, only to meet an ignominious end at the stake in Konstanz in 1415. Now that printed literature was being widely distributed, simply getting rid of dissenters was no longer enough to suppress their message, because their books continued to be spread across Europe and could speak in their authors’ stead. Although censorship was developed early on as a way of regaining control, there was now ultimately no way to prevent new information from being disseminated. The new information technology destroyed old authorities, disseminating power and influence in new and
unexpected ways. Its consequences were both far-reaching and long-lasting: texts created better conditions for reading and for spreading new thoughts and ideas.

Another property of the printed book is that it has a clear originator in the figure of its author. Printing can therefore be said to have contributed very substantially to a contemporary turn towards the subject. The reproduction of books on a mass scale fostered the emergence of the material conditions necessary for the development of distinct authorial identities, and the increased opportunities for reading gave rise to the reading subject, which ultimately served to intensify the individualizing impact of humanism. The mass distribution of books led to the emergence of the Author, hitherto a less distinct figure with an unclear status in an era dominated by a culture of manuscript copying. Contrary to popular belief, Luther was far from being either a modern figure or an advocate of some kind of liberal individualism. But as a key figure in the Gutenberg galaxy, by virtue of being one of the era’s most successful authors, he gave impetus (unconsciously and indirectly) to a movement that would eventually assume a critical importance for the emergence of modern individualism.

Reading had the same powerfully individualizing effect. Previously, reading had been a deeply social phenomenon, but the spread of printed books created new conditions for the silent and private reading that had already been developed within the framework of the monastery. As private silent reading became a more general phenomenon, the conditions were also created for the masses to develop individuality. In sum, the importance of the Reformation for the modern turn towards the subject had less to do with theology than with technology—the “Reformational technology” of printing.
When “new” mathematical explanations were drawn from the past

One way to describe the fundamental shift in thought that usually goes under the rubric of the Scientific Revolution is to talk about a transition from an Aristotelian form of knowledge, in which the object in the world was defined in its context and in relation to other objects on the basis of multiple explanations within the framework of an overarching view of the world as serving a purpose or end (telos), to a Galilean model, which sought to explain the world causally, in terms of cause and effect. This shift from teleology to causality created new opportunities for studying the logical underpinnings of the world—and suddenly the world became a book that
could be “read.” The possibility of explaining the machinery of the world on the basis of mechanics also gave mathematics a brand-new, decisive role. The Scientific Revolution marked the start of an extraordinary renaissance for mathematics. There were considerable advantages to a strictly mechanistic view of nature, since it offered an entirely new degree of comprehensibility and predictability. And from here it was only a short step to viewing the whole world as a machine.

But even this “new” way of using mathematics to describe and explain complex causal relations contains an element of nostalgia. The search for mathematical connections involved not just a progressive movement; it actually led scholars back in time, towards history, because this newly awakened interest in mathematics reconnected them to a long tradition, extending as far back as antiquity, in which philosophers had been fascinated by how mathematics enabled people to discern patterns and larger contexts in the world. If we are to properly understand how mechanical thinking arose, we should therefore take note of the fact that Copernicus, Kepler, Galileo, and Newton were actually part of a far older tradition, a history going back to Pythagoras that can be traced even further back in time to the natural philosophers of the Ionian School, who, instead of discussing changes in terms of a balance between the four elements, chose to describe the world in mathematical terms.

But the idea of mathematical numbers as the ultimate substance, form, and process in the universe also revealed important links to music and mysticism. These connections were to fall into obscurity and will no doubt seem eccentric to many. When we today classify mathematics (for the most part hastily and unreflectively) as a natural science, we should also remember that in antiquity the perfection of heavenly bodies was considered a kind of pure music—a “music of the spheres” which could be accessed through a mystical affinity.120 It was this two-thousand-year tradition, from the Pythagoreans and others, which was revived during the Scientific Revolution by thinkers such as Copernicus and Kepler, when the universe began to be perceived in terms of simple mathematical relationships. For contemporaries, it seemed possible to identify relations based upon geometrically perfect harmonic numbers. When Descartes formulated his universal mathematics, and Galileo used numbers instead of letters as mathematical formulas representing measurable quantities according to a mechanical worldview, they turned backwards in time and took their inspiration from the past. Here we are once again confronted by the paradoxical phenomenon that what at first looks like an unambiguously forward-looking movement turns out to involve a movement backwards in time—in terms of a reactivated tradition.
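To make concrete what this kind of mathematical description amounted to, consider the law of falling bodies that Galileo established, stated here in modern notation rather than in his own geometrical idiom: the distance fallen from rest grows with the square of the elapsed time,

$$d = \tfrac{1}{2}\,g\,t^{2}, \qquad \text{i.e.}\ d \propto t^{2},$$

from which follows Galileo’s famous odd-number rule: the distances covered in successive equal intervals of time stand in the ratio $1 : 3 : 5 : 7 : \ldots$ The Aristotelian question of why bodies fall is here simply set aside in favour of a quantitative law describing how they fall.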
Only in the eighteenth century did this nostalgic temporal regime begin to be seriously challenged. It was at this moment that the consequences of this U-turn in Western thought became clear: from having moved backwards into the future, humanity now began to turn its back on the past and look to the potential of a future that lay ahead. Henceforth, the good life would be regarded, not as the recovery of a golden past, but as something yet to be created by human actions, in a time still to come, in the form of something that would surpass all previous attainments. It was this progress-oriented idea that informed the American and French revolutions, which were borne along by the winds of liberty, by a feeling of holding the rudder of history, of having the future in one’s own hands. This strong belief in the future was often associated with the necessity of making a drastic break with the past.

During the Scientific Revolution, however, people’s relationship to different temporal regimes still remained ambivalent. This ambivalence may also have been productive, since it is possible to see the complex interplay between tradition and innovation as one of the key factors in why the advances made during this period became so significant that we have come to refer to the seventeenth century as the Scientific Revolution.

This powerful nostalgia, in combination with an obvious pessimism about the future, also represents a key context for understanding Isaac Newton, who described his own ground-breaking contributions in terms of a metaphor that has since become famous—he had been “standing on the shoulders of giants.” In relation to its original context, this statement is anything but forward-looking; on the contrary, it should be seen as an expression of nostalgia, a romanticizing of a past golden age of knowledge whose achievements seemed literally to dwarf those of the present.
Sources of mathematization: accounting, art, architecture

As has already been mentioned, the impulse to mathematize the world came not only “from behind,” in history, but also “from the side,” in the form of theories and models that “wandered” between different fields. Many people today would be surprised by the powerful ties that existed during the Renaissance and the Scientific Revolution between the mathematization of research on the natural world and accounting. It was not merely that many Italian mathematicians supported themselves by bookkeeping; there was also an internal connection between how abstract numbers were used to render complex concrete relationships in both economics and nature. Historically speaking, mathematics and bookkeeping have often developed
hand in hand. It is easy to imagine the fascination that must have been excited by the discovery that a concrete world of supply and demand for silk, cotton, and sugar could be made mathematically “legible” by double-entry bookkeeping, which applied the same highly developed capacity for abstraction that characterized the new scientific knowledge inspired by mathematics.121 Yet again we see an area in which science and economics are intertwined.

But there was another important origin for the mathematization of the world during the Renaissance, namely a resurgence of interest in the geometric principles that informed the use of perspective in painting and architecture. For example, Brunelleschi’s creation of the “vanishing point”—the oculus or zero-point of graphical perspective—suddenly made it possible for artists to occupy two radically different locations in a simultaneous straddling of the finite and the infinite.122 In the application of geometry and optical illusion to art and architecture, too, mathematics thus became a key for opening undreamt-of opportunities for discovering and inventing new worlds.
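The geometry behind the vanishing point can be stated compactly in modern terms (a reconstruction in today’s notation, not that of the Renaissance treatises on perspective). With the eye at the origin and the picture plane at distance $f$, a point in space $(X, Y, Z)$ is depicted on the canvas at

$$x = \frac{fX}{Z}, \qquad y = \frac{fY}{Z},$$

so that all lines running parallel into the depth of the scene appear to converge, as $Z \to \infty$, on one and the same point: the vanishing point. An infinitely deep world is thereby captured at a single finite point on the canvas, which is precisely the simultaneous straddling of the finite and the infinite described above.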
The revolt of the mathematicians against the philosophers

This renaissance in mathematics was far from uncontroversial. On the contrary, the Scientific Revolution of the seventeenth century was in many regards defined by a conflict between philosophy and mathematics. In some sense, the mathematization of nature by naturalists represented a revolt of mathematicians against the authority of philosophers—with the important qualification that a single individual could easily play both roles at the same time. The combination of mathematization and experimentation became the hallmark of the emergent research paradigm and its quest for universal, directly corresponding observations. This was a culture of knowledge that differed fundamentally from both philosophy and theology. The developmental logic of the Scientific Revolution can be described, in David Wootton’s words, as “a successful rebellion by the mathematicians against the authority of the philosophers, and both against the authority of the theologians.”123 In the years around 1650, a mechanistic approach to the natural world, characterized by a mathematical, rational, experimental, and instrumental view of knowledge (often inclining towards utilitarianism), definitively broke away from philosophy—even as aspects of philosophy also underwent a shift as philosophers like Leibniz aligned themselves with the ideals of these new fields of knowledge.

But mathematization had ontological implications, too. The Scientific Revolution can be described as a transition from organic metaphors to
mechanical explanatory models. During antiquity, the Middle Ages, and well into the Renaissance, an organic worldview had prevailed, but after the turn of the seventeenth century this began to be replaced by a mechanical worldview. In the wake of a succession of scientific luminaries that included Kepler, Galileo, Bacon, Hobbes, and Descartes, the intellectual world began to move away from the Aristotelian mode of knowledge, which defined objects in terms of their context and relationship to other objects, offering in its place empirical observations and experiments (that presupposed a causal logic) within the framework of a mathematized understanding of reality.
A more complex picture

Galileo has become the pivotal figure in the mathematization of explanatory research models, which came to define the Scientific Revolution and which was expressed in a general tendency to reduce quality to quantity. It might be said that Galileo and Aristotle have come to emblematize the opposites in the recurrent battle between figures and letters, which has been fought out in scientific arenas whenever, as is often the case, quantitative research ideals are presented as wholly incompatible with qualitative methods.

At the same time, it is important to realize that this picture is not black and white. As we have already seen, it has often been possible to combine both roles in one individual. And even the linguistic forms chosen by these writers when presenting their pioneering ideas in many regards balance out what can initially look like an unwavering tendency towards mathematical models. Following in a long tradition that goes back to Plato, Galileo’s book about the two world systems (the Dialogue Concerning the Two Chief World Systems, 1632) takes the form of a dialogue in order to tease out and communicate the complexities. This implies an understanding of science that goes beyond unilateral propositions to include constellations and complex negotiations. The implicit presence of qualitative dimensions, both in and through the very form of scientific representations, opens onto a more dynamic understanding of the relationship between letters and numbers. In some sense, there is a connection here between the more rationalistic scientific project of the seventeenth century, as represented by Descartes and Galileo, and the more “carnivalesque” science of the sixteenth century, as represented by thinkers like Erasmus and Michel de Montaigne.124

Deepening the complexity of the Scientific Revolution, which was actually an event that took place across a timeframe of seven generations, is the fact that it involved three different worldviews that to varying degrees clashed and intermixed with each other: an Aristotelian-scholastic worldview
(minimalistic, skeptical of reason), a Platonic-hermetic worldview (committed to reason but with a focus on occult, hidden properties), and a mathematical-mechanistic worldview that was also materialist.
The culmination of the Scientific Revolution: Isaac Newton!

The entire development that bears the (anachronistic) name of the Scientific Revolution—and whose long succession of scientific luminaries, including Copernicus (if he is to be included), Brahe, Kepler, Descartes, Bacon, and Galileo, ushered in a new worldview—culminates in the polymathic genius Isaac Newton (1642–1727). In an instance of historical serendipity, Newton was born in the year of Galileo’s death, almost as if a baton had been passed when Newton brought the new scientific thinking into the eighteenth century and established a worldview that would hold sway for the next three centuries. It has been said of Newton that he was the kind of person who is born only once a millennium. Be that as it may, it is no exaggeration to describe Newton as the most famous natural philosopher of the modern era.

Newton’s physics was exemplary of the new science then emerging, as was the fact that he worked inductively as well as empirically. This meant that he no longer asked why—in an Aristotelian and scholastic spirit, as if the world possessed an inherent goal (telos)—but focussed instead on how the laws of a mechanistic universe might be explained by means of mathematical equations. Newton’s mechanistic thinking is usually summarized in the three laws of motion and the law of gravity. It involves explaining motion by means of the following three laws: first, that an object remains at rest or continues to move at a constant velocity unless acted upon by an external force; second, that changes to an object’s motion are proportional to the force being exerted and act in the direction of that force; and third, that every force is always countered by an equal and opposite force. These were supplemented by a law of gravity, which held that all particles in the universe attract each other. Together, these fundamental principles definitively established the world as a realm governed by laws.

Newton has had an unparalleled importance for the development of classical physics and for scientific development generally. Even so, it should be noted that the impulse and the inspiration for this new direction in thought did not come from the academic world of the universities. In fact, Newton was essentially self-taught in areas such as mathematics, optics, and experimentation, even if he did take his degree at Cambridge, where he was later appointed professor of mathematics in 1669. Barely twenty years later, he published his Principia (1687), whose contents he subsequently spent
decades laboriously elaborating. The Principia is in many ways a manifestation of the new, mechanistic natural philosophy, for which reason it has been called “the greatest achievement in the history of science.”125 Newton’s improvements to Copernicus’ theory were so comprehensive that we might almost speak of a “Newtonian revolution” in mechanics. Where Aristotle had explained change teleologically, on the assumption that all objects seek their natural state, Newton effectively did the opposite by showing that things continue to move in the same direction if no external force is exerted upon them. The same phenomenon was thus accounted for by different theories; that is to say, two mutually exclusive viewpoints confronted each other, and a teleological (goal-directed) way of thinking was replaced by a mechanistic (causal) one. This new scientific thinking advocated the most universal and simple explanations that accorded with observations as well as with the principle of utility. The latter was important, since even Newton himself was often prepared to reject what was philosophically reasonable in favour of what was empirically productive. Once again, we see how utility should be considered a factor at work in scientific thinking.
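The three laws and the law of gravity summarized above can be stated compactly in modern vector notation (a later formalization; Newton’s own presentation in the Principia was geometric):

$$\mathbf{F}_{\mathrm{net}} = \mathbf{0} \;\Rightarrow\; \mathbf{v} = \text{constant}, \qquad \mathbf{F} = m\mathbf{a}, \qquad \mathbf{F}_{AB} = -\mathbf{F}_{BA},$$

supplemented by the law of universal gravitation,

$$F = G\,\frac{m_{1}m_{2}}{r^{2}},$$

according to which any two particles attract each other with a force proportional to the product of their masses and inversely proportional to the square of the distance between them.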
A completely law-bound world

Taking his point of departure in the machinery of the world—the world conceived of as a giant clock—Newton put the language of mathematics at the heart of a world that was primarily to be comprehended in terms of causal relations described through mechanical metaphors. Modernity and industrial society would come to be dominated by this process of disenchantment, which robbed the world of all mystery and magic and detached humanity’s fascination with time from its conception of nature, a theme that would later be explored by Max Weber and others.

Newton’s conceptual models for the laws of nature would fundamentally shape modern science, and questions were soon asked about whether other areas of life might also be governed in the same way by laws. Mechanical metaphors were not only pedagogically lucid and simple; they also held out the possibility of predicting and even affecting or directing the future. The idea that heavenly bodies were governed by laws thus set in motion a search for natural laws in a range of other fields, including medicine (Julien Offray de La Mettrie’s book L’homme machine [Machine Man]); psychology (John Locke’s development of a model of psychology whose structure was inspired by Newton’s laws); ethics (the special significance accorded to physics for modelling moral philosophy, as in Immanuel Kant’s conceptualizing of the nature of freedom in terms of a “moral law”); and the social sciences (Auguste Comte’s identification of societal laws, which
he also referred to in terms of “social physics,” and Max Weber’s discussion of bureaucracy as a “machinery of regulations”).

Another area in which people began looking in earnest for laws of nature was the realm of economics. It was not much of a conceptual leap: if there exist laws in nature, then there ought to be laws and a kind of natural mechanics in economics and social systems. Furthermore, in this area observers were disposed to find close connections between explaining and predicting phenomena and processes. When Adam Smith used the worldview of physics as a model for the underpinnings of economics, he did not have to look far for analogies: if economics can be said to work like a mechanical machine, then the force of gravity could be considered the equivalent of a self-interest that was logical, rational, and predictable.

However, it is not difficult to identify the problems entailed in drawing over-hasty analogies between physics and economics, as can readily be appreciated if we recall that markets, in contrast to electrons and cells, are composed of human beings, who think, feel, and are affected by other people and by their environment, and who are, above all, influenced by the way we describe and refer to this reality. Ostensibly objective descriptions of economic relationships are never innocent; they will always affect the way economic actors behave. Financial markets are not mechanical machines or fully rational systems populated by homo economicus; they consist of people, who cannot be considered in isolation as straightforwardly rational individuals because they also behave collectively and in ways that are determined by their emotions. Amartya Sen, Daniel Kahneman, and others have argued that people are more inclined to avoid risks than to maximize their self-interest, and that human societies in general could not function if people were driven solely by greed, fear, self-interest, and rationality.126

The idea of economics as a machinery that mathematics allows us not merely to understand but also to predict continues to exert a strong appeal today. It also confirms the exemplary power of Newtonian science. But Newton’s own theories were, from the very beginning, already part of a more complicated history, one which unfolded in a way that does not match our image of linear scientific progress. This much is clear from the fact that it took so long for his theories to be accepted. The idea of gravity as operating at a distance initially seemed quite impossible, given the prevailing view—inspired by Descartes’ way of drawing a line of demarcation against magic—that change, in the mechanical worldview, required direct contact in the form of kinetic relations. In this situation, therefore, there were those who saw Newton’s theory of gravitation as an attempt to reintroduce the very magical thinking that science had just managed to
dissociate itself from. It was thus hardly surprising that his ideas initially met with skepticism, that the anticipated breakthrough did not happen, and that it took a long time before they gained general acceptance—and in the next phase, Albert Einstein would prove that several of Newton’s assumptions were, in fact, incorrect ... And so it continues. It is tempting to say that science takes giant steps both forwards and backwards!
Scholars also have unsuspected qualities

There are aspects of Newton’s thinking that still seem very modern to us—and others that even his contemporary supporters found hard to understand. Newton is also a good example of how scholars in the past had no difficulty reconciling positions and attitudes that are now considered not merely incompatible but downright compromising when combined.127 Without detracting in the least from Newton’s extraordinary advances, it is easy to recognize the complicated contradictoriness of both his position and his personality. Viewed from the present, the fact that he spent as much time on alchemy and theology as on natural science and mathematics can seem rather curious. Philosophers of science have frequently chosen to overlook Newton’s profound interest in practical alchemy, an area in which he conducted experiments extensively. But even his contemporaries and, later, his disciples had great difficulty dealing with the fact that Newton wrote thousands of pages about ecclesiastical history, prophecies, and heretical movements, and—as if this were not enough—he even published commentaries on two of the most complicated and controversial books of the Bible: the Book of Daniel and the Book of Revelation.128 A deep ambivalence is also visible in the fact that he produced strictly chronological proofs that the earth had been created in the year 4004 BCE—despite also having calculated that if, as he accepted was the case, the earth had originally been a red-hot sphere, it must be at least 100,000 years old.129

How are we to make sense of this? If we recall that mathematics, in Newton’s eyes, represented the language of nature, it may be easier to understand why he saw it as the task of natural philosophy to reveal the laws governing the universe. If we then combine the machine metaphor of the mechanical worldview with the notion that God, in his capacity as “the great mathematician,” has given humanity mathematics precisely so that it can understand his creation, then it also becomes possible to recognize how intimately the different roles in Newton’s life were connected.

Given all this, it also becomes possible to understand how his interest in the Bible and other ancient texts was bound up with his ambition to reveal hidden codes and meanings. Newton, who grew up in a time of religious
conflict in England, abhorred sectarian controversy and sought to develop an open and tolerant form of religion. He professed himself an adherent of what was called Arianism, a heretical interpretation of Christianity that had been rejected—erroneously, as he saw it—by the Early Church. Newton was convinced that the church had falsified and concealed the original meaning of the Bible, and his intense interest in the Bible was intended to identify its true meaning. His intensive study of ancient texts was thus driven by a desire not to discover something new but to rediscover something that had been concealed long ago. He was convinced that Plato, Pythagoras, and Moses had all shared both the worldview described by Copernicus and an understanding of gravity.

In the same way that his studies of the Bible were meant to recover knowledge that the church had suppressed, concealed, and condemned, his interest in alchemy was driven by an ambition to recover a lost corpus of knowledge about the laws of nature and their inner secrets. In accordance with the prevailing “two books” metaphor—that God had written two “books,” the book of scripture and the book of nature, which humanity ought to study as closely as possible in order to extract information about the deeper patterns hidden in the world—Newton’s interest in the thinkers of antiquity was driven by a desire to rediscover the secrets of nature and the structure of the world in order to be able to explain and predict its course.
The institutionalization of science

Another dimension of the instrumentation and necessary technology of science relates to the institutionalization of science itself. It is hardly an exaggeration to say that the new project of natural science seemed to be the severest critic of the university, the knowledge institution of the Middle Ages. Nor had the new natural philosophy emerged from that preeminent knowledge institution, which was governed by a very different, teleologically and qualitatively oriented culture of knowledge. The creative scholars who were driven by the force of the dialectic of discovery and invention, and who set in motion much of what would eventually become modern scientific thinking, thus had to build up their own networks and institutions alongside the university. This took a number of forms, including new ways of organizing themselves, the emergence of scientific academies and societies, and the development of new modes of communication centered on articles published in scientific journals.

The English statesman Francis Bacon was a paradigmatic figure in this process of developing organizational forms for the new science that would make it socially beneficial. What strikes the modern reader of Bacon’s
posthumously published scientific utopia, New Atlantis (1626), is not merely the lingering elements of magical and mystical thinking that are its hallmarks. More striking still is the prominence of a pious and deeply theological discourse, particularly in the first half of the book. That Bacon moves in a world that takes religion for granted is apparent early on, when the scientific institution upon the island of Bensalem is variously called "Salomon's House" and "The College of the Six Days Works" and described as occupying itself with "the Study of the Works and Creatures of God."130 This (surprisingly religious) introduction is followed by a summary of a long series of apparatuses, located in caves, houses, lakes, and mountain tops, that researchers use to observe, experiment, and manipulate the various processes of nature, all with the goal of benefitting society. In other words, creating knowledge is no longer about carrying on an established tradition but, rather, about discovering and inventing new things "for the benefit and use of life." Bacon describes a research institution that is organized for the purpose of developing healthful medicines and more nutritious food, and of controlling the weather as well as darkness and light. The final section of the unfinished manuscript contains a forward-looking description of how these research activities are carried out by individuals working in teams in order to systematically build up useful knowledge. Some collate experiments that have been set down in books, others derive them from mechanical practices. Some try out new experiments, others set down all these experiments in documents and tables. Some check others' experiments with a view to identifying practical applications, so that others in turn can "Direct New Experiments of Higher Light." Some conduct these experiments and report on them so that others can raise their discoveries by experiments "into Greater Observations, Axiomes, and Aphorismes." The collective organizational character of the scientific work represented here is reinforced by the presence of deliberations about what should be published as well as by ceremonies and rites that take the form not only of galleries and statues of prominent scientists but also—and the religious language reappears at this point—of "certain Hymnes and Services, which we say dayly, of Laud and Thanks to God, for his Marveillous Works." Lastly, Bacon discusses the importance of speaking tours for the public dissemination of new and useful inventions, as well as the practice of declaring "Natural Divinations of Diseases, Plagues, Swarmes of Hurtfull Creatures, Scarcety, Tempests, Earthquakes, Great Inundations, Cometts, Temperature of the Yeare, and diverse other Things" and then adding: "And wee give Councell thereupon, what the People shall doe, for the Prevention and Remedy of them." These activities are, in other words, about communicating and implementing research findings. Since the focus is always on utility, it is no coincidence
that the researchers are also expected to give advice about what people should do to prevent and cure diseases and the like.131 Bacon's enumeration includes most of the components and institutionalized practices that we associate with modern scientific thinking: a communicative and peer-evaluative organization; the existence of research reviews, experiments, documentation and protocols, and peer-review processes; the public character of these activities; academic festivities, research exchanges, and conferences; processes for research communication, implementation, innovation, and so forth. In sum, Bacon delivered an outline of the organizational structure of modern science. And this scientific policy program was itself presented as a fierce attack upon the university, as is apparent from the fact that the organization he outlined bore little or no resemblance to how universities operated at that time. In the spirit of Bacon, the new scientific thinking began to draw researchers from different social classes and to build its own institutions, scientific academies, and other associations of researchers, which now sprang up everywhere. The idea of academies had already been realized in differing ways in Italy and France during the sixteenth and seventeenth centuries, before societies also began springing up in Paris and London. Among those to have survived the ravages of time is the Royal Society—interestingly, initially called "the Royal Society of London for the Improvement of Natural Knowledge"—which was founded in London in 1660 in the spirit of Bacon's ideas and is now the world's oldest national scientific institution (note the anachronism!). Although this scientific society became "royal" in 1662, it operated as a private institution and had no major economic resources of its own. Furthermore, it was organized as a society to which an individual was elected—unlike its French equivalent, the Académie des sciences, whose organization had been created "from above" by Louis XIV and his minister Colbert. Its members were not elected but appointed and given a salary from the state, because they were part of a mercantilist project within the framework of a policy of promoting national prosperity. In 1665, the Royal Society founded the first scientific journal, revealingly titled Philosophical Transactions and thus defined as "philosophical" rather than "scientific." Previously, scholars had communicated with each other by means of letters, which circulated within wide-ranging correspondence networks, but with time the scientific article became the most important form for communicating scientific results. Or, more precisely, what we today call science became increasingly divided into two separate communication cultures: on the one hand, those who wrote books about culture; on the other, those who principally published journal articles about
nature. In the nineteenth century, the institution of the university brought both of these scientific projects under a single umbrella.
Instrumental reason—when reason itself becomes technology

Widening our perspective a little, we could say that where pre-modern scientific thinking was concerned with observation, modern scientific thinking sought to intervene in reality and became increasingly manipulative, and our own era's scientific thinking has largely come to focus on the productive importance of science (in the sense of laboratories and research).132 With the growing significance of technology and the crucial role assumed by instrumentation in modern science, a new instrumental attitude combined with a new, manipulative mentality has come to dominate, particularly with regard to experimentation. As they sought answers to their questions, the researchers who emerged from the Scientific Revolution no longer turned to old texts written by the finest minds of a distant golden age. Instead, they began to say something hitherto unimaginable: "Let us test it!" Instrumental reason, which was developed out of the mechanistic worldview and driven by the imperative of utility, took as its premise that the world was governed by laws in accordance with the logic of causality: its course could therefore be predicted and altered. In 1500, this new attitude was almost unknown to researchers. A century later, it was considered possible but was still relatively new and startling. By 1700, it was taken for granted. The ideas about cause and effect, which Galileo, da Vinci, Bacon, and others had developed in accordance with mechanical conceptual models, had a profound impact in shaping the research ideals of the burgeoning natural sciences, which thus came to be dominated by an instrumental approach. In parallel with this development, however, there emerged a critical tradition that sought to settle accounts with instrumental reason. Many years later, this tradition would culminate with the "philosophical fragments" developed by Theodor W. Adorno and Max Horkheimer in Dialectic of Enlightenment (1944). In this book, they describe the complex contradictoriness that seems embedded in the Enlightenment project and that finds expression in a kind of self-destructiveness, because this instrumental reason tends to transform forward progress into regression, freedom into subordination to nature, enlightenment into myth. The "dialectic" to which the authors refer here denotes a process by which concepts are overturned, transform into their opposite, and finally merge with each other. Beneath this line of argument lies a far-reaching critical
confrontation with the dominant logic of subject-object, which ultimately results in all of nature being made subordinate to an imperial subject. As Adorno and Horkheimer observe grimly, humanity seems to treat nature in much the same way as a dictator treats people. In this light, instrumental reason's supremacy over nature appears as a kind of "blind objectivity," an exercise of control, subordination, and discipline: "[o]n the road to modern science, men renounce any claim to meaning."133 Instrumental reason thus leads to the instrumentalization of our entire world. As Adorno and Horkheimer continue: "For the Enlightenment, whatever does not conform to the rule of computation and utility is suspect."134 In the postwar era, this bleak conception of the essence of technology, with its pessimistic view of where instrumental reason is leading scientific thinking and society, has found many adherents. Late in life, Georg Henrik von Wright (1916–2003) published a series of books that were deeply critical of civilization, in which he invoked Horkheimer and Adorno's famous analysis in order to return to the issue of how the Scientific Revolution ought to be comprehended. Part of what makes these books so extraordinary and provocative is the way that von Wright's narrative of the history of modern science foregrounds the fact that there were three, not two, parties involved in the struggles over the Scientific Revolution: science, religion—and magical thinking. The pioneers of the emergent project of natural science were often deeply religious individuals, who were also seriously interested in magic. When conflicts emerged between these three parties, magic and scientific thinking frequently found themselves on the same side, united in an instrumental ambition of "manipulating" the world that contrasted sharply with religion's more "humble" stance. What made von Wright's narrative even more provocative, and also sparked a lively debate when his book Vetenskapen och förnuftet [Science and Reason] appeared, was his dystopian analysis of the future that awaited a civilization governed by science and technological industrialization: "One possibility that I do not consider unrealistic is that humanity is heading towards its extinction as a zoological species."135 In his writings that criticize civilization, von Wright thus advances the thesis that science, together with the technology that it has developed by means of its dominant instrumental reason, should primarily be seen, not as a remedy for the serious threat posed to our very existence by our undermining of the conditions for life on earth, but as a part of the problem. This perspective, which can also be found in other civilization critiques, has become well established in modern philosophy of science. Although it foregrounds a darker side of science that undeniably ought to be discussed,
the question remains whether this is not an overly one-sided image, given the extraordinary achievements in science that instrumental reason has made possible, and whether it would not benefit from being complemented with a more balanced view.
CHAPTER 7

SCIENTIFIC POLARIZATION AND HUBRIS
Arguments in philosophy of science have far too often been dominated by clearly demarcated, stylized positions within a highly polarized landscape. This kind of division into philosophical "boxes" has the pedagogical merit of helping us to distinguish clear differences and oppositions, but this one-sided approach has also had devastating consequences for our understanding of science. In presenting philosophers and scientists as straightforward representatives of either empiricism or rationalism, either induction or deduction, it has rendered invisible the dynamic connections that (of course) must exist between these attitudes, resulting in the sidelining of the crucial discussion of what comprises a balanced view of the relationship between scientific knowledge and reality. What is more, it has made many scientific thinkers appear slightly foolish. The very threshold of scientific practices in our own era has thereby also become invisible. In order to remedy this, we need to return to a more complex history and draw upon the resources it offers for the development of a more composite approach.
When philosophy came to Sweden

René Descartes (1596–1650) is the only significant figure in the Scientific Revolution with any kind of connection to Sweden—and that contact had devastating personal consequences for him. The French philosopher, who had been based in the Netherlands for reasons of personal security, was generally considered one of the intellectual giants of the age when in 1649 he journeyed north to Stockholm at the invitation of Queen Christina, who was thirty years his junior. The young sovereign and her court wished to learn more about what would today be called the "cutting edge of research." The queen and the philosopher were already well acquainted, having been in correspondence for two years, when Descartes arrived in the Swedish capital in October that year with a view to serving as teacher and adviser to the court.
Far from being a success, the stay ended in disaster and ruin. Descartes could not bear the cold and the dark and hated the unreasonably early lessons which the queen insisted upon having at 5 a.m. in the freezing castle library while the city outside still lay in darkness. During his first—and last—winter in Stockholm, he contracted pneumonia and died from complications in February the following year. Descartes' life tells us something about the world during the period we refer to as the Scientific Revolution. The fact that it took him almost two months to make his way to Stockholm attests to the poor transport infrastructure of this time. But it also upends many of our conventional ideas about science and how knowledge is developed, for example, if we recall that Descartes was not only a natural philosopher and mathematician, who incorporated the existence of a divine being into his arguments, but also combined the role of researcher with that of soldier. For someone trained at the military academy of Maurice of Nassau in the Netherlands and in the Jesuit college of La Flèche, followed by studies in mathematics in Paris and law in Poitiers, there was nothing strange about mixing theology, philosophy, and mathematics with law and military science—subjects that we have since come to view as strictly separate. In fact, it was probably while serving in (what has subsequently been called) the Thirty Years' War that Descartes, in dreams (!) that came to him in 1619, had a vision of the form that a new "science" would take. However, it would be many years before he fully grasped what he had seen and how to interpret what he had experienced in these dreams. He lived an itinerant existence, taking up residence in many different parts of Europe, before he finally settled in the Netherlands. It is no coincidence that it was there he wrote many of his pioneering works, since it was a relatively liberal corner of an otherwise intolerant continent that was difficult to navigate for innovators trying to avoid repression and an untimely death. We find ourselves, in other words, far from the utopian expectations associated with science and research in our own era. It was also partly from a fear of getting into trouble in the Netherlands that Descartes decided to travel north in the late 1640s to become a private tutor to the Swedish monarch.
Inventing an ultimate foundation for knowledge—when nothing is certain

In the figure of Descartes we are confronted with one of the truly great philosophers and role models in the Western tradition. The fact that he lived during (what we have subsequently come to call) a Scientific Revolution meant that he had to navigate a world where much once considered
incontrovertible was being called into question. Nothing seemed certain any longer. If you have been forced to recognize that our immediate sensory impressions can often be deceptive (the sun appears to "rise" and "set" even though it is actually the earth moving), and then realized that, contrary to the tradition of knowledge that dominated the old universities, you cannot unproblematically rely on classical learning in the form of traditional scholasticism (established truths about the movement of bodies, for instance, turned out to be untenable when tested in experiments)—then what and who can you trust? Descartes thus realized that, in order not to be deceived when studying "the great book of the world," a radical approach was needed, in the sense of thoroughly "going to the root" of the problematic. In his own words:

Several years have now elapsed since I first became aware that I had accepted, even from my youth, many false opinions for true, and that consequently what I afterward based on such principles was highly doubtful; and from that time I was convinced of the necessity of undertaking once in my life to rid myself of all the opinions I had adopted, and commencing anew the work of building from the foundation, if I desired to establish a firm and abiding superstructure in the sciences.136
This notion of razing everything to the ground to start from scratch on the basis of a sure foundation is usually called foundationalism, and this ambition has repeatedly figured as an ideal in modern philosophy and science. When Descartes felt that he had sufficiently matured as a person in order to be able to seek wisdom, he therefore chose, tellingly, to isolate himself in his house, spending days on end alone, sitting in his winter coat in a chair in front of the fire, occupied with “the general overthrow of all my former opinions.” Such, at least, is how he portrays his philosophy in his famous Meditations on First Philosophy (1641), in which he begins his search for a solid foundation for his thinking, a first principle. The attitude that led him to this position has subsequently been referred to as a philosophical method defined by “systematic doubt.” But if nothing is certain anymore, if everything can and must be questioned, then how can we know anything for certain? Descartes’ strategy for dealing with this crucial question, which potentially concerns all established knowledge, was to try to find an absolute zero point, a foundation upon which it might be possible to rebuild knowledge. The risk otherwise is that we lose our footing entirely. And it is this unbearable thought that threatens to paralyze him when he begins his second day of reflection in his chair in front of the fire:
The Meditation of yesterday has filled my mind with so many doubts, that it is no longer in my power to forget them. Nor do I see, meanwhile, any principle on which they can be resolved; and, just as if I had fallen all of a sudden into very deep water, I am so greatly disconcerted as to be unable either to plant my feet firmly on the bottom or sustain myself by swimming on the surface. I will, nevertheless, make an effort and try anew the same path on which I had entered yesterday, that is, proceed by casting aside all that admits of the slightest doubt, not less than if I had discovered it to be absolutely false; and I will continue always in this track until I shall find something that is certain. Archimedes, that he might transport the entire globe from the place it occupied to another, demanded only a point that was firm and immovable; so, also, I shall be entitled to entertain the highest expectations, if I am fortunate enough to discover only one thing that is certain and indubitable […] What is there, then, that can be esteemed true? Perhaps this only, that there is absolutely nothing certain.137
The driving force behind Descartes' intellectual project was the question of how to find a "firm and immovable" foundation for his thinking and how this might be established reflexively, that is to say, by means of a methodological doubt that did not shy away from anything. By doubting absolutely everything that there could be the slightest reason to doubt, he hoped to be able to arrive at something that could not be doubted: the ultimate bedrock of knowledge. Because of his deep conviction as to the importance of thinking and reflection, Descartes is usually described as a rationalist. Like a great many of his contemporaries and other figures in the Scientific Revolution, he was fascinated by the possibilities of mathematics, whose implications Descartes pursued as far as he could. In the world of mathematics, truths such as 2 + 3 = 5 are true regardless of the circumstances, yet the mathematician nonetheless needs to rely on something that is connected to reality. Confronted by the dismaying notion that we can be systematically deceived, Descartes was rescued by the certainty that we can nevertheless rely upon our own thinking. He therefore observed that it must be true that I exist, because I am now able to state or think it. This notion causes him to exclaim delightedly:

Thinking is another attribute of the soul; and here I discover what properly belongs to myself. This alone is inseparable from me, I am – I exist: this is certain, but how often? As often as I think […] I am therefore, precisely speaking, only a thinking thing, that is, a mind (mens sive animus), understanding, or reason […]138
Cogito ergo sum—I think, therefore I am! This conclusion rests, of course, on an implicit premise: the “what” that “thinks” must exist. This premise is
founded upon a kind of intuitive evidence in which the cogito argument serves as a guarantor of reality. It means, in the spirit of rationalism, that only by taking thought as our starting point can we overcome our doubt in sensory experience. Descartes inaugurated a long and rich tradition in Western thought that has made the subject the foundation of an order based on reason, in the spirit of Archimedes, whose classic statement, "Give me a fixed point and I will move the whole world," can be discerned in the previous quotation. The mathematician needs an axiom, a postulate, an absolute and certain starting point from which to expound by deduction a compelling argument. In summary, we can say that Descartes' method, which seeks to arrive at absolute certainty by means of systematic doubt, contains four parts: establishing a certain starting point; dividing the problem into separate parts; addressing those parts in a specific order; and, finally, making sure that every part of the problem has been solved in such a way as to add up to a solution of the whole. Descartes' philosophy has come to symbolize the new cognitive self-assurance, inspired by mathematics, which emerged with the Scientific Revolution. But was this the only possible route that philosophical thought could have taken? Were there alternatives?
Historical preconditions for skepticism and foundationalism

The Scientific Revolution took place in a violent social context within a chaotic and dangerous landscape that philosophers and naturalists navigated only with difficulty. The turbulent seventeenth century also set the terms for the development of philosophy and science. In his anthropological study of science, Cosmopolis: The Hidden Agenda of Modernity (1990), Stephen Toulmin has argued that the (as he sees it) unfortunate turn that Western thought took with Descartes cannot be understood without consideration of the radical historical discontinuity that characterized the transition from the sixteenth century to the seventeenth. At that time a radical shift in thought was taking place: from Renaissance humanism's focus on the oral, the particular, the local, and the timely, to the Scientific Revolution's preference for the written, the universal, the general, and the timeless. As a result, the sixteenth century's tolerant and practical wisdom, with its curiosity about every concrete detail of human life, as exemplified in the argumentative skepticism of Michel de Montaigne, was replaced in the seventeenth century by theoretically oriented thinking, which instead came to be defined by Descartes' abstract veracity and mathematical rigour.
This abrupt turn in the history of thought can scarcely be understood without consideration of the turbulent state of European society in the late 1500s and early 1600s. Toulmin highlights how the murder of Henri IV of France in 1610 was a turning point, serving as evidence for many contemporaries that the policy of religious tolerance had failed. Instead, the aim would now be, whatever the cost, to establish social and political stability on the societal level in tandem with intellectual certainty on the philosophical level. In the world at large, the uninterrupted economic expansion of the sixteenth century had ground to a halt and been replaced by crises, depressions, and uncertainty: "Far from the years 1605-1650 being prosperous or comfortable, they are now seen as having been among the most uncomfortable, and even frantic, years in all European history."139 Although this context of brutality must clearly have had a profound effect upon the conditions and possibilities for scientific thinking, it is rarely mentioned in accounts of philosophy of science. Toulmin's historical contextualization, which situates a dramatic shift in scientific thinking in a context of politics and war, also needs to be supplemented by a perspective that is far too rarely considered in science but that has become increasingly pressing in our own era: climate change. We need, in fact, to ask ourselves: what is the significance for the Scientific Revolution of the fact that the transition from the sixteenth century's "carnivalesque" skepticism to the seventeenth century's mathematical foundationalism, as Toulmin notes, coincided with the Little Ice Age, a period when the earth experienced some of the coldest weather recorded in a thousand years? These extreme climate and weather phenomena, which severely disrupted almost every country during this era, caused thinkers to predict the imminent end of the world. This must also have fundamentally affected the scientific thinking of the day. The fall in global temperatures from around 1618, which resulted in a climate so extreme that harvests suddenly failed for several years in a row and epidemics broke out, can hardly have left scientific activity unaffected. Nonetheless, this line of argument does not mean that we should slavishly adhere to deterministic explanatory models when trying to understand the significance of climate change. But it does suggest that, as Geoffrey Parker notes, poor political judgement and decision-making by the authorities of the day made a bad situation worse—such that what began as a crisis grew into a disaster. Instead of concentrating on overcoming the challenges involved in the radical climate changes to which their populations were being exposed, most governments instead tried to put into action their plans for war—with the horrifying outcome that as much as a
third of the world’s population died.140 We recognize the pattern from other eras—and perhaps also from our own?
An epistemological turn—and a lingering Cartesian anxiety

Back to Descartes, whose thinking emerged victorious from this dramatic process of historical change. Since the time of Hegel, Descartes has been considered from a philosophical-historical perspective associated with the so-called epistemological turn in Western thought. From having been focused on ontology (that is to say, preoccupied with questions about the nature of reality), philosophy from the seventeenth century onwards came to concern itself primarily with the theory of knowledge, epistemology (human cognition). Descartes' scientific ideal was influenced by mathematics, and he was in this sense a precursor, because mathematics has subsequently often been regarded as holding the key to the riddle of the world. And yet, as already noted, the deductive logic that was the hallmark of this kind of philosophy also required a point of departure in certainty. Yet Descartes underestimated the difficulties associated with establishing such a starting point in certainty, in the same way as he seems to have been blind to the fact that mathematical truths are not necessarily truths about reality. He was also fumbling for guarantees. The cogito that he presented as the foundation of an order based on reason still needed a guarantee. Fearful of the troubling notion that it is perhaps actually we ourselves who are systematically mistaken, and that there is thus no order in our world, Descartes saw only the existence and absolute perfection of God as a guarantee for the foundation that the cogito had established. For this reason, it was no coincidence that he chose to conclude his third meditation with the following observation: "And thus it is absolutely necessary to conclude, from all that I have before said, that God exists."141 God? Necessarily exists? In order to understand where this line of thought comes from, we need to bring in another theological context, one that was a given for Descartes, even if it is not today. For no sooner has Descartes established the subject (the cogito, "I think") as a foundation than the question arises: but what if everything is an illusion, if there is no order at all in the world? Tormented by the idea of an "evil God" who systematically deceives humanity, Descartes suddenly sees clearly that his foundation nonetheless needs a guarantee, namely God. At this moment, the modern metaphysics of an all-powerful God, whose complete absence of weakness (the very weakness of which the Biblical tradition speaks) makes him the ultimate and absolute guarantor of existence, is born.142 From here,
there emerges not only a scientific foundationalism that is continually seeking an ultimate foundation for existence but what Richard Bernstein has called a "Cartesian anxiety," which seems to offer philosophy of science only two extreme alternatives: either an objectivism that offers a certain foundation for a centered subject within the framework of a stable and rational order; or a relativism in which everything is in flux for a decentered subject that has abandoned any ambition of establishing a basis for knowledge in a contingent world without stable contexts or a given meaning.143 Descartes had a nightmare about being imprisoned in a well in which he could neither get down to the bottom nor climb up to the top. This image has etched itself into the thinking of modern philosophy and philosophy of science, resulting in a lingering tendency to fluctuate between extremes. Needless to say, this is a hasty assumption. It is certainly not the case that we automatically fall into relativism, as if it were on offer without any costs, simply because we stop clinging to an absolute foundation. In many regards, relativism is, philosophically speaking, an even harder position to defend. And philosophy has long known this fact by virtue of its recurrent reflections on the difficulty of being believed when one declares that everything is relative—a statement that should not itself be taken as relativistic …
How the world was divided up in Descartes' day: subject and object

Descartes' rationalistic point of departure in the human capacity for knowing was bound up with a philosophy of substance that presupposed a dualistic order defined by a hard distinction between soul and body, interior and exterior reality. He therefore treated res cogitans (that part of the world that contains consciousness and thinking) and res extensa (the material world that only contains extension and that is governed mechanically in the absence of consciousness) as two distinct substances, each defined by its own logic. In this light, it comes as no surprise that Descartes, following a contemporary resurgence of interest in Augustine's thinking on the time of the soul and the self, took his point of departure in precisely the "thinking substance" of humanity's spiritual capacities. On the other side of this dualistic order, he simultaneously developed a mechanical philosophy of nature that considered animals as small machines within the framework of an overall view of nature as a gigantic machine. Descartes as philosopher has therefore come to embody a dualistic way of thinking, in which his cogito-philosophy, with its rationalistic starting point in the notion of innate
ideas, not only co-existed with but presupposed its opposite in the form of a mechanical materialism that included the body as a machine. What emerges is a dualistic conceptual model in which the whole world is divided into two parts, two separate spheres, each of which is ascribed an entirely different substance: either res cogitans or res extensa, consciousness or matter. Descartes' fondness for mechanics also had an important theoretical function regarding knowledge, functioning as a kind of epistemological cleansing insofar as it offered a possibility of drawing a clear line of demarcation between natural magic and philosophical naturalism (what we have come to call natural science) by only accepting kinetic relations (which occur as a result of direct contact) as an explanation for change. The great shift from natural magic to natural science (note the anachronism!) took place during the period 1580–1630, and it happened, interestingly enough, with the joint support of the Protestant ethic and the spirit of capitalism. Shapin has summarized these societal implications for the development of knowledge of the natural world as four connected processes of change. First, a mechanization of nature and an increased use of mechanical metaphors to construe natural processes and phenomena. Second, a depersonalization of natural knowledge, associated with a growing rift between human subjects and natural objects. Third, the development of explicitly formulated methodological rules for disciplining knowledge production and eliminating the effects of human feelings and interests. Fourth, and finally, a desire to use knowledge for moral, social, and political ends.144 In short, the Scientific Revolution saw the emergence of a new view of nature as an object divorced from the subject and defined by causal relations in accordance with the predominating mechanical metaphors. At the same time as this ontology defined by the mechanical worldview detaches itself, there appears an epistemological subject isolated from nature that plays the part of observer, bystander, and manipulator. They are united in a dualistic ontology that constitutes, on the one hand, an objectified nature of facts, causal relations, and predictability, and, on the other, a subject of spirit, freedom, and values. While the world is emptied of qualities and reduced to quantifiable matter, the subject is given a monopoly on qualities and values. What emerges is a dualistic order that to a large degree has set its mark on the subsequent history of theories of knowledge, in which rationalism and empiricism, deduction and induction, voluntarism and determinism, humanism and naturalism have served as opposite sides of the same epistemological model.
Locating subjecthood between centering and decentering

In his Oration on the Dignity of Man (1486), the Renaissance humanist Giovanni Pico della Mirandola (1463–1494) memorably describes how God, on the sixth day of the Creation, creates human beings and charges them with learning the laws of the universe, loving its beauty, and admiring its splendor. In contrast to the rest of Creation, God has not tied human beings to a fixed place or occupation but given them complete freedom and free will. We are far from any illusions of the Renaissance as a secularized era when Pico, in an address to Adam, chooses to put the following words in the Creator's mouth:

We have given you, Oh Adam, no visage proper to yourself, nor any endowment properly your own, in order that whatever place, whatever form, whatever gifts you may, with premeditation, select, these same you may have and possess through your own judgment and decision. The nature of all other creatures is defined and restricted within laws which We have laid down; you, by contrast, impeded by no such restrictions, may, by your own free will, to whose custody We have assigned you, trace for yourself the lineaments of your own nature. I have placed you at the very center of the world, so that from that vantage point you may with greater ease glance round about you on all that the world contains. We have made you a creature neither of heaven nor of earth, neither mortal nor immortal, in order that you may, as the free and proud shaper of your own being, fashion yourself in the form you may prefer. It will be in your power to descend to the lower, brutish forms of life; you will be able, through your own decision, to rise again to the superior orders whose life is divine.145
At this point, Pico continues in his own voice and declares: "Oh wondrous and unsurpassable felicity of man, to whom it is granted to have what he chooses, to be what he wills to be!"146 Pico's text exhibits a classical anthropocentric understanding of human beings as the vantage point from which to understand the whole world. What the Copernican turn really meant for people's view of humanity's place in the universe is, however, far from clear: is the result a centered or a decentered subject? If we recapitulate the context, it seems clear that the Copernican turn was more than just a matter of what the cosmos looked like; it was also a revolution in the contemporary view of consciousness.147 But, as already noted, the result of this revolution in consciousness is not immediately apparent. On the one hand, Copernicus aligns himself with a tradition critical of the subject, in which the Copernican turn can only represent a decentering of the subject. This means that humankind loses its place as the center of the universe within the framework of a threefold movement, in which
the subsequent movements are provided by Darwin's insight that humanity is not unique in nature, and by Freud's notion that our thoughts and behaviour, given the dominance of the unconscious in relation to consciousness, are governed by reason only to a limited degree. But this profound decentering of the subject—in cosmos, nature, and consciousness—is matched, on the other hand, by a subject-oriented philosophical tradition that instead places the subject at the center, a tradition that continues in different variants via Descartes, Kant, Kierkegaard, and Habermas, and that makes human subjectivity and human reason the starting point for understanding the world. Renaissance men and women were not modern, but at around the time of the Renaissance they began (or to some extent continued) a process that would ultimately lead to a new view of humanity, nature, morality, and religion, and that has made possible both new technological innovations and a commercial revolution (trade and the money economy) as well as new approaches to state governance in a culture separated from the church, that is, in a secular society. The turn towards the subject thus forms the starting point for a humanism in which an autonomous human subject occupies the central place, an idea that is in turn a precondition for democracy, human rights, moral philosophy, corporate life, market economy, and so on. The opposite pole to Descartes' centered subject has come to be represented in modern philosophy by the decentered subject proposed by Friedrich Nietzsche (1844–1900). In his ironic meditation On Truth and Lies in a Nonmoral Sense, Nietzsche ferociously mocks the proud self-consciousness of the Cartesian cogito by uncompromisingly looking into the abyss:

Once upon a time, in some out of the way corner of that universe which is dispersed into numberless twinkling solar systems, there was a star upon which clever beasts invented knowing. That was the most arrogant and mendacious minute of "world history," but nevertheless, it was only a minute. After nature had drawn a few breaths, the star cooled and congealed, and the clever beasts had to die. One might invent such a fable, and yet he still would not have adequately illustrated how miserable, how shadowy and transient, how aimless and arbitrary the human intellect looks within nature. There were eternities during which it did not exist.148
In the text, Nietzsche goes on to ironize about "the proudest of men, the philosopher" by comparing Descartes' centered cogito to a mosquito that flies around the room and pompously "feels the flying center of the universe within himself." By means of this naturalistic decentering, Nietzsche confronts the entire philosophical tradition of humanism and a centered
subject with a fundamental crisis, a destabilization that in turn risks unravelling the entire rule of reason. Centering or decentering? These extremes have far too often been presented as mutually exclusive alternatives. Yet historical contextualization helps us to see how they are connected and can in different ways be combined. Between these extreme positions lies, in fact, the concrete reality of human beings, which, despite having long been explored, still awaits nuancing.
Empiricism as a settling of accounts with scholasticism

Descartes' cogito argument has made him one of the leading representatives of rationalism, a philosophical and scientific movement that tends to be considered as diametrically opposed to empiricism. Where rationalism puts its faith in thinking, empiricism instead seeks to develop knowledge with the help of sensory experience, that is to say, by listening to the testimony of nature itself and by observing empirical reality. According to rationalism, we have to trust in reason, because thinking is more true than our experience of the world and, ultimately, knowledge rests on no more solid ground than the subject's own intelligence and reason. Empiricism, by contrast, regards things as more true than words, because empirical observation plays the central role in a knowledge interest focused on the evidence of our eyes. Philosophical accounts of science have long been dominated by this opposition, which counterposes rationalism to empiricism as mutually exclusive alternatives. Today, however, we have to ask ourselves if matters really are this simple. Is it true that these epistemological positions have historically been so one-sided? Is it even possible for those aiming to develop knowledge to separate reason and sensory experience from each other so completely? Let us go back to history. One could, of course, say that the entire Scientific Revolution was characterized by a realistic ambition that focused on a newly awakened interest in an empirical approach, often in combination with a fondness for experiments. The motto of this new era was: "Let us test how it works!" And yet we need also to understand this strong empirical emphasis as a reaction against both metaphysics and speculation, and as an expression of the attempt to settle accounts with a lingering nostalgic view of history, a scholastic culture of knowledge that effectively blocked the logic of discovery in an early phase of the history of scientific thinking. The idea of a linear, cumulative intellectual development had certainly begun to circulate, but at this stage it remained new and was far from being generally accepted. The dominant understanding was instead that the thinkers of
antiquity had superior knowledge of the state of the universe—and that history ever since had been a process of decline. For this reason, we still encounter an ambivalent perception of temporality in thinkers like Bacon, Galileo, and Newton. In this context, empiricists were often keen to emphasize the importance of observing first and generating data before starting to theorize, which is to say trying to approach the world unconditionally rather than allowing one's investigations to be hamstrung by preconceptions and prejudices, all for the purpose of safeguarding the intention of discovering new worlds and developing new knowledge. But when this empirical approach was taken too far and made too one-sided, people sometimes began to underestimate the importance of inventions, an underestimation belied by the empirical approach's own dependency upon technological innovations. Francis Bacon became the defining figure for this "realist" ambition to break with an older scholastic dogmatism based on tradition—which tended to treat ideas as the reflection of metaphysical circumstances rather than of reality—and to install in its place a mode of scientific induction based on experience. In Bacon's thought, a contemplative wonder at the world, which had previously functioned as a dominant ideal of knowledge, was replaced by a desire to actively explore nature in order to be able to intervene in its progress and to master it in the sense of making it beneficial and useful for society. In contrast to an Aristotelian-scholastic philosophy that was seen as ineffectual and meaningless, the new would take priority over the old, curious investigation would come before loyalty to authorities, utility before orthodoxy, manipulation before contemplation, and innovation before tradition. But this perspective also presupposed the elimination of "the idols," a concept that Bacon used to refer to the different kinds of prejudice that prevent us from seeing the world without presuppositions and that instead entrap our thinking within a deductive logic. His reference to "idols" was targeted in particular at the anthropomorphizing tendency of seeing nature as analogous to humanity, but also at the prejudices that come from habits, upbringing, and natural predisposition, as well as at the power exercised over thought by social life and language. But the greatest opponent in this situation was still the philosophical tradition. Against the "idols" obscuring reality, Bacon counterposed a mode of induction in the spirit of empiricism built on our capacity to ask real questions about nature. In doing so, however, he was not only underestimating the necessity of deduction but diverging from the strong interest shown in mathematics by many of his colleagues during the Scientific Revolution. This contrast to the importance ascribed by many to
mathematics at this time can serve as an indication of Bacon's occasional overestimation of the inductive powers of science. Ironically, Bacon's utopian vision of an empirically driven science defined by realism and hypothesis-testing, and his conviction that reality comes to us from outside via experience, was in some sense confirmed by the fact that he died from a severe cold—which he had contracted, tellingly, while carrying out an experiment. The need for polemical tools in concrete historical situations may help to explain the tendency towards a certain one-sidedness in relation to both rationalism and empiricism. Even so, we can note that in concrete scientific work there has never been much of a distance between the scientific ideals of deductive rationalism and inductive empiricism. The fact that Descartes would come to have a far greater influence on empiricism than on rationalism could be taken as an indication that there was not a particularly sharp dividing line between these different strands within philosophy of science. Even within British empiricism, which emerged as a powerful and dominant constellation of positions in the late seventeenth and eighteenth centuries, the principle of intuitive evidence remained dominant, even if its adherents, in contrast to Descartes, turned directly to sensory impressions as the only path to knowledge. Philosophers such as John Locke (1632–1704) rejected the notion of innate ideas, starting instead from the assumption that the human mind was a tabula rasa, a completely empty and blank page, and that all ideas therefore came "from without" and arise from knowledge of the senses. This approach to answering questions about what and how we can know things was continued by David Hume (1711–1776), who would nonetheless come to reject causal explanations based on his conclusion that all that exists is a stream of continually changing sensory impressions. In a way, it could be said that Hume was pursuing the logical consequences of empiricism when he argued that there is no direct, logical connection between cause and effect. But because it is not possible to have sensory experience of a law-governed relation between cause and effect, it is likewise impossible to find any other basis than habit for the claim that effect follows cause. In other words, a "purely" empirical focus on its own cannot take us very far. The Scientific Revolution was characterized by a settling of accounts with speculation and metaphysics, for which reason it was heavily focused upon the importance of grounding knowledge on empirical sensory experience. But this one-sided empiricism was far from self-evident in the seventeenth century—it quickly became clear that empirical observations could not be treated in isolation and could not speak for themselves. For
example, when Galileo made his observations using his telescope, those sensory experiences, mediated via instrumentation, were in themselves insufficient as scientific evidence to convince his contemporaries. Instead, evidence had to be guaranteed communicatively. When he headed to Rome in 1611, he therefore gathered a group of philosophers above one of the city’s gates so that they might look through the telescope together. It was only when several people could compare observations made using an instrument that the discoveries became convincing enough to circulate widely. This example of how empiricism is embedded in wider contexts demonstrates the necessity of a public perspective when it comes to scientific experiments, a criterion that has become an important component in all modern science and a general feature that now defines the peer-review process that accompanies scientific procedures for publication and communication. In other words, there seems to be a close connection between rationality, the public sphere, and communication—which in turn presupposes a communicative context of collegiality and peer-review evaluation that combines rationalism and empiricism.149
Deduction/induction—a wildly exaggerated either-or

As we have seen, the opposition between rationalism and empiricism has deep roots and precursors in the thinking of classical antiquity and the Middle Ages. In light of this, it is perhaps not so strange that modern thought has strikingly often become bogged down in the tracks of the recurrent conflicts along this battleline. But is the issue quite as simple as a choice between mutually exclusive alternatives? The fact is that modern science clearly contains aspects of both empiricist and rationalistic thought. The caricatured image of polarized positions may be pedagogically appealing by virtue of its simplicity, and for this reason it has been readily invoked in philosophical accounts of science, but on points of fact it has also been seriously misleading. In real life, it has for the most part involved a spectrum of positions, none of which can really be described as unadulterated rationalism or empiricism. Although rationalism and empiricism, like deduction and induction, may start from divergent problematics, the fact is, as Strømholm has noted, that each of them actually represents "its own philosophical style rather than some crucial epistemological difference."150 It is important to remember this if one is not to be led into making simplifications, which, while didactically effective, present us with an impossible either-or choice. History teaches us that only rarely are we presented with simple choices between extreme positions as if they were mutually exclusive alternatives.
Rather, the development of modern scientific thinking has moved along a continuum between these polar opposites, never dogmatically defending one of the positions at the cost of the other. Bacon's program for a reorganization of knowledge and research admittedly focused upon an empirical study of nature in which the experimental method and inductive philosophy were his primary interest, in contrast to those who claimed that reading books was all that was needed to dispel the illusions ("idols") that distort reality for us. And yet, even for Bacon, it was never an issue of narrow empiricism. Although Bacon in his writings sought to overturn his contemporaries' excessive faith in tradition, logic, and deduction, for which reason he argued for a research process defined by strict inductivism, taking its point of departure in the gathering of empirical material, observation, and experimental results, he nonetheless always worked both inductively "upwards," towards a higher order of generalization, and deductively "downwards," so that new observations might also correct earlier generalizations. Moreover, Bacon was convinced of the close connections between theory and practice, that the contemplative life and the active life were two sides of the same coin. This connection is also reinforced by the fact that the theoretical research in Bacon's program has a practical purpose in his utopia New Atlantis. Today, rather than becoming bogged down in the tracks that have been thrown up by the caricaturing of extreme positions as mutually exclusive opposites in terms of idealism and realism, we need to affirm that both the human sciences and the natural sciences almost always contain elements of both rationalism and empiricism, observation and theory. Not even the experiment itself should be viewed as the expression of a purely empirical viewpoint, since it requires, for example, adherence to rules, which must be seen as the expression of a rationalistic impulse. Accordingly, we might say that a culture of knowledge represented by experiments is as much rationalistic as empiricist. In other words, Bacon and Galileo were simultaneously empiricist and rationalist. Far from being deluded, they suspected that both sensory impressions and reason were needed for the development of knowledge. Although they were not always sure how these ought concretely to be related to each other, and in concrete situations, as we have seen, were preoccupied with a polemic that forced them to adopt somewhat one-sided formulations, both these aspects existed alongside each other, in some situations more explicitly and in others merely as a vague implication. A recurrent theme in Per Strømholm's account of the Scientific Revolution is that what seem like polar opposites—Descartes' deductive rationalism and Bacon's inductive empiricism—were in reality intimately bound up with each other. It is as though they were each observing the same
distinction—but from a different angle: "For Descartes, knowledge was useful because it was true; for Bacon, it was true because it was useful."151 Where Descartes went from theory to observation, Bacon moved from observation to theory. The issue was never about adopting a position that was purely deductive or purely inductive but rather about an inductive-deductive continuum—and, ideally, a dynamic interaction. In other words, the knowledge we moderns call science is, to a greater or lesser degree, always based upon both mathematics and experience. This becomes clear from the metaphors that Bacon uses in order to situate himself within the epistemological landscape. Here, the ant serves as a metaphor for those who, by means of experiments, wish only to gather and use, while "dogmatists" (rationalists), with their reason, resemble the spider, which fabricates by weaving its silken web from its own substance. As an alternative to both of these one-dimensional perspectives, he proposes the bee as a metaphor for the true essence of philosophy and scientific thinking, because the bee gathers its material from flowers in the garden but then transforms it into something entirely different by means of its own abilities. Instead of relying solely upon the power of consciousness or merely gathering material from nature or experiments, Bacon speaks of the need for "a closer and more binding alliance" between the experimental and the rational—something that points towards a possible, more dialectical solution to the riddle of knowledge.152 Following the precept that one must take one's point of departure in either thought or sensory experience—rationalism or empiricism, truth as coherence or correspondence—philosophical accounts of science have often developed two kinds of foundationalism, each of which dogmatically claims to have established an ultimate foundation. In recent discussions within philosophy of science, these positions have often been articulated in terms of deduction or induction. Sooner or later, all those who have been initiated into the theoretical world of science are therefore presented with the choice of taking their starting point in either theory or empirical data. This problematic recurs repeatedly in the research process in the form of a doubt as to whether one has actually "discovered" something that really exists or merely "invented" something oneself. Can my results be justified by observation (induction) or logical thinking (deduction)? The old questions about how conclusions are reached have thus tended to linger on, shaping the grammar that has also determined scientific philosophical considerations in our own era. Deduction proceeds from a conviction that if the premises are correct, the conclusion will be, too. Yet it is easy to see the weaknesses associated with this line of reasoning, given that conclusions often clash with concrete
reality. At the same time, we know that empirical data cannot "speak" for itself and that inductive thinking easily leads to false conclusions. A deduction must be anchored in a more or less tangible concrete reality, just as induction must repeatedly wrestle with the problem of justifying its generalizations. In other words, both ways of thinking are associated with weaknesses—but both also seem indispensable, and they implicate each other. Science presupposes a deductive dimension, but this needs always to be balanced against the need for inductive thinking. Truths are not immediately given in empirical reality, as if it were merely a question of "gathering" data. Nor is it possible to merely think things through "in the clouds," without any contact with concrete reality.

We can study this dynamic in relation to how empirically driven methods such as grounded theory have been developed: from an original ambition of trying to work purely inductively on the basis of empirical data, this methodological approach has been successively modified, corrected, and supplemented by incorporating a series of deductive elements intended to justify the classification of material.

The concept of abduction appears in philosophical discussions of science at the point at which it becomes clear that deduction and induction need to be combined. However, seeing abduction as the new great Solution to the dilemma is risky. Talk of abduction is often accompanied by a reductive narrowing of deduction and induction such that they become caricatures. The many links that connect these positions are thereby obscured, since a dynamic between deduction and induction is already entailed by, and in many respects implicitly exists in, the thinking of someone like Descartes, even if, as we have seen, he is usually characterized as a "pure" rationalist:

Descartes' philosophy was an attempt to mediate between theoretical and practical knowledge, between metaphysical certainty and instrumental mastery. For historical reasons, he regarded the classical ideal of absolute certainty as more important than practical utility.153
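The contrast between the two inference patterns, and the characteristic weakness of each, can be made concrete in a small illustrative sketch. The example below is my own and not drawn from the sources discussed here (all names and data in it are invented): a deductive step is truth-preserving so long as its premises hold, while an inductive generalization remains hostage to the next observation.

```python
# Illustrative sketch only: a toy contrast between deductive and
# inductive inference. All names and data here are hypothetical.

# Deduction: truth-preserving. Given the major premise "all humans are
# mortal" and the minor premise "x is human," the conclusion follows.
def deduce_is_mortal(is_human: bool) -> bool:
    ALL_HUMANS_ARE_MORTAL = True  # taken as given, not established empirically
    return ALL_HUMANS_ARE_MORTAL and is_human

# Induction: ampliative but fallible. A generalization from observed cases
# can be overturned by a single new case -- the classic problem of induction.
def all_observed_swans_are_white(observations: list[str]) -> bool:
    return all(color == "white" for color in observations)

swans = ["white", "white", "white"]
print(deduce_is_mortal(True))               # True: follows necessarily
print(all_observed_swans_are_white(swans))  # True: tempts "all swans are white"
swans.append("black")                       # one further observation...
print(all_observed_swans_are_white(swans))  # ...suffices to refute the generalization
```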
For such a dynamic to arise, scientific thinking needs to straddle two horses. It is naïve to think that research "collects" or "gathers" empirical data—but it is no less risky to talk about "creating" or "constructing" empirical data and reality.

The question of how we should categorize and understand the relation between body and soul is a recurrent theme in modern scientific thinking. The various ways of approaching this problematic, as we have seen on more than one occasion, also have a general scientific philosophical relevance that is highly significant for the principal concerns of this book. The dominant model has been to make a dualistic separation between two
essentially different domains and qualities. An alternative approach is monism, which refuses to separate them but instead allows them to merge into a single entity. A less common, but to my mind more productive, way of managing this problematic is to think dialectically, that is, to draw a distinction, not for the purposes of separating, but, on the contrary, in order to stage an interaction between them by highlighting how what is explicit in one polar opposite is only implied in the other, and vice versa. Dialectical thinking is always seeking a "third" term—something that at times can nonetheless also be a perilous balancing act, as we will shortly see. This line of argument has brought us squarely within the modern epistemological problematic, leading us eventually and inexorably to Kant.
How is knowledge even possible?

We have seen how philosophical accounts of science, in their quest for clarity, have often exhibited a fondness for highly caricatured positions. This has led to the history of science becoming an unnecessarily polarized landscape and has also resulted in many philosophers of science seeming to be rather one-sided—and sometimes outright foolish—in how they approach the world. The preceding chapters have problematized this way of tackling entrenched positions in philosophy of science by foregrounding the fact that rationalism and empiricism, deduction and induction, cannot be treated, either historically or today, as mutually exclusive opposites between which one must choose. Rather, they are related to each other and more closely connected than the literature on the philosophy of science usually makes out.

One way to destabilize these stereotypes is to recall that when objectivity and subjectivity made their appearance in medieval prehistory as a pairing, in the work of Duns Scotus and William of Ockham, their respective meanings were reversed. At that time, the term "objectivity" referred to how things are present to consciousness, while "subjectivity" referred to things in themselves. Even in Descartes' writings, the concepts appear with these reversed significations. This can seem strange and confusing, but it also serves as a reminder of just how intertwined the terms are. It was Immanuel Kant who dusted off the old scholastic terminology and breathed new life into the debate by creating a foundation upon which to ascribe the concepts new—and almost reversed—meanings. For Kant himself, however, the line of demarcation between objectivity and subjectivity primarily involved the relation between universal and particular. It was not until Kant's philosophy reached a wider audience, from the 1820s and 1830s on, that the dividing line between the world (passive nature) and
consciousness (the self's active understanding) came to be the dominant way of articulating theories of knowledge.

Kant is nonetheless the thinker whose name is indissociable from this attempt to connect rationalism and empiricism. He was a philosopher who revolutionized the world of thinking—even as he himself lived a superficially uneventful existence at a minor university in Königsberg (now Kaliningrad), a city that, strangely enough, this universalist philosopher and proponent of cosmopolitan ideals never left at any point in his life.

Since the rationalistic approach easily develops into dogmatism, Kant took his own point of departure in the Scientific Revolution's general empirical focus and Hume's notion that all knowledge derives from sensory impressions. Kant, too, thereby affirmed the fundamental importance of empirical data and experience. But even if sensory experience was utterly essential, it was nonetheless insufficient as a condition for developing knowledge. What we experience through our senses must in fact be structured with the help of concepts in our reason. Observations in themselves are not meaningful unless they are organized. Kant therefore introduced a transcendental component, a synthesizing capacity, independent of experience, that leads to a kind of "third" position through which Kant sought both to connect with and to supersede both rationalism and empiricism. It means that what we call knowledge can actually be thought of as having two different sources: "The understanding can intuit nothing, the senses can think nothing." At the same time, however, these cannot be treated as mutually exclusive alternatives if we really want to understand what knowledge is: "Thoughts without content are empty, intuitions without concepts are blind."154

Unlike monistic systems of thought, such as materialism and idealism (in which, so to speak, all of reality is in the same basket), and dualistic ontologies of the kind we encounter in Descartes and others (which strictly divide up reality into two separate worlds), Kant was primarily interested in a kind of dialectical "third" way and sought to disclose the process by which the synthetic capacity itself, on the basis of these two sources, produces knowledge. Knowledge is in fact something one "makes," for which reason it must be treated as an expression of human action. Kant's interest came to focus upon what he called Einbildungskraft, a concept that in English is translated as imagination. From this starting point it becomes necessary to think about knowledge in other terms than either-or dichotomies while always keeping one's gaze firmly on a "third" alternative. At stake here is a "critical path" that takes the form of a kind of self-critique of reason. This means that knowledge cannot really be treated as something that only "exists." By focusing on a subject who teases out a
pattern by using a particular set of data, Kant in a sense relocates the aesthetic problematic from the periphery to the heart of the knowledge discussion, so that it in a way becomes the "hidden art" of knowledge formation. The question of what is real can thereby no longer be kept separate from the question of how we imagine reality.

Kant wanted to achieve what he himself described as a "Copernican turn" in philosophy. Instead of treating knowledge as a given, in line with empiricism's "out there" or rationalism's "in here," Kant presents an act of knowledge in an active subject whose creative synthetic capacity is continually mediating between dialectically related antinomies. We thereby move from a representing and imitating epistemological paradigm to a creating and productive epistemological paradigm, in which our imagination (Einbildungskraft) emerges as a creative act, an aesthetic ability to imagine new relations within a given body of empirical material. Yet this is not a matter of free imagination in a self-defining subject able to create its own reality, but of a subject that is defined by an inner disproportionality, aiming to establish conflictual mediations in order to configure knowledge. It is a matter of what Paul Ricoeur, one of the twentieth century's great dialectical thinkers, has repeatedly described as a constrained, regulated imagination.155

This interpretation is reinforced if we carefully consider the importance of "schematization" in Kant's thinking. When the imaginative capacity produces knowledge, it does not do so freely but with the help of "schemata" that are in motion. Here, Kant's ideas in a sense anticipate the linguistic and cultural turns that emerged as a defining arena of conflict in philosophy of science during the twentieth century. From a larger historical perspective, it is therefore possible to consider language as the last of the four epistemological "worlds" whose mutual relations anyone interested in the theory of knowledge must come to grips with: first, reason (rationalism); second, sensory experience (empiricism); third, imagination (transcendental philosophy); and fourth, language (philosophy of language).
The difficult art of maintaining a balance

It could be said that Kant, in his transcendental philosophical project, not only aligned himself with empiricism and rationalism but also marked his distance from, and his superseding of, both positions. According to Kant, knowledge is not something that comes solely "from without" by means of sensory experience, nor is it something that can be entirely derived "from within" by means of thought. Instead, knowledge is part of a "third" concept, the synthetic capacity of an active subject. This three-worlds model, in which human beings are regarded as citizens of a realm of both
freedom and necessity even as they emerge as the main transcendental architect of knowledge, inevitably involves a kind of balancing act, which in the centuries that followed was continually in danger of leading to imbalances and new forms of one-sidedness. By means of his "transcendental idealism," Kant had sought, as it were, to learn "to swim while on dry land" in order to realize his ambition of describing the synthetic capacity in itself, an endeavor that would subsequently be revealed as highly problematic. In the neo-Kantianism that followed, there was also an idealistic tendency, which meant that its focus on imagination continually risked ending up in an excessively strong subjectivity, as if the subject itself not only created the categories for understanding the world but also created the world itself. As a result, the world was in danger of seeming like the shapeless matter from which the subject, without any constraints, can create a world of its own. The Danish phenomenologist K.E. Løgstrup (1905–1981) has called this a "culturally-biased view of life" and warned that such a transcendental perspective risks detaching the self and culture from the world and nature. The result is a pan-cultural ideology, associated with Kant's subjective idealism and writ large in Kantianism, in which life is viewed as formed, or constructed, by human perception, without due sense of reality as "pre-given," that is, as already informed.156

The genuine challenges associated with trying to affirm a "third" position in the difficult balancing act of knowledge have meant that theories of knowledge in recent centuries have repeatedly shown a tendency either to slide into a more or less idealistic position or to confront various materialist and naturalistic reactions whose proponents have wanted to establish positions that they regard as more realistic and closer to reality. Because this opposition between idealistic and materialist positions has defined the subsequent history to such a degree as to resemble two monistic tendencies, the numerous underground connections between these systems of thought have remained invisible. While interpretations of Kant may often have developed in an idealistic direction, they are far from being the only way of reading his texts—and note that Kant himself did not use the word "idealism" in the sense that has subsequently dominated, i.e. as the diametric opposite of materialism. There are also considerable resources in Kant's own philosophy that could be used to balance an idealistic interpretation of transcendentalism. If transcendental philosophy in some sense focused upon the human capacity that makes it possible to develop knowledge about things "as they appear to us," Kant's ideas about das Ding an sich, the "thing-in-itself," already present a demarcation against various attempts to establish absolute knowledge about reality on the basis of a centred subject. There are always things in the world
that our knowledge does not cover—the map will never correspond to the entire reality.

The years around 1800 were a dramatic period in European history. An industrial revolution had been under way in Britain for some time, and at the end of the eighteenth century a political revolution had swept away the old social order in France, after which Napoleon's armies had easily conquered a Germany that was still little more than three hundred Lilliputian states. But even Germany was the site of a kind of revolution, albeit a revolution largely limited to the world of ideas: a kind of "philosophical revolution" that was to have a lasting impact on the entire continent and, ultimately, the rest of the world.

In the early decades of the nineteenth century Georg Wilhelm Friedrich Hegel (1770–1831), from 1818 a professor at the University of Berlin, launched a grand attempt to extend Kant's thinking by building a philosophical system that would encompass historical reality in its entirety in the form of a synthesis of all knowledge and science. Where Kant had tried to "swim on dry land," as it were, in his transcendental attempt to get at the knowledge-creating subject that is the precondition for every kind of knowledge, Hegel's monumental ambition was to incorporate all science and all of reality within a historical system. He thereby gave the word "concrete" a new meaning, which, by including everything that we consider as reality, almost reversed the conventional understanding of the relation between concrete and abstract. Where the successors to the Enlightenment have sometimes viewed philosophy as a reason-based interest in knowledge, clearing away the metaphysical log jam that was preventing the development of scientific thinking, philosophy for Hegel was the "crown of the sciences" and, as such, the discipline that systematized and completed all disciplinary thinking. Hegel was probably the last philosopher to make a serious attempt to grasp this entirety, and it has sometimes even been said that philosophy in the last two hundred years has for the most part been wandering among the ruins of Hegel's system. Even so, attempts to definitively reconcile consciousness and the world have continued in a vast range of forms.

The University of Berlin attracted students from a wide range of backgrounds, particularly after Hegel's death in 1831. In 1841 his chair passed to Friedrich von Schelling (1775–1854), one of the great thinkers of German Romanticism, who completed the attempt to reconcile consciousness and the world, as well as faith and knowledge, within a common philosophical system. The students who travelled to Berlin to listen to Schelling's lectures—Søren Kierkegaard, Mikhail Bakunin, Karl Marx, and so forth—heard only idealism, and they all returned home determined to settle accounts with Hegel's system. From this was born an individualistic
existential philosophy (Kierkegaard), political anarchism (Bakunin), and dialectical materialism (Marx). These "left" Hegelians were drawn to the concrete but ended up outside the university, while the idealism of the "right" Hegelians lingered on for a while and came to dominate academia. The fact that the philosophical tradition from Kant to Hegel ultimately gave rise to so many differing ideas signifies that the philosophy they elaborated was pursuing something far more interesting and complex than mere idealism.

German idealism's attempts to bring all knowledge into a unified system soon collapsed because of both external and internal criticisms. An inability to coordinate and balance the "two sources of knowledge" meant that Hegel's philosophy was ultimately forced into narrowly idealist or materialist strands. But the growing difficulties encountered by these philosophical attempts to hold the entire field of scientific thinking together were also bound up with larger social developments. During the nineteenth century, Germany, which in practice had mostly comprised a shared Romanticist culture sustained by poets and philosophers, underwent an extraordinarily successful process of industrialization as well as a national unification, with the result that it emerged at the end of the century as by far Europe's strongest state.

The natural sciences were crucial to this dramatic societal development. In practice, the concrete tools for building a new society came not from literary studies or theology, but from the natural sciences and technology, as well as from Germany's successful integration of industrial development and scientific research. This development was governed by pragmatism and utility, and in the place of the Philosophical Germany that had collapsed and disintegrated into philosophical and political conflicts, a Germany of steel emerged, characterized by scientific naturalism and positivism, combined with a democratic deficit that was only gradually balanced out by an embryonic welfare state and political democracy. In turn, this practical orientation, with its focus on utility and nation-building, eventually came to dominate the academic world.

The nineteenth century was in many ways the century in which modern science proper first saw the light of day, and it was Germany that emerged victorious as the great scientific nation. It was also in Germany that the needs of industry were most successfully connected with the competences of academia, and it was at German universities that the different scientific disciplines began to coalesce under the same university umbrella, such that everything from theology and history to physics and engineering could coexist within the framework of a university whose traditional responsibilities for education and teaching were now extended to include using research to contribute new knowledge for the development of society. Germany was an academic pioneer and would probably still be
dominating the world's scientific culture—and would thereby probably have made German a lingua franca—had it not been for the violent self-destructiveness that this nation of Bildung showed itself capable of unleashing in two devastating world wars during the first half of the twentieth century. Anyone interested in science needs to ask what role—or roles—science played in this context, when civilization suddenly degenerated into barbarism, and humanism was replaced by militarism. To what extent can science cause and reinforce—or counter and resist—barbarism of this kind?
The synthesis collapses—positivism sets the scientific agenda

Epochs and historical periods rarely coincide neatly with dates. Nothing particularly noteworthy occurred in the years 1800 or 1900. Instead, following the terminology of Eric Hobsbawm, the 1800s are usually referred to as the "long" nineteenth century (beginning prematurely with the French Revolution in 1789 and extending into the following century, so that the world of the nineteenth century is imagined as coming to an end with the outbreak of the First World War in 1914 or the Russian Revolution in 1917), in the same way as the 1900s are usually referred to as the "short" twentieth century (beginning in 1914 or 1917 and ending with the fall of the Berlin Wall in 1989 or the collapse of communism in 1991). The nineteenth century seems in so many regards to have been a strategic century that we need to examine it more closely in order to understand the course of history in our own time. If what we have come to call the Scientific Revolution largely took place in the seventeenth century, then the Enlightenment of the eighteenth century was a pedagogical (and philosophical-political) project through which the new ideas were disseminated, before being realized and materialized in the industrial society of the nineteenth and twentieth centuries. It is impossible to imagine the social transformation of the nineteenth century without scientific thinking—and it was also during this historical period that modern science as we know it assumed a definite form and contours.

We have seen how philosophy during the nineteenth century—in the form of transcendental philosophy, metaphysics, German idealism, and Romanticism—sought to establish a synthesis of all knowledge, as well as how these philosophical attempts and systems finally collapsed and became obscured by social developments. Instead, it was positivism, with its critical approach to metaphysics and its utilitarian focus on social transformation, that took over and came to define the scientific project. Positivism is usually
characterized as a "realist" position within philosophy of science, focusing on the concrete and "positively" given: empirical data and experience. It is probably this empirical focus, and the contrast to scholastic knowledge (which regards knowledge of the world as synonymous with understanding God's will) and philosophical speculation (which involves understanding the world on the basis of a higher principle), together with its attempts to actively connect with the legacy of the Scientific Revolution, that has resulted in positivism playing such a powerfully defining role in modern science.

"The positive" in this context refers as much to what is real and certain as to what is beneficial and functional. But positivism can also be said to be "positivist" in the "optimistic" sense that the word positive has sometimes come to denote. Positivism was underpinned by a belief that gaining knowledge of the natural laws governing the world would also make it possible to improve the conditions of life and promote progress for people and society. The concept of the positive in fact gains its meaning from a long succession of distinctions in which the real is prioritized above the fictive and the beneficial above the harmful (the issue was to improve people's conditions in accordance with the idea of development). The certainty of the new science was given precedence over the uncertainty of the old metaphysics and its endless philosophical discussions, and the exact was made superior to the vague, as facts were to values—all in contrast to metaphysical speculation and theological idealism.

As noted in Chapter 2, the term positivism was coined by Henri de Saint-Simon, but it was Auguste Comte who gave it its meaning and wider currency, and it is he who is conventionally regarded as the founder of positivism as a "school" of philosophy of science (even if many other important advocates, such as Bentham, Mill, Spencer, Haeckel, Durkheim, and Wundt, also contributed to the definition of the concept). Over time, positivism has also evolved into a collective heading for a series of positions within philosophy of science that are united in their desire to avoid speculation. But positivism was also underpinned by a narrative that to a considerable degree has laid the foundation for how modern science understands itself. Ironically, despite the fact that positivism emerged as a critical reaction to Hegel's speculative system and philosophy of history, the thinking of Comte and the other positivists was characterized by an equally speculative tendency, in that their core narrative was also shaped by the idea that historical developments are governed by laws which tend in a certain direction, a telos that tightly binds the theory of knowledge to philosophy of history. Positivism thus shared a historical-philosophical
framework and the concept of development with the very Hegelianism that it so desperately wished to dismantle.

The science narrative of positivism, like many other contemporary accounts, is closely connected to a history of development defined by progress. In positivism's case, however, this narrative was about how human development had begun in a theological stage before moving to a metaphysical stage and then a positivist stage, the latter being a scientific stage that Comte regarded as "a final phase of rational positivity." Where the theological phase had been characterized by an effort to explain nature by means of anthropomorphic ideas about divine intervention within the framework of a society defined by faith in authority, the metaphysical stage instead involved an attempt to develop explanations by means of abstract ideas and forces, which was distinctly appealing in a society defined by anxiety and egoism. By the positivist stage, however, humanity had begun to explain the world scientifically, in the sense that it was developing explanations, in a spirit of materialism, based upon the world itself. Instead of divine powers and philosophical principles beyond the realm of experience, science would enable humanity to reveal the conformity to laws that can be known only by means of careful investigation of the data provided by concrete experience. The "positive" sciences—among which the exact sciences had come further than the more abstract—were thereby regarded as having appropriated the capacity for predicting and controlling both nature and society, thus making them, in turn, a precondition for the construction of a rational society. Comte here saw great opportunities for the new science focused on society that he wished to establish—sociology—not least by using statistics to explain social development by means of methods inspired by mathematics, astronomy, physics, chemistry, and biology. The word "statistics" itself originates in the fact that states were the first to begin collecting information about society.157

Comte's positivism can be regarded as a "realist" reaction to scholasticism and speculation. It was influenced by a general empiricist ideal without being strictly inductivist in any real sense. His advocacy of an epistemology based on the idea that only what allows itself to be observed via the senses can be called knowledge was reinforced by a kind of materialist ontology that represented both nature and human society as phenomena governed by regulatory laws. In the background can be discerned an ambition, inspired by the ideals of Newtonian physics, of discovering the laws that defined society like a "social physics." Instead of the kind of vain learning that mechanically accumulates facts without trying to derive them from others—no matter how numerous and accurate they
are—Comte regarded laws existing independently of ourselves in the external world as the principal objects of his new science:

That basis consists for us in the laws or Order of the phenomena by which Humanity is regulated. The subjection of human life to this order is incontestable; and as soon as the intellect has enabled us to comprehend it, it becomes possible for the feeling of love to exercise a controlling influence over our discordant tendencies.158
As we have already mentioned, Comte's ambition of developing sociology as a science of humanity, characterized by the same methodological features as astronomy, physics, and biology, exhibits a kind of parallel to many other projects of this era, from the economic laws of Smith to Kant's efforts to discover a "moral law" governing human freedom. Taking his point of departure in an empirical and objective investigation of social facts, Comte thus sought to discover the causal relations that determined human behavior, in order to use those causal relations as the basis for a technology of society—social engineering. Just as explanations of causal relations in mechanics could be used to predict and govern nature, Comte saw the potential for being able, at a societal level, to predict determinate relations, control processes, plan the future, and improve and develop society. In short, the new social science would in this way become a means of social planning in the service of the future. Considering this background, it might reasonably be asked whether positivism's original base and inspiration was not actually within the emergent social sciences rather than the natural sciences.

Positivists had very little to say about the humanities, by contrast. On the one hand, this is slightly odd, given that the idea of progress could be said to have been equally prompted by the humanities. On the other, it can be explained in terms of the difficulties of implementing positivistic attitudes within the kind of knowledge culture that has traditionally been a hallmark of the humanities and of the "law of freedom" that Kant's moral philosophy wished to explore.

In a book revealingly titled A System of Logic (1843), John Stuart Mill (1806–1873), another of positivism's founders and a wealthy friend of Comte's (Mill also partly financed Comte's work), sought to argue that the scientific investigation of human beings should adopt the methods of natural science in order to discover and explore law-governed interactions in both the past and the present. In other words, knowledge was here accorded strategic importance in planning society and predicting individual behavior. Mill argued that it was possible to identify causal connections based on how a series of events repeats itself and to use statistical probability calculations to draw inductive conclusions. The possibilities for making predictions in
this fashion have nonetheless turned out to be more complicated than Mill supposed, since the basis for making predictions in the social sciences and humanities is clearly limited. In politics, ethics, and economics, it can even be asked whether prediction is desirable. In the same way as there can be no science of the future, it would be problematic at the very least for a science to claim to have a comprehensive explanation for human behavior.
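The kind of procedure Mill had in mind—inferring a lawlike connection from the repetition of events, and turning the relative frequency of past occurrences into a probabilistic prediction—can be illustrated with a minimal sketch. The example is purely illustrative and not drawn from Mill's own text; the data in it are invented.

```python
# Illustrative only: a toy version of drawing an inductive conclusion
# from repeated events via relative frequency. The data are invented.

def relative_frequency(events: list[bool]) -> float:
    """Estimate the probability of an event as its share of past occurrences."""
    return sum(events) / len(events)

# Suppose some social regularity was observed on 90 of 100 past occasions.
past_occasions = [True] * 90 + [False] * 10
estimate = relative_frequency(past_occasions)
print(f"Inductive estimate that it recurs: {estimate:.2f}")  # 0.90

# The prediction "it will happen next time" inherits induction's weakness:
# nothing in the calculation guarantees that the past pattern persists.
```

The sketch also makes the difficulty visible: the calculation is mechanically sound, but its predictive force rests entirely on the inductive assumption that the future will resemble the past—precisely what is in question in the social sciences and humanities.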
Scientistic dreams

It was during the nineteenth century, and thanks to the dominant narrative of positivism, that modern science fully emerged—and it was during this very period that contemporary ideals began to be systematically projected back onto the Scientific Revolution of the sixteenth and seventeenth centuries. Yet we must fundamentally call into question the positivist narrative of science and the image of a relentless battle between science and religion upon which it is premised. In fact, religion was a non-question during the sixteenth and seventeenth centuries. What, then, were really the stakes in the prehistory of modern science, when researchers investigating nature developed their ground-breaking theories in the sixteenth and seventeenth centuries?

In this book I am making the argument that we need to shift our science narrative and find another focus for the process of historical development that led to modern science, by looking instead at the logic of discovery—and discovery itself. In light of the recurrent questions about whether we discover something "out there" or whether science is about inventing something that can be said to be located "in here," the art of discovering/inventing the world will be crucial both for the concept of knowledge and for our understanding of the stakes in the Scientific Revolution—as well as of the way it is used as an object for projected ideals, as was the case when the scientific project took shape in the nineteenth and twentieth centuries. In order to manage the sheer quantity of anachronisms and to be able to recreate the logic of development that formed the basis for what we today associate with science, we must therefore restore the scientific project (or, more precisely, its prehistory) to the concrete historical context and social reality in which it originated, critically examine and deconstruct established narratives of science, and rediscover/reinvent the narrative underpinnings of the scientific project.

However, there is no escaping the question of the unity of science. Although Comte was undeniably doubtful about the possibility of establishing a general scientific method that would be acceptable to every
scientific field, his thinking was characterized by a kind of unitary ideal that regarded science as a coherent whole. In practice, the fact that positivism has taken the exact natural sciences as its role model has had the result that all other academic disciplines have had to work in the shadow of an understanding of science in which the natural sciences serve as a universal model. The complications and problems, challenges and opportunities, which this has caused have served as a driving force in the development of philosophy of science during the last two centuries. Not only has it resulted in the natural sciences having only a shallow self-understanding, it has also helped to influence the human sciences, and eventually also the social sciences, whose operating conditions have been determined to a very great extent by the threat from the dominant natural sciences. From this point of departure in naturalism and "zoologism"—how are we to do justice to humanity, human culture, and the social forms of human life? These questions have led to a situation in which scientific thinking has been repeatedly confronted by the difficulties associated with linking together and balancing the two sources of knowledge—which have a continual tendency to give rise to two distinct scientific projects in the form of a "hard" versus a "soft" Enlightenment.159

This ambivalence is also a hallmark of the social sciences. Positivism was developed within a nineteenth-century society—and subsequently within the framework of a narrative that described both the emergence and evolution of the social sciences. If positivism can be described as a systematic and secularized scientific project aiming to reveal general regulatory laws on the basis of empirical knowledge of reality, then it is no coincidence that, in the beginning, alternative terms, like Kameralwissenschaften and Staatswissenschaften, were used as collective headings for these social sciences, because they were created as a response to the modern state's need for exact knowledge by which to govern, plan, and take decisions. In an age when legitimate sovereigns were being replaced by legitimate nations and peoples, science focused on the laws that were supposed to govern social development. The concept of science held by the burgeoning social sciences rested on two premises—symmetry between the future and the past, and dualism between humanity and nature, consciousness and matter—and was defined by an overarching quest for general laws of nature. Immanuel Wallerstein and his colleagues, from whom I have drawn inspiration, have mapped out the ferment of different categories that arose in the wake of the emergence of a disciplinary structure for the social sciences in the 1850s—"In the course of the nineteenth century, the various disciplines spread out like a fan"160—before finally being consolidated into discrete scientific branches within a shared domain of scientific thinking as late as 1945.
However, these disciplines were marked from the beginning by profound internal tensions between, on the one hand, a world governed by deterministic laws and, on the other, the human capacity for invention. For the most part, however, the result was that Newtonian mechanics, following the technocratic ideals of the new social physics, triumphed over philosophical speculation. Empirical evidence trumped speculation. One of the two "cultures"—ironically, the one that ought perhaps to be termed "nature"—thereby defeated the other.
The Humboldt brothers and the two scientific projects

It is worth repeating that what we refer to as the Scientific Revolution was principally a revolution of the natural sciences—and because of this historical fact the philosophical discourse on science has been dominated by natural science. In the latter half of the twentieth century, this was further reinforced by the fact that English—or, more precisely, bad English—was established as a lingua franca, resulting in disciplinary thinking at large being defined as "science" in the sense of natural science.161

After the collapse of the idealistic attempts of Romanticism to bind scientific thinking and the university together in a common spirit by means of an overarching philosophical model, the field was open for other, more one-sided figures to step in. To this should be added the far-reaching consequences of increased specialization, which made steady inroads into scientific thinking and resulted in the development of separate theoretical schools and a spectrum of methods that no philosophy could ultimately accommodate within the framework of a unified whole. The strength of positivism in this situation was that it advocated a programme of unified science modelled upon the natural sciences, and that it could also offer a programme centered around the utility and contribution of scientific thinking to the development of society. And this situation has for the most part continued, with the deeply ironic outcome that the university's traditional knowledge culture (artes) has been increasingly overshadowed, in ways that have proven difficult to reverse, by the dominance of the natural sciences. The operating conditions for the humanities and the social sciences have thus come to be defined by a permanent feeling of having been eclipsed and of having to speak from a position of inferiority. This is rather strange, given that, from a historical perspective, the humanities have always been both creative and innovative with regard to things like nation-building—a role that had hitherto been transformative rather than conservative.162
But before knowledge "lost its balance," so to speak, by developing in an idealist direction, Kant's ideas were taken further in the creative milieu that arose among the talented thinkers who flocked to Jena in the late eighteenth century, an intellectual environment that was to be profoundly important for the future. Within the framework of "Jena Romanticism," questions were asked about how knowledge is even possible in relation to both an "outer" and an "inner" world. Where Enlightenment thinkers had tended to view the interior and the exterior world as two separate domains, a number of thinkers—including Friedrich von Schelling, the Schlegel brothers, Johann Wolfgang von Goethe, and Wilhelm von Humboldt and his brother Alexander—sought, with inspiration from Kant, to restore this lost unity, often with contributions from art, poetry, and the power of emotions. Half a century later, a comparable milieu emerged, in a kind of parallel to Jena, in the American community of Concord, where philosophers and authors such as Ralph Waldo Emerson, Henry David Thoreau, Nathaniel Hawthorne, Margaret Fuller, Amos Bronson Alcott, and Louisa May Alcott developed philosophical reflections that were deeply influenced by, and strikingly similar to, the intellectual worlds of Romanticism and German idealism.163

In the light of Jena Romanticism and its counterpart in Concord, a movement that in our own era has at times been represented as a novelty—namely, the use of abduction as a way of overcoming the opposition between induction (empiricism) and deduction (rationalism)—can perhaps be thought of as an attempt to reinvent the wheel. It was evident, in both Jena and Concord, that empirical data and facts are absolutely necessary—if not sufficient—components for the development of robust knowledge. Empirical facts are never entirely "pure." We need theories, models, and metaphors, in addition to observation of empirical reality, to be able to create meaning and "see something as something." And yet, as already noted, it has proven difficult to manage this balancing act of knowledge. This has also made the entire modern discussion of epistemology into a complicated history, with anti-empiricism frequently being cast as naïve idealism, in the same way as critiques of narrowly theoretical perspectives have often developed into equally naïve forms of empiricism.

In a sense, the origins of this entire problematic go back to the lives and academic adventures of the Humboldt brothers. Together, Alexander and Wilhelm von Humboldt represent the two great projects of scientific thinking—natural science and the humanities, respectively—while also personifying the two great journeys of formation (Bildung): for Alexander, via nature and geography; and for Wilhelm, via culture and literature. After drawing inspiration from a circle of almost impossibly talented thinkers—including
Goethe, Schiller, Fichte, Schelling, and Schleiermacher—who for various reasons gathered in Jena in the years around 1800, the brothers chose separate professional paths. Wilhelm pursued a career in the civil service that culminated in his drafting, as head of the section for religion and education in the Prussian interior ministry, of the statutes of Berlin University in 1810. Alexander chose a more traditional academic path and devoted his life to the service of naturalistic research.

The Humboldt brothers are in many regards personifications of the two scientific projects that to this day can be identified in both the university and disciplinary thinking more broadly: the study of culture (Kulturwissenschaft) and the study of nature (Naturwissenschaft). Despite being very different personalities—one was happiest when left alone with his books, the other when walking in the forest—both were driven by a shared ambition to bring the two knowledge cultures together: "Humboldt 'read' plants as others did books."164 But both Alexander's study of nature and Wilhelm's theories of language were characterized by an ambition of putting their respective subjects into a broader context, which meant that not only nature but language too was treated as a living organism, a whole whose various parts were all interwoven. And the drive for learning was as closely associated with nature as with the human capacity for language: "Just as nature was so much more than the accumulation of plants, rocks and animals, so language was more than just words, grammar and sounds."165

In stark contrast to the focus on mathematical predictability and laws of nature that had been a hallmark of the Scientific Revolution's point of departure in an understanding of nature as a complex apparatus operating analogously to machines, clocks, and automatic mechanisms (in the work of Descartes, Newton, Leibniz, and others), Alexander von Humboldt thought in terms not of classificatory categories but of ecosystems. Determined to learn how the world fitted together, he saw connections everywhere in the fragile world of living things. In similar fashion, Wilhelm von Humboldt imagined language as being not a tool for expressing ideas but something that shaped ideas: different languages should be considered as reflections of different worldviews. In the process, the brothers replaced mechanical constructions with organic metaphors, turning their attention to the web that binds action, thought, and speech together into an organic whole.

In the latter part of the nineteenth century, the lingering conceptual world of Romanticism grew weaker before being finally swept away by a differentiation and specialization that eventually led to a kind of scientific tunnel vision. An empirically focused, naturalistic interest in knowledge that created the conditions for the industrialization and
modernization of society became increasingly dominant, and the attempt to incorporate the different cultures of knowledge into a single form of knowledge was essentially abandoned. In contrast to Jena Romanticism, which had been a source of intellectual inspiration for both Alexander and Wilhelm, the dominant strands of scientific thinking in the second half of the nineteenth century were redirected towards a more narrowly empirical mode of research and a view of nature as little more than a resource for humanity's needs. The ties and connections between the "two cultures" were thereby lost. As Andrea Wulf observes at the end of her biography of "the lost hero of science," Alexander von Humboldt:

Alexander von Humboldt has been largely forgotten in the English-speaking world. He was one of the last polymaths and died at a time when scientific disciplines were hardening into tightly fenced and more specialized fields. Consequently, his more holistic approach—a scientific method that included art, history, poetry and politics alongside hard data—has fallen out of favor. By the beginning of the twentieth century, there was little room for a man whose knowledge had bridged a vast range of subjects. As scientists crawled into their narrow areas of expertise, dividing and further subdividing, they lost Humboldt's interdisciplinary methods and his concept of nature as a global force.166
At the threshold of the twentieth century, the two scientific projects of the West had been brought under the shared umbrella of the university. But they remained deeply divided. In both academia and society at large, contemporaries could see the emergence of two distinct cultures of knowledge that exhibited a considerable degree of mutual estrangement and, not infrequently, manifested a deep-rooted antipathy towards each other.
PART III

CONFLICTS OF INTERPRETATION: THE EVERYDAY PRACTICE OF SCIENCE
CHAPTER 8

POSITIVISM—AND ITS CRITICS—DEFINE CONTEMPORARY SCIENCE
"A spectre is haunting the university—the spectre of positivism. All the powers of the old university have entered into a holy alliance to exorcise this spectre …" Such is how we might paraphrase a classic text by Karl Marx and Friedrich Engels in order to capture something of the climate of threat—mixed with unease—surrounding a powerful and apparently insuperable force that has been set in motion within philosophy of science during the last centuries, and that has left its mark in a very particular way on how positivism is viewed in large parts of the academic world.167

Especially in the last fifty years, positivism has functioned as a kind of spectre that continually haunts most debates within philosophy of science. For many, it has come to be the Other of philosophy of science, and in this guise it has also come to represent an ominous driving force discernible behind the development of science. Rarely if ever, however, has any attempt been made to clarify what positivism's actual position is or who these positivists are. When science is discussed in public forums, a vaguely positivistic viewpoint has typically assumed a central role, with commentators unproblematically referring to science in terms of proven knowledge, theories, or conclusions that are supposed to have been derived more or less directly from experience by means of observations and experiments—in brief, science has been regarded as a supplier of cast-iron truths. While positivism in the humanities and the social sciences has also raised expectations about being able to deliver a ready-made scientificity, there positivism has remained a pejorative term with exclusively negative connotations. Meanwhile, elsewhere within the same university, positivism has been so unproblematic a term, particularly for natural scientists, as to not even need mentioning. Positivism has here functioned as an uncontroversial way of defining science, and in practice this is probably what large swathes of academia have taken to be how science ideally ought to be conducted.
The cumulative effect of these conflicting understandings among humanists and social scientists on the one hand, and natural scientists on the other—the “two cultures”—has been that the positivistic position has rarely been articulated and presented with any clarity and precision. As a result, a peculiar situation has arisen in which the dominant scientific narrative almost entirely lacks representatives. Clarification of what this view of science involves has also failed to materialize. In short, positivism has remained a spectre.
Positivism as the dominant science narrative

Science isn't what it used to be—and probably never has been. A recurrent theme in this book about philosophy of science is the necessity of thinking historically if we want to understand what science is. Broadening our historical perspective can also open our eyes to the extent to which our modern view of science has been largely created and shaped by positivism and its grandiose dreams of Science. Yet historical contextualizations of this kind also make it possible to recognize the constant change that has characterized scientific work across the centuries—and thus that scientific thinking is a project.

Even positivism has a history, which involves change. Positivism was clearly a controversial project right from the start, and already at an early stage positivists met with objections from critics located both "within" and "without." During the last two centuries, positivism in its various incarnations has therefore repeatedly been scuttled by its critics, only to come back to life again and again in a new form. In this way the representatives of the "old" nineteenth-century positivism (Comte, Mill, Spencer, Durkheim, et al.) were succeeded in the twentieth century by the "new" positivism's logic-inspired renewal of the concept of science (via thinkers such as Neurath, Mach, Schlick, Russell, Moore, and Wittgenstein). Today we might also refer to a third variant of positivism—a "globalized" positivism, characterized by a view of science that has been deeply influenced by the conditions and possibilities offered by digital information systems.168

The view of science that we have inherited from the nineteenth century's "old" positivism and the twentieth century's "new" positivism has in any case remained a recurrent point of departure for the many challenges that science continues to face in the current post-positivist era. Furthermore, as a scientific narrative, positivism has always provided the underpinning to lingering great expectations about what science ought to be—a kind of phantasm that continues to haunt scientific thinking. Alan F. Chalmers argues that such expectations from science, which he regards as highly
dubious, involve the idea that "scientific knowledge is based on the facts established by observation and experiment," and he contends that they are often associated with the view "that scientific knowledge should in some way be derived from the facts arrived at by observation."169

But if positivism, and especially the neo-positivism that emerged during the twentieth century, can be said to have played a crucial role in our understanding of what generally characterizes science, we must also be careful to actively include the many corrections of this understanding of science that have been made by both "internal" and "external" critics. Even critiques of positivism—including scattershot attacks on positivism—must therefore be included in this particular definition, which dominates scientific thinking today. If we include this wider spectrum of positions, it may also be easier to accept my argument that positivism—together with the critical discussion of positivism—has played a defining role in establishing the grammar that informs scientific philosophical considerations in the present moment.

This kind of philosophical perspective on science also makes it both possible and necessary to actualize as well as problematize the fundamental assumptions of positivism. Even the shifting positions which positivism has come to occupy within the landscape of scientific thinking should be taken into consideration. Ironically, during its brief history positivism has gone from being a scientific project almost entirely embedded in a (radical) political project to representing a view of scientific thinking that resists any association with politics. This history also includes the dramatic shift that occurred when positivists went from being marginalized critical underdogs to a position of power in the scientific community from which they could make pronouncements with tremendous assurance and consign to oblivion individuals and positions which they disliked. Furthermore, an even more important shift involves the way in which positivism in the 1930s evolved from narrowly representing a theoretical dogma of meaning characterized by a desire for verification, to a modified position whose claims were limited to pragmatically recommending the verification principle as an aim only. In many respects we are also indebted to positivism with regard to the emergence of new disciplinary fields such as history of science and philosophy of science. At the same time, it should be remembered that the dominant science narrative of positivism is premised on a history which is simultaneously suppressed when all variability disappears in its scientistic haze of immutability.
A "new" positivism: science requires verification

As we saw in the last chapter, classical positivism emerged during the nineteenth century by modelling itself upon the natural sciences. But since this love for the natural sciences was not always returned, positivism instead made its home within a new disciplinary area that positivists had themselves helped to establish and that would eventually become a university faculty of its own: the social sciences. Scientism, that is to say the view that the methodologies of the (natural) sciences are the best suited for understanding and explaining all phenomena, was a hallmark of nineteenth-century positivism. This kind of thinking was relatively vague in form and acquired most of its identity and profile thanks to its dominant narrative of science. In the early twentieth century, however, this "old" positivism was "upgraded" as the original "positive," empirical, and mechanical template was supplemented by the logic and conceptual analysis of modern language philosophy. Thus there emerged what has variously been called logical empiricism, logical positivism, and neopositivism—three labels that signaled, respectively, its connection to empiricism and sensory experience, the importance of taking language and logic seriously, and the fact that this was indeed about making a new start, that is to say, a "new" positivism.

Philosophy and science do not develop in a vacuum. It is therefore important to remember how shifting social conditions fundamentally changed the basis for the development of philosophy of science during the last century. The new positivism originated in two epicenters: on the one hand, Oxford and Cambridge in Britain; and, on the other, Vienna in Austria. Recalling how two world wars, together with the experience of Nazism, fascism, and communism, rewrote the geopolitical map of Europe in the first half of the twentieth century will also help us to better understand why a scientific project with one foot in Austria and the other in Britain did not remain unscathed during the bloodiest half-century in the history of humankind.

Modern positivism originally enjoyed its strongest support on the Continent, in a Vienna that had attracted a stellar grouping of philosophers, scientists, politicians, economists, artists, and musicians, many of whom would leave their mark on the century. But the advance of fascism, and its almost total domination of the Continent during the 1930s and the first half of the 1940s, combined with the fact that a striking number of the leading positivists were socialists and/or Jews, who were forced to flee Europe westwards, resulted in positivism being effectively eradicated in mainland Europe. Instead, positivists gradually established a formidable base in the Anglo-Saxon world, principally the United States. It is deeply
ironic that the neo-positivism that was initially almost entirely “Continental” (which sounds rather strange given how we have since come to use the term “Continental” philosophy to denote a tradition opposed to “analytical philosophy”; more on this in Chapter 9), ultimately wound up on the side of the victors after the war. In the process, the neopositivists acquired legitimacy and moral gravitas: neo-positivism’s confrontation with fascism ensured it good standing and political support during the post-war reconstruction of Western societies, while also giving it highly favorable material opportunities for using that position to establish dominance in those parts of the world shaped by the post-war intellectual climate. An array of prominent philosophers formed part of the new understanding of scientific thinking that emerged in the 1920s before going on to flourish in the 1930s and later, and whose articulation was inspired by the “old” positivism’s focus on empiricism and regulatory laws combined with formal logic and conceptual analysis. Their number included the world’s first professor in philosophy of science, Ernst Mach (1838–1916), and his successors Moritz Schlick (1882–1936) and Otto Neurath (1882–1945), the latter of whom became the principal organizer of the so-called Vienna Circle. The scientific imperative of the new positivism involved the demand for verifiability, a principle of testability which held that all statements which cannot be unequivocally confirmed or refuted with the help of the senses (i.e. empirically) are meaningless.171 Since sensory experience was regarded as the actual source of knowledge, observable facts were regarded as the basis upon which all knowledge was founded. Taking physics as their model, neopositivists held that claims which cannot be observed as facts about reality should be treated not only as scientifically invalid but as meaningless. At best, they could be seen as having artistic or purely emotional value. The neopositivists combined this strong focus on empirical verification with a strong interest in experiment. In so doing, they saw themselves as discovering laws of nature whose ability to explain and predict the world would allow the accumulation of scientific thinking and the steady refinement of knowledge. The central importance of verification for the new logical positivism makes it fair to say that it was characterized by a markedly inductive ideal of scientific thinking. Science begins with unprejudiced observation, and its ambition of developing theories about determinate relations through systematic generalization from those observations made it, in a nutshell, both logical and empirical. By extension, the neopositivist viewpoint implied that there was only room in academia for two kinds of knowledge: that which pertained to those working empirically, such as natural scientists,
social scientists, and historians; and that which pertained to those working a priori, such as mathematicians, logicians, and philosophers. Ludwig Wittgenstein (1889–1951) cut a colorful figure in this neopositivist world. In his personal journey from Austria (where he grew up in an extremely wealthy family) to Britain (where he began his studies in philosophy in earnest), he personified both the link and the movement between logical positivism’s two centres: Vienna and Cambridge. Subsequently a legendary figure, Wittgenstein had almost no philosophical training when he suddenly turned up in Cambridge, where he sought out the world-famous professors of philosophy Bertrand Russell and G.E. Moore and delivered an enormously self-assured critique of their work before quickly presenting his own solution to the greatest problems in philosophy. Moore and Russell, who were leading neopositivists, were nevertheless overwhelmed and soon came to regard Wittgenstein’s “early” philosophy as one of the most important articulations of their position. In accordance with this view of scientific thinking as constituted upon the basis of empirically verifiable statements, these positivists interpreted the final paragraph of Wittgenstein’s Tractatus Logico-Philosophicus—“Whereof one cannot speak, thereof one must be silent”172—as saying that whatever falls outside the limits of language, and therefore cannot be tested or verified, should be considered meaningless and thus non-existent. However, in his “later” philosophy Wittgenstein himself turned away from the strictly empirical focus of neo-positivism, in which language was primarily regarded as a reflection of empirical reality, in order to develop a more pragmatic philosophy of how language works as action, akin to a toolbox. While many people saw this as a kind of break with positivism, the seeds of the disagreement were there from the beginning: the “whereof one cannot speak” that Russell and Moore had automatically assumed should be regarded as non-existent had always been the most important thing for Wittgenstein, whose real interests lay in the unspoken and the mystical.173
The unity of science—necessary, but only as an ideal

Modern scientific thinking has been shaped and defined by positivism—yet always accompanied by the many critiques of positivism. Within the framework of the field of cognitive positions opened up by this formulation, we find the variant that we have come to call science. Although the rejuvenating effect of neo-positivism has been of fundamental importance for our concept of science, neo-positivism can only be taken seriously if it
is corrected and understood alongside these many critiques and its own self-corrections. In a manifesto bearing the ostentatiously manifesto-like title “The Scientific Conception of the World: The Vienna Circle” (1929), logical positivism presented itself as a decisive philosophical and cultural turning-point in history. In the spirit of unified scientific thinking, it announced that all disciplines should accept that only statements capable of being empirically verified can be said to be meaningful. This requirement of scientificity extended not only to hands-on and empirical disciplines but also to those of the medieval university, the humanities and theology, which were expected to meet the requirement of empirical verifiability. It was Neurath who devised the core concept of unified science (Einheitswissenschaft), and this insistence upon the ideal of a unified science has remained a core component of positivism’s subsequent development. Neo-positivism argued that only empirically verifiable statements can be considered scientific statements. This markedly anti-theoretical starting point was combined with logical stringency: “All science consists in establishing correlations between observable data, and all scientific explanations consist in deductions from the established regularities.”174 Yet the requirement of verification encountered major problems right from neo-positivism’s infancy when logical positivism tried to solve the challenges of physics by addressing the issue of whether atoms really exist. A critical weakness of neo-positivism’s hostility towards theory was that almost no leading physicists (except for the neopositivist Ernst Mach) shared its position on empirical verification (because physics, which is highly theory-driven, in fact emphasizes the importance of theoretical awareness). For this reason, physicists found little to be inspired by in the positivist approach. The failure to elicit physicists’ enthusiasm for the requirement of verification was, for positivists, rather like an unrequited love.175 Undeterred, neopositivists continued to insist upon the fundamental requirement of empirical verification, a position whose profoundly contradictory and absurd implications became visible when Schlick argued that the word atom ought to be removed from science because atoms cannot be observed. With hindsight, neopositivists’ stubborn insistence upon verifiability may seem rather peculiar. It is a curious historical circumstance that the period in which neo-positivism grew most rapidly coincided with the emergence of entirely new and different theories in physics, such as quantum physics and relativity theory. It was also around this time that Karl
Popper and Gaston Bachelard published devastating refutations of the very premises of neo-positivism. Despite this, positivism continued its triumphal progress. This may have been partly due to the logic of development in contemporary society, but it probably also owed something to the corrections that neo-positivists themselves made to their founding premise. The fact that physics, which was regarded as the primary model for scientific thinking, did not live up to the requirement of verification posed a major challenge for neo-positivism. However, even if neopositivists were forced to relinquish their absolute demand for verification and to accept that atoms cannot be directly observed, referring instead to sensations that are to some extent governed by laws, they clung to the ambition itself and the quest for verification as something necessary for cleansing science of speculation and metaphysical elements. Since positivism was forced to address its own complications, particularly the necessity of modifying its criterion of verification, it is in a sense possible to say that positivism’s fundamental criterion contained, from the very beginning, a flaw or potential “internal” critique. In order to understand the significance of positivism for modern scientific thinking, we therefore need to be careful to also include the challenges and complications which had emerged from its initial assumptions at an early stage. Because disagreement arose early on about the nature of sensory experience, there was, as it were, a slippage, right from positivism’s starting point in the principle of verification, which meant that it quickly found itself forced to cling to the position that it should ultimately be possible to observe every theory empirically and reconcile it with experience. The fact is, however, that the qualifier “ultimately” destabilizes the static character of positivism’s originally purely binary criteria, opening up a more dynamic perspective on scientific claims to truth. As a result of this problematization, verification increasingly came to be understood in terms of suitable/unsuitable rather than true/false. It might also be said that this insistence upon verification had to be separated off from definitive truth claims and directed instead towards the quest for truth exemplified by later representatives of this tradition, notably W.V. Quine.176 This has prompted Ingvar Johansson’s pithy observation: “If we want to call Hume the first real positivist, then I think Quine will turn out to be the last great positivist.”177 The ideal of unified science is one of positivism’s most fundamental tenets, and it highlights the contrast between positivism and efforts to establish separate epistemological and ontological territories existing in parallel. At the same time, as with verification, it is difficult to imagine how it might be possible, or even desirable, to implement or realize this ideal of unified science. Instead, this claim, too, must be interpreted in terms of a
quest for the unity of human reason. Yet, rather than taking our starting point in efforts to establish a clear line of demarcation and focus on the limits of scientific thinking, so as to eliminate multiplicity or, in the name of unified science, exclude particular knowledge traditions from the domain of scientific thinking, I believe that the ideal of unified science might serve as a limit concept, which regulates the quest for truth that should be considered characteristic of all scientific thinking. The claim made by unified science—that there exists only one world and one truth about that world, and that everything can be fused into a single account within the framework of a universal ambition—thus ought to remain just that: an ideal. Viewed as a limit concept, the ideal of a unified science has an important contribution to make to a philosophy of science facing challenges in an age when the scientific field risks being split into an infinite number of disciplines and thus developing fundamentally divergent scientific cultures. In this situation, the quest for unity in scientific thinking can serve as a safeguard against the “cognitive ghettoization” and “disciplinary balkanization” that occur when people try to establish their own scientific premises by isolating themselves from their surroundings. The strength of the ideal of unified science is that it does not accept parallel scientificities or epistemological isolationism, positions that deviate from the universal character of all scientific work and do not sufficiently affirm the profoundly public character of scientific thinking, its openness to all forms of critical scrutiny and evaluation. It is a matter of fact that there is only one world and, to some extent, only one truth about that world. At the same time, the ambition of unified science to bring everything together into a single account must be abandoned, and it must be resisted whenever the quest for scientific unity evolves into attempts to realize a program of unified science. Realizing such a program would not only impoverish the knowledge culture of scientific thinking; it would also undermine the nature of scientific thinking as a project. It is undeniably a result of positivism’s dominance that the natural sciences, together with the social sciences and other disciplines that have adopted this ideal of scientific thinking, have become insufficiently self-critical. As Skjervheim rightly observes: “The positivism of the social sciences easily forgets that the critique and the demands that it directs towards others also bear directly on itself.”178 Indeed, positivism’s position of power arguably owes something to the fact that auto-critique has been a neglected activity in the positivist tradition. How, then, should we cope with the enormous diversity within scientific thinking? How should we understand day-to-day science if the various
sciences are unable either to live up to the ideal of a unified science or to survive as parallel systems? At this point I would like to make the case once again for a dialectical approach, one that recognizes a kind of “discontinual continuity” between the different sciences, while also insisting that these must ultimately be prepared for a confrontation with each other within the framework of a shared striving for truth.
Post-positivistic corrections: falsifiability and paradigms

The neopositivist worldview was strongly grounded in empiricism, and its original imperative of the necessity of verification presupposed that theories could be derived more or less directly from the data of experience by means of observations and experiment. But this inductive view of scientific thinking quickly ran into the so-called problem of induction, because even if many observations confirm a relationship, this does not guarantee that all observations in the future will also have the same result—and even if one accepts that scientific knowledge derives from observational statements by means of induction, induction itself cannot be used to justify induction. Inductionism is insufficient. The neopositivist view of perception also ignored the complexity that is apparent in the fact that two observers of the same object do not necessarily have the same sensory impression. Our perception of the world is never pure and innocent. Developing knowledge on the basis of observations requires theories. But this means that theories also precede our observations. In any case, observations and theories are so closely interwoven that it is simply not possible to sustain the kinds of watertight distinctions between direct observation and theory in the way that neo-positivism’s inductive view of knowledge presupposed.179 This problem offers a basis for understanding the following two “internal” critiques of positivism. These correctives to the verification thesis have been profoundly important for the emergence, development, and self-understanding of modern science. The two “internal” critiques that I will be considering are the critical rationalism of Karl Popper and Thomas Kuhn’s ideas about scientific development as defined by paradigm shifts. From the start, it is important to be clear that these two correctives should not be understood in the first instance as expressions of an uncompromising resistance to positivism’s understanding of science and modern scientific thinking itself; rather, they are correctives to positivism inspired by a scientism that proceeds from, and to some extent remains within the limits of, a more or less modified version of positivism. Yet Popper’s and Kuhn’s contributions also involve different views of how
scientific thinking develops: where neo-positivism proceeded from the assumption that knowledge grows cumulatively, Popper imagined scientific thinking as developing in stages, and Kuhn, who focused on scientific revolutions, saw an evolution marked by sudden leaps. In order to understand how science works in reality, we need to connect the theoretical claims of philosophy of science to concrete scientific practices. A common way of making such connections is by means of exemplary narratives. The literature on the philosophy of science exhibits a set of such narratives that eventually form a canon of accounts with exemplary significance for our understanding of what scientific thinking is, how scientific work is carried out, and in what way scientific thinking develops. It is easy to be drawn into these narratives and become uncritically absorbed by their intrigues, to the point that one loses sight of the fact that such parables of scientific excellence are never innocent. In fact, the various narratives that are usually invoked are closely associated with specific positions and theoretical movements within philosophy of science. In what follows I will restrict myself to two dominant narratives (one of which we have already encountered) that are typically used in order to advocate the importance of falsification and paradigms, respectively, for scientific thinking and its development.
Semmelweis and the hypothetico-deductive method

The use of different narratives reinforces our impression of the importance of proceeding on the basis of practices when engaging with philosophy of science. One of the most frequently cited narratives in philosophy of science is the story of the Hungarian doctor Ignaz Philipp Semmelweis (1818–1865). As head of the delivery ward in Vienna’s main public hospital in the late 1840s, he began to investigate why his ward had a maternal mortality rate of over 13 percent for puerperal fever while a comparable ward in the same hospital had a mortality rate of only 2 percent. Semmelweis was prompted to address this problem after the death of a colleague who had cut himself on the finger while conducting an autopsy. Because the colleague had exhibited a similar pathology to the women who died of puerperal fever, Semmelweis suspected that there might be a connection between puerperal fever and contact with corpses. When he tested this hypothesis in a systematic study, he was able to confirm that he and his students had in fact been carrying infectious particles on their hands when they went from the autopsy room to the delivery ward. It turned out that the incidence of puerperal fever could be reduced to 2.38 percent simply by doctors’ washing their hands thoroughly in a solution of chlorinated lime. The following year,
puerperal fever was effectively eradicated by requiring that all instruments be cleaned before being brought into contact with patients on the delivery ward. Semmelweis’ dismissal the following year has sometimes been viewed with outrage, but it is perhaps not terribly strange. The established scientific thinking of his day was firmly opposed to Semmelweis’ discoveries, preferring instead to work with explanatory models that located the source of disease in an imbalance of the four humors in the body. It is therefore unsurprising that Semmelweis was not recognized for his discovery until after his death. The reason that his discovery met with such weak support among contemporaries was quite simply that the world at this time lacked a reasonable explanation for his discovery. As did Semmelweis himself. Bacterial theory, which would have worked as a suitable explanatory model, was not discovered/invented until much later. Semmelweis’ own explanation, that it was a matter of a kind of “cadaverous particle,” was not merely wrong, it compromised him to such a degree that his colleagues came to suspect him of being superstitious. Only someone with an unbendingly anachronistic perspective could therefore be upset at the lack of enthusiasm and appreciation for Semmelweis’ achievements among his contemporaries. Once again, we are confronted by the complex problematics of discoveries. The selection of narratives in philosophy of science is never innocent. It is therefore no coincidence that the story of Semmelweis is so often used by advocates of the hypothetico-deductive method, since it fits hand in glove with their own philosophical position. According to this model, the research process follows a linear logic. Here, we begin with a scientific problem, something that challenges us, something we do not know, something we have reason to wonder about, something that is not quite right. The next step is to devise a hypothesis, on the basis of a qualified guess, about how the problem might be solved. What separates real science from pseudoscience is that scientific problems and hypotheses must be formulated in such a way that they can be answered via empirical studies and that their conclusions can be tested. Falsifiability is therefore the criterion of demarcation that sets scientific theories apart from theories in general. The next step involves a deduction—the use of experiments and observations to test and check the hypothesis’ validity. Finally, we arrive at corroboration, through which we either receive confirmation that the hypothesis is correct or find reasons for rejecting it. The more tests that the hypothesis passes, the more the hypothesis can be said to have been corroborated, although no ultimate proof or verification is possible because it is impossible to be completely sure that a hypothesis is true. There are always additional tests that might be carried out.
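To make this linear logic concrete, the Semmelweis case can be recast as a small computational sketch. The following Python fragment is illustrative only: the text reports the mortality rates (roughly 13 percent before handwashing, 2.38 percent after) but no raw counts, so the ward sizes below are invented, and the two-proportion z-test is a modern convenience rather than anything Semmelweis himself used.

    import math

    def two_proportion_z(deaths_a, total_a, deaths_b, total_b):
        """Test whether mortality rate A is significantly higher than rate B."""
        p_a, p_b = deaths_a / total_a, deaths_b / total_b
        pooled = (deaths_a + deaths_b) / (total_a + total_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
        z = (p_a - p_b) / se
        p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided, standard normal
        return z, p_value

    # Hypothesis: contact with cadaveric material raises mortality, so
    # handwashing in chlorinated lime should lower it. Prediction (deduction):
    # the post-intervention rate is lower. Test: compare the two rates.
    # The counts below are invented to match the rates cited in the text.
    deaths_before, births_before = 130, 1000   # ~13 percent
    deaths_after, births_after = 24, 1008      # ~2.38 percent

    z, p = two_proportion_z(deaths_before, births_before,
                            deaths_after, births_after)
    print(f"z = {z:.1f}, one-sided p = {p:.2g}")
    # A tiny p-value corroborates the hypothesis without proving it: in
    # Popper's terms, it has merely survived one more attempt at refutation.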
A parallel narrative that is often cited in relation to this understanding of science involves Florence Nightingale (1820–1910). Here, too, prominence is given to rationality and the systematic testing of different hypotheses. During the Crimean War, she managed to reduce mortality at a military hospital from 42 percent to 2 percent by improving its hygiene, after which she was able (albeit too late) to connect the same patients’ poor health at the Scutari hospital in Constantinople (now Üsküdar, Istanbul) to the fact that the hospital was located on top of a drain. As with the story of Semmelweis, it can be seen as the expression of an inductive-deductive mode of reasoning.
Critical rationalism: gradual progress through falsifiability

Behind these narratives can readily be discerned the outlines of the hypothetico-deductive method formulated by Karl Popper (1902–1994), which has today in many regards become the dominant methodological model of research in large parts of the scientific community, particularly among those working experimentally. However, in contrast to positivism’s original focus on verification, Popper recognizes that scientificity involves the possibility of falsification. Scientific thinking is simply not based on a bedrock of true theories that have been acquired by observation and experimentation; rather, scientific work is entirely dependent on theories. In contrast to the cautiousness of verificationism, falsificationism might be said to be characterized by boldness and imaginativeness. However, this absolutely does not mean an abandonment of the scientific project. By means of the strategically decisive criterion of demarcation, the demand that scientific hypotheses and theories must be formulated in such a way that they can be refuted, it becomes possible to effectively wipe the slate clean and remove particular theories from the scientific agenda. In Popper’s case, the targets were clearly Marx’s historical materialism, Freudian psychoanalysis, and Adler’s individual psychology—though he also subjected Einstein (!) and his general theory of relativity to the same scrutiny. For Popper, what makes the former theories so unsatisfactory is that they have no weaknesses, that is to say, they see themselves as able to explain everything within their respective areas. In short, selecting facts in this fashion, on the basis of whether they fit a theory, turns science into ideology: there is no possibility of disproving such theories on the basis of observable facts. It is no exaggeration to say that in practice much scientific work is carried out in accordance with Popper’s precepts. For this reason, falsificationism has largely concerned itself with questions about the limits of scientific thinking. The objection that is often raised in connection with
Popper’s model is that his criterion for scientificity is on many counts too narrow. What is more, it may be asked what happens if the observations and experiments that are regarded as able to falsify theories are themselves reliant upon faulty background knowledge, that is, if it turns out that it is the observations, not the theories, that are wrong. Popper’s reply was that all observations of phenomena that run counter to established theories must be treated with the same rigour and skepticism as the theories themselves. Yet this merely raises new questions, because if it is the case that an observation can never be definitively confirmed and a theory never definitively falsified, is there not then a risk that the entire fundamental thesis of falsificationism becomes meaningless? It is obvious that at some point one simply has to decide to accept a falsification. These problems led Popper himself to abandon his commitment to strict falsificationism and instead treat scientific thinking as an ongoing process of tests and attempts at falsification. Without cutting ties to positivism completely, Popper levelled three critical objections at it—and from them we can also see how the recurrent tension between empiricism and rationalism, as well as between induction and deduction, lives on in the positivist tradition, as a kind of scientific philosophical problematization of science itself. Popper’s first objection took the form of a critique of positivism’s methodological naturalism: it is not the case that we can gain knowledge of reality only by means of our senses. Such a view underestimates not only the possibility of developing hypotheses about things that are unobservable but also the productive role played by speculation in scientific thinking (although speculation in this specific context is only allowed under very particular conditions). For this reason, we do not get a reasonable picture of scientific thinking if we say, inductively, that it is necessary to start from observations and measurements before going on to develop hypotheses and theories. This is also why Popper, as his second objection, rejected the inductive method and the view that knowledge grows cumulatively. Instead, he argued that knowledge, when confronted by concrete problems, works in such a way that it formulates suggested solutions, which are in turn subjected to critical evaluation according to the overall model of the alternative, hypothetico-deductive method that Popper himself advocated. Knowledge and insights do not necessarily increase because new theories are built upon old theories but because a dynamic arises when scientific theories are disproved and replaced by better theories. As Popper notes in a now-classic passage:

The empirical basis of objective science has thus nothing “absolute” about it. Science does not rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The
piles are driven down from above into the swamp, but not down to any natural or ‘given’ base; and if we stop driving the piles deeper, it is not because we have reached firm ground. We simply stop when we are satisfied that the piles are firm enough to carry the structure, at least for the time being.180
Popper’s third objection, a questioning of positivism’s ideal of unified science, involved claiming that there actually exist certain fundamental differences between the natural sciences and the humanities/social sciences. The difference that Popper himself highlights as central to all scientific work is the necessity of differentiating strictly between the context of discovery and the context of justification. Science is driven forward by the interaction of guesses and exercises in falsification—trial and error. Popper’s corrective to positivism’s requirement of verification, an assertion of the falsification principle, is conventionally known as critical rationalism and can be summarized as follows, according to Gilje and Grimen:

Human knowledge is never something definitive and absolutely certain. No scientific theory is sacred and beyond criticism. So-called “scientific truths” are merely guesses or preliminary hypotheses that must be subjected to rational critique and strict scrutiny. We approach the truth by eliminating false theories.181
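The logic of approaching the truth by eliminating false theories can be caricatured in a few lines of code. The sketch below is a toy, with an invented hidden rule standing in for nature and three invented conjectures standing in for rival theories; it is meant only to show how a single counter-instance refutes, while survival never proves.

    # A toy "conjectures and refutations" loop. The hidden rule plays the
    # part of nature; the hypotheses are rival conjectures about it.
    hidden_rule = lambda n: n % 6 == 0

    hypotheses = {
        "multiple of 2": lambda n: n % 2 == 0,
        "multiple of 3": lambda n: n % 3 == 0,
        "multiple of 6": lambda n: n % 6 == 0,
    }

    surviving = dict(hypotheses)
    for observation in range(1, 50):             # observations arrive one by one
        fact = hidden_rule(observation)
        for name, conjecture in list(surviving.items()):
            if conjecture(observation) != fact:  # one counter-instance suffices
                del surviving[name]              # the conjecture is eliminated

    print(sorted(surviving))                     # -> ['multiple of 6']
    # The survivor is corroborated, not verified: a later observation could
    # still refute it, so it remains, in Popper's phrase, tentative forever.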
Critical rationalism presupposes that we are receptive to critique and counter-argument and that we develop our own auto-critiques. Popper’s critical rationalism thus draws its strength from a dialectic of always listening to critical arguments and learning from experience, an approach that basically involves continually seeking to move closer to the truth in a process that has no real end. In other words, when we view scientific thinking as a quest for truth, it becomes impossible to make claims to absolute knowledge. Instead, we find ourselves being directed towards a practice defined by the insight that we ourselves can make mistakes. The hypothetico-deductive method builds on a combination of creative intuition, observation, and deduction. For this reason, according to Popper, no scientific theory can really be treated as absolutely certain: “every scientific statement must remain tentative forever.”182 A theory functions like a kind of hypothesis, which plays an active role in directing our observations even as its viability is being tested. What sets scientific theories apart from other theories is precisely the fact that they are formulated in such a way as to be testable. In other words, a scientific approach is not about certainty but being able to manage uncertainty by
means of collective efforts, about thinking critically while also being willing to undergo the most rigorous evaluation in a process where eliminating incorrect theories brings us closer to the truth. As Popper himself summarizes his own critical rationalism:

[…] rationalism is an attitude of readiness to listen to critical arguments and to learn from experience. It is fundamentally an attitude of admitting that “I may be wrong and you may be right, and by an effort, we may get nearer to the truth.”183
In contrast to the neo-positivists’ dream of accessing empirical reality directly and their expectations about being able to verify sensory experience, Popper advocates the view that theories precede observations. Yet because all observations are already impregnated with theories, to a greater or lesser degree, testing them is also a more complicated process. Scientific thinking makes progress, but it is only by means of trial and error that we can arrive at what for the moment is the best knowledge within a given area. We must content ourselves with this. Science can do no more. Although Chalmers sympathizes with Popper’s basic position, he has a number of objections, which, interestingly, point towards the necessity of including Kuhn in the discussion, because theories are tested not only against the world but also against competing theories. We need also to ask whether it is possible in the long term to insist upon the difference between theories, which can only be accepted preliminarily, and falsifications, which are regarded as definitive. Is it really possible to imagine that falsifications are definitive while also questioning whether observations rest on absolutely solid grounds? Does history not show that if falsificationism had been pursued consistently, many of the best scientific theories would never have come about but would instead have been rejected at an early stage? In confronting the necessity of historicizing, we find ourselves at the starting point for Kuhn’s theories of scientific thinking.184
Scientific development by means of Copernican turns

Let us turn now to the second narrative that predominates in the literature on the philosophy of science. This is a considerably older story and one that we have already encountered in this book. We have met its main protagonist, Nicolaus Copernicus (1473–1543), on several previous occasions. Posterity usually remembers him as the person who (without realizing it) initiated the Scientific Revolution and who gave his name to the concept of a Copernican turn. The standard narrative about Copernicus is similar in many respects to the received version of the Columbus story. In similar fashion, it typically presents Copernicus as a well-informed
person who announces a self-evident truth to a general public which deludedly believes the Earth to be the centre of the universe: the sun is the centre of the universe, and the earth is a planet that revolves around the sun and also rotates upon its own axis. In this unproblematic fashion Copernicus is taken to mark the shift from a geocentric worldview to a heliocentric worldview, as if the latter were utterly obvious. The story of Copernicus is inscribed within the dominant narrative of the Scientific Revolution, giving the impression that he was a modern, secularized scientist who, in contrast to his narrow-minded contemporaries (made so by religion), sensibly built up knowledge by adding one finding to another. Yet the real complications with the Copernican turn, what happens when we exchange one macro-perception for another, only become apparent if we give the narrative more context and complexity. Copernicus was in every sense a man of the Church, who studied theology, medicine, jurisprudence, astronomy, and mathematics at various seats of learning in Europe and spent most of his time working at the cathedral in Frauenburg on the Baltic seaboard of northern Poland. In 1514, he was commissioned by the Pope to investigate the calendrical problems that had arisen as a result of the Julian calendar falling out of step with the seasons. Given that Copernicus, ten years earlier, had made observations that indicated that the problem could be resolved by shifting perspective from the Earth to the sun, that is to say, by replacing a geocentric with a heliocentric worldview, it is easy to imagine that he would have had little difficulty in helping to solve the problem. But Copernicus seems to have been a person who was reluctant to cause trouble by calling into question the grand geocentric tradition, which stemmed from Ptolemy and Aristotle and which was completely dominant both during the Middle Ages and in his own era. Accordingly, he did not allow his ideas to be published until the last year of his life, 1543, and even then only after extensive persuasion and the inclusion of a preface, written by his friend and theological colleague Andreas Osiander, that cushioned the explosive force of his theory by underscoring that it was merely a hypothesis. Yet the contents of De revolutionibus were already known in certain quarters. Although it had circulated in manuscript, it was only as a printed book that it was included on the Catholic Church’s Index of prohibited books (indeed, it is not entirely clear when it was removed from that list). Although in retrospect it might seem obvious that Copernicus’ contemporaries ought to have accepted the new worldview, we must once again recall a series of extenuating circumstances. The fact is that the theory that the earth moves around the sun runs contrary to our immediate sensory impressions. Even today, we are still following our everyday
perception in referring to the sun “rising” and “setting”—even though we know that it is the earth that moves. This contradicts the idea that the issue was ever self-evident. Nor is it as simple as saying the Catholic Church deliberately advised people against reading Copernicus’ book, whose complicated style in itself made it almost impenetrable for most readers. We also often forget that every new major hypothesis relies for its legitimacy upon a great many supporting hypotheses, which are not always readily available, and Copernicus lacked theories of atmospheric strata and the action of gravitational force at a distance, theories that were only proposed and established much later. Moreover, Copernicus’ planets moved in their orbits with the help of a curious theory of crystal spheres. Copernicus was right, but also wrong on many counts. For example, it would later be shown that the planetary paths that Copernicus had treated as circular were in fact elliptical. Copernicus also took for granted that the heavens were divided into two realms, something that presupposed an unchanging sphere of fixed stars, an idea that Brahe and others would later refute. And so on.
The paradigmatic significance of paradigms

The philosopher of science Thomas Kuhn (1922–1996) often lamented the absence of historical perspective in scientific thinking, and he argued that this lack of interest in history also lay behind an inadequate contemporary understanding of the radical breaks that are a feature of scientific thinking’s discontinual progress. As a result, it was impossible to understand how the extraordinary advances and innovations of scientific thinking had come about. For Kuhn, the story of Copernicus would serve as the epitome of the scientific revolutions, or paradigm shifts, that he regarded as the most important explanation for the historical development of scientific thinking and that he presented in The Structure of Scientific Revolutions (1962). In addition to Copernicus’ revolution in astronomy, he saw the same pattern repeated in physics with Einstein and in biology with Darwin, to take just two examples. Taking his point of departure in the development of physics, he identified the following basic general schema of how scientific progress happens: prior knowledge—normal science—crisis/revolution—new normal science—new crisis. And so on. In contrast to neo-positivism’s ambition of conducting purely empirical observation and verification by direct access to empirical reality, Kuhn started from the conditions and problematics of perception. This means that we never see the world directly, as the inductionists believed, but only through different paradigms and models. Theories are therefore necessary,
and Kuhn took issue with the neo-positivist ideal by arguing that our starting point must be the fact that paradigms and reality never fully agree. We cannot observe and think without such paradigms, but because at regular intervals problems arise that cannot be solved within the framework of the existing paradigm, anomalies are continually being generated. The accumulation of these anomalies eventually becomes so great as to trigger a paradigmatic crisis. An example of such an event was when Copernicus tried to resolve a calendrical problem by testing a solution that called into question the whole Aristotelian paradigm. A scientific revolution occurs at the point when the majority of scholars abandon the old paradigm for the new. In short, paradigms are completely necessary inventions that help us to discover new reality. At the same time, however, they also conceal reality from us—nor could it be otherwise. Without a paradigm, scientific thinking cannot operate, in the same way as it is not possible to discover something without inventions. A paradigm that coincided fully with reality would not in fact be able to help us or to function like an invention with whose help we can make new discoveries. Kuhn’s theory of paradigms, as we have already noted, can be seen as a reminder of the productive value of established theories and models for scientific work and as a clearly identified corrective to neo-positivism’s faith in the possibility of direct testing and verification of observations. Because it is only possible, within this view, for facts to appear as facts within the framework of perception shaped by a paradigm, facts are also revealed as dependent on theory. But this in no sense means that scientific thinking can be regarded as a narrowly theoretical question. On the contrary, Kuhn’s theories foreground the social and practical nature of scientific thinking. Science is something one learns by practicing science, not from theoretical definitions. This means that periods of normal science characterized by consensus—when, so to speak, problems are solved within the framework of the rules of the game but while also accumulating anomalies in the form of things that cannot be incorporated—alternate now and then with revolutionary upheavals through which new paradigms are established. Kuhn’s theory of paradigms involves a kind of historical contextualization by which he seeks to do justice to how successful scientific thinking operates concretely and becomes the default view in the workshop of scientific thinking. A paradigm, such as the geocentric worldview, is constituted by a series of ideas, values, and methods that are shared by the members of a given community. It involves a cluster of fundamental theoretical assumptions, which are accepted by everyone within the
scientific community, but which cannot be made fully explicit because their nature can only really be understood by means of a series of examples. The reorganization of an entire field that is implied in Kuhn’s concept of a paradigm shift therefore resembles, in a sense, the way that a gestalt shift operates in Gestalt psychology. By its very nature it can be difficult to properly describe a prevailing paradigm—its contours only appear with clarity in retrospect. Ultimately, Kuhn’s book about scientific revolutions itself brought about a revolution in our view of scientific thinking and its development. Indeed, it is no exaggeration to say that the paradigm has gained something of a paradigmatic significance within philosophy of science and has upended the successive explanations and unquestioned regulatory laws of the “nomological” model of scientific thinking.185 The theory of paradigms presupposes that theories and conceptions precede scientific observations. Discoveries presuppose, in a word, inventions. As a result, naive inductionism is called into question, and neo-positivism’s logical empiricism problematized, on the basis of the insight that all observations are saturated with theory. For Kuhn, scientific observations and facts can thus no longer unproblematically be treated as fundamental but, on the contrary, are revealed as derived and inferred from specific paradigms.
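Kuhn’s schema of normal science, accumulating anomalies, and crisis can likewise be rendered as a deliberately crude toy model. Every number in the following Python sketch (the anomaly rate, the crisis threshold, the time span) is arbitrary; the point is only to exhibit the mechanism of discontinuous change that the schema describes.

    import random
    random.seed(0)  # reproducible toy run

    paradigm = 1
    anomalies = 0
    CRISIS_THRESHOLD = 5        # arbitrary: how much a paradigm can absorb

    for year in range(1, 41):
        if random.random() < 0.25:          # an observation the paradigm
            anomalies += 1                  # cannot incorporate: an anomaly
        if anomalies >= CRISIS_THRESHOLD:   # crisis: most scholars defect
            paradigm += 1                   # a revolution installs a new paradigm
            anomalies = 0                   # ...and normal science resumes
            print(f"year {year}: revolution -> paradigm {paradigm}")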
Paradigms and discourses: practices and perceptions

Sometimes things happen in parallel in different worlds without anyone fully realizing it. Such is often the case in the world of scientific thinking. The philosopher of technology Don Ihde has pointed out the interesting parallels and striking similarities between Thomas Kuhn’s theories of how paradigms operate as archetypes for scientific thinking (until they have gathered sufficient anomalies for a revolution to usher in another paradigm) and the work of Michel Foucault (1926–1984), who argued that scientific practices simultaneously make assumptions, operate productively, and are regulated by means of underlying systems of ideas—episteme—that govern what can be thought and what counts as knowledge at any given moment. Kuhn and Foucault are connected by the fact that they both represent a non-linear view of scientific development and therefore focus their interest on structural factors and the way in which different gestalt and perspectival shifts, which have often been made possible by new perceptions that in turn have been made possible by new instrumentation, give rise to different forms of discontinuity. The Copernican turn’s fusing of perception and praxis in Kuhn’s account of the origins of a revolution in the natural sciences has its direct counterpart in the fundamental reorientation within the human
sciences by means of sudden discontinuous shifts in macro-perception that Foucault presented in Madness and Civilization (1961) and The Order of Things (1966).186 Interestingly, these books appeared in the same period as Kuhn’s book on scientific revolutions. They are thus parallel in more than one sense, and it is possible to identify substantial similarities between Kuhn and Foucault, including their shared inspiration in the work of Gaston Bachelard. Of course, this is not to say that the positions of Kuhn and Foucault are identical, something that is evident from the fact that they were active in, and took their examples from, quite different disciplinary domains. Foucault has become one of the most important sources of inspiration for the kind of discourse analysis that in a post-structuralist era has had an extraordinary impact in the humanities and social sciences. The concept of discourse circulates in the same field of theories as the concept of the paradigm in that it seeks to reveal the preconditions of thought and knowledge that are simultaneously sustained by means of the discursive practices of those involved. In his inaugural lecture at the Collège de France in 1970, subsequently published as The Order of Discourse (1971), Foucault announced his intention of studying the procedures by which the production of discourse is controlled, selected, organized, and distributed. From this perspective, power is not something that has an existence of its own, as if it were attached to a sovereign or a general staff (Foucault’s examples); rather, it inheres immanently within relationships and is always already inscribed in the cultural codes that we simultaneously use when speaking, understanding, and deciding.187 An opportunity thereby presents itself for a new analysis of power, in which power and knowledge represent two sides of the same phenomenon yet—nota bene—without ever becoming identical. On this view, there is no central figure who, as it were, “has” power. Instead, power is viewed as distributed among relations of power that, in turn, are entirely integrated within the discourse we use in order to speak about power. But since power is everywhere, in all relationships and in everyday techniques at the micro-level, language and scientific thinking are also stripped of their innocence. This has consequences for research on power but also for research in general, because if power is integrated in asymmetrical, continually active, and changing relationships by means of knowledge, practices, and technologies, then it must also be impossible to study power in itself. Power is instead implied by human practices that are simultaneously regulated and sustained by knowledge. Foucault himself refers to “[t]he omnipresence of power: not because it has the privilege of consolidating everything under its invincible unity, but because it is
produced from one moment to the next, at every point, or rather in every relation from one point to another.”188 Both Kuhn and Foucault were interested in gestalt shifts in which perception is regulated by paradigms and discourses, shifts that affect our understanding of entire constellations within scientific thinking. This idea may also help to explain the non-linear character of the development of scientific thinking. We recognize here the interplay between inventing and discovering, when perception both forms and is formed by discursive practices. As we have already seen, there is nothing innocent about the choice of narrative to concretize and illustrate one’s position within philosophy of science. It is therefore no coincidence that the story of Semmelweis has acquired such a powerfully legitimizing function in relation to Popper’s argument for a critical rationalism that relies upon a hypothetico-deductive method—just as the story of Copernicus has been taken up by paradigm philosophers because, in line with Kuhn’s ideas, it clearly foregrounds the dependency of observations upon paradigms and the discontinuous character of scientific development. The narratives about Semmelweis and Copernicus are not just different; they underpin diametrically opposed perspectives on what characterizes scientific work. Even so, the fact that this is a matter of perspectives implies that they cannot be said to be entirely mutually exclusive. Rather, they ought to be treated as different aspects of a single process of knowledge creation, in accordance with a multidimensional view of scientific thinking. In order to understand what scientific thinking is and how scientific work develops, we therefore need to listen to both Popper and Kuhn, in accordance with an understanding of scientific thinking as a continually ongoing process of mutually critical correction in which different paradigms are used in order to invent/discover reality. Scientific thinking is a practice carried out by people, usually in combination with instrumentation and the use of technology, and this practice can be understood in terms of different perceptions and is directed by different theoretical movements. The issue of scientific development and whether scientific thinking does in fact make progress has thus resulted in two additional answers. Alongside neo-positivism’s cumulative view of scientific progress by means of successive verifications, we have encountered, in Popper, the notion of gradual progress by means of critical rationalism’s elimination of false hypotheses, and finally, in Kuhn’s studies of revolutionary upheavals, a model in which scientific development takes place by sudden leaps in paradigm shifts that alternate with phases of normal science. And yet, regardless of whether the development of scientific thinking is regarded as
a cumulative, gradual, or disruptive process, there is, particularly in the work of Popper and Kuhn, a shared interest in practices, an insight into the importance of perception and interpretation, an optimistic view of the development of knowledge by means of the complex dialectic of discoveries and inventions—and modern science’s earnest and never capricious quest for truth.
Constructivism and its limitations

Both Popper and Kuhn work as barriers to any attempt to reconcile concepts and reality. They are caught within the great dialectical interplay between discoveries and inventions. Because of their caustic criticisms of neopositivism, both have sometimes been regarded as hostile to scientific thinking, but this is a mistake as both are in fact devoted adherents of the scientific project. From an early stage Popper moved easily in positivist circles, while Kuhn had no hesitation in placing his epochal study of scientific revolutions in logical positivism’s own publishing series Einheitswissenschaft [Unified Science]. Neither of them recognized criticism of scientific thinking in general, or caustic criticism of positivism in particular, as involving a break with science. Instead, they both acted as people in scientific communities do: they made criticisms within the framework of an overarching quest for truth. In similar fashion, and perhaps even more so, a thinker like Michel Foucault has sometimes been considered a representative of an anti-science movement, as if his theories about human disciplining and technologies would inevitably lead him away from scientific thinking. In order to make sense of this criticism, it is important to remember that Foucault also spent his entire professional life at academic institutions, eventually joining the faculties of world-leading scientific universities. Even if the French word science causes the same kind of problems as the English term by primarily signifying the natural sciences, in contrast to the greater inclusiveness of Wissenschaft in German or vetenskap in Swedish, Foucault undoubtedly defined his work as scientific in the broader sense of the term that I have been using in the present study. If we include in our conception of scientific thinking both Popper’s critical rationalism and Kuhn’s theories of paradigmatic scientific shifts, each of which serves as a corrective to an originally positivistic and neopositivist understanding of science, the result is an activation of a historically contextualized understanding of the human practices that underpin scientific thinking.
Positivism—and its many critiques—define modern scientific thinking. Although the critiques made of positivism have resulted in a modification of its requirement of verification, this does not mean that its adherents have abandoned scientific thinking as a project. Popper’s critical rationalism is underpinned by a quest for truth that he never abandons even if it can never fully be realized. Kuhn’s historical contextualization of scientific work has had a decisive impact in ensuring that modern science has begun to be treated as a historical, cultural, social, and political phenomenon. In this way Kuhn’s paradigm theory has contributed to a deeper understanding of how scientific development occurs, while also generating new questions and challenges: What are the limits of a paradigm? How well-defined can a paradigm be? Can several paradigms exist in parallel? What is the relation between breaks and continuity in a paradigm shift? Does a paradigm shift happen on the basis of objective evidence or peer-review processes? And so on. Like Foucault, Kuhn has met with objections and robust criticism. The hostility that was often triggered by the view of scientific revolutions implied by his paradigm theory can largely be attributed to the fact that many people chose to draw conclusions which were considerably more far-reaching than those envisaged by Kuhn himself. Kuhn was unhappy that his work came to be regarded as an attempt to fundamentally call into question the rationality of scientific thinking. Such an intention was utterly alien to him, even if its corrective tendency in relation to positivism’s faith in scientific development as a linear and cumulative process made paradigm theory into a critical objection.189 Ironically, the view that no idea can be entirely neutral or objective, because it is always dependent upon the observer’s biological, cultural, linguistic, and scientific perspectives, did not have its greatest impact in the field in which it was developed, namely the natural sciences, but was instead principally adopted by the social sciences and the humanities. In this new context, paradigms did not operate by dominating successively, as in Kuhn; rather, several paradigms could exist at the same time and operate in parallel. Within the sociology of knowledge there were those who therefore concluded, in an anti-scientific and culturally relativistic spirit, that concepts such as truth and rationality could be dispensed with. Yet Kuhn’s own intentions had never been to attack or undermine scientific thinking. On the contrary, his goal had been to explain how science had been able to make such extraordinary progress. Only by letting go of an established model and opening our minds to new perspectives can we develop a more grounded image of how scientific thinking actually works as a social praxis.190
In the background to these theories of scientific revolutions (Kuhn’s focused on the natural sciences, Foucault’s on the social sciences and the humanities), it is possible to discern a profound and consistent inspiration from Gestalt psychology. It is as if the “rabbit-duck” illusion that Joseph Jastrow invented and that Ludwig Wittgenstein subsequently made famous continually recurs in any discussion of how the discontinuous development of different “structured macro-perceptions” within the framework of different orders of knowledge, paradigm changes, and knowledge regimes causes perspectives and viewpoints to shift. Unlike the earlier conception of scientific thinking—idealistic, abstract, ahistorical, and non-bodily—that proceeded from a series of propositions, a conceptual and rational system detached from social and material connections, scientific thinking within the framework of this “internal” critique of positivism appears as a social, embedded practice whose perception is formed by a historical reality comprised of people, practices, and technologies.191 But what remains is a curious ambivalence, because paradigm theory, like Foucault’s theories of episteme, easily makes scientific thinking seem both “conservative” and “revolutionary.”

Four years after the appearance of Kuhn’s epochal study of scientific paradigms and revolutions and of Foucault’s The Order of Things, Peter L. Berger and Thomas Luckmann’s The Social Construction of Reality (1967) was published. In this book, which was an instant bestseller and has remained one, the authors seek to raise awareness of the profound importance of construction for our image of reality, a theme that has run through philosophical thought since the time of Kant and that was developed within sociology by Karl Mannheim and Alfred Schütz as a grand alternative to the hostility to theory that characterized positivism’s empirical demands for verification. Within the framework of a dialectic between social reality and an individual lifeworld perspective, the concept of culture expanded such that language eventually came to seem like something that forms and constructs reality—and, before long, perhaps even to create it. For those with a training in philosophy, it is clear that this constructivism quickly risked developing into a new version of the transcendental idealism that viewed reality as something humanity itself effectively “invents.” Philosophers such as Ian Hacking clearly identified this tendency and called the development into question in books such as The Social Construction of What? (1999). We can also note that there was a fundamental ambivalence in the very title of Berger and Luckmann’s book: on the one hand, the main title refers to reality as a social construction, as if it contained ontological ambitions, but on the other hand, its subtitle—
A Treatise in the Sociology of Knowledge—makes it seem as though the book limits itself to a cognitive perspective. Perhaps this ambivalence should be seen as an index of how constructivism in fact appears in two forms: a “strong” version that makes ontological claims, and a “weak” version that restricts the construction of reality to the domain of epistemology. Another, still more differentiated, approach would be to distinguish ontologically between three kinds of reality: a subjectively dependent reality; an objectively dependent reality; and an objectively independent reality.192 Yet another alternative would be to make an even stronger claim: that there exists something more fundamental than the empirical reality intended to serve as a corrective to constructivism, something that certain variants of phenomenology assert and that is a crucial premise for the problematic to be examined in the next chapter.
CHAPTER 9

THE TWO CULTURES: HUMANISM AND NATURALISM
By the beginning of the twentieth century, it was becoming increasingly clear not only that scientific knowledge had to be recognized as central to how modern society developed and understood itself, but also that the field of scientific thinking was itself fundamentally divided. To contemporaries, it seemed more and more as though two scientific projects were competing with each other for space, epistemologically and ontologically. To simplify somewhat, we can think of two “cultures of knowledge” defining, respectively, what might be called the humanistic and the naturalistic traditions. The sheer fact that it is difficult to find shared concepts and unambiguous definitions when referring to these knowledge traditions shows how complex the problematic is. In the background, however, we can discern the outlines of a deep historical divide within the field of scientific thinking, a divide that we have already encountered in terms of a conflict between rationalism and empiricism, deductive and inductive methods, and so forth. While philosophy of science has often portrayed the most extreme versions of these positions as polar opposites, my own account has taken pains to highlight the many connections that exist between them. In so doing, I hope also to create the possibility for developing a more dialectical approach in which the poles that appear to repel each other can be seen as actually reliant on each other.

In a longer historical perspective, the situation within the university around the turn of the twentieth century seems highly ironic. As we have noted, the new project of scientific thinking that emerged with the extension of the revolution in the (natural) sciences during the sixteenth and seventeenth centuries had almost no links with the scholastic knowledge culture of the medieval university. Yet this new model that came from without was to restore energy and status to the university during the nineteenth century. In a sense, universities reinvented themselves when these teaching institutions dedicated to tradition, nostalgically facing history “backwards,” were transformed into future-oriented scientific institutions
with research, discoveries, and progress on their agenda. The “old” university had suddenly become the self-evident home for the “new” scientific thinking! Perhaps it was the insecurity to which this contradictory situation gave rise that created an urgent need for a narrative that could use the Scientific Revolution as a screen upon which to project its own ideals of scientific thinking. In this way, the university as an institution gained a sense of historical continuity which, despite the obvious historical discontinuity, would carry modern scientific thinking into the twentieth century. But nothing could fully hide the fact that there was a fundamental conflict between two separate “cultures of knowledge” within the university and that, sooner or later, it would have to be dealt with.

Modern scientific knowledge came to be defined by positivism, in both its “old” nineteenth-century incarnation and its “new” twentieth-century incarnation—but never by positivism alone. As we have already noted, positivism must always be understood in relation to both its “internal” and “external” critics. It might be said that modern scientific thinking is a “community of problems” and that positivism and its critics have jointly shaped our understanding of science. The previous chapter concentrated on the “internal” correctives to the original idea of verification within this new positivism; in this chapter we will examine the “external” critique that was inspired by the phenomenological movement and that ultimately resulted in a tendency for scientific thinking in our era to assume the appearance of two separate cultures. With the passage of time, it has become increasingly difficult to understand scientific thinking without also including a phenomenological perspective. But the question arises: is phenomenology enough?
An even more “positive” philosophy: the necessity of phenomenology

The situation for scientific thinking around 1900 should be understood against the backdrop of a general frustration with neo-Kantianism and with idealistic scientific ideals associated with far too many elements of speculative thinking. At that time there was a widespread desire for a new kind of concreteness that could prevent thought from flying away and losing itself in metaphysics and speculation. In their search for a new realism, philosophers sought to establish more tangible points of departure for thinking and began to talk about the necessity of starting from the concrete. But these calls for “the concrete” meant very different things. Indeed, they were in direct competition with each other: neopositivists focused on the
empirically concrete, existential philosophy wanted to anchor thought in the concrete “lived” body, and phenomenologists addressed concrete experience.193

Edmund Husserl (1859–1938), who is usually considered the founder of modern phenomenology (even if phenomenology can be said to have a prehistory in Kant, Hegel, and even further back in time), was driven by an anti-metaphysical critique of speculation and idealism that in fact had a number of elements in common with the positivists’ critique. Husserl himself was a mathematician, and, although he wanted to settle accounts with speculative theories, his thinking was in many ways shaped by a similarly scientistic ambition of giving philosophy a solid foundation in science. Phenomenology’s ambition of going “to the things themselves,” starting from pure experience with no intermediary stage whatsoever, sought to reveal something that was regarded as even more concrete than—and the very precondition for—the empirically concrete. On the basis of this aim of beginning radically from the beginning, Husserl was even able to assign phenomenologists the role of the “true positivists,” because he saw them as having assumed an even more “positive” position than positivism. Phenomenology’s claims were radical, and its focus on concrete experience should therefore be understood as a highly ambitious attempt to move beyond both psychologism and materialism in order to engage with the true basis of knowledge in human experience.

Phenomenology wanted to exercise epoché, that is to say, “to bracket” the empirical question of what exists beyond the phenomenon and instead concentrate on how the world actually appears to us. The phenomenon—Erscheinung—means precisely “that which shows itself,” “that which appears.” By attending to the interweaving of experience and that which is experienced, phenomenology sought to “humanize” knowledge by restoring the reciprocal relationship between human beings and the world. Note, however, that the goal of “bracketing” was not to restrict perspective but rather to widen it, so as to encompass all that is given in our concrete experience of the world. By “beginning from the beginning” with a starting point in given phenomenological experience, phenomenology would take seriously the sheer richness of the act of consciousness (that is, the correlation between experience and that which is experienced) and not allow what we call experience to be limited by the crassly empirical focus of modern naturalism. Because the “pure” experience revealed in this way appeared, both historically and ontologically, more originary than the knowledge subsequently developed by physics and psychology, phenomenology also argued that it represented the basis of all knowledge.
By focusing on the constitution itself—that is to say, on the act in which phenomenological experience intertwines us and the world—phenomenology sought to derive experience from the fundamental fact that all consciousness is characterized by intentionality, that is to say, that it is always directed at something by someone. The goal of “phenomenological reduction” was to try to ignore (at least temporarily) all pre-existing notions in order to access a “pure” (in the sense of theory-free) description of the content that immediately shows itself in acts of consciousness. Phenomenology could be described as a kind of extended, concrete philosophy of experience, beyond the constraints of psychologism and physicalism. It is driven by an ambition of “humanizing” knowledge by revealing the fundamental basis of knowledge in the content and structure of acts of consciousness, and it focuses on the conditions of knowledge that modern scientific thinking tends to ignore or quite simply forget. Husserl and the phenomenologists argued that there is something more fundamental than physics and psychology, whose ways of describing the world are of such recent provenance that they cannot claim, ontologically or historically, to be “originary.” In other words, we must move beyond materialism and psychologism, physicalism and mentalism, if we really want to understand what knowledge is. For this reason, phenomenological reduction leads us, methodically and systematically, back to “pure” experience, such that all phenomena appear in terms of how they are constituted by these acts of consciousness in a transcendental ego.

How are we to understand this concrete experience? In order to get to grips properly with phenomenology, we need to go back to praxis and remind ourselves of phenomenology’s investigative and experimental character. As Don Ihde emphasizes: “Without doing phenomenology, it may be practically impossible to understand phenomenology.”194 Ihde himself describes phenomenological reduction as a practice defined by four things: first, focusing on that which appears in experience; second, describing rather than trying to explain; third, treating all immediate phenomena as horizontally equivalent; and fourth, seeking phenomenology’s structural features and essence by varying one’s acts of consciousness.195

The path to knowledge involves a balancing act. By starting from the constitution of phenomenological experience in order to describe the contents of acts of consciousness, phenomenology sought a middle way, sustained by the hope of opening up a new perspective on knowledge beyond German idealism and British empiricism as well as constructivism and descriptivism. The fact that phenomenology’s self-understanding makes claims as ambitious as those of positivism while at the
same time radically deviating, epistemologically and ontologically, from the fundamental premises of both neopositivism and its “internal” critics opens the field to a larger problematic. Phenomenology’s view of knowledge as grounded in the forms of human experience in all its variety can, in fact, be seen as something already implied by the fault lines and shifts within positivism, which derive from the fact that there is always a human presence in all knowledge. However, if phenomenological reduction means that phenomena always appear to a consciousness located in a transcendental subject, there is also a risk that “putting brackets” around empirical reality will lead to phenomenology turning into idealism. In their later lifeworld philosophy, Husserl and his adherents sought to balance and correct this idealistic tendency by situating phenomenology more firmly in history and by embedding it within a lifeworld perspective, but also by taking as their starting point the multiplicity of variations in acts of consciousness (until one grasps the “essential structure” that emerges at the point beyond which it is impossible to proceed), which also opens phenomenology up to the multiple variations of the problem of interpretation and to a more hermeneutical viewpoint.196

Phenomenology can be compared to a labyrinth with many entrances and exits in the form of a series of thinkers whose positions have also shifted over time. To simplify somewhat, we might say that there are two main entrances to phenomenology: the Cartesian way, which starts from consciousness and focuses on epistemological issues; and the ontological way, which instead starts from the lifeworld as an originary, already lived world characterized by intersubjectivity. But phenomenology’s radical claims and prominent scientific profile, which find expression in the assertion that all other experience can ultimately be traced back to phenomenological experience, meant that it was always likely to end up in a confrontation with positivism. The meeting of phenomenology and positivism has the appearance of a clash between two kinds of foundationalism, each of which claims to have discovered the ultimate basis of all knowledge.

In a sense, this confrontation with positivism became even more apparent with the development of phenomenology by Husserl’s student Martin Heidegger (1889–1976). Instead of making scientistic claims on the basis of a transcendental consciousness, Heidegger took phenomenology in an ontological direction (in an extension of Husserl’s later philosophy of the lifeworld) that also grafted a hermeneutical problematic onto the multiplicity of variations of phenomenology’s acts of consciousness. Heidegger argued that the ontological experience of always already being given in existence (Dasein) precedes the observer’s interpretation and the
entire logic of subject-object. Heidegger thus settled accounts with the scientific subject by showing that we are never only epistemological observers of the world but are always already embedded in the world (Being-in-the-world) and involved in practical concerns and cares (Sorge). Interpretation can therefore no longer be a question of how an epistemological subject views the surrounding world at a distance; rather, hermeneutics emerges as an integrated part of a Being that understands itself as an “understanding Being.” Heidegger’s hermeneutical phenomenology in this way becomes a mediating step in an understanding of Being in which interpretation is connected to the question of the meaning of Being. Because we are, according to this view, always already part of the world we are trying to understand, the hermeneutical circle can no longer be limited to a purely epistemological relationship between part and whole in the object of study. The circular structure instead reveals an ontological relationship in human existence that always already exists in any situation. As pre-understanding becomes a necessary precondition for understanding, the interpreter, too, is drawn into the circular movement.
. . . although phenomenology is still not enough

It is no exaggeration to say that there is considerable disagreement about how to understand phenomenology’s status and position. This disagreement is perhaps a result of the sheer variety of phenomenological approaches—from a “descriptive” phenomenology, and a phenomenology focusing on how phenomena are constituted from the point of view of a transcendental consciousness and on “grasping the essence of being” through multiple variations of the acts of consciousness, to an “existential” phenomenology and the more ontological profile of a lifeworld philosophy—and of the fact that phenomenology, like positivism, has met with “internal” as well as “external” critiques that have served to correct its original assumptions. Ultimately, most of the scientific discussion of the theoretical status of phenomenological experience—understood as the concrete, immediately experienceable, everyday reality in which we live our lives, with all its taken-for-granted preconditions (regardless of whether this experience is limited to an epistemological relationship or extended to encompass a broader, ontological lifeworld perspective)—has boiled down to the issue of whether it can truly claim to reveal a pre-scientific reality that can also be said to be a precondition for scientific knowledge. Or is the result of phenomenology’s radical claims in fact that reality and the field of scientific thinking are both divided into two separate worlds?
This problematic has been thoroughly examined from a lifeworld perspective by one of Husserl’s students, Alfred Schütz (1899–1959), whose ambition in works such as The Phenomenology of the Social World (1932) was to cultivate sociology with the help of phenomenology. The recognition of the importance of social constructions for our reality here once again raises questions about the limitations of constructivism and about the relation between scientific knowledge and everyday knowledge. Is it really possible to sustain phenomenology’s “foundationalist” distinction between knowledge of “the first order” (the lifeworld) and “the second order” (scientific thinking)? These questions are familiar to us from Karl Mannheim, Peter Berger, and Thomas Luckmann. And in general, what is the relation between human action and social reality? If society is comprised of actions, how should we understand society as objective facticity in relation to the opposition of nature and culture? Are these challenges associated with the two “cultures”—or with two “natures”? Should phenomenology’s ambition to provide foundations, its foundationalism, be considered a kind of claim to unified scientific thinking, and does this claim ultimately condemn phenomenology to an unresolvable conflict with the natural sciences and naturalism? The question forces us to address the conflict between naturalism and humanism that has profoundly defined scientific thinking in our era.

The question of whether the transcendental ego can really be recognized as the basis for all experience, and thus for all knowledge, has driven phenomenology in two different directions: a transcendental variant, which situates itself in a subject-philosophical tradition that has to some extent repeated the problems of neo-Kantianism; and an ontologically oriented variant, which has instead developed phenomenology into a lifeworld philosophy whose starting point in the lived body has given it a very different materiality. Phenomenologists such as Martin Heidegger and Maurice Merleau-Ponty (1908–1961) sought to develop the ontological line of thinking by anchoring phenomenology in a deeper and more “originary” intentionality in the lifeworld and existence (Dasein). They thereby tried to distance themselves from Husserl’s attempts to restore subjectivity to the world, aligning themselves instead with his later lifeworld philosophy and the experience of always already being part of the world, prior to becoming conscious of it or developing any knowledge of that relationship. Going back to the world that is always already given, prior to all abstract scientific thinking, means beginning with one’s own lived body, which, by virtue of being both object and subject, sets the horizon for our knowledge and constitutes the pre-understanding for all knowledge, because all
perception takes place in a concrete context and must be related to mediatory factors such as language and history.197

Regardless of which version of phenomenology is being invoked, and regardless of how carefully the phenomenological analyses have been developed, it seems impossible to avoid the question: can all knowledge ultimately be derived from some kind of phenomenological experience of the given world in which we live? Moreover, is it really possible, on the basis of phenomenology’s ambition of “turning back to” something more fundamental, to regard (natural) science and a naturalistic knowledge culture as secondary to the phenomenological experience of “that which shows itself”? Paul Ricoeur (1913–2005) has argued at length that phenomenological investigations may well be indispensable—but that they are also ultimately insufficient. It is quite simply not possible to trace all experience and knowledge back to phenomenology’s concrete experience. Laws of nature, phenomena, and objects in physical processes can indeed exist independently of phenomenological experience. Phenomenological reduction is necessary, but it cannot serve as the foundation of all knowledge. Ricoeur therefore argues for a phenomenology without absolute knowledge, one that has abandoned its ambition of serving as a prima philosophia by claiming to be the fundamental basis of all knowledge. The world may not be entirely reducible to empirical data and causality—if it were, how could we develop knowledge about empirical data and causality?—but nor can it be entirely reduced to phenomenological experience.

In his earliest philosophical observations, Ricoeur notes that the limiting function of “the thing-in-itself” (das Ding an sich) in Kant’s critical philosophy seems to have no equivalent in Husserl’s phenomenology. For this reason, he points to the possibility of developing a kind of “post-Husserlian Kantianism,” which he defines in the following way: “Husserl did phenomenology, but Kant limited it and founded it.”198 The founding of knowledge by limiting its claims is a recurrent trope in Ricoeur’s hermeneutics and a condition for being able to continue to relate differing interpretations to each other in terms of a conflict of interpretations. By limiting phenomenology’s claims, it is possible to open it up to other lines of thought, so that it can operate within the framework of a broader interaction with other philosophical and scientific traditions (without asserting itself as their foundation). In his later writings, Ricoeur elaborates on these ideas about how phenomenology might operate within a broader dialectical framework. He claims that one of the strengths of phenomenology is its ability, by pursuing its lines of enquiry to the limit, to generate a large array of conundrums (aporias) that in turn reinforce its
asymmetry in relation to that which phenomenological enquiry simultaneously excludes. Phenomenology’s most important contribution thus has to do with the opportunities that it creates at the very moment when it breaks down. Ricoeur’s prime example is how phenomenology’s failure to explain all time on the basis of an inner consciousness of time ultimately leaves us with a radical asymmetry: “it is as if the part of time that phenomenology has left outside its field only becomes larger the more phenomenology turns inwards.” But it is also important to recall the limits of naturalism, and Ricoeur in fact ascribes to phenomenology a crucial role, albeit one rather different from how phenomenology imagines it. Through the asymmetry between “external” objective time and “inner” phenomenological time that appears at the very moment when this analysis of time breaks down, two distinct temporal regimes emerge—along with the insight that neither of them can ultimately be derived from the other. In other words, no absolute foundation is possible. And yet, at precisely the moment when foundationalism must be abandoned, a new opportunity presents itself in dialectics.199

Phenomenology thus has a crucial contribution to make to contemporary scientific thinking. But not even the most fully developed phenomenological investigation can do justice to the reality connected to geology, biology, thermodynamics, quantum physics, and astrophysics. Some philosophers have made dramatic pronouncements about the limitations of phenomenology, as when Ingolf Dalferth writes: “Ever since it was invented by Husserl, phenomenology has been disturbed by the fact that the world exists.”200 Or when Don Ihde, tongue firmly in cheek, rhetorically asks: “How many phenomenologists does it take to detect a ‘greenhouse effect’?”201 As an alternative to the foundationalism that characterizes both naturalism and phenomenology, I will instead join Ricoeur in advocating a “broken” ontology and “an open-ended, incomplete, imperfect mediation.”202 This involves a view of scientific thinking that uses hermeneutics to develop dialectical mediations at the precise juncture where reality is divided into two cultures. Having abandoned any hope of definitively tracing knowledge back to a single foundation, we therefore face the challenges associated with developing a dialectical approach to the numerous paradoxes and asymmetries that phenomenology has disclosed.
Two separate cultures of scientific knowledge

In a lecture given in 1959, C.P. Snow coined the phrase “the two cultures.”203 In this talk he criticized the myopia and
narrow-mindedness of the contemporary elite culture in the face of the coming challenges of the modern world, identifying a dramatically widening gap between scientists and literary intellectuals: the former represented by forward-striving positivists (often holding up mathematics and physics as paradigms), the latter by opponents of modernization, intellectuals who did not regard scientists as worth engaging in conversation. In a previous era not so many years earlier, Snow argues, writers and artists had routinely mixed with scientists. He expresses deep regret that so little of the rich and formerly so vital knowledge culture of the nineteenth century had survived into the twentieth. Snow also identifies a worrying tendency for communication between these two groups to cease entirely, resulting in a disastrous division into two “cultures.” He likens it to two galaxies that have lost contact with each other: naturalism and humanism are becoming the starting points for two separate knowledge cultures—two kinds of scientific thinking.

When the medieval university emerged more than nine hundred years ago, at a time when classical culture began to be “reborn” in the form of a Renaissance, the use of Aristotelian philosophical models quickly became taken for granted. However, during the Scientific Revolution of the seventeenth century, an alternative scientific project appeared, albeit initially still within the parameters of that same Renaissance culture. Yet this new form of scientific thinking steadily emerged as a competitor by virtue of being defined by entirely different epistemological and ontological ideals. There emerged what might (anachronistically) be called two parallel scientific projects, represented by Aristotle and Galileo, respectively. It must nonetheless be remembered that this did not necessarily correspond to the actual positions held by Aristotle and Galileo as historical figures. They served, rather, as the emblems of two positions because, to simplify slightly, the desire to explain change in terms of a teleological understanding (“from within”) was presented as the opposite of the ambition of identifying causal explanations (“from without”). In strictly ontological terms, this was the emergence of a mechanistic worldview that eventually led to a growing gap between, on the one hand, a teleological view of the universe as arranged by design and, on the other, an image of a worldly machinery in which all things were taken to be connected by cause and effect. The outlines began to appear of a dualistic order in which culture and nature represent two discrete forms of reality: an “inner” world defined by freedom and teleological purposes and an “outer” world defined by causal necessity. It is not difficult to see that this way of thinking expresses an intellectual dichotomy that radically separates soul
and body, culture and nature, as if they belonged to two distinct “cultures” (even if, strictly speaking, only the former has the appearance of “culture”). At the same time, it is important to highlight the interdisciplinary, boundary-crossing tendency that has always existed between the two “cultures.” The humanities and the cultural sector in general have long been characterized by a growing interest in materiality—and, ironically, the body has become a key area of interest in the very “culture” that, according to body/soul dualism, could be expected to remain narrowly interested in the soul. Today, the humanities have also embraced a great many structurally oriented methodologies, along with new disciplines such as digital humanities and neuro-theology. Echoing Immanuel Wallerstein and his colleagues, we might ask: what happens to this distinction in an era when scientists themselves are arguing for the non-linear over the linear and for complexity over simplification, insisting on the impossibility of eliminating the observer from the observation process, and maintaining that the stable, temporally reversible systems described by Newtonian science represent only a limited part of reality and that nature itself is active and creative?204 In the book La nouvelle alliance, leading researchers in the natural sciences, Ilya Prigogine and Isabelle Stengers, have argued that it is necessary to form a new alliance between the two cultures. They argue that we should restore enchantment and coherence to the world on the basis of “a call to break down the artificial boundaries between humans and nature, to recognize that they both form part of a single universe framed by the arrow of time.”205

For the uninitiated, it can seem confusing that the two cultures—two such different projects of scientific knowledge, which initially developed in parallel and, for a long time, partly in competition with each other—have for two centuries sought to co-exist under a single “university umbrella.” Nor has it always gone well. If we want to understand why the two cultures nonetheless managed to stick together, we need to look for the primary reason beyond philosophy of science. In general, we underestimate the crucial role played by money and finance in the academic world. But if we want to understand the university and how researchers operate in practice, it is very much a case of following the money! The shared university umbrella has shown that it can operate as an overarching framework for legitimizing very heterogeneous activities—so long as there is an attractive financial arrangement. The idea of including the new natural sciences in the university has thus undeniably shown itself to be a successful maneuver. Yet it has become increasingly clear that the university had brought a cuckoo into the nest. Nevertheless, it was largely thanks to these “new” and “useful” fields of scientific knowledge that the university, together with the schools of engineering and economics that followed,
was able to persuade the state to finance other scientific areas. In this way the university, from having been an institution close to extinction in the eighteenth century as a result of crises and dissolution, became during the nineteenth century, and above all the twentieth, a powerful institutional pillar of society that the state was prepared to invest in and support financially. But perhaps the real problem was not the natural sciences themselves but the social sciences, which took the natural sciences as an ideal and accelerated the process by advocating disciplinary thinking in a dogmatic and positivistic fashion. The result of the various actors’ strategies for fulfilling narrow disciplinary requirements has been that universities have chosen one of the following paths: either exclude the humanities and social sciences from the concept of scientific thinking, so that only “science” in the sense of the natural sciences remains; or use the carrot and the stick to try to get these fields of scientific knowledge to adapt their methods and self-understanding to an ideal of knowledge based on the natural sciences. Sometimes the human sciences have simply been left to themselves, which has allowed them to exist within a carefully defined and protected domain, but at the cost of leaving their scientific status, and budget, highly uncertain. This is hardly a realistic path to the future.

It is easy to get lost in the scientific and philosophical battles that have taken place between the two cultures. The terminology used to define the problematic is itself complicated and treacherous—not least when translating concepts across languages—in an era when we have come to equate scientific thinking with science in the narrow sense of the natural sciences, forgetting that Wissenschaft in fact includes the entire spectrum of academic disciplines. If we consider the humanities’ dominant approach to studying human culture and its various modes of expression as we encounter them in literature, art, and religion—and factor in the ambition of preserving traditions that characterizes these subjects—both the contrast with the empirically oriented natural sciences and the difficulty of legitimizing one’s own activity become clear. But even social sciences that work with empirical data have often had problems living up to the standard set by the scientific ideal of the natural sciences and quantitative methodology. Issues of the generalizability of research findings have often become heated and have dogged qualitative researchers because, while some kind of claim to universality may be a hallmark of all academic disciplines, it is not obvious what the epistemology of every area of scientific thinking should be. Sometimes, conforming to this standard does not even seem desirable. Within the social sciences, for example, we can identify three kinds of
expectations derived from the natural sciences that have proven highly difficult to live up to: the requirement of predictability, the control of the object being studied, and the expectation of quantifiable precision, all of which are often difficult, if not impossible, to achieve in this field—and sometimes perhaps not even desirable.206

The outcome of this development, which originally started from the pressure created by positivism’s ideal of unified science, has played out repeatedly in debates about the place and scientific status of the social sciences and, in particular, the humanities. In this situation, repeated efforts have been made to divide the scientific field into separate epistemological and ontological domains by insisting upon different kinds of discontinuity between different disciplinary fields. Ultimately, the upshot has been that the humanities are largely regarded as the problem child of the academic family. In recent decades, the reasons why this situation has arisen have had less to do with discussions within philosophy of science or with epistemological or ontological considerations. The pressure to conform to a uniform disciplinary standard has instead come from new models for research communication, connected to accounting systems intended to measure scientific progress as production, using various quantitative measures and models for everything from publications to examinations. In these contexts, quality has tended to become a quantitative issue. The “magic of numbers” has dominated both research methods and the systems of evaluation and accounting. The conflict has thereby also been displaced, increasingly becoming a question of different formalized standards of evaluation. For example, the status of the monograph has been called into question, as has publication in a national language like Swedish and the possibility of using varieties of peer review, that is, more informal alternatives that engage a broader public beyond academia via publications, reviews, newspaper debates, and presses that reach a general readership. The general development, however, has moved towards highly formalized peer-review procedures, often involving “double-blind” reading, reserved exclusively for articles published in English in international journals that are directed primarily at a limited global research community of specialists within a particular area. This means that the ideal of unified scientific thinking in today’s “globalized positivism” no longer comes, as it were, “from within” the field of philosophy of science but “from without”: from the accounting models and economic management systems of a society based on competition. Yet the effect of adapting to this new research ideal has been that the human sciences in particular have become increasingly isolated from local and national cultural life—and thereby from their most important audiences. This is
just one of many unanticipated effects of globalization. But before we narrow our field of enquiry to contemporary philosophy of science, let us take a couple of steps back in order to see where exactly these two cultures originate.
Can people be studied using “scientific methods”?

In their book Making Modern Science, Peter J. Bowler and Iwan Rhys Morus describe the emergence of modern science. Their narrative is extensive and wide-ranging. And yet the reader has to wait a long time before encountering anything that is not pure natural science. The book is only about science in the narrow sense of the word, not scientific thinking or scientific knowledge in the broader sense that we have been using. Only after almost three hundred pages, in a section of Chapter 13 on the history of scientific development, do the authors consider issues relating to the emergence of the human sciences, and then the reader is suddenly confronted by the question: “Could human nature and society be studied by the methods of science?”207

It would have been unthinkable for scholars in the seventeenth century to take a natural-science approach, because they, like everyone else at that time, were convinced that the soul was supernatural in origin. However, Descartes solved the problem by clearly separating the soul from the mechanical body and referring to two distinct substances, a dualistic solution that has become paradigmatic. Later thinkers, such as Michel Foucault, have extended this perspective with investigations of how mental asylums, prisons, and schools were used to control human behavior within the framework of the emergence of the modern state in the nineteenth century. Nonetheless, the question remains: is it really possible to study the human world using the same methods as the modern (natural) sciences? Are they sufficient? If so, is there not a risk of ending up in the paradoxical situation that the study of human beings requires no special scientific knowledge of its own?

In a later chapter, Bowler and Morus try to answer the question by considering how psychology, anthropology, and sociology emerged as scientific fields. The critical question in this context is: to what degree can human beings or “human behavior” (as the authors insist on calling it) be regarded as governed by laws—and, if so, how could the identification of these laws be combined with some notion of human freedom—without the human sciences abandoning the justification for their own existence?208 What makes this problematic so provocative and acute is that the entire discussion, tellingly, is framed in terms of science in the narrow sense of the word, which means that the premises of the argument are in many
respects decided in advance—while a considerable portion of the disciplinary structure of the university is correspondingly excluded.

The question, deriving from positivism, of how far the methods of (natural) science can be used to study human beings can also be formulated by taking as a starting point the difference between participating and observing, and the difficulties associated with objectifying oneself. This is the basis of Hans Skjervheim’s critique, in which he voices his conviction that “we do not encounter freedom as a fact.”209 Dogmatically taking the objectifying approach as a starting point means forgetting that the human situation, far from having only one meaning, is fundamentally ambiguous. Instead, our thinking has to straddle two horses if we are to do justice to human behavior and the process of becoming human, since that which can seem “from the outside” like a causal development can also seem “from the inside” like a continuous series of more or less deliberate choices. Paul Ricoeur has argued for the necessity of correlating (“inner”) intentions, (“outer”) causality, and pure coincidences in order to capture the composite complexity that is a hallmark of human capacities.210 Yet this correlation in the realm of philosophical anthropology also implies a dialectical approach in the realm of epistemology, so that the research process is advanced by the researcher alternating between the roles of participant and spectator. The necessity of such an alternation between proximity and distance recurs continually in scientific work, and ultimately this capacity for distance is of strategically decisive importance for every process of knowledge development. This also raises complicated questions about the nature of the connections between, on the one hand, the epistemological tension between inductive and deductive methods and, on the other, the anthropological assumptions underpinning how we view the relation between teleology (intention-action) and causality (cause-effect). It is also possible to see how this tension between the two cultures informs the tension between structure and action within sociology as a discipline, as historically manifested in the conflict between Durkheim’s structural perspective and Weber’s action theory.
Humanism or anti-humanism?

One way to articulate what is at stake in this tension between “the two cultures” is to treat the entire issue as a question of the theoretical status of humanism or, more precisely, of how to determine the status of the subject in the tension between humanism and anti-humanism. In public debate, reference is often made in general terms to the importance of humanistic values, as if there were a consensus about what they mean. Humanism
today seems to be regarded as desirable and self-evident—but at the same time it appears vague and ill-defined. We notice far too rarely the brutal contrast between this naïve, uncritical praise for humanism in politics and the public sphere and the simultaneous theoretical anti-humanism of the scientific community, expressed in the dominant position of naturalism as well as in the dominance of post-structuralist epistemologies within parts of the humanities and social sciences. It is as if political debate in the public sphere is unaware, or simply actively dismissive, of the formidable critique that has been levelled at the assumptions about the human subject that underpin traditional humanism.211 Despite their good intentions, those who have sung the praises of humanism seem neither to have noticed the far-reaching consequences of the general pressure to adopt an empirically oriented naturalist approach that is being applied to practically all areas of disciplinary knowledge, nor to have grasped the anthropological implications of the entire gamut of post-structural theoretical positions.

I am convinced that we pay far too little attention to this contrast between the public recognition given to humanism by politicians and policymakers and the largely posthumanist—and often anti-humanist—state of affairs in the contemporary scientific community, which is interested more in dissolving the human subject than in investigating it. This conflict is as startling as it is overlooked, and unlike the maliciousness of practical anti-humanism, this theoretical anti-humanism is not a matter of destructiveness or ill-will but a genuine theoretical challenge. The fundamental issue is how it is even possible to consider what it is that makes us human in a posthumanist age.212

Yet these reflections from the field of philosophy of science are not confined to an isolated academic sphere. They are highly social concerns, closely associated with practical dilemmas, particularly at a time when society is becoming increasingly vocal about the demands it places upon universities and other institutions of tertiary education. When “society talks back” at a time when scientific thinking is changing from being an external force behind the transformation of our societies to an internal component present in almost every area of society,213 the gap widens. Thus, on the one hand, we find the humanist expectations of the public sphere and, on the other, the largely posthumanist epistemologies that dominate an academic world that has lost, or deliberately abandoned, its capacity to articulate some kind of humanism worth the name. This gap risks creating an alienated relationship to scientific thinking—and eventually perhaps an outright moral panic with far-reaching consequences for future funding opportunities.
We have already emphasized that what we designate as facts should actually be recognized as products of human actions with a complicated historical background in the scientific community. When the object of study is human beings, the challenges connected to how facts are constituted become even more complicated. Is it possible to recognize a human being as a fact? If we do, argues Skjervheim, it will mean treating humans as completed and finished entities, in accordance with a “fatalistic naturalism.” Revealingly, we ascribe this kind of universal determinism almost exclusively to other people—while regarding ourselves more as faciens (“doing”). Today, when we find it so difficult to relate the two cultures to each other, this discourse on facts tends to give rise to dual systems and divided communities, with the result that the human subject slides out of view and acquires an uncertain ontological status—at the same time as researchers become alienated from themselves. Skjervheim notes that “we must use verbs when referring to human beings”—and then adds a crucial detail: verbs can be conjugated in both the passive and the active voice.214 We here discern the outlines of a dialectical understanding of the human subject as somebody who can be recognized both as fact (facticity) and as faciens (freedom), both object and subject. Yet this presupposes that scientific thinking regards itself as a practice that, as it were, straddles two horses. In other words, the way modern people view themselves seems to be divided: on the one hand, they consider themselves absolute observers; on the other, they reduce all phenomena (including themselves) to objects and matter.215 This challenge, associated with the two cultures, must be addressed by all those who wish to represent some kind of humanism, and ultimately it may be asked whether we as scholars in fact need to adopt a slightly schizophrenic attitude.
The anthropological deficit in contemporary science

To a greater degree than other disciplines, the social sciences have become an arena for the conflict between the dominant scientific cultures of our time. Furthermore, the “outer” division between policymakers’ passion for humanism and the solidly posthumanist (or outright anti-humanist) reality within the academy has an “inner” counterpart in the division between, on the one hand, the dominant theoretical frames of reference for describing human beings within the social sciences and, on the other, the political and moral convictions held by most researchers. In recent years, this gap has been made more acute by the pressure on the social sciences to adapt to a positivist empiricism inspired by naturalism. It has been further
widened by the anti-essentialist tendency, found in strong versions of constructivism, poststructuralism, and what is vaguely referred to as postmodernism, to dismiss as idealism and essentialism every attempt to find a common, stable, and accepted starting point for an understanding of what it means to be human. All of this, in combination with the rapidly growing interest in neurology, computer science, biotechnology, artificial intelligence, and genetics, has left many researchers fundamentally uncertain about how to understand what it means to be human.216

When the humanities and social sciences, influenced by this new epistemological landscape, paint a portrait of human beings as decentred subjects largely governed by external social forces, or develop reductionist teleological perspectives on human beings as rational machines competing with each other for limited material resources and fighting with their rivals over power and status, very little remains to testify that those same beings might also be bearers of virtues such as equality and self-determination. Nor do these perspectives help us to understand why we think of ourselves as having attributes and values such as tolerance, democracy, and freedom—at the same time as the presence of such values finds little or no support in the epistemologies that we use in our research. If we restrict ourselves to theories of rational self-interest, in which human beings are depicted as continually involved in exchanges and therefore endlessly calculating in terms of cost and profit (which the anthropologies dominating the social sciences in fact do when drawing inspiration from rational choice theories, for example), it will no longer be possible to make any kind of connection between these theories and the experiences and convictions that we ourselves have as individuals and that we sometimes regard as among the most important aspects of our lives.

Christian Smith, whose work has strongly informed my thinking on these matters, refers to this gap as a “tension,” “disjunct,” “disconnection,” “apparent mismatch,” “fragmentation,” and “schizophrenia” among researchers.217 The growing divide between, on the one hand, the anthropological assumptions associated with dominant theories and research methods and, on the other, the researcher’s own convictions and personal life has led to a disconnect between theory and reality, the descriptive and the normative, facts and values, scientific explanations and human experience. This is the symptom of a split personality—an academic schizophrenia—with grave implications for scientific thinking.218 Shapin has highlighted a similar scientific paradox, which reveals itself in the gap between the objective, disinterested gaze of the natural sciences and the passions and interests of everyday subjectivity. This becomes apparent in the opinion that the more a body of knowledge is
regarded as objective and disinterested, the more valuable it tends to be regarded as a tool for moral and political action. The strange consequence is that the knowledge we credit with having the most to contribute to the solution of moral and political problems is at the same time often knowledge that is regarded as not having been produced or evaluated on the basis of human interests. Shapin argues that this idea—that the most important creator of values in our culture is associated with a type of knowledge regarded as having the least to do with moral values—is a legacy of the Scientific Revolution.219 In other words, it is no coincidence that modern scientific thinking today tends to be divided into two separate cultures of knowledge.
Words or numbers, qualitative or quantitative methods?

The scientific project is less uniform than we often imagine. On the contrary, as we have seen, what we call scientific knowledge often tends to split into two separate knowledge cultures—and these have often gone by many different names. When dividing the academic disciplines into different groups, certain conceptual categories frequently recur, for example idiographic versus nomothetic. Idiographic sciences focus on the individual (idios) and the unique. A biographical account of the distinctive characteristics of a human being is a clear example of an idiographic perspective, but nothing prevents those in the natural sciences from also directing their research at individual plants or animals. In contrast to this kind of scientific thinking, nomothetic research, echoing its etymology in the Greek word for law (nomos), focuses instead on the general, the universally applicable, and the rule-governed, something that is now also a common feature of research in the humanities and social sciences. This fact suggests that the idiographic and the nomothetic should not be treated as separate and intrinsically different domains or as mutually exclusive alternatives.220

Another way of describing this conflict, which has been evoked as one between “two cultures,” is to refer to the opposition between qualitatively and quantitatively oriented fields of scientific thinking. This is an opposition that has determined the discussion of scientific method for the past few hundred years and should be understood in relation to the opposition between the teleological tradition beginning with Aristotle and the mathematization of reality, with its division into primary and secondary qualities, developed by Galileo. We have already encountered this conflict as a battle between words and figures. Thus, the fascination with mathematical explanations that was a defining feature of the Scientific Revolution has returned in our
own era in the form of a new Renaissance of prioritizing figures above words. Qualitative and quantitative research methodologies differ from each other very markedly. Qualitative research uses documents, interviews, participant observations, and other methods that yield "loosely structured data" whose analysis leads to results that cannot easily be generalized or considered as regulated by some kind of law. Quantitative research typically relies on far larger bodies of material, which can be examined using statistical and mathematical methods, and is therefore defined by an ideal of quantifiability that enables generalization. It is sometimes suggested that qualitative and quantitative methods are mutually exclusive, and there have been frequent battles between the two methodological camps. Basically, however, both methods examine the same reality and can be effectively combined in the same investigation—even if it is important not to mix them together, for example, by trying to draw quantitative conclusions from a qualitative sample. Yet this does not mean that their differences correspond in any sense to the difference between subjectivism and objectivism—because quantitative and qualitative researchers alike make scientific claims. In a broader perspective, the opposition between the two cultures can be seen as resulting from the fact that an alternative knowledge project, based on empirical data and with a mathematical focus, emerged in tandem with the Scientific Revolution. But it is not obvious that this new quantitative approach should be regarded as a separate culture that is entirely unconnected to a qualitative knowledge culture. In holding Galileo up as a leading representative of a quantitative scientific ideal that translates words into figures and qualities into quantities, we should also nuance this history in order to provide a more balanced picture. The Galileo who exemplifies mathematization, mechanical instrumentalization, and the breakthrough of experimental science was also the same Galileo who published his ideas in the form of dialogues. This dramatic device is also significant in terms of content. When, in The Dialogue Concerning the Two Chief World Systems (1632), he allows three people—Salviati, Sagredo, and Simplicio—to represent three different approaches, he is using an expressive form taken from a qualitative knowledge culture. The book is also written in the vernacular, which was extremely unusual at a time when it was taken for granted that Latin was the appropriate language for such communication, and which clearly indicates Galileo's ambition of extending the debate to include a larger audience. Galileo can thus be said to have been shaped by each of the two cultures—in addition to being clearly driven by a desire
to maintain a living communication channel between the research community and the society around it.
Continental and analytical philosophy

A philosophical variant of the opposition between the two cultures is the conflict between "continental" and "analytical" philosophy, two traditions whose opposition has been a defining feature of twentieth-century philosophy. In fact, there is no historical equivalent, in terms of schools or geography, to this irreconcilable division within a disciplinary field of knowledge, which has resulted in two vastly differing, entrenched camps defined by positions with utterly different stylistic ideals and almost entirely without common starting points, concepts, or problematics. Nor do other fields have their own versions of "continental" biology, physics, and so forth.221 Yet it should be remembered that this division is relatively recent; there was a lively exchange of philosophical knowledge between Britain and the Continent right up to the nineteenth century. As with other articulations of the contradiction between the two cultures in contemporary scientific thinking, this contradiction is not symmetrical, as can be seen from the fact that one of the camps has a theoretical designation while the other's is geographical. As a designation, "analytical philosophy" stems from Herbert Feigl, though the tradition itself is usually traced back to 1912 and the meeting of Bertrand Russell and Ludwig Wittgenstein. The term derived from the fact that neo-positivistic circles argued that solutions to philosophical problems should be sought through linguistic and conceptual analysis. They therefore developed a kind of critique of language that presented analytical philosophy as opposed to the modern synthetic and transcendental premises of continental philosophy. The designation "continental philosophy" came "from without" and derives from John Stuart Mill, who coined the term in a book review in 1832, but it only became an established term after the Second World War, retaining its "British" perspective. The logician Gottlob Frege and the phenomenologist Edmund Husserl have become the ruling deities of these two dominant modes of doing philosophy. Interestingly, the two have much in common. Both were mathematicians, and both were critical of speculative theories and metaphysics. Both were interested in the challenge of establishing the foundations of all scientific thinking. But they parted ways when it came to the methods by which this was to be achieved: whereas Frege started from formal logic, Husserl insisted that the foundation of knowledge lay in phenomenological experience as expressed in acts of consciousness.
Yet these positions and their subsequent history are not without ambiguities, nor are the differences hermetically sealed. It should be remembered that in continental Europe, just as in the Anglo-Saxon world, in some sense there have always been "continental" and "analytical" philosophers. Philosophy on the (European) Continent is, moreover, far too heterogeneous a phenomenon to be lumped together as a single school. It includes transcendental philosophy, phenomenology, hermeneutics, psychoanalysis, Marxism, and structuralism as well as analytical philosophy, all positions that have relatively little in common with each other. As a rule, analytical philosophy has been more uniform, though there have also been variations, differences, and oppositions, as well as analytical philosophers such as Charles Taylor, Richard Rorty, and others who have taken a serious interest in continental philosophy. Still, if we were to extract the key features of these two schools, we might say that analytical philosophy has a tendency to start from philosophical problems that it then analyses, whereas the reflections of continental philosophers often begin by turning to individual philosophers in history who in some exemplary (or cautionary) fashion can be said to have treated a particular topic, only thereafter developing their own line of argument on the issue. We might also say that analytical philosophy has been more influenced by the research ideals of the natural sciences as regards both style and forms of publication (the dominance of the scientific article), while many continental philosophers are closer in spirit to the humanistic research tradition (and were for a long time more likely to write monographs). This opposition has occasionally escalated into a conflict between epistemologically oriented conceptual analysis and ontologically oriented phenomenology. During the last decades of the twentieth century, the philosophical "Berlin Walls" also began to come down. It became harder to sustain a hard and fast distinction that had been a hallmark of philosophy earlier in the century. Perhaps this can be seen as a consequence of globalization in the realm of philosophy. In this new situation, as philosophers found themselves forced to deal with previously unknown strands of philosophical thinking, philosophers such as Paul Ricoeur, Karl-Otto Apel, and Richard Rorty rose to prominence as thinkers who had long gone against the prevailing trends by investigating the complex relations between the two philosophical "cultures." An illustrative example of the opposition between "analytical" and "continental" approaches is the opposition within the problematic of time between the relentlessness of "objective" cosmological time and the "subjective" phenomenological experience of time. In this context, thinkers
like Bertrand Russell and Martin Heidegger are often presented as the primary representatives of analytical and continental philosophy, respectively. They can also be said to have focused, consistently and clearly, on the objective time of "the world" (Russell) versus the subjective time of "the soul" (Heidegger). In a series of extended investigations in the 1980s, Ricoeur addressed the issue of how time becomes human. The way in which he sought to orient himself and mediate between different temporal philosophies clarifies in a number of respects both how each culture's philosophy has developed and how the "two cultures" might be joined. It is entirely legitimate to examine time from these two fundamentally different perspectives. On the one hand, we associate time with something quantitative and empty, a clock time that advances relentlessly, whether or not we notice it, and that is regulated by the "objective" movement of the planets. Yet we are also made aware of the existence of another kind of time, for example, when we become bored and feel that time "goes slowly," or when we are enjoying ourselves and feel that time "goes quickly." These experiences of time attest to the fact that clock time is not "all time": there also exists another kind of time that is bound up with our "inner" experience of time, a time whose form is determined by its content, a qualitative time that is anchored in the present but that can also be stretched "backwards" and "forwards" in the form of memory and anticipation. Having been confronted by the inhumanity of clock time, which pays no heed to how we live or experience time, it is certainly appealing to imagine, as Husserl and Heidegger do, that phenomenological time might be synonymous with a truly human time. Yet Ricoeur rejects such a solution on the basis of the fact that phenomenological time, while admittedly necessary, is insufficient, for example, to explain why time continues even when "my" time comes to an end with the passing of the generations. By way of an alternative, Ricoeur has argued that human time should instead be connected to a kind of "third," mediating time, a fragile time that can only be established and sustained with the help of narrative "connections" that historical time establishes by inscribing inner time into outer time. Once again, the starting point is a dialectical position in which we are forced to work with a "broken" ontology in place of a foundation or a stable synthesis. This third time involves a kind of "hybridized time," which, being a heterogeneous synthesis, can really only have the character of a "joint" between the dominant conceptions of time.
Phenomenology and naturalism

The conflict between humanism's assumptions about what it means to be human and the theoretical antihumanism that features prominently among the dominant epistemologies in the natural sciences is nowhere so pronounced as in the field of healthcare. The collision taking place there, between the two scientific projects, is further exacerbated by its context as a strategic field of conflict between contemporary political factions and economic priorities, where difficult questions are being raised about the value, autonomy, vulnerability, identity, and meaning of human life. It is no exaggeration to say that medicine is one of our era's most successful knowledge cultures. Yet the extraordinary advances in medical research have also been revealed as having a dark side. It appears that there is a tendency to "lose sight" of the distinct human being when naturalistic research environments are combined with organizational ideals defined by large-scale production, specialization, and standardization. In other words, the fact that an anthropological deficit has arisen is to be considered not as the result of ill-will but as the negative "downside" of something that is fundamentally highly positive. The extraordinary advances made by medicine in the past century were in reality only made possible by an intense process of reduction that sometimes devolved into reductionism (see the following chapter). In this way, a curious situation has developed in which the very area of scientific thinking that our own era so closely associates with humanism and human dignity is at the same time characterized by profound difficulties in finding a language able to articulate the human condition. Paradoxically, it is this relentless reduction, together with the intensifying demands for specialization, that has made possible medicine's extraordinarily successful achievements—while also creating the many dilemmas that now face us. In the field of medicine, these advances have largely followed upon the successful isolation of pathological processes in the different organs and substances of an objectified body. This has given rise to a uniformly naturalistic approach that has "bracketed," as it were, the human lifeworld. Nature has thereby been separated from culture, naturalism from lifeworld, allowing researchers to study causal connections and to stage interventions that have led to a dramatic proliferation of explanations for the workings of the human body, and, with it, a phenomenal improvement in public health. In fact, it should come as no surprise that the conflict between the "two cultures" has been so acute particularly in the field of healthcare. It is here that the necessary isolation and objectification of diseases by medical
pathology meet the phenomenological lifeworld experience of that same disease in the patient, who is also considered a human being with a lived body, i.e. not only an object but also a subject. The emotional distance from the patient that is an integral part of medical research methodology is as much a necessary component of the extraordinary advances in research that have been achieved as it is a problematic approach in human terms. We might say that the symptoms experienced by the patient have been made secondary in relation to the signs that medicine is primarily interested in. This conflict of values reminds us that making diagnoses in healthcare calls for an ability to manage the double competence necessary for maintaining a consistent perspective when technological interventions and subjective experiences, signs and symptoms, come into contact. The concrete challenges presented by these situations are in large part about how we can incorporate, endure, and, indeed, appreciate the considerable degree of distanciation, objectification, and alienation associated with the modern natural sciences—while not allowing healthcare to lose sight of the patient's distinct humanity. The scientific challenges currently faced by medicine and the healthcare system become clearer and appear in exemplary fashion to anyone who wants to investigate the relation between humanism and naturalism. This is a framework in which the two dominant knowledge cultures of our era are colliding violently with each other. Questions about how this constellation should be handled are also crucial to the readjustment towards person-centred healthcare that is taking place in Sweden and other countries. The two cultures come together, albeit not symmetrically, in these questions about how to move from a patient-centred to a person-centred approach: whereas the patient-centred approach lacks an understanding of what a person is, the person-centred approach implies that the person may also appear in the role of a patient. When we talk about patients, our focus is on what a patient is; a person-centred approach instead focuses on who the patient is as a person (which does not prevent the person from assuming the role of patient). It is important to develop a concept of the person that also factors in all conceivable determinants of "what" that person is as a patient, in order to remain inclusive and avoid ending up in a position antagonistic to medicine. This asymmetry is reinforced by the fact that the concept of the person does not exclude the role of patient but rather presupposes the development of a composite view that relates the two cultures to each other. In a person-centred approach, the medical perspective is entirely necessary, though not sufficient, in order to do justice to the fact that healthcare professionals are encountering a human being—a person—when they treat patients. But just as the knowledge culture of medicine can often be skeptical about transcending
its epistemological framework to involve methods and perspectives that are not grounded quantitatively in physiology, so, too, can a phenomenological lifeworld perspective that starts from the lived body easily fall into an absolute opposition to medicine's naturalistic knowledge culture. In order to make the "journey" from "what" to "who" in this acute conflict between words and figures, quantitative and qualitative scientific methods, we need to introduce a narrative knowledge culture that nonetheless differs radically from the naturalistically defined knowledge culture of medicine. It makes no difference how many "whats" we add to the understanding of a person; if we do not use narrative, we will still get no closer to an understanding of "who" the person is.222 In this context, narrative should not be regarded as just one form of knowledge among many; the narrative knowledge culture is the "royal road" to recognizing ourselves as humans. Narrative humanizes time by creating cohesion within the passage of time and by connecting our "inner" experience of time to the "outer" time of the world. This is also the reason why we usually tell a story about ourselves when we want to introduce ourselves to someone and help them to get to know us. Narrative works as a kind of fabric from which our identity as individuals emerges. No matter how many attributes ("whats") pile up in our assessment of a patient, giving patients a human face requires narratives. In person-centred healthcare, the continual negotiation of narrative, which happens on the basis of a partnership between patients and professionals as well as documentation, forms a constellation that defines the fundamental outlines of a person-centred praxis.223 Person-centred healthcare is not about following some grand Theory or Manual but about an ethics focusing on responsible actions "aiming at the 'good life' with and for others, in just institutions."224 However, we need to remain attentive to the fact that the relation between the two cultures in this respect can only be established as an asymmetric relationship, in the same way as the partnership between patient and professional has to take the form of asymmetric reciprocity. Every living person is also a physical body, but this does not mean that all physical bodies are human beings. The fact that a person can appear in the capacity of a patient does not mean that all patients are likewise treated as persons. The status of and restrictions on human beings within the framework of a constellation between nature and culture must therefore also be understood as an asymmetric relation, since the fact that all people are animals does not mean that all animals are people. That human beings share 99% of their DNA with chimpanzees does not lead to the conclusion that there are no
differences between humans and chimpanzees. On the contrary, it could be argued that the socio-cultural world of humans differs 99% from the world of chimpanzees.225 What is more, the difference between humans and chimpanzees becomes even more pronounced when we vary the scale and consider not genes but the ability to collaborate socially in larger groups. The threshold usually occurs at populations of around 150 or more individuals, but when this number is increased to one or two thousand, the differences become "astounding." It turns out that human beings, unlike chimpanzees, have a unique capacity for collaborating with a large number of strangers and interacting with them by means of ideas, concepts, fantasies, and institutions such as economies, religions, and democracies.226 This asymmetry also determines the conditions for possible mediations between the two cultures. Scientific thinking in our era is fundamentally shaped by the tension between the two cultures of knowledge, a conflict that has deep roots and that can be articulated in various ways. Currently, humanism and naturalism seem to be in a state of violent collision that generates major challenges, particularly for scientific thinking. But if each of us ultimately embodies two entirely legitimate perspectives, both of which have proven to be productive, what exactly is the problem?
Reductions—not reductionism

Reductionism is often bandied about as a term of abuse, as when people reflexively accuse natural scientists of being reductionist or criticize economists for the particular kind of reductionism known as "economism." Yet the question then arises: is someone automatically reductionist because they work in the natural sciences or economics? In order to clarify what we really mean by reductionism, and why reductionism can be considered problematic, we need to start by asking what a reduction is. The word reduction comes from the Latin reductio and the verb reducere, meaning "to lead back to a starting point" but also "to put in its place." All scientific thinking involves reduction in the sense of moving from multiplicity to something simpler by taking things back to fundamental premises. In other words, reductions are completely necessary; they constitute an obvious element of the basis for knowledge development and are a precondition for our entire scientific culture. Not least, they are also prerequisites for how disciplines and scientific areas of knowledge come about. We use different kinds of reductions every day in order to trace experience back to something that we think of as fundamental, an origin, a genesis: these can be different kinds of empirical materiality, language,
society, existence, and so forth. An empirical reduction counts as true only what corresponds materially to empirical observations and experiments; a linguistic reduction views the world entirely in terms of linguistic phenomena and activities; a sociological reduction treats all aspects of reality as social phenomena and interactions; an existential reduction focuses exclusively on the existential conditions of human life; and so on. In other words, reduction is not only entirely legitimate, it has also proven itself to be extraordinarily successful for those wishing to develop knowledge and scientific thinking. Talking about reductionism is only really warranted in those cases where we forget or ignore the limitations that accompany these reductions and begin drawing general conclusions above and beyond what is reasonable. Scientific thinking is therefore in large measure a matter of being able to differentiate between reduction and reductionism by developing an awareness of how different forms of reduction simultaneously impose limits upon one's own work. But reduction in itself is undramatic and unavoidable. Accusations of reductionism are usually targeted narrowly at those in the natural sciences, economics, or technology, and only rarely or never at the human sciences or theology. In these cases we often forget that all disciplines of scientific thought make reductions. Advancing knowledge is never innocent because it always involves some kind of reduction by virtue of the fact that it is never possible to encompass the totality of things. This also holds true of disciplines such as literary studies and theology, which are also forced to filter out a vast range of objects—most of them, in fact—in order to focus on their own subject-specific field of knowledge. These reductions can also be problematic when observations are traced back to specific points of departure, and even here there is a threat of reductionism whenever disproportionate claims are made on the basis of any particular reduction. Reductions also need to be balanced because they presuppose that there exists something that needs to be reduced. In his elaborations on the ontological challenges of scientific reduction, Michael Polanyi highlights the importance of the concept of emergence for developing a balanced understanding of scientific thinking as reduction. His starting point is that people can in fact emerge from a process of development as persons, even though it is not possible to trace these persons back reductively to lower ontological levels. Polanyi argues that both naturalistic causal explanations and various kinds of social constructionism are unsatisfactory because of their one-dimensional viewpoint. Too rapidly and simplistically, they transform their epistemological assumptions into ontological claims. As an alternative to this "flat" world, Polanyi describes
a multi-level reality whose various levels are continually interconnected, yet in complex ways. Persons exist at a higher ontological level than the sum of their attributes and capacities might suggest—even if these also represent the fundamental conditions of existence for that person.227 The point of departure for this emergence perspective is a stratified hierarchy of levels that operates according to a fundamentally different logic: bottom-up rather than top-down, the former relation being ontological and the latter causal. The fact that people "emerge" from the material world does not therefore mean that they can be reduced to nature. Polanyi is thus critical of those who see evolution as solely a process of continuous selective improvement: "Yet it is taken for granted today among biologists that all manifestations of life can ultimately be explained by the laws governing inanimate matter."228 People are unquestionably also governed by causal relationships, but they are not governed by causality alone because they have the additional capacity of exceeding any given order. In Polanyi's recognition of the person, a new level of ontological reality has thus been established and, with it, a complexity that is not fully reducible to something physical or psychological, bodily or intellectual. A common example of how reductionism creeps into everyday academic prose can be seen in the way scholars claim that love "really" or "ultimately" is "only" a matter of chemical processes. To be sure, our bodies are composed of a series of different chemical substances, without which we would not have human bodies.229 And it is entirely legitimate to reduce people to such chemical substances within the framework of scientific studies—but only as long as we limit our claims and remain mindful of temporal and spatial constraints. This kind of reduction becomes reductionism if, and only if, we claim that a living human "is nothing but" an accumulation of these elements. That a phenomenon must be described as something more than the sum of its parts is obvious in the case not only of humans but also of reality more generally. Christian Smith offers the following example to show the dilemma of reductionism: "To a reductionist, therefore, a Zen garden is nothing more than sand, gravel, rocks, and some organic plant matter."230 The same reduction that is a daily routine in medical research according to the methodological naturalism of the natural sciences easily becomes unreasonable if it is generalized without any awareness of its limitations. Habermas reminds us, once again, about the problems that arise "when a naturalistic worldview oversteps the boundaries of its scientific competence."231 When elements of academia can announce as a given that psychology is "only really" a kind of "operationalized biology," we need to
be mindful of the serious dilemma that such reductionism entails. Why do we hold people responsible for criminal actions and sentence them for things we regard as inhuman while not giving similar treatment to predators that attack other creatures or cats that toy with baby birds? Our practices attest to how we de facto expect more from a human being than we do from other living creatures, even as we lack plausible theories about why we think in such a way. Our scientific culture thus seems to suffer from a serious anthropological deficit. Reduction is entirely legitimate and also entirely undramatic. It is how we form knowledge in scientific work and cannot in itself be considered an expression of reductionism. But it is important to underscore that not only naturalism but also phenomenology uses reductions—and that both risk winding up with unsustainable conclusions and positions if they develop in a reductionist direction by making claims that they cannot live up to. Where positivists reduced the world to the sum of its facts, phenomenologists instead reduced things and facts to a lifeworld in which they ultimately only acquire meaning as things and facts for the subject that experiences them. In the spirit of naturalism, positivism's reduction sought to lead everything back to a given reality of objects and facts. Phenomenological reduction aimed instead to show how all knowledge can ultimately be traced back to the concrete intentionality of people's lifeworld and thereby reduced to the contents of a consciousness directed towards something. Naturalism and phenomenology exemplify two entirely legitimate and productive reductions. Yet both exhibit a clear tendency to assert priority by claiming that the other's perspective can be derived from their own. Thus, if they are unable to limit their claims in such a way as to remain open to other legitimate perspectives, they both risk turning into reductionism. Peter Kemp, whose arguments I am closely following here, has argued for—and himself presented—the draft of a "critique of the reducing reason" in which, building on Kant, he seeks to identify the limits and the potential of a reason that reduces for the purposes of explaining, understanding, and affecting the world in which we live.232 His starting point is also that reductions are fundamentally a positive thing—it is only when we make reduction into something absolutely "pure" (pure thought, materiality, language, society, existence, etc.) that reductions become problematic and warrant the label reductionist. Among the various kinds of reductionism, for example, he mentions positivistic reductionism (which in reality sees only positive empirical data), materialist reductionism (in which reality is slavishly reduced to pure materiality), structuralist reductionism (in which reality is reduced in its entirety to pure structures), and existential reductionism (which reduces everything in the world to existential questions within the
framework of a uniformly regressive order). Kemp nevertheless emphasizes that human life never consists solely of pure logic, pure empirical data, pure technology, pure language, pure society, pure existence, pure metaphysics, or pure religion. Rather, it is the very relationship between things like thought, experience, work, speech, feelings, and faith that we should focus on. The threat of reductionism today confronts scientific thinking, challenging us not to articulate our reductions so strongly that they exclude each other. Kemp himself argues that it is the task of philosophy to show the connections and contexts, and to dispel the false connections and deadlocks, in order to see more clearly the real conflicts, which can only be resolved practically.233 The real issue is to bring epistemology into hermeneutics by converting the theory of knowledge into a theory of interpretation. In other words, we are now standing on the threshold of the scientific thinking of an age of hermeneutics.
CHAPTER 10

LABORATORIES OF INTERPRETATION: UNDERSTANDING AND EXPLAINING
The ideal of unified science is one of the fundamental hallmarks of positivism. This ideal has nonetheless been criticized, and the challenges associated with managing this problematic have already been touched upon in the previous chapter. In this chapter we will proceed, by drawing on the resources of modern hermeneutics, to discuss the importance of making clear distinctions between competing concepts of interpretation that are relevant for scientific thinking in an age of hermeneutics. Around the turn of the twentieth century, it is possible to identify two dominant strategies for responding to the threats posed by positivism's program of unified science in general and, in particular, by the naturalism that dominated the burgeoning natural sciences. Within the natural sciences themselves, however, there was only a limited engagement with positivism. Instead, the discussion primarily hinged on two different ways of handling the epistemological and ontological challenges being faced by the humanities and social sciences, together with the more general issue of how to defend fundamental human values that now appeared to be under threat. We might say that it was positivism's extraordinary success, in combination with its ideal of a unified science, that rendered the situation acute and forced a response. Yet the two camps that resisted the positivists' attempt to realize their goal of epistemological and ontological continuity between all academic disciplines chose very different strategies to deal with the challenge. Either they followed phenomenology in trying to assert an even more radical position than positivism, by identifying something even more fundamental than empirical reality as given, i.e. the phenomenological experience of the world—or they argued for epistemological and ontological discontinuity and limited their claims to a defense of the humanities as an autonomous realm, within the framework of two parallel conceptions of science, as advocated by the hermeneutics of understanding. We have already considered the problems associated with each of these alternatives for all those wishing
to defend scientific thinking as a quest for truth. But if we abandon phenomenology's foundational ambitions, and the idea of separate realities according to a hermeneutics of understanding, and instead try to develop a scientific approach characterized by dialectical thinking, we will need a radically different concept of interpretation. For this kind of hermeneutics does not limit itself to a doctrine of understanding that offers protection to the besieged humanities but operates on the basis of the general relevance of hermeneutics by staging a dialectical interplay between understanding and explanation. The aim of this chapter is to discuss the resources that an updated hermeneutical position of this kind might provide for handling the challenges associated with the future of the scientific project. This also means that I will not present hermeneutics as some grand theoretical Solution or plug-in Method but will rather seek to reconnect the discourse on hermeneutics to scientific practices in terms of various laboratories of interpretation—interpretation labs.
The hermeneutic experience: miracle and restriction

Everyone who reads a book for the second or third time, especially if many years have passed, experiences the curious feeling of reading a new book. These experiences associated with acts of reading remind us that books are not empty containers with a fixed, pre-existing content that is timelessly preserved in texts, ready to be extracted whenever the need arises. Authors write only "half" of the text—"the rest" comes when we read. Books cannot "speak" for themselves. Indeed, texts are completely unable to "say" anything by themselves: people are required, and the active intervention of a reader who engages with their contents is necessary, before books and texts can provide any kind of meaning. When we reread a text after the passing of time in which we have acquired new life experiences and been changed as people, the result of the act of reading also changes—and so it continues in a process of interpretation that essentially has no end. The ancient philosopher Heraclitus declared that it is impossible to step into the same river twice, and in corresponding fashion we might say that it is impossible to "step into" the same text twice. "What happens when we read?" asks Olof Lagercrantz in the introduction to his book Om konsten att läsa och skriva (On the Art of Reading and Writing).234 At first glance, the question may seem obvious, almost trivial. But there is actually something quite fantastic about how a series of black marks on a sheet of paper or a computer screen can generate meaning. The alphabet consists of letters whose significance (with a few exceptions) is arbitrary, but when they are combined, they
form words, phrases, sentences, chapters, books, and libraries, which makes it clear that these letters have an extraordinary capacity to generate meaning. The combination of letters can create a common world of thoughts, people, and phenomena that we can share with each other—but only on the condition that we invest our own competence by reading these texts and thereby activate the potential inherent in their meaning-creating function. Lagercrantz does not hesitate to compare the act of reading to a minor miracle, one that is continuously repeated as we read, reread, and read again. In books, we meet people who in many respects resemble the living, but with one key difference: "when we reach out to embrace them, we grasp only thin air."235 In other words, here we experience not only a miracle, but also a restriction—proximity as well as distance. The text has an enigmatic ontological position and theoretical status in that it is both meaningless (as long as no-one reads it) and filled with a surplus of meaning (by virtue of being endlessly re-readable). The text might thus be considered, on the one hand, dead as a doornail—a materiality that has no means of acting on its own—while, on the other hand, a text that is incorporated into an act of communication through competent readers can begin "speaking" to us, becoming so alive that it can "almost" be considered a person. The obscure status of the text as a bearer of meaning also makes it necessary to adopt a dialectical approach when interpreting the paradoxical reality that a text constitutes. In the world of hermeneutic experience that arises between text and reader, we encounter both a miracle and a restriction. In order to manage this duality, we need to abandon all foundationalist claims and instead operate with a "broken" ontology, so that the fact that the text can be identified as both "lack" and "surplus" finds its equivalent in a concept of interpretation that straddles two horses. In this way, hermeneutics becomes a heterogeneous process focused on emergence and becoming, with substantial elements of distanciation, rather than the foundation of a homogeneous experience in opposition to the alienation and loss of meaning that distanciation invariably entails. However, this means that interpretation needs to take both the miracle and the restriction seriously, both as condition and as possibility. The act of interpretation can function as an activity that completes a text defined by lack because of its incomplete nature—while also having a selective function that reduces the surplus of meaning that the text can be said to be the bearer of. Unlike the unilateral homogeneity that is a hallmark of the efforts to reconcile world and consciousness in both positivism and phenomenology, contemporary hermeneutics represents a humbler approach that contents itself with a "heterogeneous synthesis" starting from a "broken" ontology.
In this case, mediation never dispels the paradox, and all syntheses remain heterogeneous in the interplay between proximity and distance, understanding and explanation. Today, as we forget and lose contact with the fundamental conditions for creating meaning in the interface between ourselves and the world, it is essential to highlight the miracle that occurs in the act of reading. If we consider this experience of a miracle, with which we create coherence in the world by making distinctions, it becomes harder to dismiss hermeneutics as merely another contrived means of complicating something that is actually very simple, unproblematic, and obvious. Yet it is only by making distinctions that we can share the world with each other. At the same time, we must never forget the restriction that manifests itself in the fact that we will always be grasping thin air when we try to embrace the people we meet in the world that language opens up to us. We encounter the same combination of miracle and reservation mentioned by Lagercrantz in the writings of Friedrich Nietzsche, whose aphorism "The book almost human" uses the word "almost" to insert a clear restriction about whether the text can have a life of its own: "it lives like a being furnished with soul and spirit and is yet not human."236 There is no miracle without restriction, no identity without difference, no understanding without explanation. Language gives us a world—yet is not a world itself. Not everything is text, and the hermeneutical sphere of influence is far from being limited to textual interpretation. In the hermeneutical tradition, however, the text works as a paradigmatic model for understanding the general conditions for possible interpretations. As we have already noted, regardless of whether it is a text, a person, a society, or an organization that is being interpreted, and regardless of whether the researcher is studying chemical substances, responses in an interview, or political systems, we can understand what interpretation means in these cases by using the analogy of "reading" and "writing."237 Hermeneutics can thus be said to have general relevance for philosophy of science, regardless of the field of scientific thinking or research orientation. And what at first glance may look like the "weakness" of interpretation is also its strength. The heterogeneous nature of hermeneutic experience and the "broken" ontology of contemporary hermeneutics differ radically from positivism's and phenomenology's grand ambitions of establishing an ultimate foundation for the world and knowledge. Unlike the hermeneutics of understanding, critical hermeneutics leaves us with a dialectic that has no end, no absolute knowledge or ultimate foundation.238 For this reason, critical hermeneutics seems to be the ideal
candidate for the reflective framework of a mode of scientific thinking that regards itself precisely as a project. The paradoxical position of the text and the dialectical structure of interpretation make hermeneutics appear to be a dynamic field, which reminds us that theories and manuals are ultimately never able to replace practices and people. The complex nature of the hermeneutic experience, which joins miracle and restriction, thus has something to teach us about the fundamental conditions for scientific thinking. This conceptual world, full of tensions, represents the most important contribution of hermeneutics to the challenges we currently face in coping with scientific knowledge. The constant presence of the restriction attests to the necessity of distanciation in the apprehension of texts and, indeed, in the development of all knowledge. Those who wish to develop knowledge must take a leap, leave their current position and their immediate understanding, and begin a journey that will turn every relation of affinity into a question of how distanciation can be made productive, in accordance with the fact that new discoveries presuppose inventions. Textual interpretation serves as a general model for hermeneutics in enabling us to understand how the act of interpretation organizes the hermeneutic experience in the interface between ourselves and the world. In the same way as we can read a book in endlessly new ways—particularly when some time has passed, or when we create distance by allowing others to read our texts—so, too, do we read the world in different ways. From the fact that we can read a text differently we can never conclude that it is possible to interpret the text however we like; nor can the fact that we "read" people, organizations, and society in different ways ever legitimize the relativistic attitude that we can interpret them as we like. This universalization of the discourse on hermeneutics also means that we foreground questions about the conditions under which knowledge is possible in general as well as the ontologically given preconditions of our entire capacity for knowledge-creation. Yet hermeneutics will only acquire this decisive importance for contemporary scientific thinking on the condition that it refrains from establishing itself as a special form of understanding within a separate cognitive territory of its own. The distinct character of contemporary critical hermeneutics is instead its openness to a world where a multiplicity of explanatory methods are related as a series of conflicts of interpretation. Critical hermeneutics is thereby characterized by a radical openness to a potentially infinite number of approaches. And if hermeneutics is not to exacerbate the division into different ontological and epistemological domains, it must nonetheless follow critical hermeneutics in developing a concept of interpretation able to embrace the entire
spectrum of explanatory procedures. Only in this way can interpretation serve as a universal theory of mediation for scientific thinking in an age of hermeneutics. As already noted, however, this presupposes that we understand the miracle in such a way that we do not fear the restriction—but, on the contrary, embrace it.
Misunderstandings about hermeneutics

What is hermeneutics? What do we really mean by interpretation in the tradition of critical hermeneutics? There are a number of persistent misconceptions about hermeneutics that repeatedly create confusion for those seeking to position hermeneutics within the landscape of philosophy of science. These attempts to define interpretation alternate between a position of resignation, with clear elements of relativism, and more scientistic attempts to present methodological frameworks for interpretation that try to live up to the uncompromising ideals of positivism. Yet these concepts of interpretation, rather than opening up new avenues for developing scientific thinking, serve to reinforce the problems with which philosophy of science is currently engaged. Let me therefore underscore what interpretation is not by identifying some of the more important common misunderstandings that arise in seminars and the scholarly literature about what interpretation might reasonably be taken to mean. The first of these is a relativistic misunderstanding, which finds expression in the general and vague concept of interpretation as it is sometimes encountered in academic essays and dissertations that claim to use a "hermeneutical method"—yet show little more than uncertainty as to what they have done. It is unfortunate, to say the least, when a slightly tricky term such as hermeneutics is used as an alibi for generally toning down one's claims to be thinking scientifically. This specific misunderstanding risks turning hermeneutics into a kind of "soft" relativism that merely confirms the positivistic idea that there exists a foundation of certain knowledge and facts accompanied by a superstructure comprised of values and interpretations. But when such scholars thereby try to legitimize the presence of contradictions within their own scientific philosophical positions, they are not in fact articulating a hermeneutical position. Whatever one chooses to call it, this kind of intellectual capitulation cannot be said to have any legitimacy within contemporary scientific philosophical discourse. The second is a theology-fixated misunderstanding, which seems to derive from an inability to handle the purely historical circumstance that interpretation theory has its origins in biblical exegesis, with the result that it thereby becomes drawn into science's own complicated relation to
religion. Yet this way of thinking is clearly a mistake, since it gives the impression that a hermeneutic interest in knowledge is, in some more specific way, associated with theology and religion, when in reality it merely testifies to the historical background of hermeneutics in various regional contexts of interpretation theory, in which theology figures alongside jurisprudence and philology; the real issue is to highlight hermeneutics as practical philosophy. Hermeneutics today has no more to do with biblical exegesis than the university is obliged to restore its ecclesiastical origins—and in neither case do theology, the church, or religion have a significant influence upon current issues and challenges in hermeneutics. The third is a text-fixated misunderstanding of hermeneutics, which has probably come about as a result of the importance of textual interpretation as an exemplary model in the hermeneutic tradition. Focusing one-sidedly on texts risks sending the entire discussion back to an era characterized by regional theories of interpretation, before the establishment of a general hermeneutics. If we then further delimit the interpretative problematic, so that hermeneutics is considered as only having relevance for the interpretation of literary texts, while also trying to establish a clear distinction between "historical" and "existential" hermeneutics, we eliminate entirely the general claim of hermeneutics and overlook the fact that the text is actually a paradigmatic model of what interpretation involves generally, thereby rendering invisible the universal presence of the problematic of interpretation in the interface between ourselves and the world. It is in light of all this that we can understand the misunderstanding that has arisen from, as it were, trying to understand hermeneutics on positivism's terms (or those of a modified version of positivism). We can therefore speak of a fourth, positivistic misunderstanding, which involves portraying hermeneutics as a doctrine of understanding that fully conforms to the conditions imposed by the hypothetical-deductive method, with the result that the interpretation process is reduced to mere hypothesis testing. Hermeneutics here takes on the aspect of something one resorts to after having failed to understand and being forced to return to one's material in order to extract a hypothesis. In order to understand the meaning of a text, one throws out a hypothesis, then tests it, and if it does not work, one changes it so that the relation to the text is changed. New hypotheses continually make us see texts and materials in new ways.239 The dynamics of the hermeneutic experience have here been reduced to hypothesis testing. The fifth misunderstanding I wish to highlight is the psychologistic misunderstanding. This comes from the insight that authors are necessary
for the production of texts—which do not appear by themselves from a discourse or a language game—and intuitively leads to the notion that the meaning of a text should match the author's underlying intentions in writing it. In German Romanticist hermeneutics, this line of thought was developed by Wilhelm Dilthey, who devised a method of interpretation focusing on "the world behind the text" in which the goal of hermeneutic understanding was to recreate the author's intentions. This nostalgic epistemological interest is focused on reproducing the psychological life given expression in the text, within the framework of a Romantic notion of language as the expression of a prior experience. In the following section, we will therefore consider in greater depth the hermeneutics of understanding developed by Dilthey, but for now we can note that his aim in presenting interpretation as a doctrine of understanding was to define a method that might lend scientific legitimacy to a humanistic understanding in confrontation with the explanatory methods of the advancing natural sciences. Although his theory introduced certain objectifying elements, his conception of interpretation ultimately remained trapped within a Romanticist interest in how life finds expression in signs. Examining the ramifications of this doctrine of understanding, we see in the literature on the philosophy of science how hermeneutics has often been used as an instrument for establishing a kind of reserve for lifeworld knowledge, in which interpretation is limited to understanding how different people experience their situation and imagine the world. The interest in subjectivity here forms a loose connection to a relativist understanding of hermeneutics. In brief, it presumes that interpretation involves not taking an interest in how the world is, while hermeneutics represents a kind of "experience research" that focuses solely on how the world is apprehended.240 This kind of "experience science" not only has a highly dubious theoretical status; its concept of interpretation also implies an impossible philosophical anthropology. If interpretation were merely an issue of understanding people's intentions and broader lifeworld, hermeneutics would quickly be reduced to phenomenological lifeworld analysis, and qualitative methods would ultimately risk becoming limited to a quest to "achieve an understanding of individuals or groups of individuals."241 Understanding how people imagine the world is undoubtedly an important component in scientific knowledge production, but, as we have noted, it can only ever be one component of scientific thinking among others. Making these experiences into a truth claim is hardly a tenable position. If we are to take
people seriously, far more will be needed than simply trying to take their experiences of the world seriously.
Hermeneutics of understanding: promoting two parallel ways of being scientific

Modern hermeneutics is conventionally said to have been founded by the German theologian and Romanticist Friedrich Schleiermacher (1768–1843). The reason he is ascribed this role is that he formulated a general theory of what it means to interpret. He thereby deviated from a tradition that divided hermeneutics into different areas, with separate principles of interpretation for jurisprudence, philology, and theology. The discipline-crossing openness and general philosophical direction that Schleiermacher gave hermeneutics has also made him a perennial object of interest in the subsequent development of interpretation theory. Since modern hermeneutics can thus be said to have emerged in tandem with a clear claim to universality, it may seem slightly strange that Wilhelm Dilthey (1833–1911), barely a hundred years later, would use hermeneutics to develop a diametrically opposed strategy, namely, to make a radical division of the scientific field into two fundamentally different epistemological domains. In opposition to positivism's ideal of disciplinary thinking, Dilthey argued that there was in fact no continuity between the sciences; rather, there were two completely different forms of scientific thinking: the natural sciences (Naturwissenschaften) and the human (or cultural) sciences (Geisteswissenschaften).242 Let us review the historical context for this move. The social circumstances of the late nineteenth century differed radically from those of Schleiermacher's time. In less than a hundred years, Germany had gone from being a divided patchwork of Lilliputian states whose Romanticist culture was dominated by poets and writers to a unified Germany of steel and iron which, after a period of extraordinary growth in the second half of the nineteenth century, had risen from the European periphery and been transformed into an industrial giant dominating the entire continent. The challenges it faced had also changed radically. The rapid advance of the burgeoning natural sciences and a dominant culture of scientific naturalism, coupled with positivism's insistence upon disciplinary thinking modelled (somewhat against their will) upon physics and mathematics, created a pressure that put scholars and disciplines in the humanities up against the wall by questioning their scientific legitimacy and methodological status.
As the causal explanatory model of Galilean scientific thinking, which proceeded upon the basis of the mechanical cause-effect relation, faced off against an Aristotelian model defined by teleological understanding in terms of intention-action, Dilthey turned his attention to hermeneutics with the aim of formulating an epistemological defense of the humanities against this looming naturalization. Dilthey saw how Schleiermacher’s concept of interpretation could serve as a resource, but in the new situation that had arisen a century later the Romanticist’s ideas would be stripped of their universalist claims and instead be used as the methodological foundation for a humanistic doctrine of understanding that from its very beginnings took aim at the explanatory procedures of natural science. For those who, following Romanticism, imagined the text as a kind of fossilized expression from a preceding eruption of meaning, this also meant putting at centre stage the task of interpreting a text through empathy (Einfühlung) with the authorial subject’s intentions. One consequence of this aim—to reproduce, through understanding, the creative act “behind” the text—was that interpretation came to seem like an attempt to “resurrect” the author. Although this kind of hermeneutics, derived from Schleiermacher, had by the turn of the twentieth century been incorporated into a more complex context, this concept of interpretation became a key component of a psychologized understanding of culture—the opposite of the natural sciences’ explanation of nature.243

Within the framework of this epistemologization of hermeneutics, the humanities were ascribed a self-understanding that from the very start made them diametrically opposed to the natural sciences. The concept of interpretation was thereby inscribed into a long Western tradition of dualisms that drew a sharp line between body and soul, nature and culture, necessity and freedom. In opposition to naturalism, Dilthey claimed that human beings, having free will, could not be explained in terms of laws. From there, he proceeded to argue for interpretation as a method of understanding that used insight and sympathy to recreate and resurrect the intentions of people in “the world behind the text.”

Few things have generated as much confusion in contemporary hermeneutics as the fact that modern hermeneutics, from Dilthey to Gadamer, has engaged with the philosophy of science debate in terms of a hermeneutics of understanding that claimed to operate as a humanistic methodology. Although the notion of an epistemological break between different realms of scientific thinking gave the humanities a protected area for their activities, scientific knowledge cannot survive and develop in a cognitive safe zone of this kind. And because this intellectual figure has been reiterated in the methodology books as a contemporary hermeneutic
position, despite its being more than a hundred years old, the real contributions made by hermeneutics to the challenges that philosophy of science confronts today have been obscured. The legacy of the German hermeneutics of understanding, with its strong focus on the humanities, has nonetheless plagued every discussion of interpretation for more than a century, and it is hard to exaggerate the negative significance for hermeneutics of the fact that this theory of interpretation has been incorporated into modern epistemological debates as a claim that there exists a radical discontinuity between these different realms of scientific thinking. One consequence has been that it has helped to establish a radical dichotomy between humanistic understanding and (natural) scientific explanation, which has been devastating for the field of scientific thinking in general and the concept of interpretation in particular.

Within the framework of the canonized history of German hermeneutics, Martin Heidegger initiated a comprehensive settling of scores with the one-sidedly epistemological focus of the hermeneutics of understanding with a view to anchoring it ontologically instead. Despite this shift from epistemology to ontology, German hermeneutics nonetheless remained within the domain of understanding. Hans-Georg Gadamer (1900–2002), who presented what in many regards must be considered the most ambitious synthesis of German hermeneutics, was shaped by this viewpoint in that his own concept of hermeneutics is ultimately limited to providing the humanities with a foundation in philosophy. The hermeneutics of understanding thus tends to inscribe itself within the Western dualism in which body and soul, matter and spirit, are represented as different substances relegated to separate epistemological and ontological domains. Even when we include the variations within German hermeneutics, its concept of interpretation, from Schleiermacher to Gadamer, has remained confined to domains of understanding. Its focus has shifted between a psychologizing interest in authorial intention and the world “behind” the text (Dilthey), an understanding of Being that adopts an ontological perspective as the basis for an attempt to reveal all the given preconditions of human existence (Heidegger), and the more composite strategy pursued by Gadamer, which seeks to bring together the horizon of ontological preunderstanding and the concrete task of interpreting and understanding texts. Throughout it all, hermeneutics with its humanistic understanding and lifeworld perspective has been counterposed to the explanations of the natural sciences and the distanciating chill of modernity.

Gadamer’s synthesis of German hermeneutics admittedly marks a clear shift away from Dilthey’s position, replacing the latter’s psychologizing attempts to reconstruct the author’s original intentions with an interest in the fusion of
horizons through which the horizon of the text/work fuses together with that of the reader/observer by means of language as the medium of interpretation. The dichotomous structure nevertheless lives on in Gadamer’s understanding of hermeneutic experience as a truth experience, in which truth takes on the more limited meaning of a disclosure that may not be disturbed by methodological distanciation.

The title of Gadamer’s great work Truth and Method (1960) can admittedly be regarded as something of a wet dream for a graduate student: it looks as though here we get both truth and method—what more could one ask for? Yet the book offers no clear “way” (= method, methodos) to follow—instead, its author deliberately presents the concepts of truth and method as diametrically opposed to each other. Gadamer’s position is thus so determined by his polemic against positivism’s excessive faith in methodology that he tends to see methodological work in general as a threat to the truth experience that his version of hermeneutics seeks to disclose. What Gadamer presents is an investigation of the hermeneutic experience as something that can only be developed on the condition that one defends it at all costs, without distanciating oneself from the immediate presence of dialogue. This kind of hermeneutics thus remains a distinct theory for the human sciences, but the question is: can any field of knowledge within the humanities limit itself to pure understanding? Gadamer’s estrangement from natural science and his resistance to the methodological distanciation of modern scientific thinking tend to situate truth (in the ontological sense of “disclosure”) and method (in the epistemological sense of correspondence)—and likewise understanding and explanation—in a devastating and implacable opposition to each other.244

We are thus thrown back upon the notion that there are supposed to exist two parallel forms of scientificity. At the turn of the twentieth century, it was perhaps necessary to temporarily defend some academic disciplines against the encroachment of the natural sciences, and perhaps it was this intellectual model that legitimized the use of Aristotelian-inspired teleological explanations (or should we say “understanding”) to defend their place within scientific thinking alongside the natural sciences that had now joined them under the university umbrella. And yet there is no getting away from the fact that this divide resulted in the hermeneutics of understanding primarily serving as a defense and a legitimization of an epistemological ghetto and an accompanying ontological safe space for the beleaguered human sciences.
Cognitive balkanization in the modern university

We have seen how the process of change that we have come to call the Scientific Revolution steadily led to the emergence of an entirely new scientific project, which began in Europe and then spread to the whole modern world. But we have also seen that this knowledge project was characterized by an entirely different epistemology to that which dominated the medieval university. As a result of numerous reductions effected by a process of scientific specialization that is still ongoing, new disciplines, theories, and methods are continually being developed. What we call scientific thinking has thus become an increasingly complicated phenomenon to understand. The question of how to safeguard this growing multiplicity without the field splitting into separate worlds, and science thereby being lost as a joint project, has repeatedly been raised, becoming more and more intractable with the passage of time.

If we change the terms in which we discuss knowledge and organize knowledge development, and adopt an aspect-based approach instead of territorial metaphors of areas of knowledge, we will also be forced to take more seriously the numerous conflicts of interpretation that arise. In this light, it seems appropriate to use hermeneutics to describe the conflicts and challenges facing contemporary scientific thinking. At a time when one interpretation after another washes over us, albeit primarily from the digital information system that is in the process of transforming our world rather than from the academic seminar room, it is no exaggeration to say that we are living in something of an age of hermeneutics. Under these circumstances, however, if hermeneutics is to make a significant contribution to the project of scientific thinking, we must settle accounts both with a number of intellectual models that have long held a dominant position within the hermeneutic tradition and with the way that they are linked to the organization of scientific thinking in universities today.

At several points in this volume, I have returned to a strategically important, though often forgotten, aspect of the history of science, namely that the new scientific thinking that emerged in the wake of what has come to be known as the Scientific Revolution had extremely little to do with the universities, which had hitherto been so successful. Only in the nineteenth century did the gradual incorporation of this kind of scientific thinking into the university begin in earnest; only later did it become dominant. The result of this process is ironic. On the one hand, it contributed very substantially to saving the university and giving this institution growing prestige as an engine for the development of knowledge and expertise in society as a whole. On the other hand, it also had the effect that these new disciplines,
by virtue of this societal importance combined with positivism’s “realistic” ideal of scientific thinking, acquired the status of general paradigms, eventually assuming a preeminent position within academia. Echoing Wallerstein and his colleagues, we might say that “[t]he university was revived and transformed.”245
Critical hermeneutics

The hermeneutic debate over how to view the relation between understanding and explaining has a general relevance for contemporary discussions of philosophy of science and methodological challenges. However, the fact that the “canonized” history of German hermeneutics, from Schleiermacher and Dilthey to Heidegger and Gadamer, has been presented as a continuous development, culminating in the critical hermeneutics of the French philosopher Paul Ricoeur (1913–2005), has often obscured the major differences between different conceptions of interpretation. In fact, the German hermeneutics of understanding and critical hermeneutics represent two fundamentally different concepts of interpretation: a conception of interpretation in which understanding and explanation are regarded as opposites, as if they represented two different epistemological and ontological domains; and a critical hermeneutics that not only dialectically relates understanding and explanation to each other as two complementary and successive stages of a process that we call interpretation, but in which explanation is even ascribed a properly hermeneutic function by virtue of its ability to generate new understanding.

In order for hermeneutics to be able to extract itself from the dead-end of a doctrine of understanding, we need not only to formulate a more precise definition but to effect a metamorphosis in the entire conception of interpretation by introducing a critical instance of “explaining” into the very heart of understanding. Only in this way can interpretation be made receptive to the larger scientific conversation and a greater reality. The humanities cannot hope to thrive by confining themselves to an understanding that is diametrically opposed to the spectrum of explanations that have been developed in our time. Moreover, this understanding of the natural sciences hardly squares with their actual practice, since ultimately no explanation can survive without understanding.

If the hermeneutics of understanding tends to develop as a theory of meaning, then critical hermeneutics is driven by a quest for truth. These two different concepts of interpretation also entail two distinct ways of understanding the hermeneutic experience. What the hermeneutics of understanding considers a homogeneous relation of affinity that must
absolutely not be disturbed becomes, within the framework of critical hermeneutics, a more heterogeneous experience that always also includes and requires objective distanciation. Ricoeur refers to “the hermeneutical function of distanciation,” thereby indicating not only a conciliatory attitude towards distanciation but also that distanciation should be regarded as a truly productive and constitutive element in hermeneutic experience.246 Critical hermeneutics is thus characterized by a dialectical approach that leads us beyond both monism and epistemological as well as ontological dualisms. Within the framework of the complex dialectic sustaining critical hermeneutics, at the very heart of interpretation, there lies an unavoidable but productive inner connection between understanding’s insightful listening and explanation’s critical suspicion. Because Ricoeur’s critical hermeneutics includes precisely that explanation which the hermeneutics of understanding has sought at all costs to distance itself from and offer an alternative to, the contrast with the German hermeneutics of understanding is radicalized. The dialectic between understanding and explanation also lends the conception of interpretation a conflictual, heterogeneous, and dynamic character that is reinforced by the way in which the linear, sequential model of interpretation, in which understanding and explanation follow upon each other, is replaced by the notion of interpretative conflict, a hermeneutics that implies a radical openness to what is in principle an endless variety of theories and methodological approaches. Through the composite conception of interpretation developed by critical hermeneutics, the hermeneutic circle is opened up and becomes a hermeneutic spiral. Interpretation is no longer merely a matter of understanding but involves understanding better by means of the entire spectrum of available explanations. The dialectic of understanding and explaining here represents a conflict-based structure that drives the interpretative process forward in the form of a quest for truth.

This concept of interpretation makes a general claim, in the sense that one cannot assert the existence of a particular theory of interpretation for a separate (humanistic) domain, relevant only to a sphere of culture divorced from nature. The conditions for interpretation are the same in principle, regardless of the area in question. Yet this does not mean that there is complete ontological continuity between different areas of reality. Although there is a reciprocity between nature and culture, it is asymmetric. In the preceding chapter, we noted that while all people may be physical bodies, this does not mean that all physical bodies are people. Likewise, most social facts are also physical facts, but the reverse does not hold true: not all physical facts are social facts. Profound knowledge is needed in order to navigate this complexity—and it
is striking how little we really know about how this fits together, despite its general relevance. For while the humanities have traditionally started with an ambition of understanding, it is clear that the development of knowledge in this area of scientific thinking has nonetheless not restricted itself to understanding. If the humanities are to develop, they will need to embrace the entire spectrum of explanatory methods in order to deepen, extend, correct, and even rebut an initial understanding. In other words, there is no understanding that cannot be improved by various explanatory procedures. And even if the modern natural sciences were born out of a fascination with being able to explain the world, it is today clear that no explanation can ultimately be sustained or developed without being in some way connected to understanding—for the simple reasons that scientific thinking always involves people and that people need to understand in order to know and act.

We are once again confronted by the challenge of avoiding the kind of reductionism and scientific foundationalism that produces formulations such as something is “really only” or “basically” something else, and that in the process reduces reality to a single dimension. The reduction necessary for scientific thinking must always be on its guard against the danger of reductionism, whether the latter assumes a rationalist, empiricist, transcendental, or linguistic form in its efforts to establish a fundamental basis for reality. For this reason, creating a foundation for scientific thinking is not about finding an ultimate basis upon which all knowledge can rest but about our capacity for seeing the limits of our own thought, limiting our claims to knowledge, and engaging with other interpretations.

Ricoeur teaches us that hermeneutics cannot be limited to a doctrine of understanding; rather, it must be regarded as a theory of interpretation that makes general claims about scientific thinking. Explaining and understanding are not two mutually exclusive approaches but complementary poles that, at best, meet in a dialectical interplay. Together, they should be considered as parts of a dialectical process that we call interpretation. Yet this means that critical hermeneutics departs both from the ambition of the hermeneutics of understanding to confer upon the humanities the kind of legitimacy enjoyed by the natural sciences, and from the idea of universal determinism that has often been associated with the use of explanatory procedures, regardless of whether it involves the natural sciences or the assumption of explanatory precedence by semiotic models.
The art of explaining in order to understand better

Where do explanations come from, then? To answer this question, we need to return to modern science and the logic of discovery/invention that emerged from the Scientific Revolution. The legacy of positivism, which continues to shape our understanding of scientific thinking, includes not only an empirically oriented quest to find representations capable to some degree of capturing reality but also an ambition of explaining, in various ways, how that reality works. The positivists took their models for what can count as a scientific explanation of change from the natural sciences.

Carl Hempel occupies a prominent position among modern positivists in that his views have become something of a standard starting point for understanding the explanatory function of scientific thinking. For Hempel, there is a simple answer to the question of what it means to explain. In accordance with what he named the covering law model of explanation, explanation is about discovering general laws (of nature). As a positivist, Hempel (in the spirit of unified disciplinary thinking) starts from the assumption that historical explanations are not unique but follow exactly the same logic as explanations of physical events: we subsume particular facts under general laws, whereupon the phenomenon is considered to have been explained. The explanation of the particular is covered, in accordance with the “nomological” method, by a law.247

However, the problem is that historians cannot be expected to live up to the demands imposed by this model. Moreover, Hempel himself had to acknowledge that his model generates a number of anomalies, and that historians are often forced to make do with a rough outline, an explanation-sketch. The fact that explanations and predictions in this theoretical context, as we have already seen, are two sides of the same coin, in being connected by the logic of causality, indicates that applying this kind of model to the past and the future could imply a determinism that seems neither possible nor desirable. The close connection between explanation, causality, and prediction is a legacy of modern scientific thinking, which may also explain science’s fixation upon laws. Yet this temptation to develop predictions, even in areas that can hardly be said to follow laws of development, such as morality, science, art, history, politics, and economics, creates recurrent difficulties. Today, too, we are seeing a widespread and dangerous habit of using scientific thinking to try to predict the future in relation to economic development and electoral results, not to mention attempts at weather prediction that border on the absurd.
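To make the logical structure of Hempel’s model concrete, the covering law (deductive-nomological) schema is standardly set out as an inference in which the explanandum is deduced from general laws together with statements of antecedent conditions. The following rendering is a textbook commonplace, offered here as an illustration rather than as a formulation taken from this book:

\[
\frac{L_1,\ L_2,\ \ldots,\ L_n \qquad C_1,\ C_2,\ \ldots,\ C_k}{E}
\]

where \(L_1, \ldots, L_n\) are general laws, \(C_1, \ldots, C_k\) are statements of the particular antecedent conditions, and \(E\) is the explanandum, the sentence describing the event to be explained. The schema also makes visible why explanation and prediction are two sides of the same coin in this theory: the very same deduction counts as a prediction if the event described by \(E\) has not yet occurred, and as an explanation if it has.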
Whereas Aristotle hypothesized four different kinds of causality—material, formal, efficient, and final—involved in the phenomenon of change, modern scientific thinking concerns itself solely with causal explanations. This fact raises questions about whether scientific thinking can explain everything. In a sense, there is a purely logical argument against the possibility of scientific thinking being able to explain everything: to explain anything, we must invoke something else—but how could we then explain that? Since nothing is capable of explaining itself, we are forced to accept that some laws and principles will remain unexplained.248

As we have seen, a philosopher like David Hume chose to entirely reject the idea of causal explanations on the basis of a radically empiricist position, because it is never possible to find a necessary connection between cause and effect. Others have only partially rejected explanations, on the grounds that some areas are entirely resistant to explanation. There is also always a risk of presenting overly plausible explanations, which can easily imply a deterministic perspective on the world. One way to escape the dilemma of explanation is to experiment with multiple explanations, but this means questioning right from the start both the axiom that there is only one type of explanation and the universal determinism that starts from the idea that everything is explicable. Yet the question is what conditions are necessary in order to be able to consider several perspectives on explanation, if we do not instead want to end up in a naïve voluntarism that stipulates total freedom.

Fifty years later, analytical philosophy of action devised a dichotomy that in many regards resembles that of the hermeneutics of understanding, developing a concept of action that proposed two non-reducible language games, as an extension of the larger dichotomy of nature and people: on the one hand, causes, laws, facts, and explanations; and on the other, intentions, motives, inducements, and actions. Explanations here take the form of physicalism, while understanding is developed in terms of mentalism. However, the problem is that human action cannot be narrowly understood in terms of intention-action without being embedded in a world of cause-effect. Ricoeur has shown how critical hermeneutics can offer a mediatory position by virtue of imagining the human as situated in an intermediate field between the causation of explanation and the intentionality of understanding: “Human being is as it is precisely because it belongs both to the domain of causation and to that of motivation, hence to explanation and to understanding.”249

The British philosopher Elizabeth Anscombe has claimed that explanations involve differing ways of answering the question “why?” by
means of the connective “because.”250 This model allows for a differentiation of explanation—while ultimately relating explanations to understanding. Questions about forms of analysis and explanation become particularly complicated when human action and society are included in the investigation. Sometimes philosophers have chosen to differentiate between different kinds and areas of scientific thinking, for example in claiming that causal explanations can inform hypotheses about the non-organic nature studied by physics, whereas the organic nature studied by biology requires a combination of causal and functionalist explanations, and the inclusion of humans in the social sciences requires a combination of causal and purposive explanations. Yet the question is whether these domains can really be kept apart in such strict fashion. If we follow Ricoeur in regarding human behavior as constituted by the correlation between intention, causality, and pure coincidence, our scientific investigations will need to see action as the result of individual will and effort, an effect of randomness, and a consequence of different social structures and other determining factors. But this would also mean that no single mode of explanation could be said to have priority. Instead, what is needed is a multiplicity of explanatory forms—a conflict of interpretations.

In his book Explanation and Understanding, Georg Henrik von Wright identifies the following types of explanation: causal explanations (laws); intentional explanations (aims); functional explanations (functions); and teleological (purposive) explanations.251 To these might be added structural explanations, if we do not want to include them as a variant of the explanations already listed. In his early textual hermeneutics, Ricoeur was so preoccupied with the fact that semiotics had developed structural explanations, such that explanations were no longer assumed to come from the domain of the natural sciences but were seen as equally having their origins in the humanities and social sciences, that he tended to regard structural analysis as the only form of explanation. During the 1970s, however, he steadily broadened his perspective to include the entire spectrum of explanatory procedures, on the grounds that there is no understanding that cannot be improved by drawing upon explanations. For this reason, he also made every effort to include explanations, convinced that they can correct (at worst, refute), expand, and deepen understanding.252 However, this presumes that explanation can be expanded to include different kinds of causal relations while also broadening one’s perspective to encompass other explanations—and relating those explanations to understanding. Concrete scientific work thus oscillates between an attempt to describe phenomena and empirical data, and an
attempt to explain and understand them by means of analyses—in accordance with the dialectic of discovering and inventing the world.
Integrative humanities transcending disciplinary boundaries

Let us go back to the fact that the history of the university is much older than that of (modern) scientific thinking, and that, from a historical perspective, it is not self-evident that the university and science should get along. We have returned, time and again, to the historical fact that the knowledge project that emerged from the Scientific Revolution developed almost entirely outside of the established domains of the university—and that scientific thinking eventually became a direct threat to the university. Yet the incorporation of the new (natural) science under the same university umbrella as the older disciplines has resulted in the institution becoming the host of two parallel scientific projects. At issue here are two knowledge cultures that are radically different and that can therefore come into conflict with each other, potentially at any moment.

Yet the question of who can lay claim to this historical legacy is a complicated one. It is not unusual for that story to be told as if it were about a conflict between an “originary” university tradition, primarily maintained by humanists, and a “new” scientific reality defined by the natural sciences. However, when dealing with history we need always to be on our guard against anachronisms. The reality here is a far more complex matter than some long, unbroken tradition of humanities being confronted with the challenge of another scientific project within the framework of the modern university. Rather, we need to understand that the humanities themselves came into existence through this confrontation.

It is not easy to navigate the differentiation process that characterized scientific thinking in the nineteenth century, as the latter became constituted and acquired its modern forms. In order to avoid immediately lapsing into anachronism, by applying our own era’s division into humanities and natural science to a period of time when these intellectual categories did not exist, we need to remind ourselves that the old Faculty of Philosophy contained elements of what we today refer to as both the humanities and the natural sciences. The humanities can therefore hardly lay claim to continuity with this “long” tradition and (single-handedly) present themselves as the sole representative of the philosophical faculty of the medieval university. Søren Kjørup puts the matter bluntly: “In ceremonial speeches, the history of humanities often goes back to antiquity. In real
history, the humanities first arose about two hundred years ago.”253 There is thus a complication for how humanists conventionally think about themselves if what we today call the humanities only arose around 1800, established itself during the nineteenth century, and consolidated itself during the twentieth. Yet we easily forget that the various kinds of humanistic activity presuppose inventions that were only developed late in history: the publishing explosion that followed upon the development of printing (which fed the demand for seminar textbooks); the advent of journals and printed documents (so that intellectual achievements could be published, disseminated, and read); access to ecclesiastical or royal archives and collections (which were private prior to the 1800s and not opened to the public until the latter became involved in the nation-building project).254

References to the Faculty of Philosophy can also be easily misunderstood—even here, the question of who really owns the legacy is a complicated one. For example, only in 1876 was the Faculty of Philosophy in Sweden divided into two sections—humanities and mathematics/natural science—and these did not become independent faculties until 1956. Another differentiation process occurred during the twentieth century with the emergence of the social science disciplines, but these did not break away to form their own faculty until 1964 (although at my own institution, Åbo Akademi University, this development happened much earlier, in 1918, most likely because its rector, Edvard Westermarck, had pursued a successful career as a researcher in sociology and anthropology at the London School of Economics).

The nineteenth century was also the century in which nationalism became a force to be reckoned with and the nation-state emerged with its burgeoning administration and the economic muscle provided by industrialization. Yet, when the humanities appeared, they were not merely part of a national project; their early years would be permeated by nationalism. If Berlin University can be counted as the first modern university, it is significant that it was founded by the Prussian state, an emergent nation-state with grand ambitions, rather than emerging from “the shadow of the cathedral.” This close connection to the nation-state gave the humanities both legitimacy and a different focus from that of their earlier incarnation in the medieval university. Both conditions for the emergence of the humanities cited above—availability of printed texts and access to archives—were also clearly connected to the obviously national character of these disciplines. Printing also necessarily led to linguistic standardization, a key precondition for the emergence of national languages from the tension between local dialects and the universal language of Latin.255
The factor to which we have often returned, and which was perhaps the most important for the legitimizing of the modern humanities, was the fact that they nonetheless co-existed under the same “university umbrella” with the natural and social sciences, and eventually also engineering, that is to say, the subject areas upon which successful societal transformation, industrialization, and modernization were directly dependent. While this in some sense “saved” the humanities, humanists have nevertheless continued to regard these other areas of scientific thinking as threats. This lack of intellectual self-confidence can also be seen in the way that it often feels easier to write “Faculty of the Humanities” than to emulate the “Faculty of Natural Sciences” and write “Faculty of Human Sciences.” Even so, the fact that someone at a modern university can become a “doctor of philosophy” in both the Faculty of the Humanities and the Faculty of the Natural Sciences (a consequence of their roots in a single, joint Faculty of Philosophy) ought to serve as a timely reminder of the shared history that unites these areas of scientific thinking.

If we want to give the humanities a future, it is important to settle accounts with the narrative of decline that has long been a hallmark of how the humanities view themselves. From a historical perspective, it is easy to lose a sense of proportion in painting an overly dystopian picture of the development and position of the humanities in the university. How did they really develop? In a case study of the subjects of history, archaeology, and English at the universities of Uppsala and Lund, Lars Geschwind and Miriam Terrell have identified what they regard as “anecdotal knowledge and mythologizing” in relation to the humanities. In sharp contrast to the established narrative of decline, they reveal a startling trajectory of development. In the year 1900, the number of faculty members in these fields at the two universities was one in archaeology, five in English, and thirteen in history. In the year 2000, primarily as a result of periods of rapid growth in the 1960s and 1990s, the number had grown to seventy-five in archaeology, eighty-five in English, and a hundred and thirty-three in history (and this does not include the thirty-seven new institutions of higher education that also offer these subjects).256 Despite this enormous expansion, the humanities are burdened by a feeling that everything used to be much better. While it may be true that the humanities have been the losers in relative terms compared to other disciplinary areas during this period, in absolute terms the development has been extraordinary. Moreover, because attention has so often been fixated, for reasons to do with the labour market, upon the humanities as faculties at universities, we have become blind to the very large numbers of humanists working in other sectors of society.
A key explanation for the dystopian mood is likely the troublingly retrospective, inward-looking, and self-mirroring perspective that has often been a feature of debate over “the crisis of the humanities.” In their book Alltings mått [The Measure of All Things], Anders Ekström and Sverker Sörlin highlight this fact and argue for an integrative humanities. In order to strengthen the cultural dimensions of an increasingly complex society in which challenges are composite and interlinked, requiring long-term reserves of knowledge as well as the capacity to develop new combinations in the form of a “second-order specialization,” we need an integrative conception of knowledge that can create new intellectual milieus as well as work interdisciplinarily across the existing ones. Rather than including the humanities in the unitary disciplinary thinking that easily arises when one adopts a biological, economic, or technological perspective and tries to make overly ambitious claims, Ekström and Sörlin connect, in a forward-looking fashion, an integrative humanities with a kind of hybridizing and recontextualizing expertise:

It ought to be one of the primary tasks of humanistic research to create new frameworks of understanding for long-term societal development. Among other things, this means constructing and testing intellectual models that can make us see current problems in a larger historical perspective and with another kind of distance from that which informs contemporary political approaches to similar problems.257
The interdisciplinary movement that seeks to connect “the two cultures,” and that Ekström and Sörlin want the humanities to set in motion, also has its counterpart in movements that have often been initiated from “the other direction,” when researchers in the natural sciences, technology, medicine, and economics have shown an interest in the humanities and a broader intellectual framework. Steve Jobs, who founded what was considered the world’s most successful company, was not a researcher or even a student at Stanford University, let alone a student of technology or economics. Although his only academic training was at a liberal arts college, its importance was all the more significant in that a course in calligraphy became the inspiration for a more wide-ranging interest in the computer user interface and led him to recognize the importance of aesthetics in computer use. Jobs also had a lifelong interest in Zen Buddhism. The fact is that Apple’s products—developed with an emphasis on intuition, simplicity, minimalist reduction, attention to detail, and internal and external coherence—are among the most widely disseminated instances of Zen thinking in the world today. Yet this marketing giant primarily saw himself as a humanist. Jobs
often emphasized that the interaction of humanities and technology, in which the creative arts and large-scale technology are brought together in an interaction between “The Path of the Arts” and “The Road of Technology,” was where by far the most interesting things could happen. According to Walter Isaacson, Jobs belongs to a group of people with a particular capacity to combine knowledge from different domains, due to the creativity that can only occur when a commitment to the humanities and the natural sciences is combined in strong personalities.258

This interdisciplinary tendency is also identified by a 2011 policy report from the Carnegie Foundation for the Advancement of Teaching (a policy and research centre based at Stanford University, Palo Alto, California), which argues that a broad foundation in the humanities should be seen as a self-evident, integrated—and integrating—element of any business degree.259 With the emergence of a new business landscape defined by uncertainty and complexity, mutual dependence and ambiguity, the report, titled Rethinking Undergraduate Business Education: Liberal Learning for the Profession, argued forcefully for the need to develop a multidimensional view of economics using what Americans call liberal education, a concept that does not have an equivalent in the Swedish or European context but that might be described as a broad training in the humanities. Its authors point to a fundamental lack of integration, arguing that the “disintegrative character of the learning experience in business education” leads to fragmentation and division, which in turn further reinforces a narrowly one-dimensional and instrumental approach. The Carnegie report’s recurrent mantra is the need for a broader perspective, achieved by more clearly connecting economics with other fields and by integrating a broader palette of subjects, buttressed by integrative thinking and a well-integrated personality. It is an issue of combining analytical thinking with the ability to draw on different perspectives in order to see the multiple dimensions of a phenomenon, use multiple framing, and develop a self-reflective capacity for exploring meaning.

All of this shows how important it is that the humanities do not become isolated in a cognitive safe space in which understanding is allowed to develop without having to confront explanations. It would be fatal if hermeneutics in this context were to be reduced to a humanistic doctrine of understanding, which might confer legitimacy and offer protection against a one-sided naturalistic culture of knowledge but would also eventually risk impoverishing the humanities—and, by virtue of this monopolistic claim to a specific form of knowledge, deprive the natural sciences of the resources of understanding. A contemporary reconceptualization of hermeneutics as critical hermeneutics, constituted by relating explanation and understanding
to each other, can show the way forward for the humanities today in developing close interdisciplinary collaborations with other areas of scientific thinking. If we leave behind us the dichotomies of nature and culture, explanation and understanding, the humanities can no longer claim a monopoly on humanity. At first glance, it might seem like a welcome move to begin using the term “sciences of humankind” [människovetenskaperna]260 in relation to the humanities, but closer examination reveals that it is associated with a number of serious problems. The humanities will have to collaborate closely with medicine, economics, technology, and a host of other disciplines in various scientific areas, and counterpose all of these different perspectives to each other within the framework of a conflict of interpretations, if they are to be able to say anything about the human condition and what makes us human.

What is more, the contacts between academia and society need to be extended using models for collaboration that go far beyond the simple transfer of research information from “theories” to “practices.” The study trips that innovation policy has long advocated, particularly to Silicon Valley and Stanford University, have often missed the interesting aspect, namely the many intersections between academia and society that were a feature of an Internet revolution in which meetings in garages between the likes of David Packard and Steve Jobs acquired an almost mythological status. Remaining within the walls of the university can easily mean missing precisely this feature. Moreover, the humanities today are in danger of becoming trapped within a narrow-mindedly theoretical academic culture of knowledge, which severs necessary ties to art and cultural life in the society around them.
A meeting of interpretations from “within” and “without”

As we have seen, it is necessary to reformulate the relation between explanation and understanding in such a way that distanciation is no longer regarded as a threat but instead becomes a productive element of concrete hermeneutic work. We likewise begin to write history only when we cease to understand immediately—and instead, with the help of distanciation, begin to reconstruct the chain of events in terms other than the motivations and reasons of the historical actors. Distanciation in fact has a productive hermeneutical function.261

The relation between the humanities and the natural sciences cannot be understood in terms of either dualism or monism. There is both discontinuity and continuity between these fields of scientific thinking, just
as there is both understanding and explanation involved in the various sciences. In order to manage these relations, we need a dialectical approach. We have already touched on how analytical philosophy of action, particularly Elizabeth Anscombe’s Intention (1957), developed an understanding of human action by using a theory of language games that tended to reproduce the same conception of a non-reducible difference between events in nature and actions carried out by human beings. But when causes, laws, facts, and explanations are separated from projects, intentions, motives, and reasons into two logical categories, we have to ask ourselves if it is really possible to sustain such a watertight distinction between two different language games. Ricoeur has pointed out that these language games are actually always mixed with each other in everyday language, which, he argues, indicates that “[w]e are dealing instead with a scale that would have as one of its end points causation without motivation and, at the opposite end, motivation without causation.”262 We have also previously encountered stereotypical intellectual models in which positions that must in fact be considered neighboring concepts are represented as exclusive alternatives within contemporary philosophy of science. In reality, however, such binary oppositions seem like abstractions—in concrete human actions, they always appear in combination:

The human phenomenon would be situated in between, between causation that has to be explained and not understood and motivation belonging to a purely rational understanding. The properly human order is this in-between in which we constantly move.263
The Royal Institute of Technology (KTH) in Stockholm has an emblem bearing the motto “art and science” [konst och vetenskap]. We might see it as a reminder that historically there has been a close relationship between art and science, even if the concepts themselves have complicated etymologies. Yet art has often been rendered invisible in modern science, sometimes even coming to seem incomprehensible in light of art’s uncertain status within modern scientific thinking. At the same time, we live in an age when creativity and imagination, aesthetic awareness and ingenuity, are being held up as core skills. And if we bear this in mind, we will realize that the possibility of establishing a common world, in which all our knowledge has a place, ultimately cannot find a firmer basis than a shared world of symbols and narratives. Ricoeur has often returned to the fact that our language originates from the poetic function: all our concepts have emerged from the symbolic function of language. For this reason, we need to regularly return to this origin and remind ourselves where our (symbolic) worlds come from.
Sometimes poetry is also more eloquent than philosophical concepts, as when poets manage to capture complex ideas in just a few words. A master of this was the Swedish poet Tomas Tranströmer, who captured in a few lines what might be called a “hermeneutic concept of truth” and the dialectic between understanding and explaining that is the hallmark of a hermeneutic process that, as it were, straddles two horses: Two truths approach each other. One comes from inside, the other from outside, and where they meet we have a chance to catch sight of ourselves.264
Our era is full of people asserting “their own truth.” We therefore need to pin down exactly what the concept refers to, because what in Tranströmer’s poem, and often also in daily life, are called “truths” appear from a hermeneutic perspective as “interpretations.” The concept of truth quickly becomes binary and exclusionary, whereas hermeneutics remains open to other interpretations, just as Tranströmer’s lines are. In other words, we cannot develop robust self-understanding and scientific thinking solely on the basis of “inner” truths grounded only in our own understanding. We also need “outer” truths in the form of explanations that are based not on ourselves but on distanciated and abstract explanatory models, systems, and institutions.

In an advanced knowledge society such as our own, the number of “outer” truths has increased exponentially. Indeed, it sometimes feels as though the “inner” truths are drowning or being smothered. This is undoubtedly one reason behind people’s desperate assertion of “their own truth.” In such a time, it is a major challenge to bring about a “meeting” between “inner” and “outer” truths. And in this situation, it is also tempting to close the door to “outer” truths and instead concentrate on people’s experiences or develop an alternative lifeworld understanding that safeguards and nurtures those “inner” truths. It then becomes difficult to engineer the “meeting” that is necessary to maintain scientific thinking’s quest for truth. As already noted, for this to even be a possibility, what we call truths—regardless of whether they come “from within” or “from without”—need to be redefined and understood as interpretations. This means abandoning the absolute claims associated with the concept of truth and instead acknowledging that our claims (“inner” truths) are in fact interpretations, which can in turn be counterposed to other interpretations (“outer” truths) within the framework of a joint quest for truth.
“Without divergent opinions, it is impossible to discover the truth”

Alexander von Humboldt made truly revolutionary discoveries during his long travels in South America, but after coming home and establishing himself as a major intellectual with bases in Paris, Berlin, and other places, he continued to make new discoveries with the help of new inventions. In September 1828, he hosted an academic conference in Berlin whose organization and program differed radically from existing practice. Instead of lectures in an auditorium, he allowed researchers to talk to each other—and in order to stimulate this exchange further, he also allowed them to socialize informally in various ways by including social activities such as concerts and excursions on the official program. For the academic world, this was a new way of thinking about meetings. When he opened this epoch-making conference at Berlin University, he also encouraged the delegates to gather in small groups, whenever possible from different disciplines, in order to form friendships and build networks. He was convinced that scientific thinking requires an “interdisciplinary brotherhood of scientists who could exchange and share knowledge.”265

Anyone who has taken part in an academic conference in our own era will recognize the format, which has unquestionably become a defining feature of conferences. Yet we have perhaps forgotten where this invention came from and, not least, the underlying principle as to why scientific meetings are organized in this fashion. However, the reason is spelled out in Humboldt’s opening speech, in which he underscored what constituted the precondition and very lifeblood of all modern scientific thinking: “Without a diversity of opinion, it is impossible to discover the truth.”266 Mono-disciplinary thinking must become multi-disciplinary thinking if science is to advance.

In other words, scientific thinking builds on hermeneutic preconditions, including in the sense that it presupposes the possibility of interpreting the world in different ways. Well-founded differing opinions are thus not a threat but part of the essential conditions for scientific thinking that create opportunities for its further development. Scientific thinking cannot grow without variety and plurality. Adapting Hannah Arendt’s famous axiom that “men, not Man, live on the earth and inhabit the world,” we might say that “men (and women!), not Man, live in the academy and inhabit science.”267 Science is organized as a competition system, but the competitors are not primarily people but different knowledge claims. There are no theories in the world that can in themselves guarantee the scientific project. Ultimately, science’s quest for truth is carried on by people and practices
alone. Science is sustained by a long series of practices, various kinds of laboratories of interpretation, all of which require a high degree of hermeneutic awareness. A diversity of perspectives and standpoints is a necessary condition of scientific work. Yet the necessity of conflict also makes science a complicated activity in which we must always remain attentive to variation.
From libraries to laboratories

Attempts to organize knowledge and institutions of knowledge did not cease—and have not ceased—with the invention of the university. Behind the seemingly self-evident dominance of this institution and its extraordinary narrative of progress, there lies a longer history that ultimately complicates the image of the university as the dominant institution of knowledge by revealing the crucial importance of the laboratory. In their account of the history of institutions of knowledge, Reinventing Knowledge: From Alexandria to the Internet, Ian F. McNeely and Lisa Wolverton highlight what they regard as the six dominant institutions for the organization of knowledge that have been developed in the West. These did not emerge from each other, and collectively they represent something of a shared historical archive for contemporary scientific thinking and the knowledge society.268

The first institution of knowledge that they choose to highlight is the library, whose origins can be traced back to ancient Alexandria and which developed during the period 300 BCE – 500 CE. The construction of the library marks the transition from the face-to-face relations of oral culture in the Greek city state (polis) to the written basis of a Greek tradition stemming from Aristotle. When these writings were gathered in the library, it produced an institution that was also able to administer a “Greece abroad” within the framework of the vast empire of (Aristotle’s student) Alexander.269 The foundation of the library at Alexandria created the basis for the text-centred, “academic” culture that arose at this time in Ptolemaic Egypt, an intellectual environment in which the development of a critical mode of reading gave rise to new genres of literature—commentaries, glossaries, and indexes.

When classical civilization collapsed during the Great Migration, under the combined pressure of internal decay and external conquest, this legacy of knowledge was nonetheless able to survive, thanks in large part to another institution of knowledge: the monastery (100–1100 CE). This ecclesiastical institution had often been shrewdly located in remote spots, far from the urban culture that the great migrations had now left
in ruins. Within the framework of the cumulative history of institutions of knowledge related by McNeely and Wolverton, the contribution made by this institution to the development of the organization of knowledge lay in the emergence, in these settings, of a more sophisticated approach to texts. Monks collected, copied, and corrected manuscripts within the framework of lectio divina, a spiritual discipline that for the first time made silent, reflective reading the norm. Through the monastic rules of Benedict of Nursia, text reading was also connected to a disciplined temporal regime whose influence would extend well into the forms of social organization of industrial and post-industrial society, an approach so advanced that monks have come to be regarded as the first truly professional practitioners in the history of the West.

As the third institution of knowledge in their history of institutional development, the authors identify the university, which they somewhat provocatively limit to the period 1100–1500. In medieval Europe, where cities had again begun to grow strong and where mobility was rapidly increasing, universities appeared as self-organizing urban guilds. We have already touched on the fact that this university had no connection to the buildings and physical locations that would later become so fundamental to the university’s identity. Instead, the universities consisted of movable networks in which students (Bologna) or teachers (Paris) controlled the organizational process. These universities were not only characterized by considerable heterogeneity but also had a common denominator in the fact that the questions and answers of scholasticism and catechistic pedagogy formed the constitutive elements of a process that sought to determine on what points interlocutors were in agreement. The new knowledge that emerged alongside these guilds and networks of intellectuals, which became known as the university, has been described as “Europe’s first individually portable, internationally recognized form of knowledge.”270 Over time, these networks were institutionalized and located in particular buildings and disciplinary structures, in accordance with the dominant ideas about how the university operates in our own era.

We have noted at several points that these universities, with their Aristotelian epistemology and ontology and despite their earlier successes, did not manage to incorporate the new view of knowledge that emerged in the Renaissance and the Scientific Revolution. In its place emerged a new institution of knowledge, what McNeely and Wolverton have chosen to call the Republic of Letters (1500–1800), composed of islands of experimenting researchers. These were often regarded as controversial because they started from different epistemological and ontological assumptions, an impression reinforced by the fact that they were forward-
The researchers who would later be portrayed as heroes of science were at the time heavily beset by political, ecclesiastical, and academic authorities, for which reason they organized themselves into correspondence networks for communicating by letter at a distance. In this way, the Republic of Letters was also able to avoid the censorship that often hindered the spread of ideas through the era’s great innovation in information technology: the printing press. Over time, even these loose networks of knowledge became institutionalized, taking the form of academies and societies that met at conferences and whose letters were transformed into scientific articles published in a profusion of new journals. Humboldt’s reform of the university, as exemplified by the University of Berlin, established the idea of research dedicated to making new discoveries and creating new knowledge—an ambition that had been introduced by the new natural philosophy and had been unknown in the medieval university—and ultimately brought it under the same umbrella as the knowledge culture of the medieval university.

Yet the story does not end with the establishment of the university. As their fifth institution of knowledge, McNeely and Wolverton highlight the process of differentiation that led to the development of the disciplines (1700–1900), a history that largely unfolds within the framework of the Humboldtian university project. With the establishment of national systems of education and growing numbers of students, a new market emerged for academic specialisms. The disciplinary specialization that now gathered pace was organized around the seminar, which took the form of a kind of “circular dissertation” in which the personal relationship between professor and student was modeled on that of master and apprentice.

Finally, in tandem with the rise of the discipline, the authors also highlight the laboratory, which they identify as the most important institution of knowledge of our era (1770–1970). In so doing, they set the scene for the institutional dimension of knowledge organization in our time. The laboratory—which I have chosen to generalize in terms of a laboratory of interpretations—combines an array of different values in a complex, interdisciplinary, and dynamic culture of endless experimentation, democratic equality, and an ambition of achieving improvements and innovations.271 In many ways, the laboratory really does operate like a laboratory of interpretation, and it is a model that can endlessly be exported to new areas. From having started as the observation of a piece of nature, the laboratory has developed, via workshops, active interventions, and manipulations, to the seminar that functions as a hermeneutic laboratory and, finally, to the tendency of the social sciences to transform the entire world into a laboratory.
In light of the fact that the laboratory additionally combines an experimental approach with entrepreneurial innovations (read: opportunities for commercialization), we can also understand why this institution of knowledge can be found both inside and outside the walls of the university, which may explain the overlap that we are now seeing between the laboratory and the research university. As a result, we are confronted by an institution of knowledge that proactively seeks to cross boundaries, both between theory and practice and between academia and society. In an age of hermeneutics, the laboratory is becoming even more important within both science and society by virtue of being the laboratory of interpretation that underpins a shared quest for truth.
Science in an age of hermeneutics: laboratories of interpretation

It is not unusual for positivism’s great expectations of scientific thinking to be transferred to hermeneutics, as if by sheer momentum, accompanied by the expectation that hermeneutics will live up to the promises on which positivism itself was unable to deliver. Yet hermeneutics does not in fact offer such a methodology, in the form of a clearly demarcated procedure that automatically delivers the “right” interpretation. There is, so to speak, no manual for interpretation.

Nor is it unusual for science to be represented as an institution that in some way “has” the truth in its possession. On this view, scientific thinking administers truths and, ideally, also functions as a supplier of truths. This notion has been given greater impetus by concepts such as post-truth, alternative facts, and resistance to facts. In an era characterized by threats of this kind, it is easy to create expectations that scientific thinking can tell us what is true. Sometimes we even assign science the task of intellectual waste disposal in society. Science should provide truth, preferably Truth. Full stop. In this situation, it is important to emphasize that academic institutions do not administer scientific thinking as some grand Theory or Truth. There is no method for automatically delivering research results. Science does not own the truth; it strives unceasingly to find the truth. This quest for truth is maintained by means of a long succession of laboratories of interpretation, in which participants must navigate between the temptation of claiming already to possess the truth and that of abandoning the quest for truth entirely.
In an age of hermeneutics, scientific thinking must take its laboratories of interpretation more seriously. It is no coincidence that examinations at the more advanced levels, as well as research more generally, always use various kinds of peer-review assessment and evaluation of studies and research results. This focus on laboratories of interpretation tells us something important about the university in general. Ultimately, only people can calibrate the quality and the evidence of advanced studies and research results—and this assessment must always take the form of intersubjective peer review. No abstract theoretical standard can substitute for the assessment made through these human practices. Where the possibilities for formalization come to an end, human judgement must take over.

One of the great challenges facing contemporary academia, and a troubling index of crisis, is the fact that many of its interpretative practices are becoming impoverished. Attendance at seminars is increasingly patchy, and fewer colleagues and students show up to public doctoral defenses. There are many reasons for this, including several that have to do with professionalism and personality. Many scholars no longer feel that they have time because their workload is increasing. But there is also a lack of understanding of why these kinds of laboratories of interpretation exist and of the fact that they do not start, or continue, on their own. It takes resources, talent, and substantial expertise to maintain the vitality of academic conversations, discussions, and doctoral defenses. In order for these interpretation labs to function, they require a considerable degree of formalization of their procedures for ensuring quality and the quest for truth, even as these must be balanced against insights into the significance of informal practices.

Today, the interpretation labs that are the ultimate bearers of the scientific project are threatened. We see the consequences of a society fixated on rankings that insists on measuring results, and the demand for deliverables now determines the distribution of resources and career paths. The other side of this reality is that the processes that support academic culture are being undermined. Collegial conversation ebbs away when scholars are absorbed by the lean production that defines the logic of projects in both research and education. This is an extremely dangerous situation, one that ultimately runs the risk of making us blind to the hermeneutic conditions for scientific knowledge. One precondition for us to be able to talk meaningfully about laboratories of interpretation, in which different interpretations confront each other within the framework of a shared quest for truth, is the fact that reality can indeed be interpreted in different ways. But the fact that we can interpret the world in different ways can never mean that reality can be interpreted any way we like.
In other words, within the framework of a quest for truth, laboratories of interpretation must both safeguard the conditions for interpretation and remain mindful of its limits.

What, then, is the relation between scientific thinking and truth? Obviously, those no longer seeking the truth can hardly defend their stance within the framework of scientific work. The cognitive resignation that results from such a viewpoint makes “all cats grey” and tends to subvert scientific knowledge as a project. Needless to say, tolerating outright lies or nepotistic relationships is also unacceptable. But we should not forget that the opposite position is also problematic: there is no place in academia for those who think that they already possess the ultimate truth. This kind of unquestioning assurance tends to short-circuit academia’s hermeneutic practices and laboratories, making seminars and doctoral defenses impossible to conduct.

We live in an age of hermeneutics in which interpretations no longer emanate primarily from academic seminar rooms but rather from a global information system that continually forces us to adopt a position with regard to vast amounts of information and perspectives on our world. In a globalized world, there is now a generalized sense that everything can be interpreted in endlessly new and unexpected ways. In this situation, every kind of fundamentalism that claims something “is basically” something will encounter an insistence that things can “also” be considered in different terms. But how can we understand the fact that we have such differing interpretations of a world that we have in common? At the same time, we are moving away from a mode of dividing up knowledge by means of territorial metaphors that stake out different “areas of knowledge” and towards an aspectual view in which different knowledge traditions are instead understood as various perspectives on a shared reality.

At a time when those who work with knowledge are being continually forced to manage conflicts of interpretation, it is important to underscore that no methodological handbooks or interpretatively privileged individuals exist that might resolve this multivalence into a single meaning. Hermeneutics is a practical philosophy, and ultimately there are only practical solutions to problems of interpretation. The bad news of our era is that there is no absolute knowledge. Everything can be interpreted. The good news, however, is that we do in fact have a wide array of laboratories of interpretation through which we can handle the concrete challenges associated with a situation in which we neither have access to objectivism’s absolute knowledge nor can content ourselves with an arbitrary relativism. I have already mentioned, and will return to in the following chapter, the many interpretation labs—seminars, doctoral defenses, and so forth—that underpin academic work at universities and other institutions of higher education.
Let me here highlight some of the most important laboratories of interpretation today, some of which are located outside academia but all of which are supported by advanced hermeneutic practices that can teach us something important about the conditions for academia’s laboratories of interpretation and about what hermeneutics is.272

Historiography is in many regards an exemplary interpretation lab in that it involves handling a situation in which both objectivism and relativism fall short. As such, it also has something to teach us more generally about how the development of scientific knowledge works. History builds on the selection of material—most of what has “happened” must be filtered out, while what remains must be organized meaningfully—a task that requires narration, which in turn connects to identification, whether that means one’s own identity or that of others. The interplay between these three elements—selection, narration, identification—forms a triangle of mutual reinforcement and correction, so that narration, for example, not only orders something that has already been selected but itself takes on a selective function on the basis of what fits into its narrative. The notion of a definitive, objective historiography is not just impossible; it is not even desirable. History can always be written in different and endlessly new ways, but this does not mean that it can be written any way one wants. The historian must always anchor an account in language, documents, and archives while also remaining open to—indeed, welcoming—critical objections and viewpoints.

Another exemplary laboratory of interpretation is the court of law. Here, too, the issue is not about managing absolute knowledge. If someone were to claim a monopoly on the truth in the context of a trial, the legal process would be short-circuited. Rather, the court of law represents a stylized laboratory of interpretation in which one party (the prosecutor) has the task of only considering things that speak against the defendant, while another party (the defense counsel) has the task of only focusing on things that speak for the defendant. These interpretations confront each other in a process in which a narrative is successively negotiated, and from that narrative emerges the verdict. When we describe human actions, we quickly begin to narrate, and when we narrate, we also begin to evaluate human behavior on moral grounds. The diversity of narratives has a unique ability to capture our humanity. A human verdict is thus a verdict that forces different interpretations to confront each other. What is required is a hermeneutic conflict that considers motives as well as circumstances and outright coincidences. And while no-one in this interpretation lab can personally claim to possess the truth, it is nonetheless clear that the process of reaching a verdict cannot drift into relativistic arbitrariness.
Democracy is yet another laboratory of interpretation, perhaps the most important in our time. Democracy is clearly predicated on hermeneutics in that it presupposes the possibility of interpreting the world in different ways, which means that it is not merely an ethical imperative but also a cognitive advantage to be democratic. In a democracy, no one person has interpretative privilege; rather, people endeavor to weigh different interpretations against each other in a conflict that eradicates neither the other’s position nor the difference between positions itself. Democracy has inherited a crucial intellectual model from theology, an eschatological perspective that combines an “already” with a “not yet,” which makes it possible to hold two separate ambitions together: on the one hand, the assertion of something as absolutely binding; and on the other hand, the possibility that something binding can change as the result of a new negotiation or decision.

The hermeneutic understanding of these practices is based on insight into the attributes of hermeneutic experience. The vertiginous sensation mentioned earlier, in which someone rereading a book becomes uncertain as to what is actually “in” the book versus what has been “added to” it by the reader in the act of reading, reminds us of how interwoven discovery and invention are in the hermeneutic process. More generally, it teaches us something about how the interface between ourselves and the world is organized as something in continuous movement. In an age of hermeneutics, it is an essential skill to be intimately familiar with how the distinction between discovering and inventing works and to be practiced in drawing distinctions on the basis of practical experience of the paradoxical structure of the act of reading. When we “read” and “write” the world, we are balancing between discovering and inventing—while also learning something about what is at stake in the scientific project that came into existence with the invention of discovery. In other words, interpretative expertise is something that takes training and practice. Interpretation is something that only exists if we practice it, something that demands certain virtues and dispositions—an ethics. Without functioning interpretation labs, the scientific quest for truth cannot be sustained or acquire lasting validity.
PART IV
SCIENCE HAS A FUTURE
CHAPTER 11
GLOBALIZATION AND KNOWLEDGE SOCIETY
Scientific thinking develops through journeys of discovery. One of the greatest journeys of our era was the first crewed mission to the Moon. On 20 July 1969, Apollo 11 landed in Mare Tranquillitatis (the Sea of Tranquillity), and several hours later, at 02:56 (GMT), Neil Armstrong exited the Lunar Module Eagle and put the first human footprint on a heavenly body beyond the Earth. In so doing, he stepped into a position from which he could view the Earth “from without.” In his book about the history of humankind, Sapiens, Yuval Noah Harari invokes a dizzying perspective in order to explain why this event is of unique historical importance: “During the previous 4 billion years of evolution, no organism managed even to leave the earth’s atmosphere, and certainly none left a foot or tentacle print on the moon.”273 In other words, Armstrong’s placing of his left foot on the surface of the Moon marked an entirely new phase in the history of humankind: “one small step for a man, one giant leap for mankind.” This was a journey of an unprecedented kind that we can only understand if we recognize that scientific development had also entered an entirely new phase.
The dialectic of discovery and invention

The media framework for the Moon landing itself attests to how radically the world had changed and to the extraordinary advances that had been made in communications technology by this point. The fact that a fifth of the Earth’s population was able to follow the drama on TV was something entirely new and epoch-defining. Humanity stood on the threshold of an era defined by global simultaneity. The technology used in the Apollo project deeply impressed contemporaries, yet it was embryonic in comparison with what exists today: the collective computing power of the navigation systems in the command module and the lunar lander was modest, to say the least, by today’s standards. Indeed, Apollo 11’s computer system had a capacity several orders of magnitude less than that now to be found in any modern car, let alone a smartphone (though it is rather difficult to compare the thousands of circuits connected by cables in the lunar project with the millions of transistors that can now be compressed into a single circuit).
The landing was particularly dramatic because it turned out that landing at the site originally envisioned was not possible, which in turn raised the pressing question of whether there would be enough fuel (the lunar module probably only had fifteen seconds of fuel left). The consequences of failure do not bear thinking about. Indeed, the risk of the Moon landing ending in failure was subsequently revealed to have been far greater than was generally known at the time. Whatever the case, the outcome was a fantastic triumph and an extraordinary research achievement for NASA, which had succeeded in achieving the goal of taking human beings to the Moon and back that John F. Kennedy had announced eight years earlier.

The Moon landing can be seen as one of the clearest indications of, and results of, the almost volcanic developmental power that is the hallmark of a modern scientific thinking fueled by the interplay between discoveries and inventions. Countless essential human inventions were necessary for the journey of discovery to take this giant leap, and this discovery, in turn, paved the way for the development of numerous new, epochal inventions in the fields of materials science, medicine, and robotics that emerged from space research. For NASA, the journey to the Moon resulted in over six thousand patents.

The development of knowledge requires journeys—both physical journeys through space and journeys made with the help of narrative and in the form of theoretical thought experiments. For knowledge to develop, discoveries must be combined with inventions, sensory experiences need to be sharpened with the help of theories, and our understanding has to be developed by means of explanations. If modern scientific thinking was born with the invention of discovery, the historical development of the scientific project has been defined ever since by a complex and successful interplay of inventions and discoveries. The enormous developmental potential of scientific thinking builds upon this dynamic between the technological and cognitive inventions needed for discoveries to be possible and the continual stream of new inventions that these discoveries have, in turn, shown themselves capable of generating. All this takes place within what may be called the hermeneutic experience, in which interpretation develops through a fluid interplay between discoveries and inventions in the interface between ourselves and the world. Over time, what we regard as science has also come to include a wide spectrum of positions extending across induction and deduction, empirical and theoretical approaches, experience and thought, understanding and explanation. As it develops, science is continually straddling two horses, for which reason discovering or inventing reality is never a matter of an either-or choice.
Thanks to new inventions, new discoveries are always being made as the scientific project advances.
The progress—and metamorphosis—of science

At the same time as the Europeans were taking a giant leap across the Atlantic five hundred years ago, there began a process of knowledge development that would lead to dramatic, successful, and unpredictable changes in what we call scientific thinking. The next great leap, into space, can be thought of as a continuation of that narrative, a shining proof of the progress that, in a more or less revolutionary fashion, made possible the discoveries/inventions of entirely new worlds at the macro- and microcosmic levels as well as in society and the lifeworld. The possibility of travelling to the Moon and the advent of expansive space research presuppose—and are, indeed, impossible to comprehend without—the extraordinary advances made by science over the centuries. Of particular importance are the scientific breakthroughs of the second half of the twentieth century. Apollo 11’s journey to the Moon marks a truly epochal shift in a world history in which the human condition can no longer be separated from scientific development. This journey also attests to the fact that science often surpasses our expectations by taking us beyond what has hitherto been the stuff of fairy tales and fantasies. Over time, however, all these advances have also fundamentally changed the essence and function of scientific thinking.

We have already observed on several occasions that technology and scientific thinking are historically intertwined and that it is therefore not possible to understand their relationship solely in terms of theory–practice or cause–effect. Technology is as much a precondition for scientific progress as the other way around. We have seen the emergence of technoscience, which is not only a phenomenon at advanced institutes of technology and universities but can also be said to define our civilization. Advanced digital technology today permeates the work being conducted, even at the simplest level, in the humanities, theology, and art, where scholars continue for the most part to work on their own. Scientific thinking, far from being exercised in isolation, is always embedded in physical and virtual worlds. If we broaden our notion of technology to include the general instrumentalization that now characterizes our knowledge practices, we can see how technology is always already present when we create knowledge.

The emergence of a new kind of scientific thinking between the wars was also closely connected to the preceding war (the First World War) even as it established the terms for the coming war (the Second World War).
Scientific thinking accelerated the development of weapons technology very considerably, particularly during the Second World War. Indeed, one might even say that science decided the outcome of the war. Yet we often underestimate a development that simultaneously moved in the opposite direction, namely the importance of war for the development of scientific thinking. It is no coincidence that the Manhattan Project was begun in 1942, as war raged around the world, in a race with Nazi Germany to develop an atomic bomb. For the President of the United States, Franklin D. Roosevelt, who issued the order to create the largest system of research that had ever existed, there was ultimately one issue: the war would be won by whichever side managed to develop the Bomb first. A massive research program was launched under the leadership of the physicist Robert Oppenheimer, involving more than 130,000 personnel connected to development and production sites all around the United States.

This marked the beginning of a new kind of science, as was clear not only from the scale of the research organization but also from the research findings. After producing a fissile isotope of plutonium from uranium, the Manhattan Project successfully conducted a test detonation in the desert near Alamogordo, New Mexico, on 16 July 1945, and later that year the United States dropped two atomic bombs, on Hiroshima and Nagasaki. It remains an open question whether this was necessary in order to end the war, or whether it had in fact become necessary to find a real-life test for a scientific breakthrough. Whatever the case, there emerged a kind of scientific thinking defined not only by massive scale but also by the terrifying capacity to destroy the very civilization that had produced it. It is no exaggeration to speak of a new phase in the history of scientific knowledge, since we now find ourselves in a situation in which science has not only created gigantic research organizations with huge numbers of personnel and arrays of instrumentation that require budgets only a state can support, but in which scientific thinking has acquired such power as to make its own self-extermination possible.

Scientific thinking has since then continued upon this trajectory towards large-scale mega-projects characterized by massive internal complexity and profound reliance on external support, factors that have shaped both the development of knowledge and funding policy. Pursuing this kind of scientific work requires enormous budgets, and the involvement of industry, particularly the arms industry, has often been a condition of its possibility. We should therefore understand the Manhattan Project as the start of a new scientific era defined by Big Science, in which the university has become the focus of unprecedented levels of investment in expensive laboratories, advanced machinery, and equipment operated by thousands of staff.
In this way, scientific thinking has merged with other sectors of society, developing by means of powerful international networks of peer-review evaluation and within the framework of a complex dynamic of competition and collaboration. Yet the ambiguity of scientific development has also become more pronounced, raising questions about whether scientific development will ultimately make the world more civilized or lead to new forms of barbarism.

The scientific thinking that expanded and accelerated with the help of innovations during the Second World War was closely associated with the military-industrial complex. Big Science has subsequently come to be dominated by the life sciences, nanotechnology, and big data (the Internet of Things). The latter creates opportunities for a small number of people to control an enormous amount of information—and, with it, large numbers of people. But access to large data banks is also a precondition for the development of artificial intelligence (AI), which can itself handle massive amounts of data. None of this would have been possible, however, without the new digital information system that now binds the world together globally. But its importance and function have also changed over time. In a couple of decades, we have moved from a rosy view of the Internet as characterized by information highways and democratic communication platforms in which “information wants to be free,” through a sobering-up phase in which we began to realize that the Net is in fact more like an information sewer or an echo chamber for narcissistic personalities, to the unavoidable recognition that this information system actually represents the most efficient surveillance system in history, whose ability to manipulate entire populations at the micro-level has been created with their active and enthusiastic collaboration.

There is a growing realization that the emergent digital information system has generated colossal amounts of data that are stored in continually changing databases and that can only be analyzed by using advanced technology. Thanks to big data, this new scientific knowledge is a resource not only for meteorology, environmental research, and genetics but also for marketing, the military, and political organizations. Using mobile telephones, camera surveillance, and web services, we can acquire information and perform simulations with such a high degree of precision as to be able to make predictions that give the impression of determinism. We have increasingly been forced to replace an image of ourselves as “users” with a more insightful, but also more worrying and less flattering, image of ourselves as being used. All this creates extraordinary opportunities for scientific thinking while also showing how the preconditions for science are changing as it encounters new challenges and problems—whose management, in turn, also requires more scientific knowledge.
Considering how much the textual culture of the Gutenberg paradigm has shaped scientific thinking, we can only imagine how radically the global information world of digitalization and the development of artificial intelligence will shape the premises and possibilities of scientific thinking in the future.
The science of globalization and the globalization of science

The great social transformation of our age is called globalization. But what is globalization? Where does it come from? And how does globalization affect scientific thinking? The sociologist and globalization theorist Manuel Castells has described globalization as a process driven by two immensely powerful motors, one economic and the other technological.274

The economic motor has its origins in a liberal doctrine that, beginning in the 1980s, resulted in the deregulation of the world economy. This involved a dismantling of the national “containers” that had arisen as part of total war, when entire nations mobilized all their resources during the First World War and the Second World War, and which had resulted in the political sphere steadily gaining more control of the economy. Today, there is hardly any country, with rare exceptions such as North Korea, that has full control over its economy: money and capital, goods and services, and people all move increasingly freely across national borders. The economic effects of this deregulation were also considerably amplified by the integration of the former communist states of Eastern Europe and the enormous Chinese economy during the 1990s. Yet the assumption that this will function as a successful strategy rests on notions of the division of labor and comparative advantage, fundamental ideas that were developed by liberal economists such as Adam Smith and David Ricardo in the eighteenth and nineteenth centuries. The idea is that if each of us concentrates on what we are good at, the result will benefit everyone involved.

But a deregulated world economy does not in itself constitute globalization. The economy was far more deregulated and liberal before the First World War, although without involving globalization in any real sense of the word. The sea-change, which occurred in the 1980s, was that this increasingly borderless economy began to operate in tandem with a new digital information system, which for the first time in human history created global simultaneity through the compression of time and space. This changed everything. When the stock market in New York or Singapore falls, it instantly impacts the economic situation of families in suburban Stockholm or a rural mining town in Finland. This is an entirely new phenomenon, a completely new economic reality.
A real-time economy defined by hyper-competition has emerged from the convergence of a deregulated, liberal world economy characterized by free trade and an information system that has made global simultaneity possible for the first time in human history.

In parallel with, and in response to, this development, a policy consensus has emerged since the 1990s over how best to face the challenges of globalization. This response can be summarized in four words: knowledge, knowledge, knowledge, and knowledge. Thus, the idea behind a knowledge-based economy is not to try to compete by means of lower prices in a race to the bottom, but rather to invest in knowledge-intensive activities that move the country further up the value pyramid. If the challenge is called globalization, the answer is called knowledge society: a common strategy involving investment in developing expertise, cutting-edge research, and innovation capacity, which together can be expected to strengthen competitiveness. All of this has made knowledge and scientific thinking of central interest, prompting an obsession with education, research, and innovation processes.

Even so, the situation abounds with paradoxes. Globalization is continually giving birth to its antithesis. For example, globalization’s knowledge society has given rise to a kind of cognitive nationalism, which is a curious phenomenon, to say the least, and deeply ironic in light of the universalistic ideals of the university and of scientific thinking. It is thus no coincidence that, after several decades of globalization, our societies now find themselves in a situation defined by intensifying tensions between globalism and nationalism.

If Birmingham was the cradle of industrialism, the cradle of the information society (or whatever we are to call it) was Silicon Valley. It was in this curious milieu, south of San Francisco, that a railway baron named Leland Stanford established the economic foundations of one of the world’s most successful universities, Stanford University, which over time built up around it a knowledge economy in which, from an early stage, businesses and academic activity grew alongside each other. Today’s digital information geography has a long prehistory that is an integrated element of, and intertwined with, the history of scientific thinking, particularly during the postwar era, when large-scale computing began to be used increasingly widely. The real advance came in the 1970s, however, with the integration of the computer’s various components into the microcomputer, a unit able to fit on a desk, at first in offices and within industry. With the advent of the personal computer in the 1980s and 1990s, computerization made its definitive breakthrough into the home and among the general public. When, around the turn of the millennium, the capacity of these machines steadily grew and personal computers were connected via telephone modems and broadband network connections, the Internet became a generalized experience for ordinary people.
Since then, developments have led to desktop computers becoming portable laptops, while the telephone, the television, the camera, the music player, and the photo album, along with other “old media,” have increasingly converged into a single information system that in turn is becoming detached from the ground in the form of a “cloud.” This development has reinforced the horizontal dimension of information geography in the form of YouTube, Facebook, and Twitter, while the advent of Wikipedia has made it impossible for us to ignore that the conditions and possibilities for scientific thinking will be fundamentally changed as new forms of information management, knowledge creation, communication, governance, monitoring, and surveillance increasingly come into their own. No-one can today confidently state what the effects of these changes will be, and it would be naïve to imagine that this development has a “final station.” Yet it seems likely that we in the academic world continue to underestimate the long-term impact of the developmental process that began in Silicon Valley just forty years ago.
Digitalization: myths and challenges

Myths about Silicon Valley proliferate. One of the more persistent is the myth of private initiative. Indeed, it is curious that a romantic notion of free enterprise should have been nurtured in an environment in which such vast amounts of financing in fact came from the state, as regards both the university and business activity. Manuel Castells has described how the new network technology was actually created as the result of an unholy alliance between the United States’ investment in Star Wars (a decentralized defense system that could not be knocked out because it was sustained by nodes in a network), the military-industrial complex (a defense industry that operated in close alliance with politics), and an anti-authoritarian hippie culture of entrepreneurial geniuses (who refused to be controlled by parents, corporations, or the state).275 From this curious symbiosis emerged a continuous stream of new discoveries and inventions, which eventually resulted in an entirely new digital technology that in only a few decades would conquer and change the world. But this new information system also gave rise to a new economy, within which some of the world’s most successful corporations would emerge in just a few years. In the background of this development, however, the state always played a crucial role in financing research.
While globalization is profoundly dominated by economics, it was political and legal decisions that made possible and initiated the historical development that we now call globalization.

Walter Isaacson has tried to dispel another myth relating to the power of innovation, that of the lone genius, by foregrounding how creativity here emerged from a collaborative process. Teams working in the interface between people and technology were the key to success. In Silicon Valley in the digital era, these teams principally arose in three ways, often in close interaction with each other: through resources from and coordination by the government; through private companies; and through the free sharing of ideas among peers.276 Even if the transnational nature of globalization means that politics now has greatly reduced room for maneuver, globalization today remains dependent on political-legal preconditions that are fundamentally bound up with the state.

Another myth relates to Stanford University, the successful private university that is regarded as the origin of the Internet revolution and the “new economy” that emerged from the new network logic. At this point, however, it is vital to remember, first, that research financing at this university came to a striking degree from the Department of Defense, and second, that a new knowledge project, here as always, resulted not from well-behaved researchers and students living harmoniously within the walls of the university but, rather, from crucial interactions in the interface between academia and society. The Internet revolution would not have been possible without intensive exchange between the university, private research laboratories, and “garage culture” (the fact that many of the leading entrepreneurs of the digital revolution began by using their garages has given these buildings an almost iconic status in such circles). We should also remember that most of these super-entrepreneurs were not well-behaved students or researchers at Stanford or another university but mostly came from another world entirely. At best, they were college dropouts like Steve Jobs, with his famous course in calligraphy and lifelong interest in Zen Buddhism. While he may have been something of a marketing genius (despite never having studied the subject!), his real talent lay in a capacity to build constellations of expertise (his namesake Steve Wozniak, the real computer nerd, played the key role in many of the technological solutions, particularly during the initial phase). Like so many others in Silicon Valley, Jobs moved in a space between the worlds of business and research, where he could quickly seize on ideas and adapt advanced technological solutions from the research divisions at Xerox PARC and elsewhere.

This entire narrative about the digital revolution describes a necessary condition for the emergence of the state of society that we now usually designate as globalization.
Without cutting-edge scientific thinking featuring a dynamic interplay of discoveries and inventions, there would be no globalization. But globalization has also fundamentally changed the premises and possibilities of scientific thinking itself. Just as book printing was an integral component and defining feature of the Scientific Revolution of the sixteenth and seventeenth centuries, the information technology revolution of our own era is a product of scientific progress. At the same time, this digital information culture is in the process of fundamentally reshaping scientific thinking, particularly its communicative infrastructure and, with it, working methods, publishing routines, procedures for evaluation, and career paths in academia. Let me highlight just a few of the many dimensions of globalization that directly affect scientific work.

The intensifying stream of scientific information to which we are now exposed is not only an effect of the fact that researchers are now geographically mobile to an unprecedented degree (this could not have happened without cheap air travel and may not continue for much longer, given its effect on the climate) but also an effect of a global information system that has generated an immense tsunami of research publications. Outside of the specialized research centres where knowledge on a mass scale has been built up in a disciplined, cumulative, and well-organized fashion, this dramatic increase in publications has led to a curious situation defined by a fundamental imbalance, in which the need to publish (for the purposes of professional accreditation and meeting the standards of research accounting systems) has come vastly to exceed the need for—and the opportunity to read—these publications. Despite the fact that many commentators are now arguing, on strong grounds, that too much research is being published (two and a half million scientific articles are apparently published every year), the state and other funders continue to measure research in terms of numbers of publications. The sheer number of publications, and not merely their uneven quality, is becoming a serious problem for scientific knowledge.277

The knowledge society’s strategy of countering the challenges of globalization by investing in knowledge has boosted scientific activity and considerably increased its budgets. Yet it has also come at the price of a drastic reframing of its horizon of expectation and incentive structure in terms of economic factors. When we combine this with the predominance of economic measures in the measurement of scientific results, we can clearly see that the very soul of scientific thinking is under threat and that the logic of discovery is in danger of losing its dynamism. These new means of information management have radically changed research communication and created unimagined possibilities for bibliometrics and ranking, measurement and comparability.
Today, a “global positivism” is emerging from this ferment. This time, the roots of this positivism are not epistemological considerations but the “magic of numbers,” that is, the result of an amalgamation of the accounting systems introduced to satisfy funders’ demands for proof of results and a knowledge culture largely formed by quantitative and naturalistic ideals.

The new, “flat” information geography of globalization also affects the conditions and possibilities of scientific thinking, because competition in this kind of “flat” world also increases dramatically among actors who find themselves outside the traditional institutions of scientific knowledge. In the wake of globalization’s horizontalizing culture of knowledge, universities are beginning to experience real competition from journalism, “old” media, and social media. Scientific thinking itself has also increasingly begun to use these channels in order to communicate research findings, increase prestige, and generate financing for its activities. “Research by press conference” is becoming increasingly common, at the same time as research projects build up platforms on social media through which to communicate and legitimize their work. When science becomes journalism, however, there is a clear risk that peer-review evaluation will cease and that journalism will use science without paying attention to its complexity and nuances. A great deal of research is also now beginning to look like journalism. Whether the result of this development will be a kind of scientific democratization or a levelling-down of scientific knowledge remains an open question.

In our own era, the ambiguous function of scientific thinking is being carried to an extreme. It is becoming increasingly clear that science itself generates challenges, problems, and crises—but also that it is a necessary resource (perhaps even our most important one) for dealing with these challenges, problems, and crises. The fact that scientific thinking now appears to be both the problem and the solution also raises questions about the conditions of possibility for directing scientific thinking and its development within the framework of the preconditions established by a new information geography.
Once again: contextualizing science

Every time we say “knowledge” or “scientific thinking,” we are immediately transported back to the knowledge arena of the ancient world, and, just as in Pythagoras’ original description, we immediately think of theory as being in opposition to practice. We have seen that the tracks left by this conceptual figure are very deep, resulting in a perspective that associates scientific thinking with pure theories—lacking historical and societal context and disconnected from the various forms of instrumentation and human behavior.
However, scientific thinking is always far more than just theories. It is always a practice before it becomes a theory, and ultimately it is practices alone that sustain the scientific project. Scientific thinking never takes place in a hermetically sealed space. The embedded nature of scientific thinking has been reinforced by its various metamorphoses during the past century, and from these we can recognize even more clearly the importance of contextualizing our understanding of what scientific thinking is and how it works.

Accounts of philosophy of science are often characterized by an institutional lacuna, which results in a tendency to overlook organizational realities, employment conditions, and the significance of institutions. Yet working with scientific knowledge means being part of one of our oldest institutional environments, which is also the bearer of traditions that live on and continue to shape the creation of scientific knowledge. This also means both that power relations, strategic alliances, and tactical moves impact scientific development, and that scientific thinking needs mechanisms that can serve as correctives to corruption, opportunism, and nepotism. In a time like ours, when scientific thinking is being attacked and threatened, we need to safeguard the institutional bases of science while also enhancing its laboratories of interpretation.

Philosophy of science suffers from an economic lacuna, too. When talking about science, we have a tendency to pretend that money is not involved—despite the policy discourse being entirely shaped by economic interests. This is curious, given that both history and our present moment attest to the central role played by money in the creation of scientific knowledge. Historically, there has also been an extremely strong connection between the development of capitalism and the emergence of modern science. If we want to understand the conditions for scientific thinking in a complex knowledge society based on enormous systems of research financing, we need to, as noted earlier, follow the money! In a historical perspective, scientific thinking in countries like Sweden has changed in an extraordinarily short space of time: from having been handled in the late 1960s by a Minister for Ecclesiastical Affairs (a position last held by Olof Palme), who was responsible for science as well as church-related issues and culture, to being today dominated entirely by economic considerations. In practice, this has meant that science is becoming primarily the responsibility of the Ministry of Business and the Ministry of Finance. At a micro-level, too, questions about financing now occupy researchers’ thinking to such a degree that aims (knowledge) and means (financing) are tending to change places—promoting a dangerously instrumental mindset, as researchers no longer search for financing for their knowledge projects but instead scan the field for suitable kinds of knowledge in order to be able to meet the criteria given in a funding announcement.
The political lacuna affecting philosophy of science is actually just as curious as the economic one, given how obvious it is that the state has been crucially important in both financing and shaping modern scientific thinking. But in today’s globalized economy, a kind of cognitive nationalism has also emerged, with the result that we prefer to talk about scientific knowledge almost exclusively in terms of national resources and assets—despite the fact that scientific thinking fundamentally relies on universal and transnational ideals and logics. Scientific thinking will thus be characterized by the same tensions between global and national interests and ideals that have increasingly come to define the public sphere at large.

Last, but definitely not least, we need to be aware of the technological lacuna in philosophy of science. We are still suffering from a lingering tendency to overlook the fact that technology and the use of technology are integrated elements of scientific work. The omnipresence of information technology means that it will soon be impossible to differentiate between human and machine in the process of knowledge production. What makes it so difficult to think about science in terms of technoscience, however, is probably the fact that science and technology so quickly became bogged down in the deep tracks carved through our history by a linear and instrumental understanding of the relation between theory and practice.
When the fate of the nation-state became the fate of science

As we have seen, the traditional narrative of science has often taken the form of a conflict with religion as the adversary. Yet history teaches us that what looks like a struggle between faith and knowledge, Church and university, in reality involved a tug-of-war over scientific thinking itself and over the universities, which were pulled between the Church and the nation-state that emerged in the nineteenth century. The presence and central role of the state in this drama only become visible, however, when we abandon the perspective of this secularization narrative. During the last few centuries, scientific thinking and the university have definitively moved from the jurisdiction of the church to that of the nation-state, and the history of the university has to a large degree been determined by the developmental logic of the nation-state. Consequently, the horizon of expectations for scientific thinking has also profoundly changed.
Globalization, as we name the great transformation of the world in our era, has meant that the framework used by society when viewing scientific knowledge is now dominated by economic considerations. In recent decades, scientific thinking has become absorbed into the new economic logic of the welfare state, a development often taken up in increasingly critical discussions of New Public Management. Another way of defining and problematizing globalization’s consequences for a scientific thinking increasingly integrated into the welfare state has been presented by the Danish political scientist Ove Kaj Pedersen, who has outlined a theoretical framework for how the welfare state has, in the last few decades, developed into what he calls a competition state.

The competition state is a state that actively seeks to mobilize its population, its companies, and all of its resources in a global competition—rather than, as with the welfare state, compensating and protecting its population and companies from the boom-bust cycles of the international economy. Furthermore, the competition state seeks to make individuals responsible for their own lives, connecting community with work and treating freedom as identical with the realization of one’s own needs—rather than, as with the welfare state, emphasizing moral development, the social dimension of democracy, and a concept of freedom focused on the opportunity to participate in the political process. In short, the competition state is a state that, instead of seeking stability, promotes dynamism and unending reform. It is a state that is more dynamic and internationally oriented than the welfare state.278

While the competition state had its breakthrough in the 1990s, when it also became an economic-political program for the Clinton administration, its real roots lie in the bitter experiences of the 1970s, when the postwar welfare state proved increasingly difficult to finance as the reform wave continued despite the postwar economic boom having come to an end. At a time when international competition was changing, the hyper-competitiveness of globalization began to establish itself, and the old communist states were incorporated into a shared world economy, nations also began to compete with each other in a new way. They carried out a “total mobilization” of all material and immaterial resources in order to face this global competition, with the help of institutional reforms that opened up their own economies to the forces of competition. Everything became subject to competition. As a result of this changed logic, the state also underwent a metamorphosis. Where the welfare state had focused on securing, protecting, and compensating its citizens, the competition state instead saw one of its primary tasks as making the public sector more efficient and promoting business competitiveness by equipping the labour force so as to increase its employability and encourage adaptation, as a way to exploit the full potential of the economy.
Pedersen describes the evolution of the welfare state into the competition state as a shift from welfare to competition. At first, the rationale for this process was highly ideological, but over time it has been absorbed and internalized as the logic of society. This can be seen, for example, in how the primary focus of schooling has tended to shift away from the formation of citizens towards issues of national competitiveness. As the task of pedagogy has become to motivate individuals to see themselves as responsible for developing their own skill sets, the frame of reference has also tended to become economic, rather than political, and focused on obligations rather than rights.

Although there is lingering uncertainty about the degree to which states can really be said to compete with each other, it is clear that nation-states are anything but superfluous in the era of globalization. Far from the world having become post-national, the ongoing process of globalization has brought about fundamental changes in the nation-state and given it new tasks. Where the welfare state’s focus of interest was largely on democratic participation, the competition state has concerned itself with economic competitiveness. Pedersen describes the competition state as a combat organization created to “mobilize society’s resources in competition with other states,” in contrast to the welfare state’s protective and compensatory ambitions.279 The organization of the competition state therefore relies on a new form of bureaucracy that primarily treats activities in terms of their economic aspects and focuses on the efficient use of resources. Competition among nations is intensifying over how to become the most adaptable, successful, and efficient in delivering the services needed by private companies in order to compete in regional and global markets:

The competition state therefore reorganizes itself so that state employees work as efficiently as possible and do so with the highest possible degree of productivity, while at the same time ensuring that what they produce has optimal impact on private-sector competitiveness.280
As state employees find themselves confronted by the task of living up to these goals and meeting the production targets set by politicians, economic efficiency, results, and delivery capacity are becoming entirely detached from processes of democratic anchoring, from the professions, and from any kind of ethical striving for the good life. This shift from politics to economics is also ushering in a transition in which democracy as a competition system is replaced by the market as a competition system. Whether or not this development should be regarded as necessary or desirable, there is no
doubt that we today face the challenge of managing a growing deficit in the democratic legitimacy of state authorities, in society in general and in academic institutions in particular. Pedersen argues that the competition state has a deficit of legitimacy that can be localized in “the ethical moment,” that is to say, it places greater emphasis on efficiency than on the rule of law.281 The competition state seems to lack, or at least has not yet managed to demonstrate, any mechanisms for dealing with the moral deficit of a democracy that has been reduced to citizens merely deciding at regular intervals “who should be in charge”—something that ultimately risks transforming a democratic society into a majority dictatorship.
The competition state as scientific challenge

Since the modern university distanced itself from the jurisdiction of the church to become increasingly integrated into the administrative logic of the emerging nation-state, it has also come to be shaped by the development of the welfare state—and has therefore been reshaped in accordance with the welfare state’s accounting and control system, built on the sequence inputs – results – effects. This institutional reality largely sets the terms for all those engaged in scientific work at universities today, but it has also generated a deficit of legitimacy. In its one-dimensional focus on efficiency and delivery capacity, the competition state tends to ignore both the academic ethos and the moral aspect of human action as preconditions for the laboratories of interpretation within science to function. Indeed, the challenges facing the competition state seem to become strikingly acute in relation to the question of how the nation-state should control and manage universities and institutions of higher education. Here, I would like to highlight five areas in which the competition state challenges science.

First, the competition state has introduced a one-dimensional culture of competition, which in the long term risks impoverishing both society and academia. No living system, whether a market, an organization, or an academic institution, can survive and develop successfully without maintaining a balance between competition and collaboration. Being good at competing in complex systems is not always the same thing as being good at developing scientific thinking of the highest quality. Thus, when the state massively increases the pressure of competition, academia risks becoming dysfunctional—and this in an organization that has itself always been predicated on competition.
Second, we are seeing a development in which different competition systems are getting mixed up. While it is true that both democracy and science are constructed as systems of competition, in an era dominated by market thinking we easily forget that science as a competition system has goals, driving forces, and aims that differ in many regards from the system of competition found in the marketplace. Efficiency may be a virtue in the marketplace, but the competition system of a democracy is built on legitimacy and principles of power-sharing, while scientific thinking is primarily about competing ideas, not people. The main goal here is not efficiency or announcing winners but developing knowledge and establishing the greatest amount of evidence within the framework of a quest for truth. In the era of the competition state, however, the various systems of competition tend to collapse into each other, with the result that everything turns into markets and competitions, which is hardly a favorable basis for individual personal growth or for sustaining the viable, long-term development of knowledge.

Third, the narrowly economic focus that is a feature of the competition state’s management system also tends to generate a cultural and moral deficit, as the processes that create the legitimacy of its institutions become impoverished in an era when everything is about efficiency and delivery. We are increasingly becoming aware that developments in society at large, and in scientific thinking in particular, are leading to a society lacking moral content. When the particular ethos that forms the foundation of academic life and work is reduced to a formalistic code of conduct for state employees, in a state where efficient delivery is the governing ideal, there is a real risk that the knowledge culture of academia will become impoverished and superseded by “valueless” instrumentalization.

Fourth, there is a striking lack of alternatives in our societies today. We are wealthier than ever before in the history of mankind, our imagination is stimulated on a daily basis by an almost unbelievable culture of storytelling (which has also grown into one of our largest industries), and at the same time our societies are making the greatest investments in research and knowledge development. And yet, strangely enough, we now seem unable to imagine that another way of living is possible or that society might be reshaped in a profound way according to other principles. Shouldn’t it be the other way around? The material, mental, and cognitive conditions of our time must surely be more favorable than at any previous moment for those who want to think ground-breaking thoughts, develop new projects, and practically realize other ways of living. Of course, an era characterized by “the politics of no alternative,” in which political debate mostly seems limited to who can deliver most efficiently, represents an enormous challenge,
if not a direct threat, to a scientific project that relies fundamentally upon the invention of discovery. This lack of alternatives is not a good seedbed for science, whose horizon of expectation is the future. The situation is deeply paradoxical: we live in a society defined by dizzyingly rapid change, and yet we are at the same time paralyzed by the feeling that, deep down, nothing can be changed. If we also consider the fact that society increasingly directs research and teaching in the form of more or less specific commissions, we may understand the complicated situation in which scientific thinking suddenly finds itself when a welfare society gives way to a society of competition.

Fifth, the strategic point at which the two knowledge projects originating in the Enlightenment—democracy and science—collide head-on with the competition state is located in philosophical anthropology and revolves around the generalized view of the human being as homo economicus. The organizational principles of the competition state not only involve a new view of the state, organized like a market with resources allocated on the basis of competitive behavior; they also ultimately presuppose a new kind of human being, a rational actor regarded as entirely motivated by selfishness and self-interest. Pedersen sketches a historical overview of the postwar era, in which the contrast with the unimaginable crimes against humanity committed during the Second World War served as a source of moralization, which in turn contributed significantly to the legitimacy of the welfare state after the war. The Universal Declaration of Human Rights in 1948 ushered in an “age of moralism,” whose most significant inventions include the welfare state, intergovernmental agreements, and the subordination of economics and markets to politics and democracy. In the 1980s, however, the neoliberal critique of the planned economies of socialism and of the welfare state as safeguards against extremism came into vogue, as an alternative way of criticizing Nazism’s destruction of democracy. Neoliberalism, whose roots go back to interwar Vienna, came to assume a dominant position and was thereby able to establish the doctrine of homo economicus as an anthropological precondition of the emerging deregulated economies. If the transformation of the welfare state into a competition state can be described as a movement from moralism to economism, this shift also saw the emergence of a new type of personality: the opportunistic personality, which has pushed political theory and moral philosophy out of public debate and installed economism in their place as society’s foremost ideology.282
The opportunistic personality to which Pedersen is referring, premised on a view of human beings as rational, egocentric profit-maximizers, represents the
anthropological underpinning for everything from rational-choice theories to the accounting and control systems for academic work in the competition state. If scientific thinking is ultimately about people, this is a view of human beings that risks undermining both the moral and the cognitive preconditions for long-term, viable scientific activity. But there are also threats from other quarters.
The future of knowledge society?

During the 1990s, a policy consensus emerged about how to successfully address the challenge of globalized hypercompetition. In the EU, the OECD, and the UN system, there was a conviction that competition should not take the form of a race to the bottom but should rather involve a movement up the value pyramid on the basis of skills and the power of innovation. The response to the challenges of globalization was, as noted earlier, knowledge, knowledge, knowledge, and yet more knowledge. This is the strategic agenda of the knowledge society, and it also seems to have worked most successfully within the framework of welfare states, which have invested heavily in adaptation and retraining for those groups whose jobs have been lost as a result of competition. By contrast, societies that have not chosen to invest in the knowledge society have seen the deeply disquieting advance of rust belts and, with them, the emergence of increasingly fierce resistance to globalization. In the backyards of globalization, there is growing skepticism towards the very idea that a globalized division of labor can bring competitive advantages. At the end of the day, are most people losing out? This type of question arouses our curiosity and feeds our fears, which in turn risk undermining the premises of globalization. In the world today, there is a rapidly hardening opposition to the long-standing consensus that investment in the knowledge society is the best way to meet the challenges of globalization. Instead of globalization’s knowledge societies, we now see a growing preference for protectionist strategies narrowly aimed at protecting each nation’s own industries and jobs. In the tension between globalism and nationalism, the university and scientific thinking have become caught in a bind as some nation-states begin to cut back on education and research, particularly in subject areas such as the humanities and the social sciences, and to direct scientific thinking on the basis of a purely instrumental interest in (short-term) growth and competitiveness. The strategic conflict increasingly seems to be between, on the one hand, the knowledge society’s strategy of using structural transformation to meet the challenges of globalization, and, on the other, what might be called Trump-society protectionism, which
instead resignedly draws its strength from populist nationalism.283 The situation is grave. Fears that a competitive advantage will not materialize risk paralyzing the social body, and the closing of borders threatens many of the values that we have begun to take for granted and that are necessary conditions for scientific thinking to continue to develop. It is a situation that raises difficult questions: Is the knowledge society under threat and (already) coming to an end? If so, how will this affect science? For those who want to defend the knowledge society’s heavy investment in science—without ending up in an extremely instrumentalized view of knowledge—it is time to build a new alliance between the two great knowledge projects of the Enlightenment: science and democracy.
CHAPTER 12
BEYOND RELATIVISM AND OBJECTIVISM: STRIVING FOR TRUTH
Because modern science has been shaped by positivism—as well as by its many critics—the scientific imaginary has been continually plagued by objectivistic expectations of being able to guarantee absolute knowledge by means of purely causal explanations and immediate empirical verification. The fact that, in practice, it has not been possible to establish such certainty quickly led even positivists to modify their stance, with the result that positivism adjusted its original programme and shifted its position in response to both “internal” and “external” criticism. But these grand ambitions have also regularly elicited powerful reactions that have created tremendous pressure in the opposite direction. For this reason, many have chosen to abandon the quest for truth that underpins the scientific project and have instead resigned themselves to a relativistic position. Modern approaches to science have therefore tended to oscillate between extremes, counterposing objectivism to relativism, absolute knowledge to arbitrariness, and a single ideal of science to calls for the dissolution of science into discrete epistemological domains. A polarized landscape emerges in which it is easy to believe that philosophy of science means choosing between narrow-mindedly extreme positions.

In order to rectify the stereotyping that has often been a hallmark of accounts in philosophy of science, it is important to maintain a strictly contextualized understanding of scientific praxis as a historically variable phenomenon—a project. The very idea of scientific “objectivity” has a history, and this history is deeply rooted in the nineteenth century. Objectivity is not some naturally occurring phenomenon—in precisely the same way, subjectivity is not some originary or default standpoint. In their account of how epistemological ideas about mechanical objectivity emerged and assumed a dominant position within science, Lorraine Daston and Peter Galison are therefore careful to emphasize that the other side of the story—subjectivity—also has a history and that the two stories are interconnected: “Subjectivity is as historically located as objectivity; they emerge together
as mutually defining complements.”284 This means that we also need to call into question the narrative about scientific development that presents the phenomenological experience of the world as an ontological-historical origin, one that was over time affected by the ontological collapse that followed upon the emergence of modern (natural) science. Rather than representing the underpinnings of modern science, the investigation of phenomenological experience, when taken to an extreme, in fact leads to asymmetries. In this book, I have extended Ricoeur’s ideas by calling for a “broken” ontology that can only maintain its hermeneutical integrity by means of unstable “heterogeneous syntheses” continually moving towards conflicts of interpretation that bring together the asymmetries resulting from the collapse of phenomenology.285 Within the framework of this hermeneutic perspective, objectivity and subjectivity appear together, as complementary terms of a historical reality in which scientific objectivity and scientific subjectivity mutually define each other. This mode of relating the history of science underscores once again the importance of avoiding either-or dichotomies and of instead trying to think of science as two sets of conditions that clash with each other within the framework of a shared quest for truth.
Relativism, cognitive horizontalization, and post-truth

On several occasions, we have noted that we are now living in an increasingly polarized cognitive landscape, which has, in turn, been projected back onto the history of science, with the result that relativism is made to seem like the alternative to objectivism. Curiously, many people imagine that relativism comes free of charge as soon as one abandons one’s objectivist ambitions. Yet this is to forget that relativism is a highly complicated epistemological position, one that seems in many regards even harder to defend than objectivism. The classical paradoxes associated with this position have been a frequent point of reference in the history of philosophy. For example, how is it possible to claim that everything is relative—without immediately making absolute claims that run contrary to that very relativism? And how can it be argued that we are always lying—without thereby claiming to be speaking the truth? In other words, the challenges posed by relativism to our epistemological considerations have been gravely underestimated. Science is not some narrow-minded theoretical activity. Rather, by actively participating in different laboratories of knowledge, we practice and learn how science works. While hermeneutics was developed, from the beginning, largely in opposition to positivistic objectivism, the epistemological
landscape of our era has changed radically, such that relativism, not objectivism, has now become the most important opponent of and challenge to hermeneutics. The horizon for our laboratories of interpretation has thus changed character. As science today finds itself compelled to take interpretation seriously in order to forge a path in an age of hermeneutics, it is of the utmost importance that our conceptualization of interpretation is not transformed into an arbitrary relativism.

An unanticipated consequence of the new digital information system that underpins globalization, particularly when considered against the utopian expectations about knowledge that permeated the early phases of the information age, is an impoverishment of our knowledge cultures that threatens to usher in a “flat” world of cognitive horizontalization. As we have already explored, these effects are ambiguous: they can be seen as an opportunity for democratization while also having a levelling-down function. Increasingly, however, we have begun to refer to a state of post-truth. The vast amounts of information now flooding the world are creating problems by threatening to give rise to an unmanageability that, in turn, will create confusion. Ultimately, we grow apathetic and cease to care about the quality of information: a lie comes to be thought as good as a well-founded statement. It is thus no coincidence that we are increasingly sounding the alarm about resistance to facts—and yet it should be noted that resistance to medication results not from insufficient medication but from its overconsumption. In other words, resistance to facts should be understood largely as the result of the vast number of facts, which can be more or less in agreement with each other, and which have often been mixed with lies and dubious assertions, but whose sheer volume impairs our judgement.

The emergence of a new media logic that increasingly fuses social and traditional media has also given rise to a situation in which it seems that everyone can become their own news channel and knowledge producer. When the hierarchies of knowledge are eroded, quality control and the possibility of making authoritative assertions and scientific claims also disappear. At a time when various systems of competition are collapsing into one another, there is also a dangerous tendency for democracy to be used as an argument that anyone has the right to make knowledge claims—without any need to be burdened by the procedures that science uses to ensure quality. Yet the accessibility conferred by podcasts and ubiquitous media makes possible an influence that far exceeds that available to “traditional” scientists in research communities. These challenges are compounded by a new phenomenon: the open contempt for science that is now increasingly articulated by political leaders, as when they declare
themselves to be tired of “experts” and “academics.” But this contempt for science is also manifested in the way that people, without proportion or qualification, exploit the slightest dissension among researchers as a pretext for announcing that “all cats are grey in the dark”—and that there is therefore no reason to pay any particular attention to the representatives of science.

Globalization is a fundamentally positive process of change, which during the last three or four decades has created the largest increase in prosperity and public health in history. While not everyone has shared in the benefits of globalization, if we consider the population growth that has taken place during this period, it is clear that without this development the situation could have been far worse, if not indeed catastrophic. It is only now that we are beginning to see the truly dark sides of globalization. From an early stage we were warned that globalization exerts relentless pressure on the planet’s ecosystem and threatens environmental destruction, climate change, and the spectre of a resultant ecological collapse—in addition to immature financial markets that are erasing the distinction between investment, speculation, and gambling in a real-time economy whose regular crises are as unexpected as they are predictable. But it is only very recently that we have become fully aware of the devastating consequences that cognitive horizontalization will have for a world unable to balance that horizontalization with different forms of “vertical” thinking, that is to say, thinking that is rooted “downwards” in traditions as well as “upwards” in a capacity to make knowledge claims and establish more or less fixed hierarchies of knowledge.

The “flat” information world of globalization also increasingly sets the terms for the scientific community. It brings opportunities but also new challenges in the form of a new kind of relativism and a new cognitive uncertainty. Although there are academic variants, which often go under the rubrics of social constructivism, post-structuralism, and postmodernism, it is hardly among these theoretical movements that we find the most important causes of the widespread relativism of our time. Rather, they should be sought in the new information system that is fundamentally changing the world. A cognitive horizontalization of the world that is not balanced by various kinds of vertical thinking represents one of the most genuinely threatening aspects of globalization, and these problems are only now becoming fully visible. But technology does not have a life of its own. Nor is this exclusively a question of purely technological problems; it is rather a question of people and their use of technology. The ability to interpret is therefore emerging as a crucial skill for anyone who wishes to navigate this new cognitive landscape.
In this situation, however, many people are trying to exploit the uncertainty that is a constitutive element of the scientific project and that it shares with democracy as a life form and a political system.
Objectivism and the new formalism

There is a temptation when confronted by these challenges to take refuge in an absolutist bastion of science, an objectivism that claims to represent Truth and that unendingly supplies timeless facts to refute the lies in society—a kind of “fact fetishism.” When this kind of defence of science is manifested in an internationalized form, by means of the Science March and similar phenomena, there is a real risk of reinforcing hard-boiled scientific ideals, with a consequent danger of suffocating the dynamics and robust power of the scientific project. Yet even this counter-reaction to the soft relativism that often results from the cognitive horizontalization of the world derives much of its own energy from the benefits of globalization. The unimagined capacity of the new information system to amass and process vast amounts of information using big data has created entirely new possibilities for surveillance and control. These possibilities are now exploited on a daily basis by companies and defence organizations as well as by state agencies and internet trolls. Positivism originated as an epistemological project, yet the magic of numbers that defines the present, and that has given rise to a new, globalized version of positivism, is not founded on arguments from philosophy of science but draws its strength from the systems of accounting and evaluation at work in our new regimes of control.

People have probably always compared themselves with others, but the continual comparison made possible by the enormous amounts of data that digital technology can process means that the scientific community is now characterized by incessant comparison through bibliometric instruments, evaluations, ranking lists, and scoring. It would seem that the magic of numbers has once again triumphed over words, such that most quality systems are in practice now designed as quantity systems. Research policy uses bibliometric instruments to measure the performance of universities and researchers in terms of publications in journals, which are in turn ranked (by the number of citations and references). At a time when formalized procedures form the basis for establishing prestige in scientific communities, it is perhaps not so strange that science has come to be defined by the new formalism—with the result that the informal dimension of science has been increasingly suppressed and hidden from
view. Strategic development, organizational changes, and audits are carried out in accordance with the mantra “what you can’t measure, you can’t manage.” In this situation, however, we need to remind ourselves of one of the fundamental precepts of accounting: in the real world, you do not measure what is important but what can be measured, and when what is important cannot be measured, what can be measured becomes what is important. If we then begin to allocate resources in accordance with these measures—and organizations and individuals adapt their behaviour so as to be as measurable as possible—we will very soon find ourselves in a dire situation. Today, we can observe the consequences of this in the form of dysfunctional academic organizations in danger of losing their dynamics and their developmental capacity as a result of the dominance of this logic of formalization. Among the many ironies of the present moment is the experience that the more we formalize, the more we hasten the emergence of new areas that resist being formalized or articulated comprehensibly.

In this situation, Polanyi’s discussion of the “tacit” dimension of knowledge has once again become topical as a starting point for illuminating the limits of formalization and the necessity of riding two horses at once: in order to understand how knowledge works, it is necessary to include both that which allows formalization and that which does not. Polanyi’s reference to the “tacit” dimension of knowledge serves as a reminder that people are always present in our knowledge and that there will always be dimensions of scientific activity that resist being articulated in terms of theories. Knowledge cannot exist without people—it is that simple. Knowledge thus has an unavoidably personal dimension. Regardless of how advanced our systems and technologies become, we cannot get away from the fact that knowledge ultimately requires capable human beings. This is doubly apparent in the context of education, where one easily finds oneself thinking that essays, exam results, and doctoral dissertations are the only products of scientific activity, when in fact the most important end result is almost always people—capable human beings. When it comes to the logic of research, new knowledge may be the result, but for it to come about there will always be a need for creative people.

There is a growing body of scholarship on the limits of formalization, which in Sweden has been developed in a number of research areas, inspired by Ludwig Wittgenstein, Michael Polanyi, Richard Sennett, Martha Nussbaum, and others. Early contributions to this field in Sweden include Bo Göranzon’s study of automatization and professional expertise in The Practical Intellect: Computers and Skills (1996), and the research grouping on Skill and Technology that was developed over several decades within the
framework of the Dialogue Seminar Series, based partly at the Royal Institute of Technology (KTH) in Stockholm and partly at Stockholm’s Royal Dramatic Theatre.286 Taking a kind of philosophical language laboratory as their point of departure, and confident in the importance of the poetic impulse for encouraging creative writing, its participants drew up a series of reflections intended to liberate a strain of practical knowledge, derived from experience, within “silent” professional skills. Another Swedish scholar who has contributed to this tradition is Ingela Josefsson, with her study Läkarens yrkeskunnande (On the Professional Skills of Doctors, 1990). The issues of professional skills and formalization were likewise brought to a head in Johan Svahn’s doctoral dissertation on the safety culture within the Swedish energy company Vattenfall and the challenges associated with generational transition in a nuclear power industry in which many experienced employees are retiring. Its focus on the importance of prudent action in situations that lack formalized rules or manuals to guide those responsible for safety is clearly inspired by Wittgenstein’s idea that it is not possible to articulate rules for how to follow the rules. In the final instance, we are dependent upon our own judgement, since such knowledge cannot be summarized in an axiom.287 A good safety culture cannot be realized by mechanically following the formalized rules in manuals; rather, the skills of professional expertise must be developed by means of a wide range of examples.288 In short, formalization is possible and necessary—but never sufficient. It is also as an extension of this tradition that we should understand Jonna Bornemark’s critical investigation of formalization in Det omätbaras renässans: En uppgörelse med pedanternas världsherravälde (The Renaissance of the Unmeasurable: Confronting the Empire of Pedantry, 2018).
Instead of possessing the Truth, science must content itself with evidence

The concept of truth turns up at regular intervals in any discussion of science and academia. The concept is laden with great expectations and has perpetually shadowed modern science. While aspirations to truth have varied across a broad spectrum of positions, they have strikingly often been drawn towards extremes, in which one either makes objectivistic claims to be presenting the final Truth or resignedly notes that no such claims about truth are possible. In the practice of everyday science, however, neither of these deeply polarized positions is particularly common. A moment’s reflection shows that in practice both attitudes mean taking up a position outside of science. Whenever someone is no
longer searching for the truth but instead resigns themselves to making do with the statement that the world can be interpreted in different ways, they have abandoned the criteria for scientific conversation. Yet it is easy to forget that even someone who claims to possess the truth is disqualifying themselves from scientific debate, since such a position in practice short-circuits the academic process: it becomes practically impossible to collaborate and interact with people who behave in this way at academic seminars and doctoral defences or in peer-review evaluations of publications and conference presentations. In real life, science concerns itself very little with Truth and can only rarely provide what might be considered absolute certainty or complete knowledge. Likewise, there is rarely complete consensus within a scientific discipline—such a subject would quickly be deemed defunct and closed. The laboratories of interpretation in science are built to manage conflict; they are platforms where different positions and perspectives are supposed to compete. This presumes that those who participate know something, of course, but also that everyone involved is open to that which they do not know. The productive scientific value of not-knowing should never be underestimated. Science thrives on questions and on its self-critical capacity to call things into question, even what may seem obvious, in a continual search for truth that takes place within the framework of a process that is effectively never-ending.

Science is thus less concerned with truth than many people imagine. What really matters is evidence. It is perhaps not so strange that the concept of evidence has proven so knotty, given how dictionaries insist upon translating it with terms such as “certainty” and “incontrovertibility.” Evidence does not mean truth, however, but rather denotes scientific proof that is accompanied by some degree of uncertainty. In reality, arguing on the basis of evidence involves systematically compiling and checking the quality of all relevant and available literature with a view to establishing the highest possible degree of evidence at the present moment. Clearly, there are many pitfalls associated with the ready use of the concept of evidence, given the tension between, on the one hand, the conventional meaning of incontrovertibility as something beyond dispute, and, on the other, the uncertainty associated with arguments based on evidence, given that in practice it is a question of managing different degrees of evidence.289 When we speak of evidence as if it were a theoretically pure standard, we disregard its foundation in the philosophy of action and the fact that it is always a matter of evidence-basing. The concept of evidence-basing also indicates that we are dealing with a process concept. Evidence is about neither absolute certainty nor already finished results. That something is based on evidence
means that it is founded upon a knowledge practice that seeks to secure the strongest evidence currently possible. Truth tends to become a “digital” concept that sets up binary alternatives of truth or lies, but one never speaks of evidence in absolute terms. Instead, it is always a matter of different degrees of evidence: evidence can be weaker or stronger. All of this indicates that the horizon of understanding for the concept of evidence is less about defending certainty and more about managing uncertainty. As a result, the horizon for philosophy of science has shifted markedly, from guaranteeing certainty to managing uncertainty. This has profound consequences for how we view research, as Helga Nowotny explains:

Uncertainty is an inherent component of the process of research. It resides in the multiple ways of searching for and generating new knowledge. Discovery is open-ended and fundamental research cannot predict what it will find and when. Research is the basis of a powerful and systematic process that seeks to transform uncertainties into certainty, only to be confronted with new uncertainties again […] always preliminary. They can and most likely will be replaced by new knowledge, sidestepped by new certainties.290
The concept of evidence has its origins in methodological developments within evidence-based medicine (EBM), which, prior to any healthcare decision, seeks to establish “evidence” by systematically gathering and evaluating the quality of the most reliable scientific knowledge available, instead of relying on (colleagues’ and writers’) “eminence.” In establishing evidence-based clinical practices, healthcare professionals rely primarily on three methods: randomized controlled trials (RCTs, comparing the effects of an intervention on a treatment group with the effects on a randomly selected control group), meta-analyses (systematic overviews developed by means of statistical methods), and clinical guidelines (checklists and criteria for transferring and applying knowledge in healthcare).291 After being very successfully implemented and becoming standard practice within healthcare and medicine, this form of evidence-basing spread first to social work and pedagogy before eventually becoming a generalized practice within virtually all fields of knowledge. The evidence movement must be understood in light of the need for evidence-based decision-making in a society based on science. But the evidence movement has a longer history—in fact, like so much else, it originated in wars: the Spanish Civil War and the Second World War. Doctors’ experiences of brutal and desperate situations during wartime raised questions about what fraction of existing medical practices were in
fact suited to their purpose and founded on a scientific basis. The notion of evidence came into being from a desire to break with blind authority and instead base treatment methods on reliable statistics. In a society now dominated by accounting and audits, the evidence movement has merged into and become an integral part of various kinds of quality-control systems and evaluation programmes intended to handle shortcomings in quality, identify development needs, and address local inconsistencies in practices. Thus, we now refer to “evidence-based” activities of every conceivable kind. What often happens, however, is that the word “evidence” is used as something detached from human actions and, moreover, is taken to be a statement of truth rather than a matter of managing uncertainty. This has been made possible by obscuring the technologies and procedures used for evidence-basing and presenting “pure” data in isolation. Ingemar Bohlin and Morten Sager, who have systematically highlighted the importance of focusing on practices, point out that the critical issue is ultimately how evidence-based practices work in practice. They do so by emphasizing two points of tension that recur throughout evidence-based medicine: first, a tension between strict methodological stringency and practical relevance when methods are used in particular contexts; and second, a tension between the growing formalization of assessment and decision-making procedures and the increasing difficulty for experts and professions to make independent evaluations.292 In this way we return to the debate about the new formalism and how, as formalism takes over, the internal tensions between theory and practice within the concept of evidence are making it increasingly difficult to develop professional expertise and apply clinical judgement in concrete situations.

But even if the concept of evidence is often used in a way that seems to refer to a kind of objectified and depersonalized truth, there is always a dimension of human behaviour present whenever evidence-basing is used or evidence-based practices need to be implemented. No matter how much knowledge is integrated into today’s systems and institutions, we can never get away from the fact that knowledge requires the involvement of humans. When Bohlin and Sager forcefully highlight “the diversity exhibited by evidence-based methods in their practical application,” they are also helping us to see the presence of human involvement in evidence-based knowledge.293 Behind all data lies some kind of action—there is no area of knowledge that is entirely free from human involvement. For this reason, it is important to defend a space where well-informed judgement can be exercised.
Science is much more about (different degrees of) evidence than (absolute) truths. In order for the concept of evidence to retain its dynamics and avoid being consumed by the desire to make overly strong claims to possess the definitive truth, it is important to use the concept with greater precision: never to forget that evidence must be “based” on something; to be aware of the significance of contextualization when exporting the concept of evidence to new areas of knowledge; to broaden the concept of evidence to include both quantitative and qualitative methods; and, not least, to balance formalized procedures against professional judgement and collegial practices.294 The scientific community is no place for those who believe that they have obtained definitive truth or who want to gain absolute knowledge—and it is definitely no place for those who resign themselves to relativism. At an undergraduate level of education, one might conceivably encounter seemingly absolute truths and definitively completed lines of enquiry, i.e. fixed knowledge that can be examined. At the higher levels of academia, however, one will find only seminars, doctoral defences, peer-review procedures, and other kinds of interpretation labs. These are the practices upon which science’s quest for truth ultimately rests. For this reason, academia must devote all its energies to the practice and cultivation of these instruments.
Science is ultimately defined by collegiality and peer-review practices

Science is about people, and successful science needs people with the ability to surpass themselves by means of extraordinary performance. We need inventions, both technological and conceptual, that can in turn enable new discoveries. There may be no knowledge without people, but it is also not enough merely to be “excellent” oneself: fundamentally, science is organized communicatively in the sense that it institutionalizes conflicts of interpretation. At a time when it can seem as if the only thing that matters is delivering results, we can easily become blind to the fact that what ultimately underpins science as a rational project is its procedures. Academic freedom is not about being able to do whatever one wants, nor is it an individualistic narcissism that ultimately legitimizes irresponsibility in the name of total freedom. Academic freedom is always exercised on the terms of collegiality: “Collegiality creates a space for individually oriented, independent researchers and teachers that also fosters a shared sense of responsibility for their work.”295 Academic freedom thus involves debating with colleagues, collaborating, and taking and giving criticism in
a public exchange on the basis of a shared understanding that the best argument takes priority. One cannot do science alone. Rather, it develops within the framework of collegial processes: reciprocally corrective practices in which people take and give critical opinions. The intersubjective testing that takes place in seminars, laboratories, and doctoral defences, and in the collegial examination involved in scientific publication, evaluations, and expert reports, differs from other knowledge practices in being part of a critically evaluative interaction in which knowledge claims are continually scrutinized and developed—in other words, collegiality. This collegiality requires decisions to be based on evidence and argument, and a work-based leadership that rotates among peers in a collegial grouping. In practice, however, collegiality is never entirely independent, particularly not at a time when the university is governed by state bureaucrats and increasingly modelled upon a corporate structure. Within the framework of today’s academic capitalism, the university, its faculties, departments, research groups, and individual researchers are expected to behave as if they were actors in a market and to react to signals in accordance with the logic of homo economicus. In their important book on collegiality, Kerstin Sahlin and Ulla Eriksson-Zetterquist refer to “islands of collegiality” in view of the fact that the reality is considerably more complicated: “In practice, the university is expected to function as a set of collegial groupings, bureaucracies, and companies, all at the same time.”296 This is a reality, but it is also a grave problem, because if collegiality in the form of management-integrated scrutiny is only permitted to be a chain of remote islands in a landscape that in all other regards is organized according to other logics, then not only the independence of the work but the very future of science will be jeopardized. It is abundantly clear that the mixed forms of management which characterize the university today risk creating a deficit of legitimacy, precisely because administrative decisions and delegated instructions lack the moral gravitas that is a hallmark of the collegial structures of seminars, laboratories, doctoral defences, ward rounds, and committee work, in which, ideally at least, the strongest arguments and meritocracy should prevail. Knowledge is not merely the foundation of collegiality—collegiality is also the foundation of knowledge. The collegial structure is not simply one competitive system among others; it is entirely sui generis. For this reason, collegial leadership must ensure that competition between people does not take precedence over competition between ideas in such a way that the laboratories of interpretation in science degenerate into arenas of
competition in which individual advancement is allowed to overshadow a collegial focus on developing knowledge. Collegiality also reminds us that knowledge practices and laboratories of interpretation, rather than theoretical apparatuses and formalized manuals, are what ultimately support the scientific project. Students typically first encounter the collegial management and peer review that (ideally) form the basis for admissions, examinations, promotions, and allocation of resources within the university when their course of study involves seminar participation, since it is there that they learn to present, argue, critically evaluate, and develop knowledge in a process of exchange with others. As Sahlin and Eriksson-Zetterquist underscore, collegiality is something that is exercised in practice and that can only be learned by doing. In other words, the seminar is the most important laboratory of interpretation in which collegiality is introduced, taught, disseminated, shaped, and refined: In seminars, research and research findings are evaluated, disseminated, and developed. The seminar is guided by an ideal of a free and frank exchange of opinions in which seminar participants respect and listen to different perspectives on the subject in hand and try to include as much expertise and as many assessments as possible on the basis of what is currently known. Knowledge claims are scrutinized and discussed. The seminar is based on the understanding that conflicts—over factual matters—are functional and prevent narrow-minded thinking but also that colleagues listen to each other. Ideally, it is “knowledge that speaks.” Ideally, the way problems are formulated and their conclusions should change, not because someone with greater authority demands it but because the scientific debate reveals shortcomings in the preliminary knowledge that is being presented.297
This same collegial structure can be recognized in the procedures for academic appointments, scientific publication, the allocation of research funding, evaluations, and, not least, doctoral degree committees. Every examination of a doctoral degree also involves the application of quality criteria and a negotiation in which those quality criteria are calibrated. This effectively gives the doctoral degree committee a double role, which further confirms the fundamental importance of this practice for what science is: in part, it is about passing (or failing or, in some situations, giving a more differentiated grade to) a new dissertation; and in part, it is about both passing on and testing the quality criteria for what can currently be approved as scientific research.298 The fundamental importance of collegiality for testing and developing scientific knowledge reminds us again that academia does not hold science and truth in trust like some timeless Theory or fixed standard but rather
comprises a wide range of knowledge practices, applied in laboratories of interpretation of various kinds, all of which are governed by a desire to establish the best possible knowledge. As Karl Jaspers writes: “The university is a community of scholars and students engaged in the task of seeking truth.”299 Science is a project defined by a quest for truth through the establishment of increasingly strong evidence, even as it must resist the temptation to proclaim any definitive truth. Instead of talking about post-truth, we might say that the scientific project, like the search that underpins science, is in a permanent state of pre-truth.

This also means that science is dependent upon practices that allow themselves to be formalized only to a limited degree. It is an enormous challenge for an audit society characterized by the new formalism to preserve and learn from the informal dimensions of various kinds of peer-review practices. At a time when public doctoral defences are less and less well attended, when collegial bodies are being abolished (or ceasing to be obligatory), and when seminars (which do not count as legally regulated work) are disappearing or being transformed into simple brainstorming sessions or debate clubs of uncertain scientific status, the stakes are not merely academic sociability but the very foundations of science itself. In the final instance, science is about people and collegiality. If universities are to continue to provide a home for scientific work, they need processes with moral character and scientific legitimacy. For this reason, seminars cannot be reduced to “islands of collegiality” in organizations that are otherwise dominated by entirely different principles. In a society based on science, there needs to be a close connection between attitudes, working methods, governance models, knowledge development, and the development and management of universities. If the principles and management of collegiality and peer-review processes are defined not by theories but by practices integrated into the larger enterprise, then we will need to develop a sensible approach and a reflective balance between bureaucratic rules, the delegation of functions by the organizational leadership, and the work-based, autonomous management that is the hallmark of a collegial system. It would be an illusion to think that collegiality can survive on its own and realistically hope to remain autonomous in any meaningful way in our current, externally financed, and socially integrated university system—and collegiality itself also needs corrective input from without if it is not to become corrupted into a professorial fiefdom, an academic old boy network, a coterie, or a social club. Nor is it feasible to combine these different logics unreflectively, since this would risk leading to new and deformed modes of management.300 In order to develop the collegial culture that underpins science as a project, we
need able education ministers and university leaders, but those of us involved in academia also need to devote far more time and attention to seminars, doctoral defences, and other collegial exchanges across the entire spectrum of interpretation labs.
The implied ethos of science

Things that we take for granted can easily escape our notice, precisely because they are so fundamental; yet we cannot live without them. The world we share relies upon such givens in the form of a chain of invisible networks of trust. We would not be able to live our daily lives without being able at all times to put our trust in other people. But confidence in ourselves and a capacity to build institutions that inspire trust are essential in order for such a fund of trust to grow and take shape among individuals. It is trust that makes it possible to share the world in the first place, and it is often so self-evident that we take it for granted—such that its real importance only becomes clear to us when it falters. Indeed, it is perhaps only when our reality is destabilized by revolutionary upheavals and there is a crisis—such as when science goes wrong, is perverted, or simply ceases to function in a way that we can trust—that we truly realize the crucial importance that things we take for granted have for us and for every kind of scientific activity.

One such shocking incident in science was the scandal that erupted in September 2016 around the surgeon and visiting professor Paolo Macchiarini, based at the Karolinska Institute and Karolinska University Hospital in Stockholm. The revelation that the country’s most highly ranked educational institution had subjected patients to mortal danger (more about which in a moment) by surgically implanting defective plastic tracheas into patients delivered a shock to Swedish society in general and to its research community in particular. Yet, as Thomas Karlsohn has pointed out, the criticism directed by the responsible minister and (eventually) others was limited to the legitimate demands that can be made of a civil servant from a narrowly administrative perspective. Macchiarini may have been convicted legally on these grounds—but little attention was given to whether his actions had been in any way reprehensible scientifically. This silence served to obscure the scientific context of the case and ignored the issue of whether Macchiarini’s actions could be considered as having breached the historically established norms, principles, and values that are associated with the academic culture of a university and the professional ethos of scientific work—regardless of which sector of society the university and science happen to be bracketed under. The behaviour of the university’s
leadership indicated that the very idea of the university and the foundations of science seemed to have been entirely absorbed into its role as a public authority. In light of what he calls the “absence of any reflection upon the basis of critique,” Karlsohn refers to “an intellectual deficit that defines large parts of the academic world today” and calls for a debate on the serious failings in the academic culture at KI and elsewhere, quite apart from the fact that there had been a breach of the rules of public administration.301

At a time when science is being fundamentally called into question, and when “post-truth” rhetoric, characterized by an insidious wave of relativism, is gaining ground, it is clear that the search for truth that underpins the scientific project cannot be regarded as something that “automatically” functions as an innocent and self-evident instance of “pure” objectivity. There can be no science without people, and knowledge can never be detached from ethical considerations. If we return to the idea that the shared reality we call the university is underpinned by things taken for granted, whose importance only becomes visible when their very existence is threatened, we can also see that this has in fact already occurred on several occasions in the past. At a time when almost all world-leading universities are American, few people are aware that science at the start of the twentieth century was entirely dominated by Germany. Yet two world wars and the experience of a totalitarian regime of the worst kind resulted in the country losing this distinction. When the philosopher and university vice-chancellor Karl Jaspers published his second book on the university (he would later publish a third), it was with the deepest sorrow that he reluctantly conceded that academics had not proven to be the bulwark against fascism that he and others had hoped. On the contrary, scientists seemed to be among the easiest to attract to its barbaric projects by means of money and positions. There is thus a deeply moralizing undertone to Jaspers’s reflections upon the university’s situation after the catastrophe. Indeed, when he wrote the following lines in 1945, he even questioned whether the university had a future:

The future of our universities—if they are to have a second chance at all—depends upon recapturing their original spirit. The degeneration went on for half a century until the university finally fell helplessly into the abyss. Its moral disintegration continued for another twelve years, and now we find ourselves at a historical moment when faculty members are forced to reflect on and weigh their own actions. When the ground is shaking under our feet, it is time for those of us who are involved in academic life to reflect on where we stand and what we want. If the university is to be re-established, the outcome of the process hinges upon the capacity to return to the greatest achievements in our intellectual history. Now, we ourselves are responsible for what will happen.302
If science always requires human action, then ethical questions are always present in its work. And if establishing a shared reality within science also demands trust, an ethical network, then we need to ask ourselves what ethics means in this context. Ethical questions in scientific institutions are usually reduced to research ethics regulations, which typically appear in a formalized form with a quasi-legal aspect and are dealt with by ethics committees. Such rules are necessary, and in recent years we have seen that they are urgently required. Yet the fact is that we are confronted with ethical challenges at a far earlier stage in science, and if these rules are to be seen as truly well-founded, legitimate, and functioning, then our perspective will need to be extended beyond the simple formulaic statement "this is ethical" (which, at worst, also means that one need no longer pay any consideration to moral dilemmas). At the same time, we should be careful not to moralize science in an exaggerated fashion by drowning scientific work in ethical theories or moral imperatives. That would merely add fuel to the unfortunate tendency towards mutual self-reinforcement between a narrow-minded theoretical understanding of science and the new formalism that characterizes academic management systems. Ethics should not be treated as an additional supplement, a moralism that, as it were, can be tagged on or associated with constrictive rules and that, at worst, can become merely an adjunct to a kind of strategic thinking that effectively seeks to avoid taking responsibility, to protect itself, and to avoid being caught in the media spotlight. But if science is a praxis, then ethical reflection must also proceed from the concrete. We therefore need to start from the ethics implied in good scientific praxis, even if it is not always stated explicitly. It is a matter of rediscovering ethics as a precondition for peer-review practices to work successfully and for the quest for truth that underpins science as a knowledge project. The implied ethos of science is, in other words, an ethics that proceeds from something more fundamental while also being less moralizing, in that its starting point is the fact that science requires people; it therefore includes an entire cluster of questions that are associated with human behaviour in a scientific context. We have already returned on several occasions, including in relation to Daston and Galison, to the fact that subjectivity, no less than objectivity, has a history of its own. Despite repeated attempts to refine one or the other, they are essentially intertwined: "objectivity and subjectivity emerge in tandem, and the explanation is the demarcation line between them."303 Daston and Galison proceed to show how the development of epistemology and ethics are closely interrelated.
Interest in quantification and efforts to establish objectivity are as much about a way of being as they are about a way of knowing, which means that they can be described as an expression of what might be called epistemic virtues. As has already been noted several times, we need to acknowledge the terms of both sides if we want to understand how science works. The search for a purely mechanical objectivity can also be thought of as an expression of scientific virtues, which in turn implies the existence of moral virtues. Indeed, it is possible to discern a moralizing tone in the calls by enthusiasts of objectivity for severe self-restraint. Daston and Galison argue that this approach can be described as the expression of a kind of "self-denying ethics," in that it seeks to avoid all forms of human intervention; that is to say, it is a view of the self as expressing moral virtues in the form of its interest in knowledge.304 Objectivity is never only a matter of "pure" science, but rather of practising a particular kind of science—it is about "rallying a self" that has a wholly extraordinary character. Epistemic virtues are an expression of norms that have been internalized—and have nothing to do with some kind of un-situated neutrality. These technologies of the self, as it were, are the basis for the scientific practices that strive for objectivity in the form of a series of values and practices obtained by means of hard work and that constitute a way of living the scientific life. To quote Daston and Galison again:

To paraphrase Aristotle on ethics, one becomes objective by performing objective acts. Instead of a pre-existing ideal being applied to the workaday world, it is the other way around: the ideal and ethos are gradually built up and bodied out by thousands of concrete actions, as a mosaic takes shape from thousands of tiny fragments of colored glass. To study objectivity in shirt-sleeves is to watch objectivity in the making.305
In other words, Daston and Galison force us to confront the fact that science requires people and qualified human action—in a word, a scientific self. In place of futile dreams about establishing some mechanical objectivity free of human involvement, there emerged a realization of the importance of having trained judgement: the mechanical attitude needs to be supplemented by an interpretative capacity. Here, according to Daston and Galison, it is possible to see a concrete historical connection to the way in which scientific practices developed more and more avenues for active participation in laboratories, botanical gardens, observatories, and seminars, settings where one is trained to see, judge, evaluate, and argue scientifically.306 Ethics precedes morality—but not according to the simplistic view that theory comes first and is the causal basis for praxis.
Ethics as an aim (for the good life) is, in fact, always more fundamental than norms, moral rules, and injunctions—just as the desire to live the good life (Aristotle's "ethics") is a fundamental precondition without which it would be meaningless to embark upon an evaluation of the various forms of behaviour using norms and principles (Kant's "morality"). Universal and abstract principles are necessary but neither sufficient nor fundamental. For this reason, discussion of ethics in scientific work must start with the issue of implied ethics and focus on the always present, but often unstated, preconditions for science as praxis. If ethics is an aim for the good life (or "the flourishing life," as more recent translations render it), one could by extension specify the content of this search in the scientific realm in terms of a quest for truth.307 The necessity of such a quest for truth becomes apparent, if nowhere else, when we see it being negated, something that can happen in two different ways: either when one resignedly abandons this quest or when one announces that one possesses the definitive Truth. But in the same way as Aristotle started from the precept that the good life is not something that one can live alone, but rather something one lives in the company of friends, so too must the scientific search for truth be realized in a larger communicative community. It is a matter of mutual collegial evaluation by peers, which has both a dialogical and an institutional dimension. We have already noted that "academic freedom" is not about a freedom to do whatever one wants, as some kind of individual project of liberation or journey into the unknown. Academic freedom is always realized within a collegial context, which also implies things like a willingness to present one's findings in some kind of public setting, something that in turn allows an independent examination in which one also makes oneself vulnerable to criticism. Inscribing this understanding of academic freedom into a communication structure of peers indicates that there is a close connection between freedom and responsibility, entirely in accordance with an understanding of autonomy as self-legislation. In other words, a scientist should be understood as someone who embodies the criteria of both sides—the mutual connectedness of freedom and responsibility, and the critic's need for self-criticism. That said, it is time to make some further qualifications. The ethics that we are elaborating upon here in terms of an aim for truth is an ethics that, regardless of the degree to which it is articulated, is implied in scientific praxis as such. Ethics therefore cannot be reduced to some fixed or given "set of values." Nor is ethics another word for consensus. On the contrary, we always need multiple ethical models in order to deal with questions about what responsibility for this search for truth means in a multidimensional reality.
As with all other fields, indeed, perhaps more so, ethics is a field characterized by conflicts of interpretation. The utility-calculating ethics that often finds expression in a utilitarian attitude or an instrumental approach offers a perspective that focuses on how we can best conserve our finite resources. Yet this focus on utility can also drift into a maximization of utility, which can in turn give rise to moral dilemmas and ethical problems. A functioning culture of collegiality and peer evaluation also presupposes a communicative ethics that takes the form of a community of argumentation in which the primacy of reason and argument is respected. In accordance with the phenomenological principle, this further implies a capacity for respect and responsibility for the other. A deontological ethics in the tradition of Kant is also needed, using the universalization of norms and rules as a critical test of the consistency of actions in time and space. Extending this precept, John Rawls has developed an ethics of justice that focuses on how to articulate an imperative of just distribution and sharing of resources—beyond the limits of self-interest.308 Ultimately, this means that it is also possible to extend collegial testing into general, formalized systems. These different ethical perspectives in turn need to be balanced by the respect for the other according to phenomenological ethics and the radical ethical demand that emerges from the other's concrete needs. Fundamentally, however, there needs to be a search by a person with agency who is capable of developing knowledge and willing to subject their ideas to scrutiny within a wider community of communication. Taken together, this means that the teleological quest for truth is embedded in a multidimensional ethical structure whose purpose, so far as possible, is to guarantee that collegial assessment remains healthy and vital. In this continual alternation between different ethical perspectives, the utilitarian aspect of value often recurs when we look more closely at the question of what consequences science is understood as having. In this area, too, it is necessary to move beyond both regulation, which would instrumentalize scientific work, and total freedom, which would leave the question entirely open. Let me offer some examples. On courses in business economics, it is customary to introduce ethical questions as if they were a matter of a simple supplement, but on closer consideration it becomes clear that there are in fact no courses on how to drive a company into bankruptcy or harass its staff as effectively as possible. It has been quipped that in this regard departments of economics can be considered some of the most moralistic environments in academia.309 It is similarly possible to discern a clear imperative for life in both nursing and medicine,
in that it would be unthinkable to allow scientific activity and research in these areas to be driven by questions about how to shorten people's lives as quickly and drastically as possible. But an ethics of utility, if isolated and considered separately, also gives rise to moral dilemmas and ethical problems whose resolution calls for the inclusion of other ethical models and traditions. In order to understand, develop, and refine science's implied ethos, we therefore need a multidimensional approach that allows for correction and mutual reinforcement by a utilitarian ethics focused on "good effects" (Bentham), a deontological ethics that proceeds from "good principles" (Kant), a discourse ethics that starts from "good conversations" (Habermas), and an ethics of virtue based on "good people" (MacIntyre).310 This spectrum of perspectives forms conflicts of interpretation in which no one person can claim to be presenting a definitive solution; instead, different interpretations need to be considered together in a laboratory of interpretation. Because science is a system of competition which stages its own activities in the form of conflicts, moral dilemmas and ethical problems can easily arise within scientific work as such. This requires regulations—but also a fundamental realization that these knowledge processes ultimately never fully allow themselves to be regulated. Beyond the one-sided focus on research fraud with which research ethics regulations concern themselves, major ethical problems currently await our attention, including the tension between, on the one hand, the one-dimensional assumptions about the researcher as homo economicus that characterize the management systems of a state based on competition and, on the other, the collegiality whose multidimensional communicative ethics is underpinned by a scientific search for truth. At a time when scientific practices are being suffocated by a new formalism, it may be asked whether science has any prospect of surviving and developing if not even a vestige of a quest for truth remains. However, this question cannot be resolved at a purely theoretical level. To address this problem, we must return to scientific practice. Indeed, it would be devastating—and crippling—to the spirit of discovery that underpins science as a project if science were to be moralized. In the final instance, there are no rules or norms that can guarantee the ethical aim; rather, we are dependent upon the practical wisdom that is underpinned by situation-specific moral judgement under conditions of great uncertainty. However, in order for the conflicts that are a hallmark of science's interpretative laboratories to be productive, we also need a functioning dialectic between argumentation and conviction.311 For this reason, any reflection on science and ethics will find itself both starting from and ending up with the notion of science's implied ethics.
What does it mean that science has a future?
Science is a project, which means that science has a past—but also a future! We have already devoted considerable attention to the challenges associated with the fact that science has a past. On the face of it, the statement that science has a future might seem like merely a naive and enthusiastic hope that science will be able to continue its triumphal progress unproblematically towards the shining horizon. I am sustained by a conviction that science does indeed have a future, partly because of its vigorous development potential and partly because we so badly need science in order to confront the many social challenges currently facing us. But the fact that science has a future is accompanied by serious complications and major challenges. One of these, which the history of science has already highlighted, relates to its variability, but this time from a future perspective. There is a saying that those who wed themselves to the science of their day will soon be either widowed or forced to get a divorce. In other words, the knowledge that science gives us is always provisional, never final. The knowledge that results from scientific development must be taken with the utmost seriousness—and yet we must also remember that in the future science may well present even better knowledge. Science is a "provisionally rational project," as Mats Alvesson and Kaj Sköldberg have put it.312 Scientific progress is always ongoing. Science restlessly searches for new knowledge, its curiosity always leading it to the boundary between knowing and not-knowing, subjecting itself to scrutiny and criticism, continually aspiring to better knowledge and stronger evidence within the framework of a quest for truth that will never be completed. Yet this also has a dark side: for some, the realization that science has no end can seem comfortless. A recurrent theme in this book has been that one must know something about history in order to understand what science is today and what it might become. Science has a history and is shaped by powerful traditions, but it is also a thoroughly changeable historical phenomenon, a restless project with rigorous ambitions that is continually in motion. This awareness of the importance of time also opens up a perspective on the future; science does not only have a past. Insofar as making discoveries is part of science's soul, it is also dependent on a necessary horizon of expectations. In a word, science belongs to the future. If the history of science tends to acquire a problematizing function, shaking us out of ourselves, its forward-looking perspective opens onto still more challenging questions. Indeed, in the final instance we cannot avoid asking questions such as: how can we deal with the fact that science has not only a past but also a future?
Science is a project, an adventure, which is today so successful that it is likely to destabilize our societies and disorient us, instead of offering a stable foundation of security and stability in an otherwise turbulent and changeable world. But science, with its restless desire for discovery, looks ahead. The future horizon is constitutive of modern science, whose own emergence is an integrated part of the modern faith in progress. Auguste Comte's narrative of science is a story of progress, and the modern science that emerged and acquired its institutional forms in the nineteenth century was part of an epoch in which virtually everyone was obsessed with the future, progress, and development. The combination of the idea of development and some form of (often tripartite) doctrine of stages was to define not only an emergent positivism but also Marxism, psychoanalysis, pedagogy, economics, theology, and so on. Science moves towards the future, and regardless of whether this development happens cumulatively (logical positivism), gradually (critical rationalism), or in fits and starts (by means of paradigm shifts), or whether one chooses to take a critical view of science by expressing ambivalence or even skepticism about its faith in progress, science's primary horizon nevertheless remains the future. But this means that science, in order to keep a sense of proportion, must be regularly reminded of its history and of the fact that the future, too, has a history. Science needs both roots and wings. When Wilhelm von Humboldt, during his sixteen months as Minister of Education, drew up the guidelines for the new university in Berlin in 1810, he underscored exactly how important it was to approach science as "a problem that has still not been fully resolved and therefore remain constantly engaged in research." He also spoke of science as "something that has not been and can never be entirely found, and to constantly pursue it as such."313 It is a sophisticated and learned way of specifying the deeper meaning of the fact that science has not only a history but also a future. The insight that science has a future therefore presents us with the challenge of dealing with the fact that when future scientists look back on our time, they will very likely smile indulgently, as we ourselves have a habit of doing when we look back at the history of science. How can those of us involved in science make use of this insight—while also keeping our spirits up for the task in hand? How should we hold onto the impatience that restlessly drives the scientific project forward—while also resisting the despair that threatens when we are brutally confronted with the provisional nature of all our knowledge?
Scientific festivities are indispensable!
Those who enter the academic world can easily become confused by all of its antiquated concepts, titles, rituals, ceremonies, and festivals, which collectively attest to the fact that the university is a very old institution that has to a large extent been shaped by an ecclesiastical organizational culture and that has its roots in the intellectual world of the Middle Ages. Today, even younger universities try to recreate a kind of "enchanted" world by adopting traditions and establishing their own ceremonies and academic festivities with the assistance of terminology that invokes the university's long history. Nevertheless, in Sweden and Finland these traditions are particularly prominent at universities with very long histories to draw on, such as the universities at Uppsala, Lund, Åbo/Turku, and Helsinki. At the same time, it should be remembered that these rituals are not always quite as old as people like to imagine (even if they do have a long prehistory). In fact, as often as not, and like much of the modern university in general, they are inventions of the eighteenth and nineteenth centuries. At a time of exaggerated rationalism, when the seductive logic of the secularization narrative was dominant, there was little interest in traditions and ceremonies. Over time, however, universities have increasingly entered a kind of post-secular state in which re-enchantment has come to exert a greater appeal. In an era when popular culture is characterized by sweeping mythological dramas such as The Lord of the Rings, and in which the ideal for educational establishments is increasingly being shaped by the ritualization that characterizes Hogwarts in the Harry Potter books and films, it is perhaps unsurprising that the university, by virtue of being one of our society's oldest institutions, has increasingly rediscovered/reinvented its traditions and ceremonies. As we have noted several times, the university could be regarded as a kind of secularized religious institution, which emerged in the shadow of the cathedral. But it is also possible to understand this neo-traditionalism as the expression of a need among academic institutions to assert their own identity in a knowledge society in which virtually everyone is involved with knowledge and where onlookers are increasingly prompted to wonder what is so special about academia and the knowledge overseen by universities. Under these circumstances, the enchantment conveyed by academic traditions may well help to preserve science as a world that can be shared with others. Festivities have traditionally occupied a special place in academic life, something that holds for official academic ceremonies, doctoral degree celebrations, and informal post-seminar gatherings as well as for the various associations and societies that are a part of undergraduate life.
In most of these settings, festivities often have a somewhat ritualized format, with the result that their conventions can seem old-fashioned. Why are there so many celebrations and festivities in the scientific world? What are the origins of this special tradition of academic celebration, which includes highly formalized rituals as well as lively and informal social gatherings? How ought we to understand the strong ties between science and festivities? It would be reductive simply to see festivities as a break from work or something entirely external, a superficial (indeed, perhaps vulgar) supplement to "real" academic work. Festivities are genuinely important for science; indeed, they are a scientific necessity. Without them, the scientific project would have difficulty continuing. In order to show why festivities are an indispensable condition for academic work, I want to sketch some of their most important functions—a provisional outline of a "hermeneutics of festivity." Everyone who has experience of studying, research, or teaching knows that it can be a relatively solitary occupation for long stretches of time. Lead times are sometimes unbearably long, which means that months or years can pass between opportunities for individuals to get any kind of feedback, see any real results, or receive any kind of acknowledgment of their work. The nature of intellectual work is often diffuse. For this reason, it can be hard to see the point of the work one puts in, and it is easy to feel as though one is being forced to postpone or absent oneself from much of what makes life enjoyable. Matters are not made easier by the fact that one is sometimes forced to accept that one has made a long detour involving pointless labour that ultimately turned out to have no immediate value. To a large degree, as Wittgenstein observed, research is about building a kind of ladder, which one throws away after reaching the next level. The catchy precept kill your darlings can be hard to live by for anyone in academia who knows just how much research time went into something that is now being cast aside. My belief is that academic ceremonies and academic festivities are utterly essential in order to balance these dark sides of academic life. Academic communities are a necessary way of combating the solitude of academic work. Festivities are of considerable and indispensable scientific importance. Discussing science's future is something that can fill researchers and teachers with utopian energy. And yet, as already noted, there is also a darker side. The realization that the adventure of knowledge is endless and infinite can become a curse. It never ends! How can anyone rest or find satisfaction when confronted by this ceaseless impatience? It is at precisely this point that festivities show their positive value by serving as a welcome interruption and signalling a preliminary conclusion to the work.
Without this kind of break in the work—when one typically also celebrates the completion of a piece of work (although we know that the ongoing development of knowledge means that nothing is ever really concluded)—it would be hard to keep one's spirits up and find the energy to participate in science's search for truth. The university is also a deeply competitive system in which competition is omnipresent, not only with regard to ideas but also to people. This fact can make scientific institutions rather complicated places to be in—indeed, they can become quite unbearable and can seem almost unacceptable as workplaces. For example, the psychosocial challenges associated with taking a doctoral degree are often underestimated, but it is possible to gain an insight into them from the portraits of the joys and woes of doctoral life in fiction.314 At a time when politicians have chosen to use systems of management and accounting that "turn up the competition tap" with a view to maximizing the return on the public funding that has been pumped into the enormous system of science created in our era, academic work runs the risk of becoming almost unbearably sterile and narrow-minded. Studies of the working conditions at universities generally reveal an extreme and unusual kind of ambivalence among their employees—enthusiasm and unhappiness, joy and worry, euphoria and depression—to a degree found in few other workplaces. If we also factor in the general inability of academic work to convert individual performance into shared successes, we begin to understand why academic workplaces are at risk of being turned into hothouses in which narcissistic arrogance and envy reinforce each other in a dangerous spiral. In such milieux, where decades of injustices pile up in the organizational memory, people easily break down. Similarly devastating is the fact that a narrow-minded culture of competition hardly makes an ideal environment in which to foster talent. Festivities are an utterly essential component of any functioning academic environment: they ease the pressure and help us to put up with each other. Festivities are also a proven way of gaining some distance from oneself and one's work and of deriving pleasure from other people's successes. For this reason, festivities cannot be reduced to an extra "supplement" or a pleasant diversion alongside scientific work but must be seen as an integrated part of scientific workplaces and as an entirely necessary component for academia to be able to function as a bearable competitive system. In other words, festivities need to be organized on a regular basis if the dark sides of academia's culture of competition are to be balanced and if the conditions for the successful development of knowledge are to be safeguarded.
At the same time, the traditions and rituals of the post-seminar or the more intimate dinner, no less than major academic ceremonies, also serve as important reminders that formalization has its limits, and that science needs informal exchanges, a kind of "tacit" dimension of science, in order to work. In an era dominated by the new formalism that is infiltrating academia from every direction, it is important to emphasize that there are limits to what can be formalized into rules and standards, and that the logic of discovery/invention that characterizes successful science requires there to be room in time and space for dynamism, creativity, and experimentation. In order to prevail and develop, academia must also be able to accommodate the kinds of complicated personalities who can never be fully incorporated into administrative systems. The fact that academia offers an attractive array of festive opportunities, together with academic ceremonies whose splendour is matched by few other institutions, should serve as a corrective to excessive formalization. Recurrent festivities serve as a reminder that not everything can be regulated. In the final instance, academia needs to be held together by a shared spirit. Science is a kind of covenant that can ultimately only be fully realized in symbolic—and gastronomic—terms. Dinner serves as a social glue that establishes, expresses, and refines the social agreements that are necessary for us to be able to interact with each other in complex networks. Dinners are an ancient token of belonging and mutual trust and have historically been regarded as one of the key markers of identity within an academic community. In college-based universities such as Oxford and Cambridge, dinners are the very heart of academic life. But festivities in themselves also remind us that knowledge formation is truly a wonder and that science's search for truth is not reducible to strategic plans or the solemn pronouncement of infallible truths. It is no coincidence that Gadamer, alongside games, play, and dance, uses the party as a metaphor for clarifying both the experiences specific to art and the more general experience of education.315 The journey of education is about the journey of knowledge that each of us makes when we not only observe, register, and manipulate our object of study but are also gripped by our subject, like a reader absorbed in a book who is transported to new worlds—as if the book were alive, even though in reality it is just inanimate matter. Books, as we have noted, do not contain knowledge. No book "speaks" for itself—it needs human intervention in the form of the act of reading for the construction of meaning to occur. However, the miracle of education—Bildung—also includes an experience of the activity in which we ourselves are participating when, for example, reading something in a text sets us on a course in the opposite direction, a motion that proceeds from the text to ourselves, such that the text or the material we are working with can suddenly start to "speak" to us as if in a dialogue.
This is the background that we need in order to understand Gadamer's four metaphors for the hermeneutic experience conveyed by knowledge: play, game, dance—and party. In playing and gaming, a curious logic emerges that gives rise to the experience of not only playing the game but also of being played by the game. Similarly, in play one's activity sets in motion a "playful" logic in the form of a movement that comes back to ourselves—to play is really a matter of being-played. In the same way, it is possible to be absorbed into the whirl of the dance and feel as though one is being danced. Parties, too, do not exist in and of themselves but only come into being when a group of people gather and celebrate. A party does not work, is quite simply not a party, if one does not also invest oneself and enter into the spirit of the party—and ultimately, the art of the party is about "being-partied." The educational component thus reminds us that when one engages in science one cannot be merely an observer but must also invest a part of oneself in the adventure of knowledge. Only in this way can one acquire learning, and only in this way can science as a project dedicated to discovery be carried on. Festivities allow us to let our hair down ever so slightly, so that we can catch a glimpse of ourselves from a new vantage point. This experience not only provides a valuable moment of self-distance; it also serves as a reminder that science as a competitive system is primarily neither about people competing for position nor about proclaiming winners, but about interpretations and knowledge competing. For science's interpretative laboratories to be able to work, its participants need to be able to step back and let go of their own positions. Such stepping outside of oneself is a precondition for being able to listen to others and join them in a shared search for truth—and it is this experience that festivities make possible. As a competition system, and in accordance with Karl Jaspers's communicative concept of "loving struggle," academia requires us to hand over our strongest intellectual weapons and arguments to our opponents so that we can fight together against others but also against ourselves, within the framework of a shared search for truth. In other words, the dual experience of the festivity—being both its participant and its object—is a reminder of the fundamental conditions of scientific work. Festivities are of crucial significance for science and should be taken far more seriously as constitutive elements of academic life. Those who plan to lead academic activities and who wish to ensure that the scientific project is carried on must therefore exert themselves to the best of their abilities to safeguard the traditions of academic festivities.
Ultimately, festivities are the most powerful force available to those who want to take a stand against the instrumentalization that continually threatens the vitality of any knowledge culture. They help to sustain morale during our search for truth, and they remind us of the university's reliance upon the miracle of education. In a word, it is time for every level of academia and society to treat festivities with a truly scientific seriousness, for without them, science will not prevail.
ACKNOWLEDGEMENT, 2019 (SWEDISH EDITION)
Presenting science as a project in this book has posed significant challenges for me as a writer. Admittedly, it has been helpful that, as a teacher and researcher, I have traversed academia for many years, crossing disciplinary boundaries and venturing into various faculties and fields of knowledge. Even so, writing a book about science is still not something one can do alone. The subject of philosophy of science is vast and seemingly inexhaustible. The writing process itself can also teach us something about the conditions of knowledge development. Those who write embark on an adventure that alternates between, on the one hand, an initial vertigo in the face of the infinite possibilities of the blank page and, on the other, mounting claustrophobia as the writing process inevitably results in shrinking horizons, as freedom encounters its conditions in the form of selection, something that is as necessary as it is painful. Over time, the original assignment from Ola Håkansson at Studentlitteratur—to write a book on philosophy of science that could also function as an academic textbook (although he probably envisaged these priorities in reverse)—has increasingly opened up perspectives towards multiple infinities: backward in time (where does the story of human knowledge begin?), in relation to the horizons of the present (how can one do justice to the complex web of processes that science constitutes in today's society?), and forward in time (what can one say about a science that is undergoing such rapid development that future changes are almost unpredictable?). The project has also taken me on a research journey that has required me to reassess some of my positions. Time and again, I have been fascinated by connections that I did not previously suspect—and challenged by the lack of connections where I thought they existed. The overwhelming and almost insurmountable amount of pre-existing literature in the field of philosophy of science, which is constantly introducing new and exciting ideas and books that one simply must delve into, can make an author feel as though he is being offered a glass of water while he is drowning . . . However, I have tried to find a language that invites the reader to join me on the journey of knowledge,
to which end I have adopted a "minimalist" approach to referencing that does not unnecessarily burden the presentation or make it inaccessible. For the same reason, I have largely used my own translations of quotations from other languages unless prior translations exist. My research contribution has focused more on identifying, deconstructing, and correcting established scientific narratives and articulating new narratives about science than on presenting detailed analyses. The pursuit of knowledge has been guided more by perspectives in the philosophy of science than by groundbreaking archival work. One of this book's guiding precepts is that science is not a private activity conducted in splendid isolation. Scientific endeavours have a distinctly social and communicative nature; they are always embedded in deep traditions and complex collegial processes. This means that one must repeatedly stand on the shoulders of giants and interact with knowledgeable colleagues through seminars, publications, or individual meetings. I anticipate that this book, too, will be scrutinized and will generate both constructive and critical feedback as it embarks on its own journey among a new readership in the coming years. This will also make it possible to correct, supplement, and refine the presentation in any future editions. In this sense, writing a book resembles the rigorous and provisional logic of science itself. Although scientific work can be a somewhat solitary endeavour for extended periods, one never ultimately walks alone with one's thoughts. I have been influenced by, and have interacted with, a multitude of published works, colleagues, and friends during the process of writing this book. In some cases, the exchange has been so intensive that, in retrospect, I am not always able to distinguish the origins of theories and ideas. Nevertheless, I wish to mention at least some of those with whom I have spoken, who have been present during seminar discussions of the text, or who have been directly involved in the writing process and served as readers at various stages of the work. A warm thank you to Johan Arnt Myrstad, Margareta Norell Bergendahl, Petra Carlsson Redell, Bino Catasús, Caroline Edlund, Mats Edenius, Inger Ekman, Anna Forssell, Eskil Franck, Arne Fritzon, Pierre Guillet de Monthoux, Martin Gustafsson, Johanna Gustafsson Lundberg, Mats Hårsmar, Thomas Karlsohn, David Karlsson, Jonny Karlsson, Sabina Koij, Bo Larsson, Mikael Lindfeldt, Johanna Lindström, Alexander Lundberg, James McGuirk, Börge Ring, Ulrich Schmiedel, Johan von Schreeb, Annika Stålfors, Johan Storgård, Bernice Sundqvist, Peter Svensson, Mark C. Taylor, Jan Kenneth Weckman, Holger Weiss, Ulrika Wolf-Knuts—and many others.
Last but not least, a heartfelt thanks to my publisher, Ola Håkansson, who inspired me to write this book and who has had to wait far too long for it to be completed. In my defence, I can only say that science is a project that can never truly be finished. This book is the fruit of research periods during which I have been fortunate enough to spend time in the intellectually stimulating environments of Oxford, Visby, Princeton, Menton, and, above all, Stockholm and Åbo.

On and around my birthday, May 2019
Bengt Kristensson Uggla
ACKNOWLEDGEMENT, 2023 (ENGLISH EDITION)
This book originally appeared in Swedish under the title En strävan efter sanning: Vetenskapens teori och praktik [Striving for Truth: Science in Theory and Practice] in 2019. Thanks to financial support from Föreningen Konstsamfundet (Helsingfors, Finland), Birgit och Sven Håkan Ohlssons stiftelse (Lund, Sweden), and Åbo Akademi University (Turku, Finland), it has been translated into English with only minor corrections. I am grateful to my translator, Stephen Donovan, for his excellent work. Several colleagues and friends, in addition to those mentioned in the Acknowledgement of the original edition, have generously offered their assistance during this process. My old friend Professor Ruth Bereson came from Australia and gave invaluable help with proof-reading and much more. Professor Pierre Guillet de Monthoux offered some brilliant ideas about the title of the book. Visual artist Jan Kenneth Weckman again kindly gave permission to use a reproduction of his artwork Points of View (from our joint project "The Art of Being University") for the book's cover. Doctoral student Nichan Pispanen investigated some difficulties associated with the translation of Karl Jaspers. And Inger Ekman, Henrika Franck, Sven-Eric Liedman, Mikael Lindfelt, Gunilla Ohlsson, Lars Strannegård, Karl Svedberg, Fredrik Tell, and many more have contributed in various ways. In order to realize my intention of presenting an original piece of research while also allowing the text to serve as an academic textbook, I have limited myself to a minimalistic approach with regard to footnotes and references. The well-informed reader will nevertheless easily identify the many implied references to thinkers, texts, discourses, and discussions associated with the extensive literature on the philosophy of science. The main inspiration behind the overall configuration of my argument is the work of Paul Ricoeur (1913–2005), one of the most polyphonic voices of the twentieth century, whose philosophy has inspired my own investigations and elaborations for four decades and to whose memory I dedicate this book.
No other single philosopher seems more important to listen to in an age of hermeneutics and multiple crises in which we are experiencing a profound destabilization of our common cognitive infrastructure. My own conceptualization of "the interpretation lab" and "laboratories of interpretation" should be considered a prolongation of Ricoeur's recognition of hermeneutics as a conflict of interpretations. This book has a Nordic framework, even though the perspective adopted when dealing with science must of course be universal, and the tension between the particular and the universal is present in all scientific practice. I hope that this contextualization will not function as an obstacle for the reader, but will instead add even more aspects to the complex conditions of interpretation labs for science as a quest for truth.

Åbo-Stockholm-Lysekil, June 2023
Bengt Kristensson Uggla
NOTES
1 Helga Nowotny et al., Re-Thinking Science: Knowledge and the Public in an Age of Uncertainty (Cambridge: Polity Press, 2001).
2 Nowotny et al., Re-Thinking Science, 87.
3 Steven Shapin, The Scientific Revolution (Chicago: The University of Chicago Press, 1996), 165.
4 Sven-Eric Liedman, Ett oändligt äventyr: Om människans kunskaper (Stockholm: Bonniers, 2001), 60.
5 Yuval Noah Harari, Sapiens: A Brief History of Humankind (London: Penguin, 2015), 281.
6 Hans-Georg Gadamer, Truth and Method. Translated by Joel Weinsheimer and Donald G. Marshall (London: Bloomsbury, 1960/2004), 371.
7 Michael Smedbäck, "Universum är också mycket mindre än vi trott." Under strecket, Svenska Dagbladet, June 6, 2019.
8 Gustaf Fröding, "Vad är sanning?" In Ungdomsdikter: Gitarr och dragharmonika (Stockholm: Dejavu, 1891/2012).
9 See Part III.
10 Carl-Göran Heidegren, Positivismstrider (Göteborg: Daidalos, 2016), 143.
11 Jürgen Habermas, Between Naturalism and Religion: Philosophical Essays. Translated by Ciaran Cronin (Cambridge: Polity, 2005/2008), 146.
12 Arsenij Gulyga, Immanuel Kant. Translated by Håkan Edgren (Göteborg: Daidalos, 1977/1988), 37. During his career, Kant gave 268 lectures: 54 on logic, 49 on metaphysics, 46 on physical geography, 28 on ethics, 24 on anthropology, 20 on theoretical physics, 16 on mathematics, twelve on law, eleven on the encyclopedia of philosophical sciences, four on pedagogy, one on mineralogy, and one on theology (ibid., 262).
13 Amartya Sen discusses several reasons why Michael Polanyi was treated as an outsider, albeit a respected one, and concludes that he himself was probably partly responsible in that he chose to be outside. However, Sen adds, a contributing factor may also have been his "rapid-fire sequences of insights—often deep insights—without much pause for examining alternative interpretations and possible counterarguments." Amartya Sen, "Foreword." In Michael Polanyi, The Tacit Dimension (Chicago: The University of Chicago Press, 1966/2009), xv.
14 Kaj Sköldberg and Miriam Salzer-Mörling, Över tidens gränser: Visioner och fragment i det akademiska livet (Stockholm: Carlssons, 2002), 72.
15 Clark Kerr, The Uses of the University (Cambridge, MA: Harvard University Press, 2001).
16 Bengt Kristensson Uggla, Slaget om verkligheten: Filosofi, omvärldsanalys, tolkning (Stockholm/Stehag: Brutus Östlings Bokförlag Symposion, 2002/2012), 79.
17 Liedman, Ett oändligt äventyr.
18 I have previously discussed the figure of Columbus in my books Slaget om verkligheten and Katedralens hemlighet: Sekularisering och religiös övertygelse (Skellefteå: Artos, 2015), which also contain references to the ever-growing body of secondary literature on Columbus. In the present book I have chosen to highlight slightly different aspects of the Columbus narrative, primarily in relation to David Wootton, The Invention of Science: A New History of the Scientific Revolution (Allen Lane, 2015).
19 I discuss this dark side to the Columbus story in greater detail and from the perspective of theories of civilization in Kristensson Uggla, Slaget om verkligheten, 25–37.
20 Darwin's autobiographical notes, quoted in Staffan Ulfstrand, Darwins idé: Den bästa idé någon någonsin haft och hur den fungerar idag (Stockholm/Stehag: Brutus Östlings Bokförlag Symposion, 2008), 24. Since Ulfstrand does not provide a source, the quoted statement is a retranslation, from Swedish to English.
21 Ulfstrand, Darwins idé, 15.
22 Darwin, quoted in Ulfstrand, Darwins idé, 65. Translation from Swedish.
23 Thomas Robert Malthus, An Essay on the Principle of Population. The Works of Thomas Robert Malthus. Volume Two: An Essay on the Principle of Population. Sixth edition (1826) with variant readings from the second edition (1803). Part I. Edited by E.A. Wrigley and David Souden (London: William Pickering, 1803/1986), 308 [520/2].
24 Tim Lewens, The Meaning of Science (Penguin, Pelican Books, 2015), 156; cf. 154-158 on "Darwin's capitalism."
25 Andrea Wulf, The Invention of Nature: The Adventures of Alexander von Humboldt, the Lost Hero of Science (London: John Murray Publishers, 2015), 128.
26 Quoted in Wulf, The Invention of Nature, 234.
27 Sandra Harding, Is Science Multi-Cultural? Postcolonialisms, Feminisms, and Epistemologies (Bloomington, IN: Indiana University Press, 1998), 23.
28 Harding, Is Science Multi-Cultural?, 39–54.
29 Wulf, The Invention of Nature, 226.
30 Blaise Pascal, Pensées. Translated by A.J. Krailsheimer (Baltimore: Penguin, 1966), 48. Cf. Paul Ricoeur, Memory, History, Forgetting. Translated by Kathleen Blamey and David Pellauer (Chicago: The University of Chicago Press, 2000/2004), 209.
31 Cf. Part III, Chapters 8–10.
32 Bengt Hansson, Skapa vetande: Vetenskapsteori från grunden (Lund: Studentlitteratur, 2011), 173.
33 Hansson, Skapa vetande, 156, 160.
34 Hans Skjervheim, Deltagare och åskådare: Sex bidrag till debatten om människans frihet i det moderna samhället. Translated by Ian Hamilton and Lillemor Lagerlöf (Stockholm: Prisma, 1971), 19.
35 This figure, which I have taken from Gunnar Olsson, Professor of Economic Geography and Planning, is the organizing principle in Kristensson Uggla, Slaget om verkligheten. In Swedish, the verb dela means both to divide and to share—a dualism that is captured by the colloquial English term divvy up.
36 Samir Okasha, Philosophy of Science: A Very Short Introduction (Oxford: Oxford University Press, 2002), 2.
37 Nils Erik Villstrand, Riksdelen: Stormakt och rikssprängning 1560-1812 (Helsingfors: Svenska Litteratursällskapet, 2009).
38 Nils-Erik Forsgård, Ingens herre, ingens träl: Radikalen Anders Chydenius i 1700-talets Sverige (Stockholm: Timbro, 2014).
39 Johan Östling, Humboldts universitet: Bildning och vetenskap i det moderna Tyskland (Stockholm: Atlantis, 2016). Cf. Bengt Kristensson Uggla, "Med Humboldt i universitetsdebatten," Signum 3 (2017), 32-37.
40 Wulf, The Invention of Nature, 235.
41 In Swedish, the word for "scientist" [vetenskapsman] means literally "man of science."
42 Geoffrey Gorham, Philosophy of Science: A Beginner's Guide (Oxford: One World, 2009), 129.
43 Shapin, The Scientific Revolution, 1.
44 Shapin, The Scientific Revolution, 3.
45 Shapin, The Scientific Revolution, 136.
46 Lawrence M. Principe, The Scientific Revolution: A Very Short Introduction (Oxford: Oxford University Press, 2011), 37.
47 Principe, The Scientific Revolution, 61.
48 Principe, The Scientific Revolution, 19.
49 See the discussion of Giovanni Pico della Mirandola in Chapter 7.
50 Principe, The Scientific Revolution, 11.
51 Principe, The Scientific Revolution, 36.
52 Lars F. H. Svendsen, Vad är filosofi? Translated by Joachim Retzlaff (Stockholm: Natur och Kultur, 2003/2005), 9, cf. 9–20. This entire line of argument about philosophy and scientific thinking is inspired by Svendsen.
53 The example is taken from the appointment procedure for a professorial chair at Oslo University in 1970, and the evaluations in question related to the applicant Hans Skjervheim. Carl-Göran Heidegren, Positivismstrider (Göteborg: Daidalos, 2016), 42.
54 See Chapter 8.
55 Svendsen, Vad är filosofi?, 45.
56 Okasha, Philosophy of Science, 12.
57 Liedman, Ett oändligt äventyr.
58 Liedman, Ett oändligt äventyr, 110f.
59 Nowotny et al., Re-Thinking Science.
60 Stefan Svallfors, The Inner World of Research: On Academic Research. Translated by Neil Betteridge (Anthem Press, 2012/2021), 10.
61 Calvin O. Schrag, "The Fabric of Fact," in Philosophical Papers: Betwixt and Between (Albany: State University of New York Press, 1994), 184.
62 This is also the subject of one of the classic works of philosophy of science, Ludwik Fleck, Genesis and Development of a Scientific Fact. Translated by Frederick Bradley (Chicago: The University of Chicago Press, 1935/1981). This idea has also been productively developed by Bruno Latour, who has examined, by means of a large number of studies, how facts come about and are managed within scientific thinking. See, for example, Bruno Latour, Science in Action: How to Follow Scientists and Engineers Through Society (Cambridge, MA: Harvard, 1987).
63 Lorraine Daston and Peter Galison, Objectivity (New York: Zone Books, 2007).
64 Liedman, Ett oändligt äventyr, 12.
65 Mats Alvesson, Interpreting Interviews (London: Sage, 2011), 4.
66 Sten Andersson, Om vetenskapens gränser: Socialfilosofiska betraktelser (Göteborg: Daidalos, 2004), 7, 105. The Swedish word "vetenskapande" implies "knowing-creating."
67 On the scientific importance of peer review, see Chapter 10 and, in particular, Chapter 12.
68 Hans-Georg Gadamer, Truth and Method. Translated by Joel Weinsheimer and Donald G. Marshall (London: Bloomsbury, 1960/2004), 320, cf. 318-322.
69 Paul Ricoeur, From Text to Action: Essays in Hermeneutics. Translated by Kathleen Blamey and John B. Thompson (Evanston: Northwestern University Press, 1986/2007), 75-88, 270-307; Kristensson Uggla, Slaget om verkligheten, 45–61.
70 Kristensson Uggla, "Med Humboldt i universitetsdebatten."
71 Adam Smith, The Wealth of Nations (Capstone Publishing, 1776/1994), 15.
72 Katrine Marçal, Who Cooked Adam Smith's Dinner? A Story about Women and Economics. Translated by Saskia Vogel (London: Portobello Books, 2012/2016), 41.
73 Marçal, Who Cooked Adam Smith's Dinner?, 40.
74 Ulla Eriksson-Zetterquist and Alexander Styhre, Organisering och intersektionalitet (Malmö: Liber, 2007).
75 Shapin, The Scientific Revolution, 10. The period discussed in Shapin's book is the Scientific Revolution of the sixteenth and seventeenth centuries.
76 Immanuel Wallerstein et al., Open the Social Sciences: Report of the Gulbenkian Commission on the Restructuring of the Social Sciences (Stanford: Stanford University Press, 1996), 75.
77 This kind of question may help explain why Tycho Brahe, as we will see in Chapter 5, continued to cling to a (modified) geocentric worldview as late as the year 1600, while his younger assistant Johannes Kepler had become a convinced Copernican.
78 Per Strømholm, Den videnskaplige revolusjonen, 1500-1700 (Oslo: Solum, 1984), 52–53.
79 Michael Polanyi, Personal Knowledge: Towards a Post-Critical Philosophy (Chicago: The University of Chicago Press, 1958/1962), 3.
80 Polanyi, Personal Knowledge, 3.
81 Polanyi, Personal Knowledge, 4.
82 Polanyi, Personal Knowledge, 4.
83 Polanyi, Personal Knowledge, vii.
84 Michael Polanyi, The Tacit Dimension. With a new foreword by Amartya Sen (Chicago: The University of Chicago Press, 1966/2009), 4.
85 Polanyi, The Tacit Dimension, 25.
86 Polanyi, The Tacit Dimension, 67.
87 See Chapter 2. Cf. Kristensson Uggla, Slaget om verkligheten, 350–374 and Gorham, Philosophy of Science, vii.
88 Polanyi, Personal Knowledge, 19.
89 Polanyi, The Tacit Dimension, 22.
90 Polanyi, The Tacit Dimension, 22-3.
91 Strømholm, Den videnskaplige revolusjonen, 62.
92 Lennart Hultqvist, Knockad av stjärnhimlen: Från Tycho Brahes blick till Voyagers resa (Stockholm: Santérus, 2018), 85.
93 Hultqvist, Knockad av stjärnhimlen, 87.
94 Wootton, The Invention of Science, 1.
95 Wootton, The Invention of Science, 575.
96 Wootton, The Invention of Science, 57.
97 The noun "discovery" was used in this new sense for the first time in 1554, the verb "discover" in 1553, and the phrase "voyage of discovery" in 1574 (Wootton, The Invention of Science, 82).
98 Wootton, The Invention of Science, Chapter 3, "Inventing discovery," 57ff.
99 Wootton, The Invention of Science, 83.
100 Wootton, The Invention of Science, 91.
101 Wootton, The Invention of Science, 93.
102 John Freely, Istanbul: The Imperial City (London: Penguin Books, 1996), 202.
103 Yuval Noah Harari, Sapiens: A Brief History of Humankind (London: Penguin, 2015), 317.
104 Harari, Sapiens, 319.
105 Mark Monmonier, How to Lie with Maps (Chicago: University of Chicago Press, 1991), 1. Cf. "Not only is it easy to lie with maps, it's essential. To portray meaningful relationships for a complex three-dimensional world on a flat sheet of paper or a video screen, a map must distort reality." (ibid., 1). For a fuller introduction to the significance of the world map for the development of Eurocentrism, see Kristensson Uggla, Slaget om verkligheten, 17–49.
106 Ludwig Wittgenstein, Philosophical Investigations. 4th Edition (Oxford: Wiley-Blackwell, 1953/2009), part II, § 118-121.
107 Michel Foucault, This Is Not a Pipe. With illustrations and letters by René Magritte. Translated by James Harkness (Berkeley: University of California Press, 1973/1983).
108 Principe, The Scientific Revolution, 9.
109 Roy Porter, The Enlightenment (Red Globe Press, 1990).
110 Pierre Guillet de Monthoux, The Art Firm: Aesthetic Management and Metaphysical Marketing (Palo Alto: Stanford University Press, 2004).
111 Giorgio Vasari, Leonardo da Vinci. Translated by George Bull (Penguin Books, 1550/1987), 25.
112 Don Ihde, Instrumental Realism: The Interface between Philosophy of Science and Philosophy of Technology (Bloomington, IN: Indiana University Press, 1991), 9.
330
Notes
and Philosophy of Technology (Bloomington, IN: Indiana University Press, 1991), 9. 113 This formulation is taken from Rachel Laudan, The Nature of Technological Knowledge: Are Models of Scientific Change Relevant? (Dordrecht: D. Reidel Publishing Co, 1984), 9. 114 Principe, The Scientific Revolution, 13. 115 Ihde, Instrumental Realism, 136 ff. 116 Principe, The Scientific Revolution, 59. 117 Hultqvist, Knockad av stjärnhimlen, 7. 118 Wootton, The Invention of Science, 560. 119 However, many of these inventions and innovations—such as the magnetic needle, rubber, firearms—probably came from China, in the same way as Europeans gained knowledge about local geography, geology, animals, plants, classificatory systems, methods of cultivation, and navigation techniques from the peoples whom they encountered. Sandra Harding, Is Science Multi-Cultural? Postcolonialisms, Feminisms, and Epistemologies (Bloomington, IN: Indiana University Press.1998), 35. 120 Polanyi, Personal Knowledge, 6. 121 Wootton, The Invention of Science, 163–164. 122 Wootton, The Invention of Science, 177. 123 Wootton, The Invention of Science, 24. 124 Stephen Toulmin, Cosmopolis: The Hidden Agenda of Modernity (Chicago: The University of Chicago Press, 1990). By “carnivalesque” science, Toulmin has in mind the tolerant and skeptical attitude of sixteenth-century humanist philosophers such as Montaigne, whose thinking was characterized by a very different kind of respect for complexity and diversity than that found in the ideas of Descartes, who came to dominate from the seventeenth century and on. 125 Strømholm, Den videnskaplige revolusjonen, 186. 126 Amartya Sen, ”Forword.” In Michael Polanyi, The Tacit Dimension, vii-xvi (Chicago: The University of Chicago Press, 1966/2009). 127 Peter J Bowler and Iwan Rhys Morus, Making Modern Science: A historical survey (Chicago/London: The University of Chicago Press, 2005), 46-52. 128 Strømholm, Den videnskaplige revolusjonen, 212. 129 Strømholm, Den videnskaplige revolusjonen, 225. 130 Francis Bacon, New Atlantis (Cambridge: Cambridge University Press, 1627/2014), 22. 131 Bacon, New Atlantis, 35-46. 132 Ihde, Instrumental Realism, 134. Cf. Bruno Latour and Steve Woolgar, Laboratory Life: The construction of scientific fact (Princeton: Princeton University Press, 1979). 133 Max Horkheimer and Theodor W. Adorno, Dialectic of Enlightenment. Translated by John Cumming (London/New York: Verso, 1944/1997), 5. 134 Adorno and Horkheimer, Dialectic of Enlightenment, 6. 135 Georg Henrik von Wright, Vetenskapen och förnuftet: Ett försök till orientering (Stockholm: Bonniers, 1986), 151.
136 René Descartes, Meditations on First Philosophy (BN Publishing, 1641/2007), 21.
137 Descartes, Meditations on First Philosophy, 27.
138 Descartes, Meditations on First Philosophy, 30.
139 Toulmin, Cosmopolis, 16.
140 Geoffrey Parker, Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century (New Haven/London: Yale University Press, 2013).
141 Descartes, Meditations on First Philosophy, 49.
142 Eberhard Jüngel, God as the mystery of the world: On the foundation of the theology of the crucified one in the dispute between theism and atheism. Translated by Darrell L. Guder (Grand Rapids: Eerdmans, 1977/1983), 14-35.
143 Richard Bernstein, Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis (University of Pennsylvania Press, 1983).
144 Shapin, The Scientific Revolution, 13.
145 Giovanni Pico della Mirandola, Oration on the Dignity of Man. Translation: A. Robert Caponigri. Introduction by Russell Kirk (Chicago: Henry Regnery Company, 1486/1956), 7-8.
146 Pico della Mirandola, Oration on the Dignity of Man, 8.
147 Cf. Chapter 4.
148 Friedrich Nietzsche, On Truth and Lies in a Nonmoral Sense (Theophania Publishing, 1873/2012), 3.
149 See Chapter 12.
150 Strømholm, Den videnskaplige revolusjonen, 169.
151 Strømholm, Den videnskaplige revolusjonen, 37.
152 Bacon, The New Organon, Book I, XCV, 79.
153 Strømholm, Den videnskaplige revolusjonen, 166.
154 Immanuel Kant, Critique of Pure Reason. Unified Edition (with all variants from the 1781 and 1787 editions). Translated by Werner S. Pluhar (Hackett Publishing, 1787/2004), A51/B75, 156.
155 Fredric Jameson, Valences of the Dialectic (London/Brooklyn: Verso, 2010), 475–545.
156 K.E. Løgstrup, Kierkegaard's and Heidegger's Analysis of Existence and its Relation to Proclamation. Selected Works of K.E. Løgstrup. Translated by Robert Stern (Oxford: Oxford University Press, 1950/2020).
157 Sven-Eric Liedman, Hets! En bok om skolan (Stockholm: Bonniers, 2011), 60.
158 Auguste Comte, General View of Positivism. Translated by J.H. Bridges (London: Reeves & Turner, 1844/1880), 16.
159 Liedman, Ett oändligt äventyr, 366–370.
160 Wallerstein et al., Open the Social Sciences, 9; cf. 5ff.
161 Compare the discussion of scientific thinking as "science" in Chapter 3.
162 Søren Kjørup, Människovetenskaperna: Problem och traditioner i humanioras vetenskapsteori. Translated by Sven-Erik Torhell (Lund: Studentlitteratur, 2008/2009), 71.
163 Mark C. Taylor, Last Works: Lessons in Leaving (New Haven, CT: Yale University Press, 2018), 297–316.
164 Wulf, The Invention of Nature, 128.
165 Wulf, The Invention of Nature, 199.
166 Wulf, The Invention of Nature, 335.
167 Karl Marx and Friedrich Engels, The Communist Manifesto. Translated by Samuel Moore (Penguin Classics, 1848/2012).
168 See Chapters 11 and 12.
169 Alan F. Chalmers, What is this thing called science? 4th edition (New York: Open University Press, 2013), 2-3.
170 Gilje and Grimen, whose ideas have deeply informed my own, also argue that positivism's breakthrough involved a breakthrough for a general discussion of scientific thinking in our era: "Logical positivism (subsequently called logical empiricism) ushers in the modern philosophy of science." Nils Gilje and Harald Grimen, Samhällsvetenskapernas förutsättningar. Translated by Sten Andersson (Göteborg: Daidalos, 1992/2003), 57. For my part, however, I believe that we should also be careful to incorporate the importance of the Scientific Revolution as a background and screen for the projection of the scientific ideals that properly emerged in the nineteenth and twentieth centuries, while also including positivism's "internal" and "external" critiques into our definition of modern scientific thinking.
171 Georg Henrik von Wright, Logik, filosofi och språk: Strömningar och gestalter i modern filosofi (Nora: Nya Doxa, 1965/1993), 149.
172 Ludwig Wittgenstein, Tractatus Logico-Philosophicus (Dover Publications, 1921/1998), § 7.
173 For a brilliant account of Wittgenstein's life and thought, see Ray Monk, Ludwig Wittgenstein: The Duty of Genius (London: Vintage, 1990).
174 Ingvar Johansson, "Positivismen." In Positivism och marxism, edited by Ingvar Johansson and Sven-Eric Liedman (Stockholm: Norstedts, 1981), 27.
175 Bengt Hansson, Skapa vetande: Vetenskapsteori från grunden (Lund: Studentlitteratur, 2011), 161.
176 W. V. O. Quine, Pursuit of Truth (Cambridge, MA: Harvard University Press, 1992).
177 Johansson, "Positivismen," 31.
178 Skjervheim, Deltagare och åskådare, 63.
179 Chalmers, What is this thing called science?, 38–54.
180 Karl Popper, The Logic of Scientific Discovery (London/New York: Routledge, 1935/2002), 93-94.
181 Gilje and Grimen, Samhällsvetenskapernas förutsättningar, 81. This entire section in my presentation is heavily indebted to Gilje and Grimen, from whom I have taken my principal insights on critical rationalism.
182 Popper, The Logic of Scientific Discovery, 280.
183 Karl Popper, The Open Society and Its Enemies, Vol. 2 (London/New York: Routledge, 1968/2002), 225.
184 Chalmers, What is this thing called Science?, 81–120.
185 Kristensson Uggla, Slaget om verkligheten, 199–201. Kuhn's book about paradigms and scientific revolutions—and his ambition of developing an alternative model to positivism's view of scientific thinking as continual growth through the collating of new information—was in some sense anticipated by Fleck, Genesis and Development of a Scientific Fact (1935). The knowledge-theoretical and social dimensions of scientific thinking that Kuhn would discuss in terms of paradigms and scientific communities are paralleled in Fleck's work by the importance ascribed to thought style, thought constraints, and thought collectives in science.
186 The French title Les mots et les choses means "Words and Things."
187 Michel Foucault, L'ordre du discours: Leçon inaugurale au Collège de France prononcée le 2 décembre 1970 (Paris: Gallimard, 1971).
188 Michel Foucault, The History of Sexuality. Volume I: An Introduction. Translated by Robert Hurley (New York: Pantheon Books, 1976/1978), 93.
189 However, Kuhn himself has added to the confusion by identifying as many as twenty-two different meanings of the word paradigm as used in his celebrated book. It might also be added that, although the concept of the paradigm was intended for the natural sciences, it actually originated in the humanities, specifically the linguistic term for a table of verb inflections (Kjørup, Människovetenskaperna, 95–97).
190 Okasha, Philosophy of Science, 94.
191 Ihde, Instrumental Realism, 7.
192 Christian Smith, What Is a Person? Rethinking Humanity, Social Life, and the Moral Good from the Person Up (Chicago/London: The University of Chicago Press, 2010), 184–190.
193 Kristensson Uggla, Slaget om verkligheten, 320–327.
194 Don Ihde, Experimental Phenomenology: An Introduction (Albany: State University of New York Press, 1986), 14.
195 Ihde, Experimental Phenomenology, 29-79.
196 Cf. my previous account of this in "Drömmen om det konkreta inom positivismen, fenomenologin och existensfilosofin," Kristensson Uggla, Slaget om verkligheten, 320–326.
197 Maurice Merleau-Ponty, The Phenomenology of Perception. Translated by Colin Smith (London: Routledge, 1945/2002).
198 Paul Ricoeur, Husserl: An Analysis of His Phenomenology. Translated by Edward G. Ballard and Lester E. Embree (Evanston: Northwestern University Press, 1967), 201.
199 Ricoeur outlines his analysis of time in Paul Ricoeur, Time and Narrative, I–III. Translated by Kathleen McLaughlin and David Pellauer (Chicago: The University of Chicago Press, 1983-85/1984-88), and continues his reflections in Memory, History, Forgetting. Translated by Kathleen Blamey & David Pellauer (Chicago: The University of Chicago Press, 2000/2004).
200 Ingolf Dalferth, Creatures of possibility. Translated by Jo Bennett (Grand Rapids: Baker Academic, 2011/2016), 84.
201 Don Ihde, Expanding Hermeneutics: Visualism in Science (Evanston, IL: Northwestern University Press, 1998), 50.
202 Ricoeur, Time and Narrative, III, 207; Ricoeur, Oneself as Another. Translated by Kathleen Blamey (Chicago: The University of Chicago Press, 1990/1992), Chapter 10, "What ontology in view?" 297-356.
203 C.P. Snow, "The Two Cultures." The Rede Lecture (Cambridge: Cambridge University Press, 1959).
204 Wallerstein et al., Open the Social Sciences, 62.
205 Ilya Prigogine and Isabelle Stengers, La nouvelle alliance: Métamorphose de la science (Paris: Gallimard, 1982), cited in Wallerstein et al., Open the Social Sciences, 75.
206 Wallerstein et al., Open the Social Sciences, 50f.
207 Bowler and Morus, Making Modern Science, 299.
208 Bowler and Morus, Making Modern Science, 315.
209 Skjervheim, Deltagare och åskådare, 12.
210 Ricoeur, Oneself as Another, 178; Bengt Kristensson Uggla in Paul Ricoeur, Homo Capax: Texter om filosofisk antropologi och etik av Paul Ricoeur sammanställda av Bengt Kristensson Uggla. Translated by Eva Backelin (Göteborg: Daidalos, 2011), 13.
211 Rosi Braidotti, The Posthuman (Cambridge: Polity, 2013).
212 Bengt Kristensson Uggla, "Coping with Academic Schizophrenia: The Privileged Place of the Person when Confronting the Anthropological Deficit of Contemporary Social Imagination: Christian Smith and Paul Ricoeur." Eco-ethica. Volume 5 (Berlin: LIT Verlag, 2016), 199-218.
213 Nowotny, Re-Thinking Science, 1.
214 Skjervheim, Deltagare och åskådare, 31.
215 Skjervheim, Deltagare och åskådare, 11–35.
216 Smith, What is a Person?, 4, 5, 9, 13, 21.
217 Smith, What is a Person?, 2–4, 384–385.
218 Christian Smith, To flourish or destruct: A personalist theory of human goods, motivations, failure, and evil (Chicago: The University of Chicago Press, 2015), 62. Cf. Kristensson Uggla, "Coping with Academic Schizophrenia" and Kristensson Uggla, Slaget om verkligheten, Chapter 6.
219 Shapin, The Scientific Revolution, 164.
220 Sven-Eric Liedman, Mellan det triviala och det outsägliga: Blad ur humanioras och samhällsvetenskapernas historia (Göteborg: Daidalos, 1998), 13f.
221 Svendsen, Vad är filosofi?, 97.
222 Bengt Kristensson Uggla, "Personfilosofi – filosofiska utgångspunkter för personcentrering inom hälso- och sjukvård." In Personcentrering inom hälso- och sjukvård: Från filosofi till praktik, edited by Inger Ekman (Stockholm: Liber, 2020), 58-105, and Kristensson Uggla, "What makes us human? Exploring the Significance of Ricoeur's Ethical Configuration of Personhood between Naturalism and Phenomenology in Health Care." Nursing Philosophy (2022).
223 Inger Ekman, et al. "Person-centered care – Ready for prime time." European Journal of Cardiovascular Nursing. No. 10 (2011): 248–251.
224 Ricoeur, Oneself as Another, 172, cf. Kristensson Uggla, "Personfilosofi."
225 Andersson, Om vetenskapens gränser, 63.
226 Harari, Sapiens, 42.
227 Smith, What is a Person?, 53.
228 Polanyi, The tacit dimension, 37. Cf. "It is the height of intellectual perversion to renounce, in the name of scientific objectivity, our position as the highest form of life on earth, and our own advent by a process of evolution as the most important problem of evolution." (ibid., 47).
229 Cf. Smith, What is a Person?, 37, who refers to the statement that "the human body is nothing but a whole lot of oxygen, carbon, hydrogen, nitrogen, calcium, phosphorus, sulphur, sodium, magnesium, and trace amounts of copper, zinc, selenium, molybdenum, fluorine, chlorine, iodine, manganese, cobalt, iron, lithium, strontium, aluminium, silicon, lead, vanadium, arsenic, and bromine."
230 Smith, What is a Person?, 38.
231 Jürgen Habermas, Between Naturalism and Religion: Philosophical Essays. Translated by Ciaran Cronin (Cambridge: Polity, 2005/2008), 146.
232 Peter Kemp, Filosofiens Verden: Kritik – etik – paedagogik – religion (Copenhagen: Tiderne Skifter, 2012), 21–33: "Sand og falsk reduktion."
233 Kemp, Filosofiens Verden.
234 Olof Lagercrantz, Om konsten att läsa och skriva (Stockholm: Wahlström & Widstrand, 1985), 7.
235 Lagercrantz, Om konsten att läsa och skriva, 8.
236 Friedrich Nietzsche, Human, All Too Human: A book for free spirits. Translation: R.J. Hollingdale (Cambridge: Cambridge University Press, 1878/1986), 97, paragraph 208.
237 See Chapter 4.
238 Ricoeur, Oneself as Another, 22f.
239 Dagfinn Føllesdal, Lars Walløe, and Jon Elster, Argumentasjonsteori, språk og vitenskapsfilosofi (Oslo: Universitetsforlaget, 1990).
240 Jan Hartman, Vetenskapligt tänkande: Från kunskapsteori till metodteori (Lund: Studentlitteratur, 2004), 106. Cf. 187f.
241 Hartman, Vetenskapligt tänkande, 273.
242 Wilhelm Dilthey, "The development of hermeneutics." In Selected Writings, edited by H.P. Rickman (Cambridge: Cambridge University Press, 1900/1976).
243 The ambition of "understanding authors better than they understood themselves" is dubious since it not only seems to articulate an excessively psychologizing goal of accessing authorial intention but also seems to include the possibility of understanding authors differently from how they understood themselves. The latter interpretation is interesting in that it gestures beyond the nostalgic focus on authorial intentions "behind" the text by instead creating the opportunity for a new hermeneutic situation in the world "before" the text. Subsequent research has shown that Schleiermacher in his lectures on hermeneutics actually refers to two different kinds of interpretation: one "psychological," the other "grammatical."
244 Gadamer, Truth and Method, cf. Burman (2014), Hans-Georg Gadamer och hermeneutikens aktualitet.
245 Wallerstein et al., Open the Social Sciences, 6.
246 Paul Ricoeur, Hermeneutics and the Human Sciences: Essays on language, action, and interpretation. Edited, translated and introduced by John B. Thompson (Cambridge: Cambridge University Press, 1973/1981), 131-144.
247 Carl Hempel, "The Function of General Laws in History." The Journal of Philosophy, no. 9 (1942), 35-48.
248 Okasha, Philosophy of Science, 52–55.
249 Paul Ricoeur, From Text to Action: Essays in Hermeneutics. Translated by Kathleen Blamey and John B. Thompson (Evanston: Northwestern University Press, 1986/1991), 135.
250 Elizabeth Anscombe, Intention (Oxford: Blackwell, 1957).
251 Georg Henrik von Wright, Explanation and Understanding (London: Routledge & Kegan Paul, 1971).
252 Ricoeur, From Text to Action, 125-143.
253 Kjørup, Människovetenskaperna, 27.
254 Kjørup, Människovetenskaperna, 43f.
255 Andersson, Om vetenskapens gränser.
256 Lars Geschwind and Miriam Terrell, "Vilka var humanisterna? Miljöer och verksamhet 1900, 1950 och 2000." In Humanisterna och framtidssamhället: Forskningsrapport, edited by Julia Bugoslaw (Stockholm: Institutet för framtidsstudier, 2011), 77, 83. Cf. Anders Ekström and Sverker Sörlin, Alltings mått: Humanistisk kunskap i framtidens samhälle (Stockholm: Norstedts, 2012), 139.
257 Ekström and Sörlin, Alltings mått, 122.
258 Walter Isaacson, Steve Jobs: The Exclusive Biography (Abacus, 2011/2015).
259 Anne Colby, Thomas Ehrlich, William M. Sullivan, and Jonathan R. Dolle, Rethinking Undergraduate Business Education: Liberal Learning for the Profession. The Carnegie Foundation for the Advancement of Teaching. Foreword by Lee S. Shulman (San Francisco: Jossey-Bass, 2011).
260 Cf. for example Joakim Molander, Vetenskapsteoretiska grunder: Historia och begrepp (Lund: Studentlitteratur, 2003) and Kjørup, Människovetenskaperna.
261 Ricoeur, "The Hermeneutical Function of Distanciation," Chapter 3 in From Text to Action, 75-88.
262 Ricoeur, From Text to Action, 134.
263 Ricoeur, From Text to Action, 134.
264 Tomas Tranströmer, "Preludes," Night Vision. Selected and translated from the Swedish by Robert Bly (London: London Magazine Editions, 1970/1972), 45.
265 Wulf, The Invention of Nature, 196.
266 Alexander von Humboldt, quoted in Wulf, The Invention of Nature, 196.
267 Hannah Arendt, The Human Condition (Chicago: The University of Chicago Press, 1958), 7.
268 Although the authors advance the fascinating thesis that it is more meaningful to define the West on the basis of its institutions for organizing knowledge than on a set of cultural values or a geographical region, this does not mean that they advocate an unreflective Eurocentrism. On the contrary, they take great pains to include a global perspective by drawing on studies of historical knowledge institutions that have developed in parallel in Chinese, Islamic, and Indian contexts. Ian F. McNeely and Lisa Wolverton, Reinventing Knowledge: From Alexandria to the Internet (New York/London: Norton, 2008), xiii.
269 This perspective on the importance of the library offers a parallel to the transition from oral to written, from Aristotle to Alexander, from city state to empire: "the library embodies on a large scale, what Aristotle's book embodied in miniature" (McNeely and Wolverton, Reinventing Knowledge, 13).
270 McNeely and Wolverton, Reinventing Knowledge, 93.
271 McNeely and Wolverton, Reinventing Knowledge, 274.
272 These hermeneutic practices are presented in greater detail in Kristensson Uggla, Slaget om verkligheten.
273 Harari, Sapiens, 276.
274 Manuel Castells, The Information Age: Economy, Science, and Culture. Volume 1: The Rise of the Network Society (Oxford: Blackwell, 1996).
275 Castells, The Information Age, 5-46.
276 Walter Isaacson, The innovators: How a group of hackers, geniuses and geeks created the digital revolution (London/New York: Simon & Schuster, 2014), 479–488.
277 Mats Alvesson, Yiannis Gabriel, and Roland Paulsen, Return to Meaning: A Social Science with Something to Say (Oxford: Oxford University Press, 2017).
278 Ove Kaj Pedersen, Konkurrencestaten (Copenhagen: Hans Reitzel, 2011), 12.
279 Pedersen, Konkurrencestaten, 206.
280 Pedersen, Konkurrencestaten, 208.
281 Pedersen, Konkurrencestaten, 287.
282 Pedersen, Konkurrencestaten, 15.
283 I have no wish to add to the exaggerated importance that Donald Trump ascribes to himself, so when I use the notion of "Trump-society" it is because I see Trump as the consequence of a particular societal state, not its cause. (Cf. Matthew d'Ancona, Post truth: The new war on truth and how to fight back (London: Ebury Press, 2017), 5.)
284 Lorraine Daston and Peter Galison, Objectivity (New York: Zone Books, 2007), 5.
285 Ricoeur, Oneself as Another, 297-356.
286 Adrian Ratkić, Dialogseminariets forskningsmiljö (Stockholm: Dialoger, 2016).
287 Wittgenstein, Philosophical Investigations.
288 Johan Svahn, Kunskap i resonans: Om yrkeskunnande, teknologi och säkerhetskultur (Stockholm: Kungliga Tekniska Högskolan, 2009).
289 Nina Rehnqvist, "Förord." In Evidensens många ansikten: Evidensbaserad praktik i praktiken, edited by Ingmar Bohlin and Morten Sager (Lund: Arkiv, 2011), 10.
290 Helga Nowotny, The Cunning of Uncertainty (Cambridge: Polity Press, 2016), vii.
291 Ingmar Bohlin and Morten Sager (eds), Evidensens många ansikten: Evidensbaserad praktik i praktiken (Lund: Arkiv, 2011), 14.
292 Bohlin and Sager, Evidensens många ansikten, 16, 19–23.
293 Bohlin and Sager, Evidensens många ansikten, 16.
294 Bohlin and Sager, Evidensens många ansikten, 28. Cf. 213ff., 219–223.
295 Kerstin Sahlin and Ulla Eriksson-Zetterquist, Kollegialitet: En modern styrform (Lund: Studentlitteratur, 2016), 45.
296 Sahlin and Eriksson-Zetterquist, Kollegialitet, 49.
297 Sahlin and Eriksson-Zetterquist, Kollegialitet, 75; cf. 51f., 74.
298 Sahlin and Eriksson-Zetterquist, Kollegialitet, 73.
299 Karl Jaspers, The Idea of the University. Translated by H. A. T. Vanderschmidt. Edited by Karl W. Deutsch. Preface by Robert Ulrich (Boston: Beacon Press, 1945/1959), 1.
300 Sahlin and Eriksson-Zetterquist, Kollegialitet, 79, 87 ff., 165.
301 Thomas Karlsohn (ed), Universitetets idé: Sexton nyckeltexter (Göteborg: Daidalos, 2016), 7–9.
302 Karl Jaspers, Die Idee der Universität, reprint (Berlin: Springer, 1946/1980), 5. Translated from the German; curiously, this passage has been omitted from the English translation of Jaspers's book.
303 Daston and Galison, Objectivity, 5; cf. 4ff. and 28, 37, 39ff., 53.
304 When Daston and Galison specify the ethos that made mechanical objectivity possible—on the basis of the idea that objectivity and subjectivity should be seen as two sides of the same scientific project—they highlight self-mastery, self-discipline, self-control, and a self-chosen desire for a lack of will: "the negating of subjectivity by the subject became objectivity" (Daston and Galison, Objectivity, 204).
305 Daston and Galison, Objectivity, 53.
306 Daston and Galison, Objectivity, Chapter 6.
307 Cf. Daston and Galison, Objectivity, 58: "Seeking truth is the ur-epistemic virtue." From here on, I am following Ricoeur's articulation of ethics as "aiming for the good life, with and for others, in just institutions" (Ricoeur, Oneself as Another, 172; cf. ibid. Chapters 7-9). Cf. John Rawls: "Justice is the first virtue of social institutions, as truth is of systems of thought." Rawls, A Theory of Justice (Cambridge, MA: Harvard University Press, 1971/1999), 3.
308 Rawls, A Theory of Justice.
309 Claes Gustafsson, Om företag, moral och handling (Lund: Studentlitteratur, 1988).
310 Tomas Brytting, Företagsetik (Malmö: Liber Ekonomi, 2005), 37–55.
311 Ricoeur, Oneself as Another, 287-290.
312 Mats Alvesson and Kaj Sköldberg, Reflexive Methodology: New Vistas for Qualitative Research (London: Sage, 2008), 316. Cf. Sven-Eric Liedman: "Science is mutable, but knowledge, by contrast, is a passive result." (Liedman, Ett oändligt äventyr, 60).
313 Wilhelm von Humboldt, "On the Internal and External Organization of the Higher Scientific Institutions in Berlin." German History in Documents and Images. Volume 2. From Absolutism to Napoleon, 1648-1815. Translated by Thomas Dunlap (1810), 2-3.
314 I will mention here in particular Fredrik Ekelund, Jag vill ha hela världen (Stockholm: Bonniers, 1996) and Martin Engberg, En enastående karriär (Stockholm: Norstedts, 2017).
315 Hans-Georg Gadamer, Die Aktualität des Schönen: Kunst als Spiel, Symbol und Fest (Stuttgart: Reclam Universal-Bibliothek, 1977/2012), cf. Gadamer, Truth and Method, 106-135.
BIBLIOGRAPHY AND REFERENCES
d’Acona, Matthew. 2017. Post truth: The new war on truth and how to fight back. London: Ebury Press. Alvesson, Mats, and Sköldberg, Kaj. 2018. Reflexive Methodology: New Vistas for Qualitative Research. London: Sage. Alvesson, Mats. 2011. Interpreting Interviews. London: Sage. Alvesson, Mats, and Gabriel, Yiannis, and Paulsen, Roland. 2017. Return to Meaning: A Social Science with Something to Say. Oxford: Oxford University Press. Andersson, Sten. 2004. Om vetenskapens gränser: Socialfilosofiska betraktelser. Göteborg: Daidalos. Anscombe, Elizabeth. 1957. Intention. Oxford: Blackwell. Arendt, Hannah. 1958. The Human Condition. Chicago: The University of Chicago Press. Asplund, Johan. 2003. Hur låter åskan? Förstudium till en vetenskapsteori. Göteborg: Korpen. Bacon, Francis. 1627/2014. New Atlantis. Cambridge: Cambridge University Press. Bacon, Francis. 1620/2000. The New Organon. Edited by Lisa Jarine and Michael Silverthorne. Cambridge: Cambridge University Press. Berger, Peter, and Luckmann, Thomas. 1968. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. New York: Anchor Books. Bernstein, Richard. 1983. Beyond Objectivism and Relativism: Science, Hermeneutics, and Praxis. University of Pennsylvania Press. Bohlin, Ingmar, and Sager, Morten (eds). 2011. Evidensens många ansikten: Evidensbaserad praktik i praktiken. Lund: Arkiv. Bornemark, Jonna. 2018. Det omätbaras renässans: En uppgörelse med pedanternas världsherravälde. Stockholm: Volante. Bowler, Peter J, and Morus Iwan Rhys. 2005. Making Modern Science: A historical survey. Chicago & London: The University of Chicago Press. Braidotti, Rosi. 2013. The Posthuman. Cambridge: Polity. Brytting, Tomas. 2005. Företagsetik. Malmö: Liber Ekonomi. Burman, Anders. (ed). 2014. Hans-Georg Gadamer och hermeneutikens aktualitet. Stockholm: Excerpt.
Castells, Manuel. 1996. The Information Age: Economy, Science, and Culture. Volume 1: The Rise of the Network Society. Oxford: Blackwell.
Chalmers, Alan F. 2013. What is this thing called science? 4th edition. New York: Open University Press.
Colby, Anne, Ehrlich, Thomas, Sullivan, William M., and Dolle, Jonathan R. 2011. Rethinking Undergraduate Business Education: Liberal Learning for the Profession. The Carnegie Foundation for the Advancement of Teaching. Foreword by Lee S. Shulman. San Francisco: Jossey-Bass.
Comte, Auguste. 1844/1880. General View of Positivism. Translated by J.H. Bridges. London: Reeves & Turner.
Copernicus, Nicolaus. 1543/2016. On the Revolutions of the Heavenly Spheres [De revolutionibus orbium coelestium]. Hansebooks.
Dalferth, Ingolf. 2011/2016. Creatures of possibility. Translated by Jo Bennett. Grand Rapids: Baker Academic.
Danto, Arthur. 1996. After the End of Art: Contemporary Art and the Pale of History. Princeton, NJ: Princeton University Press.
Danto, Arthur. 2003. The Abuse of Beauty: Aesthetics and the Concept of Art. The Paul Carus Lectures Series 21. Chicago and La Salle: Open Court Publishing.
Darwin, Charles. 1859. The Origin of Species by Natural Selection. New York: D. Appleton and Company.
Daston, Lorraine, and Galison, Peter. 2007. Objectivity. New York: Zone Books.
Descartes, René. 1641/2007. Meditations on First Philosophy. BN Publishing.
Dilthey, Wilhelm. "The development of hermeneutics." In Selected Writings, edited by H.P. Rickman. Cambridge: Cambridge University Press, 1900/1976.
Ekelund, Fredrik. 1996. Jag vill ha hela världen. Stockholm: Bonniers.
Ekman, Inger, et al. "Person-centered care – Ready for prime time." European Journal of Cardiovascular Nursing. No. 10 (2011): 248–251.
Ekström, Anders, and Sörlin, Sverker. 2012. Alltings mått: Humanistisk kunskap i framtidens samhälle. Stockholm: Norstedts.
Engberg, Martin. 2017. En enastående karriär. Stockholm: Norstedts.
Eriksson-Zetterquist, Ulla, and Styhre, Alexander. 2007. Organisering och intersektionalitet. Malmö: Liber.
Fleck, Ludwik. 1935/1981. Genesis and Development of a Scientific Fact. Translated by Frederick Bradley. Chicago: The University of Chicago Press.
Forsgård, Nils-Erik. 2014. Ingens herre, ingens träl: Radikalen Anders Chydenius i 1700-talets Sverige. Stockholm: Timbro.
Foucault, Michel. 1961/1965. Madness and Civilization: A History of Insanity in the Age of Reason. Translated by Richard Howard. New York: Random House.
Foucault, Michel. 1966/1971. The Order of Things: An Archaeology of the Human Sciences. New York: Random House.
Foucault, Michel. 1971. L'ordre du discours: Leçon inaugurale au Collège de France prononcée le 2 décembre 1970. Paris: Gallimard.
Foucault, Michel. 1973/1983. This is not a pipe. With illustrations and letters by René Magritte. Translated by James Harkness. Berkeley: University of California Press.
Foucault, Michel. 1976/1978. The History of Sexuality. Volume I: An Introduction. Translated by Robert Hurley. New York: Pantheon Books.
Freely, John. 1996. Istanbul: The Imperial City. London: Penguin Books.
Fröding, Gustaf. 1891/2012. Ungdomsdikter: Gitarr och dragharmonika. Stockholm: Dejavu.
Føllesdal, Dagfinn, Walløe, Lars, and Elster, Jon. 1990. Argumentasjonsteori, språk og vitenskapsfilosofi. Oslo: Universitetsforlaget.
Gadamer, Hans-Georg. 1960/2004. Truth and Method. Translated by Joel Weinsheimer and Donald G. Marshall. London: Bloomsbury.
Gadamer, Hans-Georg. 1977/2012. Die Aktualität des Schönen: Kunst als Spiel, Symbol und Fest. Stuttgart: Reclam Universal-Bibliothek.
Galilei, Galileo. 1632/1967. Dialogue Concerning the Two Chief World Systems—Ptolemaic & Copernican. Translated by Stillman Drake. Foreword by Albert Einstein. Berkeley & Los Angeles: University of California Press.
Geschwind, Lars, and Terrell, Miriam. "Vilka var humanisterna? Miljöer och verksamhet 1900, 1950 och 2000." In Humanisterna och framtidssamhället: Forskningsrapport, edited by Bugoslaw, Julia, 77-108. Stockholm: Institutet för framtidsstudier, 2011.
Gilje, Nils, and Grimen, Harald. 1992/2003. Samhällsvetenskapernas förutsättningar. Translated by Sten Andersson. Göteborg: Daidalos.
Gorham, Geoffrey. 2009. Philosophy of Science: A Beginner's Guide. Oxford: One World.
Guillet de Monthoux, Pierre. 2004. The Art Firm: Aesthetic Management and Metaphysical Marketing. Palo Alto: Stanford University Press.
Gulyga, Arsenij. 1977/1988. Immanuel Kant. Translated by Håkan Edgren. Göteborg: Daidalos.
Gustafsson, Claes. 1988. Om företag, moral och handling. Lund: Studentlitteratur.
Göranzon, Bo. 1992. The Practical Intellect: Computers and Skills. Springer.
Habermas, Jürgen. 2005/2008. Between Naturalism and Religion: Philosophical Essays. Translated by Ciaran Cronin. Cambridge: Polity.
Hacking, Ian. 1999. The Social Construction of What? Cambridge, MA: Harvard University Press.
Hansson, Bengt. 2011. Skapa vetande: Vetenskapsteori från grunden. Lund: Studentlitteratur.
Harari, Yuval Noah. 2015. Sapiens: A Brief History of Humankind. London: Penguin.
Harding, Sandra. 1998. Is Science Multi-Cultural? Postcolonialisms, Feminisms, and Epistemologies. Bloomington, IN: Indiana University Press.
Hartman, Jan. 2004. Vetenskapligt tänkande: Från kunskapsteori till metodteori. Lund: Studentlitteratur.
Heidegren, Carl-Göran. 2016. Positivismstrider. Göteborg: Daidalos.
Hempel, Carl. "The Function of General Laws in History." The Journal of Philosophy, no. 9 (1942), 35-48.
Horkheimer, Max, and Adorno, Theodor W. 1944/1997. Dialectic of Enlightenment. Translated by John Cumming. London/New York: Verso.
Hultqvist, Lennart. 2018. Knockad av stjärnhimlen: Från Tycho Brahes blick till Voyagers resa. Stockholm: Santérus.
von Humboldt, Wilhelm. "On the Internal and External Organization of the Higher Scientific Institutions in Berlin." 1810. German History in Documents and Images. Volume 2. From Absolutism to Napoleon, 1648-1815, 1-6. Translated by Thomas Dunlap.
Husserl, Edmund. 1931/1999. Cartesian Meditations: An Introduction to Phenomenology. Translation: Ronald Bruzina. Dordrecht: Springer.
Ihde, Don. 1986. Experimental Phenomenology: An Introduction. Albany: State University of New York Press.
Ihde, Don. 1991. Instrumental Realism: The Interface between Philosophy of Science and Philosophy of Technology. Bloomington, IN: Indiana University Press.
Ihde, Don. 1998. Expanding Hermeneutics: Visualism in Science. Evanston, IL: Northwestern University Press.
Irving, Washington. 1828. A history of the life and voyages of Christopher Columbus. London.
Isaacson, Walter. 2011/2015. Steve Jobs: The Exclusive Biography. Abacus.
Isaacson, Walter. 2014. The innovators: How a group of hackers, geniuses and geeks created the digital revolution. London/New York: Simon & Schuster.
Jameson, Fredric. 2010. Valences of the Dialectic. London/Brooklyn: Verso.
Jaspers, Karl. 1946/1980. Die Idee der Universität, reprint. Berlin: Springer.
Jaspers, Karl. 1945/1959. The Idea of the University. Translated by H.A.T. Vanderschmidt. Edited by Karl W. Deutsch. Preface by Robert Ulrich. Boston: Beacon Press.
Johansson, Ingvar. "Positivismen." In Positivism och marxism, edited by Johansson, Ingvar, and Liedman, Sven-Eric. Stockholm: Norstedts, 1981.
Josefsson, Ingela. 1988. Läkarens yrkeskunnande. Lund: Studentlitteratur.
Jüngel, Eberhard. 1977/1983. God as the mystery of the world: On the foundation of the theology of the crucified one in the dispute between theism and atheism. Translated by Darrell L. Guder. Grand Rapids: Eerdmans.
Kant, Immanuel. 1787/1996. Critique of Pure Reason. Unified Edition (with all variants from the 1781 and 1787 editions). Translated by Werner S. Pluhar. Hackett Publishing.
Karlsohn, Thomas (ed). 2016. Universitetets idé: Sexton nyckeltexter. Göteborg: Daidalos.
Kemp, Peter. 2012. Filosofiens Verden: Kritik – etik – paedagogik – religion. Copenhagen: Tiderne Skifter.
Kerr, Clark. 2001. The Uses of the University. Cambridge, MA: Harvard University Press.
Kjørup, Søren. 2008/2009. Människovetenskaperna: Problem och traditioner i humanioras vetenskapsteori. Translated by Sven-Erik Torhell. Lund: Studentlitteratur.
Kristensson Uggla, Bengt. 1994. Kommunikation på bristningsgränsen: En studie i Paul Ricoeurs projekt. Stockholm/Stehag: Brutus Östlings bokförlag Symposion.
Kristensson Uggla, Bengt. 2002/2012. Slaget om verkligheten: Filosofi, omvärldsanalys, tolkning. Stockholm/Stehag: Brutus Östlings Bokförlag Symposion.
Kristensson Uggla, Bengt. 2012. Gränspassager: Bildning i tolkningens tid. Stockholm: Santérus.
Kristensson Uggla, Bengt. "Personfilosofi – filosofiska utgångspunkter för personcentrering inom hälso- och sjukvård." In Personcentrering inom hälso- och sjukvård: Från filosofi till praktik, edited by Ekman, Inger, 58-105. Stockholm: Liber, 2020.
Kristensson Uggla, Bengt. "Coping with Academic Schizophrenia: The Privileged Place of the Person when Confronting the Anthropological Deficit of Contemporary Social Imagination: Christian Smith and Paul Ricoeur." Eco-ethica, 199-218. Volume 5. Berlin: LIT Verlag, 2016.
Kristensson Uggla, Bengt. "Med Humboldt i universitetsdebatten." Signum 3 (2017), 32-37.
Kristensson Uggla, Bengt. "Bildningsuniversitetets framtid – när samhället talar tillbaka / The Future of the University of Bildung – when society speaks back." Konsten med universitet, 37-42. Jan-Kenneth Weckman et al. Åbo: Åbo Akademi/AmosLAB, 2018.
Kristensson Uggla, Bengt. "What makes us human? Exploring the Significance of Ricoeur's Ethical Configuration of Personhood between Naturalism and Phenomenology in Health Care." Nursing Philosophy (2022). DOI:10.1111/nup.12385.
Kuhn, Thomas. 1962/2012. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
La Mettrie, Julien Offray de. 1748/2022. L'homme machine. Legare Street Press.
Lagercrantz, Olof. 1985. Om konsten att läsa och skriva. Stockholm: Wahlström & Widstrand.
Latour, Bruno. 1987. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press.
Latour, Bruno, and Woolgar, Steve. 1979. Laboratory Life: The construction of scientific fact. Princeton: Princeton University Press.
Laudan, Rachel (ed). 1984. The Nature of Technological Knowledge: Are Models of Scientific Change Relevant? Dordrecht: D. Reidel Publishing Co.
Lewens, Tim. 2015. The Meaning of Science. Penguin, Pelican Books.
Liedman, Sven-Eric. 1998. Mellan det triviala och det outsägliga: Blad ur humanioras och samhällsvetenskapernas historia. Göteborg: Daidalos.
Liedman, Sven-Eric. 2001. Ett oändligt äventyr: Om människans kunskaper. Stockholm: Bonniers.
Liedman, Sven-Eric. 2011. Hets! En bok om skolan. Stockholm: Bonniers.
Lind, Rolf. 2019. Vidga vetandet: Teori, metod och argumentation i samhällsvetenskapliga undersökningar. Lund: Studentlitteratur.
Løgstrup, K.E. 1942. Den erkendelseteoretiske konflikt mellem den transcendentalfilosofiske idealisme og teologien. Copenhagen: Samlerens forlag.
Løgstrup, K.E. 1950/2020. Kierkegaard's and Heidegger's Analysis of Existence and its Relation to Proclamation. Selected Works of K.E. Løgstrup. Translated by Robert Stern. Oxford: Oxford University Press.
Malthus, Thomas Robert. 1803/1986. The Works of Thomas Robert Malthus. Volume Two. An Essay on the Principle of Population. Sixth edition (1826) with variant readings from the second edition (1803). Part I. Edited by E.A. Wrigley and David Souden. London: William Pickering.
Marçal, Katrine. 2012/2016. Who cooked Adam Smith's dinner? A Story about Women and Economics. Translated by Saskia Vogel. London: Portobello Books.
Marx, Karl, and Engels, Friedrich. 1848/2014. The Communist Manifesto. Translated by Samuel Moore. Penguin Classics.
McNeely, Ian F., and Wolverton, Lisa. 2008. Reinventing Knowledge: From Alexandria to the Internet. New York/London: Norton.
Merleau-Ponty, Maurice. 1945/2002. The Phenomenology of Perception. Translated by Colin Smith. London: Routledge.
Molander, Joakim. 2003. Vetenskapsteoretiska grunder: Historia och begrepp. Lund: Studentlitteratur.
Monk, Ray. 1990. Ludwig Wittgenstein: The Duty of Genius. London: Vintage.
Monmonier, Mark. 1991. How to lie with maps. Chicago: The University of Chicago Press.
Newton, Isaac. 1687/2020. The Mathematical Principles of Natural Philosophy. Flame Tree Publishing.
Nietzsche, Friedrich. 1873/2012. On Truth and Lies in a Nonmoral Sense. Theophania Publishing.
Nietzsche, Friedrich. 1878/2000. Human, All Too Human: A book for free spirits. Translation: R.J. Hollingdale. Cambridge: Cambridge University Press.
Nowotny, Helga. 2016. The Cunning of Uncertainty. Cambridge: Polity Press.
Nowotny, Helga, Scott, Peter, and Gibbons, Michael. 2001. Re-Thinking Science: Knowledge and the Public in an Age of Uncertainty. Cambridge: Polity Press.
Okasha, Samir. 2002. Philosophy of Science: A Very Short Introduction. Oxford: Oxford University Press.
Östling, Johan. 2016. Humboldts universitet: Bildning och vetenskap i det moderna Tyskland. Stockholm: Atlantis.
Parker, Geoffrey. 2013. Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century. New Haven/London: Yale University Press.
Pascal, Blaise. 1966. Pensées. Translated by A.J. Krailsheimer. Baltimore: Penguin.
Pedersen, Ove Kaj. 2011. Konkurrencestaten. Copenhagen: Hans Reitzel.
Pico della Mirandola, Giovanni. 1486/1956. Oration on the Dignity of Man. Translation: A. Robert Caponigri. Introduction by Russell Kirk. Chicago: Henry Regnery Company.
Polanyi, Michael. 1958/1962. Personal Knowledge: Towards a Post-Critical Philosophy. Chicago: The University of Chicago Press.
Polanyi, Michael. 1966/2009. The tacit dimension. With a new foreword by Amartya Sen. Chicago: The University of Chicago Press.
Popper, Karl. 1935/2002. The Logic of Scientific Discovery. London/New York: Routledge.
Popper, Karl. 1968/2002. The Open Society and Its Enemies, Vol. 2. London/New York: Routledge.
Porter, Roy. 1990. The Enlightenment. Red Globe Press.
Prigogine, Ilya, and Stengers, Isabelle. 1982. La nouvelle alliance: Métamorphose de la science. Paris: Gallimard.
Principe, Lawrence M. 2011. The Scientific Revolution: A Very Short Introduction. Oxford: Oxford University Press.
Quine, W.V.O. 1992. Pursuit of Truth. Cambridge, MA: Harvard University Press.
Ratkić, Adrian. 2016. Dialogseminariets forskningsmiljö. Stockholm: Dialoger.
Rawls, John. 1971/1999. A Theory of Justice. Cambridge, MA: Harvard University Press.
Rehnqvist, Nina. "Förord." In Evidensens många ansikten: Evidensbaserad praktik i praktiken, edited by Ingmar Bohlin and Morten Sager, 9-11. Lund: Arkiv, 2011.
Ricoeur, Paul. 1967. Husserl: An Analysis of His Phenomenology. Translated by Edward G. Ballard and Lester E. Embree. Evanston: Northwestern University Press.
Ricoeur, Paul. 1969/1974. The Conflict of Interpretations: Essays in Hermeneutics. Translated by Willis Domingo et al. Evanston: Northwestern University Press.
Ricoeur, Paul. 1973/1981. Hermeneutics and the Human Sciences: Essays on language, action, and interpretation. Edited, translated and introduced by John B. Thompson. Cambridge: Cambridge University Press.
Ricoeur, Paul. 1986/2007. From Text to Action: Essays in Hermeneutics. Translated by Kathleen Blamey and John B. Thompson. Evanston: Northwestern University Press.
Ricoeur, Paul. 1983-85/1984-88. Time and Narrative, I–III. Translated by Kathleen McLaughlin and David Pellauer. Chicago: The University of Chicago Press.
Ricoeur, Paul. 1990/1992. Oneself as Another. Translated by Kathleen Blamey. Chicago: The University of Chicago Press.
Ricoeur, Paul. 2000/2004. Memory, History, Forgetting. Translated by Kathleen Blamey & David Pellauer. Chicago: The University of Chicago Press.
Ricoeur, Paul. 2011. Homo Capax: Texter om filosofisk antropologi och etik av Paul Ricoeur sammanställda av Bengt Kristensson Uggla. Translated by Eva Backelin. Göteborg: Daidalos.
Sahlin, Kerstin, and Eriksson-Zetterquist, Ulla. 2016. Kollegialitet: En modern styrform. Lund: Studentlitteratur.
Schrag, Calvin O. "The Fabric of Fact." In Philosophical Papers: Betwixt and Between, by Schrag, C.O., 175-189. Albany: State University of New York Press, 1994.
Sen, Amartya. "Foreword." In The Tacit Dimension by Polanyi, Michael, vii-xvi. Chicago: The University of Chicago Press, 1966/2009.
Schütz, Alfred. 1932/1967. The Phenomenology of the Social World. Translated by Walsh, George and Lehnert, Frederich. Evanston: Northwestern University Press.
Shapin, Steven. 1996. The Scientific Revolution. Chicago: The University of Chicago Press.
Skjervheim, Hans. 1971. Deltagare och åskådare: Sex bidrag till debatten om människans frihet i det moderna samhället. Translated by Ian Hamilton and Lillemor Lagerlöf. Stockholm: Prisma.
Sköldberg, Kaj, and Salzer-Mörling, Miriam. 2002. Över tidens gränser: Visioner och fragment i det akademiska livet. Stockholm: Carlssons.
Smedbäck, Michael. "Universum är också mycket mindre än vi trott." Under strecket, Svenska Dagbladet, June 6, 2019.
Smith, Adam. 1776/2010. The Wealth of Nations. Capstone Publishing.
Smith, Christian. 2010. What Is a Person? Rethinking Humanity, Social Life, and the Moral Good from the Person Up. Chicago: The University of Chicago Press.
Smith, Christian. 2015. To flourish or destruct: A personalist theory of human goods, motivations, failure, and evil. Chicago: The University of Chicago Press.
Snow, C.P. 1959. "The Two Cultures." The Rede Lecture. Cambridge: Cambridge University Press.
Strømholm, Per. 1984. Den videnskaplige revolusjonen 1500-1700. Oslo: Solum.
Svahn, Johan. 2009. Kunskap i resonans: Om yrkeskunnande, teknologi och säkerhetskultur. Stockholm: Kungliga Tekniska Högskolan.
Svallfors, Stefan. 2012/2020. The Inner World of Research: On Academic Research. Translated by Neil Betteridge. Anthem Press.
Svendsen, Lars F.H. 2003/2005. Vad är filosofi? Translated by Joachim Retzlaff. Stockholm: Natur och Kultur.
Taylor, Mark C. 2018. Last Works: Lessons in Leaving. New Haven, CT: Yale University Press.
Toulmin, Stephen. 1990. Cosmopolis: The Hidden Agenda of Modernity. Chicago: The University of Chicago Press.
Tranströmer, Tomas. 1970/1972. Night Vision. Selected and translated from the Swedish by Robert Bly. London: London Magazine Editions.
Ulfstrand, Staffan. 2008. Darwins idé: Den bästa idé någon någonsin haft och hur den fungerar idag. Stockholm/Stehag: Brutus Östlings Bokförlag Symposion.
Vasari, Giorgio. 1550/1987. Leonardo da Vinci. Translated by George Bull. Penguin Books.
Villstrand, Nils Erik. 2009. Riksdelen: Stormakt och rikssprängning 1560-1812. Helsingfors: Svenska Litteratursällskapet.
Wallerstein, Immanuel, et al. 1996. Open the Social Sciences: Report of the Gulbenkian Commission on the Restructuring of the Social Sciences. Stanford: Stanford University Press.
Wittgenstein, Ludwig. 1921/1998. Tractatus Logico-Philosophicus. Dover Publications.
Wittgenstein, Ludwig. 1953/2009. Philosophical Investigations. 4th Edition. Oxford: Wiley-Blackwell.
Wootton, David. 2015. The Invention of Science: A New History of the Scientific Revolution. Allen Lane.
von Wright, Georg Henrik. 1971. Explanation and Understanding. London: Routledge & Kegan Paul.
von Wright, Georg Henrik. 1965/1993. Logik, filosofi och språk: Strömningar och gestalter i modern filosofi. Nora: Nya Doxa.
von Wright, Georg Henrik. 1986. Vetenskapen och förnuftet: Ett försök till orientering. Stockholm: Bonniers.
Wulf, Andrea. 2015. The Invention of Nature: The Adventures of Alexander von Humboldt, The Lost Hero of Science. London: John Murray Publishers.
INDEX OF INDIVIDUALS
d’Acona, Matthew, 337, 339 Ad Reinhardt, 108 Adler, Alfred, 185 Adorno, Theodor W., 133-134, 330, 342 Alcott, Louisa May, 168 Alexander the Great, 260, 336 al-Farabi, 113 Alvesson, Mats, 70, 310, 328, 337338, 339 Andersson, Sten, 71, 328, 332, 334, 336, 339, 341 Anscombe, Elizabeth, 248, 256, 336, 339 Apel, Karl-Otto, 220 Aquinas, Thomas, 114 Archimboldo, Guiseppe, 107 Archimedes, 140 Arendt, Hannah, 258, 336, 339 Aristarchus of Samos, 81 Aristotle, 47, 57, 79-80, 91, 99, 112113, 121, 125-126, 149, 191, 208, 217, 242, 248, 260, 306307, 336 Armstrong, Neil, 269 Asplund, Johan, 339 Augustine, 113, 144 Avicenna, 113 Averroës, 113 Bachelard, Gaston, 180, 193 Backelin, Eva, 334, 347 Bacon, Francis, 79, 94, 98-100, 125126, 130-133, 149, 150-153, 330-331, 339 Bakunin, Mikhail, 159-160 Ballard, Edward G., 333, 346 Benedict of Nursia, 260 Bentham, Jeremy, 162, 309 Bennet, Jo, 333,
Bergendahl Norell, Margareta, 320
Berger, Peter, 197, 205, 339
Bereson, Ruth, 323
Bernstein, Richard, 144, 331, 339
Betteridge, Neil, 327, 348
Blamey (McLaughlin), Kathleen, 326, 328, 333, 336, 346-347
Bly, Robert, 336, 348
Bohlin, Ingmar, 298, 337, 339, 346
Bonaparte, Napoleon, 159
Bornemark, Jonna, 295, 339
Bosch, Hieronymus, 107
Bowler, Peter J., 212, 330, 334, 339
Bradley, Frederick, 328, 340
Brahe, Tycho, 57, 91-96, 119, 126, 328-329, 342
Braidotti, Rosi, 334, 339
Bridges, J.H., 331, 340
Brorson, Hans Adolph, 168
Bruzina, Ronald, 342
Brunelleschi, 124
Bruno, Giordano, 81
Brytting, Tomas, 338-339
Bugoslaw, Julia, 336, 341
Burckhardt, Jakob, 109
Burman, Anders, 335, 339
Caponigri, A. Robert, 331, 346
Caravaggio, 107
Carlsson, Arvid, 45
Carlsson Redell, Petra, 320
Castells, Manuel, 274, 276, 337, 340
Catasús, Bino, 320
Chalmers, Alan F., 174, 188, 332, 340
Christina, Queen, 137-138
Chydenius, Anders, 54, 327, 340
Clinton, Bill, 282
Colby, Anne, 336, 340
Columbus, Christopher, 23-27, 30-32, 36-39, 43-45, 54, 85-86, 96-100, 102, 104, 326, 342
Colbert, Jean-Baptiste, 132
Comte, Auguste, 40-43, 79, 127, 162-165, 174-175, 311, 331, 340
Copernicus, Nicolaus, 57-59, 80-84, 92, 95, 119, 122, 127, 130, 146, 157, 188-194, 328, 340-341
Cornaro, Elena, 76
Cronin, Ciaran, 325, 342
Cumming, John, 330, 342
Curie, Marie, 45, 77
Da Gama, Vasco, 24
Da Vinci, Leonardo, 115, 118, 133, 348
Dalferth, Ingolf, 207, 333, 340
Danto, Arthur, 340
Darwin, Charles, 32-37, 44, 57, 70, 104, 147, 326, 340, 348
Daston, Lorraine, 289-290, 305-306, 328, 337-338, 340
Democritus, 113
Descartes, René, 47, 79, 122, 125-128, 137-141, 143-145, 147-148, 150, 152, 154-156, 169, 212, 331, 340
Deutsch, Karl W., 337, 343
Dias, Bartholomeus, 24
Dilthey, Wilhelm, 238-242, 335, 340
Domingo, Willis, 346
Dolle, Jonathan R., 336, 340
Donovan, Stephen, 323
Drake, Stillman, 341
Dunlap, Thomas, 338, 342
Durkheim, Emile, 162, 174, 213
Ecphantus, 81
Edenius, Mats, 320
Edlund, Caroline, 320
Einstein, Albert, 45, 79, 129, 185, 341
Ehrlich, Thomas, 336, 340
Ekelund, Fredrik, 338, 340
Ekman, Inger, 320, 323, 334, 340, 343
Ekström, Anders, 253, 336
Elster, Jon, 341
El Greco, 107
Embree, Lester E., 333, 346
Emerson, Ralph Waldo, 168
Engberg, Martin, 338, 340
Engels, Friedrich, 173, 332, 345
Erasmus of Rotterdam, 125
Eriksson-Zetterquist, Ulla, 300-301, 328, 337-338, 340, 347
Ferdinand, King, 24
Fichte, Johann Gottlieb, 169
Fleck, Ludwik, 327, 332-333, 340
Føllesdal, Dagfinn, 335, 341
Forsgård, Nils-Erik, 327, 340
Forssell, Anna, 320
Foucault, Michel, 108, 192-197, 212, 329, 333, 341
Fra Angelico, 107
Francis I, Pope, 8
Franck, Eskil, 320
Franck, Henrika, 323
Franklin, Rosalind, 77
Frankenstein, Victor, 52
Frederick II, 92-94
Freely, John, 329, 340
Feigl, Herbert, 219
Freud, Sigmund, 79, 147, 185
Fries, Ellen, 76
Fritzon, Arne, 320
Fröding, Gustaf, 14, 325, 341
Fuller, Margaret, 168
Gadamer, Hans-Georg, 72, 240-242, 315-316, 325, 328, 335, 338, 341
Galilei, Galileo, 47, 57, 59-60, 81, 118, 121-122, 125-126, 133, 149, 151-152, 208, 217-218, 240, 341
Galison, Peter, 289-290, 305-306, 328, 337-338, 340
Geschwind, Lars, 252, 336, 341
Gibbons, Michael, 345
Gilje, Nils, 187, 332, 341
Giotto di Bondone, 107
Goeppert-Mayer, Maria, 77
Goethe, Johann Wolfgang, 168-169
Göranzon, Bo, 294, 341
Gorham, Geoffrey, 327, 329, 341
Grimen, Harald, 187, 332, 341
Guder, Darrell L., 331, 343
Guillet de Monthoux, Pierre, 320, 323, 329, 341
Gulyga, Arsenij, 325, 341
Gustafsson, Claes, 338, 341
Gustafsson, Martin, 320
Gustafsson Lundberg, Johanna, 320
Gutenberg, Johann, 112, 116, 120-121, 274
Habermas, Jürgen, 147, 227, 309, 325, 335, 342
Hacking, Ian, 197-198, 342
Haeckel, Ernst, 162
Edgren, Håkan, 325, 341
Håkansson, Ola, 319-320
Hamilton, Ian, 326, 347
Hansson, Bengt, 42-43, 326, 332, 342
Harari, Yuval Noah, 11, 104, 269, 325, 329, 334, 337, 342
Harding, Sandra, 326, 330, 342
Harkness, James, 329, 341
Hartman, Jan, 335, 342
Hawthorne, Nathaniel, 168
Hegel, Friedrich Wilhelm, 13, 60, 143, 159-160, 162-163
Heidegger, Martin, 203-205, 221, 242, 331, 344
Heidegren, Carl-Göran, 14, 325, 327, 342
Hempel, Carl, 247, 335, 342
Hobsbawm, Eric, 161
Hollingdale, R.J., 335, 345
Horkheimer, Max, 133-134, 330, 342
Humboldt, Alexander von, 35-38, 43-44, 167-170, 258, 326, 336, 348
Humboldt, Wilhelm von, 54-55, 167-170, 261, 311, 327, 338, 342, 344
Hume, David, 150, 180, 248
Hultqvist, Lennart, 118, 329-330, 342
Hurley, Robert, 333, 341
Hus, Jan, 120
Husserl, Edmund, 201-203, 205-207, 219, 333, 342
Ihde, Don, 116, 118, 192, 202, 207, 329-330, 333, 342
Irving, Washington, 26, 342
Isaacson, Walter, 254, 277, 336-337
Isabella, Queen, 24
Jameson, Fredric, 331, 343
Jardine, Lisa, 339
Jaspers, Karl, 16, 302, 304, 316, 323, 337-338, 343
Jastrow, Joseph, 108, 197
Jobs, Steve, 253-255, 277, 336, 342
Johansson, Ingvar, 180, 332, 343
Josefsson, Ingela, 295, 343
Jüngel, Eberhard, 331, 343
Kahneman, Daniel, 128
Kant, Immanuel, 13, 16, 127, 147, 155-160, 164, 168, 206, 306-307, 309, 325, 331, 341, 343
Karlsohn, Thomas, 303-304, 320, 338, 343
Karlsson, David, 320
Karlsson, Jonny, 320
Kemp, Peter, 228-229, 335, 343
Kennedy, John F., 270
Kepler, Johannes, 59, 81-83, 95-96, 122, 126, 328
Kerr, Clark, 325, 343
Kierkegaard, Søren, 62, 147, 159-160, 331, 344
Kirk, Russell, 331, 346
Kjørup, Søren, 250, 331, 333, 336, 343
Koj, Sabina, 320
Koyré, Alexandre, 57
Krailsheimer, A.J., 326, 345
Kuhn, Thomas, 64, 83, 118, 182-183, 188-197, 332-333, 344
Lagercrantz, Olof, 232-335, 344
La Mettrie, Julien Offray de, 127, 344
Lagerlöf, Lillemor, 326, 347
Lakatos, Imre, 115
Larsson, Bo, 320
Latour, Bruno, 118, 328, 330, 344
Laudan, Rachel, 330, 344
Lehnert, Frederich, 347
Leibniz, Gottfried Wilhelm, 124, 169
Lewens, Tim, 326, 344
Liedman, Sven-Eric, 70, 323, 325-328, 331-332, 334, 338, 343, 344
Lind, Rolf, 344
Lindfelt, Mikael, 320, 323
Lindström, Johanna, 320
Løgstrup, K.E., 158, 331, 344
Locke, John, 127, 150
Loos, Adolf, 79
Louis XIV, King, 132
Luckmann, Thomas, 197, 205, 339
Lundberg, Alexander, 320
Luther, Martin, 103, 120-121
Macchiarini, Paolo, 8, 303
Mach, Ernst, 174, 177, 179
Magritte, René, 108, 329, 341
Malthus, Thomas Robert, 34, 326, 345
Mannheim, Karl, 197, 205
Marçal (Kielos), Katrine, 75-76, 328, 345
Marshall, Donald G., 325, 328, 341
Marx, Karl, 159-160, 173, 185, 332, 345
McGuirk, James, 320
McLaughlin (Blamey), Kathleen, 326, 328, 333, 336, 346-347
McNeely, Ian F., 259-262, 336-337, 345
Mehmet II, Sultan, 101-102
Mendeleev, Dmitri, 12
Mercator, Gerardus, 105
Merleau-Ponty, Maurice, 205-206, 333, 345
Michelangelo, 107
Michelet, Jules, 109
Mill, John Stuart, 162, 164-165, 174, 219
Molander, Joakim, 336, 345
Monk, Ray, 332, 345
Monmonier, Mark, 106, 329, 345
Montaigne, Michel de, 125, 141
Moore, Samuel, 332, 345
Moore, G.E., 174, 178
Morus, Iwan Rhys, 22, 330, 334
Moses, 130
Myrstad, Johan Arnt, 320
Neurath, Otto, 174, 177, 179
Newton, Isaac, 38, 45, 56-57, 59, 79, 81, 94, 96, 122-123, 126-130, 149, 163, 167, 169, 209, 342, 345
Nicolaus V, Pope, 60
Nietzsche, Friedrich, 61, 147, 234, 331, 335, 345
Nightingale, Florence, 185
Noddack, Ida, 77
Nowotny, Helga, 297, 325, 327, 334, 337, 345
Nussbaum, Martha, 294
Occam, William of, 114, 155
Ohlsson, Gunilla, 323
Okasha, Samir, 51, 327, 333, 335, 345
Olsson, Gunnar, 327
Oppenheimer, Robert, 272
Osiander, Andreas, 83
Östling, Johan, 327, 345
Packard, David, 255
Palme, Olof, 280
Parker, Geoffrey, 142, 331, 345
Pascal, Blaise, 38, 326, 345
Paulsen, Roland, 337, 339
Pedersen, Ove Kaj, 282-284, 286, 337, 345
Pellauer, David, 326, 333, 346-347
Philolaus, 81
Picasso, Pablo, 107
Salzer-Mörling, Miriam, 325, 347 Schelling, Friedrich von, 159, 168169 Schiller, Friedrich, 169 Schlegel, August, 168 Schlegel, Friedrich, 168 Schleiermacher, Friedrich, 169, 239240, 335 Schlick, Moritz, 174, 177, 179 Schmiedel, Urlich, 320 Schrag, Calvin O., 69, 327, 347 Schreeb, Johan von, 320 Schütz, Alfred, 197, 205, 347 Schwartz, Nanna, 76 Scott, Peter, 345 Scotus, Duns, 114, 155 Semmelweis, Ignaz Philipp, 183184 Sen, Amartya, 128, 325, 330, 346347 Sennet, Richard, 294 Shapin, Steven, 57-59, 78, 145, 216, 325, 327-328, 331, 334, 347 Shulman, Lee S., 336, 340 Silverthorne, Michael, 339 Simplicio, 60, 218 Sixtus IV, Pope, 60 Skjervheim, Hans, 181, 213, 215, 326, 332, 334, 347 Sköldberg, Kaj, 310, 325, 338, 339, 347 Smedbäck, Mikael, 325, 347 Smith, Adam, 54, 74-76, 128, 164, 274, 328, 345, 347 Smith, Christian, 216, 227, 333-335, 344, 347 Smith, Collin, 333, 345 Snow, C. P., 207-208, 333, 347 Söderhielm, Alma, 77 Soderini, Piero, 97 Sörlin, Sverker, 253, 336, 340 Souden, David, 326, 345 Spencer, Herbert, 34, 162, 174 Stålfors, Annika, 320 Stanford, Leland, 275 Stenger, Isabelle, 209, 334, 346
354
Index of Individuals
Stern, Robert, 331, 344
Storgård, Johan, 320
Styhre, Alexander, 340
Strangelove, Dr., 52
Strannegård, Lars, 323
Strømholm, Per, 83, 151, 328-331, 347
Sullivan, William M., 336, 340
Sundqvist, Bernice, 320
Svahn, Johan, 295, 337, 347
Svallfors, Stefan, 68-69, 327, 348
Svedberg, Karl, 323
Svendsen, Lars F.H., 63-64, 327, 334, 348
Svensson, Peter, 320
Taylor, Charles, 220
Taylor, Mark C., 320, 331, 348
Terrell, Miriam, 252, 336, 341
Thales of Miletus, 113
Thompson, John B., 328, 335-336, 346
Thoreau, Henry David, 168
Torhell, Sven-Erik, 331, 343
Toulmin, Stephen, 118, 141-142, 330-331, 348
Tranströmer, Tomas, 257, 336, 348
Trump, Donald, 8, 337
Ulfstrand, Staffan, 326, 348
Ulrich, Robert, 337, 343
Urban VIII, Pope, 60
Vasari, Giorgio, 109, 115, 329, 348
Vespucci, Amerigo, 24, 37, 97
Villstrand, Nils Erik, 327, 348
Vogel, Saskia, 328, 345
Waldseemüller, Martin, 24
Wallerstein, Immanuel, 78, 166, 209, 244, 328, 331, 333-335, 348
Walløe, Lars, 341
Walsh, George, 347
Warhol, Andy, 108
Weber, Max, 127-128, 213
Weckman, Jan Kenneth, 320, 323, 344
Weinsheimer, Joel, 325, 328, 341
Weiss, Holger, 320
Westermarck, Edvard, 251
Whewell, William, 57
Wittgenstein, Ludwig, 79, 108, 174, 178, 197, 219, 294-295, 313, 329, 332, 337, 345, 348
Wolf-Knuts, Ulrika, 320
Wolverton, Lisa, 259-262, 336-337, 345
Woolgar, Steve, 330, 344
Wootton, David, 96-97, 124, 326, 329-330, 336, 348
Wozniak, Steve, 277
Wright, Georg Henrik von, 134-135, 249, 330, 332, 336, 348
Wrigley, E.A., 326, 345
Wulf, Andrea, 38, 170, 326-327, 331-332, 336, 348
Wundt, Wilhelm, 162
Gabriel, Yiannis, 337, 339
Zeno, 113
Vokel, Saskia, 328, 345 Waldseemüller, Martin, 24 Wallerstein, Immauel, 78, 166, 209, 244, 328, 331, 333-335, 348 Walløe, Lars, 341 Walsh, George, 347 Warhol, Andy, 108 Weber, Max, 127-128, 213 Weckman Jan Kenneth, 320, 323, 344 Weinsheimer, Joel, 325, 328, 341 Weiss, Holger, 320 Westermarck, Edvard, 251 Whewell, William, 57 Wittgenstein, Ludwig, 79, 108, 174, 178, 197, 219, 294-295, 313, 329, 332, 337, 345, 348 Wolf-Knuts, Ulrika, 320 Wolverton, Lisa, 259-262, 336-337, 345 Woolgar, Steve, 330, 344 Wootton, David, 96-97, 124, 326, 329-330, 336, 348 Wozniak, Steve, 277 Wright, Georg Henrik von, 134-135, 249, 330, 332, 336, 348 Wrigley, E.A., 326, 345 Wulf, Andrea, 38, 170, 326-327, 331-332, 336, 348 Wundt, Wilhelm, 162 Yiannis, Gabriel, 337, 339 Zeno, 113