A History of Western Science: The Basics [1 ed.] 1032346507, 9781032346502


English Pages 256 [271] Year 2023

Table of contents :
Cover
Endorsements
Half Title
Series Page
Title Page
Copyright Page
Contents
Preface
Introduction
Part I: The Scientific Revolution
1. Antiquity and the Middle Ages
Greek philosophers and nature
Greek mathematics
Medicine in antiquity
The classical tradition in mediaeval Europe
2. The sixteenth century: the Aristotelian worldview in decline
New intellectual currents: humanism and Hermeticism
Natural history and medicine
Mathematics and 'natural magic'
Astronomy
Philosophy of nature
3. The seventeenth century: a new worldview
Galileo and a new view of the heavens
Descartes and mechanistic science
The emergence of an experimental tradition
Mathematization of science
The mathematical science of Isaac Newton
A revolution in the prevailing worldview?
Part II: Autonomous science: methods, theories and researchers 1700–2000
4. The eighteenth century: disseminating the idea of science
Knowledge and practice: instruments
Collecting and classifying: natural history
From alchemy to chemistry
Newtonian mechanics and its problems
5. The nineteenth century (i): science at the service of the rationalization of society
A 'scientific' system of measurement
The modern hospital
Observatories, measuring stations and a global science
Science and Western imperialism
6. The nineteenth century (ii): professional science
Universities and professors
Women in science
Laboratories
Classification and conferences
The rise of the experiment: physiology
Measuring and experimenting in the study of nature
Further mathematization
Statistics
7. The twentieth century: industrial science
The rise of industrial science
The science of measurement
Research institutes
Control and modelling
Independence under pressure
Part III: The scientific worldview
8. The origin of the world
The Bible and the new image of the world
The development of geology
The origin of the universe
9. The nature of life and the origin of human beings
Early scientific ideas about humankind and its place in the world
The idea of evolution
Darwin's contribution
Descent
The mysteries of the mind
The mechanism of heredity
Heredity and evolution
A science of human beings?
10. The nature of reality
A rational world?
The building blocks of reality
Research into radiation
The theory of relativity
Quantum mechanics
In search of a theory of everything
11. The influence of science on the general worldview
Scientification?
Accommodation of scientific findings
Rejection of scientific findings
Concluding remarks
Further reading
Index

‘Anyone searching for a short book on the history of Western science need look no further than Rienk Vermij’s lively survey. Factually reliable and engagingly readable, it sketches a panoramic overview while avoiding getting bogged down in technical details. The book will appeal to students and general readers, who will find it effortlessly informative.’ Jan Golinski, University of New Hampshire, USA

‘A History of Western Science: The Basics is an accessible and informative account of the sciences in Europe, focusing on their ancient origins and the last 500 years. The book surveys not only how key ideas, instruments, and practices changed over time, but also how scientists approached research questions in the context of shifting political and social values. Written in a conversational tone, this book will appeal to students and interested readers.’ Dr Catherine Abou-Nemeh, Victoria University of Wellington, New Zealand

A HISTORY OF WESTERN SCIENCE THE BASICS

A History of Western Science: The Basics offers a short introduction to the history of Western science that is accessible to all, avoiding technical language and mathematical intricacies. This comprehensive guide also provides a coherent narrative of how science developed in interaction with society over time. The first part discusses the period up to 1700, with a focus on the conceptual shift and new ideas about nature that occurred in early modern Europe. Part two focuses on the practical and institutional aspects of the scientific enterprise and discusses how science established itself in Western society after 1700, while part three discusses how, during the same period, modern science has impacted our general view of the world, and reviews some of the major discoveries and debates. Key topics discussed in the book include:

• Natural philosophy, medicine, and mathematics in the ancient and medieval worlds
• The key figures in the history of science—Galileo, Descartes, Isaac Newton, Darwin and Einstein—as well as lesser-known men and women who have developed the field
• The development of scientific instruments, the transformation of alchemy into chemistry, weights and measures, the emergence of the modern hospital and its effects on medicine, and the systematic collection of data on meteorology, volcanism, and terrestrial magnetism
• The big questions – the origins of humans, the nature of reality and the impact of science.

As a jargon-free and comprehensive study, this book is an essential introductory guide for academics and researchers of the history of science, as well as general readers interested in learning more about the field.

Rienk Vermij is a Professor at the Department of the History of Science, Medicine, and Technology of the University of Oklahoma. His research topics include early ideas on earthquakes, the reception of Copernicanism, and the Enlightenment. He has published several books and many articles.

THE BASICS SERIES

The Basics is a highly successful series of accessible guidebooks which provide an overview of the fundamental principles of a subject area in a jargon-free and undaunting format. Intended for students approaching a subject for the first time, the books both introduce the essentials of a subject and provide an ideal springboard for further study. With over 50 titles spanning subjects from artificial intelligence (AI) to women’s studies, The Basics are an ideal starting point for students seeking to understand a subject area. Each text comes with recommendations for further study and gradually introduces the complexities and nuances within a subject.

HINDUISM
NEELIMA SHUKLA-BHATT

EVOLUTIONARY PSYCHOLOGY
WILL READER AND LANCE WORKMAN

RELIGION IN AMERICA (second edition)
MICHAEL PASQUIER

SUBCULTURES
ROSS HAENFLER

FINANCE (fourth edition)
ERIK BANKS

GLOBAL DEVELOPMENT
DANIEL HAMMETT

IMITATION
NAOMI VAN BERGEN, ALLARD R. FEDDES, LIESBETH MANN AND BERTJAN DOOSJE

FOOD ETHICS
RONALD L. SANDLER

SELF AND IDENTITY
MEGAN E. BIRNEY

PSYCHOPATHY
SANDIE TAYLOR AND LANCE WORKMAN

SUBCULTURES (second edition)
ROSS HAENFLER

TOTALITARIANISM
PHILLIP W. GRAY

TRANSLATION
JULIANE HOUSE

WORK PSYCHOLOGY
LAURA DEAN AND FRAN COUSANS

ECONOMICS (fourth edition)
TONY CLEAVER

ELT
MICHAEL MCCARTHY AND STEVE WALSH

For a full list of titles in this series, please visit www.routledge.com/The-Basics/book-series/B

A HISTORY OF WESTERN SCIENCE THE BASICS Rienk Vermij

Designed cover image: Liebig in his Laboratory © Chronicle / Alamy Stock Photo

First published 2024
by Routledge
4 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
605 Third Avenue, New York, NY 10158

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2024 Rienk Vermij
Translated by Liz Waters

The right of Rienk Vermij to be identified as author of this work has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-1-032-34648-9 (hbk)
ISBN: 978-1-032-34650-2 (pbk)
ISBN: 978-1-003-32318-1 (ebk)

DOI: 10.4324/9781003323181

Typeset in Bembo by MPS Limited, Dehradun

CONTENTS

Preface xi
Introduction 1

PART I
The Scientific Revolution 5

1 Antiquity and the Middle Ages 9
2 The sixteenth century: the Aristotelian worldview in decline 30
3 The seventeenth century: a new worldview 61

PART II
Autonomous science: methods, theories and researchers 1700–2000 97

4 The eighteenth century: disseminating the idea of science 103
5 The nineteenth century (i): science at the service of the rationalization of society 124
6 The nineteenth century (ii): professional science 137
7 The twentieth century: industrial science 162

PART III
The scientific worldview 183

8 The origin of the world 189
9 The nature of life and the origin of human beings 202
10 The nature of reality 218
11 The influence of science on the general worldview 232

Concluding remarks 240
Further reading 244
Index 248

PREFACE

Science is one of the elements that define our modern society and having some idea of what science represents is becoming increasingly important. It is also becoming increasingly difficult, however, because in the course of its history science has grown ever more complex and encompassing. Moreover, the place of science in society and the way it has been appreciated have seen significant shifts over time. For instance, science has been hailed as a saviour of humankind and indeed, modern life would not be possible without the accomplishments of modern science. But not everybody has always been happy with all aspects of modernity and for those drawbacks, science has been held responsible as well. By some, science has been regarded as a force that undermines traditional values or even as an instrument of dark powers seeking to manipulate our lives. Especially in recent years, in some circles, there has appeared a marked backlash against scientific experts and scientific knowledge. This book is not going to solve these issues. What it aims to do is to offer a short and clear overview of the way science has become what it is now, both taken in itself and as an element of modern society. In every period and society, people needed to come to grips with the world around them. Some elements of present-day scientific thinking go back millennia. Still, a coherent programme that aims to
understand the world in modern scientific terms emerged less than four centuries ago. Initially, this programme was just an affair of some local enthusiasts in a tiny part of the globe (though definitely not an isolated part, but one that had contacts and absorbed influences from many other regions). Very quickly, however, it gained the approval and support of wider sections of society. Moreover, it did not remain limited to its place of origin in Western Europe but spread across the world. This transformation from its humble beginnings into a global powerhouse is actually quite astounding. Scientific concepts came to dominate the ideas and practices of many people from many different backgrounds, often to the extent that the scientific view of the world appeared completely self-evident to them. This holds true even for people who were not scientists themselves. It had become not just an intellectual, but also a social force, vital for many aspects of society. Scientific theories certainly had their detractors, but even they had to formulate their objections in scientific language. That is not to say that at any time science was simply a given. As a growing number of topics became the subject of scientific inquiry and scientific explanation, scientists soon discovered that one catch-all approach did not work. Scientists constantly had to develop new techniques and methods and sometimes even to re-think some basic principles. This history of how science developed is vast and complex, and so too is how it determined our view of the world and obtained its place in society. This book can by no means exhaust the subject. My goal is to sketch a basic framework and to draw some major historical lines that hopefully will help a general reader to better understand this development and can serve as a starting point for further study. As a consequence, I have had to make rigorous choices. 
Many interesting aspects, important researchers and even complete disciplines will be passed over in complete silence. However, the book is decidedly not simply a summary of existing more comprehensive overviews. Examples were chosen to demonstrate the general line of argument; the inherent scientific importance of the various discoveries or ideas was only a secondary consideration. So, although I have had to skip many episodes, the book includes some topics that in most other overviews have been overlooked. The original Dutch version of this book appeared several years ago. Unavoidably, in some cases my ideas have shifted in the interval and
there are things that I would now formulate differently than I did at the time. On the whole, however, the overview still stands and serves its purpose. Any significant changes would threaten the coherence of the narrative. In one or two cases, where, given present-day demands, a subject needed more attention than I originally gave it, I have added a few paragraphs. For the rest, I made only minor adjustments, partly to correct some small inaccuracies or points on which ongoing research in the meantime has thrown new light, and partly to adjust for the difference in background knowledge between Dutch and Anglo-Saxon audiences. I do not need to say that the task of giving an overview of such a large and comprehensive subject is rather daunting. Fortunately, I did not have to start from scratch. Over the last half-century, the field of the history of science has grown into a well-developed discipline. On many topics there were specialists whom I could consult and who were happy to give me advice. I owe a major debt to my colleagues for all the time and effort they have spent to help me out. In the first place Frans van Lunteren, Bert Theunissen and Ernst Homburg, who read the whole text several times and gave elaborate comments. Valuable advice was also given by Fokko Jan Dijksterhuis, Lodewijk Palm and Geert Somsen. Beyond that, the English version profited from comments by Stephen Weldon, JoAnn Palmeri and Gary Toth. I must admit that in some cases I ignored their good advice and went my own way, but very often they saved me from mistakes and kept me on the right track. I cannot guarantee that there are no inaccuracies left, but if so, these are wholly my own responsibility. Besides these, I owe gratitude to David Baneke who initially came up with the idea of a translation and gave it his unwavering support, as well as to Daan Wegener for his help. I also owe thanks to the translator, Liz Waters, for a fruitful collaboration. 
The translation was supported by the Descartes Centre and the Freudenthal Institute, both at Utrecht University. Further financial support was provided from the Office of the Vice President for Research and Partnerships and the Office of the Provost, University of Oklahoma, as well as by the Department of the History of Science, Technology, and Medicine of the same university.


INTRODUCTION

Science is of immense social significance in the present day. Industry, agriculture and innumerable other sectors draw upon scientific theories. Experts with scientific training are frequently asked to contribute to discussions about a wide range of social issues. Not even the world of our own thoughts and experiences can escape their influence, since even something as fundamental as our concept of sickness and health is largely based on scientific findings.

If you stop to think about it, this state of affairs is not inevitable. There is nothing inherently natural about natural science. It is a human construct that meets certain needs, ideals and objectives. That scientific knowledge is held in such high esteem by people who ordinarily have few dealings with it is rather remarkable, given that science is an extremely technical and specialist activity. Most people are incapable of evaluating scientific claims or of judging what should be regarded as scientific and what should not.

How can the ideas and dreams of a few isolated seventeenth-century thinkers, incomprehensible for most people, have acquired so much influence that knowledge, education and society as a whole are now to a large extent based upon them? It is not as if modern science sprang out of a top hat in the seventeenth century and has breezed through life ever since. Knowledge has not merely
increased since then; it has had to adjust to new circumstances and new questions, and in fact it has been transformed by them. If we want to understand how science became what it is today, then the developments of the past few centuries are at least as important as its origins.

Those histories of science that take for granted the viewpoint of nature-oriented research mostly concentrate on advances in scientific theory. They tend to result in an enumeration of the most important discoveries and discoverers, thereby making visible above all the growth in scientific knowledge. But however important this accumulation of knowledge may be, the development of science involves very much more. It is not just a matter of hypotheses and formulae but of instruments, laboratory assistants, journals and indeed money. If we want to understand how science became what it is now, then we will have to start by taking account of all the various aspects of the phenomenon.

Science is not a clear-cut concept. It is a form of knowledge regarded to a great degree as objective, which does not, however, alter the fact that in different periods, people have harboured different scientific ideals and aims. In other words, they have defined science in different ways. Even those living in the same era are not necessarily always in agreement about the definition of science. It can mean different things to different people, as German poet Friedrich Schiller knew:

To one, science is an exalted goddess; to another it is a cow which provides him with butter.
Friedrich von Schiller (1796)

While for some people science is an object of veneration, an almost religious quest for deeper truths about reality, for others it is a milch cow, a convenient way to solve practical problems. Both attitudes have been important in the history of science, and both have contributed to the way in which science is now embedded in our societies. Every general history is forced to leave out certain aspects, but here I want at least to give an account of the conflict between different sides and try to find a balance between them. Neither the goddess nor the cow can be omitted with impunity. Science is not just a matter of big questions but of small details too.

This book falls into three parts. The first deals with the prehistory of modern ideas about nature, in antiquity and the Middle Ages, and the radical change those ideas went through in the sixteenth and seventeenth centuries, known as the Scientific Revolution. Many historians are less than happy with that label, but it is hard to think of another concise designation for this historical episode, so for the sake of convenience I use it here.

The second part is entitled ‘Autonomous science’. In it I look at how science has developed since the seventeenth century. The main question concerns how science has managed to gain a firm footing in one sector of society after another and to command increasing authority. In attempting an answer, I look above all at the material basis and practical application of scientific knowledge. Who has investigated the natural world and why, how was that research organized, and why did there come a point when society decided such efforts were worthwhile? Science deals with content, however, and it is impossible to examine how it has won ground without looking at the development of that content. Nevertheless, the emphasis here is not so much on specific theories as on humankind’s general attitude to nature, and to instruments, techniques and practices. Scientific progress is often revealed more clearly by the standard equipment in laboratories than by lofty theorizing. A question that will arise several times concerns the degree to which the use of specific methods or instruments has influenced the general theoretical stance.

Part Three concerns the influence of science on our view of the world as a whole. Here the emphasis is on the development of scientific theory. Once again, my approach has been highly selective. Whereas in Part Two the practical aspects of science are central, here I look mainly at its philosophical significance, at the ideas that have done most to influence the way we see the world.
It is worth noting that these are frequently theories that have also attracted a great deal of attention from a purely scientific point of view. This alone indicates that the meaning of science still carries a powerful philosophical charge.


PART I THE SCIENTIFIC REVOLUTION

Historians or scientists who address the question of when modern science came into being quickly arrive at the seventeenth century. The scholars of that period and later practiced the kind of science that we still recognize as such. They presented theories that continue to be accepted as valid today. By contrast, anyone confronted with the writings of mediaeval scholars will think themselves in a different universe. It is not merely that people had a very different notion of how the world worked from our own; the kinds of questions that modern science addresses did not exist at all for them. To the extent that they tackled recognizable problems, their answers needed to meet entirely different criteria.

The explosion of creative energy with which seventeenth-century scholars called modern science into being came to be known as the Scientific Revolution. The term was initially associated with the most important pioneers, who opened up new research paths in the sixteenth and seventeenth centuries and thereby gave shape to important aspects of modern scientific theory and to the way we see the world today: Copernicus, Kepler, Galileo and Newton. Science was once regarded as having sprung more or less fully formed out of the top hat of history in the seventeenth century; it was an indivisible whole, and seventeenth-century science was in
essence no different from our own. So there was thought to have been an abrupt transition from the darkness of the Middle Ages to the scientific way of thinking. This view is questionable in the extreme. Even before the Scientific Revolution, people were not stupid. Some fields, such as astronomy, were highly advanced in antiquity. Ancient and mediaeval astronomers built instruments, recorded observations, formulated hypotheses based on them and made predictions that were often extremely accurate. Conversely, in a great many fields the major researchers of the seventeenth century were not nearly so modern as their scientific contributions seemed to suggest in retrospect. Their discoveries were frequently inspired by concerns that we would now tend to call mediaeval or superstitious, to say nothing of the fact that there are countless areas of science that were not studied seriously until the nineteenth or twentieth century, or concerning which even today no satisfactory theory exists.

Nonetheless, shifts took place in the seventeenth century that are indisputably important. In this book they are described mainly as conceptual shifts. Most crucial are not the available theoretical knowledge and the methods to hand but changes in the general philosophical assumptions people entertain about nature, which are typically experienced as so self-evident that many people do not even realize that their thinking is based on assumptions. A general outlook on nature of this fundamental sort is less a set of ideas than something that guides the way in which people have defined and addressed problems. To use a fairly vague term, we could speak of their worldview.

It is in this sense that the seventeenth century presents us with a true revolution. A new outlook on nature emerged from new philosophical assumptions about how reality functioned. Only on that basis was it possible to formulate new theories.
More important than those theories themselves, however, was the scientific programme that gave rise to new research. Science as it took shape in the seventeenth century was not an established system of fixed truths but a perpetual search for the secrets of nature, in which innovative methods were continually being introduced and old theories kept giving way to new. In that sense it is indeed possible to say that the seventeenth century saw the dawn of modern science, since it was only then that science emerged as an undertaking with a clear identity.

Of course, I will have to make my way through this development in seven-league boots. I do so as follows. In the first chapter I outline the scientific worldview in the period before the Scientific Revolution. It had its roots in classical antiquity, so this chapter covers a period of more than two thousand years. Furthermore, the ideas of that time are so far removed from us that most people today find them difficult to grasp. My account is therefore of necessity very broad, but as a background to later events, a survey of a few of the main points is essential.

The second chapter describes how this ancient and mediaeval worldview slowly fell into discredit in the sixteenth century. At the same time, cautious attempts were made to discover new paths. These did not immediately lead to a new scientific vision, but they did produce new discoveries and insights that further eroded the old way of thinking. What would come to be put in its place was at that point a completely open question.

Chapter three is the meat of this first part of the book. In it I describe how the resulting period of uncertainty came to an end. One of the many alternatives proposed for the old worldview eventually gained the upper hand. It was initially drawn up by several important thinkers, Galileo and Descartes foremost among them. The new vision led to a torrent of new discoveries and approaches, culminating in the work of Isaac Newton, whose theories found general acceptance even outside research circles. With his work, science became an accepted, indeed respected undertaking.


1 ANTIQUITY AND THE MIDDLE AGES

A common misapprehension suggests that in the time before modern science took root, people had a purely magical view of the world, according to which all events were explained as the will of the gods or the intervention of supernatural powers. This is a crude oversimplification. Even in ancient times it was recognized that many events happened ‘by themselves’, in a natural way, while even after the Scientific Revolution the vast majority of people, including scholars, made room in their worldview for intervention by supernatural forces. The ‘disenchantment of the world’ is an important historical theme, and the Scientific Revolution undoubtedly played a part in it, but it was a revolution that concerned questions about how nature works, not about where the boundary lies between the natural and the supernatural.

The natural explanations accepted by people in the time before the Scientific Revolution ultimately drew upon ideas from classical Greece, more than two thousand years earlier. What we now call the ancient Greek civilization flourished in the centuries before the common era among the Greek-speaking peoples of the Eastern Mediterranean. The most important centres of population lay in what is now Greece but also in Asia Minor, the southern part of the Italian peninsula and, after a time, in Egypt. Here a thriving
cultural and intellectual life emerged, in which countless new ideas were put forward and debated.

World history used to be depicted as a straight line, with culture beginning in the Near East, reaching a temporary high point in classical Greece, then being adopted by the Romans before finally, after the fall of the Roman Empire, making its home in Western Europe. This unwavering course is entirely imaginary. Remnants of the Roman Empire can undoubtedly be found in the European Middle Ages, but Western European culture drew upon many sources and is certainly not a direct successor to classical Greek and Roman civilization. It is of course true that such a pedigree, so to speak, could be constructed only because in one way or another people felt themselves to be heirs to the ancient Greeks and Romans. At various points – in the Middle Ages, again in the Renaissance and later too – people in Western Europe consciously made connections with ancient Greek civilization. This applies especially to ‘higher’ culture. For most fields of scholarship and learning, until the sixteenth century the writings of the ancients were the most important sources by far. European scholars had no hesitation in recognizing the superiority of their ancient predecessors. The contribution of the classical world is therefore of paramount importance in the history of European science.

So before beginning the story this book aims to tell, I will give a brief introduction to that earlier contribution. This is by no means a complete summary of ancient culture or even of ancient science. It includes merely those points that are of importance for a proper understanding of later history.

GREEK PHILOSOPHERS AND NATURE

For scientific development in ancient Greece, it was important that although religion and religious concepts were highly influential, no separate priestly class developed as distinct from the rest of the population. In some older and initially perhaps more culturally advanced civilizations in the Near East, great temple complexes existed with priests appointed on a permanent basis, supported by a king and sustained by income from rents and taxes. In such circumstances, intellectual life was inevitably dominated by the priesthood and
intellectual activity was therefore focused on worship. Greek society was on a scale far smaller than those earlier civilizations. The priests in the temples filled honorary positions; they were farmers or local citizens, and they did not dominate intellectual life. This meant there was a gap in the market for singers and lectors, orators and physicians. These people were religious like all their contemporaries, but that did not prevent them from thinking independently about inherited truths and about the world around them. They became known as philosophers.

It seems the word ‘philosophy’ was first used by Pythagoras, whom the Greeks regarded as the oldest known philosopher. He established a political-religious community in the south of what is now Italy that had all the hallmarks of a religious sect, with rituals, dietary laws, taboos and blind devotion to a leader. The later schools of philosophy, which flourished in Athens in particular, are more reminiscent of training places. They prepared the sons of Greek dignitaries for a career in government and politics. What they taught was primarily rhetoric and the art of argumentation, since anyone who wished to succeed in Greek society needed above all to be an accomplished speaker. Second in importance was a good understanding of problems of morality, ethics and politics, but according to the philosophers this needed to be rooted in a proper insight into ‘reality’ as such.

For the Greek philosophers, reality was not dependent on divine caprice or ordained by the gods; rather it was to a great degree an independent whole. The divine was an order that could be understood, not a power that needed to be revered and that might intervene in the world at will. The entirety of forces and causes that made the world function as it did was called ‘nature’.
The notion of the independence of the natural world has been a permanent part of the Western intellectual tradition since the ancient Greeks and it has gradually come to be regarded as self-evident. Without it the Scientific Revolution of the seventeenth century would not have been possible. Countless schools of philosophy existed, each of which propounded its own ideas, and they all attempted to steal a march on each other. Some were attached to a specific teacher and lasted only a few years; others became established institutions that spanned several centuries. Many different ideas about nature therefore prevailed.

Epicurus (or, to use a more accurate transcription from the Greek, Epikouros), the founder of a school known as the Garden after the place where he and his followers gathered, believed that everything in the world was ‘nature’ and that no supernatural reality existed. The world was made up entirely of empty space and atoms. Other philosophers did believe that the cosmos had a divine element. The Stoics, a school of philosophers founded by Zeno (Zenōn) that gathered in a Stoa or colonnade, regarded the whole world as suffused with a divine substance, the pneuma. This did not mean, however, that the world was dependent for its functioning on the will of the gods. On the contrary, everything that happened in it was predetermined and would be repeated after a set interval. Plato (Platōn) believed that our visible world is ultimately a shadow of a higher, divine reality, a world of order and beauty. He too had a school, and several centuries later his ideas were taken up again by Plotinus (Plōtinos) who, with his interpretation of Plato, created a new school, called Neoplatonism.

By far the most important philosopher for the history of the natural sciences was Aristotle (Aristotelēs). His ideas dominated scholarship for many centuries, so we shall pay rather more attention to them here. It should be said from the start, however, that ideas about nature make up only part of Aristotle’s work. He also wrote about ethics, politics, literature, logic and other subjects. In his work on ‘reality’ he was interested above all in extremely abstract issues, not in the actual things around us but in reality as such. Specific things in the world were regarded as falling under the general heading of ‘beings’. Aristotle’s abstract ideas sometimes had important consequences. His notion of reality meant, for example, that natural phenomena were not mechanistic, purely causal processes.
Instead of straightforward cause and effect relationships, everything operated according to a certain purposiveness, because things wanted to give scope to the possibilities contained within them. In that sense the concept ‘natural’ had a different meaning for Aristotle than it now has for us. Nevertheless, Aristotle was most definitely interested in the functioning of each of the natural phenomena. Abstract principles could be discovered only by studying concrete things. Of interest to us here is not what Aristotle stressed in his work or found most important but rather those aspects of his worldview that were most significant for later developments. These might well be things to which Aristotle himself had paid little attention, perhaps because they were felt to be self-evident in his day.

According to Aristotle, there was a fundamental distinction between the sublunary and the superlunary, in other words between the terrestrial region below the moon and the celestial above it. Humankind has always been fascinated by the difference between the unchanging heavens, where the stars move in circles for eternity, and the transient nature of all the worldly things around us. This distinction was elaborated upon by Aristotle, who believed that the sublunary world was constructed out of four elements: earth, water, air and fire. In the middle of the world was the globe-shaped earth, made up as the name suggests of the element earth. It was the immovable centre of the universe. Above it was water, around that air, while the highest spheres of the atmosphere were made of fire. The superlunary, heavenly regions by contrast were not made of transitory matter but of a fifth element, a ‘quinta essentia’ or quintessence. The celestial world was eternal and unchanging. There the heavenly bodies travelled for eternity along their predetermined paths.

Since Aristotle believed the earth stood still, he had to accept that the sun, moon and stars orbited around it every day, moving in regular circles. That circular movement was constant and unchanging, and therefore the ultimate expression of eternity. The natural path taken by sublunary bodies, by contrast, was a straight line. Heavy bodies made out of the elements earth and water fell downwards, while the light elements fire and air rose. Such upward or downward movements were of necessity finite and limited. A body stopped moving when it had reached its natural place. The basic idea was therefore very simple. In practice things are of course a little more complicated.
On closer inspection, movements in the heavens are not as regular as they seem. Long before Aristotle, it had been determined that while the stars complete their orbit of the earth within roughly twenty-four hours, there are several heavenly bodies that deviate from that regularity: the sun, the moon, and the planets Mercury, Venus, Mars, Jupiter and Saturn. In the main, they travel along with the stars in the sky, but in addition they have movements specific to them, so that their position in relation to the stars gradually shifts. Aristotle solved that problem by allocating each of the planets its own sphere in the superlunary region, allowing them to move independently of each other. This made the Aristotelian world an entity composed of concentric spheres. The fixed stars formed the outermost limit of the visible cosmos. What happened beyond that was a matter of speculation.

Another problem was that the sublunary world was less orderly than it ideally should have been according to Aristotle. Earth, water and air are not arranged in concentric, strictly spherical husks but are mixed together in many places. Aristotle explained this as a result of the influence of the heavens on the earth by means of its motion and warmth. The heat of the sun generated vapours and fumes in the earth, which were in turn the cause of rain, wind, thunderstorms and so on. This influence of the celestial on the earthbound realm is one of the central principles of Aristotelian explanations of nature. Of course, his conclusion is based on a simple empirical fact. The sun warms the earth and makes plants sprout and grow. The annual variation in the path of the sun causes the seasons. It was easy to generalize from this and conclude that the influence of the heavens caused all earthly things to emerge and eventually to pass away. According to Aristotle, the heavens governed not only the growth and dying back of plants but the birth and death of animals, and even the growth of rocks and metals in the earth. These changes took place in an endless cycle.

To recap, it could be said that the Aristotelian worldview rests on two major principles. The first is that of order. The cosmos is an orderly whole, in which everything, be it stars, planets or elementary matter, has a fixed place. The most fundamental division is between the superlunary and the sublunary region. The second great principle is that the superlunary influences the sublunary.
While the principle of order produced of necessity an ideal but rather static universe, the influence of the heavens provided a more dynamic element. The interplay of the two gave rise to the visible world as we know it.

GREEK MATHEMATICS

One important consequence of Greek philosophy lay in the development of mathematics. Of course, simple counting and measuring have existed since time immemorial, but the Greeks practised mathematics as an abstract art. This goes back to the very beginnings of Greek philosophy in the school of Pythagoras, who presumed that all of reality was made up of numbers. The study of numbers would therefore provide insight into reality. Pythagoras derived the paramount example of the importance of numbers from research into musical intervals. Harmony arose from string lengths which had simple numerical relationships with one another. The Pythagoreans also studied the movements of the stars and were devotees of geometry. Whether the famous Pythagoras’ theorem was actually discovered by Pythagoras himself, however, is questionable in the extreme.

Most influential of all were Plato’s ideas. As we have seen, Plato believed that material reality was a reflection of a higher, divine reality that was not material but purely intellectual and ideal. It was the world of pure forms. To gain an insight into this higher reality, humans needed to distance themselves as far as possible from the world around them and learn to look at it purely intellectually, abstractly. The best approach to the higher reality was through the abstract world of mathematics, and the best way of learning to view the truth was to practise mathematics.

Under the influence of these ideas, the study of mathematics flourished in the classical world, geometry in particular, especially in the Hellenistic period. After the rise of Alexander the Great, the cultural centre of gravity in the Greek world moved from Athens to Egypt. The Greek rulers of Egypt made the capital city of Alexandria into an important centre of learning. They founded what could be called a scientific academy, the Musaeum (Mouseion), as well as a large library where they brought together the many writings of older Greek thinkers and passed the knowledge contained in them on to their descendants.
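The Pythagorean discovery about intervals can be stated in modern notation. The ratio values below are the ones traditionally attributed to the school; the formulation in terms of frequency is entirely modern and is added here only to connect the ancient observation to present-day acoustics:

```latex
% Consonant intervals correspond to simple whole-number ratios of string length:
%   octave 2:1, perfect fifth 3:2, perfect fourth 4:3.
% In modern terms, the pitch (frequency f) of a string under constant tension
% is inversely proportional to its length L:
\[
  \frac{f_1}{f_2} = \frac{L_2}{L_1}, \qquad
  \text{octave: } \frac{2}{1}, \quad
  \text{fifth: } \frac{3}{2}, \quad
  \text{fourth: } \frac{4}{3}.
\]
```

Halving a string thus sounds the note an octave higher, which is the kind of simple numerical regularity that convinced the Pythagoreans that number underlay reality.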
Important scholars were attached to the Musaeum, and they conducted research in many different fields, in grammar and literature, but also in mathematics and medicine. The Greek mathematicians did not confine themselves to trivial matters but succeeded in finding solutions to complex problems as well. On some points they started from a different basis and applied different methods from those of modern mathematicians, but their work can be directly translated into the mathematics of our own day.


Any contemporary mathematician who studies the work of the ancient Greeks will have no hesitation in seeing them as colleagues. A textbook by Euclid (Eukleidēs), for example, the Elements (Stoikheîa in Greek), written in about 300 BCE, dominated education in geometry until well into the twentieth century.

Greek mathematics had a direct impact on what we now call science. To us, mathematics is purely abstract, an intellectual aid that can be applied to all sorts of things but in itself has little to do with visible nature. In earlier times, however, mathematics was not just the science of counting and measuring but of everything that was counted and measured. That is to say, all scholarship that concerned itself with specific quantities was part of mathematics. In classical antiquity, therefore, mathematics included not just geometry and arithmetic but musical harmony, astronomy, statics, construction and optics (although Greek optics was about little other than flat and curved mirrors). People spoke of ‘pure’ mathematics as opposed to ‘mixed’ mathematics, meaning mixed with the material, dealing with the theory of concrete rather than abstract quantities.

This meant that astronomy, as part of mathematics, concerned itself with the measurement, description and in some cases prediction of the path of the stars in the sky, but not with questions as to the cause of those movements, the substance out of which the stars were made, or the qualities of the heavens in general. The causes and essential qualities of reality were not measurable. They were dealt with not by mathematics but by philosophy. The ‘mathematics’ of antiquity is therefore in some respects far closer to what we now call science than ancient natural philosophy, all the more so since philosophy preferred to take beings in the general sense as its subject, rather than concrete things.

Perhaps the greatest mathematician of antiquity was Archimedes.
He studied in Alexandria, but later worked mainly in his native city of Syracuse in Sicily. He made an important contribution to pure mathematics, especially regarding the determination of the surface area and volume of geometrical figures, and he also invented many ingenious tools such as pumps, screw jacks and all kinds of armaments, which caused significant difficulties for the Romans who besieged Syracuse. In doing so he founded something of a tradition. Later Alexandrine scholars experimented with a wide range of mechanical structures, instruments and machines. Their work can be seen as an early example of engineering.

Archimedes’ inventions were closely connected with his efforts in mixed mathematics. In mechanics he discovered the law of the lever and used that principle for further research into statics. He also developed hydrostatics, in which he formulated a theory about bodies immersed in liquid that is known to this day as Archimedes’ principle. In doing so he combined ingenuity in construction with theoretical insight. His work on mechanics and hydrostatics was probably inspired by practical problems, but nothing remains of those in the writings that he left to us on these two subjects. They are not do-it-yourself manuals but theoretical works, couched in a strictly mathematical form.

The oldest and most important fields of mixed mathematics were harmonics and astronomy. We shall pay no further attention here to the theory of harmony, which concerns musical intervals. Astronomy was limited, as I have said, to the quantitative description of the movements of heavenly bodies, and it was aimed at enabling people to predict eclipses of the sun and moon, for example. The Greeks derived their astronomical data mainly from the Babylonians, who had observed the heavens systematically for centuries and achieved major advances when it came to making predictions. Greek mathematicians attempted to encapsulate those observations in a model of the way in which heavenly bodies move.

A difficulty arose here, however. In his efforts to explain the paths taken by the planets, which deviate from those of the rest of the bodies in the firmament, Aristotle had allocated each of them its own sphere, but he had demanded (as did Plato, incidentally) that those spheres must nonetheless orbit the earth steadily in perfect circles. For an astronomer, this model is too simple, since the course of the planets across the sky is far too irregular to fit into it.
Mathematicians and philosophers therefore found themselves at odds. Greek astronomers tried to find a model that both correctly represented the phenomena seen in the sky and did as much justice as possible to philosophical insights about the movements of heavenly bodies. The greatest authority was acquired by a solution proposed by Claudius Ptolemy (Klaudios Ptolemaios), who must have lived some time in the second century of the common era. Ptolemy was associated with the Musaeum in Alexandria. He wrote treatises dealing with a great many fields, including geography, harmonics and optics, in which he collected, systematized and evaluated the discoveries that Greek scholars had made in previous centuries. He became most famous of all for his book on astronomy called Mathēmatikē Syntaxis, ‘Mathematical systematic treatise’, nowadays usually referred to by the Arabic title Almagest.

Ptolemy was probably quite critical of the philosophy of Aristotle. At the very least he did not take Aristotle’s writings particularly seriously. Instead of the simple, concentric circular movements Aristotle had required, he introduced eccentric circles, epicycles (circles whose centre moves round the circumference of a larger circle) and even constructions in which the circular movement was no longer uniform, which is to say that bodies moved at irregular rather than constant speeds. The latter was particularly disturbing. Given that according to Aristotle everything in the heavens was eternal and unchanging, there was no place for irregular movements. Ptolemy’s model was brilliant from a purely mathematical point of view, but unsatisfactory for most philosophers.

Few seem to have been worried by this, however. Astronomy was a mathematical discipline, so what mattered above all was whether the results were correct. In theory of course it was always possible to argue that these epicycles and irregular movements were purely mathematical aids, introduced for ease of calculation, and that in reality the movements of heavenly bodies were indeed regular (although it was impossible to see how). Few people asked themselves how these two matters related to each other. Over time, however, the discrepancy became an important incentive to revise astronomy.
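An epicyclic path of the kind Ptolemy worked with can be written out in modern coordinates. The notation is ours, not Ptolemy’s (he reasoned geometrically), and this sketch leaves out his eccentric circles and his non-uniform motions; it is meant only to show how two uniform circular motions combine into an irregular apparent path:

```latex
% A planet rides on a small circle (the epicycle, radius r, angular speed omega)
% whose centre moves uniformly round a larger circle centred on the earth
% (the deferent, radius R, angular speed Omega). The planet's position is then:
\[
  x(t) = R\cos(\Omega t) + r\cos(\omega t), \qquad
  y(t) = R\sin(\Omega t) + r\sin(\omega t).
\]
% When the epicyclic motion is fast enough relative to the deferent
% (r\,\omega > R\,\Omega), the planet's apparent motion along the combined
% curve periodically reverses direction, reproducing the retrograde loops
% observed for the planets.
\]
```

Although each component motion is a perfect, uniform circle, the combined motion seen from the earth is not uniform at all, which is exactly the tension between mathematical adequacy and philosophical principle described above.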

MEDICINE IN ANTIQUITY

In addition to philosophy, a second tradition has been extraordinarily important to thinking about nature: the medical tradition. It too has its roots in classical Greece. True, in antiquity medicine was not entirely separate from philosophy. For physicians, knowledge of nature was of immediate relevance, so they had a duty to be well schooled in the principles of natural philosophy. But whereas philosophers who turned their attention to nature concentrated mainly on abstract issues, like the question of being as such, medical practitioners were always interested in the background to concrete problems. Their approach therefore strikes us as far more modern than that of the philosophers. They also frequently had an influence on the development of science that was considerably more immediate.

Traditionally, matters such as sickness and health tended to be interpreted in religious terms. Diseases were sent by the gods or resulted from the casting of spells. Healing was then achieved mainly by means of religious rituals. Under the influence of philosophical thinking, however, the realization grew in some Greek medical schools that illness could arise by itself, as the result of a poor diet, the wrong climate or other causes. This meant there must also be natural ways of curing diseases.

Hippocrates (Hippokratēs) is regarded as a pioneer of this approach. He was an older contemporary of Plato and Aristotle, a renowned physician from the island of Kos. Dozens of writings have come down to us that bear his name, but most were definitely not written by him. In fact, the relationship between what we now call the school of Hippocrates and the person of Hippocrates is unclear in the extreme. His school too was dominated by what to our way of thinking was still a strongly religious and sectarian ethos. Pupils were bound to their teacher by a solemn oath, for example. Diseases, however, were regarded as entirely natural phenomena, and there is a suggestion that the concept of nature as the antithesis of the supernatural was developed mainly in his particular school.

The work of Hippocratic doctors was primarily practical rather than theoretical, but obviously they needed certain theoretical assumptions on which to base their treatment methods. They found the necessary starting point in the doctrine of the ‘humours’.
It developed gradually, but in its fully elaborated form it meant there were four humours: blood, phlegm, black bile and yellow bile. In a healthy body these were in balance. Symptoms appeared when the balance between them was lost. It was the doctor’s job to weaken the dominant humour and strengthen the weak humour by imposing a diet, administering medication or by some other means.

Over the course of time, more effort was put into the elaboration of such theoretical insights. This took place mainly in the Alexandrian era. Along with other disciplines, medicine was studied assiduously at the Musaeum. Alexandrine physicians developed the basic medical subjects of anatomy and physiology in particular, and the study of remedies was also undertaken in a systematic manner. The Alexandrine doctor Dioscorides (Dioskouridēs) wrote about it in a book that was regarded as authoritative for centuries.

The greatest authority of all in the field of medicine was the Roman physician Galen (Galenus), who came from Pergamon in Asia Minor and lived in the second and early third century of our era. He wrote hundreds of works on medicine, bringing together the entire medical tradition in a clear and orderly manner. The Hippocratic doctrine of the humours remained central to Galenic medicine, but now it was placed within a philosophical framework. Galen was a powerful advocate of the idea that the art of medicine was not merely a matter of experience, since theory was at least as important. It could be practised only on the basis of a correct insight into reality. In his writings he therefore addresses all kinds of philosophical subjects.

As a philosopher Galen was an eclectic who picked up whatever he could use, but he based his work largely on Aristotle. For the philosophical background to medicine, Aristotle’s philosophy was the most useful. A doctor could not get far with purely abstract theories like Plato’s doctrine of pure forms. Aristotle came from a family of doctors and wrote at length about living creatures. His opinions were often concrete and detailed. Moreover, he was a keen observer; his observations on animals were used by biologists as recently as the eighteenth century. The theories of Aristotle and Galen could therefore support each other. For mediaeval science, these two men were the great teachers.

THE CLASSICAL TRADITION IN MEDIAEVAL EUROPE

In the era of migration, around the fifth century, the development of classical scholarship was interrupted. The schools of the philosophers were closed. The Alexandrian library continued to exist for several centuries, but fewer and fewer people were capable of reading the works preserved there. Science has always been the concern of a small intellectual elite, but in antiquity that elite was particularly narrow, so science was extremely vulnerable to external threats. Much knowledge of mathematics and philosophy was forgotten, and many works containing ancient learning were lost.

It is actually a miracle that a substantial amount of science from this period has survived, and it would not have done so without scholars from the Islamic world. From the seventh century onwards, Arab conquests created a new cultural region, with Islam as its dominant religion and Arabic as the language of government and scholarship. After a while, the new rulers developed an interest in the knowledge of their predecessors, and under their protection, fragments of ancient scholarship were collected or tracked down in Byzantium or Alexandria and translated, as far as possible, into Arabic. The scholars of the Near East then made the material their own, to the extent that as time went on they began to contribute to science themselves. They made important discoveries in mathematics and optics. Not all these scholars were Muslims or Arabs, but because virtually all of them wrote in Arabic, for brevity’s sake we shall refer in this book to ‘Arab science’. Their contribution can only be touched upon here, but it was considerable to say the least.

In Europe a degree of elementary knowledge of the classics remained, but it was generally at what we might call primary-school level. In mediaeval Europe there was little appetite for advanced studies in philosophy or mathematics. The social situation had changed drastically too, of course. The training of the intellect had fallen into the hands of the Catholic Church, which is to say of the clergy, who formed a separate social class with its own interests and obligations. The priority lay with religion, to which knowledge and learning were subordinated, in theory at any rate.

Interest in the more practical science of medicine was the first to develop. In the eleventh century a medical school was founded in Salerno, in the south of the Italian peninsula.
It made the Persian doctor Ibn Sina, a respected author in the Islamic world, famous in Europe as well. When his work was translated from Arabic, his name was adjusted, and in Europe he became known as Avicenna. Arab medicine was strongly orientated towards the ancients, and through the Arabs, Europeans became familiar once more with ancient ideas.

The philosophy of the ancients did not receive attention until a century later, and revived interest in it had a very different cause from that which had led to the valuing of such knowledge in antiquity. In this period the first universities were founded, mainly as institutions for the education of theologians and lawyers (and later also physicians). The various subjects taught were more strictly systematized, so the universities gained a propaedeutic or preparatory faculty in which the basic knowledge was taught that students would need later. This was the faculty of the artes, and it was for these propaedeutic studies that classical philosophy was revived. Artes faculties taught logic, the basic subject of mediaeval science, along with the philosophical disciplines: ethics, metaphysics and physics (in other words the philosophy of nature).

The term ‘physics’ was used quite differently in mediaeval times from the way it is used now. It meant not a study in itself but a body of knowledge that was regarded as important for the education of future theologians, physicians and lawyers. In practice it was mainly the theologians who decided what should be included in the study of physics. They may often have been extremely intelligent people, but what they took into consideration naturally had little to do with the aims of science as we know them today.

Initially Plato was held in the highest regard of all the ancient philosophers. His reflections on a higher divine reality seemed compatible with Christian teachings. From the twelfth century onwards, however, Aristotelian philosophy steadily gained ground. Once Aristotle’s ideas had been accepted, they quickly became paramount and drove out all competition. Until the seventeenth century, and in many places into the eighteenth, they dominated intellectual life in Europe. There was interest primarily in Aristotle’s logic, but his ideas in other areas were embraced too, including physics. This seems a rather remarkable inclination for mediaeval theologians.
Aristotle did not recognize any influence of supernatural powers and explained the world on purely natural grounds, so his were not at all the sort of ideas you would expect from theologians. However, there was a desire among them to create a greater distance between God and humankind, and they could do so by acknowledging nature as an independent domain, albeit created by God. In this respect Aristotle’s ideas were extremely useful. Mediaeval theologians wanted to keep their religion pure and so were surprisingly sceptical about popular ‘superstition’.


It is customary to refer to the worldview of the late Middle Ages as Aristotelian – justifiably so to the extent that it was based on the writings of Aristotle (as well as those of Galen; in fact it might be better to talk of an Aristotelian-Galenic worldview). But we need to remember that this Western European Aristotelianism had distanced itself in many respects from the original thinking of Aristotle. The people of mediaeval times had not sat at the feet of their classical teachers. As with medical literature, to gain knowledge of philosophical authors, people were largely dependent on Arab sources. The writings of Aristotle first needed to be translated from Arabic into Latin, the language of mediaeval European scholarship, before attempts could be made to understand them, with the help of Arabic commentaries.

In antiquity, philosophy had been a living tradition. For Arab and Western scholars it was something that came from outside and required them to undertake great efforts in order to make it their own, based purely on a written tradition. Many of the original works had been lost, what remained was often shoddily translated and in some cases its origins were unclear. Quite a few of the writings regarded as Aristotelian in the Middle Ages were subsequently found to date from far later.

For the Greeks the study of reality had been central, but because of these difficulties, mediaeval science was primarily about understanding ancient texts. Mediaeval scholars wrote few treatises of their own but instead mainly produced commentaries on works of antiquity. This had obvious repercussions for the content of the various fields of scholarship. The ancient writings were the reports of debates, so they sometimes represented a variety of opinions. Based on them, Arab and Western commentators tried to form a coherent image of reality. This meant that contradictions were smoothed away as far as possible so that all knowledge could be contained within a single system.
Furthermore, all knowledge naturally had to be made compatible with Christian teachings. This was done consciously to some extent, but in part it happened automatically. In the Middle Ages the concept of a ‘soul’ was very different from what Galen or Aristotle had meant by it. For Aristotle the soul was the organizing principle of the living body and as such inseparable from the body. This was unacceptable to theologians. To Aristotle the world was eternal; to people of the Middle Ages it had been created. Aristotle’s nature was interpreted as the order of creation. One might wonder to what extent Aristotle, or any other classical philosopher, would have recognized the worldview that remained.

Aristotle believed that the soul as the principle of life existed in differing degrees. The souls of animals were more highly developed than those of plants, and the souls of humans more highly still. Plants had a vegetative soul, the principle of growth and propagation. Animals too had a vegetative soul, but in addition they had an animal soul, which enabled them to perceive and to move. Humans had a rational soul as well as those first two, which made them capable of thought. This doctrine of the three aspects of the soul became authoritative in the Middle Ages, but it was combined with other elements. Galen as well, echoing Plato, argued that humans had a triple soul, but he distinguished between the three souls quite differently: thinking resides in the brain, passion in the heart and desire in the liver. Following the teachings of Galen, the seat of thinking, or the rational soul, was now located in the brain, whereas Aristotle had placed the soul mainly in the heart. Generally speaking, physicians tended to follow the ideas of Galen, whereas philosophers and theologians sought a connection with Aristotle.

The physical worldview of the Middle Ages was broadly that of Aristotle. The theory of a distinction between the superlunary and sublunary worlds, the doctrine of the four elements, the circular motion of heavenly bodies – all this was fairly easy to understand and could be accepted without objection. On one point people went into more detail than Aristotle; mediaeval authors generally assumed that the heavenly spheres were made of a hard, transparent material, a kind of crystal, in which the stars and planets were fixed.
Nobody was troubled by the spherical shape of the earth. The notion that mediaeval scholars believed the earth was flat is a myth. It originates with French scholar Antoine-Jean Letronne in 1834. He pointed to the beliefs of two ancient church fathers, Lactantius and Cosmas Indicopleustes (Kosmas Indikopleustos), who had indeed argued that the earth was flat. For Letronne, a fierce anticlerical, this was an illustration of what he saw as the scholarly obscurantism of the Church. In reality, Lactantius and Cosmas were
not taken seriously on this point in their own time (and the works of Cosmas, a Greek, could not even be read in the West), but Letronne made it seem as if what they had written reflected a belief prevalent in the Middle Ages. This fitted the image of the Middle Ages that was generally held in the nineteenth century. The story was quickly taken up and eventually became a widely known ‘fact’.

It could therefore be said that in the Middle Ages the static element of Aristotle’s worldview, his image of order in the universe, was adopted practically unaltered. But the dynamic element, his explanation of how changes in the sublunary world occur, was given new features, mainly derived from astrology. Astrology too was largely developed by the Greeks, based on earlier Babylonian theories, and it flourished in late antiquity. The most important astrological text was the Tetrabiblos by Ptolemy, who as we have seen was also responsible for the most authoritative work of Greek astronomy. Arab scholars were very interested in this element of the Greek legacy and elaborated further upon it. European scholars adopted it in turn from the Arabs, along with the rest of the classical heritage.

Aristotle had spoken in general terms about the influence of the sun and the heavens on the terrestrial world, but that is a rather different matter from claiming that the future can be predicted from the position of the stars. The people of the Middle Ages, however, interpreted his ideas far more broadly. Not just the sun but all heavenly bodies were thought to influence the generation and destruction of earthly things. Since all heavenly bodies shine on the earth, they must all have an effect similar to that of the sun. The continual variation in the position of the planets was seen as responsible for the apparently capricious course of events in the sublunary world. However, for anyone familiar with the workings of heavenly influence, all events were in theory predictable.
They could be deduced from the already ascertained movements of the heavens. So, in the Middle Ages, astrology was a subject of study believed to have a serious scientific basis. European scholars therefore engaged in it intensively. Its most important application was in medicine. Processes in the body were generally felt to be influenced by the stars, so people needed to turn to the stars for healing. A specific medicine or surgical operation was effective for a specific patient only at a time to be determined by astrology. There was a connection between
the disorder, the remedy and the position of the stars and planets. This sense of cosmic cohesion was still based on Galenic theory, incidentally, especially the theory of the humours. Arab and mediaeval physicians made a connection between the four humours, the four elements, the seasons, the signs of the zodiac, the various parts of the body and so on.

In so far as objections to astrology arose, they were not scientific in nature but primarily religious. On theological grounds, astrology was always regarded as slightly suspect. The danger was that people might go beyond trying to predict purely natural processes by looking at the stars and attribute to the stars an influence on the human will. This was unacceptable for religious reasons. Nonetheless, there were some clerics who fervently devoted themselves to predicting the future.

It is somewhat misleading to describe all this as the mediaeval worldview. We should recall that science simply did not exist in the Middle Ages. People were interested in the world in a very different way. The aspects dealt with here are not those that people of the Middle Ages found the most important in giving shape to their world. Rather they are important from the perspective of the modern world, with its modern view of reality. Mediaeval ideas about how the world was composed, which may only rarely have been expressed explicitly, were the basis for the Scientific Revolution. Aristotelian philosophy was mandatory at all universities, which is to say that all scholars of the sixteenth and seventeenth centuries were brought up on it, including those who later emerged as ardent opponents of Aristotle. In other words, in the Middle Ages little thought was given to the points made above. They were more a kind of background knowledge that people would fall back on when they thought about the problems that really mattered.
Even what was known as ‘physics’, which dealt specifically with nature, was miles away from modern studies of the natural world. It is true that physics did not limit itself to the modest role of auxiliary science intended for it. Those who taught it inevitably developed their own pet notions without paying much heed to the importance of physics for other fields. It therefore became an increasingly independent field of learning. But the research conducted by mediaeval professors of physics was not about actual natural processes. Instead, it focused on
being, and on the causes of reality. Only in rare instances did more concrete issues arise, such as: Are the heavenly spheres solid or liquid? Are they moved by angels or by an internal principle? Even more importantly, mediaeval scholars did not attempt to investigate questions about nature by studying concrete things but rather through all kinds of subtle argumentation. They were mainly engaged in introducing and distinguishing between different concepts. Some modern researchers therefore brand mediaeval physics as a kind of applied logic rather than as physics in the modern sense. In the Middle Ages, logic was regarded as a universal method, and the fact that its practitioners applied arguments to nature (or to Aristotle’s writings about nature) was actually of secondary importance. Because these beliefs had a university background, we refer to them collectively as scholasticism.

In classical antiquity, mixed mathematics represented a more modern ideal of knowledge, but there was little interest in mathematics in mediaeval Europe. Although Greek mathematics and astronomy had been further developed by the Arabs and enriched with new discoveries, little of it was taken up in Europe. With time, a more or less serious tradition of astronomy developed, but really only because of the importance attached to astrology. In order to compile a horoscope you need to be able to calculate the positions of the heavenly bodies at a specific time, and doing so required knowledge of astronomy. But people limited themselves to what was strictly necessary. The theories deployed went back to Ptolemy, via the Arabs, but few consulted his original works. Mostly they used abridged and elementary manuals. What were known as the Alfonsine Tables were authoritative, said to have been compiled on the orders of the Castilian king Alfonso X, ‘the Wise’. More important for a tradition of research into natural phenomena was medicine.
The European universities were originally intended exclusively for the study of theology and law, but physicians soon managed to gain recognition for their field as a third faculty. The study of medicine flourished in Italian universities in particular, and it too developed a close bond with natural philosophy. To the north of the Alps, most of those who taught philosophy were primarily interested in theology, whereas in what is now Italy, philosophy was mainly engaged in by people with medical training.

The medical scholarship of the Middle Ages was above all theoretical, learned from books. It was the study of the works of Galen and Avicenna. But of course, the justification for this knowledge lay in its practical application. Physicians were therefore forced to look at nature. They had to know something about the human body, about life and health, and about plants, since most remedies were made from medicinal plants. Other matters of nature could be important too. Some physicians took an interest in healing springs, for example. Nor should we forget that a medical practitioner needed some knowledge of astrology and had to be able to compile a horoscope. So most astronomers of the late Middle Ages had initially received a medical education. In short, to the extent that the study of the natural world existed at all in mediaeval times, it was mainly a medical pursuit.

Of course more knowledge was available in mediaeval Europe than merely that which had been derived from the classics. Agriculture, hunting and fishing demanded at the very least some practical insight into how to deal with living nature. The great technical progress made in any number of fields – shipping, mining, milling, warfare and the building of the great cathedrals – would not have been possible without experts and their specialist knowledge. The period saw the introduction of mechanical clocks, nautical charts, eyeglasses and quarantines. In academic studies, people had a powerful tendency to distance themselves from such purely practical knowledge, but sometimes a little of it filtered through. It is hard to pinpoint, however: much of this knowledge was treated as a trade secret and hardly anything was written down.

One last thing we need to devote a few words to here is mediaeval chemistry, or alchemy as it is usually called. It was a practical skill (especially distilling), but it also had a theoretical component.
Alchemy was not taught in the universities, but it was to a great degree based on writings from earlier times. The oldest dated back to late antiquity, but the most important sources were in Arabic. Later the European alchemists wrote a large number of treatises on the subject themselves. Most of these works appeared anonymously or under a pseudonym, and they were often written in an allegorical symbolic language that was hard for outsiders to decipher.

Alchemists concerned themselves with practical matters such as the preparation of medicines or dyes, or the extraction of metals, but their main goal was the ‘transmutation’ of base metals into gold, which was central to their theory. Their worldview was vitalist: they tended to see the world as a living organism, in which things could grow towards perfection. This had spiritual significance, and alchemical theory was often closely intertwined with religious insights. The theory of the alchemists ultimately proved a dead end, but the practical and experimental skills they developed were of lasting significance.


2

THE SIXTEENTH CENTURY: THE ARISTOTELIAN WORLDVIEW IN DECLINE

The seventeenth-century Scientific Revolution was founded to a significant degree on facts and theories drawn up by sixteenth-century scholars. It is therefore possible to speak of the sixteenth century as a period of preparation for that revolution. But such a stance would suggest, incorrectly, that people of the sixteenth century saw the truths of the seventeenth century glimmering on the horizon and needed only to take a good long march in a predetermined direction in order to reach them. Every period has a character of its own. In the sixteenth century nobody had the slightest clue that there was something to prepare for, and if they saw anything glimmering on the horizon, it had little to do with future science. To speak of ‘preparation’ is not to suggest that an inkling was gradually gained of the solutions that would be offered in the century to come. However, people did grow vaguely aware of the problems that the seventeenth century would attempt to solve.

The sixteenth century saw the first signs of interest in those aspects of reality that we now regard as matters for science. The mediaeval Aristotelian worldview thereby gained a new meaning, different from that given to it by mediaeval people themselves. Aspects that were initially of secondary importance came increasingly into the foreground.

DOI: 10.4324/9781003323181-4

There were attempts to answer new questions that arose from within the traditional framework of the mediaeval worldview. After all, no other framework was available. So, to our way of thinking, the science of the sixteenth century often has a rather half-hearted character.

It was a period of great intellectual activity. There was a tremendous urge to explore the world. People were dissatisfied with the knowledge passed down to them, dared to ask radical questions and did not baulk at providing unconventional answers. In this period fundamental discoveries were made that would prove of great importance to the intellectual revolution that was to take place later, in the seventeenth century.

Yet the underlying view of the world remained in broad terms that of the Middle Ages. Essential features of the mediaeval worldview simply could not be called into question. They were regarded as so self-evident that people were barely aware they were assumptions that might be open to debate. Explanations always require a certain theoretical framework, and no viable alternative was at hand. The outlines of one would not emerge for some time to come, and even then only very gradually.

In light of all this, the sixteenth century was less a period of advancement than of critical investigation. It was the period in which the Aristotelian world system, as described in the previous chapter, began to fall into discredit without anything as yet developing that could take its place. On the one hand this produced uncertainty, in response to which some people clung all the more firmly to the certainties of Aristotelian philosophy. On the other hand, it inspired all kinds of intellectual experiments, and a few of these led to important new discoveries, although at the time it was often unclear which of them were important and which were not.
It was only after the events of the seventeenth century that a new standard was set according to which the discoveries of the earlier period could be evaluated.

NEW INTELLECTUAL CURRENTS: HUMANISM AND HERMETICISM

The causes of the intellectual movement of the sixteenth century are complex and still unclear even today. Voyages of discovery undoubtedly had a powerful effect. In India and America, Europeans encountered a world completely unknown to them. Printing also had
an extremely important part to play, making it possible to disseminate and compare information far more quickly and accurately (and above all far more cheaply) than before. This often reinforced traditional knowledge, which now spread more easily by means of a variety of teaching methods and textbooks, but the fact that access to books increased also made it easier to criticize their content.

There were important intellectual developments, too. The upkeep of the Aristotelian tradition was above all a task for the universities, whose curriculum was largely determined by the Church. The Aristotelian worldview was therefore closely interwoven with official theology. Towards the end of the Middle Ages, however, a new sort of scholarship arose that was independent of both the universities and theology. It was mainly engaged in by people working as physicians, lawyers or government officials. Scholarly studies were not a source of income to them, but neither were they an informal pastime. We are dealing here with a new cultural ideal, a movement known as humanism.

Having begun in the Italian city states, humanism spread all across Europe in the sixteenth century. We might describe it, in brief, as a programme to revive the civilization of classical antiquity through the study of classical writings. There was therefore a humanist ideal – the revival of classical antiquity – and a humanist method: the study of classical writings. The latter does not sound particularly scientific, nor was it. Humanist textual study was above all linguistic in nature. Of course, the humanists also studied the scholarly works of antiquity, but at first they were interested more in their language and style than in their content.

The aim, the revival of the civilization of classical antiquity, involved far more than this. The humanist ideal extended into all fields of higher culture, which explains why interest in the content of ancient writings gradually increased.
The achievements of the ancients became the focus of literature, art, philosophy and what we now call science. This meant among other things that countless ancient scientific works were dusted off and valued at their true worth. They were old, but by no means always outdated. The science of classical antiquity had reached a higher level at many points than that of the Middle Ages. Precisely by looking back, humanism was able to promote the rise of modern science.

Humanism could summon little appreciation for mediaeval Aristotelian philosophy. The scholastics had written in rather shoddy Latin by humanist standards and did not know Greek at all. Careful study of the sources soon made clear that mediaeval scholars had distorted Aristotle in many respects and interpreted him high-handedly. Furthermore, the humanists brought to light so many dissenting ideas expressed by other classical authors that the works of Aristotle could no longer be regarded as uniquely valid. In the long run, therefore, the humanists encouraged a critical attitude to traditional knowledge.

The influence of humanism on the study of nature should not be exaggerated, however. Humanism was very much an elite movement of the extremely well educated, such as the higher clergy, civil servants and courtiers. Most had simply no interest in issues of a scientific nature. Of greatest importance to them was how one could become a good person. Aside from questions concerning purely linguistic matters, in which textual study is inevitably central, they therefore focused mainly on matters of morality and politics. Naturally these might provide some insight into reality, but they did not require an exhaustive study of the natural world.

We should make an exception for the medical profession. As we have seen, knowledge of nature had always been important in medicine. During the Renaissance, educated physicians, especially in Italy, applied humanist insights to medicine and the knowledge of nature. The surviving texts by Aristotle and Galen were corrected linguistically and there was a painstaking search for writings by other authors. Aside from the nurturing of a critical stance towards Aristotle, humanism is therefore important to the history of science above all because of the boost it gave to the development of medical studies.
For the advances that led to the later Scientific Revolution, the work of physicians who had received a humanist education was particularly important. They included Copernicus, Vesalius and William Gilbert.

A second intellectual movement dissatisfied with old-school philosophy arose out of the increasing religious consciousness of the time, partly under the influence of the Reformation. Several thinkers criticized Aristotle because they believed his ideas to be in conflict with the Bible. As they saw it, Christians should derive their
learning from Holy Scripture, not from heathen philosophers. In some cases, this criticism extended to natural philosophy. Knowledge of nature was to be gained as far as possible from the Bible, especially from the story of the creation. This pursuit is sometimes referred to as biblical or Mosaic physics.

Religious inspiration was also responsible for the fact that in this period people increasingly turned to the writings of Plato, Plotinus and their schools. These were then interpreted along Christian lines. An important current of thought in the Renaissance even claimed that Plato learned his philosophy in Egypt from Moses himself. Some other ancient writings were similarly reinterpreted. One of the most highly valued authors was the legendary Egyptian priest-philosopher Hermes Trismegistus. We can be confident that no writer of that name ever existed. The works said to have been written by him are, in reality, a late-classical corpus of philosophical and technical texts of unknown origin, related to Neoplatonism. Renaissance scholars regarded Hermes as an ancient Egyptian sage who had acquired his knowledge directly from Moses, Adam or God.

The Neoplatonism or Hermeticism of the Renaissance was more interested in the human soul than in the material world. Its direct contribution to the study of nature is therefore slight, but it did introduce a few new accents. Aristotelianism was a rational philosophy, in which reality came to be known by means of experience and logic. Neoplatonism had a far more mystical cast to it: the human spirit already shared in the divine spirit, which meant that humans had a direct means of understanding reality. To the extent that Neoplatonist philosophers occupied themselves with the world at all, they tried to gain insight into its hidden forces and relationships. Parts of the cosmos were seen as influencing each other through sympathy and antipathy.
Neoplatonist philosophers saw themselves more as magicians than as natural scientists. The term ‘magic’ was emotionally charged then as it is now. Traditionally it had been assumed that magic was possible, although only with the help of the devil. But since the late Middle Ages some philosophers had applied a broader definition, according to which a philosopher could bring about magical effects through knowledge of the hidden properties of things. This was referred to as ‘natural magic’. Theologically there was no fault to be found with that. Some
Hermetic philosophers went further, however, and sought contact with planetary spirits, heavenly intelligences and angels.

Mathematics had a special part to play in attempts to gain knowledge of the hidden nature of reality. Renewed interest in Plato’s doctrine brought along with it his ideas about the value of mathematics. Particular attention was paid to astronomy. Contemplation of the eternal course of the heavens could help people to rise above earthly things and find the heavenly fatherland. In the sixteenth century, astronomy was no longer simply regarded as a practical accomplishment, the art of drawing up horoscopes; instead, it was once more engaged in at a more abstract level, as pure science.

General intellectual currents like humanism and Neoplatonism placed a firm stamp on intellectual life, but mostly they were quite indifferent to the study of the natural world. A goal-oriented and coherent programme of research on nature did not exist in the sixteenth century any more than it had in the Middle Ages. There were several aspects, however, that received more attention. In the sixteenth century we see the rise of several fields that are of direct importance in this respect: mathematics, natural history and to a certain degree philosophy. Each of the three developed into a more or less independent programme in the sixteenth century, separate from the old scholarship of the Aristotelian school and distinct from (although often partly inspired by) the humanist ideal.

Natural history could be described as the collecting of facts. Mathematics, as it had been in antiquity, meant working with quantities; it included astronomy and what we would now call engineering. Philosophy remained the search for causes and qualities.
We are most assuredly not referring here, incidentally, to the kind of philosophy taught at the universities but to a new exercise of philosophy that was above all the work of independent scholars – physicians, government officials and the like. Natural history too was largely the work of physicians. Mathematics is by its nature mainly the concern of dedicated experts, but in the sixteenth century the field was still emerging, and it was so vaguely described that its practitioners cannot really be placed in a single category. The fact that we are dealing here with specialized pursuits does not mean that people of the time saw these scholars as engaged in a
coherent scientific programme. For many people, natural history had more to do with history than with nature, and an educated physician probably regarded his trade as closer to classical philology than to the work of a land surveyor. Nor can the subjects themselves easily be regarded as science in the modern sense, although fertile ground was laid for later developments. People started to look at nature with fresh eyes, so the truths they had traditionally been taught were no longer self-evident.

NATURAL HISTORY AND MEDICINE

‘History’ is a Greek word (historia) that originally meant nothing more than ‘inquiry’. So ‘natural history’ is the seeking of knowledge about nature. It was not understood to mean the investigation of the causes or first principles of natural things; that was the concern of physics. Rather it was about the cataloguing of all kinds of natural phenomena and the describing of their characteristics. The resulting descriptions could later be used for the benefit of physics, but also of medicine.

In the Middle Ages such natural history barely existed. Mediaeval bestiaries were fairly uncritical enumerations of animals, and their purport was more moral than zoological. The animals were presented as instructive symbols. In fact, whether or not they truly existed was of little importance. However, genuine interest in the world as it actually was arose in the Renaissance. Real ‘histories’ were written, in which as many facts as possible were brought together and a certain critical sense developed. For the time being, this criticism was mainly literary. The object of research was not nature itself but the writings of classical authors.

Writers of natural history took as their great example the Roman author Pliny, of the first century of the common era, who in his Historia naturalis looked at a great many animals, plants, precious stones and other natural phenomena, even including works of art. His subjects are extremely diverse. He gives practical instructions for the growing of crops or for their use as medicines, but also tells a story about the phoenix, for example, which he himself remarks may be merely a legend. The bird was said to live in Arabia and to be engulfed in flames every 540 years before rising out of the ashes.

However, even things that Pliny writes about as confirmed facts often sound rather like fairy tales. He tells of a small fish, the remora, that can block the passage of entire ships, and claims that elephants are not only very intelligent but deeply religious.

The work of Pliny was published very soon after the invention of printing and a great many editions and translations of it were produced. Generally speaking, nobody dared to criticize it on a factual basis. Pious mediaeval fables were no longer unhesitatingly believed, but the claims of ancient authors were sacred. In imitation of Pliny, people started writing their own natural histories, along the same lines, collecting not what we would now call scientific facts but mainly things that classical authors had said on the subject.

In some fields, Pliny’s work clearly had shortcomings. The exploration of the New World from 1492 onwards and the many strange stories told of it caused a great hunger in Europe for descriptions of the wonders of the newly discovered lands. Here lay a problem for the describers of nature. The animals, plants, peoples and other phenomena found there were still so new to Europeans that nothing at all had been written about them. The tried and tested method of adopting a description from older, especially classical sources was of no use at all. If anyone wanted to give a description of the wonders of the New World – and interest in them was immense – then there was no other option but to travel there. For knowledge of this exotic world, people were dependent on information from the Indigenous populations or from their own observations and research.

Initially this method was the product of pure necessity, but gradually people began to understand the advantages of the process, leading to a new attitude in the study of nature, even in fields where the classics had previously been relied upon.
This applied to the geographical description of countries as well as to the knowledge of plants, animals and minerals. The classics remained important, but people were increasingly critical of what they found in ancient writings. They also tried to acquire as much first-hand knowledge as possible. This did not transform natural histories into modern works overnight. Many of their descriptions came about as part of an effort to satisfy the public’s thirst for sensation, rather than what we would now call a scientific approach. To use a modern analogy, the natural histories of the Renaissance were often more akin to tourist travel
guides than to scientific treatises. Their main purpose was to allow the reader to encounter any number of remarkable curiosities. The more peculiar, puzzling and bizarre a phenomenon was, the more interesting people found it. The reasons why something attracted attention were many and various – its rarity, its odd appearance, the extraordinary characteristics claimed for it, or an interesting legend. This reflected not only the taste of the general public but also that of the educated elite.

People were not satisfied by hearing stories; they wanted to see all these wonders with their own eyes, partly of course because they realized how unreliable reports of natural phenomena could sometimes be. Princes and nobles, and scholars too, took to compiling cabinets of curiosities, in which fascinating freaks of nature were brought together. They regarded the world as above all a large collection of objects of interest. It was precisely the extraordinary and the bizarre that revealed the miraculous forces of nature.

This attention to rarities does not alter the fact that some people carried out systematic research, in which all new objects were investigated. Typical of the era was a great urge to collect facts in every possible field. The German physician Georg Agricola, who worked in a mining region, investigated the underground world both by studying the writings of classical authors and by looking at the experiences of miners. He wrote about rocks, minerals and metals, but also about diseases suffered by miners, about underground water, underground animals, and the techniques and machinery of mining. The Swiss physician Konrad Gessner treated zoology systematically. Both the ancient works and the everyday experiences from which these writers derived their conclusions contained not just facts but fabrications, and it was difficult to tell one from the other. The new empirical attitude was learned only by trial and error.
In the writings of this period, we quite often find a modern-seeming critical outlook right next to what now looks like astonishing credulity. Agricola, for example, sharply criticized the widespread astrological explanation for all kinds of underground phenomena, but he also believed in goblins.

There were certain areas where people paid hardly any attention to rarities and where a more objective stance prevailed. These were mainly the fields that had traditionally been seen as auxiliary studies that were of use in medicine, such as botany. Knowledge of plants was necessary for anyone preparing medicines, and botany took a
reasonably objective approach even in the Middle Ages. Mediaeval doctors had in the main derived their botanical knowledge from ancient writings, especially the work of Dioscorides. In the Renaissance the botanical work of Aristotle’s pupil and successor Theophrastus (Theophrastos) was rediscovered. These two writers naturally tended to describe the plants of their own classical Greece.

In the sixteenth century, physicians no longer confined themselves to this kind of bookish learning. The first serious work on the flora of the New World was published by Nicolás Monardes, a physician from Seville. Monardes had never been to America, so he had to base his work on reports from travellers and on the seeds and dried plant specimens brought to Seville. His book was therefore above all a manual for apothecaries. The properties of the plants were described entirely according to the Galenic system used at the time.

However, medical practitioners also became interested in the plants in their own European environment, to which they had far easier access. This kind of work is a good deal less sensationalist than most other natural histories. The plants were accurately described and often depicted in drawings. Instead of innumerable legendary qualities, their medicinal properties received most attention. Of the many botanists of the time, three Flemings, known as Clusius, Lobelius and Dodonaeus (de l’Écluse, de l’Obel and Dodoens), are still the best known, both because of their meticulous studies and because of the beautiful illustrations in their books. The readership for these ‘herbals’ consisted not just of physicians and apothecaries but of rich nobles and citizens who took pleasure in expensive gardens.

Botany increasingly became an independent branch of learning, separate from its purely practical use in medicine. Clusius, for example, also wrote at length about toadstools. This work was supplementary to that of classical writers.
The new empirical method in natural history added knowledge to that which could be found in classical writings, but did not refute it. Ultimately, however, such research was bound to lead to the discovery of errors by the experts of antiquity, who had been close observers but not infallible. Faith in the authority of the classics was eroded as a result, and gradually other elements of the worldview of antiquity came under fire. The most striking example of the march of new empirical knowledge is found in another branch of scholarship that was


THE SCIENTIFIC REVOLUTION

auxiliary to medicine, namely anatomy. Knowledge of the internal organs can generally be acquired only through dissection, in other words by cutting corpses open. In many societies, including those of classical antiquity, this was taboo. (Only in Alexandria, in the Musaeum, was the opportunity to carry out dissection briefly available.) Galen therefore derived his anatomical knowledge mainly from the dissection of animals, and of course this produced unreliable information about human anatomy. Nevertheless, in the late Middle Ages his work was seen as the last word on the structure of the human body. In the late Middle Ages in the Italian city states, a cautious tradition grew up of demonstrating the content of ancient texts visually. The bodies of executed criminals were dissected in public for that purpose, while a medical professor gave an account of what was being shown. The classical authors retained their authority, since such autopsies were not intended to show anything that conflicted with what they had written. But in 1543 the young physician Andries van Wesel, or Andreas Vesalius as he was known in scholarly circles, published a book called De humani corporis fabrica (On the Fabric of the Human Body) in which he gave precedence to observation over the ancient authorities and on that basis openly contested the accuracy of many of Galen’s descriptions. Vesalius, who came originally from Brussels, was at that time a professor at the University of Padua; he later became personal physician to Holy Roman Emperor Charles V. His opposition to Galen was based on his own research on a great many human corpses. It had taken him immense efforts to get hold of a sufficient number of them and his methods of doing so were probably not always what might be described as tasteful. Dissection itself, in a time without refrigeration or mortuaries, was hardly an enjoyable endeavour. Sometimes he was forced to keep human body parts in his bedroom for weeks. 
The results, however, made it all worthwhile. De humani corporis fabrica is a classic, not just because of its contents but because of its execution. The internal structure of the human body is presented to the reader in many beautiful, large-format prints. Based on his own observations, Vesalius was able to show that many of the descriptions by Galen were simply incorrect. This was not a matter of fundamental philosophical questions, or of the workings of
the body as such, but of simple factual observations. Galen had written, for example, that the breastbone was made up of seven pieces, whereas Vesalius determined that there were only three. Another point of criticism concerned precisely where certain blood vessels flowed to, or how certain muscles were constructed. For people without a medical background this may have amounted to arcane hair-splitting, but Vesalius’ work dented the reputation of ancient medicine severely. A fierce debate arose. Traditionalist physicians disputed Vesalius’s findings. Others, by contrast, carried his work forward. For the time being nobody contemplated questioning Galen’s more fundamental notions. They did however begin to cautiously collect new material concerning the workings of the human body. A new approach was introduced by Italian physician Santorio Santorio (or Sanctorius). He wanted to hone the Galenic theory of the humours by making quantitative assessments. He weighed himself on a scale and kept precise account of how his weight changed during the day under the influence of nourishment, excretion and a range of daily activities. It led to the discovery that weight loss due to ‘invisible transpiration’ (sweating and breathing) was far greater than anyone had suspected. From pure anatomy the attention therefore shifted to the question of how the living body functioned. The work of English physician William Harvey is seen as the most important contribution to the modernization of medicine. Harvey had studied at the University of Padua for several years and he probably acquired many of his innovative ideas there. He did not publish them until 1628, however, in a book entitled De motu cordis et sanguinis (Anatomical Account of the Motion of the Heart and Blood). Classical medicine asserted that the blood flowed back and forth in the arteries. 
Harvey was the first to propose the theory (still adhered to today) that the blood, under the influence of the heart, circulates throughout the body. From the heart it flows into the arteries and from there into the veins, which carry it back to the heart. The theory was audacious, since how the blood got into the veins from the arteries was still entirely unclear; the connection through the capillaries was invisible to the naked eye. Only later was that part of Harvey’s theory confirmed. Like the work of Vesalius, Harvey’s theory was the subject of fierce debate among medical experts. Understandably so, since it overturned the entire
way in which people had thought about the functioning of the human body up to that point. In medicine, therefore, important new insights arose, but the meaning of all these new facts was still obscure. It was clear that the ancient theories of Galen required revision, but that was not necessarily to say that people wanted to jettison them altogether. Harvey’s own ideas about how the body worked remained thoroughly traditional. The conflict that had arisen about anatomy and the theory of the circulation of the blood did bring with it a sense of uncertainty, but exactly which direction would need to be taken as a result remained at that point impossible to predict.

MATHEMATICS AND ‘NATURAL MAGIC’

In natural history, then, a new way of looking at nature gradually developed, arriving at results that did not always prove compatible with previously dominant teachings. Something similar happened in mathematics. In the sixteenth century the mathematical way of looking at nature increasingly developed into an alternative to the philosophical approach. As we have seen, the Middle Ages did not have a mathematical tradition in the true sense. Certain elementary skills were of course to hand, but they were mainly a matter of adding and subtracting. In the sixteenth century, however, mathematical knowledge started to become far more relevant to society as a whole. The great sea voyages of the period created a demand for navigational instructors and cartographers. The increase in trade brought with it a need for financial experts who could teach bookkeeping as well as arithmetic. A more accurate system of collecting rents and taxes was introduced, which meant there was extensive demand for surveyors. Outside the sphere of economic life too, mathematics gained in importance. Developments in painting, for example, brought with them the concept of linear perspective. As a consequence of these social developments, mathematical experts were given a chance to make their mark, all the more so because generally they were not bound by the restrictions imposed by the guilds. At first they found their raison d’être, and their livelihoods, in their capacity to solve purely practical problems. Nonetheless, some
arithmeticians also turned their attention to more theoretical questions. Sometimes this was purely a matter of self-promotion; they wanted to show they had intellectual abilities that surpassed those of their immediate competitors. It was, therefore, not unusual for arithmeticians to keep the method of solving some problems a carefully guarded secret, which they would reveal only in return for payment. But others placed their knowledge centre-stage rather than only follow their professional interests and thereby set themselves up as scholars of a new kind. This happened first in military science. The introduction and large-scale deployment of cannon in warfare in the late Middle Ages necessitated many different adjustments to fortress design. The new designs were based on geometrical shapes and therefore required a knowledge of geometry. In the Italian city states, and later elsewhere, a category of military experts with mathematical training emerged: the engineers. Because of their military backgrounds (some were even of noble blood), they were held in far greater esteem than ordinary arithmeticians. Fortress design became one of the largest branches of mathematics. Military engineers were self-confident types. They were soon busying themselves not just with the design of fortresses but with all kinds of other military problems to which they could apply their theoretical knowledge: the design of siege engines, the positioning of the guns, the layout of army camps and so on. Before long, similar figures emerged in non-military fields. The Renaissance was a time of great economic change, in which a new, rational approach to problems could sometimes be extremely rewarding. The construction of ports, mining, civil engineering and the solution of innumerable mechanical problems all belonged to the fields tackled by these newfound scholars. In some cases, the results were disastrous, but others had more success. 
The self-confidence of experts was nourished to a significant extent by ancient precedents. Ancient writings about mathematics and mechanics were rediscovered by the humanists, those of Archimedes in the first place, then of the Roman architect Vitruvius and later the works of Alexandrine engineers. The engineers of the Renaissance were able to adopt many of these earlier designs wholesale. They were therefore true scholars, who derived their knowledge from the
writings of antiquity. But even where that was not the case, they knew themselves to be the heirs to a respectable tradition, the practitioners of a profession at which the great minds of ancient times had already excelled. Through all these developments, mathematics, taken to include mechanics, fortress building and so forth, grew to become a serious branch of learning. Even some classically trained scholars occupied themselves with the work of mathematicians who had a low level of education or only practical skills. More and more universities established a separate chair in mathematics. Professors of mathematics never achieved the status of lawyers or classical philologists, but they had clearly moved beyond simple manual work. A nobleman might include a dash of mathematics in the education given to his sons. Scholarly mathematicians acquainted themselves with classical antiquity and attempted to discover, as far as that was still possible, what else the Greek mathematicians had written. (Arab mathematics had fallen into decline in this period and it paid no further heed to the Greek heritage.) A typical example of an ‘engineer’ with a mathematical bent is Simon Stevin. Originally from Bruges in Flanders, he spent almost his entire career in the province of Holland. Initially he busied himself there mainly with the task of improving windmill design, but later he was taken into personal service by the stadholder, Maurice of Nassau, as a military engineer. Among other things, he worked on the construction of harbours and fortresses, and also wrote a great many mathematical works. Some are purely practical in nature, concerning the use of decimal fractions or the calculation of annuities. In other cases, the starting point is a practical problem, but the writing itself mainly concerns the purely mathematical consequences. In terms of theory, his contribution to hydrostatics, in which he continued the work of Archimedes, is considered his most important. 
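Stevin’s practical writings mentioned above dealt with, among other things, the calculation of annuities. The compound-interest arithmetic behind such work is easy to reconstruct; the following is a minimal modern sketch, in which the payment, interest rate and term are hypothetical illustrations rather than figures taken from Stevin’s own tables:

```python
# Present value of an annuity: what a stream of equal yearly payments
# is worth today at a given yearly interest rate, the kind of question
# Renaissance interest tables answered for merchants and administrators.
def annuity_present_value(payment, rate, years):
    # A payment due after t years is worth payment / (1 + rate)**t today;
    # sum this over every payment date.
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical example: 100 guilders a year for 10 years at 6 per cent.
pv = annuity_present_value(100.0, 0.06, 10)
print(round(pv, 2))
```

The loop agrees with the closed form `payment * (1 - (1 + rate)**-years) / rate`; compiling such values for many rates and terms is precisely the service that printed interest tables of the period performed.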
Since these mathematicians tried to apply their knowledge in one fresh field after another, including those that had never previously been regarded as mathematical, new mathematical disciplines were continually emerging. One striking example is the line taken by Italian mathematician Niccolò Tartaglia. Also known for his contributions to algebra, Tartaglia applied mathematical calculations to the trajectory of cannonballs, with the intention of
drawing up guidelines for the more effective aiming of guns. This was ambitious, especially since cannon in his day were so inaccurate that you could really only shoot and hope for the best. Indirectly, however, his work was of great significance. Even though Tartaglia’s mathematical reasoning made no sense at all, it did mean that an old phenomenon was looked at with fresh eyes. In practice his approach led to an entirely new way of studying moving bodies. Tartaglia’s project captured the imagination and was taken up by countless other mathematicians. Even theology was not ultimately spared interference from mathematics. A method of chronology was developed as a consequence of attempts to determine a more precise world history based on data from the Bible. Mathematician Gerard Mercator, best known as a cartographer, used it for a new interpretation of the dates of Christ’s life. In short, the engineers had high aspirations. Nevertheless, if we ask what they ultimately contributed to the development of science, then we can only regard the results as disappointing. The contribution of the mathematicians and engineers of the Renaissance perhaps lies more in their pretensions than in their actual performance. Their ideal of a universally applicable mathematics had a great future ahead of it. Aside from that, their work was of mainly practical significance. It did not lead directly to a better understanding of nature. Two things in particular were responsible for that failure. Firstly, the social factors that enabled the engineering sciences to claim to be a serious branch of scholarship at the same time obstructed the development of a more fully worked out theory. There was a ready market for practical knowledge, but building a career as a theoretician was another matter entirely. The universities did not offer sufficient scope for radical innovation. To achieve an elevated position, engineers were generally dependent on royal patrons. 
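Tartaglia’s project of bringing the cannonball’s flight under calculation was eventually answered by the parabolic theory of projectiles worked out after him. A minimal sketch of that later theory, ignoring air resistance and using an invented muzzle velocity (the formula is the standard vacuum-trajectory result, not Tartaglia’s own reasoning):

```python
import math

# Level-ground range of a projectile with no air resistance:
# R = v**2 * sin(2*theta) / g, the parabolic theory worked out after Tartaglia.
def cannon_range(speed, elevation_deg, g=9.81):
    theta = math.radians(elevation_deg)
    return speed ** 2 * math.sin(2 * theta) / g

# Hypothetical muzzle velocity of 120 m/s at a few elevations.
for angle in (15, 30, 45, 60):
    print(angle, round(cannon_range(120.0, angle), 1))
```

In this idealized model the range peaks at 45 degrees and is symmetric about it (30 and 60 degrees give the same range); Tartaglia had in fact asserted that 45 degrees gives the greatest range, though on grounds that do not survive scrutiny.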
In the Renaissance the royal courts of Europe grew to become centres of culture. They developed a great interest in intellectual issues and interesting new sciences could count on an enthusiastic reception. Many engineers, because of their military function, had traditionally worked in the service of the crown, so they were in a good position to profit from the opportunities presented by the court as an intellectual centre.

Science at court, however, was science of a particular kind. Courtiers found in-depth theoretical discussions tedious. What they were interested in were ‘miracles of nature’. They wanted to be astonished by astounding phenomena, rather than truly wishing to understand them. The royal court, therefore, although it developed into an important centre of knowledge in Renaissance times, did not become a place for the systematic study of nature. Princes and other aristocrats, and wealthy citizens, established cabinets of specimens of natural history and gaped at the miracles of the natural world – the more bizarre the better. But as well as the wonders of nature itself, such as ostrich eggs, fossils and narwhal tusks, their cabinets of curiosities also contained the products of ‘natural magic’. The royal engineers willingly supplied their patrons with mechanical clocks, complex sundials and various other mathematical instruments, although these were mostly showpieces of little practical value. They built fountains and designed all kinds of clever optical games, such as camera obscuras and drawings with misleading perspectives (anamorphoses). Even Simon Stevin was not above placing his skill at the service of princely entertainment. He built a land yacht in which he drove his princely patron along the North Sea beach. This achievement pales beside that of Cornelis Drebbel, originally from Holland and one of the most celebrated ‘magicians’ of the Renaissance, who demonstrated an actual submarine to King James I of England. In short, dependence on the court meant that most of the ingenuity of the engineers was used for the construction of playthings rather than for the advancement of science. Secondly, the engineers themselves did not strive for deeper theoretical understanding. Their preoccupation with ‘miracles’ was not merely a result of their eagerness to please kings and princes; it also conformed to their own scholarly ideal. 
The modern notion of tracing natural phenomena back to simple laws of nature did not exist. Just as natural history saw the world primarily as a collection of individual things, each with its own properties, so natural magic was regarded above all as a collection of separate tricks. Rather than seeking knowledge of nature as a whole, it looked for causes in individual properties. Nature amounted to a vast treasure trove
of miracles and manifestations, in which a stuffed bird of paradise had the same status as a fountain or a sundial. This tendency was further reinforced by Hermetic thinking, which was extremely influential both in courtly circles and among engineers, many of whom did indeed regard their activities as natural magic. People who built astonishing pieces of equipment of various sorts were not merely technicians, as we would call them, but magicians of a kind, commanding the hidden powers of nature and able to use them to create amazing effects. This meant that a distinction was made between such hidden powers and the normal, natural course of things. As a result, the achievements of engineers did not offer any point of departure for a better understanding of nature itself. There were nevertheless a few people who carried out systematic research into the hidden qualities of nature and dared to draw conclusions from them about how the world was structured. Among these exceptions was Englishman William Gilbert. He was not a mathematician or an engineer but a physician, and the subject he chose to study was magnetism. Magnetic attraction was an ultimately inexplicable miracle of nature, but it was also of practical importance since magnets had been used since the Middle Ages in ship’s compasses. Magnetism was no less a puzzling hidden property to Gilbert than it was to his contemporaries, but that did not prevent him from investigating the phenomenon closely, by strictly empirical means. He was able to chart virtually all the important features of magnets. Moreover, based on his own research he determined that the earth itself is a huge magnet. He could therefore be regarded as the discoverer of geomagnetism. Scholars before him had looked to the heavens for the source of the power that moved the magnetic needle, while most seafarers believed there was a huge magnetic mountain at the north pole. For Gilbert this discovery had mainly philosophical consequences. 
The fact that the earth was magnetic meant that it was a living being. Magnetism was a living force that animated the universe, a cosmic striving for unity. Systematic research into a hidden phenomenon like magnetism was justified by a worldview that to our way of thinking has a distinctly magical cast to it. Nevertheless, Gilbert’s
work was seized upon by more practically oriented mathematicians eager to explore geomagnetism further for the benefit of shipping.

ASTRONOMY

To some degree astronomy stands apart from other fields of mathematics. It was a branch of learning that did not first arise in the Renaissance but already had a tradition behind it in Europe. Astronomy dealt with the movements of heavenly bodies as they appear to us. Then there was cosmography, an introductory subject that set out the structure of the universe. It too was seen as part of mathematics. While most other branches of maths were above all of practical significance and were deployed by engineers, instrument makers and so forth, astronomy was regarded as a true science. It had traditionally been engaged in by recognized scholars (mostly medical practitioners), and it enjoyed additional prestige, especially in the Renaissance, because of its exalted subject matter: the heavens. The most important reason for the flourishing of astronomy in the Renaissance was the widespread faith in astrology, combined with the growth of the royal court as a cultural centre. Princes and nobles engaged their own astrologers to advise them on the decisions they needed to make. Court astrologers required a precise knowledge of the movements of the heavenly bodies to be able, at any given moment, to calculate the positions of the planets, to predict eclipses of the sun and moon, and so on. In addition, a prince or one of his courtiers might sometimes want to discuss the more philosophical aspects of what took place in the heavens. In the intellectual sphere, astronomy was given a fresh boost in the sixteenth century as a result of being allied with humanist ideals. By the Middle Ages the original writings of Ptolemy had largely been forgotten. As well as using tables, astronomers used simplified manuals. As early as the fifteenth century, however, Ptolemy’s writings were once again commented upon and disseminated. By the standards of the time, this made it a serious science as it involved the study of ancient sources in the humanist style. 
Yet astronomy was not confined to book learning. Precisely because they were continually having to make horoscopes, astronomers could not neglect the actual observation of the heavens.

In chapter one we saw that there was an inherent conflict between Aristotle’s concept of the universe and the system devised by Ptolemy. While Aristotle demanded that all movement in the heavens must be in regular circles, Ptolemy could explain the movements observed only by introducing all kinds of complex auxiliary structures. This raised the question of the extent to which Ptolemy’s system described the actual makeup of the universe as opposed to merely providing a handy mathematical model. As mathematics, and astronomy in particular, increasingly adopted the mantle of a serious science, more and more fundamental attention was paid to the issue of what status astronomical theories had and how the world was actually composed. The problem was addressed in the most radical way by Polish church administrator Nicolaus Copernicus. In 1543 he published a book called De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres). Copernicus had studied law and medicine in several Italian city states, and it was there that he became familiar with the new scientific thinking of his time and developed an interest in astronomy. Back in Poland he had little time for further studies. As personal physician and councillor to the prince-bishop of Warmia and as canon of Frauenburg (Frombork) he had many responsibilities. He wrote a treatise about the devaluation of currencies, and during a war with the Teutonic Order he was put in command of one of the most important Polish fortresses. At the end of his life, however, he returned to the pursuit of his youth and set himself the task of working out his ideas systematically. De revolutionibus should be seen firstly as a revision of Ptolemy’s Almagest. Copernicus was the first to attempt not merely to equal the work of the ancient astronomers but to surpass it. As such it is a highly technical book, full of tables and calculations. 
Copernicus replaced the mathematical models on which Ptolemy based his calculations with alternative models, the precise nature of which is generally speaking of interest only to specialists. However, Copernicus’s revision did not merely concern technical astronomical detail – it presumed an entirely new structure of the universe. Where Ptolemy had merely tinkered with the order laid out by Aristotle, Copernicus turned the entire system on its head. He regarded the observation that all heavenly bodies orbited the earth in twenty-four hours as a form of optical illusion. In reality it was not
the sky that moved; the earth rotated on its axis every twenty-four hours. This obviated the need to allocate the earth a central place in the universe. Copernicus claimed that the sun, not the earth, was at the centre of the cosmos. His system is therefore called heliocentric (the Greek word for sun being helios), in contrast to Ptolemy’s geocentric system. The planets moved around the sun, each with its own orbital period. The moon was the only exception to this general rule. As in Aristotle’s system, it orbited the earth. The earth itself, however, and this is the most revolutionary aspect of Copernicus’s theory, moved around the sun along with the other planets. Truth be told, it is entirely unclear why Copernicus adopted this astronomical model, which in his time appeared quite peculiar. It is not as if Ptolemy’s astronomy was no longer adequate, nor as if Copernicus could achieve better results with his own model than with those of classical scholars. The observations of the time were insufficiently accurate for anyone to express a preference on those grounds. Furthermore, the model that Copernicus proposed made a nonsense of all the principles of physics that people thought they knew. Its only obvious virtue was what we would now call ‘mathematical elegance’; Copernicus’s model was mathematically more beautiful and simple than Ptolemy’s. The question remains, however, as to why this mathematical elegance was so important to Copernicus that he went against all the traditional truths. In his day it was not at all the standard by which to judge scientific theories. What is clear is that Copernicus wanted his model to be not merely mathematically sound but philosophically true. On the one hand he flouted the entire Aristotelian worldview because it suited him better mathematically to do so, while on the other hand his mathematical model had to comply with certain philosophical principles. This applied especially to the uniformity of circular movements. 
Like Ptolemy, Copernicus faced the problem that the movements of the planets were far more irregular than the model allowed, so that he too had to introduce all kinds of mathematical auxiliary structures to repair it. Unlike Ptolemy, however, he managed to construe all movements based on regular circular movements. It was an achievement that in the first fifty years after the appearance of his book was probably seen as more important than his heliocentric system. In that sense he was more ‘Aristotelian’ than Ptolemy.
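One appearance that any system had to save was the periodic backward (retrograde) drift of the planets against the stars, the very irregularity that Ptolemy’s auxiliary constructions were designed to handle. In a heliocentric arrangement it emerges naturally from the combination of two orbital motions, as a toy calculation shows. The sketch below uses uniform circular, coplanar orbits with approximate modern values for Earth and Mars, a deliberate simplification of both Copernicus’s constructions and the real orbits:

```python
import math

def position(radius_au, period_years, t_years):
    # Uniform circular heliocentric orbit in the ecliptic plane.
    angle = 2 * math.pi * t_years / period_years
    return radius_au * math.cos(angle), radius_au * math.sin(angle)

def apparent_longitude(t):
    # Direction of Mars as seen from the moving Earth.
    ex, ey = position(1.00, 1.00, t)    # Earth: 1 AU, period 1 year
    mx, my = position(1.52, 1.88, t)    # Mars: ~1.52 AU, period ~1.88 years
    return math.atan2(my - ey, mx - ex)

# Sample two years of apparent longitudes and unwrap the -pi/pi jumps.
lons = [apparent_longitude(i / 100) for i in range(201)]
for i in range(1, len(lons)):
    while lons[i] - lons[i - 1] > math.pi:
        lons[i] -= 2 * math.pi
    while lons[i] - lons[i - 1] < -math.pi:
        lons[i] += 2 * math.pi

# Wherever successive longitudes decrease, Mars appears to move backwards.
retrograde = any(b < a for a, b in zip(lons, lons[1:]))
print(retrograde)
```

Even this crude model reproduces the retrograde episodes near opposition, which is why the moving earth could dispense with epicycles for that purpose.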

In other respects too, Copernicus’s worldview remains very traditional. According to his vision, the universe was made up of a number of solid spheres. Each of the planets had its own sphere, by which it was moved, and the cosmos was bounded by the sphere of the fixed stars. Copernicus was a revolutionary, but that is not to say he was a modern thinker. If Copernicus had published his theories as a freestanding thesis, they would have likely passed almost unnoticed. But he published them as part of an astronomical manual that soon became hugely authoritative. As a result, they were soon widely known among astronomers. The system proposed by Copernicus seemed a strange idea to most of them, so they did not pay very much attention to that part of the book. In any case, it was not essential to their actual work. It was possible to perform good astronomical calculations without worrying about the structure of the universe. Those who did show an interest, moreover, were far from convinced by all the points Copernicus made. Generally speaking, they adopted only a few elements of his theory and distanced themselves from the rest. Nonetheless, doubt had been sown, and it invited further research. Of all the attacks made on the Aristotelian worldview in the sixteenth century, Copernicus’s had the most far-reaching consequences. The debate over the correct system occupied minds for decades to come and formed an important touchstone during the introduction of a new philosophy of nature in the seventeenth century. As such, however, this innovation was possible only after many others had joined the debate, bringing new elements with them. In the spirit of his time, Copernicus carried out hardly any observations of his own. Almost all his work is based on observations by the astronomers of antiquity. Impressed by Copernicus’s book, however, people once again started to study the sky with their own eyes. 
In other words, the empirical method, which was proving so successful in natural history, was now applied to the study of the heavens. Danish astronomer Tycho Brahe, a prominent noble who abandoned a prospective career in politics and the law to devote himself wholly to astronomy, built a large observatory he called Uraniborg on the island of Hven in the Sound, with the support of the king of Denmark. Along with a staff of assistants, he spent years
carefully observing the movements of the heavenly bodies. With the help of a series of instruments that he designed himself, he achieved an unprecedented degree of accuracy. His work was interrupted when the Danish king was succeeded by someone less kindly disposed towards him. Tycho was forced to leave Denmark, but he was hospitably received at the court of Emperor Rudolf II in Prague. Tycho Brahe himself was not an adherent of the Copernican system. He recognized that the Ptolemaic-Aristotelian worldview required revision, but he believed there must be a better solution than Copernicus’s heliocentric system. He suggested a combination of the two: the planets moved around the sun, as Copernicus had claimed, not around the earth, but this whole set-up of sun plus planets moved around the earth, which stood still at the centre of the universe. The Tychonic system had its adherents for many years. Tycho Brahe never fully worked out his system mathematically. On his deathbed he handed that task on to Johannes Kepler, not a medical man for a change but a theologian, who had begun a career in mathematics at Graz in Styria, one of the Austrian states. Because of his Lutheran faith, Kepler was forced to leave the country. He sought out Tycho, who was then in Prague and who took him on as his assistant. After Tycho’s death, Kepler became his successor as imperial mathematician and in that capacity he set about carrying out the task assigned to him by Tycho Brahe. Tycho left all his papers and notes recording his observations to Kepler, who therefore had by far the richest and most valuable treasury of astronomical facts anywhere in Europe at the time. Kepler was himself a highly gifted and creative mathematician. The material could not have been in better hands. 
Kepler faced something of an ethical dilemma, since Tycho had made him promise to use the material to elaborate upon Tycho’s own system, and it was impossible simply to ignore the wishes of a dying man, but Kepler was an adherent of Copernicus’s theory. Naturally he was ultimately guided by his own preferences and organized his calculations according to the Copernican system. He did make a half-hearted attempt to see whether it would work Tycho’s way as well, but once he had determined to his satisfaction that it would not, he abandoned Tycho’s system for good.

THE SIXTEENTH CENTURY

Unfortunately, it then became clear that the material was hard to reconcile with Copernicus’s theories too. At this stage, however, Kepler proved more tenacious. For years he persisted in attempting to find a theory that would accommodate Tycho’s observations somehow or other, using calculations that almost drove him mad. He continued to take Copernicus’s heliocentric model as his starting point. Ultimately, he came to the conclusion that the planets did not travel around the sun in circles but in ellipses. In 1609 he published this result in a book with the proud title Astronomia nova (New Astronomy).

Kepler’s findings were fairly shocking. Copernicus had just about managed, after a great deal of effort, to account for all the movements in the heavens on the basis of uniform circular motion by all heavenly bodies. Kepler not only abandoned that regularity but even rejected circular motion as such. It is, therefore, not surprising that his contemporaries, including adherents of the Copernican system, reacted rather warily to his work. Kepler’s ellipses were in conflict with generally accepted ideas about the structure of the heavens, they were extremely tricky to use in calculations, and they were not even found to be expressions of ‘mathematical elegance’. Galileo ignored Kepler’s discoveries completely. Only very gradually did mathematicians develop a greater appreciation for them. It was not until decades later, with the work of Newton, that the importance of Kepler’s work became clear once and for all.

This episode illustrates the degree to which science advances by fits and starts. In the sixteenth century, people had no idea what would later come to be regarded as important. Much of what Renaissance scholars saw as extremely significant has since been totally forgotten. Kepler’s work, which at the time seemed like an esoteric side-track, now occupies a place of honour in the history of science.
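In modern notation, which Kepler himself did not use (he argued geometrically), his first law can be stated compactly; the symbols below are today's conventions, not the source's:

```latex
% Kepler's first law: each planet moves on an ellipse with the sun at one
% focus. In polar coordinates centred on the sun, with \theta measured
% from perihelion:
\[
  r(\theta) \;=\; \frac{a\,(1 - e^{2})}{1 + e\cos\theta}
\]
% a = semi-major axis, e = eccentricity (0 <= e < 1); e = 0 recovers the
% circle his contemporaries wished to preserve. Planetary eccentricities
% are small (for Mars e is about 0.09), which is why circles and epicycles
% had worked almost, but not quite, well enough -- and why only Tycho's
% unusually precise observations could expose the discrepancy.
```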
The findings of mathematicians and astronomers inflicted serious damage on the Aristotelian system. Formally at least, astronomers occupied themselves purely with the description of the movements of heavenly bodies. More fundamental questions, such as those regarding the nature of the heavenly spheres, or the cause of movements in the sky, fell to the philosophers. The boundary between the disciplines became increasingly vague, however. The mathematicians ventured

to make increasingly far-reaching statements about the properties of the universe. The young French mathematician Jean Pena investigated the Aristotelian system with the aid of the rules of mathematical optics. It was known that rays of light are refracted when they pass from one medium into another. Pena concluded that light coming from the stars would have to be refracted by the different spheres that, according to Aristotle, surrounded the earth. On its way to our eyes, after all, the light had to pass through the heavenly substance, then cross the sphere of fire and finally move through the air. The fact that we do not observe this refraction meant that Aristotle was wrong, Pena said: there is no sphere of fire around the earth, and the air reaches all the way to the stars. Pena therefore showed that Aristotle’s worldview could be tested by mathematics. He was better at reasoning than at observing, however; the atmosphere does in fact cause a refraction of the light that reaches us from elsewhere in the universe.

The precise observations of Tycho Brahe represented further inroads into Aristotle’s doctrine. Aristotle had taught that the heavens were unchanging, but in 1572 Tycho unexpectedly discovered a new star in the sky that remained visible for several months. (We now know it to have been a supernova.) To preserve the invariability of the heavens, Aristotle had placed the comets, which appear, move, and disappear, in the sublunar world. He believed they were created by terrestrial exhalations that rose into the sphere of fire. When Tycho and others looked at the comets from a variety of angles, however, they were seen to be further from the earth than the moon. They were therefore superlunary, heavenly bodies.

Ironically, it was Tycho’s conservatism that forced him to innovate further. Copernicus had introduced radical ideas partly to be able to preserve traditional notions such as the circularity of the heavenly spheres.
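Pena's reasoning can be restated with the modern law of refraction, which postdates him (he worked from the optics of Ptolemy and Witelo); the notation here is an anachronistic convenience:

```latex
% Refraction at a boundary between two media with refractive indices
% n_1 and n_2 (Snell's law):
\[
  n_{1}\sin\theta_{1} \;=\; n_{2}\sin\theta_{2}
\]
% If the heavenly substance, the sphere of fire and the air were distinct
% media, then n_1 \neq n_2 at each boundary and starlight would be bent,
% shifting the apparent positions of the stars. No such shift was observed,
% so, Pena argued, the boundaries do not exist. He overstated his case:
% the atmosphere itself refracts starlight, displacing objects near the
% horizon by roughly half a degree.
```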
Tycho did not want to go along with this radicalism. But in the alternative system that he then developed, the crystal spheres that carried the planets in their orbits ran through each other in a way that was unacceptable. At that point Tycho decided that the spheres did not exist and that the planets moved through space without any such support. The most audacious innovations in philosophical cosmology were undertaken by Johannes Kepler. In his Astronomia nova he wanted to

present not merely a new astronomical theory but a ‘physics of the heavens’. In other words, he believed it was insufficient to describe the movements of the planets; he wanted to know why they moved in the way they did. The force that moved the planets was applied by the sun, he claimed, and it was magnetic in nature. In this he was influenced by the ideas that William Gilbert had published not long before about magnetism as a cosmic principle.

Still, Kepler was not satisfied with an explanation based only on physical principles. He was deeply convinced that the mathematical relationships he discovered in the world were not the result of chance but had their foundations in God’s spirit. God had organized creation according to measure, number and weight, and therefore the same relationships could be found everywhere. The relative proportions of the paths of the planets were, he claimed, identical to those between the intervals in musical theory, and they could be converted into certain relationships between the regular polyhedrons of spatial geometry. Everything was connected to everything else since God had created the world according to a unified plan. Kepler tried all his life to corroborate this belief with proofs and observations. To a great degree it inspired his astronomy.

Kepler was also one of the few astronomers to explain the consequences that Copernicus’s system must have for astrology. Contrary to what we might expect, the Copernican system was not a death blow to the astrological tradition. Of course, even in the sixteenth century there were people who wanted nothing to do with astrology, but for religious or philosophical rather than astronomical reasons. The idea of an orderly cosmos in which the heavens acted upon the earth was not eroded by the theories of Copernicus. Kepler too remained an assiduous adherent of astrology.
He did believe, however, that astrological theories needed to be adapted to the new picture of the cosmos and as a result he revised ancient theories in this field fairly thoroughly.

PHILOSOPHY OF NATURE

Research by physicians, engineers and astronomers produced a growing quantity of material that conflicted with traditional beliefs and yet somehow or other had to be integrated into their worldview.

What was the effect of all this new information on the ideas that scholars had about the realities of nature? As far as philosophy as a university study is concerned, not much changed in the sixteenth century. Aristotle’s work was taught just as before. This conservatism is understandable, since academic philosophy was intended only as an exercise in logical thinking and to give students a certain basic knowledge, not as a way of investigating all kinds of new and controversial subjects. Although there were criticisms to be made of Aristotle’s theories, they formed a well thought through, cohesive whole. For education they were irreplaceable.

Among humanist scholars such limitations did not apply. In their circles people were certainly aware of the problems inherent in the old worldview. Those difficulties were only partly the consequence of discoveries in natural history or mathematics, incidentally. The voyages of exploration, the newly discovered writings of classical antiquity and the schism in the Church were at least as important. The framework within which the results of research were placed was therefore not scientific in nature. The worldview of the humanists was largely determined by their study of classical writings, as we have seen. To the extent that they felt a need for a new worldview, they looked above all for something old.

An alternative to Aristotle was sought first in other writers of classical antiquity. The ideas of Plato, Plotinus, Pythagoras, the Stoics and others were dusted off and investigated for their relevance. The older the writer the better. It was usually assumed that in the distant past a golden age with a perfect understanding of the world had existed, which had then gradually fallen into decline. This explains why scholars took the trouble to reconstruct all sorts of ancient Egyptian or Babylonian wisdom. Most discoveries, therefore, were not presented as new departures but as the recovery of older truths.
Copernicus pointed out that centuries before Ptolemy some Pythagorean astronomers had defended the idea that the earth moved. The finding that comets were superlunary phenomena was in conflict with Aristotelian theory, but it was perfectly compatible with the ideas of other ancient philosophers, the Stoics for example. Where such clear precedents did not exist, it was always possible to find an obscure reference by some half-forgotten ancient writer that could be interpreted in the required sense. After Harvey’s discovery of the circulation of the blood, writers immediately appeared who claimed that Hippocrates had known about it centuries before. If the ancients knew everything already, then the obvious thing was to look first in their work for an alternative to the Aristotelian worldview, for ideas about the nature and causes of reality. But since everything could be found in the classics, given the appropriate interpretation, scholars need not place many limitations on themselves. As a result, the sixteenth century was in fact a time of great philosophical experimentation.

The ideas put forward in the sixteenth century about the world were often somewhat metaphysical in character. Most have little to do with our contemporary concept of science. I now want to discuss a few cases of innovators who explicitly concerned themselves with physical reality. The study of such examples teaches us above all how difficult it was, despite a clear desire for innovation and an overwhelming quantity of material, to throw off settled habits of thought that were regarded as self-evident. Breaking down is always easier than building up.

One of the most famous innovators of the sixteenth century was the Swiss physician Paracelsus or, to give him his real name, Theophrastus von Hohenheim. Paracelsus was an extremely controversial figure and even today he prompts a range of contradictory feelings. He was a practicing physician, so the innovations he championed were not the abstract constructs of an academic in an ivory tower but had their origins in his medical practice. He was the first to experiment with chemical medicines. This does not alter the fact that in his writings he dealt with the entire visible and invisible world. He was a fierce opponent of Aristotle, but his own worldview is difficult to summarize. It was at any rate not scientific in the modern sense.
It was a strange mixture of religion, magic, astrology, alchemy, popular superstition and much else besides. One of its essential elements was the idea that the human body is a microcosm, which is to say that the body in its structure and workings reflects the structure of the universe or macrocosm. To acquire knowledge of the world, therefore, we only need to learn about ourselves. Paracelsus’s ideas inspired fierce opposition, but he also had many admirers.

Researchers like Gilbert, with his magnetic philosophy, or Kepler, with his search for cosmic harmony, were in fact also in search of a new worldview. The example of Kepler is instructive, since it teaches that the mathematical approach to reality which gained ground in the sixteenth century did not automatically open the door to a modern understanding of nature. Kepler was inspired mainly by ancient Pythagorean notions about the importance of numbers. Mathematics was more than just counting and measuring. The philosophical assumptions on which people based the belief that reality could be understood by mathematical means were not those of modern science but rather those of Platonic or Pythagorean philosophy. The world was founded on a divine order, which could be understood in mathematical terms. So, the mathematician did not seek random mathematical connections, nor an elegant mathematical solution, but a description in terms of beauty and harmony that was an expression of that divine order. Hence in practice the mathematical method often had more to do with numerical mysticism than with precise measuring and calculating. In natural history too, the concept of order was central. To the extent that those who described nature had pretensions that went beyond merely the gathering of information that was useful in, for example, medicine, they were above all intending to show that nature was a mirror of God’s order and, as it were, a book in which God demonstrated His power and goodness. Everything in the world had a place and a function given to it by a higher power. Things therefore meant more than was apparent at first sight. They had hidden meanings and taught moral lessons. To track down such hidden meanings, the route to be taken was not that of the modern scientific method but of an allegorical, historical or linguistic interpretation. The most ambitious writer of natural history was the English statesman and scholar Francis Bacon. 
He wanted to replace Aristotle’s logic, which was taken to be the heart of science, with a fully worked out ‘natural history’. According to Bacon, experience rather than reason was the foundation of science; we can gain knowledge by collecting as many facts about reality as possible. Bacon’s concrete advice in this respect, however, tended to be impractical. He rarely put his own basic assumptions into practice and his ideas about how

nature was constructed were extremely speculative. He was inspired by Paracelsus among others. The distinction between the superlunary and sublunary worlds, and the relationship between them, was central to his thinking. In short, the reform of the Aristotelian worldview ground to a halt for lack of a firm foundation upon which such an undertaking could be based. Even mathematics and natural history, the standpoints from which the traditional worldview was fiercely criticized, continued to be an unmistakable part of that same worldview. None of the many alternatives to the Aristotelian worldview that were developed were ultimately viable, even though they could sometimes lead to unexpected and valuable discoveries. In the absence of a sound philosophical foundation for a new worldview, the image of the world in the sixteenth century was above all that of a cabinet of curiosities, a collection of astonishing wonders that had little to do with each other, but which as a whole were a reflection, in one way or another, of divine order in creation. To depict that world as a cabinet of curiosities is not to stretch the point. The people of the sixteenth century themselves presented their cabinets as very much a world writ small. But however impressive their collections may have been, philosophically such an accumulation remained unsatisfactory. In the field of philosophy, therefore, the sixteenth century was above all a period of trial and error. At the same time a huge process of sifting and selecting took place. Concrete discoveries and some relevant theories were seized upon as valuable and came to be part of widely available scientific knowledge. But many, very many, were rejected even by contemporaries and consigned to oblivion. Nevertheless, for our story it is important to take brief note of this period of ferment, and the many fantastical and indefensible theories it produced. 
Not just because as people experimented they sometimes arrived at new ideas or results that were to prove fruitful, but mainly because it enables us to see that the history of ideas did not move in a straight line. It is simply incorrect to claim that the Aristotelian worldview reigned supreme before eventually being dethroned by modern science. It would be more accurate to say that the Aristotelian worldview had lost much of its standing by the sixteenth century,

for reasons that have little to do with modern science. Many people were in search of a new worldview and in many fields alternative ideas were launched. The range of those ideas was extremely broad, but if the thinkers of the sixteenth century have anything in common, it is that none of them were ‘scientific’. Only in this atmosphere of searching and feeling out could a new worldview be formulated in the seventeenth century that quickly and completely eclipsed its rivals. With it the foundations were laid for the modern scientific vision.

3. THE SEVENTEENTH CENTURY: A NEW WORLDVIEW

Amid the array of new discoveries and theories produced during the sixteenth century, it was hard not to lose your way. As time went on, the old worldview became less and less satisfactory, but it was far from clear in which direction something better should be sought. Nevertheless, among the many new discoveries there were a few that had such a profound impact that they came to function as signposts for subsequent thinkers and researchers. The question of how the heavens were constituted became the central issue in thinking about reality. Essential to Aristotle’s theory was the distinction between sublunary and superlunary nature. Copernicus had classified the earth as one of the planets. It was a wild and unproven idea, but his work had placed the structure of the universe firmly on the agenda. Still, even for him, or for the few who adhered to his ideas, the cosmos was arranged according to a divine order, in which the heavens were distinct from the earth.

DOI: 10.4324/9781003323181-5

GALILEO AND A NEW VIEW OF THE HEAVENS

What changed ideas on the subject radically was the invention of the telescope in the early seventeenth century. It brought an end to utter dependence on abstract reasoning or mathematical calculation when

it came to the nature of the heavens, since people could now examine heavenly bodies more closely with their own eyes. This did require a new habit of thought. Instruments in themselves do not bring innovation; only the people using them can do that. The invention of the telescope was important not for its introduction as such but for the discovery that it offered a new view of the world, one that was of relevance not least to philosophy.

It seems the first useable telescope was constructed in the early seventeenth century by a maker of spectacles in the Dutch province of Zeeland. A telescope is made simply by placing two lenses one behind the other at the correct distance (in the earliest telescopes, one lens was convex and the other concave). Lenses and spectacles had existed for centuries and the idea of looking through several of them at once must have presented itself often enough, but the quality of lenses was so poor that no combination had ever produced worthwhile results. Only when someone had the idea of adding a diaphragm was a useable telescope produced. Once invented, the telescope quickly attracted attention, but mainly as a new form of ‘natural magic’. People realized that it might have applications in warfare, but it was not obvious that it could also be used in the study of nature. The instrument would probably have remained no more than a plaything for princes and nobles had it not been for Galileo.

Galileo Galilei was an Italian mathematician. Right at the start of his career he took a dislike to Aristotelian natural philosophy. There was nothing extraordinary about that. But in his search for an alternative, Galileo allowed himself to be led not so much by all kinds of abstruse speculation as by the Italian engineering tradition. He approached the world as if it were one huge mechanical structure. He therefore focused on pumps, siphons, pendulums, studies of movement, the effects of stresses on beams and much more besides.
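The two-lens arrangement described above, a convex objective in front of a concave eyepiece, is now called a Galilean telescope. In modern thin-lens terms (not a contemporary analysis), its magnifying power is fixed by the two focal lengths:

```latex
% Galilean telescope: objective of focal length f_o > 0, eyepiece of
% focal length f_e < 0, separated by f_o + f_e (i.e. f_o - |f_e|).
% Angular magnification:
\[
  M \;=\; -\frac{f_{o}}{f_{e}} \;=\; \frac{f_{o}}{\lvert f_{e}\rvert}
\]
% The image is upright, which suited terrestrial and military use.
% Galileo pushed M from the roughly 3x of the first Dutch spyglasses
% to about 20-30x, the gain that made his astronomical discoveries possible.
```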
All his dabbling ultimately served a higher goal: to challenge Aristotle and explore nature. With his thorough grounding in engineering, Galileo arrived at his first attempt at making scientific instruments. These had not previously been developed beyond the level of simple tools or devices used by mathematicians: pairs of compasses, rulers, protractors and variations upon them. The concept of a ‘scientific’ or ‘philosophical’ instrument that could make invisible phenomena visible simply did

not exist. Such phenomena were absorbed into the domain of natural magic since they originated in the hidden nature of things. It was in northern Italy, especially in the circle around Galileo, that the idea first emerged of not merely regarding such artificial effects as wonders to be admired but to see them as a means of studying nature.

One of the first inventions to which this approach was applied was the thermometer. The origin of this instrument is not completely clear. Devices recognizable to us as thermometers were not unknown in seventeenth-century Europe. (These early versions were gas thermometers, incidentally, not the liquid thermometers still commonly used today.) However, while elsewhere in Europe they were regarded as toys, rather than as instruments of value in scholarly studies, those associated with Galileo recognized them as devices for measuring heat. It is possible that Galileo was influenced here not just by the engineering tradition but by the medical tradition of Santorio. As we have seen, Santorio attached great importance to accurate measurement. He experimented with thermometers in around the same period as Galileo and even seems to have been the first to provide them with a calibrated scale. He was also the inventor of a hygrometer and a wind gauge. Galileo later claimed that he alone had invented the thermometer, but he was quite sensitive about such matters and not always entirely candid.

That the transformation of the telescope from a plaything into a scientific instrument occurred in Galileo’s hands was certainly no accident. His background enabled him to recognize the instrument’s potential better than anyone. In 1609 the first reports about the new instrument reached him and he did not rest until he had built one himself. However, Galileo was not interested in simply constructing a curiosity; he wanted to make it as powerful as possible.
His theoretical knowledge, his ‘magic touch’ and above all his determination to produce useable instruments, a very unusual ambition in those years, meant that the telescopes he built were vastly superior to all others in the Europe of his day. What happened next was even more important. Instead of merely showing some noble gentleman or other a few spectacular effects in the hope of reward, Galileo used his telescope to carry out ‘philosophical investigations’. When for that purpose he turned it to the

sky, he arrived at several unprecedented conclusions. The planets looked markedly more like the earth than Aristotle had taught. The moon was not a flawless heavenly body but had mountains and valleys just like the earth. The planet Jupiter turned out to have no fewer than four satellites. Up to that point, people had regarded it as an inconsistency of Copernicus’s system that not all heavenly bodies orbited the sun: one of them, the moon, still revolved around the earth. Now that other planets too proved to have their own satellites, the supposition that the world was a planet, or the planets worlds, seemed a good deal less improbable.

Galileo fully realized the importance of his discoveries and the benefits they could bring him. In 1610, afraid that somebody else would beat him to it, he hurriedly published the book Sidereus nuncius (The Sidereal Messenger). It was expressly intended not just for scholars but for as wide a readership as possible. Galileo deliberately sought publicity, making as much fanfare as he could. It had the desired effect. Half of Europe talked of nothing else and at a stroke the author became one of the most famous scholars of his day. At that point he was forty-six years old, with a less than glittering career behind him as a professor of mathematics at Pisa and Padua. Now he suddenly had his pick of jobs. He decided to become, for a princely salary, court scholar to the Grand Duke of Tuscany, in his native city of Florence.

The significance of Galileo’s discoveries became the subject of a lively debate. Further phenomena were added. In 1611 Galileo and, independently of him, the German Jesuit Christopher Scheiner discovered spots on the sun, which moved in unison across the surface and changed their shape. Today they are still known as sunspots. This discovery produced a good deal of controversy. Galileo insisted that the actual body of the sun was spotted. The apparent movement of the spots was caused by the turning of the sun on its axis.
Even the sun, in other words, was not as perfect as had always been assumed. Scheiner and other more traditional scholars were of the opinion that the sun itself was immaculate and the spots were actually clouds of sorts, or perhaps small planets, moving around the sun close to its surface. Another important discovery made by Galileo with the help of his telescope was that the planet Venus had phases, just like the moon. This demonstrated that at least one of the planets orbited the sun.

Galileo’s tactic of catching the public by surprise had results that went beyond his career, since the impact of his findings on the prevailing worldview was considerable. Of course, Galileo’s timing was fortuitous. In another era his discoveries would no doubt have caused a stir, but attempts would probably have been made to fit them into the existing worldview as far as a bit of pushing and squeezing would allow. But by 1610 the generally accepted Aristotelian worldview had been increasingly coming under fire for almost a century. Furthermore, Galileo did not simply present his findings as spectacular curiosities but as a weapon for use against the entirety of Aristotelian philosophy. The shock effect of his publications caused more harm to the authority of the Aristotelian worldview than dozens of years of subtle argument could have done.

In particular, Galileo’s discoveries represented a serious attack on the fundamental division between a sublunary and a superlunary world. If the heavenly bodies were related to the earth, as it now seemed, then that fundamental division in nature, on which Aristotle’s physics was based, was a delusion. The earth was not in a uniquely privileged place in the universe after all; in fact the moon and the planets might also be inhabited by humans, or by creatures resembling them. Fantasies of that kind were soon circulating.

An inquisitive Dutchman, Ernst Brinck from Harderwijk, who during a journey through Italy in 1614 paid a visit to the famous Galileo, went home with the idea that the moon had not just mountains and valleys but forests and rivers too, that humans and animals lived there, in towns and villages – and that the sun was inhabited in the same way. It was not long before extra-terrestrial life became an established theme in literature and in scientific speculation.
Galileo’s discoveries therefore not only turned on their head accepted ideas about the structure of the universe, they presented a view of an entirely new, unknown world. It now became possible to see, if very vaguely, what the long-sought alternative to the Aristotelian worldview might look like. Instead of a distinction between superlunary and sublunary nature and a hierarchical arrangement of the heavenly bodies, each with a character of its own, it now seemed that the universe was an immense space in which the various heavenly bodies – the earth, moon and planets, but also the sun, stars and comets – had no clear hierarchical relationship between them.

This was still no more than a hazy suspicion. If heavenly bodies did not move in a fixed order, how did they regulate themselves? Galileo, along with his pupils, attempted to address this question.

As well as studying the heavens, Galileo applied his mind to the structure of matter and to various problems in mechanics. This latter subject above all seems to have provided him with a key to understanding nature. As a mathematician and engineer, he was not at all keen on profound philosophical reflection. He believed there was little point speculating on the causes of heaviness and the fall of bodies to earth if you did not even know exactly how to describe falling motion mathematically. Aristotle had claimed that heavy bodies fall much faster than light bodies. But Galileo was able to show triumphantly that they fell at almost identical rates, and he produced a mathematical equation for their acceleration while falling.

This falling motion was part of a more general problem. Aristotle had believed that bodies upon which no force is exerted are at rest, and that they move more quickly as the force applied to them increases. Galileo showed this to be untrue; for a regularly sustained movement, however rapid it might be, no force is required. A body has ‘inertia’, so that without external propulsion it persists in its movement. He believed this had cosmological consequences. The principle of inertia, Galileo claimed, was especially valid in the case of circular movements. In other words, it could explain the movements of the earth and planets, as well as the fact that we cannot by ourselves detect the rotation of the earth about its axis. Galileo’s ideas about circular movements were later shown to be wrong, but his investigation of inertia was not a dead end. Later researchers continued in the direction he had taken.
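The ‘mathematical equation’ for fall mentioned above can be written out in modern algebraic form; Galileo himself stated these results as geometric proportions, so the symbols below are a later convention:

```latex
% Uniformly accelerated fall from rest, independent of the body's weight:
\[
  v \;=\; g t, \qquad s \;=\; \tfrac{1}{2}\,g t^{2}
\]
% Equivalently, the distances covered in successive equal time intervals
% stand in the ratio 1 : 3 : 5 : 7 ... (Galileo's odd-number rule), and
% the total distance grows as the square of the elapsed time. Here g is
% the acceleration due to gravity, about 9.8 m/s^2 at the earth's surface.
```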
The work of Galileo was one of the most important sources of inspiration for seventeenth-century science, quite apart from the fact that later researchers were able to adopt elements of it ready-made. Galileo thereby brought into view the outlines of a new world vision. Galileo was unable, however, to cross the final decisive threshold to a vision that truly encompassed everything, that not only explained observed phenomena but would inspire future generations. His discoveries were the result of a new attitude to the investigation of nature, but not of a completely crystallized new vision. It is striking, for example, that it never occurred to Galileo to use the

magnifying glass as a scientific instrument. Lenses had existed for a long time, and after the invention of the telescope, attempts were soon being made to build more complex microscopes, but no important discoveries were made with them for the time being.

There is another reason too why no one in the Italian city states managed to take that final step. Galileo deliberately presented himself as a mathematician, and the approach that he and his pupils adopted had its roots in the engineering tradition. He tackled practical problems and gave barely any thought to the philosophical and religious aspects of his worldview. That was the strength of Galileo’s approach, but it was a weakness as well, because for his contemporaries those aspects were extremely important. The seventeenth century was a time in which religious implications were always regarded as more important than scientific implications. As long as the new explanation of nature lacked a satisfactory philosophical and religious justification, it remained for many contemporaries an anomaly.

That Galileo and his school neglected these philosophical aspects is not entirely their fault. The general climate in Italy in those days was not favourable to audacious new ideas in the realm of philosophy or religion. After decades of having no choice but to allow itself to be inundated by attacks from Protestants and freethinkers, the Catholic Church was finally mounting a powerful resurgence now known as the Counter Reformation. In all fields it tightened the reins, and its remit extended to research into natural phenomena. This came to the fore in the clash between Galileo and the Church concerning the Copernican system, of which Galileo was a convinced supporter. The immediate cause was fairly trivial. For all his many positive qualities, Galileo could not be described as particularly tactful, and his aggressive manner made him quite a few enemies.
He had incurred the wrath of several conservative Florentine clergy, who saw his defence of the Copernican system as a great opportunity to blacken his name. Were Galileo’s ideas not in conflict with the teachings of the Catholic Church? At various places in the Bible, after all, there were references to the course of the sun or the immobility of the earth. It is worth recalling here that the problem of the relationship between the Copernican system and the Bible had received hardly any attention up to this point. Of course, it had been noticed, but sixteenth-century theologians had enough to worry about without
combating every eccentric scientific theory. Only because of Galileo's behaviour did they make it their business. Galileo defended himself with verve, but again with little tact. The lessons in biblical exegesis that he felt compelled to give the Catholic theologians probably did his cause more harm than good. The scientific purport of the debate largely escaped a commission of theologians specially set up by the Church, which could see only a purely theological conflict and rejected Galileo's defence. The use of Copernicus's theories as the basis for astronomical calculations was permitted, but the doctrine that the sun really did stand motionless at the centre of the universe was branded a heresy. That was in 1616. Galileo was left in peace at that point but later, in 1633, when he raked up the matter again, he was formally convicted. As a result, the famous scholar had to suffer the dishonour of spending the final years of his life under house arrest.

The consequences of his prosecution should not be overstated. The pope had great moral authority among Catholics, of course, but little actual power. He was dependent on the secular authorities for the implementation of his commands, and they were not always particularly responsive. Even a thoroughly Catholic country like France refused to give the Church's condemnation of Copernican thinking the force of law. French scholars continued to defend the Copernican system undisturbed. The pope had more influence in the Italian city states, however. After Galileo's conviction and sentencing, Italian scholars felt forced to follow the Vatican line or to limit themselves to uncontroversial subjects. That this censure by the Church seriously handicapped scientific life in Italy is beyond dispute, but science there was not completely quashed. After 1633 the Italian city states continued to produce excellent anatomists, experimenters and astronomers.
Nor would it be true to say that the Catholic Church was hostile to science as such. The education of the Jesuits, for example, had traditionally paid a great deal of attention to mathematics. Many Jesuits applied themselves to the study of natural phenomena, and among European astronomers were many Catholic clergy, quite a few of whom did important work. They too were infected by the new spirit of research into nature in the seventeenth century, partly inspired by Galileo’s discoveries,
but they needed to respect certain boundaries. Statements relating to the fields of theology or philosophy were acceptable only in so far as they were compatible with official Catholic teaching, but there was generally little opposition to purely factual research. Jesuits and other men of the cloth carried out scientific investigations in all kinds of fields. They built optical instruments and reported on meteorological phenomena. When they then attempted to interpret their results, however, they came up against predetermined certainties. Mathematical or empirical findings must not come into conflict with theological dogma or with philosophical postulates sanctioned by theologians. Therefore, all findings were fitted into the framework of Aristotelian philosophy and the biblical worldview. Interpretations of newly discovered facts remained of necessity conservative. Given the situation in which research into nature found itself at the time, these limitations were disastrous. There was a need for a thorough revamping of the scientific worldview, and to that end all aspects of reality needed to be examined afresh. To achieve this it was inevitable that certain philosophical and even theological assumptions would have to be revisited, but that was exactly what the condemnation of Galileo deterred scholars from doing. For at least two centuries, Italy had been the scientific centre of Europe. In the seventeenth century the centre of gravity shifted to other countries, including France and England. The time of merely collecting facts or criticizing ancient reasoning was over. There was now above all a need for a visionary architect who dared to abandon all the old certainties in order to follow wherever the new conjectures led. One young French scholar had just written a lengthy book about the world when he heard about Galileo’s conviction by the Church. He hid his manuscript in fright, and for a while he remained noncommittal. 
Only when it became clear that for him as a Catholic no danger threatened did he come out into the open. His name was Descartes.

DESCARTES AND MECHANISTIC SCIENCE

René Descartes had his origins in the lesser nobility of France. He served for a while in the armies of the Dutch Republic and Bavaria,
but he was rich enough not to need to work and eventually he opted to spend the rest of his days as an independent philosopher. He withdrew from active life and settled in the Dutch Republic, where he lived for more than twenty years before moving to Sweden. He died of pneumonia shortly after arriving there. Descartes had a huge influence in a number of fields. The publication of his Géométrie in 1637 heralded a new era in the history of mathematics. He is also regarded as one of the founders of modern philosophy. He replaced the thinking of Aristotle with a philosophy that had entirely different points of departure, and his work in metaphysics and epistemology, in particular, broke new ground. In other words, Descartes did not limit himself to matters of detail but developed a vision of reality as a whole. Of course, we are most interested here in what he proposed in the realm of natural philosophy, the Aristotelian ‘physics’ that appeared in an entirely new form in Descartes’ work. It is a central aspect of his work, but we should bear in mind that its reception was influenced by a number of other things. As regards methodology, it is of crucial importance that Descartes rejected the then dominant tendency in science to base knowledge primarily on ancient writings. He thereby set his face against not only the scholastics but the humanist scholars too. In his view we should be led by reason, common sense and experience, and could happily leave anything written by Aristotle, Galen and all the other ancient philosophers unread. Descartes therefore saw no need to reconcile his own ideas with those of classical antiquity. He was not the first to propose such a course, but it was mainly under his influence that this new attitude soon became generally accepted, especially in studies of the natural world. Descartes’ most important publication in the field of natural philosophy was the book Principia philosophiae (Principles of Philosophy), published in 1644. 
In it he utterly rejected the hierarchical ‘great chain of being’ that Aristotle had seen in the cosmos. The order that prevailed in the world according to Descartes was of a completely different kind. The real world was ultimately arranged just like the abstract world of mathematics. Abstract quantities were infinitely divisible, and so were real quantities or particles of matter.
And just as the realm of mathematical quantities was unlimited, so was the real world. Descartes declared outright that the universe had no centre, extremities or boundaries but was completely uniform. In this uniform world there was no place for different kinds of matter. Descartes therefore rejected the doctrine of the four elements. All apparent differences between substances must be traceable back to the shape and size of the smallest particles of which each was composed. Whereas according to Aristotle there was a fundamental distinction between earthly and heavenly matter, Descartes believed there was no essential difference in the matter from which the earth, the planets and other heavenly bodies were made. The entire universe was therefore ultimately made up of the same matter, which behaved in the same way wherever it was located. The fundamental concept he used to express this unity of nature was that of the laws of nature. The Aristotelian cosmos was above all a world of order, while the Cartesian universe was a world of patterns and regularity. When he created the world, God, Descartes said, had instituted several fundamental rules to which all matter was subject. These rules were as valid as the axioms of mathematics. The scholars of the Middle Ages and the sixteenth century had understood nature as a collection of individual things, each with its own, God-given characteristics. For Descartes all individual things and their qualities must be reducible to the general laws of nature. Stars and planets, heaven and earth, animals, plants and people: everything was formed out of matter according to those same laws. In living creatures, the same fundamental natural phenomena occurred as in dead matter; in processes that were the result of human ingenuity as in processes that happened of their own accord. 
One important innovation in relation to Aristotelian science was Descartes’ belief that all the workings of nature were manifestations of strict causality, which is to say that they happened through simple cause and effect. Purposiveness did not exist in nature, any more than the function of sympathy did, or of mysterious qualities for which there was no further explanation. To say that a stone falls because it is striving to reach the natural position of heavy bodies is not to give a scientific explanation. A particle of matter starts to
move only because it is pushed or hit by another particle. Nor can the growth of an animal or plant be explained by reference to the end result, and the properties of natural things cannot be explained by their usefulness to humankind. For Descartes the entire world consisted of particles of matter that directly, by means of pressure and impact, operated upon each other, as parts of a huge machine. The machine analogy was deliberate. Because of Descartes’ comparison of the world to a machine or automaton, this has become known as mechanistic philosophy or the mechanistic worldview. Descartes’ mechanistic worldview did not emerge out of thin air. In the years that had passed since Galileo’s discoveries, various thinkers had toyed with a similar idea, including Pierre Gassendi in France, Isaac Beeckman in the Dutch Republic and Thomas Hobbes in England. None of them, however, had contemplated the concept so thoroughly as Descartes. Most importantly, in none of them do we find the notion of laws of nature. Nonetheless, their ideas often fed through into later discussions of the mechanistic worldview. After Descartes published his work, not everyone was prepared to adopt his ideas unquestioningly. Many of those who accepted his essential innovations departed from them on less fundamental points. Arguments arose over whether a vacuum existed, for example, and over the extent to which matter was infinitely divisible. Many variations on his mechanistic philosophy arose. Still, we need pay no further attention to these matters of detail, since we are not concerned here with the peculiarities of individual thinkers but only with the general thrust. I shall therefore instead look at ‘mechanistic philosophy’ in a broad sense. Nowadays it is hard to imagine the excitement that Descartes’ theories generated. The fundamental notions have become accepted as obvious, and the theories themselves have largely been superseded and therefore strike us above all as bizarre. 
So let us for a moment listen to someone from the same period. In a speech to commemorate the death of French philosopher Malebranche, the speaker (the famous French writer Bernard Le Bovier de Fontenelle, then secretary to France’s Académie Royale des Sciences) recalled how Nicolas Malebranche had got to know Descartes’ theories. At
the age of twenty-six he had happened upon Descartes' L'Homme (Treatise on Man) in a bookshop. Fontenelle said:

    He leafed through it, and was struck as if by a light to which his eyes were unaccustomed. He became aware of a science of which he had previously had not the slightest notion … He bought the book, started reading it right away and, although this may be hard to believe, was so moved by it that it gave him heart palpitations, which sometimes forced him to break off his reading.

    Bernard Le Bovier de Fontenelle, 'Éloge du P. Malebranche', in Histoire de l'Académie Royale des Sciences … for the year 1715 (Paris: Imprimerie Royale, 1718), p. 94.

The attraction of Descartes’ work lay to some extent in the great cleansing to which he subjected philosophy, consigning to history occult properties, the world’s hierarchical order, and the distinction between the superlunary and the sublunary. Many such ideas had lost their credibility, partly because of the latest discoveries, partly because some radical religious thinkers interpreted them in ways that were unacceptable to most. The fact that many people had become sceptical of astrological predictions made them receptive to a philosophy such as Descartes’ that offered a justification for such scepticism. It would be too simple, incidentally, to see the marginalization of astrology as a direct consequence of mechanistic philosophy, since the latter merely reinforced a tendency that had been underway for some time and of which the causes are unclear. The most important reason why Descartes’ philosophy was so convincing to so many people was that within it the many discoveries made in preceding years suddenly fell into place. Mechanistic philosophy offered that long-sought alternative to the Aristotelian worldview. The Copernican system, which within Aristotelian philosophy represented an absurdity, no longer seemed anything other than a logical outcome of Descartes’ general principles. According to Descartes, the universe was unbounded and uniform, and the sun and earth were not unique. The earth was one of the planets and the sun was one of the stars. The other stars too, Descartes postulated, were orbited by planets. Extrapolating from the heavenly bodies, he reached the conclusion that comets and planets, including the earth, originated as extinguished stars.
The many new discoveries in medicine were likewise given a new meaning by Descartes, and he even subjected beliefs about living creatures to his mechanistic philosophy. He rejected the theory of the humours and the doctrine of multiple souls. The vitality of plants and the capacities of animals had nothing to do with any ‘soul’. Animals were simply clever automata of some kind, which could be understood by the examination of their parts. The human body, too, was ultimately only a machine. With this assertion he gave powerful support to Harvey’s theory of the circulation of the blood. Within traditional physiology it was not clear why, or how, blood flowed, but if you regarded the body as a hydraulic machine, then there was nothing odd about it. Supporters of Descartes described the heart as a pump that moved the blood around by mechanical means and therefore, again by mechanical means, sustained the various bodily functions. The human capacity for thought was the one thing that Descartes excluded from his mechanistic worldview. It was a gift from God that stood outside the normal natural order. In all of this it was of crucial importance that Descartes, unlike Galileo, did not limit himself to the purely natural. Mediaeval Aristotelian philosophy had derived its coherence from several basic assumptions that formed the foundations of the whole. All scientific knowledge had its place in a system in which the world was seen as a hierarchical entity, in which everything had a fixed place. This was not a purely scientific idea, since over time it had been fleshed out with religious content. Ideas about the order of creation were ultimately ideas about God and therefore could not be tampered with casually. Anyone who wanted to overthrow the Aristotelian worldview, or introduce a new worldview, needed to be able to justify doing so not just on scientific grounds but on religious grounds as well. 
Descartes succeeded in doing this, and he did so by basing his ideas on a concept of God that was more in accordance with the ways of thinking of his time than with those of mediaeval philosophy. In nature as conceived in the Middle Ages, God sat enthroned above a hierarchy of beings. Descartes presumed God’s universality, so that all of creation was directly dependent on God and you were no closer to His throne in one place than in another. The unlimited vastness of the universe and the universal validity of the laws of nature were to him a reflection of the eternal, unchanging nature of God. Because of this underlying
conception of the deity, Descartes’ philosophy attracted the attention of people who had never been engaged with natural science. Descartes’ philosophy offered more than merely an attractive summary and synthesis of existing knowledge. His model indicated the direction that future research should take. The world of Aristotle was a completed whole in which there was no need for new developments. The world of Descartes, by contrast, almost begged to be investigated further. If the world was a machine, then like any other machine it could be broken down into its parts in order to trace its workings. Descartes himself had led the way with a number of more or less fully formed theories about the workings of the body, about the movement of the planets and so on. Unfortunately, these theories were generally less than convincing and were later abandoned. As an accurate description of the world, Descartes’ philosophy left much to be desired. He did, however, and this is more important, provide the instruments by means of which such a description could be compiled. Countless researchers were inspired by Descartes. This is clearly the case with regard to theories of medical science, especially anatomy and physiology. Modern anatomical research was not invented by Descartes – it had existed since Vesalius – but mechanistic philosophy did more than anything else to legitimize such efforts. It also provided a model of bodily functions that could serve as a starting point for further research. No longer were unconnected anatomical facts merely collected; people now had a better idea of what the function of all those discoveries was. Another example is provided by chemistry. Until this point it had been a purely empirical science, presenting recipes for making certain substances. Mechanistic philosophers tried to turn it into a more systematic science for the first time, based on Descartes’ model of matter. 
It is easy with hindsight to dismiss these mechanistic models of reality as unfounded speculation, as has often happened. But their significance should not be underestimated. It was under the influence of a mechanistic view of nature that something came into being in the seventeenth century that we might justifiably call a ‘programme of research’. Earlier innovations had been isolated steps. Anatomists, astronomers, natural historians and so forth had each done work of their own. The importance of their results for other fields of learning
was in most cases barely noticed. With the emergence of the mechanistic view of nature as an alternative to the Aristotelian view, researchers realized that they were all part of a great joint enterprise. From that point on, what later became known as ‘natural science’ was a study with an identity of its own. Science also became visible as such. From the second half of the seventeenth century onwards, researchers organized themselves into all manner of societies, which eventually received official recognition. This occurred first in England, where in 1660 a group of scholars came together to form a society for the advancement of research into nature. Two years later it was given official status by means of a royal charter, becoming the Royal Society of London for Improving Natural Knowledge. The French refused to be left behind and in 1666 the Académie Royale des Sciences was founded in Paris. Each of these institutions was unique in character. While the Royal Society was mainly a group of enthusiasts, the French Académie was an institution whose members were appointed by the crown and received payment. One thing the two had in common, however, was that their aims were defined by a programme of research into nature as it took shape under the influence of mechanistic philosophy. They did not allow themselves to be distracted by the desire of a royal court for princely playthings; they were scientific institutions in the modern sense of the term. Their programme comprised both the empirical tradition of natural history and the mathematical approach of the preceding century, but both approaches were transformed to the point of being almost unrecognizable. Rather than searching for cosmic order and harmony, people were now looking for mechanistic models. Rather than observing and ordering, work on natural history was increasingly a matter of analysing and experimenting. Mathematicians strove above all to show that phenomena were the outcome of simple laws of nature. 
We shall now look more closely at both these approaches.

THE EMERGENCE OF AN EXPERIMENTAL TRADITION

Direct observation had always been one of the most important foundations for the knowledge of reality. In the seventeenth century
the range of observable things was made far greater than ever before by the introduction of new instruments. Galileo's telescope was the first device to increase what could be observed of reality, and the impression it made was dramatic, as we have seen. This remained an isolated case, however. Other instruments were available, but it was not until the second half of the seventeenth century, under the influence of mechanistic philosophy, that the potential of such new inventions for the investigation of nature was exploited to the full. There is a specific reason for this. The use of instruments meant that reality was being studied under manipulated, 'unnatural' circumstances. According to Aristotelian philosophy, nature behaved unnaturally under contrived, artificial conditions, and experiments could therefore teach us nothing about the workings of nature itself. Descartes believed that the laws of nature remained the same under all circumstances, so as he saw it the construction of artificial conditions could be an aid to getting to know nature better.

For a considerable period after the initial spectacular successes, little more was undertaken even with the telescope, which had brought Galileo such triumphs. Only after 1650 did a steady stream of new observations start to flow. Polish astronomer Johannes Hevelius created an accurate map of the surface of the moon. Christiaan Huygens discovered a moon orbiting the planet Saturn and found out that the planet had rings. The Italian Giovanni Domenico Cassini then identified another four of Saturn's moons. Nor was it merely a matter of discovering separate objects. The telescope was further developed, making precise measurements possible. Danish astronomer Ole Rømer detected apparent irregularities in the motions of the moons of Jupiter. He explained them by supposing that light travels at a finite speed.
Jupiter’s moons in fact moved with complete regularity, but when the earth, as it orbited the sun, was further away from Jupiter, the light took longer to reach us and therefore the moons were not observed exactly at the spots predicted earlier, when the earth was closer to Jupiter. On that basis, Rømer became the first to succeed in estimating the speed of light. This brought him into conflict with Descartes’ theory, incidentally, since Descartes had believed that light arrived instantaneously at infinite speed.
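Rømer's reasoning can be sketched as a simple calculation. The figures below are modern illustrative values rather than Rømer's own numbers; the 22-minute delay is the figure traditionally associated with his data for light crossing the diameter of the earth's orbit.

```python
# Sketch of Rømer's estimate of the speed of light.
# Assumption: the eclipses of Jupiter's moons appear delayed by roughly
# 22 minutes when the earth is on the far side of its orbit, because
# light must cross the orbit's full diameter.
delay_seconds = 22 * 60

# Modern value for the diameter of the earth's orbit (2 AU), in metres.
orbit_diameter_m = 2 * 1.496e11

speed_of_light = orbit_diameter_m / delay_seconds
print(f"estimated speed of light: {speed_of_light:.3g} m/s")
```

The result, about 2.3 × 10⁸ m/s, falls short of the modern value of 3.0 × 10⁸ m/s, mainly because the true crossing time is closer to 16.7 minutes; what mattered historically was not the precise figure but the demonstration that the speed of light is finite at all.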
Lenses could be used to make a microscope as well as a telescope. The first microscopes were indeed built not long after the invention of the telescope, but it was not until the second half of the century that serious research using them took place at any scale. The results were a revelation, offering a glimpse of a world of which nobody had previously been aware, even though it was quite literally under their noses. Antoni van Leeuwenhoek, a researcher from Holland, observed that a drop of water was teeming with tiny creatures invisible to the naked eye. Small insects proved no less complex in their structure than large animals. Aristotle had believed that such simple life forms arose spontaneously from rotting matter and not even Descartes had questioned this. Investigation under the microscope showed that insects reproduce just like higher animals, and in doing so go through complicated life cycles. The classic book about the miracles observed under the microscope is Micrographia by English researcher Robert Hooke, published in 1665, which contains many magnificent illustrations. It should be stressed, however, that in contrast to those of the sixteenth century, these scholars were not looking for astonishing and incomprehensible wonders. On the contrary, the microscope was a weapon in the hands of mechanistically inspired philosophers intent on exposing the structure of the world and thereby making it fathomable. The microscope allowed them to find definitive proof of Harvey’s theory of the circulation of the blood, for example, since the way the blood flows from the arteries to the veins (through the capillaries) could be watched with the help of a magnifying lens. More generally, the microscope was an important instrument for the study of anatomy, like the research carried out by the great Italian anatomist Marcello Malpighi. Another entirely new field of study was plant anatomy, developed mainly by the Englishman Nehemiah Grew. 
In conjunction with all this research, and with the new mechanistic view of nature in general, natural history changed in character. Scholars gradually came to realize that ordinary, everyday things were no less significant than the rare or remarkable. Of course, in the sixteenth century a more sober form of natural history had existed, especially in botany, but it was regarded above all as auxiliary to medicine. In the seventeenth century it largely pulled
free of that link. Botanists were no longer primarily interested in the medical applications of the plants they were studying. Their foremost concern was to chart the kingdom of nature. The descriptive method was no longer applied exclusively to plants. Fish, birds, shellfish and other animals were described based on observations, without mythological embellishments or references to classical authors. This had repercussions for the compilation of cabinets of curiosities. Interest in rarities did not disappear overnight, of course. After all, it exists to this day. Nevertheless, aside from Egyptian mummies, two-headed calves and South American armadillos, glass cases were filled with less spectacular collector's items, such as shells, minerals and insects. Instead of bringing together exclusively freaks and monsters, collectors developed an eye for the regularity and simplicity of nature.

One of the most important instruments of the Scientific Revolution was the barometer. It was originally developed in the school of Galileo and only later embraced by the mechanistic philosophers. Its invention is attributed to Italian mathematician Evangelista Torricelli, who drew inspiration from Galileo's observations of pumps and siphons. The workings of all these instruments are based on the fact that you can turn a glass of water (or any other liquid) upside down without anything pouring out of it, as long as you somehow ensure that the opening of the glass remains under the surface. The cause of this phenomenon was initially unclear. It was usually put down to a 'horror vacui', the notion that nature abhors a vacuum. If the liquid were to pour out of the glass, a vacuum would be left that nature could not abide. This applied only up to a certain limit, however. If the pump or siphon attempted to lift the liquid too high, it would cease to work. The liquid stayed in the closed tube only to a given height, and the space above it was a vacuum.
Torricelli wanted to investigate this vacuum further and he arranged for systematic research into the phenomenon. He arrived at the idea of using the far heavier mercury in place of water. The mercury did indeed rise much less far, making the instrument significantly simpler to use. Water would rise more than ten metres, but when a tube filled with mercury was turned upside down so that the opening was at the bottom, in a bath of mercury,
the mercury inside fell to a maximum of 76 centimetres above its level in the container beneath. In the sixteenth century such a result would probably have been regarded as an interesting but bizarre phenomenon that was of little further significance. Adherents of mechanistic philosophy, however, saw it as support for their worldview. The results of Torricelli’s experiment seemed to show that the theory of the ‘horror vacui’ was invalid. What prevented the column of liquid from falling back, up to a certain height at least, was not some obscure fear of emptiness but simply the weight of the open air. As we have seen, according to Aristotle, air was a light element. In contrast to the heavy element earth, it rose by its very nature. According to the mechanistic philosophers, however, air and earth were made of the same kind of matter. Air bubbles rose if they were lighter than the surrounding medium, so in water for example, but, in theory, air had weight to it as well. The barometer now presented the opportunity to study the nature of air more closely. So Torricelli’s test did not remain an isolated incident. It became the starting point for a whole series of experiments. Many took place in France. Most of them, and the most important, were carried out by, or by order of, philosopher and mathematician Blaise Pascal. Among his most famous results was the discovery that the mercury in a barometer falls if you take the instrument up a tower or a mountain. This was regarded as a decisive experiment that proved the effect was indeed caused by the weight of the air. At higher altitudes there was less air above the barometer, so its weight was less, and the mercury was pushed up less far. Air, in other words, was matter with a weight to it that up to a point obeyed the classic laws of hydrostatics. The results achieved with the barometer were powerfully supported by the invention of the air pump. 
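The relationship between the two column heights follows directly from hydrostatics: the column stands at the height where its weight balances the pressure of the atmosphere. A minimal sketch, using modern values that Torricelli did not have:

```python
# The column height h satisfies rho * g * h = atmospheric pressure,
# so h = p_atm / (rho * g). Denser liquids give shorter columns.

g = 9.81              # gravitational acceleration, m/s^2
p_atm = 101_325       # standard atmospheric pressure at sea level, Pa

rho_water = 1_000     # density of water, kg/m^3
rho_mercury = 13_600  # density of mercury, kg/m^3

h_water = p_atm / (rho_water * g)      # roughly ten metres
h_mercury = p_atm / (rho_mercury * g)  # roughly 76 centimetres

print(f"water column:   {h_water:.2f} m")
print(f"mercury column: {h_mercury:.2f} m")
```

The same formula accounts for Pascal's result: at altitude there is less air overhead, so `p_atm` is smaller and the mercury column falls.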
The first vacuum pump was built by the mayor of the German city of Magdeburg, Otto von Guericke. Using containers in which he created an artificial vacuum using his pump, he conducted a series of revelatory experiments. It was mainly in England that the importance of the pump within the framework of mechanistic science was recognized. Robert Boyle and his assistant Robert Hooke built a new version, then engaged in
a long series of experiments to investigate the properties of air and of matter in general. Boyle came up with the idea of air pressure: the force exerted by the air is not simply the result of its weight but of its elasticity, or what he called the ‘spring of the air’. Boyle’s pump also presented better ways of investigating the nature of a vacuum. Research with barometers had already shown that in a vacuum there could be no sound, but there could be light. Boyle further showed that fire and life were not possible in a vacuum; animals placed in a vacuum died almost immediately. Water turned out to contain large amounts of air. Boyle’s work caught the imagination, and instrument makers built simplified air pumps that enabled rich citizens to repeat the experiments in their salons. More than any other instrument, the vacuum pump became a symbol of the new science. The immense enthusiasm with which many scholars took up the study of nature is striking. Expectations were high, and this could lead to disappointments or outright blunders. But in retrospect we can only conclude that the programme of experimental research, with mechanistic philosophy as its starting point, produced impressive results. Of course, some time and effort were needed before people fully mastered experimentation. Not all scholars immediately agreed on what constituted a good experiment and what conclusions could be attached to any given result. The further working out of such problems took place mainly in England. The scholars united in the Royal Society in London, among whom Robert Boyle had an important place, regarded experimentation as their foremost task. They also addressed methodological issues. They believed that experiments must be repeatable at any time. If others were to accept its results, then an experiment must take place in front of witnesses, and the precise conditions in which it was performed must be described in detail. 
Researchers who had not fully adhered to the rules the Society drew up, mainly foreigners such as Pascal, were criticized. This may not always have been completely fair, but ultimately it meant that the scholars of the Royal Society made a crucial contribution to defining the modern scientific method, thereby enabling it to secure an increasingly strong position in society.


MATHEMATIZATION OF SCIENCE

Mathematics occupied an important place in Descartes' natural philosophy. The mathematical approach to natural phenomena went back a long way, but previously it had always been quite separate from traditional physics. Descartes succeeded (by making use of the work of other researchers, especially Galileo and his school) in presenting a model in which the two could be integrated. He claimed that everything in nature was governed by a few simple natural laws and that natural phenomena could be derived from them mathematically. Descartes provided an example of such mathematization in his explanation of the rainbow. Since the invention of the telescope there had been a great deal of interest in optics. Kepler had put together a theory according to which light was no longer a bearer of qualities but instead was described in terms of pure mathematics. Descartes could now, based on simple laws of optics, calculate precisely how a rainbow was produced by the reflection of light off the inside of water droplets. The properties of a rainbow could be deduced from this theory with mathematical precision; only with respect to the colours and the breadth of the rainbow did uncertainties remain. Descartes' theory of the rainbow is an impressive achievement and among the highlights of his work. On other points, however, Descartes was less successful. The most essential task of mathematics within the mechanistic view of nature concerned, obviously enough, mechanics, which ultimately needed to offer a description of the simple processes – of pressure, collision and movement – that were the means by which the smallest particles of matter brought about all natural phenomena. Descartes, therefore, began his Principia philosophiae with a description of the simplest mechanical rules that particles of matter obeyed. Unfortunately, he was embarrassingly wide of the mark.
On closer inspection, of the seven rules of collision he presented, six are completely indefensible. He was corrected on this point by Dutch mathematician Christiaan Huygens, one of the most important scholars of the second half of the seventeenth century. Huygens devoted much of his attention to mathematical work on physics in general and mechanics in particular. Not only did he manage to deduce perfectly the rules governing colliding bodies, he also managed to describe in precise mathematical


terms the movement of a pendulum. Most of his work is so complicated mathematically that it is hard to explain in brief, but that is not to say it is purely abstract. Huygens managed to apply the pendulum rules he had discovered to the design of accurate clocks. The fact that such an application was possible constituted further evidence that reality was indeed governed by the rules of mechanics. In mechanics and in optics, then, the mathematical approach to nature was reasonably successful. But not all aspects of science lend themselves equally well to mathematization. There was nothing mathematical about observations of insect legs. (Conversely, it is impossible to experiment with the paths taken by the stars.) Here too, people learned by bitter experience. To this day the enthusiasm with which seventeenth-century researchers set about mathematizing the most improbable phenomena seems astonishing. It sometimes produced downright nonsense, but also surprising results. Italian mathematician Giovanni Alfonso Borelli wrote a mathematical treatise about the movements of animals in which he discussed the structure of limbs and the necessary strength of muscles and so forth with the aid of the rules of mechanics. The book made a great impression and set the tone for a whole new approach in the medical sciences. There was a curious gap, however, in the mathematical approach to natural phenomena in Cartesian science, namely astronomy, which even in classical antiquity had been an explicitly mathematical pursuit. In the sixteenth century, through the work of Copernicus, Kepler and others, it had expanded enormously, but mathematical astronomy was all but ignored by Descartes and his early followers. Descartes treated the science of the heavens as exclusively qualitative, although he was convinced he could offer an explanation for the movements of heavenly bodies. 
He believed the planets moved around the sun in great vortices of heavenly matter, like whirlpools, but he did not attempt, as one might expect given his programme, to calculate the movements of the planets with the aid of this model and thereby deduce them from some simple laws of nature. Nor did his followers make good this omission. In contrast to most other fields, interest in mathematical astronomy seems actually to have declined markedly in the 1650s, under the influence of Cartesianism.


In retrospect this shortcoming is understandable. With his theory of swirling matter, Descartes was simply on the wrong track. His idea provides no scope for the mathematical deduction of the movements of heavenly bodies. This was later demonstrated by the greatest scientist of the period of the Scientific Revolution, English mathematician Isaac Newton, whose work sounded the death knell for Cartesian-mechanistic natural philosophy in the narrow sense. At the same time, Newton gave the new science its most spectacular success. We will look at his work next.

THE MATHEMATICAL SCIENCE OF ISAAC NEWTON

Descartes' work indicated the direction the study of nature would take from the seventeenth century onwards, but his theories were often very wide of the mark. This explains why he has not gone down in history as the great founder of modern science. That honour is reserved for English mathematician Isaac Newton, whose theories have stood the test of time. Newton's work is traditionally seen as the completion or pinnacle of the Scientific Revolution. To refer to a pinnacle in this sense is always a rather perilous business. A concept such as the 'Scientific Revolution' is after all not a natural fact of life but an abstraction invented by people. It represents an attempt to make a complicated historical development understandable in a simple way. The Scientific Revolution therefore has no natural endpoint. There is much to be said for the claim that it had run its course by about 1670. The basic principles of the new view of nature had been accepted by then; there was a new programme of research that was taken up with great enthusiasm by a new generation of scholars, and science had acquired institutional form in the academies in Paris and London. The researchers of the time, including Newton, no longer needed to create their fields of study from scratch. They could build upon foundations already laid. Nonetheless, most historians regard the Scientific Revolution as continuing until the time of Newton and ending only then, and they have several good reasons for doing so. With hindsight we can say that by about 1670 the revolutionary phase in science had ended, but at the time this was not at all clear. Nobody realized then that the academies in Paris and London had called into being a tradition that would stand


the test of time. It might have been a passing fad. The founding of the Royal Society was the act of a handful of enthusiasts, it was controversial, and its longer-term impact was still unclear. A considerable time passed before such institutions adopted their definitive form. Theoretically too there was a good deal of uncertainty. With his philosophical principles, Descartes had created a framework for research into natural phenomena, but his philosophy had its opponents. In the second half of the seventeenth century, more and more deficiencies in his ideas came to light. Even people who had at first been fervent supporters now had their doubts and started looking around for a new philosophical system. As a result, there was a danger that a similar kind of uncertainty would arise as in the sixteenth century, when Aristotle’s philosophy became discredited without a satisfactory alternative being available. Many people felt Descartes had offered that alternative, but their jubilation was short-lived. In the second half of the seventeenth century, further new ideas did the rounds: Malebranche’s occasionalism, Leibniz’s doctrine of monads, Spinoza’s philosophy. This only aroused yet more doubts about the basis for the investigation of nature. In short, this was still a period of fermentation, and it was by no means a foregone conclusion that the brand-new scientific approach would emerge triumphant. The theories of Newton, in contrast to those of Descartes, were to endure, however. They established modern science once and for all as an authoritative and irreplaceable form of knowledge. Newton was a scientist of such stature that he was almost impossible to ignore. He cast his shadow over the entire period. He compiled theories that became the basis of a general vision of the world upon which other scholars could build: the earth and the structure of the universe, the forces that move the world, the nature of light, the basic principles of mechanics. 
His way of doing science was the main example followed by other researchers for more than a century. Furthermore, his ideas gained authority even outside the circle of those directly involved in the scientific endeavour. Well-educated eighteenth-century citizens drew their ideas from the work of Newton, and with that work the period of ferment and uncertainty largely came to an end. For most of his career Newton led a reclusive life, as a professor of mathematics and a fellow of one of the colleges of Cambridge


University. In later years he became director of the English mint and president of the Royal Society, but by then he was carrying out little research of his own. He was an extraordinarily versatile scholar. Not only was he one of the greatest mathematicians of his day, he was also a meticulous experimenter, as his work on rays of light in particular demonstrates. He also worked on mechanical structures and instruments, which led him to invent the reflecting telescope. At first sight, Newton’s work seems to have a more limited range than Descartes’. Whereas in his philosophy Descartes dealt with the entire world, including living creatures, the soul and metaphysics, Newton’s publications are purely about mathematics, physics and optics. Appearances deceive, however. In some respects, Newton’s interests were actually broader than Descartes’. He was profoundly engaged with theology, church history and chronology, and he also studied the writings and theories of the alchemists, hoping to find in them clues to the forces in nature that bring about growth, life and movement. But as had been the case earlier with Kepler, his tendency to try to explain the deepest secrets of creation was reined in by his simultaneous striving for scientific exactitude. Ultimately, he published only things that could be proven by mathematical or experimental means, but that work should be understood within the totality of his activities. In his studies in chemistry, he combined the examination of ancient writings with practical experimental work. It was because of his grounding in alchemy that Newton, as an experimenter, in his work on optics as in other fields, achieved such success. Newton was a deeply religious man, and his criticism of Descartes can be explained at least in part by his dislike of the materialist tendencies of the Cartesian system. He was not alone among his countrymen in that respect. Many English scholars had a somewhat ambiguous attitude to Descartes. 
On the one hand, they burned with enthusiasm for the new science. The Cartesian programme of research was enthusiastically embraced in England and further developed; as we have seen, it was mainly in England that the experimental method was formulated as a deliberate strategy. On the other hand, the English had their reservations about Descartes’ own theories from the start. Their hesitancy was attributable in part simply to the fact that Descartes was a foreigner. National pride would not permit


English scientific successes to be derived from the ideas of a French philosopher. But there were also serious objections to the content of those ideas. Descartes traced all natural phenomena back to the ‘push and impact’ of particles of matter. The English baulked at this radicalism, which seemed extremely materialistic. Although they adopted the essential elements of Descartes’ vision of nature, they tried to reconcile them with older ideas, according to which ‘immaterial’ or spiritual attributes helped to steer events in nature. The various theories of this kind need not detain us here. What is important is that their relative traditionalism meant the English had a built-in appetite for a radical revision of mechanical science. So, it came about that ‘attraction at a distance’, a taboo in Cartesianism, was seriously contemplated by the English. Robert Hooke proposed at one point that the movement of the planets around the sun might be caused by a power of attraction that was in inverse proportion to the square of the distance between the sun and the planet concerned. Scholars like Christopher Wren and Edmund Halley thought this a pleasing idea, but none were in a position to successfully complete the complex mathematical calculations attached to this model. In 1684 Edmund Halley visited Newton in Cambridge and put the problem to him. What kind of course would planets take under the influence of such a force of attraction? Newton answered that he had addressed that problem once before and solved it. A force of attraction that declined by the square of the distance between the two bodies involved resulted in an orbit in the form of an ellipse, a course that the planets do indeed describe in reality, as Kepler had discovered. Newton could not immediately locate his calculations, but he promised to send Halley the fully worked out solution. Halley had to wait a long time for the promised answer, since Newton had decided to tackle the problem afresh. 
He not only repeated all his calculations but further developed the various parts, calculated additional consequences and shaped it all into a compelling mathematical argument. When Halley finally received the work, he had reason to be more than satisfied. What Newton had delivered was not just the solution to an astronomical problem, it was an entirely new worldview.
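In modern notation, which postdates Newton (in particular, the gravitational constant G was neither named nor measured in his lifetime; he argued in proportions), the inverse-square attraction that Hooke had guessed at and Newton analysed can be sketched as:

```latex
% Inverse-square attraction between two bodies of masses m_1 and m_2
% a distance r apart (G is a later convention):
F = G\, \frac{m_1 m_2}{r^2}
% Newton's answer to Halley: under such a force a bound body traces
% an ellipse with the attracting centre at one focus --
% exactly the planetary orbits Kepler had found.
```

This is the calculation Wren and Halley could not complete and Newton could; worked out in full, it became the core of the book discussed below.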


Halley saw to it that Newton’s work was published. It appeared in 1687 as Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). The title seems to be a deliberate reference to Descartes’ Principia Philosophiae (Principles of Philosophy). In a sense, the work of Descartes was still the great example for Newton. It was not an example to follow, however, but rather one to react against. In terms of content, Newton broke radically with Descartes. Like Descartes, he starts by laying out the fundamental laws of mechanics. But the rules that Newton lays down are diametrically opposed to the theories of Descartes, whose rules of collision had already been replaced by Huygens. Newton now formulated three laws that could serve as a foundation for all of mechanics. They definitively refuted the rules of Descartes. The most important goal that Newton set himself in his book concerned precisely the point on which Cartesian science had fallen short: to show how the movements of heavenly bodies, as they were known in mathematical astronomy, obey the principles of physics. A large part of the book is devoted to criticism of Descartes’ theory of vortices of matter. As we have seen, Descartes had never attempted to prove his physical explanation of the movement of the planets based on mathematical reasoning. Newton now showed that this was in any case impossible. Descartes’ theory produced results that conflicted with the observed movements. Newton demonstrated that from the laws discovered by Kepler concerning the movements of the planets, it necessarily followed that the planets moved under the influence of a force of attraction exerted by the sun. Newton then generalized from this principle, claiming that it applied not only to the sun and the earth but to all heavenly bodies, or bodies of any sort. 
All matter in the universe attracts all other matter with a force proportional to the mass of the attracting bodies, and inversely proportional to the square of their distance from each other. This is Newton’s famous law of universal gravitation. The movement of the planets around the sun is caused by the gravitational force exerted on them by the sun. The movement of the moon around the earth is caused by the attraction exerted by the earth. Furthermore, the same law was shown to operate in our own environment. The fall of heavy bodies on earth turned out to be


nothing other than an effect of the gravitational force exerted by the earth. Newton suggested moreover that ebb and flood were caused by the moon’s force of attraction operating on seawater, but with the means available at the time this theory could not be substantiated. Falling motion, however, as described mathematically by Galileo, proved entirely consistent with the law of gravity. It was brilliant confirmation that heavenly and earthly phenomena were governed by the same laws. What for Descartes had been merely a matter of philosophical intuition had with Newton become a solid scientific fact. Or was it solid? Contemporaries had their doubts. In England Newton’s theories were immediately received with enthusiasm and applauded. On the continent there were reservations. Newton’s book was admired as a mathematical accomplishment, but it was not taken seriously as physics. The aim of science must be to trace phenomena back to simple physical principles, to the laws of nature. Newton traced everything back to a supposed ‘attraction’. How was such an attraction possible? The concept was reminiscent of the hidden qualities of Aristotelian theory, which had now been discarded. Christiaan Huygens, a man of great distinction, called Newton’s notion of attraction downright absurd. In an afterword to the second edition of his book, published in 1713, Newton responded to this criticism. He could not say how the attraction came about either, he wrote, but he had shown that it existed, which should be sufficient. There was undoubtedly an explanation for gravitational force, but for the time being it was possible only to speculate about what it might be. It was not the task of the physicist to come up with all kinds of hypotheses, but rather to use mathematics to derive laws from phenomena. By saying this, Newton put strict mechanistic philosophy into perspective. 
The first principles that needed to be used to explain phenomena were not laid down a priori but had to be sought out with the aid of mathematics. The second edition of the Principia met with markedly more interest outside England than the first. Within a year it was reprinted in Amsterdam. Criticism of the work gradually fell silent as scholars all over Europe came to accept Newton’s theories. That was not all.


The success of Newton’s work undermined the credibility of mechanistic philosophy as a whole. Newton exemplified a new kind of science that replaced the mechanistic natural philosophy of the seventeenth century. The Cartesian research programme soon gave way to a new programme that came to be called Newtonian. Descartes’ basic premisses naturally remained intact, but the strictly mechanistic model, which had been the foundation of seventeenth-century research, was abandoned. In imitation of Newton, eighteenth-century scholars no longer looked for a mechanistic explanation of phenomena but instead for mathematical laws to describe them. This was not a new idea per se. Galileo had led the way when he declared that the researcher must not speculate about the cause of the fall of bodies but first describe that fall accurately. However, the popularity of mechanistic philosophy had sidelined this approach. The immense authority of Newton brought it to the fore once again. The importance of Newton, in other words, lies not just in his actual discoveries but also in the fact that he offered his contemporaries, and indeed later generations, an authoritative model of what science ought to be. Newton’s work was regarded as the epitome of science and his system as a pinnacle that would never be surpassed; the most any researcher in any branch of science could strive for was to match Newton’s model. The virtues of Descartes were thereby systematically disparaged. Cartesian science was the darkness in contrast to which the light brought by Newton could shine all the more brightly. Some of the criticism was justified, since of course Descartes’ theories had proven indefensible on many points. Other criticism was completely unfounded. The model of science propagated by Newton was mainly a correction and further elaboration upon that of Descartes. 
Newton held as firmly as ever to Descartes’ basic premisses: the uniformity of the world, the principle of causality and the laws of nature as a basis for explanations of phenomena. These points of departure had become so self-evident in such a short time that nobody was any longer aware of the degree to which they were attributable to Descartes. A certain historical justice can be detected here, should we choose to see it. Descartes, for all his brilliance, was a vain and ambitious man who preferred to hide as far as possible the degree to


which much of his work was borrowed from others. To that extent, he was now getting paid in kind.

A REVOLUTION IN THE PREVAILING WORLDVIEW?

Generally speaking, science exists at some distance from everyday practice. The sun rises and the sun sets, no matter what explanations scholars think they should give for it. The theory of Copernicus changed nothing on that score. And who has ever worried about the exact formula for gravity, the movement of a pendulum and suchlike? Nevertheless, in the Scientific Revolution of the seventeenth century something more far-reaching was going on. Although the precise findings of scientists were of little importance to daily life, the general view of nature on which they based their work had an influence outside science. People who did not themselves take part in the scientific debate were aware that something truly radical had occurred. This new awareness took two forms. The first was a general enthusiasm for the new philosophy. Many contemporaries, not only the scholars among them, were conscious of living at a crucial moment in history. In a period of roughly one generation, the worldview had changed so much that they were experiencing (in the words of English poet John Dryden) 'almost a new nature'. Right in front of their eyes, people saw a new science arise that would explain the world according to new principles. Expectations of it were high. People were anticipating not just a better insight into reality but the solution to all kinds of practical problems, whether the determination of longitude at sea, the discovery of a cure for scurvy or the calculation of the trajectory of a cannonball. The idea that scientific knowledge must have practical application was a legacy of the engineering traditions of the Renaissance. Aristotelian physics had never entertained such pretensions. These high expectations are striking, because at first science contributed virtually nothing to the solving of practical problems. Increased knowledge of the human body did not lead to better treatment methods.
Shipping and technology remained primarily a matter of practical insight and common sense. Even the new theoretical mechanics ultimately had little practical value. It is one thing to calculate the trajectory of a cannonball of mass M under the influence of a well-defined force F, preferably in a vacuum. It is quite another to predict where an actual cannonball will land after it has been fired from an extremely inaccurate gun with the help of a gunpowder charge of unclear composition. Nonetheless, the new science was unleashed on just such problems. At a stroke, broad circles of the population developed a virtually unlimited faith in what it could do. However, while some people allowed themselves to be carried away by the prospects opened up by the new worldview, others remained attached to their old, familiar world. That world had perhaps been more limited than the new vision, but it was on a human scale. Everything had a set place. Everything that happened meant something. The old world was not governed by the blind play of random laws of nature. The cosmic order was a divine order. Ideas about the structure of the world were a counterpart to prevailing ideas about religion, morality and the social order. So, the erosion of the traditional worldview did not have merely scientific significance; it could sometimes be experienced as an existential threat, especially by institutions and individuals whose interests were closely bound up with the dominant ecclesiastical and social order. The changes tended to arouse in them suspicion and uncertainty. The rise of Descartes' mechanistic philosophy therefore met with fierce resistance from the start, especially on the part of theologians. The most powerful opposition emerged in the Netherlands, where Descartes was living at the time and where he had a large following. Utrecht theologian Gisbertus Voetius, a man of great authority in the Protestant world, set the pace. One of the most controversial points was again the Copernican system. The earlier condemnation of Galileo had occurred only within the Catholic realm, but from about 1655 onwards the Copernican system became a vexed issue in the Protestant world as well.
Since there was no central authority in Protestantism, a definitive decision was never reached on the matter. It was more a divergence of opinion. Voetius and his conservative supporters denounced the Copernican system and with it Descartes’ philosophy. Other theologians were more open to Descartes’ ideas and refused to go along with this denunciation. From an international perspective, an important debate on the new view of nature was played out in the years 1664–1665, in roughly the


same period as the new science gained official social recognition in the form of the scientific societies in London and Paris. It was prompted by the appearance of an unusually bright comet. For centuries people had seen in such extraordinary events – not just sightings of comets but the northern lights, earthquakes, 'monstrous births' and suchlike – omens of calamity. After all, as it was said, 'God and nature do nothing in vain.' To Descartes, however, irregular events like these were the consequence of exactly the same laws of nature as brought about ordinary, everyday phenomena. A comet was no more remarkable than a sunrise and had no special significance. This difference of opinion burst into the public consciousness in 1665–1666. Those who adhered to traditional ideas saw in the comet an ominous sign from God. They accused the supporters of Descartes of godlessness, despite their firm denials, and warned of terrible repercussions. For their part, the Cartesians branded their opponents' fears pure superstition. That the new ideas provoked such conflicts is not entirely surprising. Descartes' philosophy was an all-embracing programme. His theories about physics were rooted in his metaphysical beliefs about reality. For the new philosophers, God ruled the universe through the fixed and unchanging laws of nature. His greatness was visible in the order and regularity of nature, and arbitrary divine intervention would only detract from that order. God's majesty was incompatible with His repeated personal interference in all kinds of trifling matters. Attributing diverse natural phenomena that humans did not immediately understand to direct intervention by God was not only unscientific, it showed a lack of piety. Theologians like Voetius believed, however, that God was directly involved in worldly events and regarded Cartesian ideas as atheism pure and simple.
This difference of opinion was brought to a head when the philosopher Baruch Spinoza stated that the same reasoning about comets could also be applied to the biblical miracles. Anyone who opposed superstition regarding comets or exorcism would, to be consistent, need to be critical of the similar phenomena described in the Bible. For practically everyone in his day, however, that was a bridge too far. Many people were willing to accept that comets and other contemporary phenomena were brought about by laws of nature that were of general application and not specially sent by God, but there was no desire to call into doubt the supernatural


character of the biblical miracles. Spinoza’s line of reasoning therefore threatened to discredit the entire new view of nature. Of course, the correctness or otherwise of theories in the field of optics, say, or mechanics, was not dependent on metaphysical considerations, but this battle meant that the legitimacy of the new science was called into question. This was a serious problem because the investigation of nature was at this point still too young and fragile to stand on its own two feet. A solution needed to be found, which on the one hand spared everyone’s religious sensitivities and on the other meant that researchers could continue to take the existence of fixed and unchanging laws of nature as their starting point, without immediately being portrayed as atheists. The emergence of Newton’s theories meant that science could start again with a clean slate, as it were. Researchers could distance themselves from ideas that had formerly caused offence simply by attributing them to Descartes. Newton had now corrected Descartes’ theories and with Newton’s physics there could be no such reprehensible consequences, they assured people. This argument was given extra force by the fact that Newton himself was known as a deeply religious man – he kept his heretical ideas about the trinity carefully hidden. What people did and did not expect from the investigation of nature had become clear by this point. Researchers took due care in the way they formulated their results and refrained from making bold metaphysical assertions. Eighteenth-century philosophers stressed that the natural order was based on laws, but they did not regard that order as a necessary consequence of God’s majesty. Instead, they laid strong emphasis on God as an omnipotent lawmaker. God had established the laws of nature of his own free will, for the benefit of humanity. God could abolish those laws as easily as he had established them, so miracles remained possible. 
Newton’s system was presented as proof of God’s activity in the world. From 1713 onwards, supporters of Newton publicized a new interpretation of gravity, in which the non-physical character of the force of gravity was not only admitted but emphasized, since this meant that the world was not ruled merely by the properties of matter but by spiritual forces as well. In other words, God was actively involved in nature. In the seventeenth century, adherents of the new philosophy had roundly rejected any such claim, but by
1713 the climate had changed and many saw this as a welcome rebuttal of a range of materialistic and irreligious tendencies. Newton’s theories were exactly what was required for many people who were very interested in modern science but felt uneasy about its consequences for religion. His great scientific authority dispelled all doubt about what he said, and what his followers said, concerning the religious character of science. This convinced many traditional Christians that science did not represent a threat to the faith. As a result, the battle over the new worldview, and over the Copernican system in particular, gradually quietened down in the eighteenth century.

Just how radical were the changes brought about by the Scientific Revolution? At one point in the seventeenth century, the latest ideas about reality looked downright revolutionary. For a while the new perspective of those investigating nature seemed to turn all existing ideas about God and the world on their heads. However, by the start of the eighteenth century, religious thinkers and philosophers had recovered from the shock and found a way to fit science into traditional religious and philosophical frameworks. They were able to run with the hare and hunt with the hounds. They could hold to the new view of nature and therefore continue pursuing scientific work undisturbed, while on other points they needed to adjust their worldview only minimally. Modern science developed further not as a comprehensive philosophical programme but as an activity with a limited scope. Research was the job of specialists, who made measurements and formulated theories about concrete phenomena. For a general vision of reality people turned to theologians and to the more metaphysically inclined thinkers. The ordinary man or woman did not need to be too concerned about the new ideas.
The seventeenth-century Scientific Revolution therefore marked a break in the scholarly view of nature, the introduction of a new scientific worldview and a legitimization of future research. Furthermore, these changes proved irreversible. However, there was no real spiritual or social revolution. The effects of the scientific research conducted in those years were perceptible in the general worldview only over the long term.

PART II AUTONOMOUS SCIENCE: METHODS, THEORIES AND RESEARCHERS 1700–2000

The new view of nature as a uniform whole governed by laws, which took root in about 1650 among a small number of scholars, led within just a few decades to a serious programme of research, to important explanations and theories. It came to form the basis for investigations carried out in countless other fields. After 1700 the new science was no longer sincerely called into question. There were competitors, there was criticism, but even the fiercest critics felt forced at the very least to take the results of scientific research seriously. Nor did the authority of science remain the preserve of a small circle of scholars and researchers. Over time, science came to represent a cultural value of the same order as literature or art: an accomplishment from which a civilization derives part of its identity. The promotion of science came to be regarded as in the general interest, not just because of its immediate practical advantages but because it gives people a stake in an undertaking that transcends individual human beings and creates something of lasting value. All things considered, the fact that this occurred in the way that it did is remarkable. The new natural philosophy that emerged in the seventeenth century was initially the hobby of a relatively small number of people, with little significance for the practicalities of everyday life. A proper educational programme that placed the new
science at its centre was absent for many years. For the time being, science had no practical value, and it was absolutely impossible for the vast majority of people to assess the standing of its pronouncements and methods. How the new science managed to gain authority so quickly and then retain and extend it is one of the most interesting, but also most difficult, questions in the history of science. To ensure that developments from that point on are comprehensible, I will nevertheless make an attempt at an explanation. Natural philosophy, if in a different form, had been regarded for centuries as one element of a good education. The new insights into nature were therefore seen by many cultivated people as an essential aspect of their general development. Research was mainly in the hands of aristocratic enthusiasts. They were independent both financially and politically, and could therefore permit themselves to be critical of tutors and people of authority. Some groups had a greater personal interest in the new learning. Physicians had traditionally appealed to higher knowledge – it was all that distinguished them from surgeons and quacks. At first this mainly consisted of familiarity with the works of writers of antiquity. When in the seventeenth century sources of that kind were discredited as a basis for science, it was only logical that medical practitioners would seek to ally themselves with the new ideas about nature. In itself, the new science offered no answer to questions about what sickness and health are, or how to make a person better, but it did enable doctors to exhibit superior knowledge about the functioning of the body. They drew upon anatomical research, measurements, and observations made using a microscope. Phenomena exhibited by living beings were illustrated by means of experiments on plants and animals. All kinds of technicians also seem to have relied increasingly on scientific knowledge from the seventeenth century onwards.
They probably had no more to gain from such abstract theories than the doctors did. Technology was above all a matter of practical and empirical tinkering. The influence of science on technology dates from much later, from the nineteenth century, as we shall see. Nonetheless, inventors and technicians increasingly sought a connection with the scientific thinking of their day. The people who in
the eighteenth century built the first steam engines did not talk of an animated spirit that dwelt inside the machine, which would have been an acceptable explanation in the sixteenth century. They interpreted what they were doing in terms of forces and other concepts derived from scientific theory. It seems they found such concepts more serviceable than those that had gone before. The use of scientific concepts by technicians was not always a conscious choice, however. The methods to be used were often imposed upon them by clients or financiers. Seafarers had traditionally used their own methods of navigation. From the seventeenth century onwards the admiralties, or the leaders of major trading companies like the Dutch East India Company, determined the rules to which they were obliged to adhere and decided which instruments they would be given for the journey. Starting in the eighteenth century, schools were founded for technical and industrial education, so that knowledge was no longer confined to the sphere of craftsmanship and attempts were made to give it a firmer, more scientific footing. Such schools therefore reflected the scientific worldview of the ruling elite. An urge to rationalize seems to have been an important driving force behind the increasing authority of science. It was propelled not least by an effort on the part of Western state institutions to extend their power, an effort that has been highly successful over the past three centuries. For that purpose, the state was organized along increasingly rational lines. This process of rationalization was grafted onto legal thinking. Within Roman law, lawyers had traditionally striven to create a coherent and rational system and so princely caprice was gradually replaced by state interests and codified law. The organs of government did not simply aim to ensure they were feared and obeyed. Society was to be made transparent, comprehensible and governable. 
A meticulous bureaucracy would need to ensure that population numbers were known, that individual citizens could be contacted, that taxpayers’ money was administered properly and that the country’s wealth did not flow away into foreign lands. Hence the introduction of a land registry, a population register and statistical offices. Government services mapped their country’s territory and waterways, and prospected for mineral deposits. Postal services and other means of communication
were set up, controlled from the centre. More and more rules were introduced, along with a system of permits. Scientific thinking could easily be connected up to this. Nobody could talk about the secrets of crafts unless they were craftspeople themselves, but if it became generally accepted that even crafts were subject to fixed general laws, then the work of those engaged in them could be managed and controlled. Generally speaking, a world that is predictable and subject to laws can be governed by rational means. Natural philosophy and investigations into nature, alongside Roman law, became an instrument for organizing the world. This was not purely a matter of practical usefulness. In many cases monarchs and government officials sought in the new insights a counterweight to the influence of the Church and the clergy, who, partly based on the moral authority vested in them by the people, had more power in many places than governments would have wished. From the nineteenth century onwards, moreover, science and industry formed a tighter bond. Scientific methods were developed to raise or improve production, and this meant that business leaders could keep a grip on things even in large organizations. Rational methods were worked out in detail and codified by experts. Business leaders were no longer dependent on the knowledge and experience of individual skilled workers, who became increasingly interchangeable. Where such different groups each tried to use science for their own purposes, however, it should not be assumed that the character of that science was always the same. If we want to follow the further development of science after the Scientific Revolution, we always need to be fully aware of what its aims were at any given moment and within which larger context it functioned. A different kind of science will flourish at a university than at a royal court or in a factory.
In this second part of the book the history of science is divided, somewhat roughly, into a series of lengthy periods, in which different tendencies manifested themselves. In the eighteenth century the number of professional researchers was very small. They performed important work (especially the astronomers and mathematicians), but the development of science was mainly dependent on enthusiasts and artisans. It was in those circles that science acquired a firm footing and made progress with respect to its content. We look at this in chapter four. The rationalizing tendencies in society slowly
began to emerge in the second half of the eighteenth century. In the nineteenth century this rationalization became a powerful factor in the development and shaping of science. It is the subject of chapter five. By the nineteenth century the age of the amateur was over for good. Science became a matter for the universities, practised by professionals. It came to have its own ethos and to formulate its own aims. In this period the fundamental theories were developed upon which the most important fields are based. That is the subject of chapter six. Chapter seven, finally, looks at a tendency that represents in some senses the opposite of this trend: the increasingly powerful deployment of scientific methods in technology and the impact this had on the content of science, beginning in the late nineteenth century and gaining full force in the twentieth. As is clear from the above, there are no sharp dividing lines between these periods. Developments overlapped and to some extent occurred simultaneously. The division into chapters is intended as an aid to the reader, not as a firm chronological framework. I have tried to minimize the degree to which descriptions of specific subjects are spread over several chapters. In doing so I sometimes overstep the boundaries of the periods concerned.

4

THE EIGHTEENTH CENTURY: DISSEMINATING THE IDEA OF SCIENCE

KNOWLEDGE AND PRACTICE: INSTRUMENTS

The legacy of seventeenth-century research into natural phenomena was perpetuated mainly by the great academies of science, set up under royal protection. As well as the Académie Royale des Sciences in Paris and London’s Royal Society, there were the academies of Berlin, founded in 1700, and Saint Petersburg, founded in 1724. Scientific research therefore involved just a few dozen people in all of Europe. Had science remained the business of a handful of academies, it would never have acquired the authority and influence it has today. However, it was able to rely upon interest among far broader circles, composed to a large extent of well-to-do amateurs. The eighteenth-century élite prided itself on its culture, good taste and refinement. Knowledge of nature was part of a good education. Alongside the great academies, all kinds of societies of local importance emerged, into which enthusiasts organized themselves. Some such amateurs tried to carry out serious research, but in most cases they mainly engaged in physique amusante: science as a form of entertainment. A rich citizen might procure a vacuum pump to demonstrate the properties of air. Birds and other small animals would die in the airless void, feathers would fall like bricks,
and many other experiments could be devised that were both entertaining and instructive. Another source of endless fun was the electrostatic generator. You could charge someone up and then try to kiss them, receiving a minor electric shock as a result. Such merriment may seem less than scientific, but it was nevertheless closely bound up with the new outlook on the natural world. Phenomena were explained according to the theories of authoritative researchers and regarded as confirmation of them. Instruments did not merely serve to produce entertaining effects at evening lectures; they were also intended to provide insights into nature. When experiments were performed, in salons, at tea parties or during public lectures, there was generally someone present with sufficient expertise to offer an explanation. Some speakers became so popular that they could make a decent living from it. Even more importantly perhaps, although these air pumps, electrostatic generators and other devices were destined to be used mainly as toys for the upper classes, they required skilled manufacture. In the major European cities, instrument making flourished in the eighteenth century, and its clients were mostly rich amateurs. Instruments that came onto the market in this way served mainly to provide a cultivated means of passing the time. Not all were suitable for ‘real’ science. The air pumps that were sold had to be easy to operate; the quality of the vacuum was of secondary importance. The telescopes used by astronomers needed to offer the greatest possible magnification, but this made them impractical for amateurs, since they had a very narrow field of view and were so big as to be unwieldy. Simple telescopes, with two convex lenses, produced an inverted image. As early as the seventeenth century it was discovered that the image could be turned upright by introducing an additional lens. 
Astronomers preferred to avoid this, since the extra lens dimmed the image and could make it less sharp, but in telescopes made for enthusiasts it was almost always provided as standard. These instruments were frequently not just toys but status symbols as well, however, in which case high quality might in fact be a requirement. Anyone who had a sundial or clock made for them wanted a great degree of precision, not because it was important for their plans for the day but for reasons of vanity, or to demonstrate to fellow townsfolk their grasp of such matters. The same applied to globes,
maps and atlases, which tended to be beautifully produced static showpieces that were not meant to be taken on a journey but rather to enable their owners to present themselves as worldly-wise citizens. Anyone who found a simple globe unimpressive could have an orrery made, a mechanical model of the solar system based on the very latest findings. For instrument makers these were lucrative commissions in which they could give free rein to their craftsmanship and professional pride. Unintentionally, the amateurs with their purchases therefore brought about the development and refinement of instruments that enabled further study of the phenomena in vogue in the salons. The construction of good instruments is dependent only to a limited extent on theoretical knowledge. Instrument makers of the seventeenth century and later were able to build on the work of the surveyors, toolmakers and arithmeticians of the past. Even in the Middle Ages, instrument makers strove to make their sundials, navigational aids and suchlike as close to perfection as possible. This meant that they needed to be capable of making a precisely graduated scale, for example, which is by no means a simple matter. In the case of clockwork, an ingenious mechanism is of little use if the cogs are not uniform or do not fit together precisely. Better instruments were mainly the result of improved craftsmanship, rather than better theories. Instrument makers were generally not researchers, nor were they regarded as scholars or, to use the modern term, scientists. They were skilled workers who had often received some training in mathematics, but they did not usually know Latin, the language of scholarship. It was of course advantageous to understand what the devices you were making were intended to do. That was not a problem when it came to the traditional compasses, quadrants and so forth, but it was a different matter with the equipment used for experimental physics in eighteenth-century salons. 
Many instrument makers worked on commission for, or in conjunction with, a scholar who supervised their work. A minimum requirement for fruitful collaboration was a shared language. The best instrument makers therefore tended to have some theoretical schooling. One of the oldest instruments used in physics is the thermometer. As we have seen, the first true thermometers date from the early
seventeenth century, from the circle around Galileo. In the decades that followed they became popular toys among the upper classes. In the seventeenth and eighteenth centuries, all kinds of thermometers were developed and made, and their designers were intent on providing precise and predictable instruments, rather than on producing unfathomable effects. Theory let them down almost entirely in that respect. In the seventeenth century nobody knew exactly what heat was; in fact, the concept of ‘temperature’ did not yet exist. Natural philosophers used thermometers to make visible ‘degrees of heat and cold’, an expression derived from medicine, and in the spirit of the new science, they tried to do so as accurately as possible. Whatever the degrees of heat and cold might be, they had to be comparable one with another: the thermometer must become a true instrument of measurement. The thermometer therefore needed to be provided with a scale and calibrated – whether between several fixed points or in some other way. All sorts of methods of achieving this were tried. In the eighteenth century there were almost as many temperature scales as there were instrument makers. One of the most influential among them was Polish by origin, although he lived and worked for most of his life in Amsterdam. Daniel Gabriel Fahrenheit had enjoyed an excellent education and was greatly interested in researching natural phenomena. He published an article on the theory of heat in the journal of the Royal Society. As well as making instruments, he supplemented his income by giving courses on the new science. Fahrenheit’s work on thermometers was inspired in part by the scientific theories of his time. To settle upon a zero for his thermometer scale, Fahrenheit chose to use a brine that he regarded as having an extremely constant degree of cold, made of water, ice and saltpetre.
At the other end of the scale was the body heat of a healthy human, which he set at 96 degrees (choosing that precise number because it could be divided easily and was therefore helpful in manufacturing a regularly divided scale). Melting ice fell between the two, at 32 degrees. This scale had the advantage of giving clear definitions of air temperature, which was what most people wanted to measure. The most important disadvantage was that human body temperature, which Fahrenheit used as a reference point, can vary a good deal, but of
course that became obvious only after accurate thermometers were produced. Later instrument makers who adopted the same scale used the boiling point of water as their standard measure, which they determined to be 212 degrees. With this adjustment, Fahrenheit’s scale is in use to this day in large areas of the world. A good design, which is to say a well-defined scale with well-chosen reference points, was not sufficient in itself to ensure a good thermometer. Its accuracy depended above all on the precision of its manufacture. The glass tube in which the liquid moved up and down needed to have a perfectly constant inner diameter and no impurities must be introduced when it was filled. There was much that could go wrong. The making of a thermometer was therefore reliant upon excellent craftsmanship, and as far as that was concerned, bitter lessons were learned. The use of the instruments could be far from easy too, relying as it did upon a combination of dexterity and experience. After Fahrenheit settled in Amsterdam in 1717, he sent two thermometers to each of a number of important business relations by way of advertisement, one filled with mercury and the other with alcohol. Before doing so he ensured that both always showed the same value. The level in his thermometers did not simply move up or down; each point on his scale represented a specific ‘temperament’ (which we would now call ‘temperature’). Before long, however, their new owners reported that the two thermometers were giving noticeably different readings. It was a profound disappointment for Fahrenheit, as he had devoted a great deal of time and effort to perfecting his instruments. Eventually Fahrenheit discovered that the error lay with the glass he had used. The liquid is not all that expands when the temperature rises. The glass reservoir containing it expands too and therefore the level of the liquid rises less than it otherwise would.
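Fahrenheit’s reference points are enough to relate his scale to the Celsius scale used almost everywhere today. The short sketch below is a modern reconstruction, not anything Fahrenheit himself could have written (the Celsius scale came later, and the function name is ours):

```python
# A linear temperature scale is fixed by two reference points. Taking the
# two points mentioned above -- melting ice at 32 degrees and boiling
# water at 212 -- any Fahrenheit reading can be mapped onto the Celsius
# scale, which assigns 0 and 100 to those same two phenomena.

def fahrenheit_to_celsius(f):
    """Linear map fixed by the freezing (32 F = 0 C) and boiling
    (212 F = 100 C) points of water."""
    return (f - 32) * 100 / (212 - 32)

print(fahrenheit_to_celsius(32))            # melting ice: 0.0
print(fahrenheit_to_celsius(212))           # boiling water: 100.0
print(round(fahrenheit_to_celsius(96), 1))  # Fahrenheit's 'healthy body': 35.6
```

Fahrenheit’s 96 degrees for a healthy body corresponds to about 35.6 °C, noticeably below the modern average of roughly 37 °C, which illustrates just how variable this reference point was.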
During earlier tests in Germany, Fahrenheit believed he had overcome this problem. In the Netherlands, however, he was forced to buy the glass for his thermometers from a different source, and it turned out to have different properties from that to which he was accustomed – with disastrous results. It took him a great deal of additional work to make his thermometers acceptably reliable again. Still, such setbacks contributed to a better understanding of the behaviour of materials as they
grew warmer. Fahrenheit’s experiences as an instrument maker undoubtedly contributed to the development of theories about heat. Within science itself, astronomy was one field in which instruments had traditionally been important and a high degree of accuracy was sought. To be able to pinpoint as closely as possible the position of a particular star at a particular time of day, accurate clocks were needed, along with instruments that could measure angles precisely. In astronomy a tradition of precision measurement therefore arose, and it was further perfected in the eighteenth and nineteenth centuries. Alongside the demonstrations in the salons, astronomy too prompted the development of the instruments and measurements it required. One problem that was impossible to solve without accurate instruments was known as the annual parallax. Because of its orbit around the sun, the earth moves in relation to the fixed stars. The angle at which we see those stars therefore inevitably changes over the course of a year. Because the stars are such an immense distance away, however, this variation is extremely small and for a long time it was not observable. Astronomers were eager to detect it, because it might prove that the earth indeed moved as described by the Copernican system. Several eighteenth-century astronomers who claimed to have ascertained a parallax were later forced to acknowledge that they had been misled by errors in their measuring apparatus, caused for example by a seasonal variation in temperature. As well as these particular effects, they also had to take account of atmospheric refraction and of something called the ‘proper motion’ of the fixed stars. Moreover, in 1728 ‘aberration’ was discovered, an apparent change in the position of the stars brought about by the velocity of the earth.
It was not until 1838 that Friedrich Wilhelm Bessel, of the observatory in Königsberg (present-day Kaliningrad), became the first to announce a reliable parallax measurement of a fixed star. He had discovered a deviation of around 0.3 seconds of arc, which meant that the distance to that star was roughly 700,000 times the distance between the earth and the sun (in today’s terminology more than ten light years). Several elements contributed to his success. Bessel followed a meticulous procedure and he had chosen his star carefully. He also had a new and extremely precise instrument at his disposal, a heliometer,
specially developed to measure extremely small angles. It was built by Joseph Fraunhofer, director of an institute in Munich that manufactured precision instruments. His optical instruments, telescopes particularly, were unequalled in their day. Fraunhofer’s success is attributable above all to his command of his craft. For example, he used a new method of preparing glass that meant it was far purer and more homogeneous in composition than any glass produced by his predecessors. But Fraunhofer was himself also a researcher and theoretician. He discovered dark lines in the spectrum of light from the sun, now known as Fraunhofer lines. To investigate them he developed the diffraction grating, and again it was his craftsmanship that enabled him to perfect this invention. Bessel’s procedures and Fraunhofer’s instruments established new norms for the making of accurate measurements. Once the parallax had been determined for one star, more quickly followed.
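Bessel’s figures can be checked with a little trigonometry. The sketch below uses modern constants (the number of astronomical units in a light year is today’s value), but the geometry is exactly that of the annual parallax described above:

```python
import math

# The annual parallax p is the angle subtended by the earth-sun distance
# (one astronomical unit, AU) as seen from the star, so the distance to
# the star is d = 1 AU / tan(p).

parallax_arcsec = 0.3  # Bessel's measurement, roughly
parallax_rad = math.radians(parallax_arcsec / 3600)  # arcsec -> degrees -> radians

distance_au = 1 / math.tan(parallax_rad)
AU_PER_LIGHT_YEAR = 63_241  # modern value

print(round(distance_au))                         # roughly 700,000 earth-sun distances
print(round(distance_au / AU_PER_LIGHT_YEAR, 1))  # just under 11 light years
```

A deviation of a mere 0.3 seconds of arc thus translates into a distance of several hundred thousand astronomical units, which makes clear why the measurement defeated astronomers for so long.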

COLLECTING AND CLASSIFYING: NATURAL HISTORY

One field of study that was actually very old remained extremely popular among amateur researchers in the eighteenth century: natural history. Back in the sixteenth century many doctors and prominent figures had compiled collections of precious stones, fossils, tropical shells, coins and so forth. But they concentrated mainly on the rarity of those objects, their remarkable appearance, and their moral or religious significance. Countless collections were assembled by rich individuals and others in the eighteenth century too, but now they were collected from the perspective of the new knowledge of nature. These served above all to illustrate scholarly insights and the exhibits were no longer arranged purely according to aesthetic criteria. Collectors attempted to organize their collections systematically to create a picture of how the world around us is constituted. Nor were the objects any longer artificially embellished since the intention was to show them in their natural state. Certain branches of natural history had been subject to a more scientific approach for some time, especially botany. It too was given fresh impetus in the eighteenth century. In earlier years the knowledge of
plants had been above all a subsidiary subject within medicine, but in the eighteenth century the bourgeoisie discovered it as an appropriate pastime. Many a cultivated lady or gentleman went out into the countryside with a specimen case. Gardening also came into fashion and a great array of exotic plants was imported. It has been estimated that in 1500 some two hundred species of plant were cultivated in Britain. In 1839 the figure was eighteen thousand. There can be little doubt that eighteenth-century researchers and amateurs saw botany in the light of the new natural history. Their descriptions no longer focused on the hidden qualities of the plants; instead, they limited themselves to the external characteristics. The emphasis came to lie on meticulous description and classification, and a means of distinguishing between the many species was required. Whereas in earlier times all kinds of plants and animals that closely resembled each other tended to be lumped together, people now paid attention to any number of tiny distinctions. The problem of course lay in deciding on what basis plants should be compared. How could you tell that a plant found in Spain was indeed the same as an apparently identical plant that grew in Switzerland? Botanists wanted to be able to describe their specimens in a way that made clear the plant’s nature. A need arose for a classification system in which all plants had a set place and could therefore be quickly located, and in which newly discovered species could be categorized unambiguously. Various attempts were made to achieve this in the seventeenth and eighteenth centuries. The arrangements were sometimes botanically sound but too complicated for application by the enthusiasts who made up the majority of plant researchers and therefore never came into general use. Carl Linnaeus in Sweden eventually developed a system of classification that many others adopted. He placed plants in categories that he arranged in a hierarchy. 
In describing these categories and species, he referred exclusively to specific characteristics that could be precisely determined – not to fundamental biological features (although ideally the categorization ought not to be in conflict with those), but to things that could easily be seen and counted. His entire classification of the plant kingdom turned on characteristics of the flower. Firstly, he distinguished between plants according to the number of stamens and their relative position. That enabled division
into twenty-four classes. Within each of those classes he then looked at the number and position of the pistils in the flower, which resulted in a total of sixty-five orders. Based on other distinguishing features, the orders were further divided into genera and the genera into species. Within the species he distinguished between varieties. This produced a pleasing hierarchical system in which it was possible to determine quickly whether a plant belonged to a known species. The system may not have been profound, but it was certainly practical. Linnaeus used this arrangement to produce a clear system for naming each new species discovered in this way. Using his classification, he produced guidelines for scientific naming that would need to be widely communicated, and so they were in Latin. Each species name was made up of two parts, so this became known as binomial nomenclature. The first part of the name gave the genus, the second part the species. Linnaeus then extended the entire system to include the animal kingdom. Here too he created a hierarchy of classes, orders, genera and species and introduced binomial nomenclature, and here too his basic assumptions proved productive. The trick was to find anatomical features that clearly marked the difference between groups at a certain level and could be identified easily, without any need for in-depth anatomical investigation. For the division of mammals, for example, Linnaeus looked principally at the teeth. Linnaeus was in truth less at home in zoology than in botany, and later researchers sometimes felt obliged to correct his findings significantly. In botany and to a lesser degree in zoology, Linnaeus’s means of classification and his system of nomenclature were soon being put to use by important researchers. Over time his classification was further filled out and where necessary expanded and adjusted.
He had paid little attention to simpler life forms such as fungi and algae, for example, and scholars who came after him extended his system into these fields. It acquired so much authority that attempts were made to apply it to things that were altogether different, such as diseases. Generally, this did not meet with lasting success. The classification of nature was inspired by the need to handle a growing quantity of knowledge, but once criteria for distinguishing between species had been developed, they also provided a way to
recognize new species that had not initially been observed. As more species were discovered, and the differences and similarities between them were more closely studied, classification according to five levels (kingdom, class, order, genus, species) proved inadequate in practice. In the late eighteenth century, French botanist Antoine-Laurent de Jussieu introduced the level of families in between orders and genera. Later, yet more levels were introduced and today we have, for example, not just orders but suborders and superorders.

Linnaeus’s system was in a sense a typical product of eighteenth-century amateur enthusiasm. Based on ‘their Linnaeus’, the botanizing citizen or aristocrat could classify plants without worrying about deeper problems. Serious scholars were generally dissatisfied with a system that was above all convenient to use. They would have preferred one that in some way reflected the ‘true’ relationships between the species. In other words, they wanted not just artificial classification but a natural system, an arrangement based on the character of the plant as such, rather than on isolated criteria like features of the inflorescence. Jussieu put a lot of work into developing exactly that. But such a natural system presumed insight into the reasons why species resembled or differed from one another. Useable theories in that sense did not arrive until the nineteenth century.
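The ranked hierarchy described in this section — Linnaeus’s levels plus Jussieu’s family — behaves like a nested lookup, and the binomial name is simply the last two ranks. A minimal sketch (the example species and its placement are illustrative, not taken from the text):

```python
# The Linnaean ranks, with the family level Jussieu later inserted
# between order and genus (sub- and super-levels came later still).
RANKS = ["kingdom", "class", "order", "family", "genus", "species"]

def classify(named_ranks: dict) -> dict:
    """Keep only recognized ranks, arranged in hierarchical order."""
    return {r: named_ranks[r] for r in RANKS if r in named_ranks}

def binomial(taxon: dict) -> str:
    """Two-part scientific name: genus plus species epithet."""
    return f"{taxon['genus'].capitalize()} {taxon['species'].lower()}"

# A purely illustrative placement of the small-leaved lime:
linden = classify({"kingdom": "Plantae", "order": "Malvales",
                   "family": "Malvaceae", "genus": "Tilia",
                   "species": "cordata"})
print(binomial(linden))   # Tilia cordata
```

The point of the structure is the one the text makes: identification only requires walking down the ranks, comparing easily observed characters at each level.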

FROM ALCHEMY TO CHEMISTRY The effects of the new view of nature were of course not limited to the work of cultivated amateurs. Activities that had traditionally been regarded as belonging purely to the domain of manual work were now seen in a fresh light. One field in which, after the theoretical innovations of the seventeenth century, a new wind had begun to blow was chemistry. This was an ancient discipline that traditionally consisted of practical instructions. Chemistry was the art of distilling and other laboratory work, which meant it was not a scholarly endeavour but a skill acquired by apothecaries, certain physicians, assayers of metals, mining experts and the like. There was extensive literature on the subject. Part of it was purely practical, rather like recipe books. Writings that were of a more theoretical nature were not based on what we would now recognize as scientific theory. The way in which chemists gave meaning to
their various practices is virtually unrecognizable to modern chemists. We are talking here about alchemy, which was all about qualities. Alchemy developed an extensive secret language which spoke of its procedures in almost mystical terms. After the seventeenth century, this way of thinking lost its authority, although alchemical language and ideas were not necessarily inherently incompatible with scientific research. Isaac Newton engaged profoundly with alchemy, in the hope of finding within it a key to a deeper understanding of reality. It is telling, however, that he never published anything on the subject. He understood perfectly well that such work could expect little sympathy from his fellow scholars. Even after Newton, there were people who remained interested in alchemy, but indeed purely as a key to a deeper understanding of reality, and it took them onto spiritual rather than scientific terrain. Those who were active in practical chemistry turned their backs on that entire realm of thought and sought to associate themselves with the new ideas about nature. They interpreted chemical processes as expressing natural laws to which even the tiniest particles of matter were subject. This caused little change to the practical work of chemists. Assayers and apothecaries drew upon new theories to justify what they were doing, but the work itself remained much the same. In that sense it might seem as if the scientific revolution was of little significance. The new intellectual framework brought new practical problems to the fore, however. For a start, the way in which substances were distinguished and characterized was closely bound up with older assumptions. Traditional nomenclature was based mainly on known materials and their methods of preparation: distillation, incineration and so on. Experienced craftsmanship was central. A volatile substance derived from wine was called ‘spirit of wine’. 
A glassy substance found in nature was called vitriol (from Latin vitrum, glass) and the fluid that dripped out of it when distilled was known as ‘oil of vitriol’. One problem was that substances whose names and means of preparation differed widely sometimes appeared extremely similar. Something called ‘potash’ was derived from the burning of hardwood (in a pot, hence the name). A crusty material known as ‘wine stone’ was a deposit that formed on the inside of wine casks during
fermentation. When wine stone was heated, it left behind a residue called ‘salt of wine stone’, which had virtually the same properties as potash. Nevertheless, because of the different ways these materials were prepared, it was assumed that they had different qualities. In the eighteenth century, matter was interpreted as being composed of minuscule particles. Thinking in qualities gave way to thinking in terms of general laws and elementary building blocks. The origin or means of preparation were therefore no longer essential. To know what substance you had in your hand, it was important to know something about its building blocks. Based on that general idea, it became plausible that potash and ‘salt of wine stone’ actually were the same thing. But a theory that could be used to substantiate that fact was as yet unavailable. Several people tried to construct a chemical theory more closely aligned with the new ways of thinking. Just as in botany people wanted to know whether different plants belonged to the same species, the chemists wanted to know how to determine if certain substances were in fact the same. But in contrast to botany, in which only the outward appearance of the plant was studied, such questions could not be solved in chemistry without attention being paid to theoretical suppositions. Furthermore, the need for a clearer nomenclature became urgent as more and more new substances were discovered or made, an expansion of knowledge that is obvious in the case of metals. In classical antiquity there were no more than seven known metals: gold, silver, copper, iron, tin, mercury and lead. That remained the case for centuries. Mediaeval alchemists were familiar with brass (but not with zinc, its other main constituent besides copper). However, in the eighteenth century new metals were discovered quite rapidly. Swedish mineralogists found cobalt, nickel and manganese. Two Spanish researchers were the first to identify tungsten. 
Molybdenum and chromium were discovered in the eighteenth century too, and uranium was given a name, although it turned out to have been attached to an impure form. As well as metals and minerals extracted from the ground, chemists found that in some reactions, substances disappeared into the air or were derived from the air, and they succeeded in isolating several gases and investigating their properties.

In the late eighteenth century, efforts were made to arrive at a new and clearly systematic nomenclature for minerals and chemical substances. A first attempt was made by Linnaeus himself. After categorizing the plant and animal kingdoms, he took a comparable approach to the rocks and minerals, traditionally the third kingdom of nature. There, however, his principles proved completely unusable. Swedish mineralogist Torbern Bergman, a pupil of Linnaeus, was more successful. He divided all matter (not just minerals) into salts, earths, metals and ‘phlogiston’, a fire-like element contained in combustible materials. In imitation of his mentor, he proposed a binomial nomenclature, in Latin. His system was then taken up and expanded by French chemist Louis Bernard Guyton de Morveau, who translated Bergman’s names into French. Guyton de Morveau worked closely together with the most important chemist of the age, Antoine Laurent Lavoisier. It was Lavoisier who laid the foundations for modern theories of chemistry. He assumed the existence of a small number of elementary particles, or elements, out of which all other substances were made. The elements remained extant within the compounds and could therefore be recovered from them. One famous experiment he used to demonstrate the validity of this theory involved the splitting of water into two gaseous components, which we now know as hydrogen and oxygen. Out of those two elements he then made water again. The weight of the water remained the same as that of the total of hydrogen and oxygen, and the weight of the two remained in the same ratio to each other (as did their volumes). Lavoisier put together a fully developed theory on the basis of which chemical substances could be categorized and named. He first indicated which substances were elements. If those elements did not yet have accepted and useable names, he named them himself. 
Their compounds could then be arranged according to the elements of which he believed them to consist. In the naming of compounds, Lavoisier generally followed the system devised by his friend Guyton de Morveau. He distinguished between oxides and various salts, such as sulphates or carbonates. According to their precise composition he then spoke of sulphur oxide, copper oxide and so forth. This was also intended to make clear what newly discovered or manufactured substances ought to be called. Potash and ‘salt of
wine stone’ were indeed the same substance, potassium carbonate according to Lavoisier’s nomenclature. (Wine stone is nowadays called potassium bitartrate.) Lavoisier was quite often wrong about the details, incidentally. Some substances that he saw as compounds were later found to be elements. He regarded oxygen, one of the newly discovered gases, as the element that characterized the group known as acids, but it later turned out not to be present in all acids by any means. His theory was in short less than perfect and it therefore met with a fair amount of initial scepticism among fellow chemists. His general starting points did however provide a good means of understanding and categorizing chemical phenomena. As the years went on, his theories became generally accepted, if with the necessary corrections. In 1787, Lavoisier, Guyton de Morveau and several other French chemists published a Méthode de nomenclature chimique (System of chemical nomenclature). The nomenclature in its entirety was, as will become clear, closely related to Lavoisier’s theory about the composition of substances. It was not intended merely to ease the work of practical chemistry, it was also propaganda for Lavoisier’s theories, for which reason it encountered a degree of resistance at first. Moreover, Lavoisier’s nomenclature was in French, rather than Latin. Variations sometimes occurred in the process of translation into other languages. As was already the case in botany, the giving of names in chemistry was more an initial step than the final destination. In the years that followed, a wide range of improvements and additions were proposed. Swedish chemist J.J. Berzelius thought up the chemical formulae that are still in use today (if, again, with adjustments). In doing so Berzelius fell back upon Latin, with the result that his proposals were accepted only with reluctance in France. Furthermore, Lavoisier had paid no attention at all to organic substances. 
It was not until the nineteenth century that chemists learned how to analyse and understand those, and they turned out to require an entirely new system of nomenclature and formulae.
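Lavoisier’s water experiment turned on two quantitative claims: total weight is conserved, and the two gases stand in a fixed ratio. The bookkeeping can be sketched with modern atomic masses — numbers Lavoisier did not have, used here only to illustrate the principle:

```python
# Mass bookkeeping for splitting water into hydrogen and oxygen,
# using today's atomic masses (not available to Lavoisier).
H, O = 1.008, 15.999          # atomic masses, g/mol

water = 2 * H + O             # one mole of H2O
hydrogen = 2 * H              # recovered as H2
oxygen = O                    # recovered as half a mole of O2

assert abs(water - (hydrogen + oxygen)) < 1e-9   # weight is conserved
print(round(oxygen / hydrogen, 2))   # fixed mass ratio, about 7.94 : 1
# Per mole of water: 1 mol H2 and 0.5 mol O2, hence volumes of 2 : 1.
```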

NEWTONIAN MECHANICS AND ITS PROBLEMS This chapter has been about instruments, classification and nomenclature. As well as famous scholars, it has looked at instrument makers,
amateurs and people who engaged in practical chemistry. Those who regard science mainly as progress in theoretical knowledge may feel that ‘real’ science has been rather neglected in this brief survey. But what is ‘real’ science? Scientific knowledge and scientific progress consist to a great extent of relatively banal work. The establishment of natural science after the seventeenth century was less a matter of a few scholars juggling abstract theories than the emergence of new questions and new ways of working – based, roughly speaking, on the ideas of seventeenth-century natural philosophy – among all kinds of groups (amateurs, practitioners etcetera) as regards thinking about and dealing with nature. Nonetheless, there were of course abstract theories as well, and I would not wish to suggest they were unimportant. Even if only a handful of people could be described as theoreticians, they set the standards, instructed the instrument makers, wrote textbooks, and in doing so determined what was serious science and what was not. Much of the work of amateurs and others presupposes the existence of a body of authoritative theories from which they derive support. If we want to understand the kind of scientific ideas and practices that prevailed in the eighteenth century, we cannot ignore the scholars at the major academies. The theories of Isaac Newton functioned as a model for the scientific approach to reality, as we saw in Part One of this book. In the early eighteenth century they were accepted by most scholars. Newton’s Principia of 1687 served for many years as the great example of a scientific treatment of natural phenomena. In just a few formulae, Newton expressed the basic laws of mechanics. They helped to enable, among other things, the precise calculation of the movements of heavenly bodies. The most appealing success in that field concerned the theory of comets, which Newton believed were bodies orbiting the sun under the influence of the force of gravity. 
English astronomer Edmund Halley, whose visit to Newton in 1684 had prompted the writing of the Principia, discovered in them a means of testing Newton’s theories. Halley investigated reports of earlier comets and arrived at the conclusion that observations from 1682, 1607 and 1531 probably all concerned the same comet. It must therefore be orbiting the sun at a rate of once every seventy-six years or so. Only for a short time
during that period was it close enough to the sun (and therefore to the earth) to be seen by us. Based on that theory, Halley predicted that the comet would appear again in about 1758. He also calculated its approximate orbital characteristics. He realized that he was too old to hope to experience the confirmation (or refutation) of his theory, but he believed his successors should be ready to do so. He was right. The comet did indeed return in the way he had predicted, thereby confirming, in a spectacular manner, Newton’s theories. It is not the case, however, that Newton’s work was the last word on the subject. In fact, it was just the start. It needed to be applied, further explored, refined and supplemented. Newton had used a seventeenth-century geometrical method of presenting his results. In 1736 Swiss mathematician Leonhard Euler, a member of the academy in Saint Petersburg, reformulated Newton’s mechanics according to the principles of modern mathematical analysis. It is not until Euler that we come upon modern formulations such as ‘force equals mass times acceleration’. In 1765 Euler published a further major work on the mechanics of rigid bodies, a subject about which Newton had in effect said nothing. Newtonian mechanics deals primarily with point masses. Euler extended the theory to bodies with a certain dimension that move while simultaneously rotating. The ideal scientific theory inherited by the eighteenth and nineteenth centuries was a mathematical theory. But it is important to realize that scholars were not actually attempting to explain mechanics with the aid of mathematics. Rather, mechanics was mathematics, precisely in the way that geometry and algebra were. Mechanics was the theory of forces and movements as they occur as such in nature. (Just as geometry at the time was regarded as the theory of actual space.) Nature itself was therefore mathematical. 
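It is worth noticing how little machinery Halley’s prediction actually required — the dating argument is bare arithmetic (dates from the text; the period is an average, which is why the return could only be predicted approximately):

```python
# Halley's reasoning: the sightings of 1531, 1607 and 1682 are
# roughly one fixed orbital period apart.
sightings = [1531, 1607, 1682]
gaps = [b - a for a, b in zip(sightings, sightings[1:])]
print(gaps)                        # [76, 75]

period = sum(gaps) / len(gaps)     # about seventy-six years
predicted = sightings[-1] + period
print(round(predicted))            # about 1758, as Halley announced
```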
Mechanics as it developed in the eighteenth century would ideally express several simple natural laws as mathematical formulae. In theory, Newtonian mechanics presented a model that could explain everything that happened in the world. All phenomena were the result of bodies acting upon each other by means of forces. If you knew those forces and the laws that governed them, you would be able to explain all phenomena. Newtonian mechanics was for many years regarded as the summit of pure science, an ideal model for how other fields should be organized. It was therefore not
a matter of ‘mathematizing’ in the sense of simply using mathematics, but rather of applying this special mathematical model. Still, championing such an approach was one thing, implementing it was a good deal more difficult. In practice, mathematical analysis applied mainly in idealized cases. The movement of a body that was subjected to a central force was simple to determine, and it was just about possible to calculate the movements of two bodies that mutually attracted each other. But a system of even three bodies exerting mutual attraction was so complicated mathematically that it was impossible to encompass all their movements in a formula. This is not a hypothetical case, since the earth, sun and moon form exactly such a system. Throughout the eighteenth century, the most prominent astronomers toiled away at the calculations required to determine the movements of the moon under the influence of the sun and the earth. The problem in terms of physics was clear; it was the labour of calculation that formed the stumbling block. Their work therefore more often resulted in new mathematical techniques, such as methods of approximation, than in new insights into physics. The mathematization of nature undertaken by the mathematicians who followed in Newton’s footsteps often resulted more in mathematics than in anything that had to do with concrete problems. Based on certain assumptions, for example, they investigated the flow of frictionless liquids, or the elasticity of ideal media, in other words phenomena that did not occur in nature. The elaborate calculations made in studying these matters had little relevance in practice. Actual reality was far more difficult to mathematize, even when it involved things that were clearly mechanistic. Flow, friction and resistance, for example, turned out to be impossible to fit into a simple model. To say nothing of other aspects of physics. 
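The ‘methods of approximation’ just mentioned trade a closed formula for step-by-step calculation. A deliberately crude sketch in a modern numerical idiom (this is an illustration of the general idea, not an eighteenth-century technique): a single body under an inverse-square central force, advanced in small time steps rather than solved in one formula.

```python
# Step-by-step approximation of motion under an inverse-square
# central force (the solvable one-body case, in arbitrary units).
GM = 1.0                      # gravitational parameter
x, y = 1.0, 0.0               # start on a circular orbit of radius 1
vx, vy = 0.0, 1.0             # circular speed for these units
dt = 1e-4                     # small time step

for _ in range(10_000):       # advance through a fraction of an orbit
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3   # inverse-square acceleration
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

radius = (x * x + y * y) ** 0.5
assert abs(radius - 1.0) < 0.01   # the orbit stays (nearly) circular
```

For three mutually attracting bodies the same stepping works, but the sheer labour of calculation — done by hand in the eighteenth century — was exactly the stumbling block the text describes.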
It was not until the final years of the eighteenth century that some phenomena of physics were successfully treated mathematically according to the new Newtonian principles. The first major success concerned static electricity. It had been known since antiquity that some materials (most famously amber) could attract or repel each other after being rubbed, but this had never been seen as anything more than a strange trick of nature. In the eighteenth century the phenomenon made its way into the sphere of physique amusante.
Many different instruments were invented to demonstrate it, or to strengthen it and bring it under greater control. The most important of these were the electrostatic generator, the electroscope, the Leyden jar (a primitive capacitor) and the voltaic pile (the first battery). Meanwhile, scholars were trying to fathom the characteristics of the phenomenon. Since here, as with gravity, they were dealing with forces between bodies, it seemed obvious they should deploy a mathematical treatment after the example of Newton’s theory. Yet success eluded them for years. The researchers succeeded in discovering interesting facts (they concluded there were two different kinds of electricity), but the knowledge they garnered was not mathematical in nature. Not until 1785 did French engineer Charles Coulomb make a breakthrough. Coulomb had earlier worked on torsion and had built a torsion balance that enabled him to measure extremely small forces. By designing ingenious experiments, he was able to use it to find a ‘Newtonian’ formula for the force that operated between two electric charges. The baton of mathematical description of nature was then picked up by another French scholar, mathematician Pierre-Simon de Laplace. He engaged in a full programme of research. Laplace believed that all physical phenomena had a material basis. As well as particles of matter, he postulated the existence of particles of light, heat and magnetism. Forces operated between these particles. Once you knew the laws governing such forces, you could use them to explain all phenomena by means of mathematics. In about 1800, Laplace gathered a group of young researchers around him and managed to arouse their enthusiasm for his programme. Most had been trained at the new École Polytechnique and had a good grounding in mathematics. 
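Coulomb’s ‘Newtonian’ formula has, in modern notation, exactly the inverse-square shape of gravitation: F = k·q₁q₂/r². A sketch (the constant and the units are today’s, not Coulomb’s):

```python
# Modern statement of Coulomb's inverse-square law.
K = 8.988e9                       # today's constant, N·m²/C²

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Force between two point charges a distance r apart."""
    return K * q1 * q2 / (r * r)

f1 = coulomb_force(1e-6, 1e-6, 0.1)   # two microcoulombs, 10 cm apart
f2 = coulomb_force(1e-6, 1e-6, 0.2)   # double the distance...
print(round(f1 / f2))                 # ...a quarter of the force: 4
```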
They attempted to capture all kinds of physical phenomena in impressive mathematical formulae, starting with forces between particles: characteristics of light and sound, flows of heat, capillary action and suchlike. In contrast to much earlier work, however, this was more than a purely mathematical exercise. The solutions to the mathematical formulae Laplace and his colleagues produced needed to agree precisely with the values measured in practice. Hence, they combined their mathematical work with extensive programmes of measurement. By making long series of measurements, changing the circumstances
slightly each time, the phenomena under investigation were recorded as accurately as possible. Precise and systematic programmes of this kind had until then been customary only in astronomy. Laplace now caused them to take hold in physics. One of the first to mount such a programme was military engineer Etienne Malus. He investigated the reflection of light by a large number of different materials and measured what happened as he gradually altered the angle of incidence. At certain angles the reflected light seemed to be polarized. (The term ‘polarized’ was invented by Malus, although he did not yet know exactly what the phenomenon involved.) This was an unexpected discovery. In essence, however, series of measurements like these were not intended to reveal new phenomena but to determine the magnitude of known phenomena with as much precision as possible.

The model advocated by Laplace to account for forces over distance between a wide range of different particles was by no means universally accepted and ultimately it proved untenable in many fields. Laplace’s programme ran into the sand after barely twenty years and the researchers around him went their separate ways. It had never been more than a local coterie. But although when it came to theory they were on a hiding to nothing, they did demonstrate the importance of systematic measurement and mathematical analysis in physics. The fact that the underlying model was flawed did not mean that the formulae derived from it were always incorrect. In that sense the programme was highly influential.

Arguably, Laplace’s programme was dealt the hardest blow by Augustin Jean Fresnel, another engineer. Fresnel devoted himself mainly to optics, the study of light. He went back to the seventeenth-century theories of Christiaan Huygens and the assertion, entirely at odds with the theories of Laplace and his associates, that light was a wave phenomenon in the ether.
(The ether was believed to be a subtle substance that filled all of space.) Based on that theory, and along with a thorough mathematical elaboration upon it, he arrived at formulae for the refraction of light. These produced results that agreed very precisely with the measured values. Laplace was therefore defeated by his own weapons. Another French researcher who adopted Laplace’s method but not his principles was André Marie Ampère, famous above all for his
research into electric current. Experiments in 1820 by a Dane called Hans Christian Oersted did most to focus attention on this subject, in Paris especially. It turned out that electrical charges were not alone in exercising a force on each other, as had been thought for many years; electric currents did too. It was Ampère, rather than anyone from the circle around Laplace, who went on to find laws of mathematics that described the phenomenon. In doing so, he discovered that the forces of moving electrical charges are of the same nature as those exerted by magnets. In his theory he paid no further attention to Laplace’s electrical and magnetic particles. Laplace had essentially continued the eighteenth-century Newtonian programme of Euler and others. He saw nature as a mathematical whole, made up of forces between particles. Because that was how nature was composed, it could be described in mathematical terms. The remarkable thing about people like Fresnel and Ampère is that they subjected the world to mathematical calculations without any reference to assumptions about the mathematical nature of reality. In fact, Ampère’s views on reality had powerfully romantic aspects to them. Fresnel and Ampère not only looked beyond the eighteenth-century physicists, they sought most of their answers in a different direction entirely. For all the authority the Newtonian model of nature enjoyed, its actual success was for a long time limited. Progress in science was achieved in very different fields, by methods that had nothing mathematical about them, such as collection, classification and analysis. It almost seems as if the respect accorded to the Newtonian theory acted as a brake. In the seventeenth century a general concept of nature came into being that replaced the philosophical chaos of the sixteenth century and provided a framework for new discoveries and theories. 
That the new Newtonian physics came to be held in such high regard was only to be expected, but it was immediately seen as the only true faith, which in a sense was very un-Newtonian. Newton had explicitly stated that with his work he was not making any pronouncement about the nature of reality, but rather seeking mathematical laws that could describe phenomena. By the eighteenth century, based on the work of Newton and others, people had come to believe they knew with certainty how reality was composed. This meant that divergent or alternative models of
how nature worked were ignored. In the nineteenth century, new philosophical movements provided an opportunity to look at nature in a new light. It was only in that century that the mathematization of nature truly picked up speed.

When compared to the explosion of theories and research in the seventeenth century, the eighteenth century looks like a period of relative stagnation. It is true that progress in theoretical science slowed for a long time. Important new theories were not formulated again until the end of the century. Theories are, however, only one part of science, and perhaps not even the most important part. The development of laboratory techniques, and the compiling, describing and ordering of a mass of data, are also crucial aspects of the investigation of natural phenomena, and for many researchers they make up the lion’s share of the work. In all these fields the eighteenth century made an important contribution.

The eighteenth century is of decisive importance above all because it was the period when the new ideas about nature, and the methods of research that accompanied them, gained a firm foothold in society. Eighteenth-century science was often more an aristocratic hobby than a serious profession, and this placed limits on the ways it could be practised. However, the fact that all those well-to-do citizens, nobles and governors became convinced of the value of research into natural phenomena meant that they ultimately came to realize it could be more than a hobby. They recognized its importance for society, and that in itself had enormous consequences, as we shall see.


5

THE NINETEENTH CENTURY (I): SCIENCE AT THE SERVICE OF THE RATIONALIZATION OF SOCIETY

Even back in the seventeenth century, researchers had high expectations of their work. Science would be a force for the improvement of the world and the conditions of human life. Little had come of that so far. In the eighteenth century science was above all an innocent form of entertainment, designed to promote education in general. In the second half of the eighteenth century, however, both governments and citizens actively began to endeavour to give a more important place to science and rational knowledge. Science must be put at the service of general wellbeing. As a consequence, research into natural phenomena was no longer just a hobby for a wealthy elite. It was a serious business. This striving for rationalization was part of a broader cultural change of direction. The governments of several countries, sometimes along with representatives of the citizenry, began to address the ways society was organized. There were initiatives aimed at reforming and codifying legislation, at stimulating agriculture and industry, at improving public health and so on. Science was not the driving force behind such reforms, but it proved an excellent enabler. The royal road along which scientific ideas spread was education. Expertise had not traditionally been formalized; instead, it was mainly disseminated within trades and professions, whether through the
guilds or otherwise. However, from the second half of the eighteenth century onwards, formal vocational schools were established. To exploit the natural resources of their territories, monarchs opened schools of mining. The first of these was established in 1716 in St. Joachimsthal (Jáchymov, in what is now the Czech Republic), but most date from the latter half of the eighteenth century. Particularly important were the Bergakademie in Freiberg in Germany, founded in 1765, and the École des Mines in Paris, which opened its doors in 1778. Mining had until then been dominated by alchemical ideas, but teaching in these institutions was based on modern scientific principles. The first school for engineers was the École Royale des Ponts et Chaussées (school for bridges and roads), founded in Paris in 1747. In existing training courses too, for army officers for example, more and more theoretical knowledge was required.

A ‘SCIENTIFIC’ SYSTEM OF MEASUREMENT This striving for rationalization was powerfully accelerated by the radical social change brought about by the French Revolution and the wars that flowed from it. The French revolutionaries were inspired by a powerful urge for reform. Some of the reforms of the time, such as that of the calendar, produced few results, but others were more successful. One well-known example is the reform of the system of weights and measures. In the eighteenth century a great many units of measurements were in regular use. They differed from place to place and according to the sector of business and industry in which they were applied. In the Dutch city of Haarlem an ell was longer than in neighbouring Leiden (69.8 cm instead of 68.5 cm) and an apothecary would use different weights from a goldsmith. When the entirety of French society was reshaped during and after the French Revolution of 1789, the aim being to make everything anew, attention turned to weights and measures. An innovative, supposedly scientific system was introduced to replace the many older versions. The government’s reforms were based on the authority of natural science, and the legitimacy of the new system lay above all in its rational character. The new units of measurement were in theory based on quantities derived from nature. The unit of length, the metre, for example, was


AUTONOMOUS SCIENCE

supposedly one 40,000,000th of the circumference of the earth, measured along a meridian running through the poles. Each unit was rationally subdivided into tenths, hundredths and so forth. So, the system was decimal, whereas older systems had mostly been based on multiples of twelve.

The reforms were introduced with much fanfare, but their significance for science should not be overstated. A system of measurement based on tenths is not essential to scientific progress, as is demonstrated by countries where it has not yet been introduced, or by the universally recognized units of time. The existence of a multiplicity of measurement systems is impractical, especially for trade, but if scholars used certain units among themselves, they had little need to be concerned about all the other ways things were measured.

What was essential, however, was a system of units that made conversion between different measurements simple. If the unit of length was the metre, then the unit of surface area became the square metre, the unit of volume the cubic metre and the unit of speed the metre per second. In the old system, distance might have been given in hours walked, surface area in morgens, volume in bushels or barrels and speed in knots.
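The practical advantage of a coherent decimal system can be made concrete with a small sketch. This is a modern illustration, not anything from the period: the ell lengths are the Haarlem and Leiden values quoted above, and the function name is invented for the example.

```python
# Under the old system, the same length had a different numerical value
# in every city, and conversion required a table of local factors.
HAARLEM_ELL_M = 0.698  # one Haarlem ell in metres (69.8 cm)
LEIDEN_ELL_M = 0.685   # one Leiden ell in metres (68.5 cm)

def ells_to_metres(ells, ell_length_m):
    """Convert a length given in local ells into metres."""
    return ells * ell_length_m

# Ten ells of cloth is a different length in Haarlem than in Leiden:
print(ells_to_metres(10, HAARLEM_ELL_M))  # about 6.98 m
print(ells_to_metres(10, LEIDEN_ELL_M))   # about 6.85 m

# In the metric system, the derived units need no conversion factors:
side = 3.0              # metres
area = side * side      # square metres
volume = side ** 3      # cubic metres
speed = 100.0 / 12.5    # metres per second (100 m covered in 12.5 s)
```

The point of the sketch is the last four lines: once the metre is fixed, every derived unit follows by multiplication alone, with no table of morgens, bushels or knots to consult.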

THE MODERN HOSPITAL

These new tendencies were not confined to France, nor to the period of the French Revolution. One area in which increasing government interference was clearly felt was medicine. In about 1800 the modern hospital made its appearance.

Hospitals, often known as infirmaries, had existed for some time, occasionally founded by city authorities but more often by charitable associations, religious orders or churches. They generally had a doctor attached to them, who oversaw what went on and visited the patients. But doctors did this work alongside their own private practice, which generally received more of their attention. Infirmaries were little more than places where those of limited means were nursed. Anyone able to pay any kind of fee would be treated at home. There were also insane asylums, but it was a similar story there; they fulfilled a warehousing function.

At the end of the eighteenth century some national governments started building hospitals. As well as the direct benefit to public health, in some cases the founders had the interests of medical education in

THE NINETEENTH CENTURY (I)

mind. The new hospitals were often linked to a medical faculty and students would attend some of their lectures at the bedside of a patient. One famous example is the Vienna General Hospital, an initiative of Emperor Joseph II. It was founded in 1783, as the result of a reform of an existing almshouse. A staff of doctors and nurses was introduced; the latter were lay people instead of the usual members of religious congregations.

The doctors put in charge of the new hospitals had a scientific education and used their position to give practical expression to their scientific ideals. The hospitals and their staff became centres of medical expertise and as such provided an important stimulus to the development of medicine. This gave rise to new specialisms. Psychiatric disorders, for example, had traditionally fallen outside the remit of medicine; at best they were seen as expressions of bodily ailments. However, when the nursing of the insane fell into the hands of doctors in the new institutions, some were prompted to make extensive studies of their patients’ disorders. This led to the emergence of psychiatry.

In the nineteenth century, specialized clinics came into existence, for ophthalmology, for instance, or paediatrics and obstetrics, as well as sanatoria for consumptives. The number of specialists steadily increased and in specialist hospitals in particular there was room for research and the dissemination of knowledge. The many cases they dealt with presented material for comparison and for other forms of research. Whether this was always to the benefit of the patients of the time is questionable, incidentally. The above-mentioned hospital in Vienna was notorious for the aggressive way in which new medicines and other treatments were tried out on the patients.

As a consequence of these reforms, hospitals and asylums became places where people were treated rather than simply nursed. Furthermore, they ceased to be places for the poor alone.
The operating theatres, instruments and equipment for anaesthesia developed there were difficult for most people to arrange to have brought to their homes, so even the rich were increasingly treated in a hospital – though mostly in expensive, luxuriously equipped private clinics. Of course, countless practical problems arose as these reforms were introduced, but the importance of hospitals, with physicians in charge of them, was no longer a matter of dispute. Whether the


application of scientific principles immediately improved healthcare is highly doubtful. The effects of medical treatment were modest in the extreme until well into the twentieth century. In a hospital you had more chance of catching an infection than of being cured of one. Death rates among patients, and indeed among doctors, were often horrendous. The most important advantage to the patient was probably the increased attention and, especially for the mentally ill, a more humane attitude.

OBSERVATORIES, MEASURING STATIONS AND A GLOBAL SCIENCE

The tendency that became visible in the hospitals was also in evidence in other areas. From the nineteenth century onwards, all kinds of services that had previously been the province of private initiatives or charity were now organized under government supervision. This was invariably accompanied by rationalization, according to which earlier activities were placed on a more professional footing and where possible supported by scientific principles. Think, for example, of the setting up of national offices of statistics, or the introduction of national standard time. Few of these things were immediately related to science or the investigation of the natural world, but in certain cases the new measures provided researchers with an opportunity to develop their field of study further, or even to establish an almost entirely new field.

Governments initially focused their attention mainly on subjects that were demonstrably and directly useful. In Italy, volcanic eruptions periodically caused great damage and loss of life, so the government decided to employ experts to monitor dangerous volcanoes. The first permanent monitoring station of this kind was set up in Naples in 1841, the Osservatorio Vesuviano (Vesuvius Observatory). Of course, volcanoes had attracted attention before, but they were studied in their free time by interested amateurs, who sometimes achieved remarkable results but whose opportunities were limited. With the establishment of permanent stations, vulcanology became a profession.

Weather and climate were another subject of obvious importance, for agriculture, for shipping and, it was generally assumed, for health.


In this field too, observation had a long history. As early as the eighteenth century, weather watchers all over Europe were busy registering weather conditions daily, especially temperature, air pressure, wind speed, wind direction and rainfall, all of which were clearly measurable. The focus was above all on the supposed influence of these factors on human health.

In the nineteenth century national governments took this research upon themselves. They were concerned mainly with the interests of shipping and the initiative was often taken by naval officers. They managed to gain support for new meteorological institutes and a network of weather stations by holding up before their governments the prospect that a better grasp of the weather, to which they believed scientific research would lead, could bring considerable economic advantages.

Similar considerations applied in the case of geomagnetism. Ships’ compasses pointed north according to the earth’s magnetic field. To navigate, therefore, it was important to know about local anomalies and variations. The first attempts to chart aberrations in compass readings had been made back in the seventeenth century, but when these proved to be not only irregular but variable over time, further systematic research was abandoned. In the nineteenth century, with the support of leading scholars and of the Russian and British governments, a network of measuring stations was set up where the earth’s magnetic field was measured continually in a uniform manner.

The establishment of a network of measuring stations required both considerable investment and impressive scientific organization. Previously, private observers had each used their own thermometers and other instruments, and followed their own procedures. One person might take temperature readings in the afternoon, another in the evening; one might measure the temperature at ground level, another in the lee of the wind, a third in the open field.
This meant that valid comparison between their observations was impossible. The setting up of a network demanded standardized instruments and standard protocols.

A quite different question was what people actually wanted to do with the vast quantity of data gathered in this way. Meteorological observers in the eighteenth century had conscientiously drawn up tables of data, but they had not got much further than that. Nineteenth-century zeal in this area should therefore not be seen in


isolation from the use of new methods of processing data, and above all a new scientific vision that could give meaning to the results. That vision was propagated with particular success by German scholar Alexander von Humboldt, a passionate traveller who undertook long expeditions. From 1799 to 1804 he travelled through South and Central America, for example, and in 1829 to Siberia. His ideal was a science of the earth that would demonstrate the relationship between all observable happenings. He was not interested in the weather as such but in the connection between weather, climate, geography and any number of other phenomena.

He studied the distribution of plants and animals and tried to explain it based on geographical or historical facts. He determined that there were several different vegetation zones on earth, and that such zones existed even in the high mountains. As well as their latitude, the development of these zones depended on the presence of seas and mountainous regions. Such insights were not entirely new, and Von Humboldt was able to build upon the work of others. He was far more rigorous in his methods, however. To be able to compare different types of vegetation, he entered data about each of them into tables. In doing so he looked not only at individual species, as had been usual up to then, but at families and their share of a total population. He tried to grasp the whole rather than just the parts – in his own words, he wanted to give vegetation a ‘physiognomy’.

Regarding the conditions that would bring about a specific physiognomy in this sense, Von Humboldt did not limit himself to generalities; he wanted to determine them precisely, and so he measured the temperature, the air pressure, the composition of the atmosphere, the intensity of the blue of the sky and much more besides. By collecting data about vegetation, weather and climate all over the world and comparing them, Von Humboldt hoped to arrive at general laws.
As a result of this vision, meteorological observations gained significance as part of a total programme. The weather at a specific place could be understood only as part of a phenomenon of global extent. It was not a matter of air pressure at a particular place but the way in which waves of high or low pressure moved over the earth. This meant that it was possible to gain insight into the weather only by combining observations from many different local stations. Von


Humboldt therefore strongly advocated the setting up of a far-flung network of weather stations, with several central points where the data could be collected and analysed. He made an important contribution to the creation by the British and Russian governments of a network of magnetic observatories to measure and chart the earth’s magnetic field.

He also recommended the use of maps for the purpose of analysing the data. Relevant information such as temperature, air pressure or annual rainfall must be drawn on a map so that it was clear at a glance how a given phenomenon was distributed across an area. Today such a way of working seems obvious, but it was not at the time, although there are earlier examples of data being entered on maps. English astronomer Edmond Halley had drawn up maps of trade winds and the direction of the earth’s magnetic field in the years around 1700, but only with Von Humboldt did this become a standard technique. Instead of all kinds of complex symbols, Von Humboldt championed the use of contour lines: lines indicating equal pressure, equal temperature and suchlike. Such maps would illustrate the connection between different phenomena, for example by facilitating the comparison of lines on climate maps with the outermost ranges of certain types of vegetation.

Von Humboldt was not an important theoretician. He never found the general laws he was seeking, but with his eye for global connections and his way of making them visible he was highly influential nevertheless. He was responsible for making things like climate and physical conditions accessible to science. The setting up of weather stations and the collection of all kinds of geographical and other data did not result in a unitary theory, but it did produce a huge quantity of knowledge about the world, in a usable form. Such furthering of knowledge is also part of science.
Later researchers could take the resulting data as a starting point for the construction of theories about the formation of mountain ranges, vulcanism, climate and the like. The use of weather maps made modern meteorology possible and in time led to the discovery of low- and high-pressure areas, and their significance to weather. Von Humboldt can also be seen as the founder of biogeography. He never took the step to ecology, incidentally, in which the dependency of animals and plants on their environments is central. It was not until 1895 that Danish


botanist J.E.B. Warming presented a recognizable ecological programme in his Plantesamfund (Plant Communities).
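The isoline idea that Von Humboldt championed can be sketched in a few lines of modern code. This is a toy illustration under stated assumptions: a small hypothetical grid of temperatures stands in for real station data, and the routine merely flags the grid cells that an isotherm passes through rather than drawing a smooth line.

```python
# A hypothetical grid of temperature readings (degrees Celsius),
# standing in for observations interpolated from weather stations.
grid = [
    [12.0, 13.5, 15.0, 16.5],
    [11.0, 12.5, 14.0, 15.5],
    [10.0, 11.5, 13.0, 14.5],
]

def crosses_isoline(corner_values, level):
    """True if an isoline at `level` passes through a cell, i.e. the
    level lies between the smallest and largest corner value."""
    return min(corner_values) <= level <= max(corner_values)

def isoline_cells(grid, level):
    """Return the (row, col) index of every cell the isoline crosses.
    Joining these cells traces the isotherm across the map."""
    cells = []
    for r in range(len(grid) - 1):
        for c in range(len(grid[0]) - 1):
            corners = [grid[r][c], grid[r][c + 1],
                       grid[r + 1][c], grid[r + 1][c + 1]]
            if crosses_isoline(corners, level):
                cells.append((r, c))
    return cells

# Trace the 13-degree isotherm through the toy grid:
print(isoline_cells(grid, 13.0))  # → [(0, 0), (0, 1), (1, 1), (1, 2)]
```

The band of flagged cells runs diagonally across the grid, which is exactly the visual effect of an isotherm on a climate map: a single line replacing a table of numbers, making the distribution of a phenomenon clear at a glance.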

SCIENCE AND WESTERN IMPERIALISM

It is of course no accident that Von Humboldt’s efforts to create a science of the entire earth, which would include climate, plant growth and much more besides, coincided with a period in which the Europeans were interested in the world overseas for very different reasons. European countries were taking control of larger and larger areas of the globe. The collection of facts about those areas was of importance for their colonial policies. Knowledge is power, as people had realized even in the eighteenth century. The collection of reliable knowledge about foreign parts was regarded as a long-term investment. In European capitals as much knowledge was collected as was needed for successful domination and exploitation. In addition to the hope of practical advantages, for example in the form of raw materials, cash crops or other economically valuable discoveries, there was a realization that knowledge of the world – knowledge in scientific terms – was ultimately essential for the continued existence of the nation.

As early as the sixteenth century, the Spanish government sent extensive questionnaires to distant parts of Spain’s overseas empire with questions about population, geography and so on. Other countries followed. At first such knowledge was collected mostly by government officials and missionaries, but as time went on the task of exploring and exploiting far-off lands was increasingly entrusted to people with a scientific schooling or interest. In the eighteenth century a tradition of purely scientific voyages of discovery arose. Among the best-known expeditions of those days are the voyages to the Pacific Ocean by Englishman James Cook, on behalf of the British admiralty and with the support of the Royal Society. His first voyage took place in 1768. The purpose of his journey was to expand knowledge of coasts and shipping routes but Cook also took an artist with him to sketch newly discovered lands.
He tested methods of navigation and collected a large amount of botanical and ethnographic material. No less famous, although of a very different kind, was the military expedition by French general Napoleon


Bonaparte to Egypt in 1798–99. Although his was a purely military undertaking, Napoleon took several dozen scholars with him, who were given an opportunity to study the land extensively. This is illustrative of the connection that could sometimes exist between imperialism and the acquisition of knowledge.

As the nineteenth century progressed, the world was explored in even its most remote parts by European travellers, not only at sea but deep into the interior. It was no coincidence that this urge to explore coincided with the expansion of Western global hegemony, political, economic and cultural. It often had the character of a race, in which states tried to take possession of as much land as possible as quickly as possible to prevent anyone else getting there before them. The Russians explored Siberia, Central Asia and Alaska, the Americans the interior of North America.

The assembling and interpretation of data was once again the business of experts. Scientific societies provided a forum where knowledge of the world could be announced, published and discussed. Britain’s Royal Geographical Society, founded in 1830, launched expeditions to find the source of the Nile and to fill other blank spaces on the map. Generally speaking, scientists wanted to serve the interests of their own native country, but they did not look only at immediate usefulness and most were opposed to secrecy. In these circles, knowledge of the world was regarded as the shared property of all the ‘civilized’ nations. (Although some governments thought differently about that.)

Natural history museums and similar institutions, such as botanical gardens and zoos, were of great importance in the acquisition of knowledge about the world outside Europe. From the eighteenth century onwards, their collections were placed at the service of the scientific vision of nature. Smaller collections had a mainly didactic purpose, and many educational institutions compiled them.
Some of the larger collections, by contrast, with plentiful financial resources and an active acquisitions policy, were able to become centres of knowledge about the colonies. This was particularly true of royal collections such as that of Kew Gardens in England, founded in 1759. Linnaeus, as a professor in Uppsala, sat like a spider at the centre of this web of collecting. He sent students to distant lands and tried to talk everyone into giving him previously unknown plants.


In this period the pre-eminent museums were public institutions under scientific leadership. Their goal was not purely didactic. Nineteenth-century museums were actively involved with dispatching scientific expeditions that focused on natural history, anthropology or geography, aiming to collect objects and further expand knowledge of the non-European world. At the same time, they were themselves centres of important scientific research. This applied particularly to museums in the major European capitals, which were generally the best provided for as regards collections, budgets and staff. The development of palaeontology and comparative anatomy in the nineteenth century was to an important degree the work of two men who were involved with museums: Georges Cuvier, who was attached to the Museum of Natural History in Paris (which had grown out of the Jardin du Roi, the royal garden of medicinal plants, taken over during the revolution), and Richard Owen, curator of the collection of the Royal College of Surgeons in London. They were able to base their research on the countless specimens under their control.

Towards the end of the nineteenth century, the European states’ urge to explore reached a highpoint. There was now an extensive network of institutions and scholars who concerned themselves with the charting of the climate, the animal and plant kingdoms, geomagnetism, or the gravitational field. To some extent they worked overseas, and natural history museums and gardens had been established in the colonies and other countries too by this point. They were under Western leadership, as if that were inevitable. Indigenous people served only as low-level employees. Even after the colonies became independent, Western experts were left in charge in many cases. Only very gradually did local people find their way into positions of greater responsibility.
The exploration of the world continually expanded into new areas – the seabed, for example, which had been neglected at first, since it was assumed that only cold and darkness prevailed there. The first deep-sea soundings had to do with plans to lay underwater telegraph cables. Gradually, curiosity regarding this virtually unknown world was aroused, inspired in part by debates about the theories of Charles Darwin after 1859. No practical use was expected of the results of exploration of the ocean depths, but in 1868


the Royal Society nevertheless persuaded the British government to undertake short expeditions to dredge the seabed in search of unknown life forms. This produced so much material that a major expedition was then equipped. In 1872 the British research ship HMS Challenger left for a voyage around the world that lasted three and a half years and produced a treasure trove of unfamiliar material. The Challenger expedition is regarded as marking the start of modern oceanography.

Further expeditions followed. The Germans, for example, did not want to be left behind and in 1925 they launched the Meteor, which in the two years that followed charted the circulation of the seawater throughout the Atlantic Ocean. In essence this is the type of research that continues to this day. In the twentieth century the ocean became more than ever the object of human exploitation. Research increasingly focused on practical aspects, such as fisheries and submarine mineral resources, and on the influence of the oceans on the climate.

In the late eighteenth century and throughout the nineteenth, the scientific approach had the wind in its sails, since it fitted well with the rational, bureaucratic model adhered to by the modern state in those years. The state attempted to extend its influence by gathering knowledge and demanding specialist education of its civil servants. The idea was that with a cool, businesslike attitude and rational methods of research, all problems could be addressed. The form of knowledge that corresponded best with this ideal was that of modern research into natural phenomena. Science therefore had the opportunity to anchor itself in society. There was, however, a price to be paid. Researchers had to adjust to the demands made of them, and with the rationalization of society, science became a form of business.

In this chapter we have seen little of the heroics so often associated with the scientific endeavour.
Most researchers who were allied to museums, botanical gardens, observatories, weather stations or hospitals never made spectacular discoveries or developed important new theories. They were science’s foot soldiers, so to speak. They measured, observed, described plants or insects, calibrated instruments, sketched in details on maps, sought pathogens, tested medicines and tried to find a pattern in a chaotic mass of data concerning air pressure, climate, endemic diseases and so forth.


Most had no great difficulty making this adjustment. The ideal of nineteenth-century researchers was entirely in line with the wishes of state and society, namely to clarify reality on scientific grounds. It was an ideal that should not be seen in isolation from the practical usefulness of their work, but that at the same time rose far above it. In the nineteenth century nature was studied not merely for practical reasons or as an outcome of aristocratic curiosity. It became a true calling. There was huge enthusiasm for discovering the secrets of nature and developing new scientific theories. Science emerged as an independent societal factor. It came to have its own values, which were barely questioned any longer. True, most researchers remained in the rank and file, but some succeeded in setting themselves up as high priests of nature.

6. THE NINETEENTH CENTURY (II): PROFESSIONAL SCIENCE

UNIVERSITIES AND PROFESSORS

The preceding chapter explained that in the nineteenth century science thrived because it was useful for state-building and colonial rule. However, such utilitarian motives were not the only ones. It was precisely in this period that science managed to gain recognition as a powerful cultural factor with its own values and objectives. The basis of this ideal lay in the universities.

Of course, science was not entirely independent of social developments. As a result of the aforementioned rationalizing tendencies, the voice of scientifically educated experts with respect to all kinds of social and practical issues carried ever more weight. High birth and a schooling in the classics were in many cases no longer regarded as sufficient grounds for a person to be given a position of leadership. Governments increasingly relied on a new class of professional experts: lawyers, but also doctors, mathematicians and engineers.

The social elite had traditionally been educated in the universities, but until the nineteenth century these were institutions that played only a marginal role in the history of science. Universities were rarely centres of research; instead, they were places where the values of classical scholarly culture were handed down. True, as temples of


scholarship they accommodated mathematicians and natural philosophers, but they mainly represented an ideal of classical learning. Knowledge was preserved there, honoured and passed on, but not increased.

In the nineteenth century this changed. German educational reformer Wilhelm von Humboldt (brother of Alexander) propagated a new ideal. As he saw it, universities should become centres of Bildung, which is to say the cultivation of character and personality. This meant that students should not merely be filled with booklore. They must actively make knowledge their own. At the universities, therefore, science should be a hands-on endeavour, with students able to develop freely. Only then would valuable members of society be cultivated. Bildung was still mainly a general education, with much emphasis on classical civilization and classical literature, but scientific research certainly had a place in it.

Wilhelm von Humboldt’s ideas went beyond theory. As a top Prussian civil servant, he was able to push through truly important reforms to education. He placed his stamp on the university in Berlin in particular, which was founded under his leadership in 1809. It set an example to other universities in Germany. To say that all changes in the nineteenth century are attributable to Von Humboldt would be an exaggeration, but his ideas lent legitimacy to developments that were already underway, or were being implemented for completely different reasons, and to the decision to allocate them a place at the universities.

One change that was put into effect over the course of the century concerned the organization of education. The universities were traditionally divided into four faculties: the three higher faculties of theology, law and medicine, and the philosophy faculty, which had above all an introductory function. Subjects like mathematics and natural philosophy were included in the latter, along with languages, history and moral philosophy.
As the nineteenth century went on, a new arrangement came about in which the exact sciences were given faculties of their own, on a par with the others. Of course, this influenced the character and status of science subjects. Eighteenth-century chemistry, for example, had been more a craft than a scholarly study, consisting mainly of the making of distillations of various sorts. In nineteenth-century education, the


emphasis came to lie far more on the analysis of unknown substances. This demanded a stronger theoretical schooling than was provided by adherence to recipes from something not unlike a cookery book. Chemistry now acquired scientific status and became a fully fledged university subject.

Nineteenth-century universities were largely responsible for subdividing the field of science in the way still familiar to us today, with physics, chemistry, botany, zoology, mathematics, geology and astronomy. These subject boundaries were not self-explanatory. Around 1800, many regarded heat, light and electricity as part of chemistry, but they later came to fall under physics. The teaching of mathematics was increasingly limited to pure mathematics, while subjects like optics and mechanics, previously regarded as aspects of mathematics, were now taught by physicists. Generally, the phenomena of life were initially studied as part of physiology, in the medical faculty, rather than as aspects of botany or zoology. It was really only in the second half of the nineteenth century that biology became a subject in its own right.

In the nineteenth century, scientific research increasingly became the business of university professors, and this meant it was no longer a game played by aristocrats but was absorbed into the hierarchical university system. Some professors had a handful of assistants, others ran whole laboratories, but in essence they were all little potentates, most of them highly enamoured of their own authority. There was little joint research, whether at the academies or elsewhere. The students they trained and the assistants they appointed were expected to go along with whatever their master said. In the nineteenth century a ‘Herr Professor’ quite often developed into a demigod of sorts.
One student who attended lectures by psychologist Wilhelm Wundt wrote above his notes, ‘It’s as if the Holy Spirit is dictating.’

As the universities gained in importance, the significance of the scientific societies declined. As the century wore on, the membership of such academies was almost entirely monopolized by university professors. For them, membership of such an institution meant primarily additional prestige. Their actual work was done elsewhere.

In this sense differences emerged between countries. In France or Italy academic life was organized centrally and hierarchically, so a lot


of power was concentrated in the hands of a few, who could promote their favourites to important positions and thereby influence the content of the work done. The French and Italian academies may not have been centres of scientific research, but they were centres of power. In Germany, with its fragmented political structure, this was not the case.

WOMEN IN SCIENCE

Science increasingly came to form its own bastion, but at the same time it was subject to the influence of the profound social shifts that began in the nineteenth century. One of the most important of these was the changing role of women. Traditionally, theirs had been a subordinate role, in society and therefore in science. To the extent that women worked as researchers, they remained in the shadow of their male colleagues. Men like astronomer Hevelius or chemist Lavoisier involved their wives in their work, and eighteenth-century astronomer William Herschel worked along with his unmarried sister Caroline. The contribution made by such women was sometimes substantial, but in general it is hard to pinpoint. Most male servants and assistants also went unnamed, incidentally. Caroline Herschel was one of very few women who eventually received official recognition for their scientific work.

Their situation was highly variable, nonetheless, depending on time, region, religious denomination and social status. In the eighteenth century, as explained earlier, science was largely the business of well-to-do amateurs. In that world women of the elite had some room for manoeuvre, and intellectual life was played out largely in salons run by women. Some were extremely well informed about the scientific issues of their day and took an active part in debates, and from their privileged social position they might manage to gain access to public institutions. The Italian Laura Bassi became a professor in experimental physics at the University of Bologna, which made her the first woman in history to occupy an academic chair. This was possible only because she had an extremely influential patron, Cardinal Prospero Lambertini, who later became Pope Benedict XIV. Some scholarly women were admired mainly for their rarity, but Bassi was taken entirely seriously as a scientist.

THE NINETEENTH CENTURY (II)

The professionalization of science in the nineteenth century meant the loss of the world of the salons and well-to-do amateurs, since the professional experts who now made the running were closely linked to the political elite. Women were excluded as a consequence. They lost their role as mediators and arbiters of intellectual life. The already modest place occupied by women in science was reduced until they had hardly any room left at all. In the second half of the century, however, change set in. New groups were demanding their rightful position in society and the monopoly of the established elite was increasingly contested. The place of women became a matter of political dispute. Conservatives wanted to keep women out of public functions, whereas progressives supported emancipation. More and more girls of the upper and middle classes received a secondary-school education and started to do paid work. Women threw themselves into scientific research, which expanded enormously in these years, if at first mainly in subordinate roles and doing all kinds of routine tasks, as laboratory assistants, for example. At the Harvard College Observatory women were employed as ‘computers’: the processing of data from astronomical observations demanded extensive calculations, which had to be done by hand, since there were no electronic calculators yet, and women were regarded as particularly suited to the work. They were cheap, too, since they earned far less than men in equivalent positions. After a while, some women succeeded in being admitted to university after they finished secondary school. The existing rules generally had to be changed or set aside to allow this.
Several intellectual professions, such as that of a doctor, fairly quickly came within reach of women, but it was only from the early twentieth century onwards that they gradually started to take up professorial chairs or occupy positions of leadership. Support and resistance alternated, which quite often meant that a woman’s career took an unpredictable and rather erratic course. Many women who wanted to build a career were utterly dependent on one or two people with sufficient authority to make it possible. One of the first women to secure an important position in science was Maria Skłodowska, who was Polish. Since the universities in
Poland were closed to women, she studied in Paris. After graduating she married French physicist Pierre Curie and went through life from then on as Marie Curie. Pierre Curie had a job at a Parisian school for industrial chemistry and physics, and he arranged for his wife to be given space to work there. In 1903 she gained a doctoral degree with research on the spontaneous invisible radiation emitted by some elements, which she named radioactivity. Her research met with broad recognition. Pierre and Marie Curie, who worked closely together, were awarded the Nobel Prize for physics that year, along with French researcher Henri Becquerel, who had been the first to discover such radiation. (In 1911 Marie was also awarded the Nobel Prize for chemistry.) But when in 1904 a special chair was created at the Sorbonne for research into radioactivity, it went to Pierre, not to Marie. She gained a professorship only as the successor to her husband after his tragic death a year and a half later. The prospect of membership of the French Académie des Sciences was blocked by conservative resistance. Another pioneer of research into radioactivity was Austrian physicist Lise Meitner. After gaining her doctorate in 1906 in Vienna, she went to Berlin, where she collaborated with chemist Otto Hahn. As a result of Hahn’s mediation she was given space to work in the university’s chemistry lab, to which women did not officially have access, although without any formal appointment. Meitner played a crucial part in the discovery of nuclear fission. Her work was highly valued, but her academic career got off the ground only very slowly. In 1926 she was given the title of professor, but further progress ended abruptly when the Nazis came to power. As a Jew she had to flee Germany, after which she was side-lined as a scientist. When several years later the Nobel Prize committee awarded its chemistry prize for the discovery of nuclear fission, Meitner was passed over. The prize went to Hahn.
He gave Meitner a share of the prize money, but the honour was reserved for him. Of course, gifted men sometimes had to contend with jealousy and opposition, but they could generally fall back on a far larger network of friends and on established social norms. Formal discrimination laid down in laws and regulations no longer exists today in most countries. Old patterns of behaviour are tenacious, however, and in general women still have bigger obstacles to overcome than men.

LABORATORIES

One of the most important changes in the scientific landscape of the nineteenth century was the emergence of university laboratories. Some universities had labs in the eighteenth century, but they were used for demonstrations. The students were shown experiments but were not allowed to touch anything themselves. Research was done in people’s homes, in their own study rooms. There they could invite a few friends or interested individuals to watch. Basic skills were learned in such private circles, or through an apprenticeship at a pharmacy, for example, but not at the university. When in the nineteenth century the universities were given the public task of serving and disseminating science, they too felt the need for sets of scientific instruments. To give students a scientific education it was no longer enough to show them familiar experiments. Young people needed to learn to roll up their sleeves. For the professors, research was no longer a private hobby but part of their public function. They therefore demanded appropriate facilities that would allow them to involve their students. Not the very first but certainly the most influential facility of this sort was the chemistry lab run by Justus von Liebig, a professor at the University of Giessen in Germany, which he set up in 1824. At first, he paid for it out of his own resources and it was housed in little more than a wooden shed. Under Liebig’s leadership students were able to train there to carry out experimental work. Soon students were pouring in, at first mostly young men who wanted to become pharmacists, but gradually more and more aspiring chemists. The undeniable success of this form of education led to the laboratory becoming an official part of the university and therefore being financed by it. Following Giessen’s example, countless other universities then set up their own chemistry labs. The students in Liebig’s laboratory first took a course in practical skills.
Next, they were given a research task, which they worked at independently under Liebig’s supervision. As well as being instructive for the students, this system was very useful for Liebig himself. It gave him a large staff of laboratory assistants whom he could set to work on problems that he found important. Liebig’s syllabus was largely copied by other universities. This gave science
considerably greater momentum, especially where it concerned relatively routine research, such as work to determine the properties of specific substances. As well as for chemistry, university laboratories were set up for other subjects over the course of the nineteenth century, and in many cases a ‘practical’ became an obligatory part of a course of study. The first proper physics laboratory was opened in the 1830s by Wilhelm Weber in Göttingen. Separate laboratories came into being for psychology, zoology, botany and other subjects. As in Liebig’s case, they often began as private undertakings by a professor before later being taken over by the university. H.G. Magnus’s physics laboratory in Berlin became a university facility in 1862, but it remained where it was, in the professor’s home. As the century went on, such laboratories became increasingly well appointed. They were given an organization with a permanent staff, seminars and colloquia. Research was steadily gaining a more central place in the university world. Not all private laboratories were converted into university facilities. The professor of physiology at the University of Breslau (Wrocław, now in Poland), Jan Evangelista Purkyně, set up a physiology laboratory in 1839 that was not part of the university but an independent legal entity. As such it was the first of its kind, but it was soon being imitated.

CLASSIFICATION AND CONFERENCES

Eighteenth-century science had been dominated by a few dozen researchers and a handful of academies. In the nineteenth century, with the rise of university research and the founding of laboratories and other institutions, the business of science became larger and more complex. Mutual coordination therefore grew in importance. This was certainly true of classification and nomenclature. Earlier systems, like Linnaeus’s natural history classification, were based on no more than the authority of those who had compiled them. As more and more researchers set themselves up as authorities, divergent ideas multiplied, which only added to the confusion. In the case of Linnaeus’s rules for the naming of plants and animals, it soon became clear that they could be interpreted in various
ways when it came to adding further content or elaborating upon results. These were far from intellectually engrossing issues, since they were all about the correct spelling, the use of capital letters and what to do if a name had been deployed twice, or if a species was found on further inspection to have been placed in the wrong family. Nomenclature was the concern of people we might now call geeks rather than of visionary thinkers. In 1871 the famous British naturalist Alfred Russel Wallace called nomenclature ‘one of the driest and most uninviting of subjects’. Nonetheless, in order to avoid losing track amid the many hundreds of thousands of species of beetles, fungi and so forth, a clear and unambiguous system was vital. The subject was of relevance beyond natural history. The chemistry labs set up from the nineteenth century onwards produced and investigated an ever-swelling tide of compounds. In 1800 around 500 organic compounds were known, in 1840 some 1,500 and by 1860 there were 3,000. Then the acceleration really took off: by 1880, 15,000 were known and in 1910 there were no fewer than 150,000 recognized organic compounds. Somehow, they would all have to be distinguished from one another. As long as all these issues were discussed within national societies or other associations of limited size, there was little chance of reaching agreement. In 1860, however, the first international scientific conference was held in Karlsruhe in Germany. The subject was chemistry and the most important of its initiators was the German chemist August Kekulé, who at the time was working in Ghent in Belgium. He managed to bring together about 140 chemists from thirteen countries. The intention was to arrive at shared starting points for chemical theory. In Karlsruhe this was achieved only to a very limited extent. Those present failed to agree on chemical classification and nomenclature.
Nonetheless, the formal gathering promoted collaboration in this particular field of study and the assembled chemists found their meeting useful. The conference of 1860 was therefore far from the last. Agreement on the principles of classification and nomenclature was finally reached at a meeting in Geneva in 1892. Later conferences and committees further elaborated upon these basic principles. After 1860 scientific conferences quickly became an accepted means of bringing unity to a field of study. The botanists soon followed the example set by the chemists and in 1867, at an international conference
in Paris, a system of rules for nomenclature was adopted, put together by Alphonse de Candolle. The first international zoological conference came in 1889, again in Paris. Here too, nomenclature was an important topic of discussion. A proposal was presented by Raphaël Blanchard, most points of which were carried, whether at this or at later conferences. Because zoology and botany each had their own associations and conferences, the rules for plants and animals diverged on points of detail even though both were based on the rules laid down by Linnaeus. That remains the case to this day. Neither in botany nor in zoology did these rules immediately put an end to disagreement in every respect. In practice various competing systems of nomenclature existed based on nationality or within specialist fields such as entomology. Still, the disagreements were over matters of detail. One notorious issue concerned the degree to which the oldest name ought always to be used, even if, for example, the classification changed. Various associations proposed solving matters such as this by setting up international nomenclature committees, but that was not easy to achieve. As far as botany goes, clarity was reached only in 1935 at a conference in Cambridge. In zoology the rules were laid down afresh in a Code of Zoological Nomenclature, published in 1961. A separate code was developed for bacteria. Initially, attempts were made to follow the botanical system because bacteria were regarded as plants. This turned out to create problems from time to time, since the field in which the names were applied was quite different. Furthermore, the microbiologists held themselves aloof from other biologists. They organized themselves in their own societies and held their own conferences. From 1930 onwards they worked on a code of their own, which was approved and published in 1947.
With the introduction of these various codes, the work of the conferences and their respective committees was by no means over. As each field of study developed further, new problems kept arising and insights changed. As a result, all the systems of nomenclature are to this day subject to extension and review. In all the other scientific disciplines too, conferences were organized and international organizations set up, most of them towards the end of the nineteenth century. In astronomy it took a surprisingly long time for an international organization to get off the ground,
although by the time it did, there had been several international projects led by special committees. The function of a general international conference was for a long time fulfilled by meetings of Germany’s Astronomische Gesellschaft (Astronomical Society), which foreigners also attended. Not until 1919, after the First World War, when the Western powers advocated a temporary boycott of German science, was the International Astronomical Union founded. (The Germans and their First World War allies were excluded.) The Union took upon itself the introduction of an international standard time and established a central notification bureau for sightings of comets. The classification and standardization of astronomical data became an important task. In 1928 a committee was formed to determine the precise boundaries of the constellations. Conferences have been important in arriving at shared concepts and points of departure. Apart from animals, plants, chemicals and constellations, the shapes of crystals were classified, for example, as were cloud formations, rocks, geological eras, mathematical symbols and mental disorders. The work of classification and naming does not generally receive much attention in histories of science, and the results are rarely spectacular, but it is essential to the daily progress of scientific endeavour. Scientists have always devoted much ingenuity and energy to this sort of classification, not because it directly results in major discoveries, but simply to bring order to potential chaos. Scientific conferences had a social function as well. However much the professors at different universities acted as their own boss, they were very much aware of being part of a shared undertaking. They therefore felt a need for new forms of communication between themselves. National or international meetings were one means of achieving this. 
Another was the founding of professional journals, which provided a forum for discussion and debate and put a face to each specialism. In the second half of the nineteenth century, practically every field of science acquired its own journals.

THE RISE OF THE EXPERIMENT: PHYSIOLOGY

In the eighteenth century the methods of investigating nature were mostly thought of as relatively simple. The researcher must collect
and organize data, and after the application of profound thought, a scientific theory would roll out of its own accord. Many theories were therefore little more than informal speculation based on the available facts. The emphasis was on collection, classification and organization, on recognizing connections, and on composition and structure. Experiments were conducted, but more as the culmination of a discourse than anything else. An experiment was above all an argument. The main intention was to use one crucial result to unveil some essential feature of nature. However, experiments came to occupy a more important place at nineteenth-century universities. Research became a task for well-educated people working in well-equipped laboratories. Furthermore, better or more complex machines and accurate instruments of measurement steadily became available, making experimental work increasingly important and all-encompassing. As a consequence, all kinds of previously unknown phenomena were discovered and explored. One of the most high-profile elements of nineteenth-century science was physiology, or research into the phenomena of life. In the eighteenth century the nature of life had been sought in a special life force, which was eventually defined more specifically as, for example, ‘irritability’ or ‘sensitivity to impressions’, but which remained extremely vague. Proof of such general concepts was sought experimentally. Over the course of the eighteenth century, several audacious experiments were carried out, such as the regeneration experiments by Trembley, Spallanzani and others. They divided up into pieces certain lower organisms – polyps, slugs and the like – and watched them grow back into the entire creature. This required great experimental skill. It was the business of educated researchers, not amateurs, so it is understandable that this kind of research had not taken off before this time.
Gradually, with the professionalization of science in the nineteenth century, more of this kind of highly skilled experimentation took place. Medical practitioners had become distrustful of the speculative theories of their predecessors and strove to acquire more precise information. As a result, a new experimental physiology came to fruition, first in France. Claude Bernard is regarded as the most important researcher in this field. He studied medicine, but never worked as a doctor, only as a researcher and professor. To him physiology was no longer an auxiliary to medicine, as it had been up
to then, but a subject in its own right. Rather than curing illnesses, his aim was to gain knowledge of the foundations of life and so he also studied plants, if to a limited degree. Bernard’s experiments were not intended to capture the essence of life; instead, they were always focused on concrete problems of detail. What is the function of certain nerves? What brings about the secretion of the gastric juices? Does the conversion of oxygen (which is inhaled) into carbon dioxide (which is exhaled) take place in the lungs, in the blood, or in bodily tissue? His most important method for answering such questions was vivisection. By severing nerve pathways, for example, or by intervening in the living body in other ways, he attempted to reveal the workings of the different organs. This did not lead to any revolutionary new insights, but it did produce countless discoveries at the level of detail, and with them a far better understanding of the workings of the body. As well as vivisection, physiology made use of technical aids, probably the most important being the microscope. Eighteenth-century botanists and zoologists had barely used microscopes at all, but in the nineteenth century the instrument (greatly improved in the meantime) was part of their standard equipment. Nineteenth-century microscopic research had a strongly experimental component. Researchers did not look through a microscope to see what there was to be seen but tried to focus on specific details that they wished to study. The making of preparations became an art in itself and to this end various staining techniques were developed. So to enable simple ‘observation’, intervention in nature was required. The new attitude was accompanied by new ideas about what life actually was. In the nineteenth century a prevailing sense arose that the phenomena of life must be explained according to the same laws of nature that governed dead matter.
Physiology was nothing other than the physics of very complex systems, which could be analysed by studying the way the various nerves, glands and so on worked. Only by means of experimentation could their operations be isolated and investigated. This idea was given a powerful boost when it was discovered that all living beings were made out of elementary building blocks, called cells. The authoritative formulation of this theory is by Theodor Schwann, in his Mikroskopische Untersuchungen (Microscopic
Investigations) of 1839. Based on cell theory, nineteenth-century physiologists tried to interpret the phenomena of life as far as possible as a matter of purely physical or chemical processes; life was a natural phenomenon like any other. This reductionist tendency gradually gained ground as the century went on. It had meanwhile become clear that countless processes were possible only within living cells or micro-organisms, not outside them, an insight powerfully defended by the French researcher Louis Pasteur in particular. The many processes that took place in the cell were attributed to the functioning of the protoplasm, the cell fluid. For lack of sufficient data, all kinds of ideas circulated about how the protoplasm could make them happen. One of the processes that depended on the presence of living cells was fermentation. But in 1897 German chemist Eduard Buchner found a way of causing fermentation without living yeast cells, by means of a substance he derived from cells. He called this substance zymase. (Modern studies have revealed that it is in fact a whole complex of substances.) This opened the way to a new approach to the phenomena of life, which were no longer conceived as the action of a living substance but as an expression of chemical reactions in the cell. Substances with similar workings to zymase were named enzymes. Since they had potential applications in both medicine and industry (fermentation is involved in various industrial processes), there was soon a great deal of interest in the structure and functioning of these enzymes. Researchers came together to form a new field of study, called biochemistry. Life was thereby further stripped of its mystique. The workings of the living cell turned out – at least in part – to be founded on ordinary chemical reactions by complex substances, which could be studied in the test tube. This basic assumption proved extraordinarily productive. Over time many other reactions were discovered. 
Substances that were involved, in addition to enzymes, included hormones and vitamins.
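The overall reaction that Buchner’s cell-free zymase extract brought about can be written, in modern notation, as the alcoholic fermentation of glucose into ethanol and carbon dioxide (a later formulation, given here only as an illustration; Buchner himself did not write it this way):

```latex
% Alcoholic fermentation, catalysed by the zymase enzyme complex:
% one molecule of glucose yields two of ethanol and two of carbon dioxide.
\[
\mathrm{C_6H_{12}O_6} \;\longrightarrow\; 2\,\mathrm{C_2H_5OH} + 2\,\mathrm{CO_2}
\]
```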

MEASURING AND EXPERIMENTING IN THE STUDY OF NATURE

In physics too, the experimental approach was in the ascendant in the nineteenth century. A distinction needs to be made between it
and the simple compilation of a series of measurements, as advocated by Laplace. That had mainly been a means of arriving at quantitative relationships, and thereby at mathematical laws. Experiments could also exist outside such mathematical programmes. The nineteenth century’s greatest experimenter was probably Englishman Michael Faraday, who did not work at a university but at the Royal Institution, a private philanthropic organization. He is famous above all for his discoveries regarding electricity, which included electromagnetism, electromagnetic induction, and the principles behind the dynamo and the electric motor. Faraday had scarcely any schooling in mathematics; in fact his formal education had amounted to little more than the rudiments of reading, writing and arithmetic. It is not always easy to define the boundary between experimentation and measuring, or between an experiment and an observation. Even seemingly simple observations – of faint stars or of micro-organisms – can require extremely complex equipment, and all experiments involve observation of some kind. The point is that researchers no longer saw themselves as purely spectators of goings-on in the world, required only to report on them as reliably as possible. They attempted to analyse phenomena by actually intervening. The different elements that could be distinguished needed to be isolated and their specific effects evaluated. Furthermore, in the nineteenth century use was made of increasingly complicated equipment even for simple observations. That alone made the resulting observation a good deal less direct. Even in fields such as astronomy, a new attitude developed. It was impossible to experiment with stars, of course, and astronomy had always been all about the observation and measurement of what happened in the heavens. It had become a professional business back in the eighteenth century, but the average observatory was equipped with little more than a telescope and a clock. The astronomer’s eye was the most important instrument.
In the nineteenth century, however, photography was introduced. Parallax measurements were initially carried out using precision telescopes, but displacements over a long period of time could be measured more easily and efficiently by taking photographs. Before the introduction of photography, the parallax had been determined for some dozens of stars over a period of about half a century. In
another half-century, photographic measurements determined the parallax, and therefore the distance from earth, of several thousand stars. (Still a fraction of the total, of course; most stars were simply too far away for any kind of measurement, even using the most refined methods.) You might wonder what the point of this was. That the stars were an extremely long way away had become clear in the eighteenth century, and it might seem completely pointless to ask whether a specific star was at a distance of twenty light-years or thirty. But that was not how people reasoned. The astronomers of the time regarded such measurements of distance as part of their job, and society as a whole had great respect for pure scientific knowledge, and so did not question the usefulness of this research. Working with photographic plates was quite an undertaking in those years, but it made phenomena visible that had never been seen before. Objects that could not be detected even through the strongest of telescopes could be photographed using long exposure times. In other ways too, direct observation was surpassed. Around the middle of the century, spectroscopy was introduced. This enabled the further analysis of starlight, which in turn produced data about the chemical composition of the stars, on the basis of which the stars could be classified. In the nineteenth century, astronomical observatories did not remain simple observation posts; they were transformed into complete laboratories. Chemistry was the precise opposite of astronomy in this sense. It had traditionally been a purely experimental field. In the nineteenth century, however, a new form of experimentation was introduced. It was not focused on the creation, dissociation or analysis of substances by means of simple reactions; instead, it attempted to gain an insight into their structure. Chemists wanted to understand how various reactions took place and also to be able to predict the course they would take.
So theoretical concepts and research methods were borrowed from physics. In the early twentieth century the structure of crystals was investigated with the help of X-ray diffraction, but conclusions could be drawn from the diffraction patterns only by those who understood various underlying mechanisms. So it was by a very indirect route that information was gained about the substance itself.
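The kind of ‘underlying mechanism’ involved can be illustrated by the relation formulated by W. H. and W. L. Bragg in 1913 (stated here in modern notation as a sketch, not as material from the source): X-rays reflected from successive planes of atoms in a crystal reinforce one another only at particular angles, which is what produces the diffraction pattern.

```latex
% Bragg's law: X-rays of wavelength \lambda, striking crystal planes
% a distance d apart at glancing angle \theta, interfere constructively when
\[
n\lambda = 2d\sin\theta, \qquad n = 1, 2, 3, \ldots
\]
```

Measuring the angles at which reflections appear thus yields the spacings of the atomic planes, from which the arrangement of atoms in the crystal can be inferred.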

The way a phenomenon such as life was perceived is inseparable from the way in which it was investigated. On the one hand a mechanistic, materialist view invited an experimental approach, but if this was successful, it in turn reinforced those initial reductionist tendencies and led to new and more far-reaching theories. In the study of dead nature too, the research method and the forming of theories are connected, in a way that is unmistakable yet hard to define. With nineteenth-century methods of research, in which nature was analysed by means of an array of complex equipment, it became difficult to persist in thinking that a researcher merely had to observe carefully and think deeply. It also became increasingly difficult to regard nature as an unproblematic mathematical entity. The old image of the world as made up of phenomena that could easily be observed, and could all be explained directly based on a few fundamental laws of nature, proved too simple. In the laboratory more and more abstract variables were encountered that had only an indirect connection with phenomena observable by the human senses. To conduct an experiment, it was important to know the relationship between the different variables. Systematic experimentation and the compilation of long columns of measurements are of little use if you do not know what you need to do with your results. Researchers increasingly resorted to an abstract mathematical approach.

FURTHER MATHEMATIZATION

In France, after the work of Fresnel and Ampère, the mathematical procedure became rather bogged down. French researchers continued engaging in experiments and making precision measurements, but they no longer arrived at innovative theoretical work. The mathematical programme of the circle around Laplace inspired a number of researchers abroad, however. Although there was no rapid dissemination of the new mathematical approach, over the course of the nineteenth century, small groups of researchers emerged that had a similar programme. In Germany, Wilhelm Weber, at the University of Göttingen and later of Leipzig, was the most eminent defender of mathematical physics. He worked mainly on optics and electrodynamics. In Britain,
too, some researchers applied themselves to the formulation of theories of physics in mathematical formulae. In the second half of the nineteenth century, the Scot James Clerk Maxwell managed to come up with a fully worked out mathematical theory in which he brought together all electrical and magnetic phenomena in nine partial differential equations. He had a number of loyal followers who further developed his ideas. Most had been educated at the University of Cambridge, where mathematics was an important element of the exam system. One of his disciples, Oliver Heaviside, produced a simpler formulation of the theory, in four rather than nine equations, known today as the Maxwell equations. Like Fresnel and Ampère before him, Maxwell permitted himself a good deal of freedom with regard to the eighteenth-century ‘Newtonian’ (and ‘Laplacian’) model, which took account only of forces between particles. Rather than accounting for the force exerted by one body on another, his formulae described a state of tension in space. The energy resided as it were in the space itself, and the force extended as a ‘field’ of ‘lines of force’. Maxwell was nevertheless a convinced ‘Newtonian’, in the sense that he believed all natural phenomena – including electricity and magnetism – must be reducible to mechanical processes. Chemistry had for a long time been above all a practical field of study in which mathematics had no place, although precise measurements were made of the quantity of a specific substance involved in a specific reaction, in proportion to other substances. One of its aims was to gain more insight into the nature of the reactions and the substances involved: Which substances were simple and which were composed of different elements? What did those compounds look like? As far as the reactions themselves were concerned, chemists assumed they could be understood only based on the nature of the elementary components. 
In the late nineteenth century, however, they went in search of mathematical connections that could describe the way in which the reactions took place. Several chemists – Norwegians Peter Waage and Cato Maximilian Guldberg, and Dutchman Jacobus Henricus van’t Hoff – demonstrated that reaction speed was dependent on the concentrations of substances in a solution. They succeeded in describing this with the help of mathematical formulae. Van’t Hoff in
particular was one of the most important pioneers of this kind of mathematical description of chemical processes. He showed that reactions strove to achieve a certain equilibrium that could be calculated. This meant that calculations in chemistry were no longer founded upon the properties of the smallest particles (although Van’t Hoff himself did assume the existence of such particles and based his theories on this notion), but instead on measurable quantities such as the concentrations of substances.

In this period graphs became the method of choice for defining connections between all sorts of phenomena and analysing them mathematically. Before the nineteenth century, graphs were largely unknown. Of course, throughout history people had in some situations resorted to diagrams or other forms of graphic depiction to explain things, but for a long time these were the exceptions, requiring special interpretation. Mathematicians and physicists at the time of the Scientific Revolution did not generally use graphs, and the mathematical models on which the researchers of the eighteenth century based their work were above all analytical. For them the study of problems in mechanics or astronomy came down to the drawing up and solving of complex differential equations. The long columns of precision measurements compiled from the early nineteenth century onwards were not presented as graphs but in the form of tables. Data from self-registering instruments of measurement, which by their nature already took the form of graphs, were converted into tables for the purpose of working with them. In science in this period, people preferred to work with numbers rather than with curved lines.

In the eighteenth century, incidentally, there was one scholar who recognized the potential of graphs, not just for purposes of illustrative representation but for the analysis of the results of measurements. He was a Swiss mathematician called Johann Heinrich Lambert.
He wrote at length on the subject, but after his death many years passed before his ideas were picked up. Most researchers felt uncomfortable with them and had great difficulty understanding abstract diagrams. It was only with the further mathematization of research in the nineteenth century that graphs gradually took off, first in Scotland and England, later in the rest of Europe.

All kinds of connections between quantities and variables were made visible and accessible to analysis by researchers with the aid of graphs. The mathematization of research into nature from the late nineteenth century onwards therefore cannot be considered in isolation from the use of graphs; indeed, some areas of research are almost unimaginable without them.

Phase theory at the turn of the twentieth century investigated the interdependency of the pressure, temperature, volume and state (gaseous, liquid or solid) of substances. The relationships between these were extremely complicated and could only really be studied by presenting them as graphs. Researchers in this field sometimes even found drawings on paper insufficient and made three-dimensional models out of plaster.

The reading and drawing of graphs became a standard technique taught at universities, and it infiltrated almost all disciplines. Their usefulness was recognized even in subjects in which mathematics was not a prominent presence. The representation of measurements in a graph, for example, offered insight into a process without the need for complex mathematical analyses. Sometimes graphs were deployed in fields where they may not have been strictly necessary. In about 1900 physiologists liked using them as a rhetorical means of giving their results an impressive and scientific look.

The methods used had an impact on general ideas about nature. The eighteenth-century way of describing nature hinged on the idea that it was a kind of mathematical system, like Euclidean geometry. The old mechanistic physics was essentially deductive. To understand natural phenomena you had to go back to elementary laws of nature, which described the forces operating between the smallest particles of matter. The rest, beyond these fundamental laws, was of secondary importance. Nineteenth-century researchers may have agreed in theory, but in practice they were not too concerned about the deeper, underlying principles.
They simply looked for connections between known phenomena, between the pressure, volume and temperature of a specific quantity of gas, for example. Such variables were subject to laws that might not be fundamental, in the sense that you could trace them directly back to properties of the smallest particles, but they were very well suited for use in calculation. If researchers discovered a mathematical relationship between such quantities,
they would call it a ‘law’, without worrying too much about whether such a law was fundamental in character or not. Ideas about what laws of nature actually were therefore underwent a shift.
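A phenomenological ‘law’ of this kind, a calculable relationship between measurable quantities that says nothing about underlying particles, can be illustrated with the ideal gas law, pV = nRT. The snippet below is a modern sketch for illustration only, not anything from the period:

```python
# Sketch: a phenomenological "law" relating measurable quantities.
# The ideal gas law p*V = n*R*T links pressure, volume, temperature and
# amount of gas without any reference to the behaviour of individual
# molecules: exactly the kind of relationship nineteenth-century
# researchers were content to call a law.

R = 8.314  # molar gas constant, J/(mol*K)

def pressure(n_mol: float, temp_k: float, volume_m3: float) -> float:
    """Pressure in pascals of an ideal gas, from measurable quantities."""
    return n_mol * R * temp_k / volume_m3

# One mole of gas at 273.15 K in 22.4 litres sits at roughly 1 atmosphere.
p = pressure(1.0, 273.15, 0.0224)
print(round(p))  # about 101 kPa, i.e. roughly 1 atm
```

The point of the example is that p, V and T are all directly measurable, so the ‘law’ can be tested and used for calculation without any commitment to molecules.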

STATISTICS

Another new way of processing large amounts of (initially often simple) data was by using statistics. The first works on probability date from the mid-seventeenth century. In the eighteenth century the theory was further developed by famous mathematicians like Carl Friedrich Gauss and Pierre-Simon de Laplace. In those years it did not occur to anyone to apply it to research on nature. Statistics started with estimations of chance in various forms of gambling, but this resulted in an abstract mathematical theory that had little relevance to concrete problems.

Modern statistics arose in the nineteenth century, and the discipline has gradually acquired greater significance since then. Its first application concerned accuracy of measurement. Around 1800 astronomers realized that the unavoidable imprecision of observations contained an element of chance and therefore could be subjected to a calculus of probability. The ‘law of errors’ indicated how, with a great many observations of a particular phenomenon such as the position of a star, the measurements recorded clustered around the correct value. By deploying this law, a better estimate could be made of the actual value, based on a series of observations.

In studies of weather and climate, which were undertaken from the early nineteenth century onwards, there was no option but to work with averages and deviations from the average. Here too, therefore, an elementary form of statistics could not be avoided, although it served merely as an aid to the processing of observational data. The laws that were sought were simply assumed to be precise in character.

The application of the calculus of probability to theories themselves did not start in the study of the natural world but in the social sciences. In the eighteenth century this field was regarded as part of political economy, and it mainly consisted of the collection of large amounts of data about the state.
The word ‘statistics’ is related to the word state, indicating that it was an instrument of political science. The idea behind it was that based on all the data, social life could be
captured in laws, as had already happened with the reality studied by physicists. By means of these laws, politics would be put on a scientific footing.

This optimism was not universally shared, incidentally. Some people believed that real science went beyond merely the collecting of facts and that masses of assembled data did not provide any insight into the causes of, or connections between, phenomena. Within political science the methods of probability calculus were applied with the aim of arriving at meaningful results.

The nineteenth-century Belgian mathematician and astronomer Adolphe Quetelet is regarded as the founding father of modern statistics. He used statistics almost exclusively for the social sciences, not the natural sciences, but he is important because he was the first to indicate that mathematical conclusions could be attached to statistical methods. He showed that an apparently chaotic heap of data behaved in many cases in an orderly manner. The average of the results generally remained constant. Even the variation in the data, expressed as a deviation from the mean, was often mathematically predictable. Based on the average and the error distribution, sets of data could be processed further.

His approach proved fertile in countless fields – in medicine, for example. One of Quetelet’s admirers, who corresponded with him, was English nurse Florence Nightingale, who devoted herself to the professionalization of nursing and to the improvement of medical care for serving soldiers. To give additional force to her campaign she used statistical arguments, which enabled her to show the degree to which improved nursing impacted upon death rates. At the end of the nineteenth century, medical statistics expanded enormously.
By systematically collecting figures for deaths and illnesses and combining them with other data, about drinking water, soil types, wellbeing and suchlike, doctors expected to be able to identify unhealthy living conditions and to combat epidemics. In fields such as biology and agricultural science too, statistics proved extremely useful. A plant breeder deals not just with a single specimen but with a field full of plants that have different properties. Determining whether one crop is doing better than another requires basic statistical methods. In botany and zoology the notion of a species as completely uniform was abandoned. In the twentieth
century, biologists decided that species consist of populations made up of individuals, each with a slightly different combination of inherited characteristics. Statistics proved invaluable in revealing the competitive advantages conferred by certain desirable or undesirable qualities, and their effects on the outcome of competition between different species.

Statistical methods were not just a useful aid. They turned out to open up new routes for the mathematical description of theories of physics. It was not obvious that statistics would play a role here: statistics deals with uncertainties, whereas in the natural sciences laws were sought that gave a precise description of an ideal case. A single well-designed experiment was therefore considered worth more than a great mass of data. In certain branches of science, however, such an attitude proved untenable.

This first became clear in thermodynamics, the science of heat and energy. The initial problem was that heat does not flow spontaneously from a colder to a hotter object. The question was whether this fact could be formulated as a hard law of nature, in the way that the fact that water always flows downwards and never upwards is a necessary consequence of gravity. The aforementioned Maxwell thought not. He saw heat as the irregular movement of the molecules of a substance (in his argument a gas) and claimed that this irregularity could be described using the same laws as those that statistics applied to social phenomena. The speed and path length of molecules at a given moment therefore varied according to a probability distribution. Thus conceived, the fact that in the macroscopic world heat always flowed from hot to cold was only a probable, not a necessary outcome. It was a sum of individual processes each of which was entirely reversible, and this meant the whole result was reversible too. That heat always flowed from hot to cold was not absolutely true; it was merely statistically valid.
This implied, according to Maxwell, that our knowledge in this area was less than perfect. His aim remained to reduce the phenomena to strictly dynamic principles with absolute validity.

In 1877, the Austrian scholar Ludwig Boltzmann continued reasoning along these lines. Like Maxwell, he regarded temperature as an effect of the movement of molecules. Some distributions of temperature
within a system are more probable than others, in the sense that they can be achieved in several different ways. It is a statistical ‘law’ that a system strives for a situation of greater probability. The calculus of probability does not conceal our lack of knowledge; it shows how nature behaves of necessity. The fact that heat always flows from hot to cold and never from cold to hot was based on probability, but despite that it really was a hard law of nature.

Like Maxwell, Boltzmann cherished the ideal of a deterministic, exact physics, but the theory of thermodynamics proved impossible to couch in those terms. If a calculus of probability were taken as a foundation, you would get a good deal further. To the degree that this became clearer, an increasingly fundamental character was attributed to Boltzmann’s laws, based as they were on statistics. Ideas about the mathematical nature of reality were therefore revised for pragmatic reasons.

The nineteenth century can in a sense be seen as the heyday of modern science. The limitations placed upon science in the eighteenth century had been broken through on various fronts. Science was no longer almost entirely dependent on the activities of enthusiasts in an aristocratic society. It was supported by a self-assured and largely independent group of professional scholars, which is to say people who earned their living through science and derived their social standing from it: professors, but also museum curators, observers and others. They were absolutely convinced of the importance of their work for the education of the individual and for the progress of society. Science was a profession, but it was also a calling.

Conclusions concerning the natural world were no longer legitimized by appealing to the ideals of classical civilization. The new professors of exact science felt compelled to develop a new ideal of knowledge.
They believed that the modern study of nature itself laid the foundations for human civilization and social progress. Modern science was the highest possible achievement of the human mind. Professors impressed this ideal upon their students and defended it in writing time and again. Given that universities were still the places where the social elite was educated, these ideas also acquired great authority elsewhere in society. In secondary schools, too, more and more attention was paid to science subjects.
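Boltzmann’s claim that some distributions ‘can be achieved in several different ways’ is easy to make concrete. The toy calculation below is a modern illustration, not Boltzmann’s own working: it shares a few quanta of energy between two small bodies and counts the microstates behind each split; the split with the most ‘ways’ is the most probable one.

```python
from math import comb

def multiplicity(quanta: int, oscillators: int) -> int:
    """Number of ways to distribute identical energy quanta among
    distinguishable oscillators (a stars-and-bars count)."""
    return comb(quanta + oscillators - 1, quanta)

# Two small bodies, 3 oscillators each, share 6 quanta of energy.
# For each way of splitting the energy, count the microstates behind it.
N, TOTAL = 3, 6
ways = {qa: multiplicity(qa, N) * multiplicity(TOTAL - qa, N)
        for qa in range(TOTAL + 1)}

print(ways)
# The even split (3 quanta each) can be realized in the most ways, so it
# is the most probable macrostate: Boltzmann's statistical 'law' in miniature.
```

Nothing forbids all the energy ending up in one body; it is simply realized by far fewer microstates, which is exactly the sense in which heat flow from hot to cold is ‘only’ statistically valid.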

The research conducted by scholars in this period was comprehensive and multi-faceted. The combination of an experimental approach, increasingly precise measurement and the application of mathematical techniques and ways of thinking produced a stream of results in the fields of electricity, magnetism, light, radiation, energy and heat. The most important theoretical syntheses of what is known as classical physics came into existence in the nineteenth century. In fact, it was then, in and through this research, that modern physics acquired its identity. The same went for fields such as chemistry and physiology.

The new ethos implied a certain attitude to reality. For many people, the methods used to investigate nature provided a key to the understanding of reality as such. Most professors had a firm belief in the model of classical physics, in which everything was reduced to mechanical forces. They strove to organize all other sciences along the same lines. This tendency was clearly visible in the life sciences in particular. Their urge to make their mark sometimes led to open attacks on other ‘unscientific’ beliefs, especially those of religious dogma.

While science was at the peak of its reputation and appeared to hold the key to the whole of reality, forces were at work that aimed to make it subservient once again. Precisely because it was so astonishingly successful at describing and manipulating nature, its findings were valuable to people with more worldly motives. At first utility and idealism seemed in harmony, but in the twentieth century tensions between the two increased. Science was becoming ever more powerful when it came to explaining and controlling nature, but at the same time it was being forced to focus on the aims of others. After the end of the nineteenth century, nature’s high priesthood gradually began to decline.

7

THE TWENTIETH CENTURY: INDUSTRIAL SCIENCE

THE RISE OF INDUSTRIAL SCIENCE

The nineteenth century can in a sense be regarded as the heyday of science in its classic manifestation. It enjoyed great respect and authority; scientific research needed no further justification. But at the same time tendencies were at work that undermined its status. Science was not developing only at the universities, and as well as being an exalted goddess it was still a milch cow, perhaps more so than ever.

Over the course of the nineteenth century, the Western world was transformed into an industrial society. Traditional crafts increasingly gave way to industrial methods of production. Transport, communications and many other facets of everyday life underwent revolutionary changes. Contemporaries were all too well aware of this. Those involved mostly believed that all technical progress was attributable to science, or, perhaps more accurately, no real distinction was any longer made between the two.

The nineteenth century witnessed a veritable epidemic of discoveries and inventions. At all levels the realization sank in that innovations could help society move forward and make people rich. In America there were even those who built careers as inventors. They developed new inventions that they then either sold on to
interested manufacturers (the patent legislation of the nineteenth century made this possible) or exploited in businesses of their own. Thomas Alva Edison became the most famous among the latter group. He invented the phonograph, turned electric lighting into a commercial product and set up the first workable electricity grid.

The chemist Leo Hendrik Baekeland, originally from Belgium, was another inventor who devoted his efforts to the development of industrial processes. After initiating several successful enterprises, he threw himself into the production of plastics. German professor of chemistry Adolf von Baeyer had earlier described how a material could be made out of phenol and formaldehyde, but it was too brittle for practical application. Baekeland improved the process and made one of the first successful plastics, called Bakelite. He set up a company that produced and marketed it.

In retrospect, it is clear that many of these inventions had little to do with science. They were mainly the work of skilful tinkerers. The first practical combustion engine, the Otto engine, was built between 1860 and 1868 by German commercial traveller Nicolaus August Otto. He based his idea on a description in a newspaper of a design by another autodidact, Etienne Lenoir. Generally speaking, the problems were not so much theoretical as practical. The first engines jolted far too much because the explosions were so powerful. Only after Otto had solved that problem could his engine be put to use in automobiles. At first it was relatively easy for a bicycle manufacturer to switch to producing cars, or even aeroplanes, and it was a similar story with many other inventions of the time.

It is certainly true that, far more so than before, such inventors consciously sought a connection with the scientific worldview. Even for those with little schooling, a range of courses, textbooks or journals was available.
Inventors like Edison kept themselves well apprised of developments in science. They undoubtedly saw their own inventions as an expression of scientific progress.

‘Scientific character’ was quite often an argument used by those marketing new industrial products. Manufacturers of new foodstuffs, for example, liked to put recommendations by famous scientists on the packaging. They paid for the privilege. The chemist Von Liebig linked his name to a firm producing meat extract, a product he had earlier researched, in return for five
thousand pounds, along with a seat on the board with a salary of a thousand pounds a year.

Nonetheless, the entrepreneurs who took the lead in industrialization often had a rather ambivalent attitude towards the work of the universities. On the one hand, they had a firm faith in science and scientific progress. German industrialist Werner von Siemens had not undergone any scientific training, but he had lost his heart to science, far more than to technology. He conducted his own research and was taken entirely seriously in circles of ‘real’ scientists. On the other hand, entrepreneurs were well aware of the importance of practical knowledge. They were suspicious of academics who had their heads full of abstract theories but lacked any practical sense. Theoreticians therefore initially got nowhere in many businesses. This applied not only to academics but also to graduates of the new technical schools. Many fields of technology therefore remained for a long time in the hands of those who had acquired their skills purely through practical experience.

Scientific research gained in significance in industry only at the point when researchers began to involve themselves directly with industrial products and processes. Up until then they had mainly studied nature, which is to say they were interested above all in fundamental laws. The concrete problems of production, which were linked with all kinds of incidental and fluctuating factors such as the properties of a material, wear-and-tear and so on, were regarded as disruptions that stood in the way of understanding. Of course, there were exceptions, but until the nineteenth century these had never produced a systematic science. With increasing industrialization, it became worthwhile to study certain practical problems systematically, with scientific tools. This happened in metallurgy, for example. The nineteenth century was built on iron and steel.
To make iron suitable for new applications such as train tracks, steam boilers and railway bridges, it was important to be familiar with the properties of the material. Back in the eighteenth century, researchers had found that you could make the macrostructure of a piece of metal visible by treating fracture surfaces with acids. But it was only towards the end of the nineteenth century that this discovery was systematically investigated, when an English amateur called Henry C. Sorby applied new microscopic
techniques, derived from research on minerals. (The problem lay mainly in the making of suitable preparations.) More important still was the fact that this work was taken up by other researchers – if only after a number of years – and a recognizable field of study emerged, so that various researchers built upon each other’s work. They attempted to arrive at more general insights into the relationship between the structure of the metal, its properties and the way it was processed.

At the start they were no more than a loose network of scattered individuals. Among the first to make important advances were Adolf Martens, a German railway engineer who carried out research in his spare time, and Floris Osmond, a French engineer at the iron foundry in Le Creusot. Their work was picked up by established scientists and continued at universities and colleges of higher education. Metallurgical and metallographic research then quickly gained a firm foothold as a separate specialism in the iron industry. Scientific tests made it possible to determine beforehand whether a material met certain requirements. Later metallurgists also contributed to the development of new alloys with the properties needed for industrial use.

Such fields were developed above all by engineers who had graduated from the polytechnic schools. (The term ‘engineer’ is older, but it had formerly been used to refer to fortress builders and other army experts.) Their work covered a broad range of areas, from tools, machines, bridges and canals to mining and chemical installations. They developed a strong esprit de corps. Engineers set up their own organizations, with departments and committees that concentrated on study within specific terrains (such as metallography). Some fields were dominated by engineering graduates. This was true particularly in the laying and maintenance of railways, an area in which governments were highly influential.
At the end of the nineteenth century came electricity infrastructure, an entirely new area that was based on the work of scientific researchers but in which there was little practical experience and considerable risk. The building and management of electricity grids was entrusted to specially trained electrotechnical engineers. The new electrotechnical industries were often founded by those who had graduated in the subject.

From the middle of the nineteenth century, therefore, industry increasingly drew upon theoretical and scientific knowledge. Businesses
could have materials or machines tested by private laboratories or engineering bureaus. Larger companies set up their own technical departments and laboratories. In the twentieth century more and more industrial tasks became the work of specialists with training that was in part theoretical. The same held true outside of industry, incidentally, in fields such as transport, agriculture, urban development, town planning and public health.

As this work became more important and gained in prestige, graduates of university courses such as physics or biology became interested in it. Professors at the universities more frequently turned their attention to industry; they worked as advisers, used their own laboratories to carry out research that had an industrial purpose or was commissioned by industry, and tried to get their students jobs in industrial companies. Materials science, agronomy, organizational science and nutrition became recognized as academic subjects.

THE SCIENCE OF MEASUREMENT

The Industrial Revolution brought new opportunities and problems for techniques of measurement. Industrial tests demanded reliable measurements, in areas including electricity and light intensity where no authoritative units of measurement existed. Furthermore, these measurements needed to be related to known quantities, such as energy. The fact that scientific theories were becoming increasingly mathematical had an impact on the demands made of the units chosen for use.

At the same time, the accuracy of instruments of measurement had always depended mainly on the expertise of the maker. The new techniques of production in the nineteenth century brought purer raw and manufactured materials, more consistent (and therefore predictable) quality and more precise quantification. This made it possible to make accurate and standardized instruments. The emergence of modern industrial standards enabled precise measurement in many fields. Although these improvements did not serve any primary scientific goal, science profited from them enormously.

The new accuracy that was demanded, and could now be achieved, in turn made new demands of methods and units of measurement. Therefore, in the second half of the nineteenth century, serious efforts were made to improve measuring systems. It was important
that experiments or results were comparable with one another. A rational system of units needed to be founded on a limited number of basic units, from which all others could be derived. The question was, what should these basic units be? The Standards Committee of the British Association for the Advancement of Science expressed a preference for a system that took the centimetre, the gram and the second as basic units, known as the CGS system. It could be used to express all mechanical quantities. Other quantities would then be described in mechanical terms.

The choice of a system of units was not all that was required, however. It was also important to determine how the different quantities could best be measured, how units derived from the basic units needed to be defined and how instruments should be calibrated. Most discussions of questions of this sort arose within organizations of specialists in specific fields, usually engineers.

In 1875 a central body was called into being to concern itself specifically with questions about measurements and standards. That year, in Paris, the Metre Convention had been signed, an international treaty whereby seventeen countries – others later joined them – agreed to attend to the further unification of weights and measures, as well as calibration methods and standards, through the Bureau International des Poids et Mesures (International Bureau of Weights and Measures). This indicated that science had become the stuff of politics. Unification was no longer simply a matter of practical benefits to be gained. It was also about the political desirability of collaboration as such. In the years that followed, similar fora were set up in other fields. In 1906, for example, the International Electrotechnical Commission was founded, in which decisions were made about specifications in the electrotechnical field. This too was a committee composed of official government representatives.
In this period governments were increasingly taking the initiative in setting up international conferences or international organizations, for example to look at issues surrounding health, or telegraphy. Their representatives were often well-known researchers. The Bureau International des Poids et Mesures, established in 1875, initially concentrated mainly on the precise definition of units of length and mass, and the problems of calibration. But gradually the organization became central to everything that had to do with
systems of measurement. Subcommittees were set up for the measurement of temperature, radiation and so forth. When the decision was made in the twentieth century that a system in which everything was reduced to time, distance and mass was unworkable, and that several other basic units would have to be introduced, the Bureau International was tasked with making a decision.

In 1954 the Conférence Générale des Poids et Mesures, which brought together representatives of the states that had signed up to the Metre Convention, decided to introduce a system based on six units. The metre, kilogram and second were supplemented by a unit for temperature, the degree kelvin (later called simply the kelvin), a unit for the strength of an electric current, the ampere, and a unit for light intensity, the candela. The system also included rules governing the units derived from these six, and the symbols used to indicate them.

The Metre Convention was both more and less ambitious than the earlier reforms at the time of the French Revolution: more ambitious because it was not limited to lengths and weights but provided a general system of measurement; less ambitious because it no longer aimed to bring about the reform of a whole society. The system of units laid down was placed primarily at the service of science and technology. All kinds of commonly used units of measurement could happily exist alongside it. Moreover, whereas the French revolutionaries thought they were creating a system that would last for eternity, the Metre Convention was a result of the insight that a system of measurement continually needed to be refined, supplemented and adjusted. The six basic units of 1954 were expanded in 1971 to include a seventh, the mole, as a unit of ‘amount of substance’.

Work on measurements and units is less than spectacular, and the results as such do not exactly grip the imagination. The important thing was to avoid misunderstandings and to be able to take measurement to a higher level of accuracy.
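The system the Conférence Générale arrived at is today’s SI. The seven base units it settled on are factual; the little dimensional bookkeeping below is my own sketch for illustration, not any official algorithm:

```python
# The seven SI base units: the six adopted in 1954 plus the mole (1971).
BASE_UNITS = {
    "length": "metre",
    "mass": "kilogram",
    "time": "second",
    "temperature": "kelvin",
    "electric current": "ampere",
    "luminous intensity": "candela",
    "amount of substance": "mole",
}

# A derived unit is a product of powers of base units. The newton,
# the SI unit of force, is kilogram * metre / second^2.
newton = {"kilogram": 1, "metre": 1, "second": -2}

def dimension_string(unit: dict) -> str:
    """Render a derived unit as a product of base-unit powers."""
    return " * ".join(f"{base}^{power}" for base, power in unit.items())

print(len(BASE_UNITS))           # 7 base units
print(dimension_string(newton))  # kilogram^1 * metre^1 * second^-2
```

This is the sense in which the system is ‘rational’: every other unit can be derived mechanically from the small set of base units.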
For research, however, the industrial standards were of great significance. In the eighteenth century, individual instruments had generally been made to order by a specialist instrument maker, but from the nineteenth century onwards, precision instruments were increasingly made in series – ammeters, oscilloscopes, X-ray tubes, electron microscopes and so on. Many laboratories had them as part of their standard equipment. Of course, for each new phenomenon a new test installation had to
be set up, but this could often be built out of standard elements. Today there are even standard laboratory rats.

RESEARCH INSTITUTES

Industrialization greatly increased the prestige of science. The immense technical and economic progress seen in these years was attributed in large part to the contribution of scientific knowledge. Science was regarded as a blessing for mankind. Towards the end of the nineteenth century, a climate developed in which the advancement of science was regarded as a moral imperative.

In 1889 American millionaire Andrew Carnegie published his Gospel of Wealth, in which he argued that the rich should spend their money on philanthropy and good causes. His message fell on fertile soil. Many American millionaires established funds with cultural, humanitarian or scientific aims in view. In 1902 Carnegie himself set up the Carnegie Institution of Washington for Fundamental and Scientific Research. A year earlier another millionaire had founded the Rockefeller Institute for Medical Research. In Europe similar bodies existed. They paid for research, provided equipment, built laboratories or supported science in some other way. Belgian industrialist Ernest Solvay established an institute for the organization of international scientific conferences. Wealthy individuals put up prizes to encourage or reward researchers. The juries were usually drawn from scientific academies. The Nobel Prizes, founded by Swedish industrialist Alfred Nobel and presented annually since 1901, were the best endowed and partly for that reason have become the best known, but there were many more.

Nor did governments sit on their hands. Science was not merely regarded as essential in building prosperity; scientific achievements also made a vital contribution to a nation’s standing in the world. At the end of the century, more money became available for science at the universities. Whereas university laboratories had until then mostly been accommodated in whatever space happened to be available, specially designed buildings now arose, entirely suited to their task.
Such laboratories often specialized in one particular field of research: spectroscopy in Bonn and Tübingen, radiology in Heidelberg, extremely low temperatures in Leiden.


The universities were no longer the exclusive guardians of scientific research. The new flows of funds generated special research institutes outside them. At older institutions such as universities, museums or botanical gardens, research had generally been a secondary objective that the staff imposed on themselves. (Only astronomers were traditionally focused on research.) The new institutes had research as their main function. When they were set up, the money or initiative of private citizens was far from unimportant, but they were also able to rely on government support.

In 1885 French chemist Louis Pasteur, head of the physiological-chemical laboratory of the École Normale Supérieure in Paris, managed to develop a vaccine against rabies, until then a deadly disease. Pasteur was not only a great researcher, he also had an excellent feel for what we now call public relations, and he exploited his success to the full. The French Académie des Sciences supported his request for a research lab of his own. Since Pasteur wanted to be independent, he rejected the option of government financing. Instead, a large public collection was organized, and people proved only too willing to support this blessing to a suffering humanity: two and a half million francs came in. The Pasteur Institute opened its doors in 1888. More than just a place to prepare vaccines and treat sufferers from rabies, it developed into a prominent centre for microbiological research.

Pasteur’s institute was imitated almost immediately, both in France and beyond. When in Russia an adjutant of Prince Oldenburgsky, a relative of the Tsar, was bitten by a rabid dog, the prince sent him to Pasteur in Paris, who cured him. The prince was so impressed that he decided Saint Petersburg must have such an institute too. At his urging, the Imperial Institute for Experimental Medicine was set up. Similar institutes were established both in and outside Europe.
Another important body was the Physikalisch-Technische Reichsanstalt (Imperial Institute for Physics and Technology), founded in Berlin in 1887. It was intended for research in the fields of physics and technology, especially methods of measurement. Although this was an official state institution, it came into being only after years of intensive lobbying by the major industrialist Werner von Siemens. He personally donated the two hectares of land on
which it was built and paid to have the building work started at a point when it was not yet entirely certain that the Reichstag would give its approval. In 1911, again in Germany, the Kaiser Wilhelm Society was founded, as a kind of umbrella organization intended to channel private financing under state supervision. It created a great many research institutes in fields the German government regarded as important.

Businesses not only gave money to independent researchers, they also carried out their own scientific research. In the chemicals and electrotechnical industries, which had close ties with science, the first real research laboratories were established around the turn of the century. These were company labs whose work was no longer aimed at solving everyday problems but instead focused on more fundamental issues that were unlikely to bear fruit in the short term. They acquired knowledge that was of importance for the development of new products, or for the improvement of the manufacturing process. The example was set by research departments such as that of General Electric, founded in 1900, and of the American telecommunications company AT&T, founded in 1911 and after a merger in 1925 known as Bell Labs. In 1928 the chemicals firm of E.I. du Pont de Nemours used the offer of a far higher salary to tempt the renowned chemist Wallace Hume Carothers away from Harvard. He established a research group at the company and started researching new synthetic polymers. He discovered nylon, which Du Pont put into production in 1939.

The state too eventually became involved in scientific research more directly. During the Second World War, scientists were deployed in large numbers to develop military techniques. In England radar was invented, while Nazi Germany designed rockets and jet planes, and in America, at the Massachusetts Institute of Technology (MIT), control theory was perfected to create fire control systems.
All these efforts pale into insignificance when set beside the Manhattan Project, a code name for the development of the American atomic bomb. The United States spent almost two billion dollars on it in total, a vast sum in those days. The project was under the direct control of the American government. A large new research centre arose in the deserts of New Mexico, entirely devoted to nuclear research. Thousands of people
worked there in isolation and in the strictest secrecy. There could be none of the basic scientific values of openness and exchange of information in this case.

In the decades after the Second World War, the Cold War gave governments cause to initiate more such military projects. The increase in prosperity in these years provided the necessary financial scope. Nuclear physics and, by extension, the physics of elementary particles, could count on generous support for many years. Most spectacular of all, however, was the development of space travel. In the United States and the Soviet Union in particular, enormous sums of money were spent on it, spurred by mutual rivalry. The launch of the first Russian satellite, Sputnik, in 1957, sent a wave of panic through the offices of the American government and prompted a huge and extremely expensive effort to win back lost ground. In 1958 the National Aeronautics and Space Administration (NASA) was set up, run directly by the American government. It was the United States’ most expensive scientific programme. The United States won the space race before the eyes of the world when in 1969 it mounted a successful manned voyage to the moon and back, an achievement the Soviets failed to match.

But the contest was not merely a matter of national prestige. Much of this work most definitely had a military purpose. Satellites were useful for espionage, communication and localisation, for example. By their very nature, many research results were secret, although some were later released for civil or scientific use. Oceanographers were able to chart the distribution of underwater volcanoes in the Pacific Ocean with the help of data from American military satellites, which could measure anomalies on the surface of the water caused by the gravitational field of the volcanic mountain below. Scientific research grew in extent and scale, and therefore became more costly.
Eventually only the state could produce sufficient resources to participate in earnest. The Pasteur Institute in Paris managed to remain a privately financed body for more than half a century, but ultimately the economic crisis after the Second World War forced it to accept state support. It then became increasingly dependent on that support. The Kaiser Wilhelm Society became the Max Planck Society, financed by the German government.

THE TWENTIETH CENTURY

CONTROL AND MODELLING

In the twentieth century, even more than before, nature was described in mathematical and quantifiable terms. This tendency was obviously inspired in part by industrial working methods. In the factories the work of skilled artisans gave way to standard routines, designed by specialists but preferably performed by an unskilled workforce, or even by machines. It had previously been up to the worker to judge whether a specific piece of work was a success. New relationships meant that strict norms applied. The size and shape of a given product, any deviation from the ideal form, the purity of the materials and all other features needed to fall within precisely defined boundaries. This meant that such characteristics were measured and quantified accurately, and that every step in the production process was described in detail.

Although to begin with it meant the loss of much skilled craftsmanship, quantification of this sort had its advantages in the long term. Laying everything down in protocols and norms made it possible to gradually improve procedures, and therefore in many cases more was ultimately possible than by the artisanal method. It was not so much a matter of understanding a phenomenon at a fundamental level as of making it manageable. The question was: Which factors need to be changed, and how, in order to reach a precisely quantified result? Ideally this would be described in a mathematical model.

The mathematical and quantifiable approach now caught on in the natural sciences too. Mathematics was no longer primarily a means of analysis. It had become a means of control. This applied even to fields in which up to this point little calculation had taken place, like biology and psychology. It was not sufficient to say that increased moisture in the environment had resulted in an increase in the numbers of a specific plant or insect.
Things of this sort had to be expressed in mathematical models, in which the increase was clearly quantified. Some researchers took a while to adjust to the change. Scientific knowledge had traditionally had a distinctly philosophical component; it was all about insight and understanding, not control of the outward phenomenon. Anyone with knowledge of the deeper laws of nature would automatically understand goings-on in the real world
better, not by means of precise calculation (a thing as commonplace as the weather was for a long time impossible to predict even a day in advance), but in a more old-fashioned, intuitive manner.

The tendency to describe the world as mathematically controllable has grown strongly over recent decades as a result of a new mathematical instrument: computer simulation. The electronic computer was introduced in the second half of the twentieth century, its development having taken place mainly in the defence industries because of its usefulness for military purposes. Later it was used elsewhere too, at first mainly for the processing of administrative data. In scientific research the computer began as an additional aid in carrying out familiar tasks, such as the performance of calculations or the plotting of graphs. It therefore had to compete with other aids, like analogue computers. From about 1970 onwards, however, the introduction of the integrated circuit (chip) brought such progress in the memory capacity and calculating speed of the digital computer that countless new opportunities arose.

Complex systems that until then had been impervious to analysis were suddenly subjected to computation by means of numerical data processing. A complex system was divided up into elementary steps that were analysed one by one. In most cases this did not produce a precise result but one that was close to it, or at least such was the intention. For the evaluation of the movements of a system of particles that influence each other (something that cannot be done analytically in the case of a system of more than two particles), the movements of each particle were calculated one by one for a specific moment. Based on that outcome, their positions could be calculated for a next moment, one second later, for example. This gave an approximation; in reality the movement of each particle changes continually.
Even during the very shortest of time intervals, the speed does not remain constant. If the time interval chosen was short enough, however, then the deviation was not large. The entire procedure was then repeated for the new constellation, and so on. In this case the elementary laws were known and an attempt was made to predict the behaviour of the system on that basis. The reverse is also possible: to make a model that corresponds as precisely as possible with the known behaviour of an existing system and by that means discover the fundamental laws by which it is governed.
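The stepwise procedure just described can be sketched in a few lines of code. The following is an illustrative toy, not any historical program: two bodies attracting each other under an inverse-square force, with invented masses and starting values, and the force constant simply set to 1.

```python
# Illustrative sketch of the stepwise method described above: advance each
# particle by a short time interval as if its velocity were constant, then
# recompute the forces and repeat for the new configuration.
# All numbers here (masses, positions, G = 1.0) are made up for the example.

def step(positions, velocities, masses, dt, G=1.0):
    """Advance every particle by one short time interval dt (forward Euler)."""
    n = len(masses)
    # Acceleration on each particle from all the others (inverse-square law).
    accels = []
    for i in range(n):
        ax = ay = 0.0
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = (dx * dx + dy * dy) ** 0.5
            ax += G * masses[j] * dx / r**3
            ay += G * masses[j] * dy / r**3
        accels.append((ax, ay))
    # Move each particle assuming constant velocity during dt, then update
    # the velocity -- exactly the approximation the text describes.
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, velocities)]
    new_vel = [(v[0] + a[0] * dt, v[1] + a[1] * dt)
               for v, a in zip(velocities, accels)]
    return new_pos, new_vel

# Two bodies: a heavy one at rest and a light one in a rough circular orbit.
pos = [(0.0, 0.0), (1.0, 0.0)]
vel = [(0.0, 0.0), (0.0, 1.0)]
mass = [1.0, 0.001]
for _ in range(1000):   # repeat the whole procedure for each new configuration
    pos, vel = step(pos, vel, mass, dt=0.01)
```

A shorter time step `dt` gives a smaller deviation, just as the text notes; later integration schemes refine this basic idea to reduce the accumulated error.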
With the help of models, biologists investigated the growth of living beings in this way. The computer presented even fundamental research with new possibilities. It suddenly became feasible to experiment with models of systems that could not be accessed directly in reality, such as the interior of stars, the evolution of the universe, the structure of atoms and molecules, or the long-term development of the climate or an ecological system.

As well as new opportunities to work through such problems by calculation, the computer also offered ways to analyse and present the results. The individual steps in the model were generally not captured in simple formulae but conjured up visually. Via the screen, the researcher could walk, as it were, through a three-dimensional model of the system being investigated, or look at it from different sides. The important parameters were shown using colours or in some other way. Of course, this demanded special expertise of a new kind, and thus time and money.

One of the pioneers of such methods was the American Robert Wilhelmson, who worked on sophisticated simulations of thunderstorms. In the 1980s he managed to attract attention with an animated film in which the airflows in a thundercloud were made visible. In doing so he collaborated with an image specialist, Matthew Arrot, with whom he led an entire group of other specialists. Since then, animations and graphic representations of this kind have become increasingly common. In many cases the necessary software is readily available for purchase.

The use of the computer has also had a major influence on the content of scientific research and has changed whole fields of study. One good example is meteorology. The starting point for descriptions and predictions of weather was the huge amount of data available from weather stations and later from satellites as well. Until the arrival of the computer, these data were made accessible by converting them into weather maps.
The reading of such maps demanded not just experience but insight into the underlying processes. A meteorologist did not work only with numbers but looked at the structure of the clouds, the clarity of the air and so forth. This experience enabled the meteorologist to develop a certain understanding of the phenomena in the atmosphere. With the introduction of the computer these traditional methods largely disappeared. Modern computers and data processing can store a
continual stream of data and provide a ‘snapshot’ of the state of the atmosphere at a given time. By entering data into models, weather maps of the situation in the future can be produced automatically. Modern meteorologists no longer look at the sky but instead at a screen. Rather than interpreting weather conditions, they develop computer models that describe as accurately as possible the dynamics of the atmosphere.

Beyond pure modelling, the new numerical methods sometimes led to substantive shifts. In biological classification, Linnaeus’s system was maintained for many years. Species were classified on the basis of similarities in structure, which needed to be spotted more or less intuitively. At the end of the twentieth century, however, it became customary to take similarities and differences in the genetic material as a starting point. These could be quantified precisely, using new methods of analysis. While comparative anatomical research always contained an element of subjectivity or tradition, genetic kinship could be expressed in concrete figures. It was therefore beneficial to use genetics as the basis for biological classification. This brought to light quite a few errors in existing classification, if mainly on points that were already disputed.

The new technique even contributed to the adoption of new basic principles. German entomologist Willi Hennig had proposed in 1950 that the plant and animal kingdoms should no longer be categorized according to similarities of structure but based on common ancestry. His vision, called ‘cladistics’, initially met with fierce resistance. But as genetic methods increasingly made his ideas seem not only philosophically sensible but practically feasible, cladistics gained ground. Linnaeus’s classification had survived the rise of Darwinism without difficulty, even though the foundations of ideas about the relationship between different species had changed radically.
Only the arrival of new technical possibilities brought about the introduction of a new system.
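The shift from intuitive comparison to concrete figures can be illustrated with a toy calculation. The species names and DNA fragments below are invented for the example; real classification works on aligned genomes with statistical models, but the underlying principle of counting differences is the same.

```python
# A toy illustration of quantified classification: pairwise differences
# between short, invented DNA fragments. Each kinship becomes a number
# rather than an intuitive judgement about similarity of structure.

sequences = {
    "species_A": "ACGTACGTAC",
    "species_B": "ACGTACGTTC",   # differs from A at one position
    "species_C": "ACGAACCTTC",   # more distant from both
}

def distance(s, t):
    """Number of positions at which two equal-length sequences differ."""
    return sum(1 for a, b in zip(s, t) if a != b)

names = sorted(sequences)
pairs = {(x, y): distance(sequences[x], sequences[y])
         for i, x in enumerate(names) for y in names[i + 1:]}

closest = min(pairs, key=pairs.get)   # the pair a classifier groups first
print(pairs)
print(closest)
```

Repeatedly grouping the closest pair and recomputing distances is, in essence, how simple distance-based classification trees are built from such figures.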

INDEPENDENCE UNDER PRESSURE

The academic ideal of knowledge as preached by Wilhelm von Humboldt was idealistic and comprehensive. The value of a scientific education lay not in technical, practically applicable knowledge but in
the fact that the student learned to think independently and scientifically. Scientists were not narrow specialists but cultivated people with broad interests and independent judgement. They sought pure knowledge, which stood outside place and time, as it were. The production of knowledge should not be shaped by social considerations. (In practice, of course, it most definitely was.) This model assumes that researchers are independent individuals who have more or less retained the aristocratic frame of mind of their predecessors. Although this image still abounds in ideas about science, in the twentieth century it became increasingly remote from reality.

Science was less and less about philosophical forms of insight. Control and predictability became central. Researchers studied more complex systems instead of fundamental laws, and this blurred the distinction between engineers and scientists. In complete contrast to the ideal of Bildung, scientists have increasingly become specialists in their field.

The big organizations often employed dozens of researchers, most of them students, postgraduates or interns. Over the course of the twentieth century, the typical research unit came to consist of a leader with a small staff and an ever-changing group of undergraduates, postgraduates and visiting scholars, all working within a larger institution. Research progressively became a matter of collaboration, whereby a problem was tackled by a team of theoreticians, instrument makers, computer programmers, experimenters and other specialists. Nowadays this kind of collaboration often stretches across different institutes or countries. It is not uncommon for a scientific article to be attributed to dozens of authors. In such a constellation it is not always easy to determine exactly what each person’s contribution has been. The hierarchical relationships within many institutions have often paved the way to abuses.
Many researchers worked in subordinate positions and the women among them in particular, irrespective of their qualifications, were not always taken seriously. Although the head of a laboratory or a research group was expected to determine the direction of the work and to supervise it, the contribution made by the leader’s subordinates could in practice be considerable. Even in the early twentieth century, attention was drawn to this problem. When Ivan Pavlov, head of the institute set up in Saint Petersburg
by Prince Oldenburgsky, was awarded the Nobel Prize for Medicine in 1904, there was at first some hesitation among Nobel Committee members. Was he truly responsible for all the work that had been carried out in his laboratory?

There was more and more specific support for special projects. Researchers were increasingly dependent on external funding from bodies that had aims of their own, into which the research they were financing needed to fit. The objectives of private citizens who provided funding were usually humanitarian or medical, while business and industry wanted research that would eventually benefit them financially, and projects supported by governments quite often had political or military goals. Such relationships meant that researchers did not have budgets they were free to spend on whatever they found important. They needed to make a convincing case that their plans suited the aims of their financiers, whether they be government bodies or private foundations. In the second half of the twentieth century especially, many governments set up special bodies to support science and research, and at the same time to control them. They wanted to see results. It became common for scientific projects to require prior approval based on a refined proposal, complete with a budget. This meant that the scientists involved lost part of their independence. The researcher became a cog in a larger machine.

The most crass examples of limitations on scientific freedom come from totalitarian states, which is hardly a surprise. One of the most frequently discussed cases of political influence occurred in the Soviet Union. From 1929 onwards, Ukrainian agronomist Trofim Denisovich Lysenko promoted a treatment for seeds of cultivated crops that he called ‘vernalization’. He claimed that if the seeds were exposed to the correct temperature and humidity, the crop could be harvested more quickly and therefore sown later.
Although the results of his treatment were suspect, Lysenko managed to gain the support of the Soviet leadership, including party leader Stalin and his successor Khrushchev. By skilfully trimming his sails to the wind, and with the help of a little good luck, Lysenko managed to acquire a powerful position in Soviet science. He presented his ‘vernalization’ as a consequence of superior Soviet science and accompanied this claim with fierce attacks on ‘bourgeois’ genetics, which was all the rage in the
West. His campaign was not confined to intellectual debates. One of the most prominent of Russian geneticists, Nikolai Vavilov, was sent to a labour camp, where he died, and in 1948 all education and research in conventional genetics was officially forbidden. Only after the fall of Khrushchev in 1964 was Lysenko discredited, and by that point biology in the Soviet Union had been set back by many years.

In Germany’s Third Reich, too, scientists were put under intolerable pressure. In the prevailing National Socialist ideology, a central place was reserved for the supposed superiority of the ‘Aryan race’ (which mainly meant the Germans), and the presumed inferiority of other human ‘races’, especially Jews, who were to be physically exterminated. The Nazis demanded that science legitimize these ideas. The entire history of humankind had to be rewritten so that the achievements of non-Aryan peoples or individuals were systematically erased. Jews, such as Einstein, had to be written out of the history of physics. Jewish scientists were persecuted and killed.

German scientists reacted in various ways to these demands. A few resisted on principle. Others tried to take advantage of the new political circumstances and supported the notion of the existence of a special ‘Aryan science’, in which they themselves, of course, hoped to play the leading role. Most, however, simply bent before the storm. By compromising and looking the other way when they saw abuses they felt powerless to prevent, they hoped at least to be able to preserve the core of their profession. After the downfall of the Third Reich in 1945, most quickly picked up the old threads again.

These examples are extreme, but even in more democratically governed countries scientists were increasingly subjected to guidance and control. In many places secrecy was imposed on them in the interests of business or national defence.
Scientists’ career opportunities quite often depended on their willingness to conform to the wishes of financiers or institutions. This could be problematic, especially if a certain outcome of their research was highly desirable for economic or other reasons. Ideological pressure groups or industrial interest groups often did not want to be thwarted by scientific facts. Some populist politicians or movements went so far as to make science and rationality in themselves suspect, but most people, even individuals with vested interests, recognized that in the long term nobody benefits from the undermining of science. In the nineteenth
century the protection of scientific integrity was above all part of the professional ethos of the individual scientist. When, in the course of the twentieth century, threats to researchers’ independence meant this offered little protection, the emphasis gradually came to lie more on formalized and institutional supervision. For certain types of research, especially in the medical sphere, strict research protocols were compiled. Laboratories that wanted to see their results acknowledged needed to be certified by an independent authority. Such a certificate stipulated that the laboratory was adequately equipped and that work was carried out there according to the rules of the game. Furthermore, strict norms were developed according to which scientists must evaluate each other’s work, a system known as peer review. Although these instruments were developed mainly to safeguard the independence and trustworthiness of science, it soon turned out, ironically, that they too presented an extremely useful way of steering and controlling researchers.

In this chapter we have focused rather less on scientific theories and discoveries than in earlier chapters. This is not because there were none. The problem is rather that in the twentieth century so much progress was made in a vast range of fields that it is impossible to give a general overview of it all in any detail. In many histories of science this problem is avoided by concentrating on the most important discoveries, which tend to be those that are fundamental to our view of the world or that lie at the basis of many other theories, even of entire fields of study. It seems reasonable to ask, however, whether such an overview could ever give an accurate picture of science in the twentieth century, which is characterized precisely by the fact that it has increasingly concerned itself with technical and practical problems. Researchers do not merely study fundamental laws that are abstracted from reality.
They try to explain the features and behaviours of complex systems that are found in our reality, and they do so not by first reducing them to fundamental laws but by capturing them in mathematical models. Of course, fundamental theoretical research is engaged in as well, focused on gaining a better understanding of nature, but this makes up only a small proportion of scientific work.


Scientists who have followed developments in the business of science over recent decades sometimes observe a gradual decline: scientific independence is under pressure, research is stymied by bureaucracy and regulation, and the scientific spirit can no longer range free. The problem is that you can express this kind of value judgement only if you have a fixed benchmark. The traditional way science is evaluated is by reference to the scientific ethos as developed in the nineteenth-century universities. It was then that the prevailing social image of science was formed, as was the self-image of the researcher as it still largely exists to this day, as an ideal.

To the extent that people have pointed to the loss of this nineteenth-century ideal, they are right. That form of science has gone, if it ever truly existed, and it is not coming back. This book is an historical overview, not a work of cultural criticism. To what degree the tendencies identified are to be judged as positive or negative is a question we must all answer for ourselves.

But however much technical and pragmatic considerations have the upper hand in contemporary science, it would be incorrect to suggest that science has degenerated into a purely technical undertaking, dominated by the power of big money. Science still has cultural value, and much research is carried out that has as its main aim to provide us with insight. It was precisely in the twentieth century that our view of the world was increasingly influenced, directly or indirectly, by scientific knowledge. And that development was itself influenced to an important extent by the ‘great discoveries’ to which the third and last part of this book will be largely devoted.


PART III THE SCIENTIFIC WORLDVIEW

Measuring, classifying, calculating – is science anything more than that? In 1862 the famous astronomer Urbain Le Verrier, head of the Paris observatory, found reason to complain about one of his underlings, nineteen-year-old apprentice astronomer Camille Flammarion, who had been so bold as to publish a book in his free time. In it he argued that there must be other inhabited worlds in the universe. In Le Verrier’s view, Flammarion had thereby disqualified himself; he was not an apprentice astronomer but an apprentice poet. He summarily dismissed the young man. Many years later, Flammarion still thought back with some bitterness to the atmosphere of the Paris observatory:

Many of the astronomers there had never looked through a telescope in their lives. Nobody was interested in contemplating the heavens, nobody wondered what those other worlds were, nobody travelled in their mind through the infinite expanses above us … Like first-class office clerks and attentive arithmeticians, they saw nothing out there except columns of figures.

Camille Flammarion, Mémoires biographiques et philosophiques d’un astronome (Paris: Ernest Flammarion, [1912]), p. 154.

DOI: 10.4324/9781003323181-11


As I argued in Part II of this book, scientific methods were developed with an eye to the exploration (and domination) of the world, the rationalization of the state and the modernization of industry. Measuring and counting, improvements to instruments, classification and calculation were at the heart of the business of science, and there can be little doubt that a majority of researchers did not look much further than that. They were interested in the intellectual puzzles they engaged with, in their status and position within the profession or within society at large, and naturally in their own salaries. They did not concern themselves with fundamental questions. Aside from that, though, for some people science has always opened a window on an existence above and beyond their everyday lives. ‘Poets’ such as Flammarion had their place in science as people who were spurred on by wonderment and who hoped that research would provide a deeper insight into reality. Kosmos was the title Alexander von Humboldt gave to his most important book, and the nineteenth-century zoologist Ernst Haeckel called one of his books Die Welträtsel (The Riddle of the Universe). Down through history, the longing for greater insight has been an important motivation for many, especially young people, to devote themselves to science, or to support it. It has inspired them to leave well-trodden paths and go off in search of new horizons. Not that such enthusiasm always produces results. Flammarion never made a major discovery. Le Verrier, by contrast, with his calculations based on perturbations in the orbits of the known planets, deduced the existence of a still unknown planet and thereby became recognized as the discoverer of Neptune. In a different way, however, Flammarion was of great significance to science. He became world famous through his writing, in which he gave full rein to his astonishment at and love of science. He reached a huge readership.
Important researchers later testified that their childhood decision to pursue a scientific career was inspired in part by reading Flammarion’s books. So, when we talk of the influence of science on modern society, we do not merely mean that we can now work in more standardized ways or produce substances of greater purity, although that too is of great importance. It also means that our entire perspective on the
world has changed. Religious and philosophical ideas have been subjected to the influence of developments in science. These days we even talk about a scientific worldview. How can it be that an activity with such a practical focus has been so significant for human thought in general? Ironically, it is precisely because science concentrates on small, solvable puzzles rather than on major questions that it is capable of having this degree of influence. The practical orientation of modern science actually represents an extremely radical development. Of course, there has been a large field of practical knowledge and skills since prehistoric times. What is new about modern science is not that it seeks practical solutions on an ad hoc basis but that in doing so it pays no attention any longer (ideally at least) to considerations of a moral, metaphysical, political or religious nature. In other words, modern investigations of nature represent an approach to the world in which aspects not accessible to logic or observation are disqualified. Modern science occupies itself only with what is countable and measurable. This turns practical, banal knowledge into an independent domain. In earlier centuries too, people had no doubt of the value of logical arguments and clear-cut proof. However, in everyday practice, logic and evidence often played, and indeed often still play, second fiddle to emotions (such as fear or anger), interests, loyalties and intuitive certainties. People see reality in its entirety, not as a purely logical puzzle. Reality is not only about what is countable and measurable but about morality, aspiration and meaning. In normal situations our experience of reality is an intricate whole made up of fact and conjecture, of what is and what ought to be, and these cannot be disentangled. Acting according to familiar patterns is therefore often a more serviceable way to get a grip on a situation than logically impeccable reasoning. 
Until the seventeenth century, only a vague distinction was made in the study of reality between elements that belonged under the heading of physics and those that were, for example, elements of religion. Mediaeval seafarers, farmers, craftspeople, master builders and soldiers were dependent for their continued existence on all kinds of insights of practical application. Such knowledge was based,
however, on traditions and hearsay, and mixed with religious or magical representations, passed down by parents and teachers whom you were not permitted to criticize. It was perhaps of use in practice, it might even be correct, but it was not ‘scientific’. Nor was mediaeval natural philosophy primarily about the practical aspects of nature; it was part of philosophy as a whole. Nature was a resource that people wanted to know better, but it was also a God-given order. There was of course an interest in causal connections, but they were inseparable from their philosophical significance. In science as it has taken shape since the seventeenth century, nature is set apart from daily experience and regarded purely as a logical puzzle. In the final analysis, only arguments based on simple logic and checkable facts can be decisive. That is not to say that individual researchers, or groups of researchers, might not have their own secret objectives, personal quirks or hidden agendas, often without being aware of it themselves. But to gain legitimacy within science, their ideas need to have a rational underpinning and to hold up under professional scrutiny by their colleagues. By concentrating on practical aspects, science became an independent domain, divorced from the rest of society. From that independent position it could exert its influence. Scientific thinking brought about a separation between what is countable and measurable and questions that cannot be answered objectively: questions of a moral nature, questions about invisible things, questions about purpose and meaning. But that separation was not absolute. As we have seen, in daily life and in traditional philosophy, moral and metaphysical issues were not kept strictly apart from practical knowledge and factual questions. Moral and social values were linked to and legitimized by ideas about the nature of reality, for example about the origin of the world and of the human species.
This gave accepted ideas about the great questions of life a factual dimension that was accessible to science. In theory, religious or philosophical interests had no influence on the results of scientific research. But the outcome of that research could certainly be relevant to philosophy or religion, or to some other field. The fact that investigation of the natural world had consequences for the great questions of life contributed to the creation of the modern image of the scientific endeavour. The work of physicists,
mathematicians and other researchers could throw light on subjects of a fundamental nature that philosophers and theologians had debated futilely for centuries, and this created the idea that science was not just a way to solve practical and everyday problems, it was itself a kind of higher knowledge. It was therefore able to claim to some degree the status previously accorded to religion and philosophy. Flammarion’s enthusiasm, then, was not so unrealistic as it seems. Not only have similar ideas inspired many important researchers, their research has repeatedly thrown a fresh light on questions long thought unanswerable. In doing so it has become a factor that any philosophy needs to take into account, and as a result, science has had a lasting influence on our worldview. In what follows I want to place three great questions at the centre and see how our view of the world with respect to them has been changed by scientific research. These are the origin of the world, the origin of life and of humankind, and the nature of reality. Such discussions are inseparable from more everyday aspects of science, incidentally. Research that throws light on them is quite often focused on more practical aims, and researchers are rarely satisfied with a purely philosophical insight.


8

THE ORIGIN OF THE WORLD

THE BIBLE AND THE NEW IMAGE OF THE WORLD

In Europe the question of how the world originated was traditionally bound up with an interpretation of the Bible as an account of God’s redemption. Essential to the Christian vision is the notion that the world has a history. It has an origin (called the Creation) that is localized in time, and in the End Times heaven and earth will pass away. Looking into the future was perilous, and indeed a threat to vested interests, but the history of the world was a legitimate and respected subject. At a very early stage, scholars attempted to make the clues in the Bible about the origin of the world more explicit, and to supplement them from other sources. The secular sciences were developed and deployed for this purpose. At first the discussion remained at the level of historical studies and hermeneutics. Since there was a prevailing idea that the world could not be much older than human civilization, the obvious course was to reconstruct its prehistory based on written sources. By combining biblical accounts with astronomical data, lists of kings, facts about calendars and a whole range of Greek, Egyptian and later also Chinese historical writings, scholars attempted to determine the precise time of creation. Frenchman Joseph Justus Scaliger, who became a professor
in Leiden in 1593, did more than anyone to make this kind of research scientific, but the genre continued to flourish until well into the eighteenth century. The ‘new philosophy’ of the seventeenth century created a new vision of the world, which was now conceived as an unbounded universe in which the stars were so many suns and the earth merely one of many planets. The question inevitably arose as to the extent to which the new universe could accommodate the old beliefs. For some time, there was fierce resistance to the Copernican system. Several prominent theologians rejected the new ideas on biblical grounds and clung to the old notion of a stationary earth at the centre of the universe. By around the middle of the eighteenth century, however, this resistance had lost much of its strength. Most educated people accepted the new discoveries and insights, although without wanting their traditional beliefs to be eroded. They tried in one way or another to reconcile the two. Existing metaphysical and religious convictions were therefore linked with the new ideas in physics. This produced what can sometimes look to us like rather hybrid constructions. There was speculation about the location of hell, which had traditionally been imagined as somewhere deep in the earth but which had no place in philosophical ideas. Traditional philosophers took their lead from Aristotle, and the concept of hell did not exist in his thinking. When from the seventeenth century onwards a new image of the earth arose, there was an opportunity to fit hell into it. Volcanoes came to be seen as proof of the existence of huge underground fires. Catholic writers, in particular, regarded that as the place where the souls of the damned were tormented. 
English writer Tobias Swinden, by contrast, believed that hell was to be found in the sun, a hot and fiery body at the centre of the universe (where traditionally the earth, and therefore hell, was imagined to be) and big enough for all damned souls to be fitted into it. The question of how the world came into being was also addressed using a mixture of biblical and natural philosophy. It was no longer a matter of history; instead, the issue acquired cosmic dimensions. Descartes posited that the earth had once been a star. Others believed that in the End Times the earth would become a star once again and serve as the home of the chosen. In 1684,
Englishman Thomas Burnet published a theory in which he explained the biblical Flood based on geological processes that operated on all the planets. Others defended the notion that the Flood was caused by a collision between the earth and a comet. Ideas of this sort, in which biblical miracles were given natural explanations, were regarded as too extreme by most people. Generally, the Flood continued to be seen as a supernatural event. But even as such it was a recurring theme in eighteenth-century science. This expressed itself most clearly in the study of fossils. While fossils had tended to be regarded as freaks of nature, in 1695 Englishman John Woodward defended the idea that they were the remains of animals that had drowned in the Flood. Rather than manifestations of a hidden force of nature, they were vestiges that provided clues about the history of the earth. So scientific research did not take the place of the old biblical beliefs. Almost everyone assumed that what was written in the Bible was true. When science presented new facts, people tried somehow or other to give them a place within that familiar framework. This might eventually lead to new interpretations. All in all, the influence of science on the generally accepted worldview was limited. Over the course of the eighteenth and nineteenth centuries, the link between scientific research and biblical history steadily loosened. Questions about the location or indeed the existence of hell, for example, disappeared from the picture unnoticed. This was not because of new discoveries or theories. The reasons for the shift lay in the sphere of philosophy, religion and politics. People were looking at the old material with fresh eyes. In France especially, the powerful Catholic Church had to endure fierce attacks for political reasons.
Its enemies consciously went in search of an alternative philosophy, which would naturally have to be secular and unrelated to the moral and political values propagated by the Church. They increasingly drew upon the findings of natural science. Research needed to legitimize an alternative outlook on life. This contradistinction between a religious and a scientific-philosophical vision of the world was pretty much confined to France in the eighteenth century, but in the nineteenth century it became a European phenomenon and the conflicts it produced could sometimes be bitter.

In their battle against the claims of the Church, the philosophers tried to create an all-encompassing cosmogony, based on scientific principles and often in deliberate contrast to the biblical story of creation. Such ‘theories of the earth’, which usually tied in with the theories of Descartes, were attempts to explain the origins of the world based on astronomy, physics or chemistry. Particularly memorable is the nebular hypothesis of the French astronomer and mathematician Pierre-Simon de Laplace, first published in 1796. Laplace assumed that the solar system had originated in its current form from a condensed cloud of gas that had once been the atmosphere of the sun. This could explain most of the characteristics of the solar system, such as the fact that all the planets move around the sun in practically circular orbits, in the same direction, and more or less in the same plane. Laplace formulated his hypothesis not as a scientific fact but as a speculative aside in one of his works aimed at a broad public. Whether you could actually call such a thing science was a thorny issue even in his own day. The issue of the origin of the universe was regarded in those years mainly as a philosophical and theological question. German philosopher Immanuel Kant had likewise put forward the idea of a primordial nebula, in a way less thorough but much more ambitious than anything found in Laplace. The theory acquired authority, however, mainly because it was propagated by a scientist of Laplace’s standing. When dealing with philosophical issues, people still wanted to take account of the results of scientific research. Philosophers based their ideas on the scientific knowledge available, though they felt at liberty to supplement it to create a complete system, one that was compatible with the basic assumptions of natural philosophy. In retrospect, such hypotheses were usually mere inventions.
Nonetheless, they did give some direction to research. They expressly sought the significance of scientific discoveries in a broader context and kept alive the awareness that there was something here that needed to be explained. Anyone who was not satisfied by such speculative hypotheses and wanted more certainty needed to go in search of concrete clues, which is to say of remains from the past that would enable it to be reconstructed. Such remains were indeed identified, but at the same time the terrain was increasingly subdivided. Although philosophers
initially saw no difficulty with tackling the origin of the universe, the earth and humankind simultaneously, these ultimately turned out to be matters that each required their own specific treatment and that led to very different answers.

THE DEVELOPMENT OF GEOLOGY

The first field in which agreement was reached regarding the interpretation of the facts was that of the history of the earth. Clues were provided by the different types of rock and the shape of the earth’s surface. Interest in these phenomena was aroused by the development of mineralogy, as a consequence of the introduction of a more scientific approach to mining techniques from the eighteenth century onwards. It was a field that encompassed the natural history of rocks and minerals, and it developed instruments for recognizing and classifying them. In 1765, in the Auvergne in France, mineralogist Nicolas Desmarest discovered deposits that he recognized as volcanic rock. It seemed that here, long before humans walked the earth, there had been volcanic eruptions. Elsewhere in Europe too, including the Rhineland, extinct volcanoes were discovered. The landscape must therefore be far older than human habitation and have changed hugely over time. The mineralogists were developing an eye for the way in which the earth was formed, and in the period around 1800, mineralogy gave birth to geology. As well as volcanic rocks, geologists found rocks that had clearly been formed by marine deposits, resulting in sediments that must have been laid down over a very long period of time. In some cases it became clear that these had subsequently been lifted above sea level, sometimes tilted, eroded and then later covered in fresh layers of rock. The researchers deduced from this that the earth’s crust was not only extremely ancient but must have undergone all kinds of changes in a distant past. What had caused these changes was less clear. The new discoveries therefore immediately prompted new all-embracing hypotheses with a philosophical or religious purport. Abraham Gottlob Werner, head of the Freiberg mining school, defended a theory according to which the entire crust of the earth was formed by deposits in water.
The Scot James Hutton, by contrast, one of the
most important geological researchers of his day, defended a cyclical idea, in which the earth’s crust was periodically rejuvenated by eruptions of the planet’s internal fire. Half a century later, Charles Lyell, also a Scot, formulated a basic geological principle best described as the principle of uniformity. In the distant past the only forces to operate on the earth’s crust were those still observed in the present day, such as erosion, earthquakes and volcanoes. He excluded the possibility of sudden complete upheavals. This gave him an opportunity to estimate the age of the different layers of rock. Earlier philosophical speculations, such as the nebular hypothesis, had provided a framework within which to understand the origin of the world, but they offered no clues as to its precise chronology. Lyell posited that processes of deposition, erosion and so on, which had formed the earth’s crust in the past, happened at the same rate then as they do now. He therefore decided to measure how quickly they currently occur. From various indications he was able to determine the extent to which the coastline at Naples had risen since antiquity. An analysis of the structure and activity of Etna on Sicily, which is more than three kilometres tall, led him to conclude that over the past twelve thousand years, material thrown out by the volcano had added no more than a few hundred feet to its height. Assuming that the entire mountain was formed in the same way, it must be many times older than that. By determining how processes like sedimentation and erosion occurred, it was possible to estimate, if only roughly, the length of time that had been required to form the layers of the earth’s crust. In the first half of the nineteenth century, serious efforts were made to chart the earth’s strata. This was precision work that involved measuring, comparing and classifying. The researchers who carried it out were mostly British. 
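The extrapolation Lyell applied to Etna can be sketched in a few lines of Python. The figures below are purely illustrative, chosen only to echo the ‘few hundred feet in twelve thousand years’ reasoning above; they are not Lyell’s own data:

```python
def uniformitarian_age(total_size, recent_growth, recent_period_years):
    """Lyell-style estimate: assume a feature has always grown at the
    rate observed in the recent, datable past, and divide its total
    size by that rate to obtain a minimum age."""
    rate_per_year = recent_growth / recent_period_years
    return total_size / rate_per_year

# Suppose (illustratively) that eruptions added about 300 feet of
# height over the last 12,000 years, and that the mountain stands
# about 11,000 feet high:
print(round(uniformitarian_age(11_000, 300, 12_000)))
# -> 440000
```

The point is not the particular figure but the method: a measurable present-day rate turns a static landscape into a rough chronometer.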
They put the layers in order by age, based on which they divided the earth’s past into a number of eras, which were then given names including Silurian, Devonian and Carboniferous. This classification was adopted internationally and over time it was further refined, then eventually standardized. The first international geological conference was held in Paris in 1878, and issues of classification and nomenclature were at the centre of attention. Research on the earth’s strata made clear that the planet must have existed more or less in its present form for a very long time. It
was partly based on this idea that Charles Darwin put forward his theory of evolution in 1859. He assumed that species of animals and plants had diverged through a process of extremely gradual change. For that to happen, an immense span of time was required, but after examining the research by Lyell and others he decided that was no obstacle. Less than three years later, however, geology came under fire on precisely that point. In 1862 the authoritative British physicist William Thomson (later ennobled as Lord Kelvin) made a concerted attempt to calculate the age of the earth by drawing upon the principles of physics. Thomson put forward a dynamic model; the laws of nature had always been the same, but that did not mean conditions on earth had never changed. Measurements showed that our planet was losing heat. This meant it had once been hotter than it was now and was currently cooling down. If you could discover its initial heat and its speed of cooling, you could calculate its age. Such ideas had been proposed in the context of philosophical speculation, but Thomson was the first to work them out using advanced theories of thermodynamics. He calculated that the earth had turned from a liquid into a solid around 98 million years ago. By following a similar line of reasoning, he estimated the age of the sun at no more than a few hundred million years. Thomson later attempted to refine his method. Others too came up with ages based on estimates of the changeability of certain features of the earth, such as the salt content of the oceans or the influence of tidal forces on the movements of the earth and the moon. Much of this research was intended to counter evolutionary theory, and most of the researchers came to results lower than Thomson’s, in the order of no more than a few tens of millions of years. The geologists were less than happy with this. 
Assuming that the present crust of the earth was formed by long processes of sedimentation and erosion, they needed far longer periods. It was not until the twentieth century that the impasse was resolved. It was then that radioactivity was discovered – some atomic nuclei are unstable and therefore decay, emitting radiant energy as they do so. For a start, this overturned Thomson’s calculations: radioactivity in the earth was an additional source of heat, so the earth was cooling far more slowly than Thomson had concluded. It
was a similar story with calculations relating to the sun, which also derives its energy largely from nuclear processes that were only then being discovered. Radioactivity also provided a means of repeating the calculations from scratch. The loss of heat by the earth was not an accurate gauge, but the speed of decay of atomic nuclei turned out to be determinable with great accuracy. Each type of radioactive atom has what is known as a half-life, the time it takes for fifty per cent of the nuclei to decay into known fragments (which often then fall apart themselves). For some types of nuclei the half-life is very short (less than one second), for others very long, in some cases billions of years. This means that from the numerical ratios of the various atoms found in rocks, it is possible to determine when they were formed. All the processes by which the age of the earth had been estimated up to then (cooling, but also erosion or sedimentation) had a timeline that was either difficult to determine or erratic. The speed of atomic processes, by contrast, could be measured precisely. This provided for the first time a reliable (if difficult to read) ‘clock’ that could measure the age of the earth, or parts of the earth’s crust. So in the second half of the twentieth century, it became possible to measure the earth’s age fairly precisely (around four and a half billion years), but this was not the result of a single discovery or the work of one particular scientist. Countless researchers and research groups made a contribution. First of all, they needed to discover which radioactive processes had left traces in rocks and which of those were suitable for use in determining age. They had to invent methods of isolating and measuring the elements involved. Then samples needed to be taken at countless places on earth and investigated. This radiometric research soon became a specialism in its own right, conducted more by physicists than by geologists.
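The arithmetic behind this ‘clock’ is simple enough to sketch. In the idealized textbook case, where a rock contained no daughter atoms when it solidified and has remained a closed system since, its age follows directly from the measured daughter-to-parent ratio; the sample counts below are illustrative only:

```python
import math

def radiometric_age(half_life_years, parent_atoms, daughter_atoms):
    """Age of a sample from the ratio of daughter to parent atoms.

    Each half-life, half of the remaining parent atoms decay, so
    parent = original * (1/2) ** (age / half_life). The daughter atoms
    are the missing parents, which rearranges to
    age = half_life * log2(1 + daughters / parents).
    """
    return half_life_years * math.log2(1 + daughter_atoms / parent_atoms)

# Uranium-238 decays (via a chain of short-lived intermediates) to
# lead-206 with a half-life of about 4.47 billion years. A sample with
# equal numbers of parent and daughter atoms is one half-life old:
print(radiometric_age(4.47e9, parent_atoms=1000, daughter_atoms=1000))
# -> 4470000000.0
```

A ratio of three daughters per parent would indicate two half-lives, and so on; real radiometric work must in addition correct for daughter atoms present at formation and for any gain or loss of material since.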
It was above all the rise of geology that threw light on the age of the earth, but this did not mean that a majority of geologists focused on such issues. In practice geology was engaged in as part of the hunt for mineral resources, rather than to answer philosophical questions. The history of the earth nevertheless provided geologists with a theoretical framework within which to understand geological phenomena. As a result, questions about the age and origin of the earth received a great deal of attention within geology.
The issue of the correct dating was only one part of this endeavour. Of course, geologists gladly made use of the results achieved by physicists, but their own research developed in a completely different direction. The most important theoretical work that united geology was provided not by the search for the correct age but by the theory of continental drift put forward by the German Alfred Wegener. Wegener developed his theory around 1911. Based on a large body of stratigraphic, biogeographical and other studies, he claimed that the continents had once been joined together and had drifted apart over time. At first the idea was not taken seriously. Wegener was unable to present any credible mechanism by which the continents could move, and practically all researchers found that objection impossible to overcome. But after the Second World War, research on the ocean floor provided a breakthrough. The continents did not move across the seabed as Wegener had believed; the seabed moved too. This led to the theory that the earth’s crust is divided into plates that shift very slowly in relation to one another, floating as it were on more fluid material deeper in the earth. All kinds of phenomena that had been poorly understood until then – volcanoes, earthquakes, the formation of mountains, the distribution of oceans and continents – became comprehensible on the basis of this theory.

THE ORIGIN OF THE UNIVERSE

Geology focuses purely on processes that are played out in and on the earth. It offers no definitive answer as to how the earth itself came into being. The geologists had now provided a better insight into the age of the earth, but the age and origin of the universe remained shrouded in mystery. Until far into the nineteenth century, this remained the subject of philosophical speculation. Since ancient times professional astronomers had concentrated mainly on determining the positions of planets and comets and calculating their movements. In the eighteenth century they studied these as applications of Newtonian mechanics. Attention to the universe as such arose mainly thanks to a relative outsider, Friedrich Wilhelm Herschel – or William Herschel as he is usually called, since he emigrated from Germany to
England at an early age. Initially Herschel made a living as a composer, but from 1773 onwards he applied himself to astronomical observations. He did not know very much about the calculations and precise measurements of position that dominated astronomy in his day, but the telescopes he built were superior to any that existed at the time. He became the most famous astronomer of the eighteenth century. His best-known achievement came in 1781, when he observed a heavenly body that later turned out to be a previously unknown planet: Uranus. With his new powerful telescopes and his open-minded attitude, Herschel studied the stars and the depths of the universe. In doing so he brought any number of remarkable features to light that previous astronomers had failed to notice, or had ignored. He discovered the existence of binary stars, for instance. He also took special notice of nebulae. Back in the seventeenth century, astronomers had discovered that some stars appeared not as points of light but as hazy flecks. They had never paid much attention to them. Herschel did. When he methodically charted these nebulae, there turned out to be thousands of them. In response to Herschel’s discoveries, researchers and philosophers started to look systematically at the question of how the universe was constructed. They focused on the nebulae in particular, probably because of the existing nebular hypotheses of Laplace and others. Their significance was at first far from clear, however. Until the early twentieth century, there were two prevailing beliefs about the universe, roughly speaking. According to one idea, nebulae were structures at a great distance that were comparable to the Milky Way galaxy, that great disc of stars of which our sun is one. The universe was therefore relatively big and empty and our galaxy relatively small. According to the other idea, the Milky Way was the most important structure in the universe, which it largely filled. 
Nebulae were therefore relatively small and close by. In the long run the focus on such cosmological questions changed astronomy as a science profoundly. The technical issue that became central was how the distance between us and the stars and nebulae could be determined. Parallax measurement made it possible to determine the distance to only a relatively tiny number of bright stars. For the vast majority of stars, the parallax was still too small to

THE ORIGIN OF THE WORLD

measure. Another option was to calculate the distance to an object by measuring its light intensity. The further away an object is, the less its apparent brightness. Measuring the brightness of stars with the aid of photographic techniques became a specialism in its own right in the nineteenth century. But quite apart from the technical difficulty of making such measurements, there was a fundamental problem with this method: it works only if the absolute brightness of the object is known, or is in a known proportion to the brightness of an object for which the distance is known. The question was, were there classes of objects whose absolute brightness was comparable? The first thing needed in finding an answer was a meticulous classification of the stars and other objects in the heavens. An initial result was achieved by American astronomer Henrietta Leavitt, head of the photometry department of the Harvard College Observatory. She was researching variable stars, which is to say stars whose brightness as seen from earth continually changes. By concentrating on stars in a particular cluster (the Small Magellanic Cloud), all of which could be assumed to be at more or less the same distance from earth, she found a relationship between maximum brightness and the period of variation for a certain class of variable stars (the Cepheids). This opened up an opportunity to determine their absolute brightness relative to one another and therefore their distance from earth by comparison to the known distance to one star among them. Not all problems were thereby solved at one blow, since the stars involved are fairly rare, but later such relationships were found between other classes of stars as well. From the 1860s onwards, spectroscopy was developed in astronomy. In combination with photographic techniques it presented new opportunities, first of all for classification.
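The logic of the brightness method can be made explicit. As an illustrative sketch in modern notation (not the notation of the period): an object of absolute luminosity $L$ has apparent brightness (flux) $F$ at distance $d$ given by the inverse-square law, so two objects known to have the same luminosity reveal their relative distances directly from their measured fluxes.

```latex
% Inverse-square law: apparent brightness falls off with the
% square of the distance
F = \frac{L}{4\pi d^{2}}
% For two objects with the same absolute luminosity L:
\frac{F_{1}}{F_{2}} = \frac{d_{2}^{2}}{d_{1}^{2}}
\quad\Longrightarrow\quad
d_{2} = d_{1}\sqrt{\frac{F_{1}}{F_{2}}}
```

This is why Leavitt's period–luminosity relation mattered: the period of a Cepheid fixes its absolute brightness, and a single Cepheid whose distance is known by some independent means then calibrates the distances of all the others.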
Research on the spectrum of light from a star or nebula allowed conclusions to be drawn about the nature of the object. Where such conclusions concerned brightness, distance could also be estimated. Just as with determining the age of the earth, this was not a matter of a sudden discovery but of a great deal of experimentation and testing by countless astronomers in many different places. Only gradually did the significance of the different observations become clear, and it was not until the 1920s that they were sufficiently unambiguous to show definitively that our galaxy is only one of many in an otherwise extremely big and empty universe.


THE SCIENTIFIC WORLDVIEW

Spectroscopy, combined with the determination of distance, also led to a completely unexpected discovery. The Doppler effect (a shift in the light spectrum as a result of relative movement) made it possible to measure the speed of stars or nebulae in relation to the earth. In 1929 Edwin Hubble discovered that galaxies move away from us faster the further from us they are. This led to the theory of the expanding universe, one of the most important discoveries of twentieth-century cosmology. It was logical to extrapolate back in time from this and to present the hypothesis that in the beginning the universe was concentrated at a single point, from which it then expanded. This theory of the big bang was at first quite controversial. Other theories about the nature of the universe were proposed. In 1965, however, background radiation in the universe was discovered that had earlier been predicted, based on the big bang theory, as a remnant of the earliest stage of the universe. In the years that followed, the big bang theory was accepted by the majority of researchers as a result. From the current rate of expansion and the laws of nature, it was possible to calculate when the big bang must have taken place. The most recent measurements suggest that the universe is 13.8 billion years old. But meanwhile this precise determination of age had become a matter of secondary importance. In the twentieth century much more interesting and fundamental questions arose about space, time and matter. The universe turned out to contain far more complex structures than anyone had ever dared dream of. After the Second World War, systematic attempts were made to search the universe with increasingly sophisticated equipment, not only for visible sources of light but for all kinds of other radiation: infrared, radio frequency and so on.
This led to a whole succession of discoveries of objects that had never previously been imagined, including quasars, pulsars and the like. Perhaps most famous among them were ‘black holes’. At first these were posited purely theoretically, but in 2019 the first image of a supermassive black hole was captured, or rather an image of the halo of light around it (a black hole itself is invisible by definition). Although attempts have been made to place all these phenomena in a unified theory about the evolution of the universe, such efforts still


come up against immense problems. The models used invariably demand a far greater amount of matter in the universe than can actually be observed. Cosmology is anything but a completed science. As is clear from the above, science may have managed to answer the traditional ‘big questions’, but those questions have rather faded from view in the process. On the one hand unceasing specialization has taken place within science, along with professionalization. This has meant that technical matters have become central and researchers are less and less inclined to look at their subject with the eyes of a philosopher. (Ironically, in many cases it proves possible to answer a philosophical question only after the philosophical approach is abandoned.) On the other hand, as research has gone on, an entirely new structure of reality has been discovered, so that new questions arise and answers to the old ones become less important. Questions about the exact age of the world arose in the context of the biblical story of creation. In a world of floating continents, an expanding universe and countless galaxies, not just researchers but philosophers have more pressing issues to address. The independence of cosmology and the emergence of its own research programme do not mean, however, that major philosophical issues have completely disappeared from sight. One central question in modern cosmological research is: Are we alone? In other words, does a form of life, even intelligent life, exist elsewhere in the universe? In the eighteenth and nineteenth centuries it was automatically assumed that the universe looks the same everywhere and must contain many inhabited worlds. Modern cosmological research shows that the universe is far less ‘human’ than once thought and features that characterize the earth are probably rather rare. 
The applicability of such knowledge is probably nil, but researchers who take concrete steps in the study of extraterrestrial life or extraterrestrial intelligence can count on immense interest and generous support. Cosmology is a science without practical use, which nevertheless has large sums of money made available to it, mainly because of the fundamental character of the questions it poses.
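The age of 13.8 billion years mentioned earlier in this section follows from the rate of expansion by a calculation that can be sketched roughly, using a modern approximate value of the Hubble constant (the figure is an assumption for illustration, not taken from the text):

```latex
% Hubble's relation: recession velocity is proportional to distance
v = H_{0}\,d
% Ignoring changes in the expansion rate over time, the time since
% all distances were zero is roughly the inverse of H_0:
t \approx \frac{1}{H_{0}}
\approx \frac{1}{70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}
\approx 1.4\times 10^{10}\ \text{years}
```

The refined figure of 13.8 billion years comes from cosmological models that track how the expansion rate has changed over the history of the universe; the simple estimate above happens to land close to it.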


9

THE NATURE OF LIFE AND THE ORIGIN OF HUMAN BEINGS

DOI: 10.4324/9781003323181-13

EARLY SCIENTIFIC IDEAS ABOUT HUMANKIND AND ITS PLACE IN THE WORLD

Since early times people have asked questions such as: Who are we? Where do we come from? What is life? What makes human beings different from animals? In Europe these questions were always answered with reference to the Bible and ancient philosophy. The Scientific Revolution of the seventeenth century did little to change that in the short term. In philosophical questions of this kind, theories about optics, mechanics or astronomy are not a great deal of use. In the eighteenth century some radical thinkers propagated a materialist view of our species, the notion of humans as machines. It was not very different from what some thinkers had claimed even in antiquity, and their ideas did not catch on. In his biological classification, Linnaeus placed humans in the animal kingdom, but in fact he was merely following Aristotle. It was generally assumed that humans were distinguished by having a soul, an immaterial principle, the seat of rational thought and moral action. The soul was outside the natural order and therefore beyond the reach of the natural sciences. Here too, change occurred in about 1800. Biological classification as compiled by Linnaeus inevitably raised the question as to what the


background could be to the similarities and differences between species. A natural order must exist within which all species had a meaningful place. And this order would, it was felt, have to be hierarchical in nature. Appeal was therefore made to the old philosophical idea of the ‘chain of being’, or ‘ladder of nature’, which meant that all things could be graded in a hierarchy, from low to high. At the bottom were the minerals, above them the plants, then the animals, and at the top human beings. Within the animal kingdom, mammals were higher than worms, and within mammals, apes were higher than mice. There were now attempts to make this idea into a theory that could be used to support biological research. With partial success. Botanist Antoine-Laurent de Jussieu used the ‘great chain of being’ as a starting point for his natural classification of flowering plants. Ranking in nature was no longer presented by means of purely philosophical arguments but with scientific arguments too, derived from anatomy and taxonomy. Not everybody agreed. Based on an extensive comparative study of the anatomy of different animal species, Frenchman Georges Cuvier tore to shreds the idea of an order of nature in the form of links in a chain with gradual transitions between them. Animals could not be arranged on a scale that followed a straight line. There were four groups that adhered to four quite different structural plans: vertebrates, molluscs, arthropods and ‘radiata’. Later they were taken up into Linnaeus’s system as ‘phyla’, a level of classification below kingdom and above class. These groups could not be traced back to each other, nor could they be placed in a hierarchy. Despite this criticism, most researchers clung to the notion of a universal hierarchical order. In retrospect we can conclude that they did so mainly for ideological reasons. Society was firmly attached to the idea of natural superiority and of humankind as the crowning point of creation. 
In the nineteenth century, the principle of ranking was even applied by some to people: Black Africans were at the bottom, and there were those who regarded them as representing the transition from apes to humans. The Chinese and Turks were one link in the chain higher, and at the very top were the white Europeans – or possibly the Classical Greeks. Nowadays we no longer regard this as science but as dangerous



nonsense. In the late eighteenth and early nineteenth centuries, however, such a hierarchy was seen by many as a scientific fact. Science, according to the majority of scholars, did indeed answer major questions here, but in doing so it followed a programme that was dictated by philosophical and ideological assumptions.

THE IDEA OF EVOLUTION

Not everyone was content with the notion of a static hierarchy, however. You can climb a ladder, some realized, and so they gave a new twist to that concept. In the early nineteenth century the idea arose of a step-by-step ascent, an evolution. This did not catch on at first in natural history. It was attractive mainly with a view to the study of human societies. Up to this point it had always automatically been assumed that society had changed little since the time of Creation, or even that what had happened since was primarily a decline. In the late eighteenth century, however, philosophers emerged who insisted that humankind was developing and aspired to perfection. This idea of a striving for perfection was then projected onto the entire natural world. Its most important champion was the French researcher Jean-Baptiste de Lamarck, who began his career as a botanist. Purely classifying and naming was too limiting for him. As he saw it, a scientist must offer a philosophical view of all of reality. In 1802, inspired by the speculative philosophical theories about the history of the earth that were in vogue in the eighteenth century, he presented a new theory of life. The phenomena of life, Lamarck said, could be explained based entirely on the general laws of nature. To him life was a way of organizing matter. Such an organizational urge was ingrained in nature, as it were, so any special creation of living beings, let alone a predetermined order, was out of the question. From dead matter, simple life forms emerged that could reproduce. Partly under pressure of circumstances, they developed certain functions and organs, which they then passed on to their offspring. So over time, organisms came into being that were increasingly complex. ‘Lower’ life forms gradually evolved into ‘higher’ life forms. Eventually, according to Lamarck, the human species arose from the process. All this was


mainly speculation in good eighteenth-century philosophical style, not the consequence of thorough research into natural history. Nonetheless, perhaps precisely because of its philosophical character, Lamarck’s work attracted a great deal of attention. For the time being, however, the idea of evolution was fruitful above all for the study of humans and their societies. The nineteenth century saw the rise of modern archaeology. From archaeological finds, scholars concluded that the ancient inhabitants of Europe were at the same level as the ‘savages’ of Africa and Australia. Danish antiquarians came up with a classification of prehistory based on an ‘evolution’ in the materials used. The oldest period was the Stone Age, when humans used only stone (along with wood and suchlike) as a material. It was followed by the Bronze Age and finally the Iron Age. In other words, in roughly the same period as knowledge of the earth developed into a specialism in its own right, known as geology, it proved possible to discover the history of humankind in ways other than those of philology or textual criticism. The study of primaeval times therefore increasingly split into different fields.

DARWIN’S CONTRIBUTION

Within natural history, ideas about evolution had little initial support among experts on plants and animals. Their factual basis was meagre. Rather like Laplace’s nebular hypothesis, evolution was defended mainly on philosophical grounds and was ultimately more an ideology than a science. It remained above all a favourite topic for certain philosophers. Most people did not take the theory very seriously and held fast to the notion that all species were created separately, which was more compatible with the teachings of the Church. This changed with the publication of Charles Darwin’s On the Origin of Species in 1859. In contrast to earlier thinkers such as Lamarck, Darwin rejected the idea that there was some kind of striving for organization or greater perfection at the root of evolution that could give it direction a priori. On the contrary, evolution was based on variations that were the result of pure chance and that made offspring different from their parents. In the struggle to survive, some variations resulted in a more successful outcome than



others. The individuals with the most favourable characteristics reproduced, while those with the most unfavourable characteristics had less chance of having a large number of offspring. As a result, the most favourable characteristics were passed down to subsequent generations while unfavourable ones were not. This mechanism became known, in a term taken from philosopher Herbert Spencer, as the ‘survival of the fittest’. A species, or its characteristics, did not change as a result of direction or control but because of selection, or the removal of forms less fitted to survive. This selection was carried out not by a benevolent divinity but by the cruel struggle for life. Before Darwin, few people took the idea of biological evolution seriously, but after his work was published nobody could avoid it any longer. Darwin was not a philosopher but a biologist to his fingertips. While theories such as those of Lamarck were above all philosophical constructs into which the facts needed to be fitted, Darwin’s work was based on in-depth scientific study. He was thoroughly schooled in the geological sciences and had published important work in that field, including a monograph on coral islands. He had also studied all facets of life. He was familiar with Cuvier’s comparative anatomy in every detail, but his interest went far further than pure anatomy. He published comprehensive and detailed studies about the pollination of plants, about the structure and taxonomy of barnacles, about insectivorous plants, about climbing plants and about the formation of humus by earthworms. He also recognized the importance of the experience built up by the breeders of plants and animals of his day and made a thorough study of their work. In short, when Darwin wrote about variation within species, similarities in anatomy or the development of the embryo, he knew what he was talking about. The exhaustive and meticulous biological basis of Darwin’s theories gave them authority. 
This does not alter the fact that the theory of evolution was more than a biological theory. It still derived its power of attraction to a large extent from its applicability to human societies. It was therefore soon being deployed in public debates. In human societies too, progress was said to be possible only through a struggle for survival, in which weaker or less well-adapted individuals, races or other groups inevitably met their end. This was music to the ears of some. In the nineteenth century, England had developed into a capitalist


society, where a few people garnered immense wealth and many others suffered appalling poverty. The new rulers praised the virtues of free competition. Darwin’s theory seemed to provide a justification for this form of capitalism; nature itself laid down that the best would survive the struggle. It seemed that anyone who ended up at the bottom of society was simply unfit and so ideas derived from Darwin were used to justify the oppression of one group by another. This applied not only to social classes but also to races or national or ethnic groups. Colonial relationships, for example, were often justified in this way. Others were concerned about the fact that modern society created the conditions under which supposedly inferior groups, those who were physically or mentally ‘unfit’, could reproduce at will. They proposed correcting this by means of an active programme of eugenics. The application of Darwin’s theories to human society is known as social Darwinism, but in practice there seems to be no clear distinction between this and biological Darwinism; in the nineteenth century, evolution was applied to everything. Few realized the full consequences of the ‘survival of the fittest’, incidentally. Many people who spoke about evolution with reference to Darwin turned out in fact to believe implicitly in the inevitability of progress. The theory of evolution was more than purely a biological theory, and this was one important reason for resistance to it, although for that matter the same applied to the earlier theories of the first few years of the nineteenth century. Anyone who stood by the old virtues and values, believed in tradition and felt most at home in the fixed relationships of a hierarchical society would strongly object to the whole idea of evolution. In the eyes of many, Darwin had now made things even worse. 
They abhorred materialism and the moral nihilism of a ‘progress’ based purely on struggle and success, and they feared that his theory cut the ground from under religion and morality. The significance of Darwin’s theories does not lie merely in the fact that they dominated public debate for decades. Unlike those of Lamarck, his ideas also bore fruit within biology. Selection as the mechanism of evolution was a controversial concept in biology for many years, but the idea of a prolonged evolution in itself became generally accepted as a result of Darwin’s work. Over time Darwin’s



theory came to function as an umbrella concept that provided the life sciences with a coherent programme. Many separate phenomena that had not been fully comprehended until this point, or had been studied singly, made sense in the light of evolutionary theory. Darwin’s work was soon being applied in research into embryonal development. Some biologists believed that the development of the embryo of an individual repeated, as it were, that of the species. You could therefore study the genealogy of a species by following the development of an embryo in detail. Although this idea was too optimistic, embryonal development certainly did provide clues as to a creature’s lineage. More importantly, researchers now started to make an in-depth study of the mechanism by which embryos developed. Embryology had previously been mainly a descriptive science, but now it developed an important experimental tradition. Of course, most biological research derived its justification from more practical and less philosophical issues: the breeding of agricultural plant species, the question of how to combat insect pests and related topics. In these areas too, however, the theory of evolution provided a useful framework in the long term. While in the broader society a debate raged over the great philosophical questions, most biologists used Darwin’s theory to solve technical questions of limited range.

DESCENT

When Darwin launched his theory in 1859, there was still little concrete evidence of evolution. Geology had created an awareness of an immensely long primaeval era with innumerable changes to the earth’s surface. Indications had already been found, in the form of fossilized animal and plant remains, that in prehistoric times an entirely different flora and fauna had existed from that which we see today. But a clear succession of species that developed out of each other was not yet a picture easily derived from these clues. The publication of Darwin’s book prompted an extensive search for fossils, as an aid to reconstructing earlier life forms and arranging them in the form of a genealogy. In interpreting fossil remains, the taxonomy developed by Linnaeus and his successors was an essential aid. This was work that rarely spoke


to the imagination. Most fossils were of plants and lower animals, mainly sea creatures, and research into lineages mainly concentrated on these. For the general public, however, the image of prehistoric times was soon to be reshaped by several more spectacular large animals. Their remains had been found before Darwin’s time, but until the publication of his book they were isolated rarities. After 1859 they became tangible proof of the theory of evolution. Fieldwork in the United States, in particular, produced a great torrent of new species. In the popular imagination the primaeval era became a time of prehistoric monsters. Darwin and his theories could not have hoped for a better ambassador than the dinosaur, with a set of characteristics that spoke to the imagination. What spoke most vividly to the imagination, however, was the origin of human beings. Although Darwin himself commented upon it only after some hesitation, it was clear from the start that humans could not be off-limits in his theory. The assumption was tentatively reached that even the unique characteristics of human beings were not simply the consequence of an immaterial ‘soul’ created by God but, like all other characteristics, had emerged over time as a result of variation and selection. It seemed probable that humans had once come into being out of some kind of primate forebear. For many anthropologists, the search for the ‘missing link’ in the lineage from apes to humans became something like the quest for the holy grail. The first sensational result came in 1891–1892, when a Dutch doctor called Eugène Dubois, after a targeted campaign, found the remains of a primitive hominid on the island of Java. The find caused a sensation and spurred other researchers to launch similar searches. In the years that followed, finds were reported from China, South Africa and India. 
The oldest and most spectacular of them were made in East Africa, by husband-and-wife team Louis and Mary Leakey among others, who from 1960 onwards, after decades of research, found a number of fossils of very old hominids in the Olduvai Gorge in Tanzania. Africa was identified as the region where the human species originated, and it became a focus of the search for fossils of humans. At first the only way to reconstruct lineages was to look for anatomical similarities between fossils or between living organisms.



In the second half of the twentieth century (as we shall see), the development of genetics provided a new instrument. Organisms could now be compared not just by their physique but by the molecular structure inside the cells of their bodies. In many cases this made possible a far more precise determination of their relationship to each other. So human genetics helped to throw light on the questions surrounding the origin of humankind. All this led to the conclusion that more than five million years ago, somewhere in Africa, the hominid family had originated from an anthropoid ancestor, in other words from an ape. Several hundred thousand years ago, again in Africa, the modern human species, Homo sapiens as Linnaeus had named it, developed from these hominids, all the others of which have since died out. Here too, increasing knowledge meant that the single all-encompassing image fragmented. Modern insight no longer has any place for a linear descent or a clear ‘missing link’. The history of the hominids, as it has taken shape in contemporary research, consists of a confusing tangle of lines of descent. And whereas earlier thinkers mostly sketched a complete picture of human development, in which they readily connected anatomical structure (as concerned brain volume or upright posture) with intelligence, language, culture and lifestyle, modern researchers increasingly came to recognize that they could not say anything at all about these latter aspects. Research into the origins of humankind has always been able to count on immense interest. Every ancient skull newly dug up in Africa still receives worldwide attention in the media. The amount of time and money put into such research can hardly be justified in terms of its purely biological importance – after all, why would the ancestry of the human species be more interesting than that of, say, the whale?
The degree of fascination is attributable purely to the sensation of being on the trail of our own origins, so that we comprehend who we are. To know who we are and where we come from, we turn to scientific research.

THE MYSTERIES OF THE MIND

In another way too, humans have increasingly been examined through scientific eyes. The success of the theory of evolution was a


natural consequence of the intellectual climate of the second half of the nineteenth century. In that period people had a strong belief in progress, as I have described, and expectations of science were extremely high. Darwin’s work fitted into a new scientific conception of life in general. At the end of the nineteenth century, psychology came into being as a scientific discipline. Its founder was Wilhelm Wundt, who in 1879, as a professor in Leipzig in Germany, set up the first psychology laboratory. By measuring reaction times, among other things, he hoped to be able to analyse the elements that constituted attention, memory, association and so on. Although he did not himself wish to break with the more traditional philosophical-contemplative approach to the human mind, others preferred to focus on the more reductionist aspects of his work. Important in this connection is the rise of neurology. Given that the mind, as far as could be ascertained, resided in the brain, researchers hoped that an analysis of the operations of the brain and the nerves of which it was composed would teach them how the mind worked. The experiments of Russian physiologist Ivan Pavlov became famous. Pavlov investigated digestion and the way in which the nervous system was regulated, with dogs as his object of study. At first, he believed that the secretion of saliva and gastric juices was regulated based on a cerebral judgement, albeit unconscious. He spoke of ‘psychic reflexes’. On closer examination, however, it became clear there was no psychic process; the nerve impulses involved were purely physiological reactions. Pavlov therefore replaced the term ‘psychic reflex’ with ‘conditioned reflex’. For many people it was tempting to think that such reflexes might determine all other apparently cerebral acts as well. Pavlov lost interest in digestion and from 1903 onwards he mainly conducted fundamental research into the workings of the nervous system. 
Incidentally, the positivist approach did not of itself mean that all scientists denied the psychic character of the mind. As well as in measurements of nerves and brains, there was great interest in phenomena such as hypnosis. The most famous theory of the period around 1900 about the human mind, illustrative of the way that scientific and philosophical (some would say irrational) considerations intertwined, was that of Viennese psychiatrist Sigmund Freud.



In his theory a central place was reserved for unconscious experiences and desires. Freud sought access to these not through objective measurements but through less tangible processes such as dreams and associations. This did not deter him from presenting his theory as hard science. Others disputed that status. Freud had a small flock of devoted followers, especially among therapists, but the majority of researchers were unconvinced by his thinking, and remain so. All the same, some of Freud’s ideas became extremely well known and have had a great deal of influence on the general cultural climate. Psychology was above all a technical and practical field. It derived its raison d’être mainly from changes to education and business life around 1900, which created a desire for things such as career choice tests and aptitude tests for job applicants. But precisely the fact that human qualities were investigated by purely technical means implied that people were regarded more in terms of their objectively measurable functions. Theories like those of Freud can be seen in part as a reaction to increasing scientification. From the Second World War onwards, the programmable computer gave a completely fresh boost to thinking about the mind. In earlier times, attempts had been made to understand the workings of the brain in terms of mechanical clocks or hydraulic machines, but the electronic computer presented a model that could be subjected to experimentation. Theoreticians of the computer such as Englishman Alan Turing did not merely tackle the question of how the device could be applied, they also posed fundamental questions such as: What is the difference between a human brain and a computer? As computers became faster and more powerful, this question became increasingly urgent. A computer made it possible to build what came to be called neural networks, through which some aspects of human thinking could be imitated.
In recent years the term ‘artificial intelligence’ has been used increasingly to refer to the solution of all kinds of practical problems. This raises the question of what actually sets humans apart from machines.

THE NATURE OF LIFE AND THE ORIGIN OF HUMAN BEINGS

THE MECHANISM OF HEREDITY

Darwin's theories were fundamental and they guided further research, but they also had weaknesses. From the very start it was clear that his assumptions fell short on several essential points. In particular, it was unclear how variation between individuals of the same species arises and is passed on. As long as there was no satisfactory answer to that question, the theory of evolution lacked a firm footing. As a result of efforts to find one, research into heredity became central to biology.

Biologists were not alone in turning their attention to heredity. The entire late nineteenth century was fascinated by the concept. Novels, to take just one example, were packed with ideas about hereditary defects, degeneration and atavism (a throwback to an earlier stage). But biologists tried to grapple with the issue more precisely and more fundamentally. How did it come about that children resembled their parents, and how was it possible that over a large number of generations change nevertheless took place? The problem was not merely of theoretical importance. Knowledge about how the characteristics of organisms can be changed might be important for breeders of plants and livestock.

One of the most famous researchers in this field was the American Thomas Hunt Morgan, who in the early twentieth century undertook an extensive breeding programme with fruit flies to work out how inherited characteristics were passed on. His work involved literally millions of flies. By creating lineages of flies with precisely determined characteristics and crossbreeding them, Morgan was able to show that specific variable characteristics must be produced by hereditary material that was handed down unchanged. In the early twentieth century the term 'genes' was introduced for this material. Because individuals inherit half their genes from their fathers and half from their mothers, each individual has a unique combination of genes, which determines precisely which characteristics that individual exhibits.
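For the modern reader, the logic of this kind of inheritance, one allele drawn at random from each parent, can be sketched in a few lines of code. The snippet below is an illustration added here, not part of the historical account; the allele names 'A' and 'a' and the population size are arbitrary. It simulates a cross between two hybrid parents and recovers the classic 1:2:1 ratio of genotypes.

```python
import random

def cross(parent1, parent2, rng):
    """Each parent passes on one of its two alleles at random."""
    return "".join(sorted(rng.choice(parent1) + rng.choice(parent2)))

def offspring_frequencies(p1, p2, n, seed=0):
    """Simulate n offspring of a cross and return genotype frequencies."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        genotype = cross(p1, p2, rng)
        counts[genotype] = counts.get(genotype, 0) + 1
    return {g: c / n for g, c in counts.items()}

# Crossing two hybrids ('Aa' x 'Aa') gives roughly 1/4 AA, 1/2 Aa, 1/4 aa.
print(offspring_frequencies("Aa", "Aa", 100_000))
```

The point is not the code itself but the underlying idea Morgan's crosses made visible: discrete units of heredity recombine in fixed statistical ratios without blending.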
It was for many years unclear how these genes should be imagined, but the uncertainty in that regard only made the concept more useful as a theory. Eventually it turned out that genes were made of a substance called deoxyribonucleic acid, or DNA. It had been isolated from cell nuclei back in the mid-nineteenth century and it consisted of long strings of complex molecules, made up of building blocks called nucleotides. In the 1940s research on bacteria made it clear that DNA had an essential function in the passing on of inherited


characteristics. In 1953 the American James Watson and the Briton Francis Crick, working in Cambridge, determined that a DNA molecule has the appearance of a double helix, or two paired spirals. This discovery is one of the most famous episodes in twentieth-century science. It placed the study of heredity in the hands of biochemists. The solution to the mystery of life was now sought above all in the reactions between complex molecules.

In 1957 Francis Crick formulated two important points of departure for research into heredity. Greatly simplified, they come down to the following. Firstly, he claimed that DNA encodes genetic information in the sequence of its nucleotides, which forms as it were a set of instructions for the sequence of amino acids (and therefore the structure) of the proteins in the body. The second point of departure was that information goes from DNA to protein, not the other way round. These two assertions have remained the basis for genetic research. The actual processes are of course considerably more complex; other substances are involved, for example.

Out of biochemistry a separate discipline now arose, molecular biology, which did not merely attempt to understand how specific characteristics were rooted in a specific DNA structure but also developed methods of actually tinkering with the DNA. A wide range of practical applications emerged. Sometimes it was mainly a matter of recognition; in criminal detection, DNA analysis (of hair or blood, for example) is nowadays a standard technique for identifying a suspected perpetrator. Sometimes it is a matter of actually changing inherited characteristics, for example to make crops more resistant to disease. This latter application, particularly, is far from uncontroversial.
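Crick's first point of departure, that the nucleotide sequence is a set of instructions for a sequence of amino acids, can be illustrated schematically. The sketch below is an illustration added for the modern reader; it uses only a small excerpt of the standard genetic code (the real table has 64 codons) to 'translate' a short DNA sequence, reading three letters at a time until a stop codon.

```python
# A tiny excerpt of the standard genetic code (DNA codons on the coding
# strand); the full table assigns all 64 three-letter combinations.
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly",
    "AAA": "Lys", "GCT": "Ala",
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def translate(dna):
    """Read a coding sequence codon by codon into amino acids,
    stopping at a stop codon -- the 'instructions for proteins' idea."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCAAATAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```

One-way information flow, Crick's second point, is built into the sketch as well: the function maps DNA to protein, and nothing maps back.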

HEREDITY AND EVOLUTION

The elucidation of the mechanism of heredity had enormous consequences for the idea of evolution. On the one hand the main points of Darwin's theory were confirmed, but on the other almost every part of it was turned upside down. In this context we speak of 'neo-Darwinism' or the 'neo-Darwinist synthesis', which is to say a synthesis between the theory of evolution and genetics, which looks at how inherited characteristics are determined by variations in


genetic material (in other words in the genes). The new theory of evolution was intended to show how characteristics, and ultimately species, can change when, as a result of certain factors, shifts happen in genetic variation.

The synthesis of heredity and evolution seems in retrospect fairly inevitable, but for contemporaries it was anything but self-explanatory. One reason for this was that you needed quite a bit of mathematics, or more specifically statistics, to make an evolutionary model based on the distribution of genes within a population. This was not something with which biologists were traditionally very familiar. The synthesis came about within a new field of study, population genetics, which was largely established by statisticians. The deciphering of DNA did not have a direct influence on the neo-Darwinist synthesis, incidentally, although knowledge of the precise mechanism of heredity did provide additional backing.

The neo-Darwinist synthesis unified the life sciences, even more than nineteenth-century evolutionism had done. But the social aspects had all but disappeared. Variation and heredity had become purely biological concepts that were found only at a molecular level. A nineteenth-century concept such as 'race' simply evaporated in the hands of the new generation of biologists. In the entirety of biological variation within a species, classic racial differences (such as skin colour) proved of minor importance.

This is not to say that the new genetics had no significance for the philosophical picture of humankind. The undermining of the concept of race was just as important politically for the generation born after the Second World War as its construction had been for the people of the nineteenth century. Moreover, research showed that human beings were genetically far more closely related to animals than previously thought. All kinds of specifically human characteristics or individual skills turned out to have a genetic basis.
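The kind of statistical model the population geneticists built can be suggested with a toy simulation, added here as an illustration: a simplified Wright-Fisher-style model with invented parameter values, not any historical calculation. It follows the frequency of a slightly advantageous gene variant through successive generations; selection pushes the frequency up, while the finite population size adds random drift.

```python
import random

def next_generation(p, pop_size, fitness_a=1.05, rng=random):
    """One generation step: allele A (current frequency p) is sampled
    into the next generation, weighted by a small fitness advantage."""
    w = p * fitness_a / (p * fitness_a + (1 - p))  # selection shifts the odds
    draws = sum(1 for _ in range(pop_size) if rng.random() < w)
    return draws / pop_size                         # drift: finite-sample noise

def evolve(p0, pop_size, generations, seed=42):
    """Track the allele frequency over many generations."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        p = next_generation(p, pop_size, rng=rng)
    return p

# A mildly favoured variant starting at 10% tends to spread through
# the population, though chance can still eliminate it early on.
print(evolve(0.1, pop_size=1000, generations=200))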
The question of what was actually ‘human’ came to the fore with great intensity. Molecular biology seemed to open up the possibility of changing the inherited characteristics of humans, whether to combat inherited diseases or to breed people with greater intelligence – or perhaps with less criminal tendencies. It was a prospect that raised any number of ethical questions, but it also forced a fresh reflection on free will and human nature.


Lastly, the discovery of the molecular foundation of life presented a new outlook on another fundamental issue that inevitably arose within the framework of the theory of evolution, namely the origin of life itself. The question now was how, in the conditions prevailing on the primitive earth, the molecules that provided the building blocks for DNA and the other vital components of the cell could have formed and then developed further. In this field interesting results were achieved, but to this day there is no generally accepted theory of the origins of life.

A SCIENCE OF HUMAN BEINGS?

Out of biological, palaeontological, biochemical and molecular-biological research, therefore, a vision of human beings arises that diverges markedly from the more philosophical ideas that were current until the nineteenth century. It is true that researchers were still largely groping in the dark or had differences of opinion on important matters such as the workings of the brain, the origins of language and so forth, but that does not alter the fact that a clear programme of research existed. Attempts were made to reduce these phenomena to molecular functioning or other 'blind' forces of nature.

Frustratingly, however, all this knowledge is of limited use. In most cases it remains completely unclear what the repercussions of today's fundamental biological, psychological or neurological theories should be for the daily practice of politics, childrearing, art, economics or the administration of justice. The old philosophy offered a total programme as far as that is concerned, and nineteenth-century scholars still believed it was one that their scientific approach could replace. But twentieth-century researchers withdrew to increasingly limited areas of science. Knowledge in all these fields has grown immensely, but at the same time the larger programme has steadily fragmented.

There have certainly been attempts to make things like morality accessible to science. An important new boost was given when in the middle of the twentieth century the theories of Darwin were declared to be applicable not just to anatomical structure but to animal behaviour. Animals had a fixed, inherited repertoire of behaviour that developed under the influence of natural selection. This assumption created a new discipline, called ethology.


Interest in this field again largely grew out of the question of what the consequences of such a starting point were for the interpretation of human behaviour. Its practitioners made no secret of the fact. One of the pioneers of ethology, the Austrian Konrad Lorenz, spent much of his career studying the behaviour of the greylag goose. He justified this choice with the argument that there was an astonishing similarity between the family life of the goose and that of humans. More recently, research on apes has aroused great interest, precisely because they are our closest biological relatives.

Efforts to explain human behaviour in biological terms have met with fierce opposition, even within science. In the 1970s a debate concerning what came to be called sociobiology raged in the United States over how far human behaviour was laid down in biology and to what extent it was determined by culture or by free choice. Aside from ideological factors, this debate was fuelled by conflicts of interest. Psychologists, sociologists and other social scientists wanted to have the last word on social matters and did not want to cede authority within their discipline to biologists. They felt less troubled by social scientists who took biological ideas seriously and tried to integrate them into their own field.

The various sciences that deal with the human species have continued to exist alongside each other, and they each have their own principles and methods. So it is not possible to say that at this point there is anything that could be called a general theory of humankind that underpins, or ought to underpin, all the knowledge that we have about human beings.


10

THE NATURE OF REALITY

A RATIONAL WORLD?

The Scientific Revolution of the seventeenth century included not least a radical new vision of natural reality. From that time onwards, nature was seen as uniform, as subject to exclusively causal processes, and as operating in accordance with underlying laws of nature that could be expressed mathematically. This image of the natural world has not been seriously contested since. Controversy has remained, however, as to where exactly the boundaries lie of that which can be described as 'natural', and therefore explicable in principle by natural science.

Even in the seventeenth century there were arguments about the extent to which the natural order excluded special, supernatural interference. Within the new Newtonian natural science, a place was reserved for direct divine intervention. Newton himself had believed that the laws of nature must lead to an instability in the solar system, since the planets were pulling each other out of their orbits. God therefore had to intervene from time to time to correct the resulting disturbance. According to Newton, he did so by means of comets, which with their force of gravity drew the planets back onto the right track.

DOI: 10.4324/9781003323181-14


Such an appeal to supernatural intervention fell increasingly out of favour among researchers, however. Eighteenth-century mathematicians refused to accept that the solar system was unstable in the way Newton suggested. They continued to make calculations and, sure enough, came to the opposite conclusion. The story goes that when at the end of the eighteenth century the French mathematician Laplace (he of the nebular hypothesis) presented his great comprehensive work on the subject, Mécanique céleste, to Napoleon, the emperor remarked that God was not mentioned anywhere in it. Laplace is said to have answered with pride, 'Sire, I had no need of that hypothesis.'

But even if we assume that nature works exclusively according to natural laws of causality, all kinds of possibilities remain open. Is the world completely determined on the basis of a few simple laws of nature, such that in theory every future event can be predicted? Or is it a chaotic and to a great degree unknowable whole? In the second part of this book I briefly referred to a variety of possible interpretations that relate to the different ways of using mathematics.

There is also room for disagreement over the scope of scientific explanations within the domain of nature. It is clear that by using the model of natural science you will not get far in the realms of ethics, politics, aesthetics or poetry. But what about health, or childrearing? The eighteenth-century Enlightenment regarded science as a universal model that guaranteed human happiness and provided a basis for the design of a rational society. Early nineteenth-century Romanticism, by contrast, stressed the shortcomings of natural science and gave a central place to concepts such as feeling, imagination and dignity.

In the eighteenth century, people were convinced that the world was rational, which is to say that it could be fathomed by human reason.
Only its ultimate foundations were unknowable, but no one needed to worry too much about those. They had been established by God, with a view to the welfare of his creation, human beings in particular. The world was therefore not merely comprehensible to humans; it was designed to suit their understanding and needs.

The ideal of knowledge at that time was personified by the figure of Newton. To the general public of the eighteenth century, Newton had not merely captured a number of phenomena in mathematical


formulae. For most people, after all, those were impossible to follow. What they mainly learned from Newton was that nature was rationally structured and governed by mechanical laws. Nature could be comprehended by normal human intelligence. In other words, Newton had made the world intelligible.

In the nineteenth century this optimistic vision, in which the entire world was created to suit humankind, went into decline. However, this was not accompanied by a reduction in scientific ambition. People appealed less and less to God as the founder of the basic principles of nature, but this led them to want to subject those principles to investigation. In the eighteenth century nobody had bothered too much about the cause of the universal gravitation that Newton had discovered. It was simply a divine creation. In the nineteenth century more and more scholars came forward who wanted to derive it from other laws of nature (without success, incidentally).

In the second half of the nineteenth century especially, the pretensions of science were sky high. People believed that 'scientific principles', as they were applied at the time, provided the key to the understanding of all of reality. This self-confidence was strongly encouraged by the technical progress of the day, which was seen as a corollary of the exact sciences. Then there was the fact that in those years natural science became professionalized in institutes and at universities. The new professors and researchers bestrode their fields of study with healthy ambitions and made considerable progress. No wonder they raised enormous expectations as to what their profession might be able to achieve.

THE BUILDING BLOCKS OF REALITY

The physicists and physiologists of the nineteenth century generally based their faith in the almighty power of science on an image of nature in which everything could be attributed to mathematical and mechanical phenomena. The issue of the structure of material reality (according to some the only reality there was) was a topic of intense debate all through the nineteenth century. For reasons of principle, many tended towards a theory according to which nature was composed of minute, indivisible particles, called atoms.


This representation of reality had found acceptance mainly because of chemistry, which ever since Lavoisier had assumed that all chemical substances were built out of a relatively small number of elements. This created a presumption that there were certain elementary building blocks, known as atoms.

The Englishman John Dalton was one of the firmest defenders of this idea. He believed that the different elements were each composed of a different kind of atom, distinguishable by its weight. Compounds were therefore made up of 'combined atoms', combinations (according to simple, fixed relationships) of the 'primary atoms' of which the elements consisted. Based on these assumptions, Dalton, by means of measurements of the weight and volume of the various substances, was able to put together a table of atomic weights, as well as a list of the precise ways in which the combined atoms of the different compounds were made up of the primary atoms. Although with the benefit of hindsight we can find quite a few mistakes in his list, his way of describing compounds in chemistry proved extremely productive.

Later the theory of atoms turned out to be a good point of departure not only in chemistry but also in thermodynamics. Yet this did not constitute proof that atoms actually existed. After all, nobody had ever seen one. Moreover, the theory also left certain characteristics of matter hard to explain, such as radiation spectra. The existence of atoms was for the time being no more than a hypothesis, and scholars could still be found who were sceptical about it.

Eighteenth-century researchers had mostly proposed that things such as heat and light were material, based on a theory of atoms. They regarded heat as a weightless substance that could flow from one body to another. One problem with this theory was that it did not explain why heat could be released by friction, movement or an electrical current. In the nineteenth century this materialist interpretation fell out of favour.
Researchers now preferred to interpret heat as the movement of particles of matter. For them the world was not made up of matter alone but also of ‘force’, which was thought of as a kind of immaterial substance. Heat and movement, but also electricity and chemical forces, for example, were manifestations of it. Force in this sense was at first a vague concept. (It should not be confused with the forces of Newtonian mechanics.) However, in


about 1840 the British researcher James Prescott Joule, building on the principle that heat and mechanical work were both expressions of the same 'force' and could therefore be converted into each other, managed to determine by means of precise measurements exactly how much heat was equivalent to how much mechanical work. The vague concept of 'force' thereby became something that could be measured.

A few years later, the German researcher Hermann Helmholtz extended Joule's findings to a far wider terrain. The principle that the underlying force is constant, not only in heat and work but also in electrical and magnetic processes, was for him the starting point for a mathematical analysis. He showed that many of the known laws of nature could be derived from this principle. It meant that heat, movement, electricity and so on were not independent entities but different manifestations of a single overarching principle. In 1851 the English physicist William Thomson (Lord Kelvin) introduced the term 'energy' to replace the problematic concept of 'force'. Energy could be converted into its various manifestations – heat, movement, electrical or chemical energy – but could not be destroyed or created. It was therefore a fundamental building block of reality.

Maxwell's theory of the electromagnetic field amounted to a fresh attack on the idea that everything in nature could be explained on the basis of atoms flying through empty space. In Maxwell's theory, forces did not emanate from matter; they were stored, as it were, in space. The theory received a boost when the German researcher Heinrich Hertz proved by experimental means the existence of electromagnetic waves that spread through empty space. This was hard to understand in terms of atoms or energy, and it therefore gave rise to all kinds of new ideas. Most of the resulting theories had a prominent place reserved for 'ether', a mysterious substance that filled space. Ether was weightless but conceived as matter.
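Joule's equivalence of heat and mechanical work, described above, can be made concrete with a modern back-of-the-envelope sketch; the numerical values are present-day ones, not Joule's own, and the scenario is the textbook paddle-wheel arrangement. A falling weight delivers work W = mgh, which reappears as heat Q = W and warms the water by ΔT = Q/(m·c).

```python
# Joule's heat-work equivalence in modern units (illustrative values).
G = 9.81          # gravitational acceleration, m/s^2
C_WATER = 4186.0  # specific heat of water, J/(kg K) -- the modern
                  # statement of Joule's mechanical equivalent of heat

def temperature_rise(falling_mass_kg, drop_height_m, water_mass_kg):
    work = falling_mass_kg * G * drop_height_m  # paddle-wheel work, joules
    return work / (water_mass_kg * C_WATER)     # warming of the water, kelvin

# A 10 kg weight falling 2 m stirs 1 kg of water: a rise of only a few
# hundredths of a degree, which is why such precise measurement was needed.
print(temperature_rise(10, 2, 1))
```

The tiny temperature rise makes vivid why Joule's thermometry had to be so exact before anyone accepted that 'force' was a measurable quantity.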
Previously, many people had thought of reality in purely material terms; everything was a manifestation of material atoms. Now some researchers went to the other extreme by believing that matter was a manifestation of something else. William Thomson (Lord Kelvin) posited that space was filled with a world ether that had the


properties of an ideal fluid, and that atoms were nothing other than swirling rings (called vortices) in this ether, rather like the smoke rings that practised pipe smokers know how to blow. Based on this idea, he was able to make a reasonable case that atoms could attract or repel each other and connect together. Others, especially in France, adhered to the idea of point atoms: an atom was an immaterial centre from which forces were exerted. All matter in the world therefore consisted purely of forces. Others again, notably the German chemist Wilhelm Ostwald, regarded matter as a form of energy and energy as the only building block of reality. It was therefore not mechanics, as had traditionally been thought, that provided access to the most fundamental laws of nature, but the study of energy.

Few were convinced by Ostwald's ideas. By contrast, what became known as the electromagnetic worldview, according to which all of nature was an expression of electromagnetic forces, did acquire some supporters. The German researcher Wilhelm Wien proposed that the laws of mechanics, of thermodynamics, of chemistry and of gravity should be reduced to those of electrodynamics. Around 1900 many researchers saw this as a promising programme.

Discoveries in the field of energy and electromagnetism were important theoretical innovations, but they also figured in more metaphysical debates about the nature of reality. For many people the introduction of the concept of energy raised the question of how this new concept could be reconciled with the workings of the mind on the body. Was the law of the conservation of energy not violated if the material world (the body) could be controlled from the non-material world (the mind)? For Ostwald his own energy ideas were the starting point for a new outlook on the world, complete with services of worship and hymns.
For many others, researchers but also lay people, this took a less extreme form: science was an alternative way of looking at the world that was expected not only to explain reality but to give it meaning. The replacement of a purely atomic worldview by one in which a major role was reserved for force or energy in a sense met the need felt by many people to see in reality not just matter but something like spiritual forces, which in turn had to be described in strictly scientific terms. In this outlook there was no room for a traditional image of God.


Those who had loathed the materialism of science certainly did not regard the new ideas as an improvement.

RESEARCH INTO RADIATION

The ideas considered above concerning the structure of matter had a dubious scientific status. Theories like those of Maxwell, Helmholtz and others gave a mathematical description of phenomena that did not depend on the underlying structure of reality. They were useful to work with. The fact that many scholars could not resist speculating on that underlying structure says a great deal about their quest to explain the world as a whole. When a field of study presented itself that did indeed throw light on the structure of matter, they needed little encouragement to step into it.

The field in question was the experimental study of radiation phenomena. Around the middle of the nineteenth century, spectral analysis was developed. By means of a diffraction grating or prism, researchers could analyse the light that various substances emitted when heated. Each of the different elements had its own characteristic colour spectrum, which must be due to a property of its atoms. Based on existing models of atoms, however, it was hard to explain how this could be.

In the second half of the nineteenth century, these efforts were accompanied by a growing interest in research into what were then called cathode rays – the radiation released when electrical charges were introduced into rarefied gases. The phenomenon revealed itself through effects of light, but it turned out that invisible radiation was also involved. This could be made visible by artificial means, for example by the use of a fluorescent screen.

As physicists became better at detecting this kind of radiation, they began to find it in other places too. In 1895 the German physicist Wilhelm Conrad Röntgen discovered, more or less by chance, a new kind of penetrating radiation, which he called X-rays.
This persuaded researchers to go in search of further sources of radiation, and that in turn led to the discovery of radioactivity, the phenomenon that some substances naturally put out invisible radiation. An essential aid to this research


was the photographic plate, which proved to be sensitive not only to light but to other forms of radiation as well. Physicists soon distinguished between different forms of radiation with differing characteristics. Some types turned out to consist of negatively charged particles. Soon there were researchers who claimed that in these they had found the elementary building blocks of matter. The question then arose as to how it came about that the hypothetical atoms themselves were electrically neutral. Apparently, there were not just negative particles in the atom but also a positive charge. Researchers, most of them British, put together hypothetical models of atoms in order to explain known characteristics such as radiation and light spectra.

Another route entirely was pioneered by the New Zealand physicist Ernest Rutherford, who worked in Canada and England. Instead of treating the new radiation as building blocks, he deployed it as an instrument. To research the nature of matter, he bombarded it with particles of known radiation and observed how those particles were scattered or deflected. This was made possible by new technology that enabled him to detect individual particles of radiation. In 1911 Rutherford concluded from his experiments that the mass of atoms must be almost entirely concentrated in a tiny, massive nucleus.

Research into radiation and the particles it emitted developed rapidly into a fully fledged field of research, for which all kinds of new methods and instruments were invented. At last, researchers had found a means of looking inside matter, as it were. The period around 1900 was therefore an exciting time for physicists engaged in research into the structure of matter. New discoveries were continually being made and new ideas launched. The old question about the existence of atoms quietly disappeared from the picture. Around 1900 the question was no longer whether atoms existed but rather what they looked like.
Research on radiation and radioactivity did not, however, lead to a final theory that at last made physics complete on the basis of a theory of atoms. In fact, researchers were forced to conclude that in this field nature paid little heed to what people found reasonable. The structure of


matter turned out to be a subject in which the assumptions of classical physics no longer applied. That nature in some cases, as in thermodynamics, conformed to statistical 'laws' instead of to strict, predetermined principles was in itself a shock to some. But existing ideas about the world were not completely turned on their head until the arrival of two new developments at the start of the twentieth century: relativity theory and quantum mechanics.

Each is a complicated mathematical theory that can only really be understood by specialists. Up to a point, however, the consequences of these two theories can in fact be described in ordinary language. From this it becomes clear that a strictly scientific treatment leads to consequences that are sometimes in direct conflict with our intuition, or even seem utterly impossible. Because of these consequences, both theories have become familiar to a wide circle of people.

THE THEORY OF RELATIVITY

Probably the better known, in name at least, is the theory of relativity, developed by Albert Einstein, a physicist of German extraction who studied at the Swiss federal technical college in Zürich. He had hoped to find work at a university after he finished his studies but could not get a foot in the door, so he took a job with the Swiss patent office in Bern. In his free time he continued to study physics and published a number of scientific articles. These eventually drew attention to him, and he was at last able to begin an academic career.

There are actually two relativity theories: the theory of special relativity, published by Einstein in 1905 when he was still working at the patent office, and the theory of general relativity, published in 1916, which is a generalized version of the first. Both theories, roughly speaking, deal with the behaviour of particles in a space-time system. They show that time and space do not have the characteristics we intuitively attribute to them. Two events that for one observer take place simultaneously are not necessarily simultaneous for another observer. This leads to any number of paradoxes, of which much was made in the popular imagination.

Einstein's ideas had a revolutionary impact within physics. Since the seventeenth century, Newtonian mechanics had been the basis


of research into nature. Newton's laws were supplemented, refined and reformulated over time, but not called into question in any essential way. Now, however, Einstein showed that although those laws held good for our daily reality, in essence they were merely approximations. In extreme circumstances, such as at (to us, at least) very high speeds, they no longer applied. This was not a matter of a few minor corrections; Newton's fundamental ideas about nature itself, about concepts like space, time and speed, needed to be reconsidered. As did mass. It followed from Einstein's theory that mass was a form of energy, and vice versa. Mass and energy were equivalent according to the famous formula E = mc² (where E stands for energy, m for mass, and c for the speed of light in a vacuum).

Because departures from the theories of Newton were detectable only in extreme situations, Einstein's theory is of importance mainly in two fields of research. The first is cosmology. To understand phenomena in the depths of space, Newton's theory is in many cases inadequate and the theory of relativity comes into play. The second concerns the behaviour of elementary particles, which sometimes move at nearly the speed of light. In interactions between elementary particles, mass is actually converted into energy, or vice versa.

A fundamental concept in late nineteenth-century physics was, as we have seen, the world ether. This was the medium of phenomena such as light and electromagnetic radiation, which were seen as waves in the ether. Einstein consigned the concept to the scrapheap. The speed of light was constant for every observer, not relative to some hypothetical ether. This meant that it was suddenly unclear what light actually was.

The significance of Einstein's theories goes beyond the calculations of physics, however.
The fact that concepts like time and space behave differently from the way classical physics said they did, and from the way we intuitively experience them, means that the theory of relativity is important for our general view of reality. Einstein’s theories are mathematical in character and difficult for an outsider to fathom. Nonetheless, his results appealed hugely to the imagination, that of the general public included. He came to widespread fame after 1919. In that year an expedition led by the British astronomer Arthur Eddington made measurements during a solar eclipse to determine whether starlight was bent by the body of the sun, as Einstein had said it would be. The observations provided convincing confirmation of Einstein’s predictions. The result was reported widely in the press, and from that moment on Einstein was one of the twentieth century’s most famous figures.

Only a very few people truly understood his theory, but it nevertheless filled a need. Whereas two centuries earlier Newton had become famous because he had made the world comprehensible, Einstein was now worshipped for having made it incomprehensible once again. For many people his work restored a sense of mystery, a feeling of touching upon the deepest secrets of the world, which surpassed human powers of imagination. Einstein’s work therefore had a huge, if difficult to measure, influence on the culture and thinking of the twentieth century, even though the usefulness of his theories for everyday life is close to zero.

QUANTUM MECHANICS

In research into the structure of material reality, too, revolutionary shifts in insight took place in the early years of the twentieth century. Once again, Einstein was there right at the start of the development, along with the German physicist Max Planck. Neither of the two, however, initially understood what they had unleashed. Planck was researching the radiation of ‘black bodies’, bodies that do not reflect any light. To reach his conclusions, he smuggled in the assumption that this radiation was quantized, which is to say that it was released in specific amounts. He did not go on to see this as a feature of light itself. Einstein showed that certain phenomena in physics could be explained by assuming that light does in fact consist of particles. His proposal was not taken very seriously at first; in the nineteenth century Fresnel and others had shown conclusively that light was a wave phenomenon. In the first half of the twentieth century, however, these light quanta, or more generally radiation quanta, assumed a key role in research into the structure of matter. (Which explains why the theory is called quantum mechanics.)

The Danish physicist Niels Bohr put together a model of the atom in which negatively charged particles (electrons) orbited around a positively charged nucleus, which made his model compatible with the earlier findings of Rutherford. New in Bohr’s model was that the energy of these electrons was also quantized. In concrete terms this meant that electrons could occupy only certain orbits (corresponding to certain energy levels). Transitions between the different energy levels happened abruptly and were accompanied by the emission or absorption of light. Light emitted by an atom therefore also had a certain amount of energy. Because the energy corresponded with the frequency, the light sent out was of a specific wavelength. Bohr thereby succeeded in explaining the characteristic light spectra of the elements.

Light, bizarrely enough, could be described either as a wave or as a particle. And just as light (waves in the electromagnetic field) also had a particle aspect, particles, conversely, also needed to obey the laws governing the movement of waves. In 1926 the Austrian physicist Erwin Schrödinger put forward an equation for the movement of an elementary particle. Known as the Schrödinger equation, it is one of the twentieth century’s fundamental laws of nature. It can be used to calculate the likelihood that a particle, under certain preconditions, finds itself at a certain time in a certain place. It is not possible to calculate either the movement or the position precisely, not because the mathematics falls short but because the particle itself behaves unpredictably. This non-deterministic behaviour was described in the equation with the aid of a wave function.

For centuries physicists had believed in a world governed by laws of nature that determined everything with precision. The fact that nature was essentially non-deterministic came as a shock to many. Einstein, for example, refused all his life to accept this consequence and tried to undermine the theory in every possible way. He did not succeed; the paradoxes to which he pointed turned out to correspond to reality.
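In its modern textbook form (given here for orientation; the notation is not in the original text), the time-dependent Schrödinger equation for a single particle of mass m moving in a potential V reads:

```latex
\[
i\hbar \frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi
\]
```

Here ψ is the wave function, and the squared magnitude |ψ|² gives the probability density of finding the particle at a given place and time, which is precisely the non-deterministic element described above.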
But even physicists who were more receptive to the new developments struggled to reach agreement about the consequences the new theory inevitably had for the interpretation of nature. Niels Bohr fiercely defended the fundamentally probabilistic character of the movement of particles (a stance known as the Copenhagen interpretation of quantum mechanics). This interpretation gained the greatest authority, but other ideas existed alongside it, as they still do. Fascination with these new insights was not limited to scientific circles, although the Schrödinger equation itself remained relatively unknown.


For the general public, quantum mechanics was embodied most of all by what is known as Heisenberg’s uncertainty principle. A year after Schrödinger presented his equation, the German physicist Werner Heisenberg formulated a principle according to which it is impossible to know both the position and the momentum (mass times velocity) of a particle beyond certain margins of accuracy. If you determined its precise position, you thereby robbed yourself of the possibility of making an accurate measurement of its momentum, and vice versa. This finding, which was entirely compatible with Schrödinger’s theory, was more appealing than the mathematical formalism of the latter. The same applies to the uncertainty principle as to the theory of relativity, incidentally, in that most people picked it up as an element of a view of the world, not as a theory in physics. The uncertainty principle was properly understood only by specialists, but that does not alter the fact that it had an unmistakable effect on twentieth-century culture.
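In its modern quantitative form (not spelled out in the text), the principle bounds the product of the uncertainties in position x and momentum p:

```latex
\[
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
\]
```

where ħ is the reduced Planck constant. Sharpening the measurement of one quantity necessarily broadens the spread of the other, however good the instruments.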

IN SEARCH OF A THEORY OF EVERYTHING

People have the feeling that knowledge of this sort gives them a deeper understanding of the structure of reality, which is undoubtedly an important reason why research along these lines is so prominent. Research into the way in which the world is composed of tiny particles, and the laws that prevail at that scale, addresses one of the ‘big questions’ of our time. But there are also more prosaic reasons for this research. It certainly does have its applications. The invention of the atomic bomb in 1945 was a direct consequence of the development of nuclear physics and an application of the equivalence of mass and energy that follows from Einstein’s relativity theory. The effects of the dropping of atomic bombs on two Japanese cities at the end of the Second World War showed a dark side of scientific progress to which little attention had been paid until then. The fact that knowledge about these fundamental processes gave rise to the atomic bomb and to nuclear energy has undoubtedly contributed to the willingness of governments to foot huge bills in support of this kind of research.

After the Second World War, particle physics had a powerful wind in its sails. Researchers focused on deeper and deeper levels of reality. They wanted to find out where the origins of the smallest particles lay. In large accelerators, particles could be made to collide so that they disintegrated into their component parts. This led to the discovery of countless new elementary particles within what is known as the standard model, and a small number of them were identified as those of which all the other particles were composed. The notion that knowledge of elementary building blocks meant knowledge of reality as such was in essence the ideal of seventeenth-century mechanical philosophy, but in the twentieth century it turned out to have lost none of its power. A complete description of elementary particles and the interactions between them would amount to a ‘theory of everything’, as it was evocatively described. Electrical and magnetic forces, a variety of forces observable only at the level of particles, and the force of gravity all needed to be brought together in one big, unified theory.

Although for the time being no practical use for this research is to be expected, it is nevertheless a subject to which physicists all over the world have devoted a great deal of time and energy since the Second World War. Many of them regard it as more or less the heart of their discipline. The theoretical unification they are striving for has not been achieved to this day. The biggest stumbling block is gravity, which turns out to be extremely difficult to accommodate in one system along with the other forces of nature. Many hundreds of theoreticians are working on the problem and all kinds of theories have been proposed and elaborated in great detail, but none enjoys general endorsement. One problem is the extreme difficulty of testing such theories in practice. As increasingly fundamental levels are plumbed, smaller and smaller elementary particles need to be dealt with and greater energies are required to bring about the predicted phenomena.
This necessitates the building of vast particle accelerators and other equipment that even most national governments cannot pay for on their own. Moreover, since the end of the Cold War, the attention paid by governments to this kind of research has diminished considerably. It is now justified mainly by the deeper insight into reality that it offers us.

11. THE INFLUENCE OF SCIENCE ON THE GENERAL WORLDVIEW

SCIENTIFICATION?

It is almost a cliché to say that we live in a technologized world and that our worldview is to a great degree determined by science. It is true, but we need to consider the fact that the worldview of modern times is scientific in the same way that the Middle Ages were ‘Christian’. In mediaeval times the Church was dominant, but the vast majority of the population was illiterate. People could not read the Bible themselves and had no direct access to the opinions of church fathers or theologians. They got their ideas from sermons, from images in church buildings, and from legends and stories passed down orally. As a consequence, their ‘Christianity’ often stood at a great distance from official doctrine and was imbued with all sorts of popular traditions and outlandish heresies.

In today’s society something similar applies to science. Scientific research is extremely remote from most people. Many undoubtedly regard science nowadays as a source of ultimate answers, but they themselves are not at all well placed to evaluate or understand scientific statements, and most make no effort to do so. Scientific information reaches them through a filter: via doctors, schoolteachers, civil servants, journalists, television programmes and, more recently, podcasts, the internet and so-called social media. Of course, each of these intermediaries has goals of its own, which do not always coincide with those of the scientists involved.

But however imperfect insight into the business of science may be, there is no denying that it has had a major influence on the worldview of the ordinary man or woman. Most people accept without question that the earth is a sphere and orbits around the sun. Even small children are familiar these days with the concept of a primaeval era and with dinosaurs. People know that vitamins in their food are important. We all have a vague notion of what atomic energy is and have heard of ‘genes’. At the same time, however, it is arguable that generally speaking none of this runs deep. What people pick up are not coherent theories but odd facts or findings that speak to them for one reason or another – because they are spectacular, like the dinosaurs, or because they are regarded as useful, like vitamins. Often such facts are taken out of their scientific context, so that they acquire a whole new meaning. Science provides elements that people use to furnish their worldview, but the influence on that worldview itself, on the worlds they live in and experience, is in most cases minimal.

This seems almost unavoidable. At the start of Part Three of this book I pointed out that the world as people experience it is not only factual; it has a moral side as well. Things are good or bad, meaningful or meaningless. In theory we are no doubt capable of completely separating a moral conception of the world from a natural conception of it. In practice the two are closely wedded. People want to see their moral beliefs reflected one way or another in the actual world around them. When something is morally necessary, it cannot be regarded as the result of chance and blind forces of nature.
It is not without reason that, through myths, legends and other traditions, people have of old created a picture of the world that is very much determined by their moral insights. I am talking here about ‘laypersons’, but this is no less true of the scientists themselves. Researchers generally specialize in a very narrow field and have little insight into the other scientific matters at issue. They are, therefore, as much laypeople as anyone else, and of course they have just as great a need for a moral compass. The results of science need to be fitted into a worldview that has much of its grounding elsewhere and is on the whole morally inspired.

In fact, in some cases ‘scientific’ outcomes are quite strongly coloured by morality, especially when it comes to subjects that affect people directly. In the field of medicine in particular, it has sometimes been difficult to leave moral judgements out of account. Medicine is regarded as an exact science, based on insights into the workings of nature. Doctors have often actively propagated their scientific ideas. Around 1900 they agitated for social reforms in the areas of housing, waste disposal and food inspection, based on medical ideas about the importance of hygiene. Ideas developed in scientific circles about health and healing often quickly feed through, in some form, into the broad strata of society. This is true of the notion of bacteria, the danger of infection and so on. Despite this, for much of its history ‘scientific’ medicine allowed itself to be led by ideas that, in retrospect, were mainly based on moral desirability. This is particularly obvious in the case of theories about sexuality or sex. Behaviour that was frowned upon by society, such as masturbation, was almost universally condemned by doctors as unhealthy and dangerous. Women were seen not just as physically weaker than men but as more emotional and more caring, with a greater tendency to nervousness. From a scientific point of view it seemed unarguable that their nature was such that they were clearly intended for motherhood and for caring functions in society. They were simply not suited to giving leadership or performing abstract intellectual work. In these cases, ‘scientific theory’ was above all a theoretical dressing-up of prevailing prejudices.

While moral judgements influence the interpretation of the workings of nature, ideas about the natural world in their turn affect the moral dimension. Religious leaders sometimes warned that too much respect for science could undermine faith in moral and religious values.
Others, especially in the first half of the twentieth century, praised science as a moral undertaking, because of the values of objectivity and universality it was felt to embody. In the nineteenth century some hoped that modern science would provide the foundations for a total world vision. Science would reveal the deeper structure of reality and so offer a general and coherent insight on the basis of which all the big questions could be answered. The scientific story about the origin of the world, via the nebular hypothesis and the theory of evolution, was like a cosmic drama in which humankind was given a role to play. Forms of society and morality were legitimized by it. Generally, such ideas were strongest in those who resisted the Church and its monopoly on morality for political reasons.

Science is reductionist by nature, however. It limits itself to that which is countable and measurable, and as a result it does not fit easily with the need many people have for a vision of the totality of existence. In the twentieth century, further research left little of any such overall view intact. Scientification did not so much mean that science became the basis of a comprehensive philosophical vision as that science paid less and less heed to philosophical questions. Research became autonomous and splintered into a great many specialist areas that often exhibited little connection with each other. Instead of a great cosmic drama, today’s science mainly shows us a mountain of detailed data, whose relevance to human existence is not immediately clear.

Science nowadays makes statements about major questions that were traditionally treated within a moral framework. However, not only does it thereby pay little attention to the moral values of a majority of the population, it also offers no alternative to the worldview that people actually hold, if only because for most of us science is something we cannot possibly follow in any detail. Science is therefore in a sense a foreign body in our society: a business that with general approval has vast sums of money pumped into it, is highly praised for its results and is regarded as an essential resource as soon as a simple practical problem presents itself. Yet science has not been taken up into the everyday moral economy, and for that reason it is quite often experienced as a threat to the general worldview, an object of distrust and sometimes downright hostility.
The social acceptance of scientific findings has sometimes gone smoothly and sometimes encountered considerable obstacles. It can never entirely be taken for granted.

ACCOMMODATION OF SCIENTIFIC FINDINGS

Whereas in the seventeenth century the relationship between the scientific and the biblical worldview produced quite a bit of uncertainty and strain, those tensions had been resolved by the early eighteenth century. Leading intellectuals assumed that the findings of modern scientific research could easily be reconciled with the traditional Christian worldview as taught by the Church. Research into nature revealed the power, greatness and goodness of the Creator. The image of God of those times had a positive cast to it. While in the seventeenth century God had been seen above all as vengeful and punishing, the eighteenth century preferred to paint a picture of a benevolent God who had designed the world for the benefit of his creatures. This meant that the universe was rather static, its existing order guaranteed by God. Generally, researchers hesitated to challenge such ideas, since in the resulting climate they were largely left to their own devices.

Serious dissatisfaction with this situation arose again only towards the end of the eighteenth century, in the Romantic period. Romanticism is not a coherent complex of ideas; rather, it is mainly characterized by a quest for higher values. Thinkers could therefore arrive at very different conclusions, but the way in which faith and science had been reconciled in the eighteenth century no longer appealed to any of them. Opposition arose, not so much to science itself as to the accompanying optimistic, and in the eyes of the Romantics superficial, image of God that had prevailed in the preceding period. In the Romantic vision of reality, the spiritual dimension had a prominent place. In some cases, adherents of Romanticism simply turned their backs on science and devoted themselves entirely to poetry, music or mysticism. But others went in search of a higher reality in nature itself and refused to resign themselves to the image of nature served up to them by the sciences. They did not contest the results of research, but they regarded them as one-sided.
Anyone who looked only for material and causal phenomena would naturally find nothing else, they claimed, but nature was infinitely richer and had a spiritual component. The movement inspired by Romanticism that sought a more spiritual understanding of nature remains to this day an important undercurrent in Western culture.

This is perhaps most visible, once again, in medicine. Since the nineteenth century a broad range of therapies has blossomed that insist on regarding human beings as spiritual creatures, such as homeopathy, magnetism, herbal medicine and faith healing. In some cases these were practised by scientifically trained doctors, but very often they were administered by lay therapists. For all their variety, they shared an often emphatic dismissal of the reductionist theories of scientific medicine. Instead, they claimed to be applying a more natural form of healing that brought people into harmony with their environment. Their power of attraction seems to reside above all in the fact that they connect with the moral universe in which many people live.

As research became more reductionist and materialistic over the course of the nineteenth century, spiritual interpretations of its results only gained in power and vitality. Natural science was not so much rejected – in fact the Romantics took on the vocabulary and to some extent the way of thinking of the natural sciences – but it was given a different spin, in order to show that the modern world needed to be interpreted according to human values. Evolution, for instance, came to be seen not as a blind process but as a pursuit by nature of a ‘higher’ state of being. Later, in the twentieth century, evolution was even equated with spiritual growth. All kinds of ideas and practices sprang up that attempted to bridge the gap between the spiritual world as people wanted it to be and the material world as science showed it to them.

In the nineteenth century, first in America, the spiritualist movement arose. Spiritualists claimed they could prove they were able to make contact with the spirits of the dead. There was enormous interest in spiritualism for many years, even among some scientists, who believed it offered a way to provide empirical, scientific proof of life after death. Ultimately, it has to be said, spiritualism appealed mainly to those with feelings of unease about the positivist, materialist worldview.
After science appeared to have banished the spirit from the world, people tried to fill reality once again with meaning. Typically, a form of spiritualism was sought that could be said to be in agreement with scientific principles and was therefore immune to attacks from the positivist side. The twentieth century, too, was full of such attempts. The great fame of the theory of relativity and quantum mechanics among the general public of the twentieth and twenty-first centuries seems to be attributable in large part to their incomprehensible character, which appears to transcend materialist and causally determined laws of nature. Anyone wanting to find spiritual significance can therefore (apparently at least) appeal to the authority of science itself.

REJECTION OF SCIENTIFIC FINDINGS

Along with this Romantic undercurrent, a second movement has arisen since the late nineteenth century, one that rejects modern society as such and calls for a return to the Christian tradition. In part its rise has to do with the emancipation in the nineteenth century of previously voiceless population groups. It was the time of rising mass movements and political parties. Religious leaders mobilized their followers with an appeal to shared moral values. Over the course of their struggle for emancipation, many people came to identify strongly with the tradition of faith. At the same time, in this era the downsides of modernization started to become visible: urbanization, the disintegration of traditional communities, impoverishment and the rise of a socialist movement that looked like a threat to political stability. As a consequence, many turned their backs on modernity.

People generally had no difficulty with things such as precision and predictability, but they resisted the philosophical conclusions that many drew from modern science. In general, it can be said that opposition to the scientific worldview was strongly moralistic in tone: if people would just stick to the right moral and religious ideas, society would return to the tried and trusted moral order. Religious believers were not necessarily opposed to science, of which they had little understanding. They strove for a certain moral conception of reality. Protestants in particular were eager to defend the Bible as the word of God. The battle became focused on the biblical story of creation. The most controversial issue when it came to science was Charles Darwin’s theory of evolution: the claim that the animal and plant kingdoms had derived their current form from natural, undirected processes of change that had taken an unimaginably long time to occur. The idea that even humankind had descended from lower life forms in this way was totally unacceptable.
It is clear, incidentally, that in many cases attacks on the theory of evolution were a convenient stick with which to beat other targets, and that people regarded this particular theory as so threatening mainly because it stood for any number of other things with which they disagreed: liberalism, socialism, communism, capitalism, the decline of a sense of community and so on.

The strongest, or at least the best organized and most influential, opposition to Darwin’s theories emerged in the United States. Here ‘creationism’ began as a deliberate attempt to safeguard education in schools against despicable Darwinist tendencies. The issue of whether the biblical story should at least be taught on an equal footing with evolutionary theory was fought out all the way to the courts. The movement was organized by influential religious leaders with the help of modern mass media, including creationism’s own magazines, and using sophisticated propaganda techniques.

Two things are particularly striking. The first is that the movement did not turn squarely against modern science but rather sought arguments to show that the theory of evolution was scientifically invalid. This was to a great degree a tactical decision: under American law, religious arguments stood no chance in court, so creationists had to take a different tack. It also indicates that scientific research into nature could no longer be ignored, even by its fiercest opponents. The second thing that stands out is that attitudes hardened over time. If in the early twentieth century the creationists were still prepared to compromise somewhat – the six days of the biblical creation story could if necessary be read as referring to a longer period – after the Second World War a far more radical version got the upper hand, one that stuck firmly to the letter of the biblical story. For conservative pressure groups the battle against the theory of evolution was a key part of a struggle over social and political reform.
As far as that is concerned, modern creationism is an exponent of growing polarization, in the United States especially, between Christian fundamentalists and neoconservatives on the one hand, and on the other the individualism and hedonism that has reigned supreme since the 1960s. This conflict is played out beyond the field of evolution. In recent years the subject actually seems to have lost some of its urgency in comparison to themes such as abortion, climate change, immigration, ‘wokeism’ and national security. Here moral values rather than scientific insights are ultimately decisive.


CONCLUDING REMARKS

The history of modern science is not a story with an obvious hero. There is no single method used by all researchers, let alone one that is specific to science. In truth there is no difference in principle between the rational research methods of scientists and those of police detectives. Logical consistency is a basic requirement for scientists, but also for lawyers. In any case, ‘science’ as a single discipline does not exist. To a certain degree it is purely a matter of chance which form of knowledge is regarded as science at any given moment and which is not. Ideas about what constitutes good and true knowledge are subject to change, and the history of science can be described according to these changing ideals of knowledge.

In this book modern science is largely identified with the ideal of knowledge as it emerged in the seventeenth century. In that century nature came to be conceived in terms of three fundamental principles – uniformity, causality and regularity – which have remained the foundations of what came to be known as scientific knowledge and scientific research. Researchers who attempted to explain the characteristics of a thing on the basis of certain inherent ‘qualities’, or who believed that different laws of nature applied on the moon than on earth, were no longer taken seriously after the seventeenth century.


Nonetheless, these three basic premises of science are not sacred. Descartes took the idea of the uniformity of nature so far that he believed the whole of the universe consisted of just one kind of matter. That notion turned out to be untenable. Modern physics still cherishes the ideal of being able to trace all existing phenomena back to one fundamental primal force, but in practice – as long as the ‘theory of everything’ has not been found – physicists accept different kinds of forces and particles. The conviction that everything in nature happens according to causality alone has actually been seriously dented. Quantum mechanics has made the strict causal determinism of classical physics problematic on certain points, although this sounds more dramatic than it is. The non-determinism of quantum mechanics can still be captured in mathematical rules and has nothing to do with purposiveness, sympathy or any of the other qualities that seventeenth-century natural philosophers were the first to set themselves against. Moreover, its ambit is small: in the vast majority of science, causality is still assumed. The philosophical implications of quantum mechanics are important, but they do not signify a complete change in the scientific approach to nature.

From these adjustments to the original programme it is clear once again that the programme of natural science is not a straitjacket. Scientific principles are not unassailable dogmas. Researchers are above all pragmatic people. They certainly have a tendency to hold onto principles and theories that have proven successful in the past, but if new findings give reason for it, even the most fundamental principles can be amended. It would therefore not be right to equate the premises of science with any particular philosophical stance. The starting points of ‘the new science’ took shape under the influence of certain metaphysical and philosophical assumptions.
But once science had come into existence, it proved able to pursue its course without that original inspiration. New researchers who continued the work of their predecessors did not worry about the metaphysical background. The premises, and the theories and methods based on them, proved serviceable and convincing even to people with quite different convictions. Precisely that has been the great strength of science over the past three hundred years.

Science is the work of people, but scientific knowledge has to a great degree an autonomous character. Because of that autonomy, the results of scientific research do not always fit easily into prevailing value systems. Some have therefore claimed that the modern scientific conception of the world is inevitably in conflict with religion. Others have insisted that religion and science encompass different aspects of reality and therefore can never stand in each other’s way. It is impossible to give a definitive answer here. It depends how we choose to define concepts like ‘religion’ and ‘science’ – and that turns out largely to depend on the political agenda of those involved. Freethinkers who are hostile to established religions will generally introduce anti-religious elements into their definition of science, while apologists for religious belief will be careful to keep them out.

In general, however, it is possible to say that the much-discussed conflict between faith and science is usually not about science as such (nor is it always directly about faith). Those who oppose the results of scientific research are generally hostile above all to aspects of modern culture they do not like. Science serves here as a standard-bearer of modernity in general. In some ways this is understandable, because however autonomous scientific knowledge may be, the business of science is nevertheless an integral part of our culture. Natural science has come to have a colossal influence on society, not as an abstract body of knowledge that stands above the world but because of its specific aims and applications in a given historical situation. It has acquired its greatest influence since it adopted the form of technology, of knowledge that is focused not exclusively on gaining a better philosophical understanding of the world but on governing and manipulating it. We can no longer imagine our society without this form of science.
In this sense science is not alien to society, influencing things from outside, as it were. The scientific way of describing the world did not succeed without good reason. Science became great because society had a clear need for such forms of knowledge. The desire to reduce reality to what can be counted and measured did not begin with scientific research; it was part of a far broader endeavour. Bureaucrats and lawyers wanted to base what they did on clear, standardized data. The rationalization of society created a demand for experts who could justify their advice on the basis of objective criteria. In its ways of working, science is certainly a product of our culture, which is in itself sufficient reason to attribute cultural significance to scientific findings.

The character of science, in short, is paradoxical. On the one hand it is fully part of our culture; on the other it is an autonomous body of knowledge. On the one hand science pays no attention to moral and philosophical values and places itself at the service of cold bureaucratic calculation. On the other hand, for many people science still represents the quest to solve the riddles of the universe and inspires them to break through existing boundaries. The results of scientific research do not allow themselves to be controlled by bureaucrats and economists any more than by philosophers and theologians, or indeed by the researchers themselves. In view of its ambiguous and elusive character, it is no wonder that science can evoke feelings of admiration and feelings of horror, a sense of greatness and a sense of powerlessness.


FURTHER READING

This book was written on the basis of a large amount of specialized literature, both old and new, and in a variety of languages. Listing all those works would not be very helpful to the intended audience of this book. My intention here is to offer some guidance to readers who want to know more about the general subject or some specific aspect of it. Obviously, even with a focus on the topics discussed in the book, such a list can only be highly selective and somewhat arbitrary. Anyone looking for literature on a specific subject or episode is best referred to the Isis Current Bibliography, hosted by the History of Science Society and available online at https://hssonline.org/page/isiscbexplore.

An excellent reference work in one volume is J.L. Heilbron ed., The Oxford Companion to the History of Modern Science (Oxford: Oxford University Press 2003). Indispensable reference tools are the Dictionary of Scientific Biography (16 vol., New York 1970–1980) and the New Dictionary of Scientific Biography (8 vol., Farmington Hills, MI 2008) (also accessible online). Many scientists have biographies, too many to even begin mentioning here. In addition, there are websites devoted to publishing the works of many of the figures covered in this book. Mention should be made of the Newton Project (www.newtonproject.ox.ac.uk) and the Darwin Correspondence Project (www.darwinproject.ac.uk). Several museums are devoted to the history of science, and some of them do an excellent job of providing information and showing parts of their collections online. Particularly impressive is the website of the Museo Galileo in Florence: www.museogalileo.it/en/.

Recent scholarly overviews tend to be edited volumes with contributions by a host of specialists. See for instance Peter J.T. Morris and Alan Rocke eds, A Cultural History of Chemistry (6 vol., New York: Bloomsbury 2021); Jed Z. Buchwald and Robert Fox eds, The Oxford Handbook of the History of Physics (Oxford: Oxford University Press 2013); Michael Ruse ed., The Cambridge Encyclopedia of Darwin and Evolutionary Thought (Cambridge: Cambridge University Press 2013); or the various volumes of the Cambridge History of Science (8 vol., Cambridge: Cambridge University Press 2002–2020). The older Cambridge Studies in the History of Science, with one volume per author, offers shorter and more general entry-level overviews of the respective periods and fields, but the series is no longer completely up to date with recent scholarship.

Some single-author overviews of specific scientific fields are: John D. North, Cosmos: An Illustrated History of Astronomy and Cosmology (Chicago: University of Chicago Press 2008); William H. Brock, The Chemical Tree: A History of Chemistry (New York: W.W. Norton & Co. 2000); Michel Morange, A History of Biology (Princeton: Princeton University Press 2021); Paul Lawrence Farber, Finding Order in Nature: The Naturalist Tradition from Linnaeus to E.O. Wilson (Baltimore and London: Johns Hopkins University Press 2000); Kristine C. Harper, Weather by the Numbers: The Genesis of Modern Meteorology (Cambridge, MA: MIT Press 2008).

A standard textbook on ancient and medieval science is David C. Lindberg, The Beginnings of Western Science: The European Scientific Tradition in Philosophical, Religious, and Institutional Context, Prehistory to A.D. 1450 (second edition, Chicago: University of Chicago Press 2007). A useful overview of various, often conflicting approaches to the scientific revolution is offered by H. Floris Cohen, The Scientific Revolution: A Historiographical Inquiry (Chicago: University of Chicago Press 1994). Some more recent studies of the scientific revolution are David Wootton, The Invention of Science: A New History of the Scientific Revolution (London: Allen Lane 2015); and H. Floris Cohen, How
Modern Science Came into the World: Four Civilizations, One 17th-Century Breakthrough (Amsterdam: Amsterdam University Press 2010).

Specific aspects of the history of early modern science are discussed in Lawrence M. Principe, The Secrets of Alchemy (Chicago and London: University of Chicago Press 2013); Brian Ogilvie, The Science of Describing: Natural History in Renaissance Europe (Chicago: University of Chicago Press 2006); Rienk Vermij, Thinking on Earthquakes in Early Modern Europe: Firm Beliefs on Shaky Ground (Abingdon, Oxon: Routledge 2022). See for some later episodes: F.A. Stafleu, Linnaeus and the Linnaeans: The Spreading of their Ideas in Systematic Botany, 1735–1789 (Utrecht: A. Oosthoek’s Uitgeversmaatschappij N.V. 1971); Reed C. Rollins ed., ‘Linnaeus: Codes and Nomenclature in Biology. Symposium on Linnaeus and Nomenclatural Codes’, Systematic Zoology 8 (1959) 2–47; Marc Ratcliff, The Quest for the Invisible: Microscopy in the Enlightenment (Farnham and Burlington: Ashgate Publishing 2009); Janet Browne, The Secular Ark: Studies in the History of Biogeography (New Haven, CT: Yale University Press 1983); Scott L. Montgomery, The Moon and the Western Imagination (Tucson, AZ: University of Arizona Press 2001).

On social and professional aspects see the following: James McClellan, Science Reorganized: Scientific Societies in the Eighteenth Century (New York: Columbia University Press 1985); Eileen Hooper-Greenhill, Museums and the Shaping of Knowledge (London: Routledge 1992); Susan Sheets-Pyenson, Cathedrals of Science: The Development of Colonial Natural History Museums during the Late Nineteenth Century (Kingston, Ont.: McGill-Queen’s University Press 1988); Aro Velmet, Pasteur’s Empire: Bacteriology and Politics in France, its Colonies, and the World (Oxford: Oxford University Press 2020); Roy MacLeod ed., Nature and Empire: Science and the Colonial Enterprise (Chicago: University of Chicago Press 2000) (Osiris 15); Günther B. Risse, Mending Bodies, Saving Souls: A History of Hospitals (Oxford: Oxford University Press 1999); Frank A.J.L. James ed., The Development of the Laboratory: Essays on the Place of Experiment in Industrial Civilization (Basingstoke: Palgrave Macmillan 1989); Anna Reser and Leila McNeill, Forces of Nature: The Women who Changed Science (London: Frances Lincoln 2021).

For examples of industrial science see the following works: R.F. Tylecote, A History of Metallurgy (London: The Metals Society 1976, second edition 1992); Paul Israel, From Machine Shop to Industrial Laboratory: Telegraphy and the Changing Context of American Invention, 1830–1920 (Baltimore and London: Johns Hopkins University Press 1992). Similar problems are at play in agriculture: L.T.G. Theunissen, Beauty or Statistics: Practice and Science in Dutch Livestock Breeding, 1900–2000 (Toronto: University of Toronto Press 2020).

On some scientific ‘tools’: Gerard L’E. Turner, Scientific Instruments 1500–1900: An Introduction (London and Berkeley: University of California Press 1998); Laura Tilling, ‘Early Experimental Graphs’, British Journal for the History of Science 8 (1975) 193–213; Theodore M. Porter, The Rise of Statistical Thinking 1820–1900 (Princeton, NJ: Princeton University Press 2020); Terry Quinn, From Artefacts to Atoms: The BIPM and the Search for Ultimate Measurement Standards (Oxford: Oxford University Press 2012); Thomas Haigh and Paul E. Ceruzzi, A New History of Modern Computing (Cambridge, MA: MIT Press 2021); Atsushi Akera, Calculating a Natural World: Scientists, Engineers, and Computers During the Rise of U.S. Cold War Research (Cambridge, MA: MIT Press 2008).

For the big questions, see Ivano Dal Prete, On the Edge of Infinity: The Antiquity of the Earth in Medieval and Early Modern Europe (Oxford: Oxford University Press 2021); Martin J.S. Rudwick, Earth’s Deep History: How it Was Discovered and Why It Matters (Chicago: University of Chicago Press 2016); Peter Bowler, Evolution: The History of an Idea (Berkeley: University of California Press 2009; first edition 1984); Evelyn Fox Keller, The Century of the Gene (Cambridge, MA: Harvard University Press 2000). On the impact on our worldview more generally: Gary B. Ferngren ed., Science and Religion: A Historical Introduction (Baltimore: Johns Hopkins University Press 2017); Richard Holmes, The Age of Wonder: How the Romantic Genius Discovered the Beauty and Terror of Science (New York: Harper Collins 2008); Ronald L. Numbers, The Creationists: From Scientific Creationism to Intelligent Design (second edition, Cambridge, MA: Harvard University Press 2006); Naomi Oreskes and Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (New York: Bloomsbury 2010).


INDEX

aberration 108 Académie Royale des Sciences, Paris 72, 76, 104 academies 76, 84–85, 103, 133, 139–140, 170; at Berlin 103; at St Petersburg 103, 118; see also Académie Royale des Sciences; Royal Society of London accuracy 52, 105, 108, 120–121, 157, 166, 168 Agricola, Georg 38 air pump 80–81, 103–104 Alexandrian science 15–20, 43 alchemy 28–29, 113 Alphonsine tables 27 amateurs 100–101, 104–105, 109, 112, 117, 128, 140–141, 164 Ampère, André Marie 121–122 anatomy 40, 75 Arab sciences 21–23, 25–26, 28, 44 archeology 205

Archimedes 16–17, 43, 44 ‘aryan science’ 179 Aristotle (Aristotelēs) 12–14, 17–18, 20, 22–25, 33, 49, 54, 61, 66, 78, 202 Aristotelianism 22–26, 30–32, 56, 69, 74–75, 77; criticism of 31, 33, 53–54, 57–59, 65, 70 Arrot, Matthew 175 artificial intelligence 212 astrology 14, 25–26, 28, 48, 55; criticism of 26, 73 astronomy: ancient and medieval 16–18, 27; Renaissance 35, 48–55; and mechanistic philosophy 83; modern 146–147, 157, 197–201; see also parallax; observatories atoms: in Greek science 12; in modern science 220–221, 223, 225, 228–229 atomic bomb 170, 230 Avicenna (Ibn Sina) 21, 28

Babylonian science 17, 25 Bacon, Francis 58–59 Baekeland, Leo Hendrik 163 Baeyer, Adolf von 163 ballistics 45, 91–92 barometer 79–81 Bassi, Laura 140 Becquerel, Henri 142 Beeckman, Isaac 72 Bell Labs 171 Bergman, Torbern 115 Bernard, Claude 148–149 Berzelius, Jöns Jakob 116 Bessel, Friedrich Wilhelm 108–109 Bible 33–34, 189–192, 238–239; chronology 45, 189–190; creation 189, 192, 238; Flood 191; miracles 93–94, 191; see also creationism Big Bang theory 200 Bildung 138, 177 biochemistry 150, 214 biogeography 130, 131 biology 139, 158–159, 173, 206–208, 215; molecular 214–215; sociobiology 217; see also botany; evolution; microbiology; natural history; zoology black holes 200 Blanchard, Raphaël 146 Bohr, Niels 228–229 Boltzmann, Ludwig 160 Borelli, Giovanni Alphonso 83 botany 38–39, 79, 109–111, 145–146, 203 botanical gardens 133, 134, 170; Jardin du Roi, Paris 134; Royal Gardens, Kew 133 Boyle, Robert 80–81 Brahe, see Tycho Brahe British Association for the Advancement of Science 167–168

Buchner, Eduard 150 Bureau International des Poids et Mesures 167–168 Burnet, Thomas 191 cabinets of curiosities 38, 46, 59, 79 Candolle, Alphonse de 146 Carnegie, Andrew 169 Carnegie Institution of Washington for Fundamental and Scientific Research 169 Carothers, Wallace Hume 171 Cassini, Jean-Dominique 77 Catholicism, see Church, Catholic celestial spheres 14, 24, 51, 54 cell theory 149–150 chain of being 70, 203 chemistry 75, 112–116, 138–139, 143–145; physical chemistry 152, 154–155; see also alchemy Church, Catholic 21, 32, 67–69, 191–192; Galileo and 67–68; see also Jesuits Churches, Protestant: on motion of the earth 92; Reformation 33–34 classification 110–112, 144–147, 152, 176, 194, 199; cladistics 176 clocks 83, 105, 108, 151 Clusius, Carolus (Charles de L’Escluse) 39 collections 79, 109, 133; see also botanical gardens; cabinets of curiosities; museums colonialism 132–134, 207 combustion engine 163 comets 54, 56, 93, 117–118, 191, 218 computers, computer simulation 174–176, 212 Conférence Générale des Poids et Mesures 168 conferences 145–147, 194

Cook, James 133 Copernicus, Nicolaus 33, 49–51, 61, 64, 68; see also heliocentric system Cosmas Indicopleustes (Kosmas Indikopleustos) 24–25 cosmology 13–14, 198–201, 227 Coulomb, Charles 120 court, science at 45–46, 48 creationism 239 Crick, Francis 214 Curie, Pierre 142 Curie-Skłodowska, Marie 141–142 Cuvier, Georges 134, 203 Dalton, John 221 Darwin, Charles 134, 195, 205–209; see also evolution Darwinism: social Darwinism 207, 215; neo-Darwinist synthesis 214–215; see also evolution Descartes, René 69–75, 77–78, 82–86, 88, 90, 93, 180; see also mechanistic philosophy Desmarest, Nicolas 193 diffraction grating 109 Dioscorides (Dioskoridēs) 20, 39 DNA (Deoxyribonucleic acid) 213–215 doctors, see physicians Dodonaeus, Rembertus (Rembert Dodoens) 39 Drebbel, Cornelis 46 Dubois, Eugène 209 ecology 131–132 Eddington, Arthur 227 Edison, Thomas Alva 163 education 99, 124–125, 133, 141, 143, 160, 239; medical 126–127; see also schools; universities

Einstein, Albert 179, 226–228, 229 electrostatic generator 104, 120 electricity 104, 119–120, 151, 165 electrodynamics 121–122, 223 electromagnetism 151, 154, 222–223 elementary particles 227, 229–231; see also atoms elements: in Greek philosophy 13, 71; in modern chemistry 115–116, 221 embryology 208 energy 221–223 engineers 16–17, 43–46, 62, 165, 167, 177 Epicurus (Epikouros) 12 ether (world ether) 121, 222–223, 227 ethology 216–217 Euclid (Eukleidēs) 16 eugenics 207 Euler, Leonhard 118 evolution 195, 204–210, 212–216, 237; human 209–210; resistance to 195, 207, 238–239; see also Darwinism expanding universe 200 expeditions (scientific) 132–133; Challenger expedition 135; Meteor expedition 135 experiments 77, 79–81, 148–149, 150–153, 159 extraterrestrial life 65, 201 Fahrenheit, Daniel Gabriel 106–108 Faraday, Michael 151 Flammarion, Camille 183–184, 187 Fontenelle, Bernard Le Bovier de 72–73 fossils 191, 208–209 French Revolution 125, 168 Fraunhofer, Joseph 109 Fresnel, Augustin Jean 121–122, 228

Freud, Sigmund 211–212 funding 169–172, 178–179, 230 Galen (Galenus) 20, 23–24, 28, 33, 40; criticism of 40–42 Galileo (Galileo Galilei) 53, 62–68, 79, 90 gases 114, 116 Gassendi, Pierre 72 Gauss, Carl Friedrich 157 genes 213 genetics 176, 178–179, 210, 213–215 geology 193–197, 206 Gessner, Konrad 38 Gilbert, William 33, 47–48, 55, 58 graphs 155–156, 174 gravity 88–89, 218, 231 Grew, Nehemiah 78 Guericke, Otto von 80 Guldberg, Cato Maximilian 154 Guyton de Morveau, Louis Bernard 115–116 Haeckel, Ernst 184 Hahn, Otto 142 Halley, Edmund 87–88, 117–118, 131 harmonics 15–18, 55 Harvey, William 41–42, 57, 74, 78 heaven, as opposed to earth 13, 35, 61, 65; see also celestial spheres Heaviside, Oliver 154 Heisenberg, Werner 230 heliocentric system 50; reception of 51–52; opposition to 67–68, 92, 95, 190 hell 190, 191 Helmholtz, Hermann 222 Hennig, Willi 176 heredity 212–216

Hermeticism 34–35, 47 Herschel, Caroline 140 Herschel, William (Friedrich Wilhelm) 140, 197–198 Hertz, Heinrich 222 Hevelius, Johannes 77, 140 Hippocrates (Hippokratēs) 19, 57 Hobbes, Thomas 72 Hooke, Robert 78, 80, 87 hospitals 126–128; Vienna General Hospital 127 Hubble, Edwin 200 humanism 32–33, 35, 43, 48, 56; criticism of 70 Humboldt, Alexander von 130–132, 184 Humboldt, Wilhelm von 138, 176 humours 19, 20, 26, 41 Hutton, James 193 Huygens, Christiaan 77, 82–83, 89, 121 hydrostatics 17, 44, 80 industry 100, 162–166, 171, 173, 178; AT&T 171; E.I. du Pont de Nemours 171; General Electric Company 171 instruments 62–63, 77, 103–109, 120, 129, 151, 166, 168–169; instrument makers 104–105; see also air pump; barometer; clock; diffraction grating; electrostatic generator; microscope; telescope; thermometer integrity 178–180 International Electrotechnical Commission (IEC) 167 inventors 162–163 Jesuits 64, 68 Joule, James Prescott 222

journals 147 Jussieu, Antoine-Laurent de 112, 203 Kaiser Wilhelm Society 171–172 Kant, Immanuel 192 Kekulé, August 145 Kepler, Johannes 52–55, 58, 82 laboratories 143–144, 166, 169, 171, 180, 211 Lactantius (Lucius Caecilius Firmianus) 24 Lamarck, Jean-Baptiste de 204–205 Lambert, Johann Heinrich 155 Laplace, Pierre-Simon de 120–122, 157, 192, 219 Lavoisier, Antoine Laurent 115–116, 140 laws of nature 71–72, 88–90, 94, 156–157, 174, 180 Leakey, Louis 209 Leakey, Mary 209 Leavitt, Henrietta 199 Leeuwenhoek, Antoni van 78 Leibniz, Gottfried Wilhelm 85 Lenoir, Etienne 163 Letronne, Antoine-Jean 24–25 Le Verrier, Urbain Jean Joseph 183–184 Liebig, Justus von 143, 163–164 Linnaeus, Carl 110–111, 115, 133, 144, 202 life 23–24, 74, 148–150, 153, 204, 214; see also soul Lobelius (de L’Obel), Matthias 39 logic 12, 22, 27 Lorenz, Konrad 217 Lyell, Charles 194 Lysenko, Trofim Denisovich 178–179

magic, see natural magic magnetism 47–48, 122; geomagnetism 47, 129, 131 Magnus, Heinrich Gustav 144 Malebranche, Nicolas 72–73, 85 Malpighi, Marcello 78 Malus, Etienne 121 Manhattan project 171–172 maps 131; weather maps 175 Martens, Adolf 165 Massachusetts Institute of Technology 171 mathematics: ancient 14–16; mixed 16–17, 27; medieval 21, 27; Renaissance 35, 42–45, 58; in seventeenth century 70, 76, 86; modern 139; see also statistics mathematization 82–83, 118–123, 153–160, 173–176, 215 Max Planck Society 172 Maxwell, James Clerk 154, 159, 222 measuring stations 129, 131 mechanics 17, 66, 82–83, 86, 88, 91–92, 118 mechanistic philosophy 72–81, 82–84, 89–90; criticism of 85–87, 92–94 medicine: ancient 18–20; medieval 21, 25–26, 27–28; Renaissance 33; and new science 74, 98; modern 126–128, 158; and popular belief 234, 236–237; see also anatomy; hospitals; humours; physicians; physiology Meitner, Lise 142 Mercator, Gerardus (Gerard Kremer) 45 metals 114 metallurgy 164–165 meteorology 14, 128–129, 130–131, 157, 175–176

Metre Convention 167–168 microbiology 146–170 microscopes, microscopy 67, 78, 149 military technology 43, 171–172, 174 Milky Way 198–199 mineralogy 115, 193 miracles 93–94, 191 Monardes, Nicolás 39 moon 64–65; motion of 119 Morgan, Thomas Hunt 213 Mosaic physics 34 motion 13, 45, 66; inertial 66 Musaeum (Mouseion), Alexandria 15, 18, 20, 40 museums 133–134, 170; Museum of Natural History, Paris 134 Napoleon Bonaparte 132–133, 219 National Aeronautics and Space Administration (NASA) 172 natural history 35–39, 58, 76, 79, 109–112, 133–134 natural magic 34–35, 46–47, 62–63 natural philosophy: ancient 11–12, 16, 21–22; medieval 26–27; Renaissance 56–60; and modern science 70, 98, 186 nature 11, 19 navigation 42, 99, 129, 132 nebular hypothesis 192 neo-Platonism 12, 34 neurology 211 Newton, Isaac 53, 84–91, 113, 122, 218 Newtonianism 90, 94–95, 117–119, 122, 219–220 Nightingale, Florence 158 Nobel, Alfred 169 Nobel prize 142, 169, 178 nomenclature 144–147, 194; binomial 111

observatories, astronomical 151, 170; Harvard College Observatory 141, 199; Paris Observatory 183; Tycho’s 51–52 oceanography 134–135, 172 Oersted, Hans Christian 122 optics 16, 18, 21, 53, 82, 86, 121 Osmond, Floris 165 Ostwald, Wilhelm 223 Otto, Nicolaus August 163 Owen, Richard 134 paleontology 134, 208–210 Paracelsus (Theophrastus von Hohenheim) 57, 59 parallax (annual) 108–109, 151–152, 198 Pascal, Blaise 80 Pasteur, Louis 150, 170 Pasteur Institute 170, 172 patent law 163 Pavlov, Ivan Petrovich 177, 211 peer review 180 Pena, Jean 54 philosophy 11–12, 20, 23, 35, 191–193, 201, 203–205, 211, 216, 241; see also natural philosophy photography 151–152, 199 physicians 27–28, 33, 98, 126–128 physiology 41–42, 74, 75, 139, 148–150 physics: medieval 22, 26; modern 121, 139, 144, 153–154, 161; nuclear 171–172, 225–231; see also natural philosophy; quantum mechanics; relativity; thermodynamics Physikalisch-Technische Reichsanstalt, Berlin 170–171 physique amusante 103–104, 119 Planck, Max 228

planetary system: according to Aristotle 14, 17; according to Copernicus 49–51; according to Kepler 53; according to Ptolemy 18; according to Tycho 52; see also heliocentric system planets 13–14, 17–18, 64, 83, 87; Jupiter 64, 77; Neptune 184; Saturn 77; Uranus 198; Venus 64; see also planetary system plastics 163, 171 Plato (Platōn) 12, 15, 17, 20, 22, 24, 34, 56 Pliny (Gaius Plinius Secundus) 36–37 Plotinus (Plotinos) 12, 34, 56 probability theory 157–158 professionalization 140, 141, 160, 201, 220 professors 139, 143, 147 Protestantism, see Churches, Protestant psychology 211–212 Ptolemy (Klaudios Ptolemaios) 17–18, 25, 27, 48–50 Purkyne, Jan Evangelista 144 Pythagoras 11, 15, 56 quantum mechanics 226, 228–230, 237, 241 Quetelet, Adolphe 158 radioactivity 142, 195–196, 224–225 race 179, 203–204, 207, 215 rationalization 99–100, 125, 128 reductionism 150, 153, 211, 216, 235, 237 relativity, theory of 226–228, 237 religion 29, 33–35, 58, 67, 74, 86, 92–95, 185–187, 218–219, 232, 236, 242; see also Bible; Church(es); theology

Rockefeller Institute for Medical Research 169 romanticism 122, 219, 236–237 Rømer, Ole 77 Röntgen, Wilhelm Conrad 224 Royal College of Surgeons 134 Royal Geographical Society 133 Royal Institution 151 Royal Society of London 76, 81, 104, 135 Rutherford, Ernest 225 Santorio (Sanctorius), Santorio 41, 63 Scaliger, Joseph Justus 189 Scheiner, Christoph 64 scientific revolution 3, 5–6, 84, 95 Schiller, Friedrich von 2 Schrödinger, Erwin 229–230 Schwann, Theodor 149 scholasticism 27, 33 schools: Bergakademie (Freiberg mining school) 125, 193; École des Mines, Paris 125; École des Ponts et Chaussées, Paris 125; École Polytechnique, Paris 120; see also education Siemens, Werner von 164, 170–171 societies: international 146–147, 165; Astronomische Gesellschaft 147; British Association for the Advancement of Science 167–168; International Astronomical Union 147; Royal Geographical Society 133; see also academies Solvay, Ernest 169 Sorby, Henry C. 164–165 soul 23–24, 34, 74, 78, 202, 209 space travel 172 Spallanzani, Lazzaro 148 spectroscopy 109, 152, 199–200, 224 Spencer, Herbert 206

Spinoza, Baruch 85, 93–94 spiritualism 237 Sputnik 172 statistics 157–160, 215 Stevin, Simon 44, 46 Stoicism 12, 56 sun 190, 195–196; spectrum 109; sunspots 64 Swinden, Tobias 190 Tartaglia, Niccolò 44–45 technology 98–99, 162–165 telescope 61–63, 67, 77, 86, 104, 198 theology 22, 26, 27, 32, 69 Theophrastus (Theophrastos) 39 ‘theory of everything’ 231, 241 thermodynamics 159–160, 195, 221–222 thermometer 63, 105–108 Thomson, William (Lord Kelvin) 195, 222–223 Torricelli, Evangelista 79–80 Trembley, Abraham 148 Turing, Alan 212 Tycho Brahe 51–52, 54 units of measurement 125–126, 166–168; CGS system 167 universities: early 22, 32, 44; nineteenth century 137–140, 143–144, 166, 169; see also professors

vacuum pump, see air pump Van’t Hoff, Jacobus Henricus 154 Vesalius, Andreas (Andries van Wesel) 33, 40–41 Vesuvius Observatory (Osservatorio Vesuviano) 128 Vitruvius 43 vivisection 149 Voetius, Gisbertus 92–93 volcanoes 128, 190, 193, 197 vulcanology 128 Waage, Peter 154 Wallace, Alfred Russel 145 Warming, Johannes Eugenius Bülow 132 Watson, James 214 Weber, Wilhelm 144, 153 Wegener, Alfred 197 Werner, Abraham Gottlob 193 Wien, Wilhelm 223 Wilhelmson, Robert 175 women 140–142, 178, 234 Woodward, John 191 World War I 147 World War II 171–172, 215, 230 Wren, Christopher 87 Wundt, Wilhelm 139, 211 X-rays 224; X-ray diffraction 152 Zeno (Zēnōn) 12 zoology 38, 79, 111, 146
