History, Philosophy and Theory of the Life Sciences
Lorenzo Baravalle Luciana Zaterka Editors
Life and Evolution
Latin American Essays on the History and Philosophy of Biology
History, Philosophy and Theory of the Life Sciences Volume 26
Editor-in-Chief
Charles T. Wolfe, Sarton Centre for History of Science, Ghent University, Ghent, Oost-Vlaanderen, Belgium
Philippe Huneman, Institut d'Histoire et de Philosophie des Sciences et des Techniques (IHPST, CNRS/Université Paris I Panthéon-Sorbonne), Paris, France
Thomas A. C. Reydon, Institute of Philosophy, Leibniz Universität Hannover, Hannover, Germany

Editorial Board
Marshall Abrams, University of Alabama, Birmingham, AL, USA
History, Philosophy and Theory of the Life Sciences is a space for dialogue between life scientists, philosophers and historians – welcoming essays about the principles and domains of cutting-edge research in the life sciences, novel ways of tackling philosophical issues raised by the life sciences, as well as original research about the history of methods, ideas and tools, which constitute the genealogy of our current ways of understanding living phenomena.

The series is interested in receiving book proposals that:
• are aimed at an academic audience of graduate level and up
• combine historical and/or philosophical and/or theoretical studies with work from disciplines within the life sciences broadly conceived, including (but not limited to) the following areas:
  • Anatomy & Physiology
  • Behavioral Biology
  • Biochemistry
  • Bioscience and Society
  • Cell Biology
  • Conservation Biology
  • Developmental Biology
  • Ecology
  • Evolution & Diversity of Life
  • Genetics, Genomics & Disease
  • Genetics & Molecular Biology
  • Immunology & Medicine
  • Microbiology
  • Neuroscience
  • Plant Science
  • Psychiatry & Psychology
  • Structural Biology
  • Systems Biology
  • Systematic Biology, Phylogeny Reconstruction & Classification
  • Virology

The series editors aim to make a first decision within 1 month of submission. In case of a positive first decision, the work will be provisionally contracted: the final decision about publication will depend upon the result of the anonymous peer review of the complete manuscript. The series editors aim to have the work peer-reviewed within 3 months after submission of the complete manuscript. The series editors discourage the submission of manuscripts that contain reprints of previously published material and of manuscripts that are below 150 printed pages (75,000 words).

For inquiries and submission of proposals prospective authors can contact one of the editors:
Charles T. Wolfe: [email protected]
Philippe Huneman: [email protected]
Thomas A.C. Reydon: [email protected]

More information about this series at http://www.springer.com/series/8916
Lorenzo Baravalle • Luciana Zaterka Editors
Life and Evolution Latin American Essays on the History and Philosophy of Biology
Editors
Lorenzo Baravalle
Center of Natural and Human Sciences
Federal University of ABC
Bairro Bangú, Santo André, SP, Brazil

Luciana Zaterka
Center of Natural and Human Sciences
Federal University of ABC
Bairro Bangú, Santo André, SP, Brazil
ISSN 2211-1948    ISSN 2211-1956 (electronic)
History, Philosophy and Theory of the Life Sciences
ISBN 978-3-030-39588-9    ISBN 978-3-030-39589-6 (eBook)
https://doi.org/10.1007/978-3-030-39589-6

© Springer Nature Switzerland AG 2020, Corrected Publication 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
A central theme in the history and philosophy of biology is that life should be viewed historically and that biological entities and kinds should be understood as historically contingent. For example, many historians and philosophers, as well as biologists, have argued that species are historical entities. On this view, species are made up of organisms connected directly or indirectly by parent/offspring relations. A species is a lineage of organisms. What makes an organism a member of a species is its historical continuity with other organisms in the lineage that makes up the species. Organisms within a species typically share characteristics and interact with other organisms in the species. Many philosophers, following the lead of David Hull, argue that this view can be applied to academic disciplines as well. What makes a scholar a philosopher of biology is historical continuity with a lineage and involves interaction with other scholars in the lineage. Those who take this perspective acknowledge that members of a discipline share interests and employ similar methods for advancing those interests. But they would also emphasize that diversity within a discipline, like diversity within a species, can be a driver of intellectual change.

This perspective on disciplines points to the importance of this book. The history and philosophy of biology is a young field. One might say it started taking the form of a discipline with the work of David Hull and Michael Ruse in the late 1960s. William Wimsatt, Elliott Sober, Philip Kitcher, Robert Brandon, John Beatty, and others quickly joined them. Hull and Ruse contributed through their writing, and also through their mentorship and organizational efforts. They encouraged and supported new people to join the field. They helped initiate a biennial conference, which led to an academic society known as the International Society for the History, Philosophy, and Social Studies of Biology (ISHPSSB). This society is truly global, holding its conferences in Europe, North, Central, and South America, and Australia. Ruse launched a new journal, Biology and Philosophy, which became highly respected in Australia, Europe, and North America. More recently, an open access journal, Philosophy, Theory, and Practice in Biology, has been established. And these
are just some of the specialist journals in the history and philosophy of biology. Work in this discipline is also published in general philosophy of science journals, including Philosophy of Science and the British Journal for the Philosophy of Science, and philosophy journals, including The Journal of Philosophy and Synthese.

The history and philosophy of biology is a diverse and vibrant discipline, in large part, because it is a global discipline. Scholars from many different countries and cultures share and draw upon one another’s research by presenting and listening to papers at international conferences and publishing and reading work in international journals (such as those mentioned above) and book series including series published by Cambridge University Press (first edited by Ruse) and a series published by the University of Chicago Press (once edited by Hull).

Although history and philosophy of biology is a global discipline, communication is typically in English. For example, all the venues mentioned here are conducted exclusively in the English language. Papers at the ISHPSSB conference are presented in English regardless of where the conference is held, including Norway, France, Mexico, and Brazil. The journals mentioned above are restricted to English. Many in the field follow work published in their own language and also in English, but not work published in other languages.

Meanwhile, historians and philosophers of biology in Latin America have been publishing a large portion of their research in Spanish. Many non-Spanish speakers know little about the Latin American literature. This was true of me until I read this book, which demonstrates that Latin American research in the history and philosophy of biology is rich and contributes much to the discipline. It engages the same kinds of intellectual problems, employs the same kinds of methods, and achieves the same standards of excellence as work published in English.

For example, Gustavo Caponi’s chapter seeks a naturalized account of teleology. Caponi proceeds by analyzing the concepts of biological function, fitness, and adaptation. He proposes that each of these concepts is a specification of a more general concept: biological function is a biological specification of function, fitness is a biological specification of effectiveness, and adaptation is a biological specification of design. He argues that elucidating these conceptual links provides the basis for a naturalized understanding of teleology. This chapter addresses a problem central to the history and philosophy of biology. It employs rigorous methods of analysis. And it proposes a novel and intriguing account of how to solve the problem of naturalizing teleology. Caponi cites and draws upon work published in English as well as work published in Spanish. He is part of the lineage of the discipline of history and philosophy of biology. This is true of the authors of the other chapters in this book as well. Unfortunately, many of us do not know about their work. This book should change that.

The literature in history and philosophy of biology is rich and diverse because it is global. Many chapters in this book take up intellectual problems that have already attracted a lot of attention in English venues. But other chapters take up problems that are less familiar. For example, Charbel Niño El-Hani and Nei Nunes Neto’s chapter addresses the problem of how abiotic geochemical cycles transitioned into life-constrained biogeochemical cycles.
By proposing a sophisticated account of
how ecological systems first developed, this chapter enriches our understanding of the features that make ecological systems distinctive. This chapter will familiarize readers with an extensive Spanish literature on the organizational approach to understanding ecological systems.

If we view the discipline of the history and philosophy of biology as being defined by a disciplinary history rather than by a disciplinary essence of subject matter, it is apparent that this book marks an important moment in the discipline. It is one of the first books written in English by a group of Latin American historians and philosophers of biology. The lineage of scholars in Latin America has been conducting excellent research. They have been contributing to the sciences and societies of Latin America. But those of us who do not read Spanish have been largely ignorant of their work. This book can inform us of the important research that its authors are conducting. In addition, by drawing upon the work of their colleagues, this book also provides a window into the extensive Latin American literature in the history and philosophy of biology.

C. Kenneth Waters
Canada Research Chair in Logic and Philosophy of Science, Professor of Philosophy, Department of Philosophy, University of Calgary, Calgary, Canada
Acknowledgments
We are extremely grateful to all our authors for their work and the effort that they put into the realization of this book. Along with them, we thank José Díez, Marc Artiga, and two anonymous reviewers for the feedback that they provided during the editing process, and Ken Waters for having kindly agreed to write a Foreword for the book. While editing the volume, we have been constantly supported by the Center of Natural and Human Sciences of the Federal University of ABC (Brazil) and by Springer staff, especially Bruno Fiuza and Menas Donald Kiras. Finally, we thank the editors of the series History, Philosophy and Theory of the Life Sciences for the opportunity they gave us to publish this book.

Lorenzo Baravalle
Luciana Zaterka
Contents
1 Introduction to Life and Evolution
   Lorenzo Baravalle and Luciana Zaterka
2 Weldon’s Unpublished Manuscript: An Attempt at Reconciliation Between Mendelism and Biometry?
   Lilian Al-Chueyr Pereira Martins
3 Blood, Transfusions, and Longevity
   Ronei Clécio Mocellin and Luciana Zaterka
4 Performative Epistemology and the Philosophy of Experimental Biology: A Synoptic Overview
   Maurizio Esposito and Gabriel Vallejos Baccelliere
5 Life on Earth Is Not a Passenger, but a Driver: Explaining the Transition from a Physicochemical to a Life-Constrained World from an Organizational Perspective
   Charbel Niño El-Hani and Nei Nunes-Neto
6 Cooperation and the Gradual Emergence of Life and Teleonomy
   Alejandro Rosas and Juan Diego Morales
7 Evolutionary Debunking Arguments and Moral Realism
   Maximiliano Martínez, Alejandro Mosqueda, and Jorge Oseguera
8 The Darwinian Naturalization of Teleology
   Gustavo Caponi
9 Drift as a Force of Evolution: A Manipulationist Account
   Lorenzo Baravalle and Davide Vecchi
10 Laws, Models, and Theories in Biology: A Unifying Interpretation
   Pablo Lorenzano and Martín Andrés Díaz
11 Systemic Analysis and Functional Explanation: Structure and Limitations
   Andrea Soledad Olmos, Ariel Jonathan Roffé, and Santiago Ginnobili
Correction to: Laws, Models, and Theories in Biology: A Unifying Interpretation
Index
Contributors
Gabriel Vallejos Baccelliere Department of Biology, Faculty of Sciences, University of Chile, Ñuñoa, Region Metropolitana, Chile
Lorenzo Baravalle Center of Natural and Human Sciences, Federal University of ABC, Bairro Bangú, Santo André, SP, Brazil
Gustavo Caponi Department of Philosophy, Federal University of Santa Catarina, Florianópolis, SC, Brazil
Martín Andrés Díaz Center of Studies in Philosophy and History of Science, National University of Quilmes, Bernal, Buenos Aires, Argentina
Charbel Niño El-Hani Institute of Biology, Federal University of Bahia and National Institute of Science and Technology in Interdisciplinary and Transdisciplinary Studies in Ecology and Evolution (INCT IN-TREE), Ondina, Salvador, BA, Brazil
Maurizio Esposito Department of Philosophy, University of Santiago de Chile, Santiago, Chile
Santiago Ginnobili Center of Studies in Philosophy and History of Science, National University of Quilmes, Bernal, Buenos Aires, Argentina
Pablo Lorenzano Center of Studies in Philosophy and History of Science, National University of Quilmes, Bernal, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), National University of Quilmes, Bernal, Buenos Aires, Argentina
Maximiliano Martínez Departamento de Humanidades, Universidad Autónoma Metropolitana, Unidad Cuajimalpa, Mexico City, Mexico
Lilian Al-Chueyr Pereira Martins Department of Biology, Faculty of Philosophy, Sciences and Linguistics of Ribeirão Preto, University of São Paulo, Bairro Monte Alegre, Ribeirão Preto, SP, Brazil
Ronei Clécio Mocellin Department of Philosophy, Federal University of Paraná, Centro, Curitiba, PR, Brazil
Juan Diego Morales Philosophy Program, Universidad de Cartagena, Cartagena, Colombia
Alejandro Mosqueda Posgrado en Ciencias Sociales y Humanidades, Universidad Autónoma Metropolitana, Unidad Cuajimalpa, Mexico City, Mexico
Nei Nunes-Neto Faculty of Biological and Environmental Sciences, Federal University of Grande Dourados and National Institute of Science and Technology in Interdisciplinary and Transdisciplinary Studies in Ecology and Evolution (INCT IN-TREE), Dourados, MS, Brazil
Andrea Soledad Olmos Center of Studies in Philosophy and History of Science, National University of Quilmes, Bernal, Buenos Aires, Argentina
Jorge Oseguera Philosophy Department, Florida State University, Tallahassee, FL, USA
Ariel Jonathan Roffé Center of Studies in Philosophy and History of Science, National University of Quilmes, Bernal, Buenos Aires, Argentina
Alejandro Rosas Philosophy Department, Universidad Nacional de Colombia, Bogotá, Colombia
Davide Vecchi Centro de Filosofia das Ciências, Faculdade de Ciências, Universidade de Lisboa, Lisbon, Portugal
Luciana Zaterka Center of Natural and Human Sciences, Federal University of ABC, Bairro Bangú, Santo André, SP, Brazil
Getting to Know the Contributors
Lorenzo Baravalle is interested in the attempts to generalize evolutionary theory to non-strictly biological domains, such as computation, epistemology, and, more specifically, culture. He has published on topics related to the theoretical structure of evolutionary theory and evolutionary explanations. He obtained his PhD in philosophy from the University of Barcelona and the University Rovira i Virgili (Spain) in 2010, and he is currently a faculty member in the Center of Natural and Human Sciences of the Federal University of ABC (UFABC, Brazil) and in the Department of History and Philosophy of Sciences of the Faculty of Sciences of the University of Lisbon (Portugal). He coordinates a project on the structure of cultural evolutionary theory, financially supported by the Brazilian National Council for Scientific and Technological Development (CNPQ), and collaborates in a project on the epistemic role, place, and meaning of manipulation and intervention in the life sciences (broadly conceived) funded by the National Fund for Scientific and Technological Development of Chile.

Gustavo Caponi has been studying philosophy and history of biology for more than 25 years. As far as philosophy of biology is concerned, most of his work focused on evolutionary biology, and the subjects to which he devoted himself cover most of the classical questions of that domain of philosophy of science. In the case of the epistemological history of biology, his interests led him to work on naturalists such as Buffon, Lamarck, Cuvier, Geoffroy Saint-Hilaire, Darwin, and Ameghino. However, he also dealt with Claude Bernard. The axis of his work as a historian has always been to understand the Darwinian revolution.

Martín Andrés Díaz is a biologist and specialist in philosophy and history of science. In the field of biology, he works in Antarctic environmental management, carrying out research and management activities to minimize the environmental impact of the activities carried out in the Antarctic continent. He is also dedicated to the environmental management of the mountainous areas of Argentina. One of the axes of his work is the application of the complex systems approach to the interaction between climate and the distribution of organisms. In philosophy of science, he
works on the reconstruction and analysis of ecological theories, especially on the theories of species richness gradients and of population dynamics. Finally, he works as a university professor in the field of environmental health.

Charbel Niño El-Hani is full professor of history, philosophy, and biology teaching at the Institute of Biology, Federal University of Bahia, Brazil, and a level 1-B research productivity grantee of the Brazilian National Council for Scientific and Technological Development (CNPq). He is affiliated with the Graduate Studies Programs in History, Philosophy, and Science Teaching (Federal University of Bahia and State University of Feira de Santana), in Ecology and Biomonitoring (Federal University of Bahia), and in Genetics and Biodiversity (Federal University of Bahia). He coordinates the History, Philosophy, and Biology Teaching Lab at UFBA and the National Institute of Science and Technology in Interdisciplinary and Transdisciplinary Studies in Ecology and Evolution (INCT IN-TREE), funded by the Brazilian agencies CNPq and CAPES, gathering 300 researchers from different fields. His research interests are in science education research, philosophy of biology, biosemiotics, ecology, and animal behavior. He is a member of editorial boards of Brazilian and international journals in science education and philosophy of biology and the book review editor of Science & Education. He has published more than 150 papers in peer-reviewed journals and 6 books over his career.

Maurizio Esposito is mainly interested in the history and philosophy of biology. He has published a monograph on the history of organismal biology and various articles exploring the relation between the life sciences and society. He is also interested in the epistemology of natural science and the connections across scientific practices, epistemic values, and theories. He previously worked at UNAM (Mexico) after obtaining his PhD from the University of Leeds (UK) in 2012. He is a faculty member in the Department of Philosophy at the University of Santiago (Chile) and is currently coordinating a project on the epistemic role, place, and meaning of manipulation and intervention in the life sciences (broadly conceived) funded by the National Fund for Scientific and Technological Development of Chile.

Santiago Ginnobili has a PhD in philosophy from the Universidad de Buenos Aires. He is professor at the University of Buenos Aires and a fellow researcher at Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Argentina. His area of expertise is philosophy of science and philosophy of biology. His metatheoretical research about the theory of natural selection and functional biology has been published in the book La teoría de la selección natural – Una exploración metacientífica. He has also published several papers in specialized journals about this subject. His analytical research interests are combined with historical matters: he has dedicated effort to explicating not only the fine structure of biological theories – appealing mainly to the tools of metatheoretical structuralism – but also their historical origin and how they have evolved in the history of science.
Pablo Lorenzano is a philosopher specialized in history and philosophy of science (HPS), in particular of biology. He is known for his philosophical (synchronic and diachronic) analyses of empirical theories by means of metatheoretical structuralism and for his historiographical analysis of the origins of genetics. Having a PhD from the Free University of Berlin, he has published the book Geschichte und Struktur der klassischen Genetik (1995) and edited History and Philosophy of Life Sciences in the South Cone (Lorenzano, Martins, and Regner (eds.), London: College Publications, 2013), among others. He was chair of the Joint Commission of the International Union of History and Philosophy of Science and Technology. He is full professor and director of the Center of Studies in Philosophy and History of Science (CEFHIC), National University of Quilmes, and main researcher of the National Scientific and Technical Research Council (CONICET), Argentina.

Maximiliano Martínez is a philosopher who has applied conceptual and factual analysis to some problems and debates in the philosophy of biology. For example, he proposes a reconceptualization of the concept of “natural selection” through the notion of “multilevel causation,” in order to capture the creative evolutionary causal role of selection. He also defends the use of a multilevel causal framework for a better understanding of several complex phenomena studied by the extended synthesis, such as development, niche construction, morphogenesis, etc. Recently, he has been working on the debate between realism and antirealism in moral theory. He argues that a factual analysis of altruism and cooperation supports a modest naturalistic realism.

Lilian Al-Chueyr Pereira Martins is a researcher on the history and philosophy of the life sciences and their epistemic interfaces, including science education. She has devoted herself so far mainly to the study of spontaneous generation since antiquity, evolutionary theories in the nineteenth and twentieth centuries, and classical genetics. First president of the Brazilian Association for History and Philosophy of Biology (ABFHiB) and author of the book Lamarck’s Theory on Animal Progression (A teoria da progressão dos animais de Lamarck) (2007), she is currently one of the editors of the journal Philosophy and History of Biology (Filosofia e História da Biologia).

Ronei Clécio Mocellin has been interested in the philosophy and history of chemistry. He obtained his PhD in philosophy at the University of Paris X, under the guidance of Bernadette Bensaude-Vincent (2009), and was a postdoctoral fellow (2012–2014) in the Department of Philosophy of the University of São Paulo (Brazil). He has published on topics related to the existence of a “chemical style of reasoning” and about the sociocultural capillarity of chemical materials. Currently, he is associate professor in the Department of Philosophy of the Federal University of Paraná (Brazil).
Juan Diego Morales is professor and researcher at the Universidad de Cartagena, Colombia. His latest work has focused on the problem of mental causation and physicalism. He also does research on metaphysics, epistemology, the later Wittgenstein, and the relation between science and spirituality. In his 2018 book The Emergence of Mind in a Physical World, he develops philosophical and scientific arguments claiming that mind and other objects of the social sciences are emergent, that is, macro-physical phenomena. In 2017, he won first place in the Concurso de Ensaios Categoria Sênior sobre o Problema Mente-Cérebro da Universidade Federal de Juiz de Fora, Brazil, with the essay “Mental Causation as Emergent Causation.” Currently, his research focuses on the consequences of mental embodiment for social sciences, ethics, and political theory.

Alejandro Mosqueda works at the intersections between ethics, philosophy of action, and philosophy of law. His PhD dissertation analyzes the role of excuses and justifications in the context of attributions of responsibility, considering the importance of understanding what it means to cause damages through negligence, recklessness, inadvertence, and accident and how each of these cases affects the agent’s responsibility. He is currently a postdoctoral researcher at the Universidad Autónoma Metropolitana Unidad Cuajimalpa (UAM-C), working on slippery slope arguments that are commonly used in discussions about the decriminalization of abortion.

Nei Nunes-Neto is professor of epistemology of science and science education at the School of Biological and Environmental Sciences, Federal University of Grande Dourados (UFGD), Brazil, and is a researcher of the National Institute of Science and Technology in Interdisciplinary and Transdisciplinary Studies in Ecology and Evolution (INCT IN-TREE), funded by the Brazilian agencies CNPq and CAPES, gathering 300 researchers from different fields. He is affiliated with the Graduate Studies Program in History, Philosophy, and Science Teaching (Federal University of Bahia and State University of Feira de Santana). His research interests are in science education research, philosophy of biology, and ethics.

Andrea Soledad Olmos is a philosopher of biology. Her PhD project focuses on the structure of behavioral biology explanations, especially on the relationship between behavioral neurobiology and behavioral ecology. She has written about mechanical and functional explanations in ethology and neuroethology. She works on the historical and conceptual relationships between ethology and comparative psychology and currently holds a PhD scholarship at the Universidad de Buenos Aires, working on the project “Rational Reconstructions in Biology and Their Socially Relevant Consequences” under the direction of Santiago Ginnobili.

Jorge Oseguera studied Philosophy at the National Autonomous University of Mexico (UNAM), where he also earned a diploma in bioethics. He completed his undergraduate dissertation on the naturalistic fallacy and the is-ought gap, and his MA in philosophy at Florida State University in 2015, focusing on the debate over
evolutionary debunking arguments, under the direction of Michael Bishop and Michael Ruse. Currently, he works on his PhD at the same institution. For his dissertation, he applies evolutionary debunking arguments to the debate on theories of well-being. He teaches classes on ethics and on reasoning and critical thinking. His research interests focus on metaethics (particularly constructivism), moral psychology, and political philosophy (particularly anarchism).

Ariel Jonathan Roffé is a philosopher of biology, specializing in the philosophy of systematics. His PhD thesis will focus on topics related to cladistic methods of phylogenetic inference and classification. He has also written about other topics within the philosophy of biology, such as population genetics and microevolution, optimality studies in behavioral ecology, and functional biology. He also has broader academic interests, ranging from mathematical logic to computer programming. Some of his latest work includes computer programs that utilize logical or cladistic tools to improve existing practices (logic teaching, formal reconstruction checking, schedule building, etc.). He currently holds a PhD scholarship at CONICET (Consejo de Investigaciones Científicas y Técnicas, Argentina), working from the CEFHIC-UNQ (Centro de Estudios de Filosofía e Historia de la Ciencia, Universidad Nacional de Quilmes, Argentina).

Alejandro Rosas obtained his PhD in 1991 at the University of Münster, Germany, with a dissertation on Kant’s theoretical philosophy. He later made a turn to philosophical naturalism and began research on the explanation of moral behavior, drawing insights from the evolutionary biology of cooperation and from the behavioral and cognitive sciences. He has been teaching and doing research in the Philosophy Department of the Universidad Nacional de Colombia since 1992. His research has been funded by the Deutsche Forschungsgemeinschaft, the Alexander von Humboldt Foundation, the Konrad Lorenz Institute for Evolution and Cognition Research, and the John Simon Guggenheim Memorial Foundation. His publications can be seen at https://www.researchgate.net/profile/Alejandro_Rosas.

Gabriel Vallejos has a degree in biochemistry from the University of Chile, where he is currently a PhD candidate in Science with a mention in molecular and cell biology. His scientific work focuses mainly on biological physicochemistry and the development of mathematical models to understand the mechanisms of enzymatic action. In parallel, he works as a researcher in philosophy of science, specifically in the philosophy of experimental biology and experimental sciences in general, with a focus on scientific practice. His main philosophical interest is to unravel the epistemology that is implicit in scientific practice in order to understand what kind of knowledge about an independent nature can be obtained in the artificial environment of the laboratory and how it can be justified.

Davide Vecchi is a philosopher of biology working in the Department of History and Philosophy of Sciences, Faculty of Sciences, University of Lisbon (Portugal). Formed philosophically at Bologna University (Italy) and the LSE (UK) and
biologically at the KLI (Austria) and the University of Santiago (Chile), his research interests span historical, philosophical, and theoretical issues in the life sciences, particularly molecular and evolutionary biology. At the moment, his research focuses on making sense of the causal role of DNA, genes, and genomes in development and evolution.

Luciana Zaterka has been researching the interface between the history of science and the theory of knowledge, with emphasis on issues related to the history and philosophy of chemistry, for more than 20 years. Since 2014, along with the Graduate Program of the Federal University of ABC (UFABC, Brazil) and with the financial support of the Brazilian National Council for Scientific and Technological Development (CNPQ) and the São Paulo Research Foundation (FAPESP), she has been investigating historical and epistemological issues related to human longevity and its developments up to the present day. She mainly researches the following subjects in modernity: reason and experience, experimental philosophy, and the place of chemistry, medicine, and biology in the modern scientific revolution, with emphasis on the works of Francis Bacon, Robert Boyle, John Locke, Baruch Spinoza, and Friedrich Nietzsche.
Chapter 1
Introduction to Life and Evolution

Lorenzo Baravalle and Luciana Zaterka
During the last decades, increasing attention has been paid in Latin America to the history and philosophy of biology. As attested by the creation and growth of many specialized journals, many scholars have actively engaged in this field, producing high-quality research. Although several authors regularly publish in English, most of them still prefer to write in Spanish or in Portuguese and, for this reason, their ideas have barely crossed the boundaries of the continent. This book aims to remedy this state of affairs by offering to the international reader a collection of original articles by some of the most skillful historians and philosophers of biology currently working in Latin American universities.

The invited authors have been chosen following three main criteria. First of all, of course, the excellence of their work. They all have published in well-established peer-reviewed journals and have either promising or already recognized academic trajectories. Secondly, we have attempted to maximize geographical representativeness: among the invited authors, there are researchers from Argentinian, Brazilian, Chilean, Colombian, and Mexican universities. Finally, in order to preserve a plurality of perspectives, we have carefully selected scholars with somewhat different intellectual backgrounds: some of them are philosophers of science or epistemologists, while others have a formal education in biology or other scientific fields and a keen interest in the history of science.

Notice that our emphasis on the regional provenance of the invited authors is not intended to suggest the existence of something like a Latin American history and philosophy of biology, supposedly endowed with distinctive features. On the contrary, we firmly believe that advances in this field can be achieved only by stimulating the integration of local authors into the international debate. Accordingly, we
have selected as central themes of the book two topics, that is, life and evolution, which are at the same time representative of the interests of our invited authors and of the worldwide community. Regarding the first topic, the book includes contributions ranging from the history of the concept of life to the philosophical reflection on life manipulation and life extension while, concerning the topic of evolution, it includes articles on the structure of evolutionary theory, its historical development, and human evolution. In order to ensure the book’s coherence and impact, we encouraged the authors to explore, as far as possible, connections between the two topics and to engage in debates with each other as well as with leading international researchers.

Of course, a book of this kind – aimed to reflect a plurality of perspectives more than to attain some unitary goal – inevitably entails a certain degree of heterogeneity. We hope that the reader will be able to appreciate our effort to bring different views and conceptions together, forgiving us for some conceptual leaps between chapters.

While our selection of authors is certainly representative of the geographical, disciplinary, and theoretical differences in the area, it is sadly also representative of the limited presence of women in the field. In spite of the fact that some of the most important contributions to the diffusion of the history and philosophy of biology in Latin America were made by women (see next section), the field is still prevalently masculine. Indeed, in many Latin American countries this situation is common across areas of philosophy. We sincerely hope that this state of affairs may change in the near future.

In the remainder of this introductory chapter, we first sketch a brief history of the history and philosophy of biology in Latin America and then discuss in more detail the content of the authors’ contributions.
1.1 The History and Philosophy of Biology in Latin America

The institutionalization of the history and philosophy of biology as autonomous disciplinary fields is relatively recent in Latin America. Although historical and philosophical reflections on biological subjects and practices were not completely absent in the previous decades, it is only from the 1990s that scientists, historians, and philosophers of science started to pay systematic attention to conceptual issues surrounding genetics, evolution, and development. Mexican scholars played a prominent role in the creation of academic platforms aimed to promote the debate in this area of knowledge. The Institute for Philosophical Investigations of the National Autonomous University of Mexico (UNAM, Mexico City) was possibly the first institution to organize research activities related to the history and the philosophy of biology in the region. This academic ferment resulted in a series of milestone publications, such as Historia y explicación en biología (“History and Explanation in Biology”; Martínez and Barahona 1998), in which original essays by Latin American historians and philosophers were collected along with Spanish translations of classic works by Richard Lewontin, Ernst Mayr, Stuart Kauffman, and David Hull (among others).
The creation of a research group on evolution and cognition in the Centre for Philosophical, Political and Social Studies Vicente Lombardo Toledano (Mexico City) led, in 1999, to the publication of the first Ibero-American international journal entirely dedicated to the philosophy of the life sciences, Ludus Vitalis (actually, the Centre had previously been publishing another journal, Uroboros, discontinued in 1997). In the same year, Oaxaca was the venue of the second congress of the International Society for the History, Philosophy and Social Studies of Biology (ISHPSSB). These two circumstances greatly contributed to the further diffusion of the debate and the professionalization of the disciplinary field in Mexico and in other Latin American countries, especially Colombia, Brazil, and Argentina.

From the intellectual exchange of scholars from these countries (among them, Gustavo Caponi and Alejandro Rosas, who have both contributed to the realization of the present book), the first Latin American international network of history and philosophy of biology emerged in 2004. It was called the Bogotá group, and organized meetings until 2012. In that year, the members of the Bogotá group, along with colleagues from Spanish universities, founded the Ibero-American Association of Philosophy of Biology (AIFIBI). The first congress, in Valencia, saw the participation of many researchers from both sides of the Atlantic. The second and third congresses, respectively in Valle de Bravo (Mexico) in 2015 and in Bogotá in 2018, further consolidated the Spanish- and Portuguese-speaking community of historians and philosophers of biology. Other associations, both national and international, contributed to this goal. The most notable are possibly the Brazilian Association of Philosophy and History of Biology (ABFHiB) – whose current directorship includes two contributors of the present book (Charbel Niño El-Hani and Lilian Al-Chueyr Pereira Martins) – and the Association for Philosophy and History of Science of the South Cone (AFHIC). ABFHiB has been organizing regular meetings since 2006. In 2017, it offered logistical support for the ISHPSSB congress in São Paulo. ABFHiB also edits a biannual specialist journal, Philosophy and History of Biology. Although more generally oriented towards the history and philosophy of science, AFHIC, which has been organizing congresses every 2 years since 1998 in different countries of South America (Brazil, Argentina, Uruguay, Chile and, in 2020, Colombia), has provided an important platform for stimulating the debate in the history and philosophy of biology.

From this intense research activity several noteworthy publications have stemmed. Among the most influential monographs, we find, in Mexico, La explicación teleológica (“The Teleological Explanation”) by Margarita Ponce (1977), El método de la ciencia: epistemología y darwinismo (“The Method of Science: Epistemology and Darwinism”) by Rosaura Ruiz and Francisco Ayala (1998), and El sesgo hereditario (“The Hereditary Bias”) by Carlos López Beltrán (2004). Outside Mexico, especially notable are La ontogenia del pensamiento evolutivo: hacia una interpretación semiótica de la Naturaleza (“The Ontogeny of Evolutionary Thinking: Towards a Semiotic Interpretation of Nature”) by Eugenio Andrade
(1998), Leyes sin causa y causas sin ley (“Laws without Cause and Causes without Law”) by Gustavo Caponi (2014), and La teoría de la selección natural: una exploración metacientífica (“The Theory of Natural Selection: A Metascientific Exploration”) by Santiago Ginnobili (2018), to mention just a few. As examples of the collective volumes edited in recent years, we can cite Filosofia da Biologia (“Philosophy of Biology”; Abrantes 2011), which collects in Portuguese an impressive selection of articles written by the members of the Bogotá group, or Darwin’s Evolving Legacy (Martínez-Contreras and Ponce de León 2011), a book aimed at promoting an interdisciplinary debate between biologists, historians, and philosophers.

Besides the already mentioned Ludus Vitalis and Philosophy and History of Biology, many other Latin American journals dedicated to the general history and philosophy of science have paid great attention to the debate in the history and philosophy of biology. Among them, the most important are possibly Scientiae Studia in Brazil (which included in its editorial board many of the authors of the present book), Revista colombiana de filosofía de la ciencia in Colombia, and Metatheoria in Argentina (whose chief editor, Pablo Lorenzano, is one of the contributors of this book).
1.2 Content of the Book

Although the articles collected here are to a considerable extent heterogeneous, we have attempted to maximize the coherence of the volume by starting from the most historical contributions and proceeding progressively to the most philosophical ones. In this section, besides summarizing the content of each chapter, we shall draw some connections between them.

Lilian Al-Chueyr Pereira Martins opens the book with a discussion of the important controversy over biological inheritance that took place in Great Britain at the beginning of the twentieth century between the Mendelians, headed by William Bateson, and the biometricians, headed by Walter Frank Raphael Weldon and Karl Pearson. As is well known, the biometrical school advocated a theory of blending inheritance grounded on Galton’s law of ancestral heredity, whereas the Mendelians interpreted the units of biological inheritance as discrete particles, i.e. the genes. While the biometricians supported their thesis by employing sophisticated statistical methods, the Mendelians privileged experiments with hybrids obtained through crossings with animals and plants. Through the analysis of rich and extensive bibliographical material – including published and unpublished manuscripts and correspondence – Pereira Martins investigates the role that the struggle for authority played in the controversy. She focuses especially on the figure of Weldon and on his late attempt at reconciliation between Mendelism and biometry. In Pereira Martins’ opinion, the fact that Weldon admitted the possibility of such a reconciliation only towards the end of his life supports the claim that one of the main factors that originally motivated the controversy was the desire of the leading figures of Mendelism and biometry to obtain supremacy in the field of heredity and evolution.
In Chap. 3, Ronei Clécio Mocellin and Luciana Zaterka present a discussion of the relation between “blood, practice of transfusions and longevity” from modernity to the present day. From the investigation of three historical episodes in which blood transfusions were associated with the fight against senescence, they argue for the existence of a common epistemological research program. This is the application of what the authors call the “Baconian research program.” The first episode concerns the metaphysical-theological foundation of this program, especially in Francis Bacon and Robert Boyle’s works, and the reasons that subsequently led, in the second half of the seventeenth century in France, to the abandonment and even the banning of blood transfusions. The second historical study highlights some ideas and practices carried out by the Russian doctor and philosopher Alexander Alexandrovich Malinovsky-Bogdanov, the director of the first institution in the world exclusively devoted to the study of blood and transfusion. The last historical study deals with the contemporary concept of blood, especially in the context of the philosophical movement known as transhumanism.

Maurizio Esposito and Gabriel Vallejos Baccelliere, in Chap. 4, analyze the so-called performative epistemology (Pickering 1995). From this perspective, the philosophical focus is on how “reality” or “nature” are technically and materially mastered and tamed. Science is a collection of practices and not just a theoretical enterprise. Throughout the text, the authors spell out the epistemological consequences of this methodological position. In a scientific world where entities are not just thought or represented but touched, used and transformed, the question about their existence does not really matter. What does matter is, rather, to what extent we can understand, through our experimental practices, how natural processes work. By assuming the centrality of this question, Esposito and Vallejos highlight four principal aspects of experimental practice: constrained action, standardization, epistemic “tightening,” and extrapolation. Altogether, these points chart what they call the Epistemic Experimental Space (EES), i.e. the abstract space in which experimental knowledge is produced, assessed, and validated. The authors show how the integration within this space makes experimental knowledge in biology a highly consistent, reliable, and successful epistemic activity.

By adopting an organizational perspective on ecological systems, Charbel Niño El-Hani and Nei Nunes Neto investigate, in Chap. 5, how the transition from a physicochemical to a life-constrained world occurred. In their opinion, this transition can be conceptualized as a passage from a closure of processes to a closure of constraints in the ecological realm. While processes produce physicochemical changes, constraints are entities that act upon processes, reducing their degrees of freedom while remaining unaffected by them (Moreno and Mossio 2015). After having offered a detailed elucidation of these concepts, the authors illustrate the passage from a closure of processes to a closure of constraints through a discussion of the CLAW hypothesis (so called after its proponents, Charlson, Lovelock, Andreae, and Warren). The CLAW hypothesis explains the production of clouds over the oceans, affecting climate at a global scale, as the result of a coupling between the physicochemical flow of matter and the activity of living systems.
El-Hani and Nunes Neto argue that this is a good example of what happens in the transition from
an abiotic ecological to a “life-constrained” ecological system and, thus, support our understanding of the role of life on Earth.

In continuity with this discussion, Alejandro Rosas and Juan Diego Morales face up, in Chap. 6, to one of the most formidable problems of modern philosophy since Kant, that is, the emergence and the nature of purposiveness, or teleology, in living beings. The authors approach the question drawing on the recent work of the chemist Addy Pross (2012). According to Pross’ view, the origin of life is to be found in certain interactions between three molecular structures: replicators, metabolic enzymes, and membranes. The subsequent evolutionary steps involved tentative complexifications of these interactions, followed by the natural selection of the most stable chemical networks of reaction. Rosas and Morales notice that, throughout this gradual process, it is crucial that associations between molecules provide mutual benefits. In their opinion, this corresponds to a primitive case of cooperative dynamics. Cooperation, in its turn, involves goals. In the case of molecules, the shared goal is to maintain wholes that guarantee their persistence by drawing energy from the environment and replicating. The self-maintenance of the wholes is not merely an effect of the interaction between the molecules but, compatibly with Kant’s characterization of a “natural end,” it is itself a cause of the behavior of the molecules. It is hard to say at which stage of the integration of the molecules within the wholes teleology actually emerges, but Rosas and Morales confidently argue that teleology is not an apparent but a real feature of the organic world.

The topic of Chap. 7, by Maximiliano Martínez, Alejandro Mosqueda, and Jorge Oseguera, is the relation between evolution and moral realism. The so-called debunking argument, put forward by some naturalist philosophers, aims to show that natural selection and moral realism are incompatible. From the very same naturalist standpoint, the authors of the chapter aim to challenge such a conclusion and argue that moral realism can be scientifically grounded. In order to develop their account, Martínez, Mosqueda, and Oseguera take as their critical target Street’s version of the debunking argument (Street 2006). According to Street, the main problem of moral realism is that it needs to postulate moral truths that are independent of our evaluative attitudes. Moral behavior in our species would be a consequence of the fact that natural selection made us track moral truths. Yet, Street considers that an alternative anti-realist explanation, which does not invoke independent moral truths, is possible, more parsimonious and, thus, naturalistically preferable. Martínez, Mosqueda, and Oseguera’s overall strategy against Street’s argument consists in showing that moral realists do not need to commit themselves to the existence of independent moral facts, but just to independent evaluative facts. From an evolutionary perspective, these facts can be considered as facts about what increases fitness in specific circumstances. Explanations of moral behavior invoking facts of this kind are no less parsimonious than the anti-realist ones and, thus, moral realism is not debunked.

Gustavo Caponi discusses, in Chap. 8, the analytic relations between three fundamental concepts of the theory of natural selection, that is, the concepts of biological function, fitness, and adaptation.
He assumes that the concept of adaptation needs to be analyzed in terms of the concept of fitness and, in its turn, the concept
of fitness needs to be analyzed in terms of the concept of biological function. Caponi holds that this task can be made easier if we consider these three concepts as specifications of three other, broader concepts. These are, respectively, the concept of function, the concept of effectiveness, and the concept of design. The main goal of the author is to show how the theory of natural selection interprets and connects these notions so as to provide a solid framework for a naturalization of teleology (a topic that, besides being the subject of Chap. 6, will return in Chap. 11). The cornerstone of Caponi’s analysis is his interpretation of the notion of function as a causal role (derived, but with some important differences, from Cummins 1975). This interpretation is developed in opposition to the etiological conception of function (Wright 1973). In Caponi’s view, the etiological conception leads to unavoidable circularities. On the contrary, insofar as it clearly distinguishes between function and raison d’être, the causal role conception allows a straightforward naturalistic analysis of the notion of effectiveness and, indirectly, of design.

In Chap. 9, Lorenzo Baravalle and Davide Vecchi take sides in the long-standing controversy between causalist and statisticalist interpreters of evolutionary theory. More specifically, they argue for a dynamical view, according to which selection, drift, migration, mutation, and the other factors of evolution are not just causes, but may be considered as forces of evolution. In order to support this claim, they focus their analysis on one of the most controversial evolutionary factors, that is, genetic drift. Baravalle and Vecchi first argue that drift is a cause because the evolutionary explanations invoking events instantiating drift processes are, compatibly with Woodward’s (2003) manipulationist account of explanation, causal. Then, they argue that the function of the concept of drift in such explanations is precisely that of unifying sundry events in accordance with the specific causal role that they play in a certain evolutionary scenario. Following Hitchcock and Woodward (2003), Baravalle and Vecchi characterize the explanations in which analogous unificatory causal concepts appear as deep explanations. They thus observe that force-explanations in Newtonian mechanics are, in a sense, nothing more than a kind of deep explanation. They are deep explanations because the notion of force plays in them a causal unificatory role. On the basis of this common explanatory function, the authors conclude that drift can be considered as a force.
They thus argue for the existence of a "first law" of population dynamics, that is, a guiding principle that heuristically orients the theoretical work of population dynamicists by pointing out what class of circumstances is indissolubly related to the phenomena under study. The guiding principle coordinates the models that population dynamicists formulate, in order to account for specific factors influencing demographic processes, within a unified theoretical framework. Through this analysis, Lorenzano and Díaz show that, in spite of their specificities, biological theories do not structurally differ from physical ones. Another application of metatheoretical structuralism is provided in the last chapter by Andrea Soledad Olmos, Ariel Jonathan Roffé, and Santiago Ginnobili. Here the goal is to test whether the reduction of functional language developed by systemic analysis is successful. To this aim, the authors first outline the theoretical structure of systemic analysis. This reconstruction guides the subsequent critical discussion. Olmos, Roffé, and Ginnobili raise two main concerns about the adequacy of the systemic approach. The first one is related to its comprehensiveness. The authors argue that although the systemic approach is fruitful in accounting for some portions of biological practice (especially in areas such as molecular biology, neuroscience, and neuroethology), it cannot adequately explicate the notion of biological function in all its uses. The second concern refers to the explanatory strategy grounding functional systemic analyses. In a systemic analysis, structure is taken to explain function. By contrast, Olmos, Roffé, and Ginnobili argue that in functional explanations it is function that explains structure. In sum, they show that, while it is true that systemic analysis is an important component of functional attributions, it does not by itself account for the use of functional language in biology.
References Abrantes, P. C. (Ed.). (2011). Filosofia da Biologia. Porto Alegre: Artmed. Andrade, E. (1998). La ontogenia del pensamiento evolutivo: hacia una interpretación semiótica de la Naturaleza. Bogotá: Unibiblos. Caponi, G. (2014). Leyes sin causa y causas sin ley. Bogotá: Universidad Nacional de Colombia. Cummins, R. (1975). Functional analysis. Journal of Philosophy, 72(20), 741–765. Diez, J. A., & Lorenzano, P. (Eds.). (2002). Desarrollos actuales de la metateoría estructuralista: Problemas y discusiones. Bernal: Universidad Nacional de Quilmes. Ginnobili, S. (2018). La teoría de la selección natural. Una exploración metacientífica. Bernal: Universidad Nacional de Quilmes. Hitchcock, C., & Woodward, J. (2003). Explanatory generalisations, part II: Plumbing explanatory depth. Noûs, 37, 181–199. López Beltrán, C. (2004). El sesgo hereditario: Ámbitos históricos del concepto de herencia biológica. Mexico City: Universidad Nacional Autónoma de México. Martínez, S., & Barahona, A. (Eds.). (1998). Historia y explicación en biología. Mexico City: Fondo de Cultura Económica. Martínez-Contreras, J., & Ponce de León, A. (2011). Darwin's evolving legacy. Mexico City: Siglo XXI. Moreno, A., & Mossio, M. (2015). Biological autonomy: A philosophical and theoretical enquiry. Dordrecht: Springer. Pickering, A. (1995). The mangle of practice: Time, agency, and science. Chicago: The University of Chicago Press. Ponce, M. (1977). La explicación teleológica. Mexico City: UNAM. Pross, A. (2012). What is life? How chemistry becomes biology. Oxford: Oxford University Press.
Ruiz, R., & Ayala, F. (1998). El método en las ciencias: epistemología y darwinismo. Mexico City: Fondo de Cultura Económica. Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical Studies, 127, 109–166. Woodward, J. (2003). Making things happen. Oxford: Oxford University Press. Wright, L. (1973). Functions. Philosophical Review, 82, 139–168.
Chapter 2
Weldon’s Unpublished Manuscript: An Attempt at Reconciliation Between Mendelism and Biometry? Lilian Al-Chueyr Pereira Martins
2.1 Introduction The controversy between the Mendelians, led by William Bateson (1861–1926), and the biometricians, led by Karl Pearson (1857–1936) and Walter Frank Raphael Weldon (1860–1906), which took place between 1902 and 1906 in Great Britain, is a subject that has been intensively studied by historians and philosophers of science for the last four decades (Martins 2005). They have presented a variety of explanations, such as the adoption of different and incommensurable paradigms by the parties involved (Farrall 1975), Karl Pearson's position favoring eugenics (Mackenzie and Barnes 1979; Mackenzie 1981), or the conflict of personalities (Provine [1971] 2001). However, only a few of these studies (e.g., Provine [1971] 2001; Cock 1973) examined in detail the original documents produced by the actors involved in this scientific controversy.1 On the other hand, a later study (Magnello 1998) considers that after 1902–1903, the biometrician Weldon wanted to turn Mendel into a simple case of the law of ancestral heredity2 and worked with Pearson along these lines. In addition, the author states that Weldon's proposal on the subject is contained in an "unpublished manuscript" on inheritance laws.
1 Due to the existence of a disagreement between the parties involved in several aspects, such as the interpretation of empirical facts, the nature of evidence, scientific theory, and methodology, we consider that the discussion between Mendelians and biometricians could be characterized as a scientific controversy (McMullin 1987). 2 According to this law, the parents together would contribute ½ of the inheritance, the four grandparents ¼, the eight great-grandparents 1/8, and so on (Galton 1897, p. 76).
L. Al-Chueyr Pereira Martins (*) Department of Biology, Faculty of Philosophy, Sciences and Linguistics of Ribeirão Preto, University of São Paulo, Bairro Monte Alegre, Ribeirão Preto, SP, Brazil e-mail: [email protected] © Springer Nature Switzerland AG 2020 L. Baravalle, L. Zaterka (eds.), Life and Evolution, History, Philosophy and Theory of the Life Sciences 26, https://doi.org/10.1007/978-3-030-39589-6_2
Greg Radick also mentions that "Weldon was at work on the book manuscript in 1904–1905, while in full battle mode with Pearson and others against the growing corps of 'Mendelians' led by William Bateson" (Radick 2015, p. 159). Described as a struggle between Darwinians by Alfred Nordmann (1992), the public discussion focusing on aspects related to heredity and evolution took place in journals, books, scientific meetings, and correspondence. The leaders of the biometrical school, Pearson and Weldon, advocated blending inheritance according to Galton's law of ancestral heredity (Galton 1897). In their experiments, they adopted statistical methods. They mainly studied characteristics that were inherited in a continuous way, such as weight and height. The results they obtained emphasized the role of natural selection in the evolutionary process. On the other hand, the Mendelians Bateson, Edith Saunders (1865–1945), and Charles Chamberlain Hurst (1870–1947), among others, interpreted inheritance in terms of unit-characters (Mendelian factors), particles that did not blend even in the heterozygotes. They undertook experimental crossings with animals and plants in which hybrids were produced.3 They studied characteristics that were inherited in a discontinuous way, such as the coloration of pea seeds, for instance. In these cases, natural selection acted more narrowly because, once new features arose by discontinuous variation, natural selection would act to perpetuate the type. In the run-up to the controversy, Bateson, who was previously a colleague and close friend of Weldon, held no academic position in Cambridge, carrying out his research on scholarship funding.4 During the same period, Pearson and Weldon were well-known academics at London and Oxford Universities and had several students and collaborators (Martins 2007a, pp. 171–172). Besides that, they were working on a successful biometric research program. According to Norton, Pearson "saw biometry as an exemplar of his philosophy put into operation" (Norton 1978, p. 15). Of course, Weldon and Pearson wished their line of research to keep attracting new students, as well as to maintain their own prestige. According to Allan Cock (1973), the controversy between Mendelians and biometricians had its roots in episodes prior to 1902, such as the controversy on the origin of the garden Cineraria between the botanist William Turner Thiselton Dyer (1843–1928)5 and Bateson (from April to June 1895) (Martins 2006); the critical review of Bateson's Materials for the study of variation6 by Weldon (1894) and the
3 We are using the term in the broad sense, including both the crossings between different species and the crossings between varieties that slightly differed, in a similar way to Mendel. 4 Bateson only succeeded in occupying an academic position in 1908 (Harvey 1985, p. 105). 5 Thiselton-Dyer and Weldon considered that the cultivated forms had differentiated from a single ancestral species entirely by selection and accumulation of small variations. In contrast, Bateson thought that they had originated from hybrids between two or more of the four wild species (Cock 1973, p. 8). 6 In this book Bateson accumulated a huge mass of facts that substantiated the relevance of discontinuous variations in the evolutionary process (Martins 2013).
controversy of homotyposis7 between Pearson and Bateson (from 1901 to 1902) (Martins 2007b; Martins and Venturineli 2011), in which Pearson proposed a biological concept, the homotyposis, and tested it using statistical methods. Such a concept had implications for a conception of heredity. Pearson considered that the principle of homotyposis was fundamental in nature and that it could be the source of heredity. He believed that blood-corpuscles, hairs, scales, spermatozoa, ova, leaves, and flowers were homotypes (Pearson et al. 1901, p. 2; Pearson 1901). In his view, heredity was a special case of homotyposis since hereditary characteristics were transmitted by gametes which, analogous to other homotypes, would be undifferentiated. For him (as for Galton and Weldon), inheritance occurred through mixing (continuous or "soft" heredity). In the discussions that followed this debate, Pearson's idea that germ cells were undifferentiated would clash with Bateson's Mendelian view that gametes were differentiated. It is worthwhile mentioning that in 1893 the Evolution Committee (Animals and Plants) of the Royal Society of London was created. Its purpose was to conduct statistical investigations into the variability of organisms (Shipley 1908, p. xxxi). The results of these investigations were published in the Reports to the Evolution Committee of the Royal Society. Its president was Francis Galton and its secretary was Weldon. Among the other members of the Committee were Francis Darwin, A. McAlister, R. Meldola, and E. B. Poulton. In 1894, Bateson published the book Materials for the study of variation, a huge catalogue of facts that substantiated the discontinuity of variation. This book received an unfavorable critical review from Weldon (1894). In 1895, Weldon published an article that was harshly criticized by some members of the Evolution Committee. In this article he asserted that the statistical method was the only one capable of proving the Darwinian hypothesis. In 1897, there were changes in the Committee, which began conducting research on variation, inheritance, selection, and other phenomena related to evolution. In addition, new members entered the Committee: Ray Lankester, Karl Pearson, Thiselton Dyer, and Bateson. Pearson and Weldon's dissatisfaction with these facts, as well as with the Royal Society's procedures for the publication of Pearson's paper on homotyposis, caused them to leave the Evolution Committee in 1900 and contributed significantly to their thinking of creating another journal and eventually founding Biometrika. In 1901 the journal Biometrika began to be published, edited by Pearson, Weldon, and C. B. Davenport. Galton agreed to be the "consulting editor" (Shipley 1908, p. xxxvi; Froggatt and Nevin 1971, p. 12). The foundation of Biometrika, which published only works within the scope of biometry, contributed to widening the gap between Mendelians and biometricians, who began to publish their works in different journals. The start of the actual controversy, which also marked the break in the friendship between Weldon and Bateson, occurred when Weldon published an article (Weldon 1902a) in Biometrika in which he criticized the proposal and the methodology used by Mendel to analyze the patterns of inheritance in peas (Mendel 1866b).
7 Pearson and Weldon believed that the correlation between two or more organs that could vary, belonging to the same animal, or between the parent and progeny organs, could be calculated numerically through a theorem that had initially been used by Francis Galton in his investigations (Galton 1889).
In response to the criticism from Weldon, Bateson published in the same year the book Mendel's Principles of Heredity: A Defence (Bateson 1902). This book contained an English translation of Mendel's article dealing with the patterns of inheritance in peas, as well as a response to Weldon's criticisms. Although at that time Bateson was accused of adopting an acid tone, he made room for a compromise, considering that Mendel's principles and Galton's law of ancestral heredity could coexist provided they were applicable to different cases (Bateson 1902, pp. 104–105). In the same year, George Udny Yule (1871–1951), Pearson's collaborator, also considered the possibility of the coexistence of the two "laws" as well as the delimitation of their respective spheres of application8 (Yule 1902; Tabery 2004; Martins 2007a). Two years later, a former student of Weldon's, Arthur Dukinfield Darbishire (1879–1915), after having engaged in a lengthy discussion with Bateson on the results obtained in experimental crossings involving the inheritance of eye and coat color in mice, admitted: Some facts seem to confirm the Mendelian interpretation, while others may be described in terms of the interpretation of heredity adopted by both Galton and Pearson (Darbishire, apud Anonymous 1904, p. 538).9
However, in spite of these attempts at conciliation, the controversy lasted until 1906, the year of Weldon's death. In 1904 the disagreements between the two parties were still present. During the British Association for the Advancement of Science meeting, in the Zoology section presided over by Bateson, Weldon criticized the Mendelian concept of the purity of gametes (Weldon, apud Punnett 1926, p. 78). The purpose of this paper is to clarify whether there was an attempt by Weldon to reconcile Mendelism and biometry. Besides that, it is to elucidate whether he worked together with Pearson to synthesize Mendelism and biometrics, as was suggested by Eileen Magnello (1998, pp. 38, 86, 93). In the work published by Weldon and Pearson during the dispute, we found no evidence of such an intention. Therefore, we will analyze the "manuscript" mentioned by Magnello as well as other original published and unpublished material by Weldon. If Magnello's interpretation is confirmed, it will reinforce our interpretation, obtained from the analysis of several episodes related to the controversy (Martins 2005, 2007a, b, 2008), that
8 According to Yule, since Mendelians and biometricians were dealing with two different kinds of heredity, it was necessary to make a distinction between the phenomena within the race and the phenomena of hybridization that occur on crossing two admittedly distinct races. In this way, biometry's study of continuous variation and Bateson's study of discontinuous variation would not be incompatible (Yule 1902, p. 196; Tabery 2004, p. 85). 9 The attempt to demonstrate the compatibility between Mendelian and biometric theories is explicit in the title of Darbishire's paper published in 1905: "On the supposed antagonism of Mendelian to biometric theories of heredity."
one of the main factors that motivated the controversy was the struggle for authority in the field10 of heredity and evolution.
2.2 Weldon's Objections to Mendel's Proposal In the period between 1902 and 1904, it is possible to perceive in Weldon's publications objections both to Mendel's proposal and to the way in which Bateson interpreted it. One of them was that Mendel had explained the inheritance of characters through the presence of particles (Weldon 1902a, p. 228), relating each character to a particular unit element in the germ cell (Weldon 1903, pp. 286–287). That is, Weldon did not accept that a cellular element, the Mendelian factor or unit-character as it was called at the time, was responsible for a certain characteristic. Weldon also considered that the neglect of ancestry compromised the entire work of Mendel, since it would become impossible to make predictions about crossing results (Weldon 1902a, p. 252). According to Weldon, for Mendel, the yellow seed pea crossed with the green seed pea would always behave in the same way regardless of its ancestry, and this was not appropriate. In Weldon's view, taking into account only the whole effect produced by a particular parent, as due to particular characters, as Mendel did, does not explain the contradictory results observed in the offspring of parents apparently identical in certain characters. This shows that not only the parents, but also their ancestry, must be considered before the results of pairing them can be predicted (Weldon 1902a, p. 252). In addition, he criticized the Mendelian "laws" of dominance and segregation,11 stating that they did not apply to some cases, such as the hybrids of the Telephone pea (Weldon 1902a, p. 236). However, among the charges laid against Mendel's work, Weldon tried to adopt a softer tone: I am seeking to summarize the evidence upon which my opinion rests. I do not want to belittle the importance of Mendel's conquest. I wish simply to draw attention to a series of facts which seem to suggest fruitful lines of inquiry. (Weldon 1902a, p. 235, my emphasis)
Although he did not write it explicitly, the “fruitful lines of inquiry” were precisely those adopted by the biometricians. This approach has been applied to other case studies in the history of biology. See for instance Sapp (1983, 1990). According to Jan Sapp, historians investigating the past have shown that the domestic politics of science, competition, power, and authority play important roles in directing scientific work (Sapp 1990, pp. 300–301). In his view, the competition between scientists and scientific controversies cannot be reduced to the competition of ideas whose strength will decide the outcome. The scientists are also engaged in changing the field socially. In order to receive recognition they use several strategies from teaching to refereeing papers and reviewing research grants. When scientists attempt to impose a definition of the field, each participant tends to uphold those scientific values which are most closely related to him or her personally or institutionally (Sapp 1987, p. xiv). 11 In fact, in his article on the patterns of inheritance in peas, Mendel did not refer to “laws.” This connotation was given later (See Martins 2002). We will discuss this later in this chapter. 10
In another article published in the same year, Weldon (1902b) accused Mendel of confusing the resemblance to a race and the resemblance to an individual. He inquired whether in asserting that “individuals who produce yellow seeds reproduce the yellow character of the parental form” Mendel was referring to the race of peas or the individuals of the race. In his view, the description of the colors did not allow him to come to a conclusion. He added that the fact that hybrids have the shape of their parents could be explained through reversion or atavism.12 He also criticized Mendel’s model of inheritance by comparing it to the ones proposed by Hugo de Vries (1848–1935) and August Weismann (1834–1914).13 In his words: Mendel further anticipated speculations of De Vries and Weismann by attributing the inheritance of each unit-character to the presence of a particular unit element in the germ-cell […]. (Weldon 1903, p. 286)
In his view, the Mendelian explanation of the constitution of the germ cells of the hybrids was hypothetical because it involved "a gametic mechanism that could not be demonstrated" (Weldon 1904, apud Punnett 1926, p. 78). Weldon concluded that the Mendelian hypothesis was not sufficiently solid, nor did it offer an adequate methodology to solve certain problems, contrary to the proposal of the biometricians. Thus, in his view, the Mendelian hypothesis required further methods of observation and description to solve problems such as determining to what extent a plant was hairy in the crosses between glabrous and hairy forms of Lychnis dioica (Weldon 1904, apud Anonymous 1904). Although Weldon did not make it explicit, the methodology he suggested the Mendelians should adopt was the same one employed by the biometricians in their investigations.14 Besides criticizing Mendel's proposal itself, Weldon also criticized Bateson's interpretation of it. He commented that Bateson had misinterpreted some cases by considering them to be Mendelian inheritance, such as the results obtained by Darbishire by crossing pink-eyed piebald waltzing mice with normal pink-eyed albinos (Weldon 1903, p. 286; Darbishire 1902; Martins 2008). Darbishire published the results of his experimental crossings of mice in the form of four reports in the journal Biometrika (Darbishire 1902, 1903a, b, 1904). Weldon was referring to the first one. The main point of disagreement was whether the results of those experimental crossings could be interpreted in Mendelian terms as suggested by Bateson,
12 These terms were used to refer to the reappearance of a characteristic after several generations. 13 De Vries proposed intracellular pangenesis and Weismann the germ-plasm theory. 14 Pearson and his collaborators used to collect large samples of plant material, such as tree leaves. They counted their veins, hairs, spines, etc. For example, they selected 100 beech trees of roughly the same age and belonging to the same district and collected 26 leaves from each one. They supposed that each tree was represented by 26 leaves. They counted the veins of these leaves and noticed that the number varied between 10 and 22. They calculated all the possible pairs, ½ (26 × 25) = 325 in number; as the correlation table was filled in symmetrically, counting either leaf of a pair as the first or the second, each tree had 650 entries. A total of 100 trees studied resulted in 65,000 entries. They built tables by determining the correlation coefficient (Pearson 1901, p. 293; Martins and Venturineli 2011, p. 42).
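The bookkeeping described in footnote 14 can be made concrete with a short computational sketch. The following Python fragment is only an illustration under invented data (the vein counts are randomly generated, not Pearson's measurements): it builds the symmetric table of leaf pairs from 100 trees with 26 leaves each and computes the product-moment correlation over it, so that the figures of 650 entries per tree and 65,000 entries in total fall out of the pair counting.

import itertools
import random

import numpy as np

random.seed(0)

# Stand-in data: 100 trees, 26 leaves each. Every tree gets its own
# typical vein number and its leaves scatter around it, so that leaves
# of the same tree resemble one another.
trees = []
for _ in range(100):
    centre = random.randint(12, 20)
    trees.append([max(10, min(22, centre + random.randint(-2, 2)))
                  for _ in range(26)])

# Symmetric pair table: every ordered pair of distinct leaves from the
# same tree is one entry, i.e. 26 x 25 = 650 entries per tree
# (the 325 unordered pairs counted in both directions).
first, second = [], []
for leaves in trees:
    for a, b in itertools.permutations(leaves, 2):
        first.append(a)
        second.append(b)

print(len(first))                                  # 100 trees x 650 = 65,000 entries
print(round(np.corrcoef(first, second)[0, 1], 3))  # correlation of "like parts"

On this reading, the homotyposis coefficient for a character is simply the ordinary product-moment correlation computed over such a table of undifferentiated like parts.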
or according to Galton’s law of ancestral heredity as suggested by Darbishire and Weldon, following the first results of Darbishire’s experimental crossings of mice (Darbishire 1902). However, there were some problems concerning the start point of Darbishire’s experiments. He confounded heterozygote hybrids with pure dominant homozygotes. In addition to this, Weldon considered the changes Bateson was introducing in Mendel’s proposal to be a problem.15 He stated: In order to fully understand Mr. Bateson’s treatment it is necessary to realize not only Mendel’s doctrines, but the various modifications of these which Mr. Bateson has from time to time proposed. (Weldon 1903, p. 286)
2.3 Weldon's View on Inheritance Weldon's conceptions of inheritance may be found in two manuscripts, dating from 1901 to 1902 and from 1904 to 1905, and also in a notebook dated 1905. This material has not been published. The English biologist admitted the existence of two kinds of inheritance, namely blended inheritance and alternative inheritance. Blended inheritance would be determined similarly to the height of a column of mercury in a barometer: it could lead to any result. An example of this kind of inheritance would be human stature. Alternative inheritance would depend on the proportions in which the different ancestors had contributed, as in the coat color of Basset-hound puppies. In this breed it was possible to find white dogs with yellow spots or white dogs with yellow and black spots. Intermediate forms were rare (Weldon, MS of WFRW's unpublished "Theory of inheritance" with notes on it by Edgar Schuster [1904–1905?], p. 1; Pearson Papers, UCL – Special Collections UCL – 264-2264/2). According to Weldon, "the inheritance of characters" depended both on the material contained in the germ cells (gametes) and on environmental conditions. He explained: For every character of a living organism is determined part by the constitution of the germ from which it develops that is by something transmitted to it from its ancestors, and partly by the conditions of the environment in which that development takes place. (Weldon, MS of WFRW's unpublished "Theory of inheritance" with notes on it by Edgar Schuster [1904–1905?], p. 1; Pearson Papers, UCL – Special Collections UCL – 264-2264/2)
He illustrated this view with Daphnia, a familiar freshwater crustacean that normally possesses a spine at the back of its carapace. Such a spine may vary both in shape and size depending on the conditions of the water in which young individuals grow up as well as the germ conditions.
15 Weldon and Pearson seemed not to have realized that Bateson was developing a research program that included testing the principles Mendel had found in peas in a wide range of experimental materials, the search for exceptions and new laws (Martins 2002).
It is worthwhile paying attention to the fact that, at the same time as Weldon (1902a) was criticizing the microscopic model proposed by Mendel, considering it to be speculative, he was also adopting a microscopic model of inheritance involving particles: Galton's theory of stirps. Departing from Darwin's hypothesis of pangenesis, Galton presented his inheritance theory in two papers (Galton 1872, 1875). According to Galton, a stirp was constituted by the union of several germs, particles that could grow and split. Each germ represented a specific organ or tissue. The ovum would contain the total sum of germs necessary to develop the individual (Robinson 1979, pp. 37–38; Polizello 2008, p. 41). Galton considered that a few of these gemmules became patent and were developed into the cells of the adult. The others would remain latent. The cells contributing to the next generation would predominantly comprise the latent residue of the gemmules in the stirp that had not developed into the adult (Bulmer 2003, p. 103; Polizello 2011). In the same way as Galton, Weldon believed that the germ would consist of elements that could exist both in the active condition and in the latent condition. He made an analogy between Galton's elements in the active condition and Mendel's dominant elements, and between Galton's latent elements and Mendel's "recessive" ones. When referring to the results of his experiments with peas, Mendel considered that the characters which were transmitted without or with few alterations in the hybridization were dominant and the ones that remained latent in the process were recessive (Mendel 1866a, 1913, p. 342; Martins 2002, p. 31). He related those characters to some factors or elements that are present in the germ cells (pollen and ovum). In his view, even the characteristic present in one of the parents that could not be observed in the offspring was present, but hidden (Mendel 1866a, 1913, p. 343). Weldon, in a similar way to Galton, considered that the struggle between these elements would result in the formation of two groups: one dominant (that would determine the characteristics of the body) and the other latent (that would develop without affecting the characteristics of the body). In the production of germ cells, a sample of these elements would be transmitted to them. The latent elements that did not manifest in the offspring had already been dominant in past generations and could manifest again in the succeeding generations. Therefore, their explanation for dominance and latency is different from Mendel's.16 Weldon tried to explain the dominance or latency of the determining elements by the statistical law proposed by Galton.17 In his original proposal, Galton represented the proportions in which the ancestors of different degrees contributed to the characters of a generation in the series: 1/2 + 1/4 + 1/8 + … etc. (The parents together contributing one half the total heritage, the grandparents one quarter, and so on). (Weldon, Unpublished MSs by WFRW of his proposed book on evolution, Chap. XIII, p. 3, 1901–1902, Pearson Papers, Special Collections UCL – 264-1).
16 Mendel did not present dominance as a result of the struggle between factors or cellular elements. In addition, until 1875, Galton, unlike Mendel, believed that the patent elements were transmitted more feebly than latent elements.
17 After 1875 Galton dedicated himself to statistical work, developing the law of ancestral heredity, and changed his mind, supposing that latent and patent gemmules were equally frequent and had the same chance of being transmitted to the next generation (Bulmer 2003, p. 103).
According to Weldon, the first step to getting a well-founded knowledge about inheritance would be the determination of the correlation coefficient (Martins and Venturineli 2011, p. 40) between the members of a generation of animals and their parents or other ancestors, in as many cases as possible, adopting a procedure similar to the one adopted by Galton (1892), Pearson, and Pearson's students at University College London. The correlation values found by Pearson in his investigations formed a geometric series similar to that which Galton had proposed for man and lower animals.18
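Written out as a formula (a reconstruction of the series just quoted, not notation used by Galton or Weldon), the law of ancestral heredity distributes the total heritage over the ancestral generations as

\[
\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; \sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^{n} \;=\; 1,
\]

so that generation \(n\), with its \(2^{n}\) ancestors, contributes \((1/2)^{n}\) of the heritage taken together, and each single ancestor of that generation contributes \((1/2)^{n}/2^{n} = 1/4^{n}\): 1/4 for each parent, 1/16 for each grandparent, 1/64 for each great-grandparent, and so on (Galton 1897).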
2.4 Weldon's Unpublished Views of Mendel's Proposal and Mendelian Contributions Several criticisms present in Weldon's publications can also be found in Weldon's manuscripts. For instance, he objected to Mendelian segregation and to the neglect of ancestry: Now, if we examine the results obtained by Mendel and his modern followers a little more closely, we shall see that the hypothesis of germinal segregation, involving a throwing of half the ancestral elements from every germ cell, by which Mendel attempted to account for the results of his own experiment, is quite incapable of covering the facts now known; and if we attempt to express those facts by reference to embryonic determinants at all, we shall find ourselves obliged to invoke elements in which the whole ancestry is represented, such as those inquired by Galton 30 years ago, when he formulated his conception of a hereditary mechanism. (Unpublished MSs by WFRW of his proposed book on evolution, Chap. XII, p. 5, 1901–1902, Pearson Papers, Special Collections UCL – 264-1)
The Mendelian view of the purity of the gametes conflicted with Weldon's conceptions of inheritance, both with respect to blended inheritance and to alternative inheritance. Weldon could not accept that a cellular element (factor) could determine a characteristic and that factors originating from the parent gametes did not mix in the descendants, even in the heterozygotes. He stated: The differences between families are also inconsistent with any hypothesis which involves the belief that albinism is a recessive unit-character always "pure." (Weldon, Unpublished MSs by WFRW of his proposed book on evolution, Chap. X, p. 2, 1901–1902, Pearson Papers, Special Collections – UCL – 264-1)
He kept questioning the "law of dominance," commenting that a character recessive in one generation could become dominant in the next generation. In addition, Mendelians had described cases where dominance did not occur. He remarked:
18 In the case of human, lower animal, and plant characters, the correlation coefficient between a parent and its offspring was slightly less than ½ (0.45).
But we are told that the smooth-seeded races of P. arvense do breed true; and the Mendelian view of this race is therefore unsustainable. (Weldon, Unpublished MSs by WFRW of his proposed book on evolution, Chap. XII, p. 10, 1901–1902; Pearson Papers, Special Collections – UCL-264-1).
This criticism, which was also present in Weldon's publications, was a mistaken interpretation of Mendel's original proposal. In his paper on hybrid plants, Mendel (1866a) did not refer to specific laws, namely a "law of dominance" or a "law of segregation." Although in this paper he mainly dealt with experiments in which he detected dominant and recessive characters, he also mentioned some cases where the characters of the offspring were intermediate between those of the parents. This happened, for instance, in the crossings of short-stemmed peas (1 ft) and long-stemmed peas (6 ft); the length of the hybrids' stems varied between 6 ft and 7.5 ft (Mendel 1866a, 1913, p. 343). Additionally, he emphasized that no applicable law governing the development and formation of hybrids had been successfully formulated, mentioning the difficulties involved in this task. He stated: A final decision can only be arrived at when we shall have before us the results of detailed experiments made on plants belonging to the most diverse orders. (Mendel 1866a, 1913, p. 336)
On the other hand, Bateson, as well as other Mendelians such as Castle (1903), knew that the principles of dominance and segregation found by Mendel in his article on hybrid plants had no universal application. Bateson himself presented several exceptions in his book (Bateson 1902). He treasured the exceptions and devoted himself to the study of cases that did not follow Mendel's principles. Years later, he commented in this respect: The dominance of certain characters is often an important but never an essential feature of Mendelian heredity.19 Those who first treated of Mendel's work most unfortunately fell into the error of enunciating a "Law of Dominance" as a proposition comparable with the discovery of segregation. Mendel himself enunciates no such law. Dominance of course frequently exists. The consequences of its occurrence and the complications it introduces must be understood as a preliminary to the practical investigation of the phenomena of heredity, but it is only a subordinate incident of special cases, and Mendel's principles of inheritance apply equally to cases where there is no dominance and the heterozygous type is intermediate in character between the two pure types. (Bateson 1913, pp. 13–14; Bateson's italics)
In this way, in Bateson’s view, dominance could be detected in several cases, as it was in the experimental crossings performed by his group or by Mendel. However, it was not a general rule, but “a subordinate incident of special cases.” There were cases in which the offspring showed intermediate characteristics between those presented by the progenitors. This did not invalidate Mendel’s principles. Concerning segregation, there were cases in which some characteristics were not inherited independently as Mendel had found in Pisum but always together, a phenomenon that Bateson later called coupling.
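As a purely illustrative aside (the allele symbols below are generic placeholders, not drawn from the sources discussed), the segregation arithmetic at issue can be sketched in a few lines of Python: crossing two heterozygotes yields genotypes in a 1:2:1 ratio, which appears as the familiar 3:1 phenotype ratio when one character dominates and as three distinct classes when, as Bateson notes, the heterozygote is simply intermediate.

from collections import Counter
from itertools import product

# Each F1 hybrid is heterozygous (Aa) and transmits A or a with equal chance.
gametes = ["A", "a"]

# The 2 x 2 Punnett square: all equally likely F2 genotype combinations.
f2 = Counter("".join(sorted(pair)) for pair in product(gametes, repeat=2))
print(f2)  # Counter({'Aa': 2, 'AA': 1, 'aa': 1})  -> genotypes 1 : 2 : 1

# With complete dominance of A, the AA and Aa classes look alike: 3 : 1.
phenotypes = Counter("dominant" if "A" in genotype else "recessive"
                     for genotype, count in f2.items()
                     for _ in range(count))
print(phenotypes)  # Counter({'dominant': 3, 'recessive': 1})

# Without dominance the heterozygote is visibly intermediate, and the same
# segregation shows up directly as the 1 : 2 : 1 ratio printed above.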
19 Bateson made this statement in several works. See, for example, Bateson and Saunders (1902), p. 11.
Weldon accused Bateson and the Mendelians of misinterpreting the results of experimental crossings such as those developed by Darbishire with mice20: Complicated theories were put forward to bring this case in the scope of Mendel’s theory; these theories do not fit the facts, as we shall see next week […]. (Weldon, Unpublished MSs by WFRW of his proposed book on evolution, Chap. XII, p. 11, 1901–1902; - 264-1 – Pearson Papers)
This criticism also appeared in his works published in 1902. However, in this particular case, the Mendelian interpretation concerning the inheritance of eye color was correct (Bateson 1903; Martins 2008). At this time, Bateson and Weldon also offered differing interpretations of the results of the experimental crossings of mice performed by other authors, such as Georg von Guaita (Bateson and Saunders 1902; Weldon 1902b). Despite the criticisms mentioned above, it is also possible to find some attempts to reconcile Mendel's hypothesis with Galton's hypothesis in Weldon's unpublished manuscripts, as the following quotation shows: Mendel's hypothesis is more nearly identical with that put forward in 1872 by Francis Galton than with the more limited hypothesis of Weismann. (Weldon, Unpublished MSs by WFRW of his proposed book on evolution, Chap. XII, p. 1, 1901–1902; Pearson Papers, Special Collections UCL – 264-1)
According to Weldon, in 1872, when Mendel's work was still unknown, Galton had already stated that the germ from which the individual is developed contained a number of elements, each of which would be capable, under suitable circumstances, of determining the appearance of a particular character or group of characters in an adult body. (Weldon, Unpublished MS by WFRW of his proposed book on evolution, Chap. XII, p. 1, 1901–1902; Pearson Papers, Special Collections UCL – 264-1). Weldon also called attention to the fact that both Mendel and Galton had formulated theories of inheritance based upon "direct statistical study of the relation between the visible characters of animals and plants and those of their descendants." (Weldon, Unpublished MSs by WFRW of his proposed book on evolution, Chap. XII, p. 1, 1901–1902; Pearson Papers, Special Collections UCL – 264-1). Besides that, Weldon tried to show other similarities between these theories: Galton supposes that every such element is capable of existing either in an active, or as he says in a dominant condition, or in a condition which he calls latent and Mendel has called recessive. (Weldon, Unpublished MSs by WFRW of his proposed book on evolution, 1901–1902, Chap. XIII, pp. 1–2; Pearson Papers, Special Collections UCL – 264-1; our emphasis)
Concerning the moment at which dominance would be decided, Weldon stated:
20 Darbishire crossed Japanese waltzing mice, having pale fawn and white coats and pink eyes, with ordinary white pink-eyed mice. Among the 154 offspring produced, 137 were grey and white, 1 was grey, 7 were yellow and whitish, and 9 were black and white or whitish. Although not mentioned in Darbishire's first report, all offspring without exception were dark-eyed. However, their parents were pink-eyed. These results conflicted with the cases of animals that had been studied up to then (Darbishire 1902; Bateson 1903, p. 462).
In such cases it is not impossible, and facts to be described in connection with Mendel's work make it seem not improbable, that the dominance of one or other of the two sets of alternative elements may be decided once for all at the moment of fertilization […]. (Weldon, Unpublished MSs by WFRW of his proposed book on evolution, Chap. XIII, p. 8, 1901–1902; Pearson Papers, Special Collections UCL – 264-1)
Sometimes Weldon praised the contributions brought not only by Mendel but also by the Mendelians: Some effects of selective mating have been deduced by Pearson from the general theory of correlation, apart from any special theory of heredity transmission, but the chief experimental evidence bearing on the matter is that collected by Mendel and his modern followers, working with a different object. (Unpublished MSs by WFRW of his proposed book on evolution, Chap. XIII, p. 10, 1901–1902; Pearson Papers, Special Collections UCL – 264-1; our emphasis).
All these attempts at approximation (mainly concerning Mendel's proposal) are contained in the unpublished manuscript of 1901–1902. However, most of them are not present in the work published during this period of time. As shown in the second section of this chapter, the works published by Weldon (1902a, b) are full of criticisms of Mendel's proposal. How can it be explained that the same author, at the same time, took different positions on the same subject in his published and unpublished works? We will try to elucidate this aspect in the last section of this chapter. A few years later Weldon wrote: It is evident from the facts of growth and regeneration, that the characters of any one stirp which become active in any one generation are determined by the position of that stirp with reference to the rest – i. e., process of the same nature of Mendelian "dominance." (Weldon, UCL – Special Collections – Pearson Papers – Notebooks – MCMV – WFR Weldon – St Johns College Cambridge – Jan. 2/1905 - 264/2, p. 1; our emphasis).
And added: In an individual of pure race, the stirps will each contain the present and ancestral characters of that race. […] The hybrid zygote should contain two types of stirps in equal numbers, each representing the characters of the one parent. It is possible on this assumption to develop a theory of nuclear division, which may give Mendel’s results without eliminating the ancestral influence – i. e. – without a theory of the “pure gametes.” (Weldon, UCL – Special Collections – Pearson Papers – Notebooks – MCMV – WFR Weldon – St Johns College Cambridge – Jan. 2/1905 – 264/2, pp. 1-2; our emphasis)
In both the quotations reproduced above, there is a clear intention to reconcile the model proposed by Mendel with the model proposed by Galton and the cytological studies of that time while still denying that the gametes were genetically pure. This intention is corroborated in a letter from Weldon to Pearson written in the same year: When a stirp goes into a zygote, it carries a lot of properties, but those which are manifested by the body into which the zygote develops are transmitted with increase intensity to the gametes of that body thus establishing that correlation between characters of parent and characters of its reproductive cells, which I was foolish unable to put in. But if a stirp, having become active in this way, be introduced into a zygote which the majority stirps are so active in the directions that its own properties become latent in the body into which the zygote gives rise, then that stirp transmits its properties in a weakened condition to the next generation. If you apply this luminous principle to Peas, you get Mendel’s fact. (Letter from
Weldon to Pearson, 1/1/1905, Pearson Papers, Special Collections UCL, 264/2, p. 1; our emphasis).
2.5 Final Remarks The analysis undertaken shows that in his publications (Weldon 1902a, b), Weldon made several criticisms of Mendel's proposal (1866a). These criticisms, which also appeared in the unpublished manuscript of 1901–1902, concerned aspects such as the neglect of ancestry, the purity of gametes, Mendel's "laws" of dominance and segregation, particulate inheritance, and the conception of unit-characters. He also criticized Bateson's interpretation of Mendel's proposal and his occasional introduction of changes to it, as well as his interpretation of the results of some experimental crossings. Nevertheless, in the manuscript of 1901–1902, there was an attempt to approximate Galton's theory of stirps and Galton's law of ancestral heredity to Mendel's proposal, and an appreciation of some results of the work developed by Mendelians. The suggestions of similarities between Galton's theory of stirps and Mendel's theory of heredity, as well as their relation to the cytological studies of that time, were reinforced in the manuscript of 1904–1905 and other unpublished materials of that period.21 As mentioned before, the attempts at approximation (mainly concerning Mendel's proposal) are contained in the unpublished manuscript of 1901–1902. However, most of them are not present in the work published during this period of time. As shown in the second section of this chapter, the works published by Weldon (1902a, b) are full of criticisms of Mendel's proposal. How can it be explained that the same author, at the same time, took different positions on the same subject in his published and unpublished works? That is what we will try to elucidate. First, if Weldon dedicated his time to developing a theory of inheritance that involved Mendel's principles and the law of ancestral heredity, and intended to publish it before his death, surely his problem was not an irreconcilable opposition to Mendel. We will depart from a question similar to the one posed by Robert Olby (1988, p. 314): Why did Weldon, who occupied chairs at prestigious universities (London and Oxford), was respected by his colleagues and students, and was developing a successful research program, attack Bateson's line of research at a time when Bateson did not occupy a chair at any university and was considered an outsider? Perhaps, in trying to answer it, we can find one possible explanation for the discrepancies concerning Mendel's contribution between Weldon's published and unpublished works. As Bateson gained space and attracted people enthusiastic about the research program he was developing, Weldon quite possibly began to fear that students and professionals would abandon biometrics and devote themselves to Mendelism. Thus, he attacked, too rigorously, not only several aspects related to Mendel's conceptions
21 Weldon, UCL – Special Collections – Pearson Papers – Unpublished MS by WFRW of his proposed book on evolution in 13 'clips', 1901–1902 – 264/1, Chap. 1, p. 5.
but also the line of inquiry proposed by Bateson (Weldon 1902a, p. 252). When not only Bateson and his collaborators but also his colleagues abroad, such as William Castle, started working within the scope of the Mendelian research program and getting positive results, Weldon's criticisms turned mainly to the methodology they adopted in their research, their interpretation of Mendel's proposal, and the results of their experimental crossings. At the same time, implicitly and explicitly, he suggested that the proper methodology that could provide a foundation for knowledge of inheritance should be the one adopted by the biometricians. With Weldon's attacks, Bateson, who wished to see the new field develop, began to fear that students and professionals would abandon his line of research, since Weldon was considered an authority in the field of evolution and heredity. Thus, he tried to defend it rigorously and to call attention to its relevance (Bateson 1902, pp. 107, 108, 208). When Weldon and Pearson left the Evolution Committee of the Royal Society, stopped publishing in the Reports of the Evolution Committee, and founded Biometrika, they only published papers that strictly followed the biometric methodology. This also meant that they wanted to keep their authority in the field of evolution and heredity. This view is explicit in the quotation below, reproduced from the section Letters to the Editor in Nature, where Pearson commented on the criticism published in Biometrika about Mendelian work: But as inventor of the term biometry, I may perhaps be allowed to say what I understand by it as a science, and to restate what I said with some emphasis at the Cambridge meeting. Biometry is only the application of accurate statistical methods to the problems of biology. It is no more pledged to one hypothesis of heredity than to another, but it must be hostile to all treatment which uses statistics without observing the laws of statistical science. The criticism which has been published in Biometrika upon Mendelian work has attacked its too frequent want of method and of logic, and I think no one can have read the recent literature without seeing that the criticism has been effective in its aim. (Pearson 1904, pp. 626–627; Pearson's emphasis)
When Darbishire, a former student of Weldon's, started accepting that some cases followed Mendel's principles, he did not publish in Biometrika (Darbishire 1905a, b) as he had previously done. For instance, the paper entitled "On the supposed antagonism of Mendelian to biometric theories of heredity" was published in the journal Manchester Memoirs. On the other hand, biometricians did not publish in the Reports to the Evolution Committee of the Royal Society. However, Bateson and his collaborators, such as Edith Saunders, Reginald Crundall Punnett (1875–1967), and H. Kilby, among others, published several reports in the Reports to the Evolution Committee of the Royal Society, presenting the results of experimental crossings with plants and animals that followed Mendel's principles, as well as exceptions, which they tried to explain. These and Bateson's other collaborators also published in other journals, such as Nature. Scholars from other countries, such as Castle in the United States of America and Lucien Cuénot (1866–1951) in France, also worked within this perspective, and Bateson started occupying an important place in the field of heredity and evolution. Mendel's principles became an important subject of research.
This study reinforces the view that the struggle for authority in the field of heredity and evolution was one of the main factors that motivated the controversy. In addition, we believe that the debate can be explained by a struggle for the few available resources, a point on which we agree with Egon Pearson (1938, p. 36). Bateson was recruiting several students and collaborators who developed their investigations using Mendelian methodology and terminology as well as those introduced by him. Although Weldon, especially in the unpublished material, had tried to reconcile some aspects of Mendel's proposal with Galton's proposal, he advocated the use of the methodology adopted by biometricians in their investigations, seeking to preserve authority in the field of heredity and evolution. This explains the disregard, during the controversy, of the conciliation attempts initially made by Yule or Darbishire. Weldon's attempt to reconcile Mendel and Galton, the founder of the biometric school, someone who had a good relationship with both biometricians and Mendelians, was a strategy to insert biometry and its methodology into the research that was in vogue at that time, or, even better, to convince the Mendelians to incorporate biometric methodology in their investigations. However, it was not possible from the analyzed literature to elucidate whether Weldon really wanted to synthesize Mendelism and biometrics. Acknowledgments The author would like to thank the research support from the Brazilian Council for Scientific and Technological Development (CNPq) and the São Paulo Research Foundation (FAPESP). Thanks are also extended to Ms. Gill Furlong from the Special Collections of University College London (UCL) and her staff.
References Anonymous. (1904). Zoology at the British Association. Nature, 70, 536–541. Bateson, W. (1894). Materials for the study of variation treated with special regard to the discontinuity in the origin of species. Baltimore: Johns Hopkins. Bateson, W. (1902). Mendel's principles of heredity – a defence. Cambridge: Cambridge University Press. Reproduced in: R. C. Punnett (Ed.), Scientific papers of William Bateson, 2, 4–28. Bateson, W. (1903). Mendel's principles of heredity in mice. [Letters to the Editor]. Nature, 67, 462–463. Bateson, W. (1913). Mendel's principles of heredity. Cambridge: Cambridge University Press. Bateson, W., & Saunders, E. R. (1902). Experiments in the physiology of heredity. In Reports of the Evolution Committee of the Royal Society (Vol. 1, pp. 1–60). London: Harrison & Sons. Bulmer, M. (2003). Francis Galton. Pioneer of heredity. Baltimore/London: The Johns Hopkins University Press. Castle, W. E. (1903). Mendel's laws of heredity. Science, 18(456), 396–406. Cock, A. (1973). Bateson, Mendelism and biometry. Journal of the History of Biology, 6, 1–36. Darbishire, A. D. (1902). Note on the results of crossing Japanese waltzing mice with European albino races. Biometrika, 2(1), 101–104. Darbishire, A. D. (1903a). Second report on the result of crossing Japanese waltzing mice with European albino races. Biometrika, 2(3), 165–173.
Darbishire, A. D. (1903b). Third report on hybrids between waltzing mice and albino races. On the result of crossing Japanese waltzing mice with "extracted" and "recessive" albinos. Biometrika, 2(3), 282–285. Darbishire, A. D. (1904). On the result of crossing Japanese waltzing with albino mice. Biometrika, 3(1), 1–51. Darbishire, A. D. (1905a). On the bearing of Mendelian principles of heredity on current theories of the origin of species. Manchester Memoirs, 48(24), 1–19. Darbishire, A. D. (1905b). On the supposed antagonism of Mendelian to biometric theories of heredity. Manchester Memoirs, 49(6), 1–19. Farrall, L. A. (1975). Controversy and conflict in science: A case study – The English biometric school and Mendel's laws. Social Studies of Science, 5, 269–301. Froggatt, P., & Nevin, N. C. (1971). "The Law of Ancestral Heredity" and the Mendelian-Ancestral controversy in England, 1889–1906. Journal of Medical Genetics, 8, 1–36. Galton, F. (1872). On blood relationship. Proceedings of the Royal Society, 20, 394–402. Galton, F. (1875). A theory of heredity. Journal of the Anthropological Institute, 5, 329–348. Galton, F. (1889). Natural inheritance. London: Macmillan. Galton, F. (1892). Hereditary genius: An enquiry into its laws and consequences. 2nd edition. London: Macmillan. Galton, F. (1897). The average contribution of each several ancestor to the total heritage of the offspring. Proceedings of the Royal Society, 61, 401–413. Harvey, R. D. (1985). The William Bateson letters at John Innes Institute. Mendel Newsletter, 25, 1–11. Mackenzie, D. (1981). Sociobiologists in competition: The biometrician-Mendelian debate. In C. Webster (Ed.), Biology, medicine and society, 1840–1940 (pp. 243–287). Cambridge: Cambridge University Press. Mackenzie, D., & Barnes, B. (1979). Scientific judgement: The biometry-Mendelism controversy. In B. Barnes & S. Shapin (Eds.), Natural order: historical studies of scientific culture (pp. 191–210). Beverly Hills: Sage. Magnello, E. (1998). Karl Pearson's mathematization of inheritance: From ancestral heredity to Mendelian genetics (1895-1909). Annals of Science, 55(1), 35–94. Martins, L. A.-C. P. (2002). Bateson e o programa de pesquisa mendeliano. Episteme. Filosofia e História da Ciência em Revista, 14, 27–55. Martins, L. A.-C. P. (2005). La controversia mendeliano-biometricista: un estudio de caso. In H. Faas, Saal, & M. Velasco (Eds.), Epistemología e Historia de la Ciencia. Selección de Trabajos de las XV Jornadas (pp. 501–508). Córdoba: Facultad de Filosofía y Humanidades, Universidad Nacional de Córdoba. Martins, L. A.-C. P. (2006). Bateson, Weldon y Thiselton Dyer: la controversia de las Cinerarias. In J. Ahumada, M. Pantalone, & V. Rodríguez (Eds.), Epistemología e Historia de la Ciencia. Selección de Trabajos de las XVI Jornadas (Vol. 12, pp. 395–401). Córdoba: Universidad Nacional de Córdoba. Martins, L. A.-C. P. (2007a). Weldon, Pearson, Bateson e a controvérsia mendeliano-biometricista: uma disputa entre evolucionistas. Filosofia Unisinos, 8(2), 170–190. Martins, L. A.-C. P. (2007b). Karl Pearson, William Bateson e a controvérsia da homotipose. Episteme, 26, 1–16. Martins, L. A.-C. P. (2008). Darbishire, Bateson e Weldon: A controvérsia sobre hereditariedade em camundongos (1902-1904). Filosofia e História da Biologia, 3, 213–240. Martins, L. A.-C. P. (2013). William Bateson's Materials for the study of variation: An attack on Darwinism? In P. Lorenzano, L. A.-C. P. Martins, & A. C. K. P.
Regner (Eds.), History and philosophy of life sciences in the South Cone. Texts in philosophy (Vol. 20, pp. 273–295). London: College Publications. Martins, L. A.-C. P., & Venturineli, K. R. (2011). Relações entre biologia e estatística: Karl Pearson e o princípio da homotipose (1901–1902). Revista Brasileira de História da Matemática, 11(23), 39–51. Online version available at: http://www.rbhm.org.br/issues/RBHM%20-%20vol.11,no23/5%20-%20Lilian%20&%20Katia%20-%20final.pdf. Accessed: 26/09/2018.
2 Weldon’s Unpublished Manuscript: An Attempt at Reconciliation Between…
27
McMullin, E. (1987). Scientific controversy and its termination. In H. T. Engelhardt Jr. & A. L. Caplan (Eds.),. (oOrgs.). Case studies in resolution and closure of disputes in science and technology (pp. 49–92). Cambridge: Cambridge University Press. Mendel, G. (1866a). Experiments in plant hybridisation. (1913). In Bateson, W. Mendel’s principles of heredity (trans. Druery, C. T., pp. 335–379). Cambridge: Cambridge University Press. Mendel, G. (1866b). Experiments in plant hybridisation. (1966). In C. Stern and E. Sherwood (Orgs.), The origins of genetics: a Mendel source book (trans. Sherwood, E., Vol. 1, pp. 1–48). San Francisco: W. Frieman and Company. Nordmann, A. (1992). Darwinian’s at war. Bateson’s place in histories of Darwinism. Synthese, 91, 53–72. Norton, B. J. (1978). Karl Pearson and statistics: The social origins of scientific innovation. Social Studies of Science, 8(1), 3–34. Olby, R. (1988). The dimensions of scientific controversy: the Biometric-Mendelian debate. British Journal for the History of Science, 22(3), 299–320. Pearson, K. (1901). Mathematical contributions to the theory of evolution. IX. On the principle of homotyposis and its relation to heredity, to the variability of the individual, and to that of the race. Part I - Homotypos in the vegetable kingdom. Philosophical Transactions of the Royal Society of London. Series A., 197, 285–379. Pearson, K. (1904). Mendel’s law. Nature, 70, 626–627. Pearson, E. (1938). Karl Pearson: An appreciation of some aspects of his life and work. Cambridge: Cambridge University Press. Pearson, K., Alice, L., Ernest, W., Agnes, F., & Cicely, F. (1901). Mathematical contributions to the theory of evolution. IX. On the principle of homotyposis and its relation to heredity, to the variability of the individual, and to that of the race. Part-Homotyposis in the vegetable kingdom. [Abstract] Proceedings of the Royal Society of London, 68, 1–5. Polizello, A. (2008). Modelos de herança no século XIX: a teoria das estirpes de Francis Galton. História e Filosofia da Biologia, 3, 41–54. Polizello, A. (2011). O desenvolvimento das ideias de herança de Francis Galton. Filosofia e História da Biologia, 6(1), 1–17. Provine, W. B. (2001). The origins of theorehical population genetics. Chicago: The University of Chicago. Punnett, R. C. (1926). William Bateson. The Edinburgh Review or Critical Journal, 244, 71–86. Radick, G. (2015). Beyond the “Mendel-Fisher controversy”. Science, 350(6257), 159–160. Robinson, G. (1979). A prelude to genetics. Theories of a material substance of heredity: Darwin to Weismann. Lawrence: Coronado Press. Sapp, J. (1983). The struggle for authority in the field of heredity, 1900-1932. Journal of the History of Biology, 16, 311–342. Sapp, J. (1987). Beyond the gene. Cytoplasmic inheritance and the struggle for authority in genetics. New York: Cambridge University Press. Sapp, J. (1990). Where the truth lies. Franz Moewus and the origins of molecular biology. Cambridge: Cambridge University Press. Shipley, A. E. (1908). Walter Frank Raphael Weldon. 1860-1906. Obituary notices of fellows deceased. Proceedings of the Royal Society, series B, 80, xxv–xli. Tabery, J. G. (2004). The “evolutionary synthesis” of George Udny Yule. Journal of the History of Biology, 37, 73–101. Weldon, W. F. R. (1894). The study of animal variation. (Critical review of Bateson, Materials for the study of variation, treated with special regard to discontinuity in the origin of species). Nature, 50, 25–26. Weldon, W. F. R. (1895). 
Attempt to the death-rate due to the selective destruction of Carcinus moenas with respect to a particular dimension. Proceedings of the Royal Society, 57, 360–379. Weldon, W. F. R. Unpublished MSs by WFRW of his proposed book on evolution (1901–1902). Special Collections, University College of London, Pearson Papers, 264-1. Weldon, W. F. R. (1902a). Mendel’s laws of alternative inheritance in peas. Biometrika, 1, 228–254.
28
L. Al-Chueyr Pereira Martins
Weldon, W. F. R. (1902b). On the ambiguity of Mendel’s categories. Biometrika, 2, 44–54. Weldon, W. F. R. (1903). Mr Bateson’s revisions of Mendel’s theory of heredity. Biometrika, 2, 286–298. Weldon, W. F. R. (1904–1905?). MSs of WFRW’s unpublished “Theory of inheritance” with notes on it by Edgar Schuster. Special Collections, University College of London, Pearson Papers, 264-2. Weldon, W. F. R. (1905). Letter to Pearson (1/1/1905). Special Collections, University College of London, Pearson Papers, 264/2. Yule, U. (1902). Mendel’s laws and their probably relations to interacial heredity. New Phytologist, 1(193–207), 222–238.
Chapter 3
Blood, Transfusions, and Longevity
Ronei Clécio Mocellin and Luciana Zaterka
3.1 Introduction

There is a vast literature dealing with the progress of transfusion techniques and with investigations of senescence. In general, however, there is little interest in the metaphysical, philosophical, and even political factors that have led to the use of these techniques and to the connection of blood with aging. We do not propose a linear historical description of transfusion techniques or of the origins of the human struggle against aging. Instead, we offer a historical-epistemological reflection on three concrete contexts in which blood transfusions were associated with the fight against senescence. Our goal, therefore, is to analyze some historical cases in which the triad of blood, transfusion, and longevity is present. When and why did blood achieve this unprecedented place in the history of biology and medicine? The answer seems to point to seventeenth-century England and to some researchers of the Royal Society. One will see that, like some aspirations of contemporary transhumanism, the goal of these seventeenth-century researchers was to promote a technical intervention in the human body in order to extend its existence. The "body" should be considered a "technical object," though not necessarily a machine, and the maintenance of life and vitality could be the subject of technological intervention. Based on this spatial-temporal delimitation of the origin of blood transfusions, one may be led to believe that since the seventeenth century these technical operations have been gradually accepted by doctors,
as well as the association between blood and senescence. However, that was not exactly what happened: there was no linear progress, either in blood transfusions or in longevity studies. We intend, nonetheless, to detect certain epistemic continuities concerning the triad of blood, transfusion, and longevity. Our main goal will be to make explicit some of the philosophical motivations that propelled the three historical contexts studied. From this, we consider it possible to identify a founding epistemic element, present in modernity as much as in contemporaneity, namely an experimental philosophy in the domain of life. In spite of that, this convergence does not imply any necessary continuity, in terms of ethics, politics, or economy, in the investigation and application of blood transfusions over time and in their relations with senescence. Hence, we consider that the history of the relations between blood transfusions and aging must be understood as a dynamic, longue durée historical process. If we adopt this historiographical concept, originally proposed by Fernand Braudel, it is because we judge it pertinent for thinking about our triad, both in order to clarify questions concerning the past and to reflect on possible future consequences of the "technization" of the human body (Braudel 1958; Guldi and Armitage 2014).

We will highlight an element common to the historical cases analyzed here: the application of what we call the "Baconian program" for the investigation of nature, if not strictly, at least in its general spirit of taking knowledge to be the result of experimentation. Knowledge about nature in general, and about the human body in particular, would be possible to the extent that researchers were able to manipulate them technically. The progress of science should not only improve our understanding of the natural world, but also serve to increase our power over ourselves. In this sense, our analysis of longevity, and hence of life and death, draws on concepts coming from natural philosophy, which, as we know, did not distinguish between biology, chemistry, and medicine until the mid-eighteenth century.

We will begin this chapter by analyzing the metaphysical-theological aspirations present at the origin of the Baconian program of research, and then we will indicate the reasons that led to its abandonment. After a virulent controversy among French doctors in the second half of the seventeenth century, transfusions were abandoned, even banned in some places, and their goal of promoting longevity fell into disrepute. In addition, we will note that although the experimental philosophy of the mentors of this program, Francis Bacon (1561–1626) and Robert Boyle (1627–1691), profoundly influenced eighteenth-century philosophers, the belief that life was the work of a creator was widely rejected by doctors who also dedicated themselves to philosophizing about the origin of "living matter." Our focus, however, will be on discussing the technical results that reversed the initial euphoria generated by the Royal Society program, leading to the abandonment and even banning of blood transfusions. The second historical case that we will study gives continuity to the application of the Baconian program and leads to a process of institutionalization. This case will also allow us to indicate the connections of blood research with senescence, both through the physiological investigations in vogue in the second half of the nineteenth century and through Charles Darwin's (1809–1882) theory of evolution.
Here we will highlight some ideas and practices carried out by the Russian doctor
and philosopher Alexander Alexandrovich Malinovsky-Bogdanov (1873–1928). He was the director of the world's first institution devoted exclusively to the study of blood and transfusion, the Institute of Blood Transfusion, established in Moscow in 1926. Finally, our third historical case deals with the problem of blood in the contemporary world, especially within the philosophical movement known as transhumanism. The fight against aging is contained in the first article of "The Transhumanist Declaration" (https://humanityplus.org/philosophy/transhumanist-declaration/), which states:

Humanity stands to be profoundly affected by science and technology in the future. We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to planet Earth.
The case we will address concerns the controversy surrounding the technique of parabiosis, an experiment in which scientists surgically join two animals to create a shared circulatory system.
3.2 Metaphysics, Practice, and Forgetfulness

The first known blood transfusion, published by the newly founded Philosophical Transactions, took place in England in 1666 between two dogs: the animals were tied down, the arteries and veins in their necks were opened, and the blood was transferred through "needles" made of goose feathers inserted into their respective blood vessels (Lower 1666, pp. 353–358). This was perhaps one of the first manifest and publicized blood transfusions, but the ideas underlying the concept of transfusion had already been explored and discussed by the Royal Society's mentor. In fact, in important works such as De vijs mortis (1616), Historia Vitae et Mortis (1623), and New Atlantis (1627), Francis Bacon affirmed some fundamental ideas on the subject. The first of these concerns the hitherto unimaginable possibility of man reversing the course of nature by inserting young spirits into aged bodies through a "transfusion" of spirits, extremely subtle material bodies responsible for the activities of matter: "If it were possible for young spirits to be put into an old body," he says, "it is probable that this great wheel might put the lesser wheels in motion, and turn back the course of nature" (Bacon 2007, XII, p. 245). In other words, Bacon believes that if one can properly handle the spirits that make up human bodies, one may delay the senescence of these bodies and achieve a longer life. By believing in the possibility of human, material, and operational intervention on bodies, Bacon opens the door to an epistemological conception that points to the human body as a technical object. Accordingly, we are faced with the second fundamental idea explored by Bacon. As ministers and interpreters of nature, we must know, explore, intervene in, and thus torment nature, whether inanimate or animate. Hence, one can both approach the work of God and expand one's dominion over the natural world in order to provide a true science for the welfare of most men. We can thus affirm that
for Bacon, God not only created the world for us humans, but also created it to be used and experienced by us (Bacon 1963a, IV, p. 114). We are within the scope of a scientific-philosophical proposal that is no longer contemplative, but operative. Experimental philosophy gains here an unprecedented locus, and the members of the Royal Society, for the most part, adhere to the Baconian program of research. There thus emerges a long-lasting narrative that believes in the potential of nascent science (natural philosophy) and technology (the mechanical arts) to rescue man from his lost and degenerate state through a new conception of operational and experimental knowledge. The guiding thread of this new method is the control of natural bodies, and its privileged space is the laboratory, where it is possible to carry out confined, controlled, witnessed, and finally replicated experiments, that is, experiments validated by a series of observers. It is in this new epistemic space that the biological, chemical, and medical study of blood begins to gain irreversible contours. Bacon believed in the possibility of restoring or rejuvenating bodies through the "transfusion" of spirits, which, as opposed to passive matter, are active bodies, bodily fluids.

Bacon's faith in the unlimited prospects of medical and technological implementations implied a drastic redefinition of the meaning of the natural order of life. A new understanding of nature progressively dissolved the very concept of natural conditioning. Aging itself was one of these natural conditions (Giglioni 2005, p. 141).
It is clear that this epistemological perspective carried an important ontological implication, namely the impossibility of distinguishing between the natural and the artificial. Thus, from then on man was not only able to accelerate the ordinary course of nature, but was also capable of producing new natures, artificial natures ontologically similar to the original ones. That is why biology, chemistry, medicine, and ultimately the sciences of the "body" gained an important locus, and experiments in physiology and anatomy became commonplace. The chemical and biological products of living bodies therefore began to be of general interest. In this sense, Bacon was no doubt an enthusiast:

Of that other defect in anatomy (that it has not been practised on live bodies) what need to speak? For it is a thing hateful and inhuman (...) but yet it is no less true that many of the more subtle passages, pores, and pertusions appear not in anatomical dissections, because they are shut and latent in dead bodies, though they be open and manifest in live… (Bacon 1963b, IV, p. 386).
Following the then-recent discovery of the circulation of the blood by William Harvey (1578–1657), published in his important Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus (1628), many of the Royal Society's experimental philosophers, such as Robert Hooke, John Ray, Joseph Glanvill, and Robert Boyle, posed a number of questions about the nature of blood. Harvey initially calculated the total amount of blood that could be drained from sheep, pigs, and some other mammals. He then measured the volume of the left ventricles of these animals and calculated that, if the ventricle emptied with each beat, in one hour the total volume of blood pumped would be much larger than the amount ingested, or even than the amount contained in the whole animal (Schultz et al. 2002, p. 177). In fact, this would be true even if only one tenth of the blood
contained in the ventricle were ejected at each beat. So Harvey came to his famous conclusion: "It is a matter of necessity that the blood make a circuit, that it returns to where it has gone" (Schultz et al. 2002, p. 177). The fact that Harvey opened the way to the new physiology, however, does not mean that he operated on the same biological and chemical assumptions as the members of the Royal Society. In the mid-seventeenth century, we find two different perspectives on human physiology: the iatromechanics built on René Descartes's (1596–1650) "man-machine," and the iatrochemistry that began with Paracelsus (1493–1541) and used a more qualitative and active model of matter. The latter, in fact, understood the human body as a chemical laboratory, believing that through chemistry (distillation, purification, fermentation, etc.) one could discover some of the hidden properties of matter, ultimately making visible the invisible principles at work in substances and processes. Now, many of the experimental philosophers of the Royal Society united aspects of both traditions in their physiological endeavors. And, in this sense, they distanced themselves from Harvey in their very understanding of the biological nature of blood. Harvey, on the one hand, criticizes the Aristotelian system of the four qualities and humours and, on the other, rejects the so-called chemical principles, such as spirits, salts, ferments, etc. Hence, Harvey states that blood alone is the vital principle, the source and origin of life, and also the cause of circulation. He did not, therefore, operate with the distinction between arterial and venous blood, unlike other experimental philosophers, who believed that spirits were the vital principle, the productive cause of blood, which would then lead to the conception of two "types" of blood, depending on how the spirits acted in it. This discussion was fundamental, because it raised the question of whether or not blood is homogeneous, as well as that of the effective cause of its circulation. Boyle would demonstrate, through chemical and physiological experiments, the composition and different properties of blood. By means of the distillation of blood, for example, he obtained, besides oily and phlegmatic parts, a clear liquor made up of saline and volatile parts. With this Boyle demonstrated the non-homogeneity of blood, as well as its alkaline and volatile constitution, which had something in common with ammonia salt (Boyle 1999a, IX, pp. 44–57). In this sense, the author of The Sceptical Chymist could introduce a series of questions about the nature of blood in a text published in Philosophical Transactions in 1666. In this text he puts 16 questions to the reader, among them: could a transfusion change the breed of one dog into that of another? Boyle's assumption is that transfusion may induce physical changes in the recipient animal. In other words, from the Baconian perspective, Boyle believes that blood has the power to transform the biological makeup of the receiving body. Alongside this perspective concerning physical change, some members of the Royal Society believed that transfusions could also cause mental alterations. Thus, on November 23, 1667, in London, the first experiment with a human, Arthur Coga, a man known at the time for being mentally unstable, took place. The experiment was intended, through the blood, to "restore" his mind.
What one can observe in the dozens of experiments conducted at the Royal Society in the 1660s by the many adherents of experimental philosophy is the belief in the rejuvenating potential of blood products.
The technique also had therapeutic potential. In this age of therapeutic bloodletting the concept of injection led to the notion of renewing and invigorating old or diseased blood with an infusion of new, healthy blood (Guerrini 1989, p. 403).
Following the publication in Philosophical Transactions of Boyle's descriptions of the transfusions practiced by Richard Lower (1631–1691), similar experiments began to be carried out in Paris. In March 1667, Jean-Baptiste Denis (1643–1704), a recently qualified doctor from the Faculty of Medicine of Montpellier, published in the Journal des sçavans a letter in which he reported a blood transfusion between two dogs. In this letter he pointed out an important advantage over the English experiments: the life of the donor animal was not sacrificed (Denis 1667). After successful transfusions, in which the lives of both donor and recipient were preserved, Denis subjected a man suffering from mental disorders to three transfusions. The man reacted well to the first two, showing improvement, but died shortly after the last transfusion. The widow accused Denis of her husband's death and the case was brought to trial in the Paris Parliament (for a judicial review of the case, see Moore 2003). In fact, the case marked the height of a major controversy between advocates and detractors of blood transfusion practices. In general, advocates and detractors of blood transfusion agreed on the circulation of the blood and its importance for maintaining health. Nor was the composition of blood the subject of dispute; it was not the theoretical foundations of either position that fueled the quarrel. The central point of contention concerned the effectiveness of transfusions in preserving health and extending life. The advocates of blood transfusion did not set out to support a theory about blood, but to defend, above all, the right to experiment. Following Boyle, their main objective was technical progress in the investigation of the human body, and they argued that it was necessary to multiply experiments in order to evaluate their benefits. Boyle's supporters considered transfusion a way to introduce a remedy more effectively, since fresh blood from a healthy animal was the best resource for the recovery of vitality. The detractors, on the other hand, believed that the introduction of foreign blood into the bloodstream, in addition to corrupting the blood of the individual, introduced characteristics of the donor. For them, blood was an original production of the individual and could not be universally shared. These were some of the objections raised by one of the principal detractors of transfusion, the physician and philosopher Guillaume Lamy (1644–1683). According to him, blood was not only a particularity of the species but also an individual singularity produced by the digestion of food, and the blood of a donor could not acquire the same physiological characteristics as the recipient's blood (for a detailed review of the controversy, see Andrault 2014). After the initial euphoria, the lack of concrete results led to the near abandonment of transfusions, especially in France, where the Parliament of Paris decreed a definitive ban in 1670, but also in England, where the practice came to be interpreted
as a criminal act. Almost a century later, Dr. Jean-Joseph Menuret de Chambeaud (1739–1815) examined the controversy in detail in the entry "Transfusion" of the Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers (Duflo 2003). According to him, it was difficult to decide which of the parties, the advocates or the detractors of blood transfusion practices, was more in accordance with the truth, but it was permissible to think that Mr. Denis was the one who had altered the truth the most, first because he was the most interested in sustaining his opinion, and second because transfusion ceased to be practiced not only in France but also in foreign countries, a clear proof of the recognition of its ill effects (1765, XX, p. 552).
In addition to the therapeutic failure, the abandonment of transfusions was also motivated by a gradual change in the explanation of the origin of life and of the principles governing its maintenance. In the Baconian project, longevity rested on a theological presupposition that sought in the sciences and techniques a return to, or at least an approximation of, the moment of Creation. Recall, for example, that Boyle proposed his new corpuscular theory of matter, among other things, to demonstrate that the resurrection of bodies was perfectly possible (Boyle 1999b, VIII, pp. 295–313). Experimenting with blood transfusions thus fulfilled both objectives, as it depended on technical advancement and helped to extend the life span. However, throughout the eighteenth century, at least in the France of the Encyclopedists, the quest for longevity through blood transfusion was no longer the object of experiments, because, as we saw previously, countless experiments on living beings had proved unsuccessful. The return of medical interest in transfusions would occur only in the early nineteenth century. The background for the rehabilitation of this therapeutic practice would no longer be metaphysical-theological, nor based on any philosophical principle, but would be confined to medical practice, particularly obstetrics. It was precisely the obstetrician James Blundell (1790–1878) who, in 1818, began to carry out direct transfusions with the aid of a new technical apparatus. With the discovery of the ABO blood groups by the Austrian physician Karl Landsteiner (1868–1943) in 1901, in addition to technical improvements and advances in the preservation of blood, transfusions gained a definitive place in medical practice. For the time being, advocates of transfusions stopped claiming them as a source of longevity, an idea that had fueled the original design of the Royal Society.
3.3 “Comradely Exchange of Life” The interest in research on longevity and the causes of aging were also influenced by the publication of Darwin’s Origin of Species in 1859. Two research lines converged, the first line sought to point out the physiological causes for aging and death. On the second, hypotheses were suggested to explain the evolutionary causes of these phenomena (Sengoopta 1993).
The physiological investigations carried out by the French-American Charles-Édouard Brown-Séquard (1817–1894), successor of Claude Bernard (1813–1878) at the Collège de France, fall within the first perspective. Brown-Séquard attributed aging to the decline of the sexual glands, and thus recommended their replacement, which in the case of male animals could be done by inoculating extracts of the sexual organs of young animals, especially monkeys. This physiological and hormonal approach, which consisted in inoculating a biological material to make up for the loss of that responsible for the maintenance of vigor and health, was pursued by two other great experts in the prolongation of life, the Austrian Eugen Steinach (1861–1944) and the Russian Samuel Voronov (1866–1951). The work carried out by Brown-Séquard and his successors on the physiology of aging and on hormonal therapies, whether by replacement, grafting, or the stimulation of the sex glands, led to the consolidation of a new field of research called endocrinology (Celestin 2014).

The investigations carried out by the Russian biologist Ilya Ilyich Metchnikov (1845–1916) exemplify the second perspective, given that, besides proposing an explanation of the physiological processes, he also sought to understand the dynamism of aging from an evolutionary point of view. (Metchnikov studied at the University of Kharkov, was a professor of zoology and embryology at the University of Odessa, and in 1887 joined the newly created Pasteur Institute in Paris; he was awarded the Nobel Prize in Physiology/Medicine in 1908, shared with the German bacteriologist Paul Ehrlich.) From his research on inflammatory reactions he discovered a particular type of cell, the phagocyte, as well as the mechanism of phagocytosis in the defense of the host organism. Metchnikov's texts in this domain constitute the origin of immunology in its specific modern sense, which consists of actions that confer host protection based on an active response of an organized system (Tauber and Chernyak 1991; Vikhanski 2016). During a study trip to Germany (University of Wuerzburg) in 1863, Metchnikov read Darwin's Origin of Species. His initially unfavorable reaction was directed not at the book's key ideas, such as the common past of species and their diversification through transformism, but at incongruities with his own observations in comparative embryology. A synthesis of his ideas was set forth in his Études sur la nature humaine, in which he coined the term gerontology to denote the study of senescence (Metchnikoff 1903; Vucinich 1989; Achenbaum 1995).

In the Russia of this period, interest in longevity and the maintenance of life transcended purely scientific investigations and was also an object of reflection for theologians and philosophers connected with the Orthodox Church. The central theme was the interpretation given by Orthodox Christianity to the phenomenon of resurrection. This was the case with the philosopher Nikolai F. Fedorov (1827–1903), who proposed a teleological interpretation of Darwinian evolution, with man as the final design of creation, though still an imperfect creature subject to natural evolution. This natural imperfection not only could but should be corrected by science, which, allied with Christian morality, was the key to attaining resurrection and immortality. For Fedorov it was the task of men, through knowledge and religious morality, to build a cooperative and universal society, prosperous and able to
control the forces of nature. His texts were collected and published posthumously, and his ideas had a great influence on the Russian intellectual environment (Fedorov 1990 [1906]). Perhaps it was not a mere coincidence that several scientists who stood out in this field of research were of Russian origin, since the country's own culture gave prominence to the subject. (Ivan P. Pavlov [1849–1936] was also awarded the Nobel Prize in Physiology/Medicine, in 1904. An example of this Russian tradition is the magnificent The Death of Ivan Ilitch by Leon Tolstoy [1828–1910], which describes the life and death of Ivan Ilitch Metchnikov, the older brother of the biologist mentioned above; see Todes 1989, p. 83.) This brief commentary on the scientific and theological context of imperial Russia serves only as background for highlighting some of Bogdanov's reflections on transfusions and their relation to vitality, which he calls viability, both physical and social. Bogdanov defended Karl Marx's (1818–1883) "social materialism," which he proposed to complement with the scientific epistemology of Ernst Mach (1838–1916) and Wilhelm Ostwald (1853–1932) (Sochor 1988). For Bogdanov, blood transfusions represented an experience as much biological as social, and reflecting on them offered the possibility of spelling out the philosophical foundations of his notion of socialist collectivism. Bogdanov also considered that the experimental philosophy proposed by Bacon offered the best method for studying the origin and development of living organisms (Jensen 1978, p. 61). Unlike his contemporaries, Bogdanov took up again the hypothesis, put forward by Bacon and his followers, that longevity was related to the quality of our blood. However, he maintained that the rejuvenation techniques then employed had little chance of success. According to him, this was because they were based on a mistaken physiological conception, centered on the individual, whereas the most appropriate approach would be to develop a systematic and collectivist conception of the physiology of aging. Thus, investigations into aging should follow the same epistemic model applied to the description of the unity of the sciences as well as to the socialization of communal life. Blood transfusions were therefore the most suitable method for a collective life-prolonging physiological program, since they employed blood as a vehicle for a universal immunization process (Krementsov 2011, 2014). Bogdanov studied natural sciences at the University of Moscow and graduated in medicine at the University of Kharkov (1899). In these places he came into contact with "mutualistic evolutionism" and its relation to senescence through professors like Kliment Arkadievich Timiriazev (1843–1920) in Moscow and followers of Metchnikov in Kharkov. Bogdanov was not, however, a practicing doctor, nor did he work directly on physiological investigations (Jensen 1978). What, then, could be his interest in blood transfusions as a solution to aging and the loss of vitality? He did not present his ideas about the collective importance of transfusions for rejuvenation in his theoretical and "professional" works. For this reason, we first highlight how he refers to transfusions in one of his science-fiction works, Red Star: A Utopia (1908), and then turn to the general conceptions that led to the creation of the Institute of Blood Transfusion, established in Moscow in 1926 (Bogdanov 1984; Tartarin 1994).
In the utopian novel Red Star, Bogdanov recounted the journey of a Terran, Leonid, to Mars. It was not an idyllic and static world, but a place in which the collectivization of the means of production and of all social relations, up to and including "physiological collectivism," had led to an advanced stage of civilization. Collectivism provided a pleasant and active life for all, in which intellectual and labor activities were combined. Over the course of the chapters Bogdanov describes, compares, and reflects on the civilizations of Mars and Earth; here we are interested only in Leonid's visit to a Martian hospital. The Martians lived a long time, and the doctor accompanying the Terran explained the reason for it: the "comradely exchange of life" (Bogdanov 1984; Adams 1989). Amazed, Leonid asks, "are you able to rejuvenate old people by introducing young blood into their veins?" Doctor Netti responds:

to an extent, yes, but not altogether, because there is more than just blood in the organism, and the body in its turn also has an effect upon the blood. That is why, for example, a young person will not age from the blood of an old one. The age and weakness in the blood are quickly overcome by the organism, which at the same time absorbs from it many elements which it lacks. The energy and flexibility of its vital functions also increase (Bogdanov 1984, pp. 85–86).
Leonid asks about the technique employed, and Netti clarifies:

such an exchange involves merely pumping the blood of one person into another and back again by means of devices which connect their respective circulatory systems. If all precautions are taken, it is a perfectly safe procedure. The blood of one person continues to live in the organism of the other, where it mixes with his own blood and thoroughly regenerates all his tissues (Bogdanov 1984, p. 85).
Finally, Leonid wonders why these techniques were not used on Earth, since, he tells Netti, transfusions had long been known and even practiced by Earth doctors. Netti, expressing Bogdanov's point of view, does not respond categorically, but offers a hypothesis which he considers probable:

perhaps there are organic factors which render the method ineffective on Earthlings. Or perhaps it is merely due to your predominantly individualistic psychology, which isolates people from each other so completely that the thought of them is almost incomprehensible to your scientists (Bogdanov 1984, p. 86).
Russian academic research on transfusions lagged behind that of Western countries, especially the United States. For this reason, the doctor Vladimir Shamov (1882–1962) spent a long period there learning the new transfusion techniques developed by Alexis Carrel (1873–1944). On his return to his country, Shamov published the first text in Soviet Russia on operations followed by transfusions (1921), describing techniques for detecting blood groups and the use of anticoagulants (Krementsov 2011). After a trip to London in 1921, Bogdanov would also help disseminate the techniques practiced in the West, especially those described by the British physician Geoffrey Keynes (1887–1982). Keynes's book Blood Transfusion described the procedures he had employed during the First World War and the instrumental innovations available to blood transfusion doctors (Keynes 1922). Bogdanov made this book his scientific manual for putting into practice his theories of physiological collectivism and longevity.
After his return to Russia in 1922, Bogdanov started a program of blood transfusion with the purpose of increasing physical vitality, in which he himself, his wife, and some friends became guinea pigs. The idea was to carry out a direct exchange of blood (500–800 ml) between an older person and younger ones, provided they were of the same sex (no explanation is given for this requirement) and of compatible blood groups. The first results were mixed, though sufficient to encourage Bogdanov and his supporters to propose the creation of a public institution dedicated exclusively to the study of blood and transfusions. This was achieved with the creation, in 1926, of the Institute of Blood Transfusion (Instituta perelivanya krovi) in Moscow. Bogdanov was its first director. In his Struggle for Viability (1927), Bogdanov summed up his overarching vision for life extension, reaffirming that the deterioration of vitality was due to a compromise of the "organizational relationship" of cells and their "inner environment." Among the tasks of this new state institution were "to study and elaborate issues related to blood transfusions"; to provide "the theoretical and practical instruction of physicians through the organization of occasional and permanent courses on blood transfusions"; "to publish scientific and popular literature on blood transfusions"; and "to manufacture standard sera, as well as preparations, apparatuses, and accessories for blood transfusions" (Bogdanov 2002; Krementsov 2011, pp. 69–70). In 1928, after a "comradely exchange of life" with a student suffering from tuberculosis, a disease to which he considered himself immune, Bogdanov died. His institute, however, continued to function, being the first in the world to organize a national network for the collection and stocking of blood. His successor was Alexander A. Bogomolets (1881–1946), who structured research programs on the properties of blood and its relation to senescence. In addition to agreeing with the "physiological system" proposed by Bogdanov, he also criticized the hypothesis of his former professor Metchnikov on the formation of connective tissue as the cause of the degeneration of complex cells. Bogomolets focused his studies on aging on the autocatalytic processes that occur in cells, and his work had a major influence on the investigations of Soviet researchers in gerontology (Stambler 2014). The motivation for institutionalizing the technical manipulation of blood was theoretical and experimental, but the new bureaucratic elite was also seriously concerned about the premature death of revolutionary leaders. The prime example was undoubtedly Lenin's death at the age of 54. How were the health and lives of the leaders of the new Soviet state to be safeguarded and prolonged? It was Leonid B. Krasin (1870–1926), a Soviet engineer and ambassador to England, a disciple and friend of Bogdanov, who proposed the embalming of Lenin's body. The interesting thing to note is that this preservation, in addition to embodying revolutionary hope in the capacities of the sciences of the future, also manifested an aspiration to resurrection deeply anchored in Russian Orthodox culture.
Finally, Krasin’s project converged with Fedorov’s technical and theological aspirations, which represented an important characteristic of Russian Orthodox Christianity with the promise of the progress of a new “proletarian science.” Perhaps it is not only for political reasons that Lenin’s body is maintained in his Mausoleum, but also because of scientific and theological aspirations rooted in Russian culture (Churilov and Stoev 2013).
Although there may be such a convergence of traditions in Russian culture, what we wish to emphasize here is the interconnection established by Russian biologists and intellectuals between evolutionism, senescence, and the physiological processes of blood. In Bogdanov's case, his great contribution was to offer a philosophical foundation for blood transfusions and to connect them, as Bacon had done at the Royal Society, with the prolongation of life. With his Institute, blood transfusions ceased to be a utopia and began to occupy a central place in the Soviet health system. Russia was then a socialist state under construction, whose experiments in the "scientific organization" of labor still had a long way to go before reaching the collectivism of the red planet.
3.4 Blood and Transhumanism

As early as the nineteenth century, the French physiologist Paul Bert (1833–1886) performed an innovative experiment on rats, so-called parabiosis ("para" from the Greek for "beside" and bios meaning "life"). This experiment, in which two animals are surgically joined to create a system of shared circulation, was published in his Expériences et considérations sur la greffe animale and was awarded the science prize of the French Academy of Medicine (Bert 1863). Bert's main objective was to demonstrate that the veins uniting the two rats allowed injected fluid to pass from the vein of one animal into the vein of the other (Jebelli 2017). Clive McCay (1898–1967), an American biochemist and gerontologist who worked at Cornell University between the 1930s and 1960s, was the first to apply this experimental model to the study of senescence, aiming, above all, to unravel the biological mechanisms of aging. In a Baconian perspective, he believed that if he replaced the blood of old rats with that of young rats he might be able to slow the aging of the former. He believed there was some substance in the blood that could rejuvenate older cells. His aim, then, was to understand how this mechanism worked and what substances would be involved in this "reaction." The technique, however, fell into disuse after the 1970s, probably because many mice died of parabiotic disease, which occurred approximately 1–2 weeks after the partners were joined and which could, according to some researchers, be a form of tissue rejection. It was only at the beginning of the twenty-first century that Irving Weissman and Thomas A. Rando of Stanford University brought parabiosis back into scientific practice for the purpose of studying blood stem cells. By linking the circulatory system of an old mouse to that of a young mouse, the scientists produced some surprising results, to say the least. In the heart, brain, and muscles, for example, the blood of young mice seemed to bring new life to aging organs, making older mice stronger and healthier. Thus, for example, the Stanford scientists investigated the possibility of muscle regeneration and liver cell proliferation in the parabiosis scenario. After an injury, muscle regeneration was assessed through the formation of myotubes, developing muscle cells or fibers with a centrally located nucleus that express embryonic myosin heavy chain, a specific
marker of regenerating myotubes in adult animals. Five days after the injury, the muscles of young animals in parabiosis had regenerated considerably. In fact, the technique performed with young mice significantly increased muscle regeneration in the older partners. The researchers concluded that regeneration of the aged muscle occurred almost exclusively through the activation of resident aged progenitor cells and not through the grafting of circulating progenitor cells from the young partners (Conese 2017). There is great controversy surrounding parabiosis. Laboratories are currently trying to identify the components of young blood that are responsible for these changes. Recently, a clinical trial in California began testing the benefits of young blood in elderly people with Alzheimer's disease. Some more optimistic scientists, such as Tony Wyss-Coray, a neurologist at Stanford University, believe in its rejuvenating efficacy; others, more cautious, such as Amy Wagers, a stem cell researcher at Harvard University, claim that the technique does not rejuvenate tissues or muscles, but only repairs them (Scudellari 2015). Either way, the human dream of achieving longevity, now by means of increasingly sophisticated techniques, seems closer than ever. For example, in September 2017, Nature published an article on "plasma power" that explains why Alzheimer's disease and brain aging could benefit from therapies based on this important liquid component of blood:

Albumin accounts for more than half of the total protein content of plasma, and it has a crucial role in balancing the water content of the blood. But it is also a sponge-like carrier protein, long understood to bind and inactivate many of the proteins and metabolites that travel in plasma. In 1996, researchers at Harvard Medical School in Boston, Massachusetts, found that amyloid-β was one such compound. Aggregations of amyloid-β underlie the profusion of plaques that form around neurons in brains affected by Alzheimer's disease. Although the exact consequences of plaque formation are under debate, most researchers think that aberrant processing of amyloid-β is fundamental to development of the disease — and that the peptides may even damage neurons directly. A widely proposed hypothesis is that cutting levels of amyloid-β in the brain would therefore slow, or even stop, the progression of Alzheimer's disease (Drew 2017, p. 26).
Indeed, the question of the extension of life and, in the limit, of obtaining a kind of "temporary immortality" or indefinite extension by scientific and technological means has again become a matter of the present day. The adherents of transhumanism, a philosophical movement that encourages the use of science and technology to overcome human limitations, are the most enthusiastic, arguing that we must make rational use of technology to change our physical and cognitive condition always for the better. One of the enthusiasts of this movement states that

transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other means we shall eventually manage to become posthuman, beings with vastly greater capacities than present human beings have (Bostrom 2005, p. 4).
One of the main motivations of this movement continues to be the extension of life through a technical intervention to alter the unwanted results of our biological evolution.
Transhumanists emphasize that, in order to seriously prolong the healthy life span, we also need to develop ways to slow aging or to replace senescent cells and tissues (Bostrom 2005, p. 8).
This movement's expectations of scientific and technological progress are the same as those of the Baconian program outlined above, although we consider its philosophical aspirations to differ both from Bacon's and Boyle's theological-metaphysical motivations and from those of Bogdanov's collectivist project. It is now an individualistic aspiration to overcome the natural limits of the human condition. In the previous cases, the solution to these limits and the search for ways to delay aging were meant for everyone, whether the members of the Christian community or all the citizens of the socialist state. This does not mean that the promoters of transhumanism advocate excluding certain classes from access to technological achievements, but they bet mainly on liberal values as guides for the good functioning of the system. One of the pillars of these values is precisely individualism, together with the belief that the self-improvement of individuals cannot be achieved through culture and education alone, but also requires technology.

Transhumanists place a high value on autonomy: the ability and right of individuals to plan and choose their own lives. Some people may of course, for any number of reasons, choose to forgo the opportunity to use technology to improve themselves (Bostrom 2003, p. 4).
One possible and perhaps inevitable consequence of this individualism is the technical differentiation of individuals even before they are born, according to the desires and economic situation of their parents. As an illustration, much is said about the genetic improvement of our species, but this raises the question of whether such an improvement will extend to all people or only to the most economically privileged. If the latter were the case, it would lead to the partition of the human species into two groups, the "gene-rich" and the "gene-poor," that is, those who have access to the applications of genomics and those who do not. In this case, the utopian vision of a humanity free from genetic diseases would turn into a dystopia in which social inequality would be inscribed in the very bodies of individuals through the application of advanced technologies. A constructive critique of utopian thinking, indeed, cannot dispense with a detailed analysis of the unanticipated consequences of contemporary scientific technology (Zaterka 2015).
3.5 Concluding Remarks

Living healthily and achieving longevity are goals present in many cultures. Alchemists worked hard in search of the philosopher's stone and of elixirs to prolong and improve human life (Newman 2004). Modern and contemporary utopias have accentuated the powers of science and technology, and authors such as Bacon and Bogdanov have used these fields of "philosophical experience" to project their
metaphysical and philosophical ideas and ambitions. Naturally, dystopias pointed out how excessive these ambitions could be, and how they could lead either to monstrosity (Mary Shelley's Frankenstein) or to control and oppression in societies dominated by authoritarian regimes (Aldous Huxley's Brave New World or George Orwell's Nineteen Eighty-Four). In the historical contexts studied here, we have tried to explain how this cultural dissatisfaction of human beings with their finitude was manifested in objective actions intended, if not to avoid death, at least to postpone it to as distant a future as possible. We have seen that although there is no spatio-temporal linearity in the technical development of transfusions and their use in the retardation of aging, we can identify a Baconian program that promoted an epistemological conception pointing to the human body as a technical object. There are, however, some profound differences between these attempts to solve the "problem" of senescence. In addition to those pointed out above, another difference that we would like to highlight is the notion of temporality that fueled these expectations of longevity. In the case of Bacon and his followers at the Royal Society, this temporality corresponds, in fact, to a return of man to the moment prior to the expulsion from Paradise. For Bogdanov and the Russian "mutualistic Darwinists," the goal was to reach a future moment in the development of our species, one that nature alone would not achieve; this, as we have seen, converged with a certain interpretation of Orthodox Christianity. The goal of religious universalism and socialist collectivism was not a return to lost Paradise, but a project of building a new Paradise in the future. In the case of transhumanism, at least in the way some of its advocates promote it, the time that prevails is that of the present, or an immortal permanence of the present individual. Certainly, some of its advocates are preoccupied with the future (especially with regard to the available resources), but it is the individual of the present that must be preserved, or rather altered, in order to become an immortal cyborg. The triad analyzed here did not remain within a purely theoretical framework: research projects relating blood transfusion techniques to the extension of life did not remain utopian aspirations. On the contrary, technical improvement accompanied concrete transfusion experiments linked to the fight against aging. That is, an essential point that differentiates the connection between blood, transfusions, and longevity is the operationality put into practice by its advocates. Finally, our objective has been to offer some elements for understanding how different societies and different cultural contexts respond to a common dissatisfaction with the ordinary course of Nature. In the approaches to senescence after Darwin, this human discontent with natural evolution led to dreams of correcting and even accelerating evolutionary processes, no longer piloted by the blind forces of nature, but by rational purposes mastered by man. Both Russian revolutionary futurism and transhumanist liberal individualism aim at technological control of the biological evolution of human beings. Of course, biotechnological aspirations of this type were also related to broader social-political programs, such as the various eugenic projects implemented in contemporary times (Adams 1990).
Thus, it is essential for those who aspire to democratic and free societies to better understand the longue durée of experimental philosophy in the domain of life. After all, will
transhumanism and the technization of the body lead us to a world where health and longevity prevail for the majority of people, or will we be led to an apocalyptic dystopia? May human recklessness not lead us to repeat the myth of Icarus once again.

Acknowledgments We would like to thank Lorenzo Baravalle and the anonymous reviewers for useful comments.
References

Achenbaum, W. A. (1995). Crossing frontiers: Gerontology emerges as a science. Cambridge: Cambridge University Press.
Adams, M. B. (1989). "Red Star": Another look at Aleksandr Bogdanov. Slavic Review, 48, 1–15.
Adams, M. B. (Ed.). (1990). The wellborn science: Eugenics in Germany, France, Brazil, and Russia. New York/Oxford: Oxford University Press.
Andrault, R. (2014). Guérir de la folie. La dispute sur la transfusion sanguine (1667-1668). Archives-Ouvertes, 3, 509–532. https://halshs.archives-ouvertes.fr/halshs-01131013.
Bacon, F. (1963a). Novum Organum. In J. Spedding, R. Leslie, & D. Heath (Eds.), The works of Francis Bacon, VI (pp. 34–248). London: Longman.
Bacon, F. (1963b). Advancement of learning. In J. Spedding, R. Leslie, & D. Heath (Eds.), The works of Francis Bacon, III (pp. 275–498). London: Longman.
Bacon, F. (2007). Historia vitae & mortis. In G. Rees (Ed.), The Oxford Francis Bacon, XII (pp. 142–377). Oxford/New York: Oxford University Press.
Bert, P. (1863). De la greffe animale. Paris: Imprimerie de E. Martinet. http://www.biusante.parisdescartes.fr/histoire/medica/resultats/index.php?cote=TPAR1863x118&do=chapitre.
Bogdanov, A. (1984 [1908]). Red star: A utopia. In L. Graham & R. Stites (Eds.), (trans. Rougle, C.). Bloomington/Indianapolis: Indiana University Press.
Bogdanov, A. (2002 [1927]). The struggle for viability: Collectivism through blood exchange (trans. Huestis, D. W.). Philadelphia: Xlibris Corp.
Bostrom, N. (2003). The transhumanist FAQ – A general introduction. www.nickbostrom.com
Bostrom, N. (2005). Transhumanist values. Review of Contemporary Philosophy, 4, 3–14.
Boyle, R. (1999a). Experiments and notes about the producibleness of chymicall principles. In M. Hunter & E. Davis (Eds.), The works of Robert Boyle (Vol. IX, pp. 1–268). London: Pickering & Chatto.
Boyle, R. (1999b). Some considerations about the possibility of the resurrection. In M. Hunter & E. Davis (Eds.), The works of Robert Boyle (Vol. VIII, pp. 295–313). London: Pickering & Chatto.
Braudel, F. (1958). Histoire et Sciences sociales: La longue durée. Annales d'Histoire et Sciences Sociales, 13(4), 725–753.
Celestin, L.-C. (2014). Charles-Edouard Brown-Séquard: The biography of a tormented genius. Dordrecht/London: Springer.
Churilov, L., & Stoev, Y. (2013). The life as a struggle for immortality: History of ideas in Russian Gerontologia (with immunoneuroendocrine bias). In P.-C. Leung, J. Woo, & W. Kofler (Eds.), Health, wellbeing, competence and aging (pp. 81–130). New Jersey: World Scientific.
Conese, M., et al. (2017). The fountain of youth: A tale of parabiosis, stem cells, and rejuvenation. Open Medicine, 12, 376–383.
Denis, J.-B. (1667). Extrait d'une lettre de M. Denis, Professeur de Philosophie et de Mathématique, à M. ∗∗∗ touchant la transfusion du sang, de Paris, ce 9 mars 1667. Journal des sçavans, 69–72. www.gallica.fr.
Drew, L. (2017). The power of plasma. Nature, 549, 26–27.
Duflo, C. (2003). Diderot et Ménuret de Chambaud. Recherches sur Diderot et sur l'Encyclopédie, 34, 25–44.
Fedorov, N. F. (1990). What was man created for? The philosophy of the common task. Selected works translated from the Russian and abridged by Elisabeth Koutaissoff and Marilyn Minto. Lausanne: Honeyglen Publishing.
Giglioni, G. (2005). The hidden life of matter: Techniques for prolonging life in the writings of Francis Bacon. In R. Solomon & C. G. Martin (Eds.), Francis Bacon and the refiguring of early modern thought (pp. 129–144). England/USA: Ashgate.
Guerrini, A. (1989). The ethics of animal experimentation in seventeenth-century England. Journal of the History of Ideas, 50, 391–407.
Guldi, J., & Armitage, D. (2014). The history manifesto. Cambridge: Cambridge University Press.
Jebelli, J. (2017). Pursuit of memory: The fight against Alzheimer's. Boston/London: Little, Brown and Company.
Jensen, K. M. (1978). Beyond Marx and Mach: Aleksandr Bogdanov's philosophy of living experience. Dordrecht: Reidel.
Keynes, G. (1922). Blood transfusion. London: Henry Frowde and Hodder & Stoughton.
Krementsov, N. (2011). A Martian stranded on earth: Alexander Bogdanov, blood transfusion, and proletarian science. Chicago/London: University of Chicago Press.
Krementsov, N. (2014). Revolutionary experiments: The quest for immortality in Bolshevik science and fiction. Oxford/New York: Oxford University Press.
Lower, R. (1666). The method observed in transfusing the blood out of one animal into another. Philosophical Transactions (1665-1678), I, 353–358.
Menuret de Chambaud, J.-J. (1765). Transfusion. In D. Diderot & J. d'Alembert (Eds.), Encyclopédie ou Dictionnaire raisonné des sciences, des arts et des métiers (Vol. XVI, pp. 547–553). Paris: Briasson-David-LeBreton-Durand. http://encyclopedie.uchicago.edu/.
Metchnikoff, E. (1903). Études sur la nature humaine. Paris: Masson Éditeurs. https://archive.org.
Moore, P. (2003). Blood and justice: The 17th century Parisian doctor who made blood transfusion history. Chichester: Wiley.
Newman, W. R. (2004). Promethean ambitions: Alchemy and the quest to perfect nature. Chicago: University of Chicago Press.
Schultz, S. G., et al. (2002). William Harvey and the circulation of the blood: The birth of a scientific revolution and modern physiology. News in Physiological Sciences, 17, 175–180.
Scudellari, M. (2015). Blood to blood. Nature, 517, 426–429.
Sengoopta, C. (1993). Rejuvenation and the prolongation of life: Science or quackery? Perspectives in Biology and Medicine, 37, 55–66.
Sochor, Z. A. (1988). Revolution and culture: The Bogdanov-Lenin controversy. Ithaca/London: Cornell University Press.
Stambler, I. (2014). A history of life-extensionism in the twentieth century. Rishon LeZion: Longevity History Press.
Tartarin, R. (1994). Transfusion sanguine et immortalité chez Alexandr Bogdanov. Droit et Société, 28, 563–581.
Tauber, A., & Chernyak, L. (1991). Metchnikoff and the origins of immunology. Oxford: Oxford University Press.
Todes, D. P. (1989). Darwin without Malthus: The struggle for existence in Russian evolutionary thought. Oxford/New York: Oxford University Press.
Vikhanski, L. (2016). Immunity: How Elie Metchnikoff changed the course of modern medicine. Chicago: Chicago Review Press.
Vucinich, A. (1989). Darwin in Russian thought. Berkeley/Los Angeles/Oxford: University of California Press.
Zaterka, L. (2015). Francis Bacon e a questão da longevidade. Scientiae Studia, 13(3), 495–517.
Chapter 4
Performative Epistemology and the Philosophy of Experimental Biology: A Synoptic Overview
Maurizio Esposito and Gabriel Vallejos Baccelliere
4.1 Introduction
In his 1995 book The Mangle of Practice, the sociologist of science Andrew Pickering distinguished between representationalist and performative idioms in the history and philosophy of science. For Pickering, while the first idiom regards science as a theoretical enterprise aiming to represent the world through increasingly accurate models, the second emphasizes the set of practices, tools, and machines allowing scientists to control and domesticate natural phenomena. For the representationalist view, the main philosophical issue has been to figure out how a theoretical model really "corresponds" to the "real" entities, mechanisms, or processes under study. For the performative perspective, instead, the philosophical focus has been on how "reality" or "nature," what Pickering called "material agency," is technically, materially, and conceptually mastered and tamed. No doubt, the representationalist idiom has largely dominated philosophers' minds (and is still an important business for several mainstream philosophers of science). By contrast, the performative idiom has mostly and traditionally concerned historians, sociologists of science, and STS scholars who have dedicated close attention to experimental practices. Pickering, like many other sociologists and historians of science, was aware that the switch from representationalism to performativism was much more than a turn in philosophical or historical sensibility. Indeed, the problem is not whether history and philosophy of science should take an internalist, externalist, normative, or descriptive approach. The problem goes deeper and points to the fact that the two
idioms outline very different visions of what science essentially is, how it works, and how it changes over time. Whether our focus is on representations and theories or on practices and machines, epistemological concerns shift accordingly. Many traditional philosophical questions about scientific knowledge might lose relevance while others, more important, emerge. For instance, does it make sense to doubt the existence of certain entities (even "non-observable" ones) if we can fully control them or use them as tools? Does it make any sense to be a scientific anti-realist while endorsing a performative idiom? In other words, does it make sense to be an antirealist and a performativist? Conversely, what does it mean to be a realist when we place scientific practice above scientific representations? In short, if the problem of "correspondence" between theoretical models and the world is replaced by the problem of engaging practically and materially with it, then a new set of philosophical issues and questions arises. Not surprisingly, Pickering himself built his performative epistemology around the concepts of "resistance" and "accommodation," where the former refers to our failures to properly capture the "material agency" (i.e., experimental failures, misunderstandings of data, etc.) and the latter to the set of realignments and reconfigurations (whether technological or manipulative) for overcoming "resistance" itself. For him, "resistance" and "accommodation" defined the epistemic struggle between human and material agency, while the reciprocal tuning, interaction, and stabilization between both agencies characterized successful experimental knowledge. While our proposal builds upon Pickering's kind of performative epistemology, we think that a much more fine-grained conceptual map is needed for experimental biology (and for experimental sciences more generally). In fact, even though we accept that the notion of experimental action can be approached through a dialectic of resistance and accommodation (whereby reality itself can be defined as all that resists our intentional interventions), we complement it by introducing further concepts and distinctions that illustrate how experimental performances can produce highly reliable knowledge about the nature of biological phenomena. Without pretending to be exhaustive, we propose a tentative and incipient chart of how a performative epistemology might look when we consider biologists' work in the laboratory. Now, one of the most remarkable things we realize when we follow the work of practitioners (biologists, biochemists, etc.) in their labs is the centrality of the artifacts employed for the generation of knowledge. These artifacts can be of biological origin (cellular extracts, purified proteins, cultivated cells, selected or transgenic organisms, etc.), of biochemical origin (e.g., aptamers, liposomes, nanodiscs, artificially evolved proteins, etc.), or entirely artificial (e.g., instruments, machines, technological tools, proteins with non-natural amino acids, etc.). The essential element all these entities share is that they are highly standardized artifacts shaped for producing specific results under well-controlled experimental situations. Indeed, without standardized processes and entities, without strict control over phenomena and things, and without technical skill and knowledge in the use of these artifacts, no reliable or stabilized knowledge would be possible.
This means that an assorted series of techniques fixes the portion of the world into which we are inquiring, i.e., the entities or
processes we detect or manipulate with our experimental activities. One interesting aspect of such a performative view is the crucial assumption that "nature" only emerges from highly artificial and domesticated environments. As Francis Bacon observed long ago, nature's deep secrets are only extorted when we step out of "nature" itself. The representation of Nature is not the result of our dispassionate contemplation, but of all that resists our unsettled and goal-oriented activities. In other words, natural phenomena are "extrapolated" from the complex and artificial ecology of a whole set of experimental systems. It is precisely because of this apparent paradox (i.e., we really know nature when we approach it by artificial means) that the discussion over whether we represent nature "out there" independently of our activities loses any meaningful interest. If artificiality is the very condition of objective knowledge, then any ontological stance is itself the conclusion of a set of oriented actions. However, in a laboratory, we do not "construct" reality according to our wishes or whims; we deal with such "reality" insofar as it resists our interventions, manipulations, and, eventually, representations. Accordingly, any convincing theory (or representation) is, after all, a successful extrapolation rooted in an artificial environment that practitioners themselves have carefully crafted.1 What we have succinctly mentioned so far can be simplified and ordered by considering four cardinal points defining our domain of performative epistemology, i.e.: (1) Constrained action, (2) Standardization, (3) Epistemic "tightening," and (4) Extrapolation. We argue that, taken together, these cardinal points outline what we call the epistemic experimental space (EES), the metaphorical map outlining our proposal. This chapter is divided into four sections reflecting the aforementioned points and aims to provide a general description of the EES. Indeed, we show that each point presents its own epistemic peculiarities and its own problems, although each one is part of the EES defining our perspective on performative epistemology as applied to the philosophy of experimental biology. Finally, through this scheme we reveal the implicit and sophisticated epistemology that experimental biologists presuppose in their practices for generating testable and reliable knowledge (and therefore testable and effective extrapolations).
4.2 Constrained Action
There are no experiments without presuppositions and constraints. Any experimental activity is performed against the background of many tacit and non-tacit assumptions and beliefs. To clarify this preliminary point, we can use the conceptual distinctions that Michael Polanyi proposed in his analysis of experimental skills (Polanyi 1962, Chap. 4). Polanyi argued that any epistemic act is constituted by two basic elements: subsidiary and focal. While the latter refers to the very object or
1 The way such extrapolation is justified (and what it means) will be discussed in the final section of this essay.
process investigated (i.e., the object getting our focal attention), the former provides the background conditions making focal attention possible. For instance, if a blind person uses a stick as an orienting tool, he focuses on what happens at the end of the stick through the vibrations he feels in his hand, which he tacitly acknowledges as subsidiary experiential elements. The blind man can only maintain his focus on the stick's end if the subsidiary elements are in place (stick vibrations on his hand). This is not very different from the experience of playing an instrument, where the focal attention is dedicated to the music itself and not to the hands' movements. While I am reading a newspaper, my focal element is the content of the text. However, I assume a host of subsidiary elements, physical or conceptual, making possible my own reading and understanding (i.e., the right amount of light, appropriate distance from the paper, linguistic competence, correct grammar, contextual or general knowledge, etc.).2 Thus, as a first approximation, with "constrained action" we refer to any kind of competent activity requiring the effective integration of focal and subsidiary elements. In order to better clarify the meaning of the "subsidiary elements" surrounding experimental practices, we can start by mentioning some very general (and rarely acknowledged) examples; for instance, that the day of the week, the researcher's height, the moon's position, or the color of the laboratory coat do not affect the result of the experimental activity. The researcher assumes that her experimental practices can be performed on a different day and with a different coat without affecting the final research outcome. More typical examples are elements like technological tools, flasks, test tube clamps, light, goggles, thermometers, forceps, or computers, which are all subsidiary conditions present in most experimental spaces. When all such subsidiary elements work well, they may easily pass unnoticed. Yet, if one of these subsidiary conditions disrupts the experiment, it may enter into the focal attention of the practitioner (e.g., contaminated forceps, a damaged computer, a defective thermometer, a protein sticking to the glass of a test tube, etc.). When that happens, the condition affecting the experiment can be controlled and tamed, switching attention to variables that were subsidiary in the previous experimental setting. Alexander Fleming's discovery of penicillin is a telling example: the inhibition of bacterial growth in a Petri dish due to the presence of a fungus became the very object of inquiry, leading to one of the most important medical advances of the twentieth century. However, the domain of subsidiary conditions does not include only material things; it also includes acquaintance with, and skill in the use of, the materials and reagents employed in the laboratory, together with their supposed properties. Such skills mostly derive from previous cycles of standardized practices and deeply stabilized theoretical knowledge (e.g., the acidity constant (pKa) of a buffer or the molecular weight of an element; a concrete illustration is given at the end of this section). For instance, before starting her experiment, the practitioner assumes many things about a solution's pH, salt solubility, membrane permeability,
2 It is a common experience that a sudden switch of focal attention can induce someone to commit mistakes in the course of an action (e.g., playing, driving, reading, running, etc.).
possible contamination of culture media, antibiotic effectiveness, or the generation of sterile conditions, just to mention a few cases. In short, the practitioner presupposes many subsidiary conditions that form the "invisible" background behind the experimental performance. All these subsidiary conditions remain in the background until one of them generates some kind of trouble and, therefore, jumps to the focal domain. There are even more interesting subsidiary conditions constraining experimental activities: the general ideas and questions orienting the programs in which the material practices are inserted. Indeed, the practitioner does not produce or handle an experimental system without having some theoretical background about what she is looking for. And, usually, such a background corresponds to some specific biological problem or to a particular application requiring some tuning. Depending on the kind of research undertaken, the assumed theoretical background can affect the production of an experimental system to a greater or lesser degree, i.e., it can be closer to the subsidiary or to the focal domain. Furthermore, the relation between the conceptual background and the experimental system depends on the specific context of the inquiry. For instance, if we consider an experiment consisting in characterizing the catalytic mechanism of an enzyme, we can frame it within very different research agendas. On the one hand, it can be linked to a pure enzymology research program, in which the kinetics of the enzyme is more important than its biological origin (i.e., to what kind of cell or organism it belongs) or its functional role. In this case, the enzyme serves as an exemplar for studying a certain kind of enzymatic phenomena. On the other hand, such an enzyme could play a role in the propagation of some type of cancer, so that the previous experimental system can be co-opted for a program of cancer research and/or prevention, where biological origin and possible functional roles become important. It is interesting to note that, in the first case, the theoretical framework (namely, enzyme kinetics theory) plays a direct role in the production, standardization, and setting of the experimental system. In the second case, the relation is rather indirect; theories about cancer propagation do not play any role in the production, standardization, and setting of the enzymatic experimental system. Thus, for establishing a link between a particular research program and the experimental system, more presuppositions (and therefore other subsidiary conditions) are required. The conceptual framework can also play an important role in establishing the biological relevance of an experimental system. For instance, imagine that a practitioner develops an artificial molecule able to activate an enzyme of bacterial origin, aware that such a molecule does not exist in the environments in which organisms grow and cannot be produced by them. With such a molecule, the experimenter can describe the enzymatic regulation, determine kinetic and thermodynamic constants, and even assess the effects of such a molecule on the bacteria to which the enzyme belongs (for the sake of the example, this molecule does not cause major alterations to the organism's growth and does not kill it either). However, few practitioners would consider these studies biologically relevant so long as they consider that the molecule plays no role in the organism in its natural environment.
Yet, if those practitioners realize that the molecule kills or damages
bacteria, they could reconsider the molecule as an antibiotic or antiseptic and, therefore, their research could be redirected, becoming biomedically pertinent. A very different (and more frequent) case is when the practitioner observes an effect linked to a molecule known to be produced by an organism, or found in the environment in which the organism grows; then the research has an explicit biological significance. Note that the justification for establishing whether the study of a particular molecule has biological relevance depends on background knowledge about the kinds of molecules that exist in the natural world, the media in which organisms grow, and their metabolism. Of course, insofar as knowledge changes, the criteria for establishing relevance change too. Accordingly, the practitioner selects the appropriate (and available) experimental system suitable for revealing some specific phenomenon or process that, in turn, is relevant for her particular research agenda. Now, based on the above descriptions, it should be clear that experimental action is constrained in all sorts of ways. It assumes background subsidiary conditions (very general, environmental, material, and conceptual) allowing the practitioner to focus on some specific portions of the world and to establish their relevance (or irrelevance) for exploring other processes or phenomena. In an experimental lab, most actions rest on the hardened sediments of previous epistemic activities developed and shared by a particular scientific community, which fix the coordinates for subsequent experimental explorations. Thus, we can now define our first cardinal point, constrained action, as a kind of praxis enclosed by a set of tacit or explicit skills, acquaintances, and concepts that make the experimental action possible (and even meaningful). On this account, the experiment can be regarded, as a first approximation, as a form of highly controlled activity whereby focal and subsidiary elements are constantly and contextually retuned. But the whole domain of subsidiary conditions not only constrains and orients the praxis of the experimenter; it also forms the space in which the practitioner forges her expectations for assessing the results of the experimental accomplishments, i.e., the focal elements of the experiment itself. When the practitioner perceives some kind of problem at the level of the focal domain (the object or process investigated), she can identify some element of the subsidiary domain (say, a reagent or a broken tool) as the culprit of the frustrated expectation. When that happens, the subsidiary element enters the focal domain of the practitioner and other kinds of constraints emerge. Yet, no matter to what extent the experimenter is aware of the environment, tools, objects, and protocols which orient her practices, a good amount of her work is deeply shaped by a set of highly standardized techniques and objects which literally carve out, from the assortment of several subsidiary elements, the "material agency" under investigation. In short, the experimental space is constituted by many artificial devices that shape, constrain, inform, and orient experimental actions. Such devices are normally the standardized outcomes of previous cycles of knowledge and provide the conditions for the generation of further epistemic cycles.
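To make concrete the kind of stabilized background knowledge mentioned earlier in this section (the pKa of a buffer), one might recall the standard Henderson-Hasselbalch relation found in any biochemistry textbook, where [HA] and [A-] denote the concentrations of a weak acid and of its conjugate base; it is offered here purely as an illustration of what practitioners treat as subsidiary, not as part of the authors' argument:

\[ \mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{A}^-]}{[\mathrm{HA}]} \]

A practitioner preparing, say, a phosphate buffer near neutral pH relies on this relation and on tabulated pKa values without either ever entering her focal attention; only a failed experiment (an unexpected pH drift, for instance) would bring them into the focal domain.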
4.3 Standardization
One of the most basic assumptions making experimental practices epistemically reliable is the possibility of reproducing experiments and results at different times and places. However, as Harry Collins showed long ago (Collins 1987), this is neither obvious nor simple. Collins distinguished between two ways of interpreting experimental action: the "algorithmic" and "enculturational" models. The first emphasizes the instructions, rules, and protocols for reproducing an experimental system. The second refers to a more informal set of tacit skills, personal abilities, and assumptions orienting the composition, and use, of an experimental system. Collins concluded that no set of "algorithmic" rules could guarantee the success of an experiment because a whole ensemble of unformalizable skills is always present. After all, as anyone with some cooking experience knows, a very detailed brownie recipe does not guarantee that we will make a good one because, recipe aside, many tacit or unconscious habits concur in making a good cake. Yet, while habits might be constitutive of, or essential to, experimental activity, they can also be a threat to its consistent repeatability in different places and times. If every practitioner required long training in order to acquire the many unformalizable assumptions and tacit habits needed to repeat experiments at distinct times and in distinct places, the possibility of reproducing them accurately, or even approximately, would be very remote. This is precisely what happens when experimental systems are not standardized or are poorly standardized. For this reason, we see standardization as an attempt to reduce, as far as possible, the "enculturational" aspects of scientific experiments in favor of the "algorithmic" ones, even though a complete reduction is not materially possible. This does not mean, of course, that anyone with a "recipe book of experiments" can build a reliable experimental system or carry out a successful experimental agenda. It only means that experimental standardization should decrease, as much as possible, the set of tacit assumptions and rules of thumb that lie behind most human activities, even though the set of tacit skills acquired in scientific training is never completely eliminable.3 A further important feature of experimental standardization, which involves the fabrication, manipulation, and modification of objects and phenomena, is that it does not necessarily depend on the theories explaining how such objects and phenomena work. We might compare a standardized object or phenomenon to what Davis Baird and Alfred Nordmann have called a "fact-well-put" (Baird and Nordmann 1994), namely, a fact that is the consistent outcome of a particular experimental tool independently of any particular scientific theory. Indeed, when we have a robust interdependence between instrument and phenomena (i.e., a fact-well-put), we have what Baird and Nordmann called a "striking-phenomenon instrument." As they put it:
3 This, of course, does not mean that scientific practices invariably move from enculturational to algorithmic approaches. Scientific practices present all sorts of mixtures between the two. We are only arguing that a highly standardized experimental system reduces the presence of enculturational elements.
The striking-phenomenon instrument is the product of our ability to tame a phenomenon. It is the product which constitutes our knowledge of the phenomenon… it is technologically/instrumentally relatively autonomous, presenting, as it does, the striking achievement of instrumental certainty and control over natural agency. At the same time, it is relatively open to a variety of theoretical appropriations (Baird and Nordmann 1994, pp. 73–74).
In short, striking-phenomenon instruments produce "facts-well-put" that can be "explained" by many different and even conflicting theories. To use a metaphor, fabricating things or reproducing particular phenomena in a lab is not very different from manufacturing a sandwich in the kitchen of an international fast food chain (whereby the same taste and characteristics are preserved across different kitchens). In a biology lab, however, what is made and manufactured are experimental systems and tools for performing experiments. Such experiments should produce the same results in every part of the world, at any time, and in the hands of every practitioner. The possibility of replicating experiments and outcomes guarantees that the knowledge produced is objective insofar as everyone, with the right skills and equipment, can check it. Moreover, such reproducibility allows the practitioner to play with the experimental system, get used to it, and, eventually, introduce new variables satisfying the researcher's particular aims and interests. The standardization of an experimental system (or of its components) is the very condition of its improvement, just as, to use an analogy, the advancement of a paradigm was for Thomas Kuhn the condition for detecting anomalies. Thus, an experimental system is constituted of parts that are highly standardized units, the products of previous epistemic cycles with their own history. These can be specific tools, machines, techniques, skills, and reagents, and each one can be separated, reassembled, and given a convenient configuration in an entire experimental system. For a single sample of biological origin, like a bacterial culture, different methods of detection, different technologies, and different kinds of manipulation can be deployed (e.g., different nutrients, stress conditions, or different forms of detection, say, bacterial growth or excretion products). On the other hand, the same method of manipulation or detection, for example a heat source or a spectrophotometer, can be used with several samples, giving rise to different experimental systems. In a way, the practitioner composes an experimental system starting from prefabricated pieces, tested practices, and stabilized conceptual assumptions. Not surprisingly, a good deal of time is dedicated to obtaining and preparing the materials for the experiment, what Marcel Weber has called "preparative experimentation" (see Weber 2004). In fact, practitioners spend a good deal of effort purifying proteins or other macromolecules, cultivating and transforming cells, keeping and feeding organisms, synthesizing molecules, cloning genes, etc. Actually, some of these tasks are not directly performed in the lab but are externalized to industrial labs, which sell packaged products. The practitioner can order synthetic genes, cell lines, transgenic organisms, etc., much as a pastry chef makes his cakes using prepackaged ingredients (pastry chefs do not usually make sugar or grow cacao plants for making their Sacher torte). Nowadays it is very common to use commercial kits for performing an experiment in virtually all areas of experimental biology (for example, kits for DNA isolation). There exists a large
biotechnological industry surrounding biological experimentation, which affects and constrains, in different ways, the very experimental activity. Therefore, an experimental system can be seen as the outcome of multiple processes of previous standardization and preparative experimentation, whether carried out in the lab or by specialized industries. Each element or process has its particular history of stabilization and standardization, so that the history of an experimental system presupposes the histories of many other constrained actions that have been crystallized into standardized techniques, instruments, and skills. Now, as we have already briefly mentioned, it is important to distinguish between the material knowledge of experimental systems and the theoretical knowledge about their inner mechanisms. In the first case, we have knowledge related to how to assemble or use an instrument, how to obtain or process a sample, how to purify a macromolecule or, in general, how to put together a functional experimental system and then carry out the appropriate manipulations in order to get the expected results in the form of certain kinds of data. The second case, instead, refers to the theoretical knowledge explaining the operation of an experimental system (e.g., physicochemical properties of macromolecules, light waves and fluorescence, molecular interactions, cellular metabolism, etc.). For example, a good gardener knows very well how to take care of his plants, feed them, prune them, water them, etc. Yet, knowledge about plant physiology, metabolism, water potential, cell biology, and all the theoretical principles that could explain why the gardener's plants do not dry up is a different matter. While the first kind of knowledge would certainly be a good candidate for effective standardization, the second kind of knowledge can be applied to many different contexts beyond gardening or botany (for further clarification on this, see the next section). Now, consider a hypothetical scenario in which all that we know about protein solutions, enzymology, fluorescence, physical chemistry, or physiology changes, together with the theoretical knowledge about how our instruments work. Such a transformation could be so drastic that all the concepts used before for explaining or interpreting experimental results could become useless. However, if all experimental instructions and procedures are followed carefully, we may still succeed in building an experimental system that produces the same results as before, despite the existence of new, very different theories. The results would then be "facts-well-put" in the sense of Baird and Nordmann. In short, every time the same standardized process is reenacted, the results need not change, even though theories may radically change (see also Hacking 1983, 1992). One interesting consequence of the split between material and conceptual knowledge is that the former provides highly reliable, reproducible and, therefore, verifiable knowledge. Insofar as we manufacture an artifact, manipulate an organism, assemble an experimental system, and control its outcomes, we get a grasp of the produced phenomena that is incomparably more stable than any coherent representation of such phenomena. Indeed, while theories constantly change, standardized "facts-well-put" stay. Standardization is itself the conditio sine qua non for particular "material agencies" to be successfully detected and controlled beyond the grasp of any particular theory.
Yet, while standardized objects or techniques can be conceived as independent of particular theories, they
cannot be understood in isolation from the other elements of the EES we are describing. Despite the stability of standardized things (and knowledge) and their solid and consistent results, we are still left with the reasonable doubt that we are not really dealing with the "biological world" as it is outside our lab and as it works beyond our activities. We need two other stages to establish a meaningful connection between our constrained actions, our standardized skills and tools, and the "material agency" standing before us.
4.4 Epistemic "Tightening"
Any experimental activity presupposes different degrees of epistemic confidence in the manipulated entities. In fact, when the practitioner performs an experiment, she may assume at least three existential claims about experimental objects: (a) the nature and properties of the entities she is manipulating are widely known; (b) the nature of the entities she is manipulating is partially known, or known only by their effects; and (c) the very existence of the entities under investigation is questioned (they may be experimental artifacts or theoretical possibilities). If we exclude the last case, whereby particular experiments can be performed for assessing whether a particular entity really exists "out there," we are left with the first two cases, which are far more common in experimental biology (unlike particle physics, where the third case applies more frequently). In the first case, by "widely known entities or properties" we mean that the entities or properties are ingrained in so many experimental systems and instruments that any existential skepticism about them would call into question a whole research agenda (including all the tools used for performing experiments or detecting particular entities). It would be preposterous to purify a protein and not believe in its existence (or in the properties used to perform the purification), just as it would be odd to question the existence of the computers we used for writing this chapter. In the second case, with "partially known entities," we identify two forms of ontological commitment: the first is when the existence of the entity is accepted but some of its properties are unknown; the second is when an instrumental approach to the whole entity is heuristically assumed for producing particular results. The former is the most frequent case. When we want to know some unknown properties attached to some well-known entity (for instance, to assess the existence of a catalytic site in a protein), we use many of its well-known properties to detect, manipulate, or transform the entity itself in order to have epistemic access to the unknown properties we are interested in. In the case of the protein, before we can assess the presence of a catalytic site we have to purify, solubilize, quantify, and stabilize it, and put it into the desired experimental medium. While performing all these activities, we are using several well-known properties, like its isoelectric point, molecular size, stability, hydrophobicity, interaction with chemicals, etc., in order to reveal the partially known or unknown properties of the protein itself. The border separating well-known, partially known, and unknown entities shifts constantly in the course of experimental practices.
The second form of ontological commitment occurs when we do not know the "real" nature of the entities but can still manipulate them successfully. Note that, from an instrumentalist perspective, the practitioner does not question the existence of a supposed entity producing the effects under investigation; she only deals with her ignorance about what these entities really are. In other words, the practitioner performs her experiment assuming that something "out there" is producing the effects she is manipulating.4 For example, when Wilhelm Johannsen, and later Thomas Morgan, conceived of genes as instrumental entities, they did not question the "existence" of something producing the phenotypic effects they were observing and manipulating (Falk 1984). They only assumed that there existed a "difference maker" which they barely knew (merely posited) but which could nevertheless be successfully controlled and manipulated (and, later on, biochemically identified).5 Without such a crucial presupposition, even an instrumental approach to specific entities would not be very interesting (or even meaningful). After all, we can manipulate something that we hardly know, but it would sound ludicrous to manipulate something we think does not exist (unless we are referring to our third case). In short, the existence of both widely known entities (the first case) and instrumental entities (the second case) is not normally questioned, and for very good reasons. In the former case, as we have seen, existential skepticism would jeopardize the whole research agenda with all its experimental practices and technologies6; in the latter case, existence itself is not the issue (though it is presumed) because the practitioner's concern is focused on the stabilization, control, and manipulation of the supposed "entity," and the phenomena in which it participates, through its visible effects. There are many degrees of confidence about the existence of particular entities, depending on the level of interconnection those entities entertain with practices, instruments, experimental systems, and received theoretical knowledge. The different existential claims over uncertain, partially known, and widely known entities correspond to distinct stages leading towards what we call epistemic "tightening." What we want to argue in this section is that any kind of experimental performance, whether it involves widely known properties or only instrumental entities, implies a set of ontological beliefs that are deeply intertwined with the very experimental system (including all the tools, technologies, and practices taming the "material agency"), and such ontological beliefs are linked to the process of epistemic "tightening."
4 In fact, it could be the case that some of the properties producing these effects are well known and successfully used to control the experimental phenomena, even if the nature of the entity having these properties is mostly unknown. Not surprisingly, we do not always know whether well-known properties belong to the same object. Yet, this does not necessarily prevent us from manipulating the entity and/or the properties constituting it.
5 On the concept of the gene as a "difference maker," see Waters (2007).
6 Thus, while we need very good reasons for doubting the existence of those entities, we do not really need good reasons for believing in their existence.
With such a notion, we refer to a form of highly stabilized knowledge that needs to be assumed in order to perform an experiment. When we are using an experimental system that presupposes the existence of some specific entity or process, the interesting question is not whether we should believe in the existence of such an entity or process, but whether it makes sense to doubt it. Our argument is that, from the experimenter's perspective, unconditional existential skepticism would consist in a highly abstract philosophical game that would only deliver a general and pointless distrust of the entire experimental activity.7 For instance, a nutritionist can surely doubt the existence of calories, but such skepticism would be misplaced for someone prescribing a hypocaloric diet. Normally, we do not feel the need to question the existence of the "calorie" because it is the outcome of multiple processes of epistemic "tightening" linked to many different sciences and practices, e.g., bromatology, physiology, medicine, sport science, etc. Now, in order to further clarify our point and specify what we mean by epistemic "tightening," let us start with one important distinction that Hans-Jörg Rheinberger proposed in his Toward a History of Epistemic Things (1997). Rheinberger distinguished between technical and epistemic objects. The former define the domain of things ranging from chemical buffers, antioxidants, osmolytes, enzymes, and antibiotics up to technical tools of detection. We can even extend the domain to the set of technologies and machines employed in a laboratory. The epistemic objects, instead, are the ensemble of entities or processes that are detected or manipulated in an experiment in order to obtain knowledge about them, e.g., proteins, DNA, cells, cell processes, metabolism, chemical reactions, etc. In relation to this distinction, we should highlight two very important points: first, the epistemic object is not simply a result of an experimental system; it is deeply intertwined with it. This means that the epistemic object is not something wandering in the world waiting to be captured by an experimental system; the epistemic object is itself produced by a given experimental system. Of course, it is reasonable to believe that a particular biological entity or process exists outside the laboratory (and independently of our experimental performances). But such a belief is itself ultimately justified by a whole set of experimental strategies and practices deployed for extrapolating, from the artificial environment of the lab, the lean workings of nature (i.e., when we see an organism we do not see proteins, DNA, or metabolism; we "extrapolate" their existence from our experimental practices). Before expanding on this point, let us consider the second, related point: the distinction between technical and epistemic things is not absolute. There are many cases where something can be both a technical and an epistemic object, while there are other cases where a technical object can become an epistemic one and vice versa. The distinction may depend on the specific experimental system and on the stage of development we are considering. Robert Kohler's accurate reconstruction of experimental practices in Thomas H. Morgan's Flyroom at Columbia University offers a suggestive example of the
7 And jeopardize promising experimental activities focused on instrumental entities or properties.
first kind (Kohler 1994). It is well known that the model organism employed by the Flyroom staff was the fruit fly Drosophila. But such a model organism was not just taken from the wild and directly used for performing crossing experiments. It underwent a process of selection and breeding in order to produce an organism fit for the experiments performed in the lab (as we saw in the section on standardization above). Kohler defined Drosophila as a biological "breeder reactor" able to produce the required mutations for articulating the first genetic map. In the hands of Morgan and his team, Drosophila became a reliable instrument, that is, a "technical object." However, Drosophila was also the subject of investigation; it was both a technical object (as an engine of mutation) and an epistemic object (as the very subject of genetic research). If the Drosophila case suggests a certain ambiguity in the distinction between these kinds of objects, there are many other cases where the distinction depends on the particular stage of the research and on the experimental context. For instance, suppose we are investigating an enzyme whose kinetic properties are well known (in the sense specified before). In some specific experimental systems, the enzyme can count as an epistemic object, as we may explore many of its biochemical or structural properties that are still unknown. Instead, in a different experimental system, we can use the same enzyme as an auxiliary element for catalyzing a reaction where the process itself is not the one we are investigating. We can use the enzyme for eliminating a contaminant in the medium (as with the enzyme RNase, which is frequently used to degrade contaminating RNA in a DNA purification) or for quantifying some reagent (e.g., glucose oxidase is used to quantify glucose in clinical blood tests). Therefore, while in one experimental system the enzyme is an epistemic object, in another experimental system it is a technical one. This should not be too surprising. After all, we come to know technical and epistemic objects through their manufacture and manipulation. We detect their properties through experimental interventions and, consequently, their statuses change according to the interests and questions of the practitioners. If we accept that the epistemic object is not something discovered out of the blue, but is itself a cog in the whole machine we call the experimental system, and if we agree that, as part of the system, the same epistemic object can also be a technical one (or change its status according to the experimental system), then we can conclude that the knowledge produced in a lab depends on the contingent history of the experimental systems. We do not discover the "material agency" through experimental activity; we interact with it, revealing its different manifestations and properties as they filter through our experimental system. To use a metaphor, each experimental system is like a window opening onto particular sections of the "material agency" we are chasing (whatever it is). We only "see" what the experimental system allows us to see (Culp 1995). The key question emerging from these premises therefore is: if the experimental system is composed of dynamic, heterogeneous, and mostly artificial parts, how do practitioners connect it to the external world?
After all, if what we “see” coincides with the different experimental systems, and these, in turn, are constituted by many standardized fragments, how can we break the pernicious cycle co-defining an experimental system and its epistemic objects? To be even more explicit: how can we avoid the epistemic feeling that everything is
made up by the experimental system? Of course, there are various ways to answer these questions, and our answer is, once again, what we have termed epistemic "tightening." First of all, we use the term "tight" as a replacement for the traditional epistemic category of "truth" (see also Boon 2012). In fact, while a theory can be true insofar as it corresponds to something else, there are no "true" experimental practices. Instead, diverse experimental systems produce "tight" knowledge insofar as they reach a certain coherence over the same epistemic object. For example, if the same entity or process can be manipulated, detected, and used in quite different experimental systems, then we have a good reason to assume that a "tight" or stabilized knowledge is being generated. Various scholars have dubbed the properties that emerge in different experimental systems "robust properties" (Wimsatt 1981; Culp 1995; Nederbragt 2003; Eronen 2015). A good example of a robust property is the negative electric charge of DNA. We can detect and manipulate such a property (and use it to manipulate other entities or the DNA itself) through several different and independent experimental systems. We use it for separating DNA fragments in an agarose gel; or for inserting plasmids into bacteria or other cells; or for fixing a fragment and "observing" it through an atomic force microscope; or when we note that most proteins that bind DNA (e.g., transcription factors) have a positive surface charge; and, last but not least, we can use it in different methods of DNA purification. Of course, there are many properties of DNA which are not so epistemically "tight," for instance, its elasticity, its capacity to form unconventional structures within the cell, and its interactions with some intracellular molecules. But, in spite of these unstable properties, it is very hard to doubt its negative charge. This is something so epistemically "tight" that questioning it would bring an overall skepticism over many areas of biological knowledge (not very dissimilar from a nutritionist questioning the existence of calories while prescribing a hypocaloric diet). Therefore, we reach "tight" knowledge because a given property exhibits a certain stability and consistency over many independent experimental actions. Another useful example of epistemic "tightness" is the knowledge we have of many enzymes and their properties. We know physicochemical and structural properties like their hydrodynamic radius, their secondary structure, their sequence, their electric charge, their tridimensional structure, their interactions with their ligands, their catalytic activity, etc. We also know properties arising from their presence (or absence) in different cell lines and tissues, their cellular localization, their co-localization with other proteins or cell structures, etc. And, just as we have "tight" knowledge about DNA and enzymes and many of their properties, we also have "tight" knowledge about many other biological entities and constituents of biological systems. What we have argued in this section is that while we can certainly be skeptical about the robust properties constituting "tight" knowledge, such skepticism would be totally misplaced so long as many successful techniques, practices, and experimental systems assume these properties. On this account, the meaningful skepticism linked to "tight" knowledge is not whether some property, entity, or mechanism really does exist, but
whether it makes sense to distrust its existence (and the price that practitioners are willing to pay for distrusting it).8 In short, when we reach "tight" knowledge we have a stable and reliable grasp of our epistemic objects. However, having stable knowledge of an entity or process does not really solve our initial problem. We have good reasons for believing that the entities we are investigating are not experimental artefacts; they are not simply the outcome of our experimental systems insofar as there exists something consistently resisting our different interventions and manipulations. Yet, we are still left with the possibility that such stable properties cannot really be found in "biological nature" independently of our experimental actions. Indeed, they might be robust or tight "facts-well-put" but not exist as such, or similarly, in nature. We may assume that the inspected properties are "really" part of an epistemic object, but such properties may not "really" manifest themselves in the world outside the lab, either because there are other variables in the natural environment that we ignore or because the properties only emerge under certain experimental circumstances. Whereas having "tight" knowledge is a necessary step for connecting experimental practices with the world, we still need another step for tying experimental knowledge to the "material agency." We need to "extrapolate," from the highly robust properties revealed by our "tight" knowledge, the real functioning of biological nature.
4.5 Extrapolation
While capturing the "material agency" through our constrained actions, our standardized techniques, and our inherited "tight" knowledge is already a highly complex business, inferring from the ensemble of all these stages the real status of biological processes is even more complicated. Accordingly, in this section we will only offer some tentative insights on how the concept of "extrapolation" might be defined. First of all, the problem of extrapolation begins with one basic experimental and ontological pretension: to reproduce, describe, understand, and control natural phenomena as they supposedly exist in the world. The "supposedly" is very relevant here because it reminds us of the fundamental inference that makes experimental activity meaningful: what the practitioner is doing in her lab is relevant for knowing the world outside the lab. In this sense, "extrapolation" is the inference stating that what happens in the lab can also happen, with a certain degree of approximation, outside the lab. Yet, justifying such an inference is far from obvious. The act of "extrapolating" one particular piece of knowledge from another is probably the most speculative element among the points presented so far, and the one most exposed to epistemic and non-epistemic risks and biases, as Heather Douglas showed
8 Of course, the epistemic price for distrusting an instrumental (or partially known) entity would be comparatively lower than the epistemic price that a practitioner would have to pay for distrusting the existence of "well-known" entities or properties.
in assessing the studies of dioxin-induced cancer in rat livers (Douglas 2000), and as Torsten Wilholt indicated in analyzing the preference biases that can vitiate different kinds of extrapolative practices (Wilholt 2009). Indeed, justifying extrapolation requires a solid set of theoretical beliefs about the experimental system as well as a careful consideration of the epistemic risks involved in generating wrong, partial, or misleading extrapolations. As we have shown before, an experimental system is itself an artificial thing. We have also claimed that the first step for breaking the vicious circle between artificial and epistemic things is to find robust properties, which constitute what we termed "tight" or stabilized knowledge. But how, and to what extent, such "tight" knowledge refers to the world outside the laboratory and the experimental system remains unclear. After all, how does an experiment, made in a specific space and time and constrained by all kinds of artificial gadgets, really speak of the natural world? In other terms, how do practitioners "extrapolate" from the partial, closed, and limited world of the laboratory to the whole, wide, and messy world "out there"? While the problem of extrapolation has paralleled the development of the experimental sciences ever since the eighteenth century, Georges Canguilhem and Ernest Nagel were perhaps, from the 1960s onwards, among the first philosophers seriously concerned with it (Canguilhem 2015; Nagel 1961). Then the issue was largely neglected until the 1990s, when it resurfaced in connection with the use of model organisms in clinical and biomedical contexts (LaFollette and Shanks 1995) and with the functional characterization of the human genome (Culp 1997; Ankeny 2001). In the same period, some biochemists with philosophical interests drew attention to the in vitro/in vivo problem (Strand 1999; Jacob 2002), which consists in justifying how a physiological system (in vivo) can be understood through the study of some of its parts in purely chemical contexts (i.e., isolated proteins, DNA, metabolites, etc.). In the last two decades, the problem of extrapolation has drawn much wider attention. Many philosophers, historians, and scientists have begun to explore the relations between model organisms and extrapolative practices (Douglas 2000; Steel 2008; Bolker 2009; Ankeny and Leonelli 2011; Baetu 2016). The literature is very large and we will not address it here. We will rather connect the problem of extrapolation with our map of performative epistemology in experimental biology. In current philosophical terminology, extrapolation is often characterized as a bridge connecting a "surrogate model" with a specific epistemic goal. Tudor Baetu defined the surrogate model as "…a more manageable experimental setup for studying a phenomenon, where this experimental setup serves as a substitute for another, experimentally less manageable, but physiologically more relevant setup" (Baetu 2016). That is to say, a tameable system appropriate for extracting reliable information about the real world. From our perspective, a surrogate model is an experimental system or, possibly, an element of an experimental system. Each experimental system, or part of it, can in fact be seen as a surrogate model virtually describing many different biological mechanisms as they supposedly happen in the natural environment (Bolker 2009).
In brief, experimental systems (or parts of them) stand as representatives of some specific portions of the world "out there." We could even suggest that such "surrogate models" are like maps representing a particular
process or entity (while we know that the map is not the territory). Of course, we might think that studying whole organisms (in vivo experimentation) could provide more direct knowledge of organic nature. However, in vivo approaches to biology have a limited grasp of inner biological mechanisms. When a practitioner intervenes on a whole model organism, she has no real access to many of the internal processes. To get epistemic access to these processes she has to perform in vitro experimentation, thereby taking apart small or large elements of the organic system. She has to sacrifice the organism, extract some parts of it, and perform experiments for detecting specific entities or processes, such as changes in the expression of some specific gene or in the production of a metabolite. For instance, suppose we are interested in studying the metabolism of sugar in some bacteria. We can observe the living model organism in our lab and attempt to infer the metabolism of sugar from some (not necessarily direct) observable physiological effects we have caused through the presence of a certain effector in the culture medium. But, in truth, we are not really observing the processes of bacterial metabolism. We are assuming that the physiological changes we might perceive at the level of the whole organism correspond to an inner biochemical and physiological process. Such an assumption - which underpins the interpretation of the experimental results - is the outcome of a long series of previous extrapolations tied to particular experimental systems (isolated macromolecules, primary or secondary cell cultures, fractionated cellular extractions, etc.). In short, if we visualize metabolism as a systemic process involving the whole organism (or, in this case, a culture of many of them), such a comprehensive perspective is itself the outcome of organism sections (microscopic or macroscopic) fit for being handled in vitro and then extrapolated to a whole picture that corresponds, supposedly, to how metabolism really works. A further problem related to the use of model organisms is that they cannot be considered closer to nature than any other experimental system (as has sometimes been suggested). Model organisms have been selected for specific experimental purposes and survive in highly artificial conditions: they subsist in very "unnatural" regimes of nutrition, health, cleanliness, light, stress (or lack thereof), etc. Therefore, it is far from obvious that laboratory organisms can be deemed "reliable representatives" of organisms in nature. The peculiar nature of model organisms makes them fit for producing repeatable results (and, we might add, "tight" knowledge) in virtue of their standardization, but if we aim to "extrapolate" from them the "real" functioning of some specific mechanism, we are no better off than in the case of any other experimental system. In short, a reliable extrapolation cannot rely upon a seemingly "ideal" experimental system or an "exemplary" model organism, but must rest on many conceptual, practical, and material justifications, including the consideration of important non-epistemic risks of using the wrong model organisms in situations where extrapolative practices might have social consequences (Douglas 2000).
In fact, when the extrapolation is done from a model organism to another organism of a different species (say, humans), important problems emerge, and the case has been widely discussed in the recent literature (LaFollette and Shanks 1993, 1995; Culp 1997; Ankeny 2001; Ankeny and Leonelli 2011; Steel 2008; Baetu 2016). Many
strategies have been proposed for evaluating successful extrapolations, such as phylogenetic closeness and mechanism similarity (Weber 2004). However, the problem is far from being settled, even for the simplest cases. One might be tempted to think that if we could work directly with humans (ethical issues aside), extrapolation issues in the biomedical sciences would vanish. After all, we would not "extrapolate" the genetic mechanisms of a disease's development from other species (say, a knockout mouse), but "observe" directly how the disease arises in humans. Yet this would be a deceptive solution, because we would have the same problems we mentioned earlier with experimenting on whole organisms. The practitioner needs to perform many in vitro studies involving primary and secondary cell cultures, sectioned organs, tissues, extracts, etc., in order to have effective epistemic access to the inner biological mechanisms producing the development of particular diseases. And she also needs a great amount of previously extrapolated knowledge to interpret the results. Although we might have some advantages in working with imaginary "knockout humans" or human samples, we would probably have enormous disadvantages that would eventually make the research impossible (or highly unreliable). In fact, model organisms are not only selected for their evolutionary or genetic proximity to humans (when human biology is the main subject); they are also chosen for their aptness for the lab environment and lab work. They reproduce and grow fast, are not expensive to maintain, and can be genetically standardized (Bolker 1995), all requirements that imaginary laboratory humans could hardly meet (again, with all the ethical issues aside). In short, whether we manipulate whole organisms (in vivo) or parts of them (in vitro), the problem of extrapolation emerges over and over again. In spite of the seemingly speculative nature of extrapolation, the possibility of studying one system using another, simpler system is a highly successful strategy in science. Many useful drugs are the outcomes of in vitro experimentation and extrapolation. Yet, whether an extrapolation is true or false, good or bad, effective or ineffective can only be established on the pragmatic grounds of a scientist's finest contextual guesses based upon the best available evidence (i.e., tight knowledge), mutual agreements over experimental success, and a sensible consideration of the epistemic (or non-epistemic) risks involved in getting the right kind of extrapolation. In the end, the reliability of an extrapolation depends on many sophisticated assumptions guiding the set of constrained actions; on the productivity, efficiency, and precision of the standardized artifacts constituting an experimental system; and on the robust knowledge employed in a particular research program. Unfortunately, there are no transcendent principles (logical or metaphysical) that could guarantee, beyond our collection of practical activities, technologies, and experimental systems, the plausibility or accuracy of our extrapolations. But this does not imply that just any kind of extrapolation would work, because the very practice of extrapolating knowledge is so intertwined with, and constrained by, many epistemic elements (theories, practices, standardized objects and procedures, "tight" knowledge, etc.) that only a few candidates for extrapolation would sound justifiable, rigorous, or even meaningful.
Besides the complex pragmatic elements surrounding extrapolative practices, one could generally consider an extrapolation as reliable (or successful) when "tight" knowledge of particular entities, as revealed by different experimental systems, can be coherently placed within a larger theoretical framework aiming to describe the "real" biological nature. Accordingly, knowledge produced in experimental biology has itself a mosaic character based on innumerable extrapolations; a mosaic that presupposes many different experimental systems and the data they constantly produce (Baetu 2016). Biological theories themselves are constituted by a coherent assemblage of all this experimental information and, therefore, are the outcome of many different extrapolations linked to several experimental systems. Extrapolation, in that sense, would be a kind of inference justified by a whole set of experimental practices – i.e., constrained actions, standardized objects, epistemic "tightening" – giving us a consistent and constantly revisable picture of the world "out of the lab."
4.6 Conclusion
Following Pickering, we can conclude that the change in emphasis, from representation to performance, is not a mere descriptive shift in how to understand scientific knowledge. New philosophical problems arise, and new concepts need to be coined and shaped. For example, in a scientific world where entities are not just represented, described, or explained but are made and used, therefore touched, detached, melted, spliced, knitted, twisted, and transformed, the question about their existence does not really matter. What does matter, though, is to what extent we can extrapolate, from all these material activities, to the "real" working of the "external" reality. However, before any "extrapolation" can be reckoned as a good one, the experimental objects need to be strategically assembled, fixed, and standardized. Indeed, we have argued that the experimental activity needs to be framed within four cardinal points: constrained action, standardization, epistemic "tightening," and extrapolation. Each involves different epistemic and ontological assumptions, presents its own theoretical problems, and offers its own strategic solutions. Even though we distinguish these points for analytical purposes, we do not consider them as arranged in a hierarchical or chronological order. Rather, the points are part of a dynamic system that we have called the epistemic experimental space (EES), i.e., the space in which experimental knowledge is produced, justified, and finally extrapolated to the world "out there." The EES is represented in Fig. 4.1.

Fig. 4.1 General map of the EES

We can start to read the picture from below and begin with the item "biological theories". The reader can easily notice that while biological theories are the outcomes of previous extrapolations, they also constrain and orient experimental practices (constrained actions). In turn, the experimental practices are constituted by a set of more or less standardized techniques or instruments that, all together, form an experimental system. In fact, as we have shown, the experimental systems are made of highly standardized elements
capable of reproducing "tight" knowledge and, therefore, of generating consistent epistemic objects. The knowledge produced within the walls of the laboratory can be extrapolated to the world outside and eventually used for delivering representations of the "biological nature." Such representations, in the form of biological theories, constrain further experimental actions, and so on. We might have started our account with the process of epistemic tightening or standardization rather than with biological theory. The order is not essential, because the different processes are constantly interacting, going backward and forward. The EES represents a system in which each point presupposes the existence of the others (of course, to a certain extent and depending on the particular cases). Normally, you cannot get biological theory without extrapolation, nor can you have extrapolation without some kind of standardization. Each point works as a condition, or as an outcome, of the other points. Through our EES, we have tried to single out some of the conditions making experimental knowledge in biology a highly consistent, reliable, and successful epistemic activity. Acknowledgments We acknowledge the Fondo Nacional de Desarrollo Científico y Tecnológico de Chile (Project grant N. 1171017) for the financial support. We also thank the reviewers for their very insightful feedback.
References
Ankeny, R. (2001). Model organisms as models: Understanding the "Lingua Franca" of the human genome project. Philosophy of Science (Proceedings of the 2000 Biennial Meeting of the Philosophy of Science Association), 68, S251–S261.
Ankeny, R., & Leonelli, S. (2011). What's so special about model organisms? Studies in History and Philosophy of Science Part A, 42, 313–323.
Baetu, T. (2016). The "big picture": The problem of extrapolation in basic research. British Journal for the Philosophy of Science, 67, 941–964.
Baird, D., & Nordmann, A. (1994). Facts-well-put. British Journal for the Philosophy of Science, 45, 37–77.
Bolker, J. (1995). Model systems in developmental biology. BioEssays, 17, 451–455.
Bolker, J. (2009). Exemplary and surrogate models: Two modes of representation in biology. Perspectives in Biology and Medicine, 52, 485–499.
Boon, M. (2012). Understanding scientific practices: The role of robustness notions. In L. Soler (Ed.), Characterizing the robustness of science: After the practice turn in philosophy of science (Boston studies in the philosophy of science, Vol. 292).
Canguilhem, G. (2015). Le normal et le pathologique. Paris: Presses Universitaires de France.
Collins, H. M. (1987). Changing order: Replication and induction in scientific practice. Chicago: University of Chicago Press.
Culp, S. (1995). Objectivity in experimental inquiry: Breaking data-technique circles. Philosophy of Science, 62, 430–450.
Culp, S. (1997). Establishing genotype/phenotype relationships: Gene targeting as an experimental approach. Philosophy of Science, 64(4), S268–S278.
Douglas, H. (2000). Inductive risk and values in science. Philosophy of Science, 67, 559–579.
Eronen, M. (2015). Robustness and reality. Synthese, 192, 3961–3977.
Falk, R. (1984). The gene in search of an identity. Human Genetics, 68, 195–224.
Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge: Cambridge University Press.
Hacking, I. (1992). The self-vindication of the laboratory sciences. In A. Pickering (Ed.), Science as practice and culture (pp. 29–64). Chicago: University of Chicago Press.
Jacob, C. (2002). Philosophy and biochemistry: Research at the interface between chemistry and biology. Foundations of Chemistry, 4, 97–125.
Kohler, R. (1994). Lords of the Fly: Drosophila genetics and the experimental life. Chicago: University of Chicago Press.
LaFollette, H., & Shanks, N. (1993). Animal models in biomedical research: Some epistemological worries. Public Affairs Quarterly, 7, 113–130.
LaFollette, H., & Shanks, N. (1995). Two models of models in biomedical research. The Philosophical Quarterly, 45, 141–160.
Nagel, E. (1961). The structure of science: Problems in the logic of scientific explanation. New York: Harcourt, Brace & World.
Nederbragt, H. (2003). Strategies to improve the reliability of a theory: The experiment of bacterial invasion into cultured epithelial cells. Studies in History and Philosophy of Biological and Biomedical Sciences, 34, 593–614.
Pickering, A. (1995). The mangle of practice: Time, agency and science. Chicago: University of Chicago Press.
Polanyi, M. (1962). Personal knowledge: Towards a post-critical philosophy. Chicago: University of Chicago Press.
Rheinberger, H.-J. (1997). Toward a history of epistemic things: Synthesizing proteins in the test tube. Princeton: Princeton University Press.
Steel, D. (2008). Across the boundaries: Extrapolation in biology and social science. Oxford: Oxford University Press.
Strand, R. (1999). Towards a useful philosophy of biochemistry: Sketches and examples. Foundations of Chemistry, 1, 269–292.
Waters, K. (2007). Causes that make a difference. The Journal of Philosophy, 104(11), 551–579.
Weber, M. (2004). Philosophy of experimental biology. Cambridge: Cambridge University Press.
Wilholt, T. (2009). Bias and values in scientific research. Studies in History and Philosophy of Science Part A, 40(1), 92–101.
Wimsatt, W. (1981). Robustness, reliability and overdetermination. In M. Brewer & B. Collins (Eds.), Scientific inquiry and the social sciences (pp. 124–163). San Francisco: Jossey-Bass.
Chapter 5
Life on Earth Is Not a Passenger, but a Driver: Explaining the Transition from a Physicochemical to a Life-Constrained World from an Organizational Perspective
Charbel Niño El-Hani and Nei Nunes-Neto
C. N. El-Hani, Institute of Biology, Federal University of Bahia and National Institute of Science and Technology in Interdisciplinary and Transdisciplinary Studies in Ecology and Evolution (INCT IN-TREE), Ondina, Salvador, BA, Brazil
N. Nunes-Neto (*), Faculty of Biological and Environmental Sciences, Federal University of Grande Dourados and National Institute of Science and Technology in Interdisciplinary and Transdisciplinary Studies in Ecology and Evolution (INCT IN-TREE), Dourados, MS, Brazil

5.1 Introduction
In this chapter we address the transition from geochemical to biogeochemical cycles, which corresponds to a transition from a purely physicochemical to a life-constrained system. In order to achieve this aim, we shall base our analysis on our previous works on the functionality, normativity and organization of ecological systems (see Nunes-Neto et al. 2014, 2016a). We consider that an organizational approach to ecological systems offers an adequate ground for explaining this transition, appealing to an understanding of the organizational closure of the system, based on a mutual dependence between constraints exerted by the entities composing it. We will begin by introducing the organizational approach to ecological systems, focusing on the concepts of function and organization, and explaining how this approach grounds the use of such concepts when considering ecological systems. Then, we will explain the case study to which we will apply the organizational approach, namely, a system composed of marine phytoplankton and other marine organisms, ocean water, the atmosphere and clouds, which has been studied by climatologists, biogeochemists, ecologists and evolutionary biologists since the 1970s. Subsequently, we will show how the organizational approach to ecological systems allows us to understand the emergence over time of an ecological-biogeochemical
system connecting marine organisms and cloud formation over the oceans. Surely, this is just one example of life-constrained systems among many, but it will allow us to show how such systems are understood from an organizational perspective. In the final section of the chapter, we will discuss in a broader way the role of life on Earth and propose that different approaches in science and philosophy converge to more or less the same general idea, namely, that life influences physicochemical conditions in a way that ultimately contributes to its self-maintenance.
5.2 The Organizational Approach to Biological and Ecological Systems
In the 1990s and 2000s, a number of versions of the organizational approach were proposed in the philosophy of biology, being mainly applied to cells and organisms (e.g., Schlosser 1998; Collier 2000; Bickhard 2000, 2004; McLaughlin 2001; Christensen and Bickhard 2002; Delancey 2006; Edin 2008; Mossio et al. 2009; for general discussions on teleology, teleonomy and ends in the life sciences, see also Chaps. 6, 8 and 11 in this volume, by Rosas and Morales 2019, Caponi 2019 and Olmos et al. 2019, respectively). All of them share the idea of grounding functional ascriptions on the fact that biological systems realize a specific kind of causal regime, in which the actions of a set of parts are a condition for the persistence of the whole organization through time. Organizational theories claim that we can ascribe a function to a part of a living system by considering the causal loop involved in the self-maintenance of the latter, more specifically, the role played by that part in this self-maintenance, its specific effect that contributes to the persistence of the biological organization (which is its function). Consequently, organizational theorists claim that the functional effect has an explanatory relevance for an account of the very existence of the functional bearer. In an organism, a function is, at the same time, an effect and a cause of the current presence of a functional trait, according to a causal loop that can be conceived from a systemic perspective (see below). The organizational approach thus accounts for the reasons for the existence of a functional trait, endorsing a teleological interpretation of functionality (for other discussions on teleology and functionality in the life sciences, see Rosas and Morales 2019, Caponi 2019 and Olmos et al. 2019, all in this volume). Let us begin by explaining some basic ideas of the organizational approach, through a formal definition and an example. As stated by Moreno and Mossio, a trait T has a function in the organization O of a system S if and only if the following conditions, Cn, are satisfied:
C1: T exerts a constraint that contributes to the maintenance of the organization O.
C2: T is maintained under some constraints of O.
C3: O realizes closure. (Moreno and Mossio 2015, p. 73)
We can apply the definition to a standard example used when discussing function. On the one hand, the heart (T) exerts a constraining action on the blood flowing through the whole body (the system S), and in this way contributes to the maintenance of the organization O of S. This corresponds to C1 in the formalization above, which represents a bottom-up influence (from the part to the whole system). On the other hand, according to C2, the heart (T) is maintained under constraints of the organization O of S. That is to say, the heart depends on other structures (such as the liver, the kidneys, etc.) which constitute the very organization of the system S, since the heart needs, for instance, to receive nutrients (instead of toxic substances) that come from other organs in order to maintain itself. This is a top-down relationship (from the organization as a whole to the part). Finally, according to C3, the organization O of S realizes closure, because of the very nature of the relationships described in C1 and C2. In sum, we could say that the heart produces an effect (its function, to pump blood) which contributes to the maintenance of other organs (say, the liver), as it makes it possible for nutrients to be delivered to them. The liver, once well maintained, then contributes to the other organs through its function of detoxification. In this way, the liver contributes to the detoxification of the blood and, thus, to the maintenance of the heart. Even more directly, the heart exists (continues to exist, is maintained) because of what it does (its function). Therefore, the (functional) effect of the heart is a cause of the heart. However, in order to understand the central ideas in organizational approaches, we also need to explain and distinguish between closure of processes and closure of constraints. Generally speaking, closure happens when a sequence of physicochemical processes forms a causal loop, that is, a circular sequence of events. It happens, for instance, when a process A causes a process B, which causes C, which, in turn, causes A. Some purely physical or chemical systems are characterized by a closure of processes. To consider a concrete example, think of a closed glass bottle half full of water, receiving solar radiation. The solar radiation traverses the walls of the bottle and heats the water. Once the water reaches a given temperature, it begins to evaporate, and the water vapor, after rising, condenses at the top of the bottle, thus falling as liquid water, which is now again subject to evaporation. The cycling of water molecules inside the bottle is a thermodynamical, physicochemical, circular flow, constrained only by external entities, in this case the glass and the sun. The glass and the sun act, then, as external constraints, which are not re-generated by the cyclic thermodynamic flow of water. In other words, the glass and the sun act as boundary conditions, since they strongly influence the internal cycle of water without being produced or re-generated by it. The cyclic flow of water is an instantiation of a closure of processes. Now consider the hydrologic cycle on prebiotic Earth.
In simple terms, it amounts to a set of processes that generates, under certain boundary conditions, a cycle of causal relations in which each of these processes contributes to the maintenance of the whole and is, in turn, maintained by the whole: the sun evaporates water from the Earth's surface, forming clouds; when rising to higher layers of the atmosphere these clouds get colder and generate rain; and the rain, in turn, contributes to generating water on the Earth's surface once again, which evaporates and re-generates clouds,
and so on. This is a geochemical example of a closure of processes. But is it also a closure of constraints? We need to turn our attention, then, to closure of constraints. To do so, we should begin by stating what a constraint is and how constraints can be differentiated from processes. Constraints are specific structures that have the status of local and contingent causes that reduce the degrees of freedom of the system or entity on which they act (Pattee 1972). We can elaborate further on the notion of constraint by contrasting it with that of process. While processes refer to physicochemical changes, constraints are entities that act upon processes, reducing their degrees of freedom while remaining unaffected by them, at least under certain conditions, at certain time scales, or from a certain point of view (Moreno and Mossio 2015) (Fig. 5.1a). Very concretely, consider two examples. First, in an organism, the organs (the heart, the liver, the lungs, etc.) constitute some of the constraints of the organic system. Second, in a cell, the membrane, the organelles and the enzymes are some of the constraints inside the cell. One may then ask whether we are claiming that organs, membranes, organelles and enzymes are not affected by other cell components, or that they do not change with time. That is not what we are claiming. One should pay attention to a key qualification in the definition of constraints: constraints remain unaffected by the processes they act upon under certain conditions, time scales or points of view.

Fig. 5.1 (a) Constraint; (b) dependence between constraints; (c) closure of constraints. Ai, Bi, and Ci are entities within a system; τi, specific time scales; the simple arrows indicate processes; the zig–zag arrow, constraints. (From Moreno and Mossio (2015), figures elaborated by Maël Montévil)

To characterize a structure as a constraint does not entail claiming that it does not change or that it is not affected by other structures and processes under conditions, time scales or points of view other than those assumed when it is explained as a constraint (see Fig. 5.1c). In the case of a closure of processes, constraints are, as explained above, only external (e.g., boundary conditions, parameters and restrictions on the configuration space, etc.), which do not depend on the dynamics upon which they act (Umerez 1994; Juarrero 1998). At most, some constraints may be produced within the system of interest, playing a role in generating another constraint in the system, but these constraints are not mutually dependent. Autonomous systems realize closure in a fundamentally different way, which leads to their specificity, namely, their capacity for self-determination, which amounts to their capacity for self-constraint.1

1 In living systems, self-constraint involves self-determination in the sense of self-maintenance but not self-generation, as these systems do not generate themselves spontaneously as wholes, at least under the current conditions on Earth. In the remainder of the chapter, we will thus refer to "self-maintenance."

It is this specific way of realizing closure that the concept of closure of constraints intends to capture. Constraints can depend on one another (Fig. 5.1b). From this, one can derive closure of constraints as a specific mode of dependence within a set of constraints, in which a system that produces some of the constraints harnessing its underlying dynamics realizes closure. In this case, not only do physicochemical processes realize closure, but the loop results from a more complex interlinking between processes (forming a closure of processes) and constraints (acting on the processes). The key idea here is that of mutual dependence between constraints: formally, a set of constraints C realizes closure when, for each constraint Cp belonging to C, (i) Cp depends directly on at least one other constraint in C (i.e., Cp is dependent), and (ii) there is at least one other constraint Cq, also belonging to C, which depends on Cp (i.e., Cp is an enabling condition) (Fig. 5.1c). This mutual dependence generates the capacity for self-maintenance, which is specific to the way autonomous systems realize closure (for more details, see Moreno and Mossio 2015). For instance, a living cell is capable of self-maintenance because of the mutual dependence between constraints like enzymes and membranes, which harness chemical reactions inside the cell in a cyclic way. In turn, each of these constraints contributes to the generation (and re-generation) of other constraints. More specifically, very often the action of an enzyme makes it possible for a particular chemical reaction to happen in the conditions observed in a living system (or at least to proceed at a rate that is adequate to the self-maintenance of the cellular system). Then, in a loop, the enzyme and its constraining action are necessary conditions for the outcome of the reaction, which plays a role in the self-maintenance of the cell. This outcome, in turn, is a necessary condition for the production and maintenance of adequate concentrations of the enzyme itself, since the result of the reaction generates a cascade of products (such as other enzymes) that will act on the
very production of the mentioned enzyme. In other words, there is a closure in this system made possible by the mutual dependence between constraints (Fig. 5.1c). Moreover, the action each constraint exerts in the network of mutually dependent constraints corresponds to the instantiation of functions. Imagine any metabolic cycle in a cell, where an enzyme C2 harnesses a chemical reaction (a process), contributing to the transformation of A2 into C4, and another enzyme, C3, harnesses in turn a chemical reaction leading from A3 to C2. Let us suppose that C2, produced through the constraining action of C3, is a material condition for the re-generation of new molecules of C4, which in turn constrains a reaction leading to C3. Then we can say that C2 is dependent on C3 and C4 (because it depends on a product resulting from the action of these constraints), while, at the same time, C2 is an enabling condition for C4 and C3 (because the production of the latter depends on C2, just as the production of C2 is dependent on C4 and C3). Notice, finally, that the very same structure (Ci), which is described as an unaffected constraint at a given time scale τi, is described as an affected outcome of a process at a different time scale τj (see Fig. 5.1c). Nunes-Neto et al. (2014) applied this organizational account to ecological systems. They defined an ecological function as "a precise (differentiated) effect of a given constraining action on the flow of matter and energy (…) performed by a given item of biodiversity, in an ecosystem closure of constraints" (Nunes-Neto et al. 2014, p. 131). We will take this definition as a basis here, but below we will propose to broaden the range of functional items in the ecological domain in order to include abiotic items, showing how they can play the role of constraints. We can explain the basic idea further by considering the example of a minimal ecosystem with three functional groups (producers, consumers and decomposers of organic matter) and two hierarchical levels (i.e., in a hierarchy of control, cf. Ahl and Allen 1996): the level of the items of biodiversity (which act as the constraints, in a closure of constraints) and the level of the flow of carbon atoms (the processes of interest here, constituting a closure of processes). In this minimal ecosystem, the items of biodiversity are functional groups. The producers of organic matter (plants) constrain the flow of carbon atoms, reducing its degrees of freedom, through carbon fixation in photosynthesis. The flow of carbon atoms becomes more determinate, more harnessed, as these atoms become part of the plant biomass. Part of this biomass (leaves, fruits, sprouts, etc.) is eaten by the consumers (herbivorous animals), which further channel the flow of carbon as the carbon atoms become part of their biomass. When the consumers and producers die, the animal carcasses and the plant leaves, fruits and twigs constitute the organic matter that is further processed by the decomposers, which transform it into nutrients available to plants and thus close the cycle by reducing once again the degrees of freedom of the flow of carbon atoms. There is a mutual dependence between these constraints. By constraining the flow of matter (in the example, carbon atoms) in a way that reduces its degrees of freedom, the consumers, for example, create enabling conditions for the existence of the decomposers and, in this way, exert an effect on the ecosystem as a whole.
But while, on the one hand, they are enabling conditions for the decomposers, they are, on the other hand, dependent on the producers of organic matter and on the very decomposers that
mobilize nutrients to the producers. The producers, consumers and decomposers exert specific constraining actions that amount to the functions they play within the ecological system, contributing to its self-maintenance. Therefore, when we say that these are functional groups, the functions they play correspond to their constraining actions on the flow of matter and energy within the ecosystem closure of constraints. Once we have explained the organizational approach to ecological systems, and applied it briefly to an example of a minimal ecosystem (only to help in understanding the main ideas), we can move to the ecological case that interests us most in this chapter, which we consider adequate to exemplify both the organizational approach and our general thesis on the role of life on Earth.
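The condition of mutual dependence at work in this minimal ecosystem can also be made concrete with a small computational sketch. The snippet below is our own illustration, not a model from Nunes-Neto et al. (2014) or Moreno and Mossio (2015), and the dependency table is a deliberately coarse reading of the example above: each functional group is listed with the constraints it directly depends on, and a simple check verifies that every constraint is both dependent and enabling, which is what the closure of constraints requires.

# A toy check of the closure-of-constraints condition stated in Sect. 5.2:
# every constraint must be dependent (it depends on at least one other constraint)
# and enabling (at least one other constraint depends on it).
# 'depends_on' maps each constraint to the set of constraints it directly depends on.
depends_on = {
    "producers":   {"decomposers"},              # producers need nutrients mobilized by decomposers
    "consumers":   {"producers"},                # consumers eat plant biomass
    "decomposers": {"producers", "consumers"},   # decomposers process dead organic matter
}

def realizes_closure(depends_on):
    constraints = set(depends_on)
    for c in constraints:
        dependent = bool(depends_on[c] & (constraints - {c}))
        enabling = any(c in depends_on[other] for other in constraints - {c})
        if not (dependent and enabling):
            return False
    return True

print(realizes_closure(depends_on))  # True: each functional group is both dependent and enabling

# By contrast, the sun and the glass of the bottle example act only as external
# boundary conditions: nothing in the system re-generates them, so closure fails.
depends_on_bottle = {"sun": set(), "glass": set(), "water_cycle": {"sun", "glass"}}
print(realizes_closure(depends_on_bottle))  # False: the sun and the glass are not dependent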
5.3 How Marine Life Makes Clouds
Lay people and even some scientists may think that clouds are merely a result of the evaporation of water and, as such, an outcome of purely physicochemical processes. However, at least since the 1980s our view on this topic has changed in a very surprising way. This change was brought about in 1987 by Charlson and colleagues, through a hypothesis that became known in the literature as the CLAW hypothesis, an acronym of its authors' names (Charlson, Lovelock, Andreae and Warren). This hypothesis is based on the well-supported observation that marine phytoplanktonic organisms release a sulphur compound that has an impact on global climate, dimethylsulphide (DMS), which derives from dimethylsulphoniopropionate (DMSP), a compound with several biochemical roles in microalgae. For instance, there is evidence that DMSP plays at least four functional roles in the metabolism of algae cells: (1) it is a solute contributing to the osmotic balance; (2) it acts as an antioxidant; (3) it contributes to cysteine and methionine inhibition through an overflow mechanism; and (4) it is a chemical signaller in marine ecology and a chemical defense of the algae against predators (for more details on these issues, including scientific, historical and epistemological analyses, see Ayers and Cainey 2007; Nunes-Neto 2008; Nunes-Neto et al. 2009a, b; Nunes-Neto and El-Hani 2011). Thus, based on empirical observations and some theoretical principles about the relationship between life and physicochemical structures, the CLAW hypothesis proposes that the highest rate of DMS emission to the atmosphere takes place in the warmest, most saline and most intensely illuminated regions of the oceans, and that the DMS released in the ocean is rapidly ventilated to the atmosphere, where it undergoes a series of oxidations, originating cloud condensation nuclei (CCN) for water vapor (Charlson et al. 1987) (see Fig. 5.2 for details).

Fig. 5.2 A schematic representation of the CLAW or life-clouds system. (From Nunes-Neto et al. 2009a)

CCNs are acidic particles exhibiting properties that make it possible for water vapor molecules to condense and, thus, to contribute to the formation of clouds over the oceans. Since clouds reflect solar radiation back to space, they tend to cool the planetary surface. As the concentration of clouds over the oceans increases, less solar radiation reaches the surface waters, and this tends – according to the hypothesis – to reduce the heat,
salinity and luminosity of the oceanic surface. As a consequence, less DMS is released by the marine phytoplankton and this, in turn, reduces the production of clouds, closing a negative feedback mechanism. This is, in sum, the mechanism proposed by the CLAW hypothesis. Hereafter, we will refer to the system in which this mechanism is instantiated as the CLAW or life-clouds system. Compared to the model presented in the original paper, we currently have a much richer picture of the mechanism instantiated in the CLAW system (see Fig. 5.2). This is not the space to explore the details, but it is important to notice, for instance, that phytoplanktonic organisms are not the only ones responsible for DMS release. This release is, instead, a result of marine food web interactions (involving viruses, bacteria and zooplankton; see Simó 2001; Ayers and Cainey 2007; Nunes-Neto et al. 2009a, 2009b). This system participates in the regulation of the Earth's climate, due to its contribution to the regulation of the planetary albedo through its influence on the amount of clouds. For this reason it has attracted increasing attention in recent years.
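To give a rough sense of how the proposed negative feedback behaves, here is a deliberately toy numerical sketch. It is our illustration only, not a model from the CLAW literature: the variables ("warmth", "dms", "albedo") and all coupling parameters are arbitrary stand-ins, chosen merely to show that, when the loop is active, the same increase in external forcing produces a smaller change in surface conditions than it would without the loop.

# Toy negative-feedback sketch of the CLAW loop (arbitrary units and parameters).
def equilibrium_warmth(forcing, feedback_on=True):
    k_dms, k_ccn, k_albedo = 1.0, 1.0, 0.5        # hypothetical coupling strengths
    warmth = forcing
    for _ in range(1000):                          # relax the loop toward a steady state
        dms = k_dms * warmth                       # warmer, brighter, saltier surface -> more DMS
        albedo = k_ccn * k_albedo * dms if feedback_on else 0.0  # DMS -> CCN -> cloud albedo
        warmth += 0.1 * ((forcing - albedo) - warmth)  # clouds reflect radiation, cooling the surface
    return warmth

for f in (1.0, 2.0):
    with_fb = equilibrium_warmth(f)
    without_fb = equilibrium_warmth(f, feedback_on=False)
    print(f"forcing={f}: warmth with feedback={with_fb:.2f}, without={without_fb:.2f}")
# With the feedback on, the same increase in forcing produces a smaller change in
# surface warmth: the loop damps, though it does not cancel, the perturbation.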
5.4 The Organizational Approach Applied to the Life-Clouds System
Since we are interested in the transition from an abiotic system to a life-constrained ecological system, we could now ask: what would a prebiotic version of the life-clouds system (that is, the system including the ocean, clouds, atmosphere and climate) look like? It is reasonable to assume that in a prebiotic version this was a system based on a closure of processes, just like the one we have described above in the case of the water cycle on prebiotic Earth. As microorganisms (including photosynthetic marine microorganisms) appeared and began to use resources in an oxidative atmosphere (such as water, oxygen, nutrients, etc.), they started to establish metabolic interchanges, with the consequence that the ecological networks, with mutual dependence between their components, became increasingly relevant to the atmosphere and climate dynamics. For the sake of our argument, a simplified explanation of this more complex system can be formulated as follows. According to an organizational view – as one possible theoretical perspective to conceptualize the transition we are interested in here – a key change happens in the cycling of sulphur atoms and molecules when biological or ecological structures constraining their flow appear. Namely, what initially was merely a closure of processes (that is to say, with the constraints acting on the flow of processes only externally – as boundary conditions) became a closure of constraints, as these were re-generated by the system itself. We then need to point to the functional complementarities in the organizational closure of the life-clouds system, which allow us to apply the idea of mutual dependence between constraints. That is, we need to show that the functional action of each constraint in the system represents conditions of possibility for (1) the maintenance of another constraining entity and for (2) its own maintenance, indirectly. The functional effect of the marine microbiota is to produce DMS (from DMSP) in the ocean water, which is then ventilated to the atmosphere and undergoes a series of oxidations until the remaining sulphur originates the cloud condensation nuclei, which, in turn, become part of the clouds. The clouds are constraining entities on the flow of sulphur, since they keep the sulphur atoms and molecules as part of their physical structures (rather than as free-floating substances in the atmosphere, with higher degrees of freedom) while they move in the atmosphere. A fraction of these clouds formed over the oceans will move to land and, when conditions for precipitation are fulfilled, they precipitate the sulphur along with the water. Thus, the sulphur atoms and molecules fall on land, reaching soils, lakes and rivers, and are eventually carried back to the oceans through the rivers. Both the rivers and the rocks along them play a constraining role in this flow, mainly through the mechanical action of river waters on soils and rocks, causing their leaching and erosion, which increase the concentration of sulphur in the water. This works like a channeling, which reduces the degrees of freedom of sulphur on land. In the ocean,
sulphur will be available to the metabolism of marine organisms, being part of the DMSP – the precursor of DMS – in algae cells, thus closing the sulphur cycle. It is important to notice that a mutual dependence relationship obtains between two constraining entities, the marine microbiota and the clouds. Each one of them is both dependent and enabling. The marine microbiota is dependent on the mechanical action of the rivers and rocks in order to obtain sulphur for its metabolism, but this microbiota is also an enabling condition, i.e., a condition of possibility for the existence of clouds over the oceans, since the clouds depend on the sulphur compounds, derived from DMS, for their structures. In turn, the clouds, while being dependent on the microbiota, are an enabling condition for the rivers and rocks on land. Of course, the clouds are not sufficient conditions for the rivers and rocks, but the adequate conditions for the (re)generation of rivers (that is, the flow of water, the availability of dissolved oxygen, the chemical composition of the water, etc.) are, at least in part, enabled by the clouds. Here it is important to notice that in this chapter we propose an extension of the scope or domain of the items of functional ascription in ecology (that is, the entities which can be seen as having functions) in order to include inanimate or abiotic items. This is different from what we have proposed before, where we identified the items of functional ascription with the items of biodiversity, thus limiting functions to biotic entities (see Nunes-Neto et al. 2014). This exclusion of all "putative" abiotic entities – such as clouds – as functional items seems inadequate to us now, in the face of the fact that, both in organisms and in ecosystems, functional roles assumed to be relevant are not restricted to the biotic parts. For instance, let us take the mineralized shell of a bivalve. We assume it has the function of protecting the soft parts of the animal's body, but the shell is not made by living cells, being made, instead, basically of calcium carbonate (CaCO3) (see Taylor et al. 1969). The same goes for the non-living or non-biotic parts of ecosystems, such as the clouds: although they are abiotic structures, they can be assumed to be functional in the context of an organizational view.
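The mutual dependence claimed in this section can again be summarized schematically. The short snippet below is our own encoding of the constraint set described above (marine microbiota, clouds, and the rivers-and-rocks system), not a model from the cited literature; it simply reports, for each constraining entity, what it depends on and what it enables, mirroring the dependent/enabling vocabulary of the organizational approach.

# The mutual dependence described above, encoded as a small dependency table
# (our schematic reading of the text, with arbitrary entity names).
depends_on = {
    "marine_microbiota": {"rivers_and_rocks"},   # needs sulphur leached back to the ocean
    "clouds":            {"marine_microbiota"},  # CCN derive from DMS released by the microbiota
    "rivers_and_rocks":  {"clouds"},             # their (re)generation is partly enabled by rain
}

for constraint, needs in depends_on.items():
    enabling_for = [c for c, deps in depends_on.items() if constraint in deps]
    print(f"{constraint}: dependent on {sorted(needs)}, enabling condition for {enabling_for}")
# Every entity in the loop is both dependent and enabling, which is the sense in
# which the life-clouds system realizes a closure of constraints rather than a
# mere closure of processes.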
5.5 Concluding Remarks
We have seen that the life-clouds system is one example of how life builds or transforms its environment. Moreover, we have proposed in this chapter that the organizational approach can help explain the transition from a closure of processes to systems characterized by a closure of constraints, that is, to a life-constrained world. In this way, we want to suggest that the organizational approach is a consistent epistemological ground not only for the life-clouds system, but also for different scientific perspectives – situated in different disciplines – which, in spite of their differences, share a common view on the relationship between life and its physicochemical environment.
We hold here that four different and independent approaches propose the same general idea, namely, that life influences physicochemical conditions in a way that ultimately contributes to its self-maintenance, although with their respective specificities and with different emphases or domains of application.2 This convergence can be taken as adding robustness to this thesis, and this is the reason why we mention it here.

2 This is different from a frequent view that the geophysical and geochemical structures, processes and conditions we find on Earth are merely a result of astronomic or geological events, a view still widespread in school science and found in academic science too. The view we highlight here treats such geophysical and geochemical structures, processes and conditions as (partially) effects of the functionality of biological organisms, gathered in populations and ecological communities. This does not mean, of course, that non-living factors do not have any influence on the building of the geophysical and geochemical structures, processes and conditions. To deny this would be simply mistaken. But many geochemical and geophysical elements we know so well – such as soils, biogenic clouds, etc. – are found – as far as we know – only on Earth, not on abiotic planets like Mars and Venus. In interaction with astronomical and geological forces, organisms have dramatically contributed, throughout history, to building these structures, processes and conditions, and ultimately the latter have contributed to the existence of organisms.

First, the organizational approach to biological/ecological systems (OABS/OAES), which we have here mobilized as our main epistemological framework, proposes that organisms and ecological systems are organizationally closed systems, i.e., show a closure that is reached by the establishment of mutual dependence between constraints. Thus, organisms and ecological systems do not show a closure that amounts only to some cyclic pattern in a physicochemical flow, i.e., they do not show merely a closure of processes. Constraints in a living system are mutually dependent: the functional effect of each constraint in the system is the cause of at least one other constraint, and, thus, it is also a cause for itself to persist, since contributing to the persistence of another constraint creates, at least in part, the conditions of possibility for its own maintenance. This has been applied mostly to organisms (see Mossio et al. 2009; Saborido et al. 2011; Moreno and Mossio 2015) but also, more recently, to ecological systems (Nunes-Neto 2013; Nunes-Neto et al. 2014; Cooper et al. 2016) and socio-ecological systems (Nunes-Neto et al. 2016a). The organizational approach to biological and ecological systems proposes the general thesis we are dealing with, coming from the fields of philosophy of ecology and biology. The harnessing action of a given constraint (an organ in an organism, or a population in a larger ecological system) on the flow of processes generates a specific effect, and this effect of the process at the physicochemical level contributes to the creation of conditions of possibility for the existence of the constraint itself. Second, Earth System Science (ESS), which can be regarded, as Margulis (2004) argued, as Gaia theory by another name, proposes that the Earth biota is strongly integrated with the physicochemical (abiotic) environment in such a way that the biota tends to produce adequate conditions for itself (Johnson et al. 1997; Kump et al. 1999; Jacobson et al. 2000; Lenton and van Oijen 2002). According to Schellnhuber (1999), this scientific change of focus proposed by Gaia (from a physicochemical-centered approach to a strongly integrated biogeochemical
approach, recognizing the huge influence of life on abiotic factors), and afterwards adopted by the community of Earth System scientists, can be regarded as a "second Copernican revolution" in science (Schellnhuber 1999, p. 19). Earth System Science, in particular, supports the thesis of a life-constrained world from the point of view of climatology and biogeochemistry, which offer a more global geological and ecological perspective on the interaction between living beings and their physicochemical environment (Lovelock [1979] 2000; Johnson et al. 1997; Kump et al. 1999; Jacobson et al. 2000; Lenton and van Oijen 2002; Nunes-Neto and El-Hani 2006).3

3 Notice that we are not assuming an interpretation of the Gaia theory as claiming that the biosphere would be an organism, despite what is claimed by Lovelock himself. In previous works, we have argued for an interpretation that just considers this theory as concerning a coupling between the biota and the physicochemical environment on Earth. This is not an interpretation found only in our work (see references cited in the body of the text).

Third, the Biodiversity and Ecosystem Functioning research program (hereafter, BEF; Naeem 2002; Cardinale et al. 2007; Loreau 2010), which is mainstream in current ecological science, assumes that the functions of organisms, populations and communities have an effect on ecosystem properties, many of which are abiotic properties (like temperature, humidity, etc.) (see Nunes-Neto 2013; Nunes-Neto et al. 2013, 2014, 2016b). The BEF supports the general idea of a life-constrained world from the point of view of ecology, highlighting some aspects that are black-boxed in the above-mentioned approaches, such as the specific, micro-scale mechanisms for the influence of biodiversity on ecosystems (Naeem 2002; Loreau 2010). Fourth, Niche Construction Theory (NCT), which is becoming increasingly accepted in evolutionary biology, emphasizes the historical, evolutionary aspect of the idea of a life-constrained world (Laland and Sterelny 2006; Laland et al. 2015), something not explicit in the other approaches. According to the NCT, organisms actively transform their physicochemical environments and these ecological transformations have a significance for themselves and for their offspring, and also for different species, showing evolutionary significance, as we can see, for instance, in the case of a beaver dam (see, e.g., Laland et al. 2016). Figure 5.3 offers a synthesis of the possible confluence of these approaches.

Fig. 5.3 A schematic representation of the idea that the general subsuming thesis that life strongly influences the external physicochemical conditions, producing adequate conditions for itself, can be supported by four different theoretical perspectives. Each one of them emphasizes some aspects of this influence of life on the physicochemical environment. OAES/OABS Organizational Approach to Ecological Systems/Organizational Approach to Biological Systems, ESS/Gaia Earth System Science/Gaia research program, BEF Biodiversity and Ecosystem Functioning research program, NCT Niche Construction Theory

Although we know that these ideas are increasingly accepted in their respective fields or epistemic communities, and that some bridges between these approaches are in fact (somewhat implicitly) pointed out in the literature – such as the one between BEF and NCT (by Loreau 2010), as well as between ESS/Gaia and NCT by environmental scientists (see, for instance, Free and Barton 2007) – the fact that they converge with regard to the role of life on Earth is not obvious, as far as we know. Besides this, another main point of our work is that one possible consistent epistemological grounding for the understanding of the transition from a physicochemical system to a life-constrained, properly ecological system can be found in the organizational approach to biological and ecological systems. We think that the organizational approach offers an elegant and complete grounding for the teleological and
normative aspects of functional ascriptions (see Mossio et al. 2009; Moreno and Mossio 2015; Cooper et al. 2016; Rosas and Morales 2019, in this volume). Although the influence of the organizational approaches has recently been increasing, we think that they are still underappreciated in the current scientific and philosophical literature, and that opportunities are thereby being lost in how we account for and understand living systems. Acknowledgments We would like to thank the reviewers for all the commentaries and criticisms made, which contributed to the improvement of this chapter. CEH thanks CNPq (grant n. 465767/2014-1) and CAPES (grant n. 23038.000776/2017-54) for their support of INCT IN-TREE, and CNPq for a productivity in research grant (grant n. 303011/2017-3). NNN also acknowledges the support of the INCT IN-TREE project.
References
Ahl, V., & Allen, T. F. H. (1996). Hierarchy theory: A vision, vocabulary, and epistemology. New York: Columbia University Press.
Ayers, G. P., & Cainey, J. M. (2007). The CLAW hypothesis: A review of the major developments. Environmental Chemistry, 4, 366–374.
Bickhard, M. H. (2000). Autonomy, function, and representation: Communication and cognition. Artificial Intelligence, 17, 111–131.
Bickhard, M. H. (2004). Process and emergence: Normative function and representation. Axiomathes – An International Journal in Ontology and Cognitive Systems, 14, 121–155.
Caponi, G. (2019). The Darwinian naturalization of teleology. In L. Baravalle & L. Zaterka (Eds.), Life and evolution: Latin American essays on the history and philosophy of biology. Dordrecht: Springer.
Cardinale, B., Wright, J. P., Cadotte, M. W., Carroll, I. T., Hector, A., Srivastava, D., Loreau, M., & Weis, J. (2007). Impacts of plant diversity on biomass production increase through time because of species complementarity. Proceedings of the National Academy of Sciences – USA, 104, 18123–18128.
Charlson, R. J., Lovelock, J. E., Andreae, M. O., & Warren, S. G. (1987). Oceanic phytoplankton, atmospheric sulphur, cloud albedo and climate. Nature, 326, 655–661.
Christensen, W. D., & Bickhard, M. H. (2002). The process dynamics of normative function. The Monist, 85, 3–28.
Collier, J. (2000). Autonomy and process closure as the basis for functionality. Annals of the New York Academy of Science, 901, 280–290.
Cooper, G. J., El-Hani, C. N., & Nunes-Neto, N. F. (2016). Three approaches to the teleological and normative aspects of ecological functions. In N. Eldredge, T. Pievani, E. Serrelli, & I. Temkin (Eds.), Evolutionary theory: A hierarchical perspective (pp. 103–122). Chicago: University of Chicago Press.
Delancey, C. (2006). Ontology and teleofunctions: A defense and revision of the systematic account of teleological explanation. Synthese, 150, 69–98.
Edin, B. (2008). Assigning biological functions: Making sense of causal chains. Synthese, 161, 203–218.
Free, A., & Barton, N. (2007). Do evolution and ecology need the Gaia hypothesis? Trends in Ecology and Evolution, 22, 611–619.
Jacobson, M. C., Charlson, R. J., Rodhe, H., & Orians, G. H. (2000). Earth system science: From biogeochemical cycles to global changes. Amsterdam: Elsevier Academic Press.
Johnson, D. R., Ruzek, M., & Kalb, M. (1997). What is earth system science? In Proceedings of the 1997 international geoscience and remote sensing symposium, Singapore (pp. 688–691). Singapore: Institute of Electrical and Electronics Engineers.
Juarrero, A. (1998). Causality as constraint. In G. Vijver, S. N. Salthe, & M. Delpos (Eds.), Evolutionary systems: Biological and epistemological perspectives on selection and self-organization (pp. 233–242). Dordrecht: Kluwer.
Kump, L. R., Kasting, J. F., & Crane, R. G. (1999). The earth system. New Jersey: Prentice Hall.
Laland, K., & Sterelny, K. (2006). Seven reasons (not) to neglect niche construction. Evolution, 60, 1751–1762.
Laland, K., Uller, T., Feldman, M. W., Sterelny, K., Müller, G. B., Moczek, A., Jablonka, E., & Odling-Smee, J. (2015). The extended evolutionary synthesis: Its structure, assumptions and predictions. Proceedings of the Royal Society B, 282, 1–14. https://doi.org/10.1098/rspb.2015.1019.
Laland, K., Matthews, B., & Feldman, M. W. (2016). An introduction to niche construction theory. Evolutionary Ecology, 30, 191–202.
Lenton, T. M., & van Oijen, M. (2002). Gaia as a complex adaptive system.
Philosophical Transactions of the Royal Society of London B: Biological Sciences, 357, 683–695.
5 Life on Earth Is Not a Passenger, but a Driver: Explaining the Transition…
83
Loreau, M. (2010). Linking biodiversity and ecosystems: Towards a unifying ecological theory. Philosophical Transactions of the Royal Society B, 365, 49–60. Lovelock, J. E. [1979 2000] Gaia: A new look at life on earth. Oxford: Oxford University Press. Margulis, L. (2004). Gaia by any other name. In S. H. Schneider, J. R. Miller, E. Crist, & P. J. Boston (Eds.), Scientists Debate Gaia: The next century. Cambridge: The MIT Press. McLaughlin, P. (2001). What functions explain: Functional explanation and self-reproducing systems. Cambridge: Cambridge University Press. Moreno, A., & Mossio, M. (2015). Biological autonomy: A philosophical and theoretical enquiry. Dordrecht: Springer. Mossio, M., Saborido, C., & Moreno, A. (2009). An organizational account for biological functions. British Journal for the Philosophy of Science, 60, 813–841. Naeem, S. (2002). Ecosystem consequences of biodiversity loss: The evolution of a paradigm. Ecology, 83, 1537–1522. Nunes-Neto, N. F. (2008). Bases epistemológicas para um modelo funcional em Gaia. Dissertação de mestrado. Dissertação de mestrado/Master Thesis. Graduate studies Program in History, Philosophy and Science Teaching/Programa de Pós-Graduação em Ensino, Filosofia e História das Ciências. Salvador: UFBA. Nunes-Neto, N. F. (2013). The functional discourse in contemporary ecology. Tese de Doutorado/ PhD thesis. Graduate Studies Program in Ecology. Salvador: UFBA. Nunes-Neto, N. F., & El-Hani, C. N. (2006). Gaia, Teleologia e Função. Episteme, 11, 15–48. Nunes-Neto, N. F., & El-Hani, C. N. (2011). Functional explanations in biology, ecology, and earth system science: Contributions from philosophy of biology. Boston Studies in Philosophy of Science, 290, 185–200. https://doi.org/10.1007/978-90-481-9422-3_13.2011. Nunes-Neto, N. F., Carmo, R. S., & El-Hani, C. N. (2009a). The relationships between marine phytoplankton, dimethylsulphide, and the global climate: The CLAW hypothesis as a Lakatosian progressive problemshift. In W. T. Kersey & S. P. Munger (Eds.), Marine phytoplankton (pp. 169–185). New York: Nova Science Publishers. Nunes-Neto, N. F., Carmo, R. S., & El-Hani, C. N. (2009b). Uma conexão entre algas e nuvens: Fundamentos teóricos da hipótese CLAW e suas implicações para as mudanças climáticas. Oecologia Brasiliensis, 13, 596–608. Nunes-Neto, N. F., Carmo, R. S., & El-Hani, C. N. (2013). O conceito de função na ecologia contemporânea. Revista de Filosofia Aurora (PUC-PR). Nunes-Neto, N., Moreno, A., & El-Hani, C. (2014). Function in ecology: An organizational approach. Biology and Philosophy, 29, 123–141, 2014. Nunes-Neto, N. F., Saborido, C., El-Hani, C. N., Viana, B., & Moreno, A. (2016a). Function and normativity in social-ecological systems. Filosofia e História da Biologia/Philosophy & History of Biology, 11, 259–287. Nunes-Neto, N. F., Carmo, R. S., & El-Hani, C. N. (2016b). Biodiversity and ecosystem functioning: An analysis of the functional discourse in contemporary ecology. Filosofia e História da Biologia/Philosophy & History of Biology, 11, 289–321. Olmos, A. S., Roffé, A. J., Ginnobili, S. (2019). Systemic analysis and functional explanation: Structure and limitations. In: L. Barravale & L. Zaterka (Eds.), Life and evolution Latin American essays on the history and philosophy of biology. Dordrecth: Springer. Pattee, H. (1972). Laws and constraints, symbols and languages. In C. Waddington (Ed.), Towards a theoretical biology: Essays (pp. 248–258). Edinburgh: Edinburgh University Press. Rosas, A., & Morales, J. D. (2019). 
Cooperation and the gradual emergence of life and teleonomy. In: L. Barravale & L. Zaterka (Eds.), Life and evolution Latin American essays on the history and philosophy of biology. Dordrecth: Springer. Saborido, C., Mossio, M., & Moreno, A. (2011). Biological organization and cross-generation functions. British Journal of Philosophy of Science, 62, 583–606. Schellnhuber, H. J. (1999). ‘Earth system’ analysis and the second Copernican revolution. Nature, 402, C19–C23.
84
C. N. El-Hani and N. Nunes-Neto
Schlosser, G. (1998). Self-re-production and functionality: A systems-theoretical approach to teleological explanation. Synthese, 116, 303–354. Simó, R. (2001). Production of atmospheric sulphur by oceanic plankton: Biogeochemical, ecological and evolutionary links. Trends in Ecology & Evolution, 16, 287–294. Taylor, J. D., Kennedy, W. J., & Hall, A. (1969). The shell structure and mineralogy of the Bivalvia. Introduction. Nuculacea – Trigonacea. London: Bulletin of the British Museum Natural History. Umerez, J. (1994). Jerarquías Autónomas – Un estudio sobre el origen y la naturaleza de los procesos de control y de formación de niveles em sistemas naturales complejos. Tesis de doctorado. Doctorado en Filosofía. Universidad del País Vasco: San Sebastián.
Chapter 6
Cooperation and the Gradual Emergence of Life and Teleonomy

Alejandro Rosas and Juan Diego Morales
6.1 The Fundamental Problem: The Gradual Emergence of Life and Purpose

The origin of life is one of those mysterious frontiers that science has not yet been able to conquer. Unlike the problem of consciousness, however, science here seems close to a breakthrough. The time is now past when theorists would satisfy their lust for knowledge by postulating an élan vital, a hidden force beyond the reach of scientific experimentation, manipulation, and replication. Life is rather the manifestation of matter organized in a particular way. Even though scientists have not yet been able to understand the natural processes that lead from inert to living matter, it is clear – at least since Immanuel Kant's thoughts on the subject – that such processes cannot be understood by postulating hidden forces beyond the reach of natural science. Kant despaired, however, of our capacity to find a proper natural explanation. According to him, science would hope in vain to see the birth of a "Newton of the blade of grass" (Kant 2000). Monod used to say that teleonomy is the central problem of biology (Monod 1971). This problem is particularly pressing for a theory about the origins of life, for such a theory must simultaneously explain the emergence of teleonomic systems. If life emerged from a process that began with inert matter, and inert matter behaves according to laws or regularities without purpose, how could purposeful systems emerge from matter behaving with no purpose at all? This is the "hard problem" of the origins of life. If we understand living beings as having purposes and believe in
A. Rosas (*)
Philosophy Department, Universidad Nacional de Colombia, Bogotá, Colombia
e-mail: [email protected]

J. D. Morales
Philosophy Program, Universidad de Cartagena, Cartagena, Colombia

© Springer Nature Switzerland AG 2020
L. Baravalle, L. Zaterka (eds.), Life and Evolution, History, Philosophy and Theory of the Life Sciences 26, https://doi.org/10.1007/978-3-030-39589-6_6
the continuity between inert and living matter, it seems plausible to argue that purpose is not really there; it might be just something we project into nature. But it also seems obvious that the mind that does the projecting has the teleological property of acting with a purpose, e.g., acting with the goal of understanding the world. So, even if it is hard to accept that purpose emerged from inert matter, we cannot deny that minds, at least, are purposeful. But if we have reason to believe that minds are continuous with living beings, which are continuous with inert matter, we face the same problem again: if minds are teleological, it is hard to see how they could arise from inert matter – which they did; and if we believe that they did so arise, how can they behave driven by goals? This is the same dilemma that Kant faced when he spoke of "natural ends," meaning organisms made of matter but apparently impossible to understand without presupposing that they somehow operate through final causes. Now we see that this apparent dilemma must have a solution, even if Kant despaired of our ability to solve it. The crux that has tormented philosophers since the early modern period is how to make sense of teleology in organisms after Newton's mechanics definitively dispensed with final causes in physical science. The question is still open today whether Darwin can be said to have done the same for the science of living beings. Darwin's discovery of natural selection certainly made it unnecessary to explain the diversity of species by appeal to the creative acts of a divine intellect. But even if we totally banish – as both authors of this essay do – divine intellect from any explanation of both the origin and diversity of organisms, this is not the same as eliminating teleology or teleonomy from their ontology. After all, teleology – or teleonomy – seems to be a real property of organisms. We use these two terms interchangeably to refer to the property of "behaving toward an end" (as Kant did); we do not use them to distinguish between two explanations of end-directedness (as Mayr 1974 proposed). "End" is not merely an endpoint of movement or just any effect, but an effect that is the cause, or part of the cause, of why the organism is configured as it is (see this volume, Sects. 5.2 and 8.4). Our main purpose is to argue that teleonomy is compatible with the type of chemical origins that scientists are currently exploring. We see this question as the fundamental one regarding the origin and nature of life. We argue that teleology or teleonomy, as some prefer to say (Mayr 1974), is real and that it is an emergent property. A property is emergent if it depends on entities that do not themselves have this property but nonetheless can explain, when organized in a suitable way, its existence, i.e., its emergence. The property would not exist were it not for the way those lower level entities are organized. The property, however, is primitive and fundamental because its causal behavior differs from how causes work at lower levels (Barnes 2013). Particularly important for our view is that emergence can happen gradually instead of abruptly, as the items at the lower level progressively adopt the kind of organization necessary for its emergence. This is particularly visible in the emergence of life and teleonomy. Bedau (2012), for example, argued that the life/non-life distinction is not a dichotomy, but lies along a continuous scale, illustrated by the hypothetical roadmap leading from inert matter to proto-cells.
Life emerged gradually, step by step, in a process that accumulated and passed forward organized complexity. Because teleonomy is a property of life, this
view inevitably entails that teleonomy arose gradually as well. And since, as we shall see, cooperation between the parts of a complex organization is crucial to understanding teleonomy as a property of life, we shall also say that cooperation arose gradually from the chemical association between molecules. In such a view, vagueness is of the essence. We cannot identify with precision a boundary before which no life, no teleonomy, and no cooperation exist, and immediately after which teleonomy, life and cooperation are definitely there. As always with vagueness, such division points – as between being hairy and being bald – do not exist, although a particular head can be definitely bald and another definitely hairy. In the same way, we identify some real or hypothetical structures as definitively teleological and others – real or hypothetical – as not teleological. Any living organism is teleological, but replicating RNA molecules in a lab are not. RNA molecules are autocatalytic, a property that strikes us as a chemical ancestor of biological properties like replicability, fundamental for making sense of other properties like fitness and evolution by natural selection (see below). Nonetheless, we cannot identify with precision a clear boundary between these two states of matter. Some entities, e.g., viruses, will occupy a grey zone. It is important to base our argument on plausible chemical hypotheses. At this stage of scientific knowledge, scientists and philosophers tell different hypothetical stories about the origins of life (Moreno and Mossio 2015, chap. 5). But the stories partially converge. For our purposes here, it does not matter which story we choose, as long as it illustrates the theoretical issues that arise in all of them. Our guiding thread will be the story told in Addy Pross' 2012 book What Is Life?, which generously strives for non-technicality and generality. Pross implicitly adopts an organizational account of living beings: "The problem with the synthesis of a living system is not one of material, but… one of organization" (p. 180). The organizational perspective, developed recently in detail by several researchers (Mossio et al. 2009; Saborido et al. 2011; Nunes-Neto et al. 2014; Moreno and Mossio 2015), allows us to understand more precisely the natural basis of the normative and teleological aspects of living beings. The story told by Pross is standard in many respects. Chemistry, the science of matter reacting and bonding in compounds, builds the bridge between the science of inert matter – physics – and the science of living matter – biology. We will certainly not understand how life arose until we can work out lab-supported theoretical accounts of the processes that led from autocatalytic chemical reactions to an entity that has both a primitive metabolism and the ability to replicate itself. Also, the emergence of teleology is fundamental in the transition from chemistry to biology. Is teleology a real property of organisms? Pross, for example, seems to adopt an undecided stance on this issue. We find in his book passages like this one:

Intriguingly, despite the irrefutable teleonomic character of living systems, some biologists still have difficulty in coming to terms with that extraordinary character. The troublesome 'purpose' word, now sanitized and repackaged into the scientifically acceptable 'teleonomy' word, still leaves many modern biologists squirming uncomfortably… But there is no denying the teleonomic principle (Pross 2012, p. 14). … The living world screams out teleonomy no matter where you look (Pross 2012, p. 17).
But we also find this passage: "… once a replicator has taken on an energy-gathering capability … we would interpret and understand its subsequent behavior as purposeful … In the replicating world, kinetics is the dominant directive and so actions in that world appear purposeful" (Pross 2012, pp. 176–177; emphases in the original).
Are the words "we would interpret … as purposeful" and "appear purposeful" meant to deny real purpose in living beings? They could be. Such choice of words is open to ambiguity. Nonetheless, real purposefulness need not imply consciously pursuing a goal. It does mean, instead, that matter is organized in living beings in such a way as to causally reinforce any behavior or trait that achieves and improves self-maintenance and – in living organisms – adaptation and replication as well (see this volume, Sect. 8.4). A living being is a being that behaves in order to reproduce its activity and organization. Life as we know it is not understandable unless constituted by these functions and goals. In Kant's famous words, living beings are "natural ends," even though – as Kant was painfully aware – this label implies an apparently impossible conciliation of the mechanisms of inert matter with the purposefulness of intentional action. For us, purposefulness in organisms is real: it is an emergent property of matter when it is arranged and organized as it is in living beings. This brings us directly to the philosophical issues. Both authors of this essay are philosophers. We take as given the broad lines of the consensus that systems chemists have built around the theory of prebiotic chemistry. In this consensus it is important to pay attention to replication chemistry, autocatalytic and cross-catalytic reactions, and to far-from-thermodynamic-equilibrium systems, systems that remain in a state of constant reactivity as long as energy is flowing into them from the outside. Our purpose here is to link these phenomena to a view of the gradual emergence of life. The idea that several levels of organization link the basic constituents of matter to the higher level properties of biological beings is both scientific and philosophical. We agree with Pross that "the reductionist-holistic divide is more semantic than substantive and that holism, when probed more deeply, can be thought of as just a more elaborate form of reduction" (p. 53). This means, for us emergentists, that we need to accept that any new fundamental properties arising at any level depend on the organization of the immediate lower level and that we can, as it were, observe the gradual emergence of the higher level properties through their precursors at that lower level; however, as emergent, they arise from a novelty-producing interaction and cannot be determined or described by any one of their precursors, or by their simple sum, alone (Morgan 1923; Barnes 2013; Morales 2018). The difficulty lies in accepting genuine emergence while at the same time giving the processes occurring at the lower level some explanatory value and viewing some of the lower level properties in continuity with the emerging properties at the level immediately above. Some understanding must be available for how the properties of life arise from the properties of the chemistry of replicating molecules, or perhaps more generally and accurately, from the more general properties of autocatalytic and cross-catalytic
reactions. We shall be mainly concerned here with what this continuity looks like and how it suggests the gradual emergence of life and teleology.
6.2 The Ahistorical Approach to the Origins of Life

What Is Life? (Pross 2012) is a book written by a chemist with a flair for philosophy, who builds his hypotheses along two guiding ideas: firstly, that there is a fundamental difference to be drawn between the historical details concerning the particular substances involved in the beginnings of life on Earth, on the one hand, and the "ahistorical" theory of the type of natural processes with the potential to solve the origins-of-life issue, on the other; and secondly, that there is a continuity to be described and elaborated between chemistry and Darwin's theory of evolution by natural selection. Let us say a little more about these two guiding ideas, beginning with the first one. Of course, since we are concerned with a continuous historical transition from a purely chemical to an emergent biological world, the term "ahistorical" might be misleading. It only means that the story must abstract from the precise chemical identity of the molecules at the origins and deal instead with types of molecules and processes. Research into the origins of life has invested some effort in establishing the actual substances involved in the origins. Since the groundbreaking experiments by Stanley Miller and Harold Urey, who gave birth to prebiotic chemistry¹ by synthesizing amino acids in the lab from a mixture of hydrogen, ammonia, methane, and water vapor, it has become gradually clear that the probability of making accurate hypotheses about prebiotic Earth is low. Conditions of prebiotic "Earth" can mean very different things, a diversity also present when we refer to the conditions of today's Earth:

Are we speaking about the conditions within an erupting volcano, under the arctic ice shelf, at the bottom of the ocean, in a hydrothermal vent, in the hot sands of the Sahara desert, in freshwater lagoon, or in a number of other totally different locations? (Pross 2012, p. 95)
But even if scientists in the near future managed to synthesize all or most of the biomolecules present in the conditions of prebiotic Earth, there would still be a deeper problem: how, through which processes, did these components come together in the precise network of biochemical reactions that is characteristic of life? More precisely, understanding what life is should enable us to build it in the right way from its components: "What I do not understand, I cannot create" (Pross 2012, p. 101). Or, more properly: unless we can create life, we do not really understand it.²
¹ Although, in fairness, it should be said that the birth of prebiotic chemistry lies in the observations and experiments by Oparin and his disciples in the 1920s concerning the unusual chemical reactions happening within liquid drops in a colloid (Lazcano-Araujo 1989; Brangwynne and Hyman 2012).
² We are making the point that understanding life entails being able to produce it. Apparently, Pross quotes Richard Feynman from memory but fails to properly convey the key idea. We thank an anonymous reviewer who confirmed that our reversal of the quote faithfully represents Feynman's thought.
In the hypothetical case in which science did find some substances able to make the transition from inert to living matter, it would be irrelevant whether those substances were the same particular ones that gave origin to life on Earth. We would still gain valuable insight into the type of processes and reactions that plausibly played a role in the origins. This point leads directly to what Pross calls the "ahistorical" approach:

The real challenge is to decipher the ahistorical principles behind the emergence of life, i.e., to understand why matter of any kind would tend to complexify in the biological direction. It is this ahistorical question, independent of time and place, which lies at the heart of the origin of life question… What laws of physics and chemistry could explain the emergence of highly complex, dynamic, teleonomic, and far-from-equilibrium chemical systems that we term life? (Pross 2012, p. 100; emphases added)
6.3 Biology in a Continuum with Chemistry

The second guiding idea – the continuity between chemistry and biology – is directly linked to the need for clarity regarding the chemical processes that could explain the emergence of complex teleonomic systems. Pross expresses this idea in a way that sounds overly reductionist: "Biology is just chemistry or to be more precise, a sub-branch of chemistry: replicative chemistry" (Pross 2012, p. 122). "Biology then is just a particularly complex kind of replicative chemistry …" (Pross 2012, p. 163). But since we believe in the gradual emergence of new fundamental properties, we take the idea of continuity as expressing that there is something in "replicative chemistry" that already announces and prefigures the nature of living things and their properties. To be able to identify these chemical antecedents of biological entities, chemical reactions must be examined with the eyes of a biologist, and particularly of a biologist who understands evolution by natural selection. The continuity between replicative chemistry and Darwin's theory of evolution by natural selection actually picks up a suggestion by Dawkins in The Selfish Gene, according to which natural selection in the biological domain follows a more general rule also applicable to inert matter: the selection of the most stable. This more general rule takes the specific form of the Second Law of Thermodynamics, according to which all closed physicochemical systems tend in time to evolve into the state in which there is no more free energy left for further change to take place. The state with no free energy available is the most stable state of the system. Any further change would have to draw energy from outside the system into it. However, it is not possible to explore the selection of the stable without making a distinction between two types of stability. According to the first type of stability, living systems appear to be an anomaly: they are stable despite being in a far-from-equilibrium state (Schrödinger 1944; Nicolis and Prigogine 1977; Horowitz and England 2017). They maintain themselves in this far-from-equilibrium state by drawing boundaries between themselves and the environment and pumping energy from the outside in. Because they draw energy from the surrounding environment – thus
increasing its entropy – they do not violate the Second Law of Thermodynamics. But the difficulty lies in understanding how they came to exist and maintain themselves by natural processes alone, without the intervention of an intelligent designer or engineer. Pross presents here an idea that provides the beginning of a solution: the stability of autocatalytic chemical reactions exemplifies a second type of stability. Autocatalytic reactions are those whose product is, at the same time, an agent catalyzing the very same reaction that produces it. This accounts for the fact that the system stably endures in a state of constant reactivity, that is, in a stable far-from-equilibrium state, at least as long as the materials necessary for the chemical reactions are readily available.
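To make this second kind of stability concrete, the following is a minimal numerical sketch of our own; it is not taken from Pross, and the reaction scheme X + S → 2X, the decay step, and all rate constants are illustrative assumptions. The point is only that an autocatalyst persists in constant turnover as long as substrate flows in, and runs down when the inflow stops.

```python
# Illustrative toy model only: autocatalyst X converts substrate S into more X
# (X + S -> 2X) and decays (X -> waste). With a steady inflow of S the system
# settles into constant turnover; with no inflow it runs down. All numbers are
# arbitrary choices made for the sake of the illustration.

def simulate(inflow, steps=20000, dt=0.01, k=1.0, d=0.3):
    x, s = 0.01, 1.0                 # initial concentrations (arbitrary units)
    for _ in range(steps):
        replication = k * x * s      # autocatalytic step: X + S -> 2X
        decay = d * x                # X -> waste
        x += dt * (replication - decay)
        s += dt * (inflow - replication)
        s = max(s, 0.0)
    return x, s

for inflow in (0.2, 0.0):
    x, s = simulate(inflow)
    print(f"substrate inflow = {inflow}:  X = {x:.3f},  S = {s:.3f}")
```

With inflow, X and S settle at values where production and decay balance – a crude picture of stable, constant reactivity; without inflow, X eventually disappears, as any replicator must that merely depletes its resources.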
6.4 The Peculiar Stability of Autocatalytic Reactions

Stability in the usual thermodynamic sense is reached in a chemical system when it is no longer able to react because no more free energy is available within it. In the world of replicating systems however, a system is stable… if it does react – to make more of itself, and those replicating entities that are more reactive, in that they are better at making more of themselves, are more stable… greater stability is associated with greater reactivity (Pross 2012, p. 73).
This new type of stability receives the name of dynamic kinetic stability. It is dynamic because what is stable is not the components but only the process that uses any tokens of molecules belonging to a specific type; tokens that are constantly turning over while the process, i.e., the chemical reaction or network of reactions, remains the same. It is kinetic because the stability is measured by the difference between the rate of replication and the rate of decay: the faster the replication and the slower the decay, the more stable the process and the material structure (type, not token) that sustains it. Also, replicating molecules can evolve: faster-replicating RNA molecules replace the slower-replicating ones over time, as happened in Spiegelman's classic experiment (Mills et al. 1967). But the reaction is not self-sustaining unless a constant supply of activated nucleotide bases is available and unless energy is being pumped into the system. And for this to happen, other reactions are required, in a network that ultimately must include metabolic reactions that provide energy to the system by purloining it from the surrounding environment. "Systems chemistry can lead to the smooth merging of living and non-living systems" (Pross 2012, p. 125). The fundamental idea in the continuity hypothesis is that the chemical process that led from non-life to simple life-forms was analogous to Darwinian evolution. Simple life is a system that has "the ability to replicate itself and evolve in a self-sustained way" (Pross 2012, p. 126, emphasis added). Something analogous to Darwinian evolution contains the "principles that would have enabled inanimate matter to complexify in the biological direction toward life" (Pross 2012, p. 126). We shall see in the next section, however, that mere molecular replicators
are still a couple of steps behind those entities with the ability to evolve toward further complexity. Pross presents a new area of chemistry called systems chemistry. It studies replicative chemistry and demonstrates how biological concepts can be applied there. Systems chemistry's characteristic achievements, as described by Pross, are briefly summarized here. We first have to mention Spiegelman's feat about 50 years ago. In a now classic experiment (Mills et al. 1967), he got RNA to replicate in a test tube with an enzyme. Günter von Kiedrowski did it 20 years later without the enzyme (Pross 2012, p. 69). In the late 1970s and early 1980s Sidney Altman and Thomas Cech discovered that RNA functions both as a carrier of genetic information and as an enzyme (Pross 2012, p. 105), providing support for the hypothesis of an RNA world. About 10 years ago, researchers in John Sutherland's lab synthesized RNA nucleotides from prebiotic starting materials (Powner et al. 2009), silencing the strongest objections against the RNA-world hypothesis. Spiegelman's lab (Mills et al. 1967) also produced experimental proof that an RNA replicating molecule evolves toward RNA chains that are both shorter and replicate faster. Shorter chains were selected because they had the advantage of faster replication compared to the longer chains. This gives proof of a simple version of fitness and of natural selection at the chemical level, for these molecules are not living entities (not self-sustaining or autopoietic). Gerald Joyce reported a more complex version of chemical evolution (Voytek and Joyce 2009). In his experiments, two different RNA molecules were allowed to replicate in the presence of substrate x. They were unable to co-exist: one drove the other to extinction. In the presence of a different substrate y, however, winner and loser exchanged roles. In the presence of five different substrates they coexisted, exploiting all substrates differentially, until each evolved to exploit a different substrate. They, therefore, evolved according to Darwin's competitive exclusion principle: "Ecological differentiation is the necessary condition for co-existence" (Pross 2012, p. 128). Even more impressive was Joyce's discovery that a network of two discrete RNA molecules replicates as a team around 20 times faster than a single RNA molecule (Lincoln and Joyce 2009). In this two-molecule system, each molecule was not making copies of itself but facilitating the assembly of the other in a process known to chemists as cross-catalysis. The system replicated itself as a whole due to cooperation between the two discrete molecules. For systems chemists, the interpretation of chemical phenomena with biological concepts points to a more fundamental fact: the concepts developed by evolutionary biologists are also applicable in systems chemistry, simply because molecular replication, the phenomenon studied by this new discipline, antecedes biological replication, i.e., it existed before life appeared on Earth (in fairness, this is an important – though plausible – presupposition in Pross' speculations: that what the last 50 years of molecular replication experiments have shown to be possible, in fact did occur, probably with different types of molecules, in the Earth's prebiotic chemistry). For some chemists, the application of those concepts at the lower level has explanatory primacy (Pross 2012, p. 137); and if there is continuity between chemistry and biology, it is because biology is just a special case of phenomena that made their first appearance on Earth as chemical phenomena, although in the history of
science the chemical phenomena were discovered later than the more familiar biological ones. We now see the full meaning of the words quoted above: “Biology then is just a particularly complex kind of replicative chemistry…” On this particular point we only partially agree with Pross because we believe that genuine novelty, beyond the phenomena studied by replicative chemistry, does emerge at the biological level. We shall propose a plausible view: that the emergent properties at the biological level, in particular teleonomy, arise from the moment when cooperation gets entrenched in the internal organization of individual “transition” entities.
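Before turning to cooperation, the notion of dynamic kinetic stability and the "selection of the kinetically more stable" at work in the experiments just described can be summarized in a standard textbook form. The notation below is ours, not Pross's, and it treats the pool of activated monomers M as approximately constant – a simplifying assumption:

```latex
% Illustrative notation of our own; X_i is the concentration of replicator i,
% k_i its replication rate constant, d_i its decay rate constant, and M the
% (assumed roughly constant) pool of activated monomers.
\[
  \frac{dX_i}{dt} = (k_i M - d_i)\,X_i ,
  \qquad
  \frac{dx_i}{dt} = \bigl[(k_i M - d_i) - \bar{\phi}\bigr]\,x_i ,
  \qquad
  \bar{\phi} = \sum_j (k_j M - d_j)\,x_j ,
\]
```

where x_i = X_i / Σ_j X_j is the relative frequency of replicator i. A replicator persists only while its replication outpaces its decay (k_i M > d_i), and of two replicators drawing on the same monomer pool, the one with the larger k_i M − d_i – in Spiegelman-type experiments, typically the shorter, faster-replicating template – comes to dominate. In this simple sense, the equations capture the chemical analogue of fitness and natural selection invoked above.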
6.5 Complexification and Cooperation

Pross suggests that there is a general theory of evolution, "applicable to both chemical replicating systems as well as biological ones" (Pross 2012, p. 153). In this general theory, the relevant causal sequence leading to evolved organisms consists in the following steps: "replication, mutation, complexification, selection." Complexification refers to the fact that replicators establish increasingly complex chemical networks of reactions as a means to enhancing their dynamic kinetic stability (the chemical equivalent of biological fitness). Replicator molecules gain in dynamic kinetic stability if they get better at maintaining themselves in a state of reactivity, i.e., in a far-from-equilibrium state. They achieve this by incorporating other molecules into the initial autocatalytic reaction. Ultimately, what needs to be incorporated is some energy-gathering capability that will provide energy to the whole network. Without this capability, a replication reaction would not be self-sustaining and would stop at some point. The reason is as follows: in order for template replication to carry on, the building blocks that bind to the template must also bind to each other. This second binding, however, occurs in a chemical reaction for which some extra energy is required. In other words, the building blocks must be in a high-energy state: in chemical parlance they must be activated. A replicator must be able to activate those building blocks that are in a low-energy state, for most of them will be in such a state (Pross 2012, p. 157). For this reason, in order to be able to replicate in a self-sustained way, replicating molecules need to possess an energy-gathering system. Pross concludes: "In fact, the moment that some non-metabolic (downhill) replicator acquired an energy-gathering capability, could be thought of as the moment that life began" (Pross 2012, p. 158, emphasis in the original). The assumption that ancestral replicators will tend to build networks of reactions with other types of molecules (and not merely with other replicators) leans on the fact that, once formed, molecular replicators would tend to interact with other available molecules. If those molecules happened to improve the dynamic kinetic stability of the replicator, then even a fortuitous association between them, if it affords mutual benefits, would tend to strengthen the bond by increasing the frequency of the associates. Gerald Joyce's experiment of a network of two replicating RNA molecules that replicate as a team via cross-catalysis is a demonstration of this
tendency. In the path toward complexification, it is crucial that associations between molecules have mutual benefits. A replicator that only depletes the source of its monomers, without doing anything to regenerate it, will disappear in the end. And so, in the process of complexification, the really important addition to the replicator molecule would be an enzyme capable of catalyzing the production of activated monomers from readily available inputs to maintain the far-from-equilibrium reactivity of the replicators. Systems chemists can currently picture two main paths by which this could have happened. The first path is for the replicator to associate with an already existing, relatively simple metabolic autocatalytic cycle. These cycles are pictured as a network of molecules, say A, B, C, and D, where A catalyzes the formation of B, B of C, and so forth, ending with D catalyzing the formation of A, so that the system achieves closure and the entire cycle becomes autocatalytic (Kauffman 1995, p. 49). Of course, the cycle of reactions must also make some energy available in order to count as a primitive metabolic cycle. The existence of such autocatalytic cycles requires proof, meaning that they should actually be synthesized in the lab. The proof has not yet been produced, and it seems illogical – so claims Pross – to expect the spontaneous formation of such cycles, for they seem to contradict the Second Law of Thermodynamics. The second path is much simpler. It would consist in the original replicator molecule undergoing a mutation that would allow it to perform a primitive form of photosynthesis to capture energy from light. If that could happen, it seems intuitive enough that a replicator molecule with this capability would be able to out-compete a molecule lacking it. This was confirmed in a theoretical simulation (Wagner et al. 2010); but the reader can confidently infer that such a photosynthetic replicator molecule has not actually been produced in the lab. Chemists currently investigating the origins of life debate which of the two following scenarios is more plausible: the "replicator first" or the "metabolism first" scenario. There is a notorious problem with both scenarios. Both the formation of self-sustaining replicator molecules and that of simple autocatalytic metabolic cycles seem to fly in the face of the Second Law of Thermodynamics. A reasonable alternative is that life began when both scenarios met and the corresponding molecules cooperated. It seems clear that a replicator molecule that would last only a few rounds of replication would actually benefit from association with an energy-gathering molecule. Pross points in the same direction (Pross 2012, pp. 158–159). But we should add that the replicator must, reciprocally, provide some benefit to the metabolites, for example, by facilitating their synthesis. Or else this partnership will not last. Although any molecule will have a tendency to associate proportional to its degree of reactivity, some molecules presumably possess a stronger such tendency than others. We can accordingly infer that replicators mutating in a way that increases their tendency to association would have an advantage over others less prone to association. Since the tendency to associate with others is necessary for complexification to take place, a higher proclivity to association enhances the tendency to complexification. However, association alone cannot drive the process toward further complexity.
As pointed out above, benefits for the associated molecules must be forthcoming from the association. If we add the benefits to the
chemical proclivity to association, we get a new type of interaction that is also familiar in evolutionary biology: cooperation. The tendency to cooperate is ubiquitous in the biological world (Maynard Smith and Szathmáry 1995). It is in fact this tendency that accounts for the evolution of multi-cellular organisms – an important milestone in the complexification of life. We believe, with Maynard-Smith and Szathmáry (Maynard Smith and Szathmáry 1995, chap. 4), that this phenomenon must be extended to the interactions between replicators, but, beyond them, also to their interactions with other types of molecules (Gánti 2003; Rasmussen et al. 2008; Ruiz-Mirazo and Moreno 2004). It seems to us that any speculation about the origins of life should include a discussion of the chemical basis for association and its benefits. Without mutually beneficial associations between different types of molecules, a tendency to complexification and therefore a transition from merely chemical, short-lived replicators to living organisms would not take place. In the beginnings, in order to evolve toward enhanced forms of reactivity, the ancestral replicators must have evolved into the type of molecules that possessed an increased tendency to associate with others. If some of these associations were instrumental in building autocatalytic and cross-catalytic networks that led to enhanced replication of all associates, then the path to complexification was facilitated by mutually beneficial associations that increased the frequency of the associates and the association. They improved their stability in the sense of dynamic kinetic stability. Spiegelman’s lab experiments (Mills et al. 1967) with RNA showed that RNA chains can replicate and undergo some evolution. In those scenarios evolution is very limited: complexification does not seem to be a plausible outcome of an evolutionary process that involves replicators alone. It is limited to the evolution toward faster and shorter replication chains. In Spiegelman’s experiment, RNA chains evolved from about 4000 bases to chains containing only 218 bases. Years later Oehlenschläger and Eigen (1997) discovered that they could evolve further into faster replicating chains containing only 48 bases. In order for evolution to proceed toward greater complexity, more complexity must emerge first. How does complexity emerge? It emerges through the association of replicators with other types of molecules, e.g., with those providing proto-metabolism. In this sense, natural selection of molecular replicators alone cannot drive the process toward further complexification. Matter must first find its way to build structures with more complex organization (Ruiz-Mirazo 2011; Ruiz-Mirazo and Mavelli 2008; Ruiz-Mirazo and Moreno 2004; Moreno and Mossio 2015, Chap. 5). There is scientific consensus that, beyond molecular replicators, the two most likely candidates that need to step into the process before complexity can evolve further are membranes and closed metabolic cycles. The latter two, combined with replicators, build the hypothetical “protocells” (Rasmussen et al. 2008) or “chemotons” (Gánti 2003). Plausibly, the association would have happened first between any two of these three structures. Presumably, the association between them happens via some sort of chemical bonding. But bonding is not likely to last or propagate unless replicators, membranes, and metabolic cycles gain some benefit in virtue of the association. 
“Benefit” means simply an increase in the occurrence of the dyad
and the molecular structures composing it, e.g., replicators and metabolic cycles. We emphasize this meaning of “benefit” to highlight that this humble “hard fact” – an increase in numbers effected by the association – contains the seed of an emerging reality that leads to life, end-directedness and value. If the association instantiates some form of cross-catalysis as in Joyce’s experiment with RNA chains (Lincoln and Joyce 2009), it would surely bring an increase in the occurrence of the associated structures. Since different tokens of those structures were likely undergoing bonds with other molecules, as soon as a benefit in the above sense arises from one of these bonds, it tends to last simply because the bond, in virtue of instantiating some form of cross-catalysis, propagates its own occurrence. It multiplies its presence in the prebiotic world. This is also evolution by natural selection, but it could not have happened without “cooperation” between molecules, which brought benefits for the participants and growth in complexity. Notice that at this point, because of the benefit falling on the different components of a network, it seems legitimate to speak of cooperation between them. But this really depends on whether the bond is long lasting, which in turn depends not only on whether it can resist attacks and disruption by neighboring molecules but also on whether its benefits outcompete the benefits possibly afforded by other such bonds. Suppose that it survives both tests. In time, a third molecule can undergo a bond with the successfully bonded dyad. If this bond happens to benefit the dyad, and the dyad reciprocally benefits the third molecule, then the thought starts to gain traction that the benefits reinforcing the association actually drive the process of complexification. The traction really depends on whether the so-called “benefits” are not just a temporary episode, but become a structural element of the complexification process. If further similar events of bonding between molecular structures, where benefits arise for the participants, continue taking place, such that they build an ever more complex network of mutually supportive reactions – in closed autocatalytic cycles that harness potential energy from the environment – there comes a point where we must acknowledge that the benefits – and thus cooperation between molecules – are not just a temporary episode, but a structural element of the complexification process. Molecules are really cooperating in an organization that is self-producing or autopoietic. All the parts cooperate with each other to sustain each other in a network. The network perpetuates its organization, firstly, by reproducing itself constantly through time, i.e., by incorporating different token molecules of the molecular types required by the autopoietic entity. And secondly, more copies of itself or of similar networked reactions can arise in a spontaneous manner. So at this point we have to say that cooperation between molecules has emerged in a process analogous to the cross-catalysis between RNA molecules observed in the lab (Lincoln and Joyce 2009). Cooperation between molecules means that they are working together to support their existence by making the occurrence of the participating structures more frequent or more enduring, or both. 
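The logic of this reciprocal benefit can be caricatured numerically. The toy model below is our own construction, not a model from Pross or from the experimental literature; the species names (a template replicator R, a partner catalyst E that regenerates activated monomers M from an assumed unlimited raw pool) and all rate constants are illustrative assumptions.

```python
# Illustrative toy model only. R replicates by consuming activated monomers M
# and decays; E regenerates M from an (assumed unlimited) raw pool and is
# itself synthesised in a reaction catalysed by R -- the reciprocal benefit.
# All rate constants are arbitrary.

def simulate(coupled, steps=2500, dt=0.01,
             k=1.0,    # replication: R + M -> 2R
             d=0.5,    # decay of R
             c=0.5,    # R-catalysed synthesis of E (the replicator's "payment")
             q=0.2,    # decay of E
             a=1.0,    # E-catalysed regeneration of activated monomers
             u=0.1):   # spontaneous deactivation of monomers
    r, e, m = 0.01, (0.01 if coupled else 0.0), 1.0
    for _ in range(steps):
        dr = k * r * m - d * r
        de = (c * r - q * e) if coupled else 0.0
        dm = a * e - k * r * m - u * m
        r, e, m = r + dt * dr, e + dt * de, m + dt * dm
    return r, e, m

for coupled in (False, True):
    r, e, m = simulate(coupled)
    label = "replicator + metabolic partner" if coupled else "replicator alone"
    print(f"{label:>30}:  R={r:.4f}  E={e:.4f}  M={m:.4f}")
```

With these arbitrary settings, the lone replicator burns through its monomer pool and dwindles, while the coupled pair keeps increasing the occurrence of both partners – "benefit" in exactly the humble sense used above.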
It is clear that when we say that molecules cooperate we are attributing cooperative behavior, not cooperative "intentions." The behavior of the molecules goes beyond mere association because, when the association produces a benefit, this benefit feeds back into the behavior itself to increase its
occurrence: it produces more of the cooperating items and more of the cooperating interaction that keeps on reinforcing the process. This cooperative behavior is reinforced in a general way as long as the cross-catalytic wholes continue to evolve and complexify. This is what lends traction to the thought that a novel behavior has emerged, a behavior that goes beyond mere chemical association and deserves the label “cooperative.”
6.6 Cooperation and Teleonomy

To say that molecules cooperate makes sense only if they cooperate in order to bring something about. If used properly, the term "cooperation" implies a goal: in this case the goal is to exist further as parts of an organized, self-maintaining whole. The organized whole is constantly reproducing itself and its activity by drawing energy from the environment, and dynamically replacing its token molecules with others of the same type. The whole and the parts – the types, not the tokens – are reciprocally causes and effects of each other, as Kant famously expressed it (Kant 2000, § 65; for a contemporary version of Kant's view, see also this volume, Sect. 5.2 and the references given there). As talk of cooperation gains traction to describe the interaction between the parts, so does talk of processes oriented toward a goal. Every improvement to the self-maintenance of the system feeds back to retain, and promote, the cause of the improvement. The self-maintenance of the system is driving the process of complexification, functioning as a goal toward which the process tends.

The question of how purpose and function can manifest themselves spontaneously is a profoundly important scientific question and its resolution would help connect chemistry, representing the objective material (inert) world with biology, representing the teleonomic world (Pross 2012, p. 117).
We could add to this quote that it would help connect the natural with the human sciences as well (Rosas 2010). The fact that cooperation among molecules signals the beginnings of a realm of biological entities, coupled with the ability of biological evolution to explain human cooperation and moral attitudes (see this volume, Chap. 7), suggests a deep connection between those humble beginnings on the one hand, and the mechanisms holding our societies together on the other. This bridge between the natural and the human sciences is already there once researchers looking into the origins of life use terms like "cooperation", "cheating," or "selfishness" (Czárán et al. 2015; Levin and West 2017). Parasitic relations are also possible between molecules. Parasites, free-riders, or cheats have always been a concern for research on the evolution of cooperation (e.g., Trivers 1971). This concern appears also in connection with the origins of life. It shows that sociality, a phenomenon broader than cooperation, is already emerging in the processes that scientists envisage as immediately preceding the first proto-cells. A good example is the model proposed by Czárán et al. (2015): the metabolically coupled replicator system (MCRS). This model was proposed to overcome the error
catastrophe problem pointed out by Eigen and Schuster (1979). Replicators must consist of long chains of bases to be interesting, but in times anteceding the evolution of a reliable replicase, long replicators were prone to catastrophic errors that made that particular evolution unlikely. Eigen and Schuster proposed a community of short-sequence (below the error threshold) replicators – the hypercycle – that cooperate to achieve both ends. To avoid competition between the different short-sequence replicators, these were conceived as helping not themselves but their neighbors downstream, with the last member helping the first, a sort of chained cross-catalysis closed on itself. But subsequent theoretical research has shown that the hypercycle is not evolutionarily stable: it is prey to selfish parasites that help themselves instead of helping the neighbors downstream (Maynard Smith 1979; Boerlijst and Hogeweg 1991). The MCRS is a variation of the hypercycle model with the following characteristics. It is a mutualistic association between an RNA replicase and one or several ribozymes that catalyze the production of activated monomers (nucleotides) from geochemically supplied inputs. The RNA replicase makes copies of the ribozyme, which, in turn, provides monomers for template RNA replication (Czárán et al. 2015, p. 42). The RNA replicase is actually a set of replicator molecules, each of which is necessary for the production of the ribozymes, which catalyze the production of monomers. This model meets a dead end (goes extinct), but a spatial version of the model works. The community of replicators in the abstract model disappears as the fastest molecular replicator invades and drives all other replicators to extinction, putting an end to the diversity and, thus, to the possibility of solving the error catastrophe problem. The spatial version of the MCRS offers a solution. In the spatial version – both when neighborhoods lie on a two-dimensional lattice and when neighborhoods are compartmentalized into adjacent pores (mineral surfaces where monomers were presumably produced have a porous structure) – the competition between replicators does not happen as in a well-mixed population but resembles rather the sort of interactions that happen in a structured and viscous population: RNA molecules interact more with their close "kin" than with an "average" individual. In this structured habitat, it is not as easy for fast replicators to drive slow replicators to extinction. Also, if some replicators are selfish free-riders and do not cooperate to sustain their community, the damage they produce reverts to them in the end. As they cause the collapse of their community, they cause their own disappearance before having the chance to "infect" other neighborhoods, due to the viscosity (low mobility) of the population.
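To convey the flavor of the spatial argument, here is a deliberately crude toy of our own; it is not the Czárán et al. (2015) model, and the grid size, rates, and the cooperator/parasite rules are all invented for the illustration. Cooperators add to the local monomer supply that any neighbor can use; parasites exploit that supply at a higher rate but contribute nothing; offspring are placed either next to the parent ("viscous" habitat) or anywhere on the grid ("well-mixed").

```python
import random

# Toy lattice sketch, not the MCRS itself: 'C' = cooperator replicator,
# 'P' = parasite, None = empty site. Cooperators supply "support" (monomers)
# to their neighborhood; parasites replicate faster per unit of support but
# supply nothing. In the viscous mode offspring land on a neighboring site,
# so a parasite's exploitation stays confined to its own neighborhood; in the
# well-mixed mode offspring can land anywhere. All parameters are arbitrary.

SIZE, STEPS, DEATH = 30, 200, 0.05

def neighbours(i, j):
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def run(viscous, seed=1):
    random.seed(seed)
    grid = [[random.choice(['C', 'P', None, None]) for _ in range(SIZE)]
            for _ in range(SIZE)]
    for _ in range(STEPS):
        coop_density = sum(row.count('C') for row in grid) / (SIZE * SIZE)
        for i in range(SIZE):
            for j in range(SIZE):
                cell = grid[i][j]
                if cell is None:
                    continue
                if random.random() < DEATH:          # decay of the replicator
                    grid[i][j] = None
                    continue
                nbrs = neighbours(i, j)
                if viscous:                          # support comes from neighbors
                    support = sum(grid[x][y] == 'C' for x, y in nbrs)
                else:                                # mean-field support
                    support = 8 * coop_density
                rate = (0.05 if cell == 'C' else 0.08) * support
                if random.random() < rate:
                    x, y = random.choice(nbrs) if viscous else (
                        random.randrange(SIZE), random.randrange(SIZE))
                    if grid[x][y] is None:           # colonize only empty sites
                        grid[x][y] = cell
    flat = [c for row in grid for c in row]
    return flat.count('C'), flat.count('P')

for viscous in (True, False):
    c, p = run(viscous)
    mode = "viscous (local)" if viscous else "well-mixed"
    print(f"{mode:>16}: cooperators = {c:4d}, parasites = {p:4d}")
```

Nothing here is meant as evidence; the sketch only makes the structural point of the passage above visible: when support and dispersal are local, a parasite that undermines its cooperator neighbors mostly undermines its own future, whereas in the well-mixed variant the exploitation is spread over the whole population.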
We claimed above that "social" labels are legitimate because at some point in time it becomes obvious that self-maintenance of an organized network of reactions is driving a natural process as a final cause. To legitimately assert that the goal of self-maintenance is driving the process, cooperation between parts cannot be just a transient episode but must be long-lasting enough for it to be considered structural to the process that reproduces itself with ever improving efficiency. Perhaps the line of descent of these organized systems broke several times at the early beginnings. It might have seemed appropriate to say at that time that the system appeared to "have an agenda"³: self-maintenance. In particular, it only appeared so if the heritability of this agenda was weak, reached a dead end, and had to begin again. When the internal cooperation of the elements in the network becomes so entrenched that heritability is above chance – something that certainly happens when the networks of chemical reactions incorporate regulatory mechanisms (Moreno and Mossio 2015, p. 33 ff.) – then we can say that the goal of self-maintenance through cooperation between the elements of the internal network emerges. This new level of organization, which also builds up gradually, signals the appearance of the living being as an agent "with an agenda." All subsequent variations of the first living beings will succeed in propagating if they enhance internal cooperation, i.e., internal functional integration, the feature that prompted Kant to choose the combination of words "natural end" as an appropriate label for organisms. Independently of how big or small the role played by chance, the living entity cannot subsequently renounce the property of internal cooperation between elements of the network. It cannot renounce its own evolution toward the continued enhancement of its nature as an agent negotiating with itself and its environment the ways to further the goal of self-maintenance. When we look at the hypothetical chemical processes that led to the first cells, we realize that one goal – self-maintenance – was causally effective in driving the process of complexification: the new molecules, structures, and reactions gradually incorporated into the evolving life-form were so incorporated because of how they contributed to the self-maintenance of the whole. Being incorporated into the system happens by the force of the effect that these new components have on the system itself. Self-maintenance is improved, and this final effect is as much a cause of their incorporation as any physicochemical bonding that initiated the process, pushing it from behind, so to say. In this way, cooperation, organization and a real goal with causal efficacy on this temporally extended process have gradually come to existence – emerged – from a starting point where the only endpoint of any chemical reaction was thermodynamic equilibrium, a stable state of matter that does not qualify for the novel category of "goal," because it in no way shapes the particular path that leads to it. In contrast, the self-maintenance of constant internal reactivity that harnesses potential energy from outside the system and converts it into free energy within the system has exerted and continues to exert a causal influence on the further evolution and organization of life.

As it seems, our argument relies on the temporal duration of a special type of interaction happening between the parts and the whole. Self-maintenance as a goal emerges if it "rules" long enough over the complexification process via the feedback effect it has on any new incorporation of functional elements into the system, when these improve on self-maintenance. But how long is long enough for the incipient teleology to cease being merely apparent and become real? This question appears to make a lot of sense for any view of gradual emergence. But in reality, gradualism is precisely motivated by the impossibility of establishing a precise division point between a world without and a world with teleology. We can only say that, since the process has led to subjects that pose these questions, the process has clearly lasted long enough for the emergence of teleology to be a definitive fact. Of course, life could eventually disappear from Earth or from the universe at large, but we already know that the emergence of life and teleology are within the real possibilities of matter. Teleonomy is not just an appearance in organisms but a genuine emergent property, exhibited in the goals of self-maintenance through enhanced internal cooperation and enhanced external cooperation and/or competition. Within the individual, cooperation can only be abandoned at the price of dissolution of individuality, and ultimately of life. Between individuals, competition is also important for the process of life. But since cooperation was the spark that initiated the transition from the world of purely chemical replicators into the world of living organisms, we propose (as others before us, e.g., Margulis 1998) that cooperation is ultimately more decisive for life itself than competition. We thus end on a positive note similar to that of Pross' book, wishing that this insight might "serve as a ray of hope for the future of humankind" (Pross 2012, p. 191).

³ Organisms are beings "with an agenda." This is Pross' way of saying they are teleological (Pross 2012, pp. 9–11, 16, 158, 176, 187).

Acknowledgments Our chapter improved thanks to the generous comments of anonymous reviewers.
References
Barnes, E. (2013). Emergence and fundamentality. Mind, 121, 873–901.
Bedau, M. A. (2012). A functional account of degrees of minimal chemical life. Synthese, 85, 73–88. https://doi.org/10.1007/s11229-011-9876-x.
Boerlijst, C., & Hogeweg, P. (1991). Spiral wave structure in pre-biotic evolution: Hypercycles stable against parasites. Physica D, 48, 17–28.
Brangwynne, C. P., & Hyman, A. A. (2012). The origin of life. Nature, 491, 524–525.
Czárán, T., Könnyű, B., & Szathmáry, E. (2015). Metabolically coupled replicator systems: Overview of an RNA-world model concept of prebiotic evolution on mineral surfaces. Journal of Theoretical Biology, 381, 39–54.
Gánti, T. (2003). The principles of life, with commentary by James Griesemer and Eörs Szathmáry. Oxford: Oxford University Press.
Horowitz, J., & England, J. L. (2017). Spontaneous fine-tuning to environment in many-species chemical reaction networks. PNAS, 114, 7565–7570.
Kant, I. ([1790] 2000). Critique of the power of judgment. Cambridge: Cambridge University Press.
Kauffman, S. (1995). At home in the universe: The search for laws of self-organization and complexity. Oxford: Oxford University Press.
Lazcano-Araujo, A. (1989). El origen de la vida. Evolución química y evolución biológica. Mexico City: Trillas.
Levin, S. R., & West, S. W. (2017). The evolution of cooperation in simple molecular replicators. Proceedings of the Royal Society B, 284, 20171967.
Lincoln, T. A., & Joyce, G. F. (2009). Self-sustained replication of an RNA enzyme. Science, 323, 1229–1232.
Margulis, L. (1998). Symbiotic planet: A new look at evolution. New York: Basic Books.
Maynard Smith, J. (1979). Hypercycles and the origin of life. Nature, 280, 445–446.
Maynard Smith, J., & Szathmáry, E. (1995). The major transitions in evolution. Oxford: Oxford University Press.
Mayr, E. (1974). Teleological and teleonomic: A new analysis. Boston Studies in the Philosophy of Science, XIV, 91–117.
Mills, D. R., Peterson, R. L., & Spiegelman, S. (1967). An extracellular Darwinian experiment with a self-duplicating nucleic acid molecule. Proceedings of the National Academy of Sciences of the United States of America, 58, 217–224.
Monod, J. (1971). Chance and necessity. New York: Random.
Morales, J. D. (2018). The emergence of mind in a physical world. Bogotá: Universidad Nacional de Colombia.
Moreno, A., & Mossio, M. (2015). Biological autonomy: A philosophical and theoretical enquiry. Dordrecht: Springer.
Morgan, C. L. (1923). Emergent evolution. London: Williams & Norgate.
Mossio, M., Saborido, C., & Moreno, A. (2009). An organizational account of biological functions. British Journal for the Philosophy of Science, 60, 813–841.
Nicolis, G., & Prigogine, I. (1977). Self-organization in non-equilibrium systems. New York: Wiley.
Nunes-Neto, N., Moreno, A., & El-Hani, C. (2014). Function in ecology: An organizational approach. Biology and Philosophy, 29, 123–141.
Oehlenschläger, F., & Eigen, M. (1997). 30 years later—A new approach to Sol Spiegelman's and Leslie Orgel's in vitro evolutionary studies. Origins of Life and Evolution of the Biosphere, 27, 437–457.
Powner, M. W., Gerland, B., & Sutherland, J. D. (2009). Synthesis of activated pyrimidine ribonucleotides in prebiotically plausible conditions. Nature, 459, 239–242.
Pross, A. (2012). What is life? How chemistry becomes biology. Oxford: Oxford University Press.
Rasmussen, S., Bedau, M. A., Liaohai, C., Deamer, D., Krakauer, D. D. C., Packard, N. H., & Stadler, P. F. (2008). Protocells: Bridging nonliving and living matter. Cambridge, MA: MIT Press.
Rosas, A. (2010). Evolutionary game theory meets social science: Is there a unifying rule for human cooperation? Journal of Theoretical Biology, 264, 450–456.
Ruiz-Mirazo, K. (2011). Protocell. In M. Gargaud, R. Amils, J. Cernicharo Quintanilla, H. J. Cleaves, W. M. Irvine, D. Pinti, & M. Viso (Eds.), Encyclopedia of astrobiology (Vol. 3, pp. 1353–1354). Heidelberg: Springer.
Ruiz-Mirazo, K., & Mavelli, F. (2008). Towards 'basic autonomy': Stochastic simulations of minimal lipid-peptide cells. Biosystems, 91, 374–387.
Ruiz-Mirazo, K., & Moreno, A. (2004). Basic autonomy as a fundamental step in the synthesis of life. Artificial Life, 10(3), 235–259.
Saborido, C., Mossio, M., & Moreno, A. (2011). Biological organization and cross-generation functions. The British Journal for the Philosophy of Science, 62, 583–606.
Schrödinger, E. (1944). What is life? The physical aspect of the living cell. Cambridge, UK: Cambridge University Press.
Trivers, R. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35–57.
Voytek, S. B., & Joyce, G. F. (2009). Niche partitioning in the co-evolution of two distinct RNA. PNAS, 106, 7780–7785.
Wagner, N., Pross, A., & Tannenbaum, E. (2010). Selection advantage of metabolic over non-metabolic replicators: A kinetic analysis. Biological Systems, 99, 126–129.
Chapter 7
Evolutionary Debunking Arguments and Moral Realism
Maximiliano Martínez, Alejandro Mosqueda, and Jorge Oseguera
M. Martínez (*) Departamento de Humanidades, Universidad Autónoma Metropolitana, Unidad Cuajimalpa, Mexico City, Mexico e-mail: [email protected]
A. Mosqueda Posgrado en Ciencias Sociales y Humanidades, Universidad Autónoma Metropolitana, Unidad Cuajimalpa, Mexico City, Mexico
J. Oseguera Philosophy Department, Florida State University, Tallahassee, FL, USA
© Springer Nature Switzerland AG 2020 L. Baravalle, L. Zaterka (eds.), Life and Evolution, History, Philosophy and Theory of the Life Sciences 26, https://doi.org/10.1007/978-3-030-39589-6_7
7.1 Debunking Arguments
The recent debate over evolutionary debunking arguments against moral realism gained strength in 2006 with the publication of "A Darwinian Dilemma for Realist Theories of Value" by Sharon Street (2006). She argues that the best explanation of the content of our moral judgments is based on evolutionary biology and does not appeal to the independent moral truths posited by moral realism. Therefore, realists face a dilemma: either accept that there is no relationship between the content of our moral judgments and moral truths – which would lead us to moral skepticism – or offer an explanation of what the relationship between those two is – which would be unacceptable on scientific grounds. Either way, the moral realist is in trouble. But debunking arguments challenging moral realism can be found in the literature much earlier. The first version is found in The Descent of Man, where Darwin offers an evolutionary explanation of moral standards. Michael Ruse and Edward O. Wilson (1986), and Richard Joyce (2006), developed the basic argument that Street expanded on. Debunking arguments are epistemic arguments in the sense that they aim to undermine the justification of our moral beliefs, but they can be divided into different types depending on the reasons they appeal to. Here, we will start distinguishing
between two different types of reasons in order to pave the way for analyzing Street’s argument. In this section we present a description of the arguments mentioned and point out some of the differences between them.
7.1.1 The Modal Argument
Charles Darwin developed the basis of what we know today as the theory of biological evolution through natural selection (see Darwin 1859), which he then applied to human beings (see Darwin 1888). Morality was not something that he left out of his analysis. Darwin hypothesized that, while morality can give little or no advantage to a human being and their children over another human being of the same tribe, a tribe composed of humans who have a feeling of patriotism, loyalty, obedience, courage, and sympathy, who are always willing to help others and who are willing to sacrifice themselves for each other, has a significant advantage over another tribe lacking these characteristics. This would result in the prevalence of those characteristics in the population. This evolutionary explanation of morality motivates skepticism towards moral realism if we consider the following thought experiment that Darwin proposed: If "men were reared under precisely the same conditions as bee-hives, there can hardly be any doubt that our unmarried females would, like worker bees, think it a sacred duty to kill their brothers, and mothers would strive to kill their fertile daughters; and no one would think of interfering" (Darwin 1888, p. 73). This hypothetical case points to the contingency of our moral beliefs. We have the moral beliefs that we have only because of how we evolved, but if we had evolved differently, we would have different moral beliefs and our morality would be different. This is problematic because it conflicts with the objectivity, inescapability, and necessity present in our traditional conception of morality and, therefore, seems to undermine it. In other words: if moral truths are necessary and objective, as we seem to consider them, it is problematic to demonstrate that our moral judgments originate in contingent historical contexts that could have been otherwise. This problem is known in the literature as the "contingency challenge" (Lillehammer 2010, p. 365). Since it seeks to undermine the justification of our moral beliefs by appealing to their contingency, Darwin's argument can be considered a modal argument. It is important to note that this argument only targets moral realisms that hold that moral truths are necessarily true.
7.1.2 The Parsimony Argument
On the other hand, Michael Ruse and Edward O. Wilson offered an argument from ontological parsimony against moral realism. They characterize moral claims as distinctly prescriptive in an objective way:
they [our moral claims] lay upon us certain obligations to help and to co-operate with others in various ways. … Morality is taken to transcend mere personal wishes or desires. … moral statements are thought to have an objective referent whether the Will of a Supreme Being or eternal verities perceptible through intuition (Ruse and Wilson 1986, p. 178).
In this popular way of understanding morality, moral claims become true in virtue of objective referents, be they theological or moral entities. Their argument aims to debunk this objectivity in morality by making those properties irrelevant to the explanation of moral phenomena. The first step they take is to offer an evolutionary explanation of why we experience our moral judgments as objective and with a prescriptive force: "human beings function better if they are deceived by their genes into thinking that there is a disinterested objective morality binding upon them, which all should obey. We help others because it is "right" to help them and because we know that they are inwardly compelled to reciprocate in equal measure" (Ruse and Wilson 1986, p. 179). In other words, evolution selected cognitive mechanisms that make us experience our moral judgments as objective, which helped us to be more effective cooperators and, therefore, to maximize our biological fitness. The key point here is that this explanation does not require the existence of an objective morality. Our moral judgments and their prescriptive and objective phenomenology can be explained without having to posit objective referents for moral statements, such as a "Will of a Supreme Being or eternal verities perceptible through intuition." If this hypothesis is true and can successfully explain our moral phenomena, then offering an "objective basis for morality is redundant"; it does not play a necessary explanatory role (see Ruse and Wilson 1986, p. 254). Even if the entities that make moral propositions objectively true did not exist, we would still make the moral judgments we make. On the other hand, if they exist, we have no reason to suppose that evolution puts us in correspondence with them. If these entities are redundant, what reason do we have to posit them? The conclusion then is that the objectivity of morality is an illusion, which would imply that moral realism is false. Richard Joyce (2006, p. 189) interprets Ruse and Wilson in the following way. There are two competing hypotheses that could explain our moral judgments. One, let us call it Hypothesis A, is the evolutionary explanation offered by Ruse and Wilson: we have the moral beliefs that we have because evolution designed us that way. According to an alternative explanation, Hypothesis B, there are entities (for example, a supreme being, irreducible moral properties) that are intuited or perceived, which gives objectivity to moral claims. Given that Hypothesis A has the same explanatory power as Hypothesis B, but does not posit additional ontological entities, we can apply Ockham's razor and deny Hypothesis B. But according to Joyce, this conclusion is too hasty, since an ontological reduction of Hypothesis B to Hypothesis A is possible: an explanation of moral properties could be offered in terms of natural or physical properties. With such a reduction, Hypothesis B would not imply an ontology greater than Hypothesis A. In this way, Ockham's razor could not be applied, so the argument of Ruse and Wilson only applies to non-naturalist realist theories. But a naturalistic theory that intends to make such a reduction would have the burden of proof, because it would have to offer a clear and plausible explanation of what such a reduction consists in. According to Joyce, the prospect of a
theory of this type is very unlikely, since it could not satisfy a desideratum that he considers key: to explain "the inescapable practical authority" of morality. We will not dwell on what this desideratum consists in because it falls outside the scope of this paper. To sum up: Darwin's contingency challenge is the seminal evolutionary debunking argument. Ruse and Wilson framed it in ontological terms as an argument against non-naturalist moral realism, which Joyce expanded to include naturalist strands of moral realism. We will now move on to analyze Sharon Street's more complex argument. As we will point out, she takes elements of these formulations and develops them further. After our analysis, a response to both the modal and the ontological argument will become apparent.
7.2 Street's Debunking Argument
In "A Darwinian Dilemma for Realist Theories of Value" (2006), Sharon Street introduced one of the most discussed arguments in metaethics based on evolutionary premises and directed against moral realism. With this debunking argument she attempts to undermine moral realism by appealing to the evolutionary origins of our moral beliefs. Roughly, the idea of the argument is that the best explanation of the content of our moral judgments is an explanation based on evolutionary biology that does not appeal to the independent moral truths posited by moral realism. We have reconstructed Street's argument as follows:
1. Realism: Moral truths are independent of our evaluative attitudes.
2. Evolution: Natural selection has had an important influence on the content of our moral beliefs.
3. If the realist does not want to be incompatible with science, she has the challenge of explaining the relation between (1) and (2).
4. Dilemma: Either (a) there is no relation between (1) and (2), or (b) there is a relation between (1) and (2): natural selection favored the ancestors who grasped moral truths.
5. (a) leads to moral skepticism.
6. (b) is an unacceptable explanation on a scientific basis.
Therefore, since moral realism cannot give a satisfactory explanation of the relation between (1) and (2), moral realism is debunked.
The first premise states, according to Street, one of the most important characteristics of moral realism. For Street, the "claim of realism about value (...) is that there are at least some evaluative facts or truths that hold independently of all our evaluative attitudes" (Street 2006, p. 110). For moral realism, the truth or falsity of moral judgments does not depend on our evaluative attitudes, which include states such as desires, attitudes of approval and disapproval, unreflective evaluative tendencies such as the tendency to experience X as counting in favor of or demanding Y, and consciously or unconsciously held evaluative judgements, such as judgements
about what is a reason for what, about what one should or ought to do, about what is good, valuable, or worthwhile, about what is morally right or wrong, and so on (2006, p. 110).
This does not imply that there is no relationship between the subjects and what makes the moral judgments true or false. The independence that moral realism claims only consists in the following: what makes a moral judgment true or false does not depend on what a subject or a group believes, desires, etc. Street characterizes moral realism in this way "because it is independence of this type of mental states that is the main point of contention between realists and antirealists about value" (Street, n. 1, p. 156). The second premise is supported by evolutionary biology, which explains morality as a trait that increased the fitness (survival and reproduction) of those who possessed it. Street points out that "one enormous factor in shaping the content of human values has been the forces of natural selection, such that our system of evaluative judgements is thoroughly saturated with evolutionary influence" (Street 2006, p. 114). The intuition behind this premise is "that just as evolutionary forces shaped our eyes and ears, so they shaped our moral beliefs" (Vavova 2015, p. 104). To demonstrate the influence that natural selection has had on the content of our moral judgments, Street cites six judgments whose wide acceptance can be explained by evolutionary biology:
• The fact that something would promote one's survival is a reason in favor of it.
• The fact that something would promote the interests of a family member is a reason to do it.
• We have greater obligations to help our own children than we do to help complete strangers.
• The fact that someone has treated one well is a reason to treat that person well in return.
• The fact that someone is altruistic is a reason to admire, praise, and reward him or her.
• The fact that someone has done one deliberate harm is a reason to shun that person or seek his or her punishment (Street 2006, p. 115).
Evolutionary biology explains the widespread human acceptance of these judgments based on the idea that they promoted reproductive success and survival more effectively than alternative judgments (see Street 2006, p. 115). In this sense, we consider being negligent with our children incorrect because it does not promote our reproductive success or our survival. Despite cultural, historical, and social differences, these six judgments have been widely accepted because they increased our fitness. This shows, according to Street, that "the content of human evaluative judgements has been tremendously influenced ... by the forces of natural selection" (Street 2006, p. 121). According to Street, "[c]ontemporary realist theories of value claim to be compatible with natural science" (Street 2006, p. 109). This poses a challenge to moral realism. As stated in the third premise, if moral realism does not want to be incompatible with natural science then it "needs to take a position on what relation there
is, if any, between the selective forces that have influenced the content of our evaluative judgements, on the one hand, and the independent evaluative truths that realism posits, on the other" (Street 2006, p. 121). This challenge generates the dilemma indicated in the fourth premise: "[r]ealists have two options: they may either assert or deny a relation" (Street 2006, p. 121). What is required of moral realism is that it takes a position on this dilemma. In this sense, denying that there is a relation is an option that the realist can choose in order to take a position. (a) is an interesting option because it allows realism to recognize the influence of evolutionary forces on the content of our evaluative judgments, and thus not be incompatible with science, without its notion of independence being in danger, since it does not commit to linking such influence with independent evaluative truths. But if the realist denies that there is a relation, then she would have to accept that any match between the moral judgments evolution has pushed us to adopt and the independent moral truths is a matter of luck. It would be mere good fortune if the moral judgments that natural selection promoted were precisely the moral judgments that moral realism considers true. In this way, denying that there is a relation between (1) and (2) "leads to the implausible skeptical result that most of our evaluative judgements are off track due to the distorting pressure of Darwinian forces" (Street 2006, p. 109), as mentioned in the fifth premise of the argument. The second horn of the dilemma is to accept that there is a relation between (1) and (2). We can account for this relation with a tracking account: natural selection made us track those events that satisfy the truth conditions of our moral judgments. "According to this hypothesis, our ability to recognize evaluative truths, like the cheetah's speed and the giraffe's long neck, conferred upon us certain advantages that helped us to flourish and reproduce" (Street 2006, p. 126). The individuals who grasped such facts and made judgments in accordance with them had higher fitness than those who did not. The tracking account is a scientific explanation because it offers a hypothesis about how the course of natural selection explains the wide presence of certain moral judgments rather than others in humans (see Street 2006, p. 126). As this explanation is a scientific one, it is subject to competition with other theories under scientific standards. In this competition, the tracking account is outperformed by an alternative explanation called the adaptive link account: the tendency to adopt certain moral judgments contributed to fitness because such judgments forged adaptive links between our ancestors' circumstances and appropriate responses to them, making them act, feel, and believe in ways that were advantageous (see Street 2006, pp. 126–127). In living organisms there are several mechanisms that serve to link the circumstances of the organism with their responses in ways that tend to promote fitness. "A straightforward example of such a mechanism is the automatic reflex response that causes one's hand to withdraw from a hot surface, or the mechanism that causes a Venus's-flytrap to snap shut on an insect" (Street 2006, p. 127). Street argues that the adaptive link account is superior to the tracking account at least with respect to three common criteria of scientific adequacy: parsimony, clarity, and explanatory power.
The tracking account is less parsimonious because it “posits something extra that the adaptive link account does not, namely independent evaluative truths” (Street 2006, p. 129). The tracking account postulates independent moral truths supported by moral facts to explain why it is adaptive to make certain judgments. In contrast, the adaptive link account explains the adaptive advantage of such judgments without the need to postulate independent evaluative truths. With respect to parsimony, the adaptive link account is preferable because its explanation is simpler and does not multiply the ontology of the world since it does not postulate independent evaluative truths. Regarding the criterion of clarity, Street argues that the tracking account becomes obscure upon closer examination: [A]ccording to the tracking account, making certain evaluative judgements rather than others promoted reproductive success because these judgements were true. But let’s now look at this. How exactly is this supposed to work? Exactly why would it promote an organism’s reproductive success to grasp the independent evaluative truths posited by the realist? The realist owes us an answer here (Street 2006, pp. 129–130).
The only explanation that the tracking account can give of why certain moral judgments promoted fitness is that such judgments are true. But this answer is unsatisfactory because of the following question: exactly why would grasping independent evaluative truths promote an organism's fitness? Conversely, the adaptive link account holds that we make such judgments simply because they were adaptive, not because they are true. Finally, Street argues that the adaptive link account has more explanatory power than the tracking account. Of the tracking account she writes: "Its appeal to the truth and falsity of the judgements in question sheds no light on why we observe the specific content that we do in human evaluative judgements; in the end, it merely reiterates the point that we do believe or disbelieve these things" (Street 2006, p. 134). First, the tracking account cannot explain the remarkable coincidence that the moral truths it posits are exactly the judgments that the adaptive link account explains. Second, the adaptive link account explains why we tend to make judgments that today we would clearly consider false, for example, the judgment that we should help members of our own group more and outsiders less: it was adaptive for our ancestors to cooperate with those close to them and to be wary of strangers. Lastly, the adaptive link account also explains why, out of all the possible moral judgments, we have the ones we have: our moral judgments are those that were appropriate for the circumstances of our ancestors. The tracking account, by contrast, does not explain these issues. In this way, the sixth premise of Street's debunking argument is supported. The above shows that moral realism is in trouble: either it has to deny the relation between (1) and (2) and fall into skepticism, or it has to adopt an explanation that is scientifically inferior to the antirealist explanation of the adaptive link account. The argument, according to Street, shows that the adaptive link account is better for explaining the content of our moral judgments. But such an explanation does not appeal to independent evaluative truths; instead, it explains the content of our
evaluative judgments based on what promoted survival and adaptation. Therefore, moral realism is undermined because the forces of evolution determined in an important way the content of our moral judgments in directions that have nothing to do with the independent evaluative truths postulated by moral realism.
7.3 A Version of Moral Realism Based on Realism Itself: A Critical Examination of Street's Argument
The discussion about Street's debunking argument has been extensive and several aspects of her argument have been examined. David Copp (2008), for example, argues that the tracking account and the adaptive link account are compatible, so he proposes an alternative realist explanation of the relation between (1) and (2), which he calls society-centered moral theory. Erik Wielenberg (2010), on the other hand, discusses the first horn of the dilemma and tries to show that skepticism does not follow from denying that there is a relationship between the independent evaluative truths posited by the realist and the influence of natural selection on the content of our evaluative judgments. In a recent paper, Marc Artiga (2015) uses the naturalistic theory of teleosemantics to try to show that the tracking account is not inferior to the adaptive link account. Our strategy will be different: we will try to defend moral realism by starting from its fundamental characteristics. We will offer a detailed explanation of the notion of independence of the evaluative attitudes in order to introduce into the discussion other important characteristics of moral realism – such as cognitivism, representational language, and moral facts – and thus offer a more precise explanation of the tracking account, one that shows that it is not really an explanation inferior to the adaptive link account. Our argument is based on a review of the first premise of Street's argument. "Moral realism" is a technical term and, therefore, there is not a single correct definition. For Street, "[the] claim of realism about value ... is that there are at least some evaluative facts or truths that hold independently of all our evaluative attitudes" (Street 2006, p. 110). The independence of the evaluative attitudes is undoubtedly one of the main characteristics of moral realism. However, we believe that to fully understand this characteristic it is necessary to make explicit that moral realism is a form of cognitivism. One of the reasons why moral realism holds that what makes a moral judgment true or false is independent of our evaluative attitudes is that it considers that moral judgments do not purport to express our opinions, desires, beliefs, emotions, or moral theories, but to describe the world. This claim makes moral realism a form of cognitivism. In general, "the key thought for cognitivism is that the sentence [a moral statement] purports to describe how things are" (Bedke 2018, p. 293). Moreover, a "view is cognitivist if it allows for a central class of judgments within a domain to count as beliefs, capable of being true or false in virtue of their more or less accurate representation of the facts within the domain" (Shafer-Landau
2003, p. 17). Moral realism is a form of cognitivism because it holds that evaluative judgments are statements that can be true or false by virtue of correctly reporting certain facts. "Realist not only think that moral language and thought purport to describe or represent, but they think there are mind-independent moral properties and facts that we sometimes describe or represent accurately" (Bedke 2018, p. 296). For moral realism, the truth or falsity of moral judgments does not depend on our evaluative attitudes but on their correct description or representation of the facts. It is important to note that for any kind of moral realism the facts are the truth conditions of our evaluative judgments. That is, a moral realist cannot disregard the existence of facts as one of his fundamental theses, because, after all, it is the facts that determine whether an evaluative judgment is true or false. It is because of this commitment to cognitivism that, for moral realism, the truth or falsity of evaluative judgments is independent of our evaluative attitudes. It is then important to recognize that for this view of moral realism moral discourse is of a representational type and that its truthmakers are facts. Moral language behaves very similarly to other representational languages. Both the sentence "Corruption is common" and "Corruption is incorrect" are statements that we can affirm or deny and to which we can assign truth values based on the facts. On this representational view of moral language, our evaluative judgments aim to describe a reality that is independent of our way of speaking and thinking about it. Recognizing the cognitivist position of moral realism allows us to better understand the claim that the truth or falsity of evaluative judgments is independent of our evaluative attitudes and also helps us to distinguish moral realism from other metaethical positions. Unlike non-cognitivism, expressivism, and emotivism, moral realism holds that "moral or normative talk is fully representational, that is fully and straightforwardly fact-stating and truth-evaluable, that it expresses beliefs, that it attempts to describe the normative part of the universe" (Enoch 2018, p. 30). And unlike constructivism, moral realism holds that moral judgments "are not made true by our decision-making procedures, or by our endorsing them, or by anything about us and our perspectives" (Enoch 2018, p. 30). We must then understand the moral realists' notion of independence of the evaluative attitudes as a cognitivist thesis, according to which evaluative language is a representational language with which we try to represent or describe reality. Facts play an important role since they are the truthmakers of this representational language. How does this explanation clarify or specify the tracking account mentioned above? Faced with the dilemma generated by the thesis of moral realism and the thesis of evolutionary biology, Street believes that moral realism can claim that there is a tracking relation between independent evaluative truths and the influence of natural selection on the content of our evaluative judgments. According to Street's version of the tracking account, "our ability to recognize evaluative truths ... conferred upon us certain advantages that helped us to flourish and reproduce" (Street 2006, p. 126). The evaluative judgments that provided the greatest selective advantages to our ancestors were those that were true.
In this way, moral realism can point to the evolutionary advantages of grasping evaluative truths: “Surely ... it
is advantageous to recognize evaluative truths; surely it promotes one's survival ... to be able to grasp what one has reason to do, believe, and feel" (Street 2006, p. 125). Now, it should be noted that it is confusing to say that moral realism proposes that we grasp independent evaluative truths, as Street affirms. This suggests that the moral realist thesis about moral truths is a purely ontological one, and we think it is not. As we stressed, the thesis about independent evaluative truths consists in affirming that our evaluative judgments are representational statements with which we try to describe or represent a certain reality. In this sense, the truth or falsity of an evaluative judgment does not depend on the evaluative attitudes of the agent but on the correct reporting or description of the reality in question. Undoubtedly there is an ontological element in this thesis, since it assumes that there are facts by virtue of which evaluative judgments are true or false, depending on whether or not those facts are adequately represented. But it is one thing to affirm that there are (independent) facts and another to affirm that there are (independent) evaluative truths. The thesis about independent evaluative truths seems to be, rather, a semantic thesis that expresses a cognitivist way of understanding moral discourse. We also believe that we must avoid thinking of our ability to recognize the evaluative facts by virtue of which an evaluative judgment can be true or false as a special capacity. J. L. Mackie had already expressed this concern in his argument from queerness: If there were objective values, then they would be entities or qualities or relations of a very strange sort, utterly different from anything else in the universe. Correspondingly, if we were aware of them, it would have to be by some special faculty of moral perception or intuition, utterly different from our ordinary ways of knowing everything else (Mackie 1977, p. 38).
Among the difficulties raised by this argument, Mackie points to an epistemological one: accounting for our knowledge of these evaluative entities or qualities, and of their links with the natural features of which they would be consequences, seems to require postulating a special capacity. But why is it necessary to postulate a special capacity that allows us to see the evaluative traits? As Platts points out, "why, to change the case, can we not account for people's recognition of the malicious, the loyal, the aggressive, and the dishonest simply in terms parallel to those in which we account for their recognition of other traits in the world?" (Platts 1983, p. 4). As we have mentioned, for moral realism evaluative language is representational; with it we try to describe the facts in the world. We recognize them in the same way that we recognize other non-evaluative features of the world. According to Street, the tracking account states that we have an ability to recognize evaluative truths that gives us certain adaptive advantages that helped us flourish and reproduce. But we have tried to show that we should not understand such an ability as a special capacity and that we should not confuse the semantic thesis of independent evaluative truths with the ontological thesis of evaluative facts. Consequently, the description of the tracking account should be made more precise. The tracking account that explains the relationship between (1) and (2) would consist in the following:
we can recognize the evaluative facts that exist in the world, in virtue of which an evaluative judgment is true or false depending on whether it represents such facts correctly; and some of those truthmaking facts are also evolutionary facts: facts about what promotes fitness. Before testing this new version of the tracking account under the criteria of parsimony, clarity, and explanatory power (the scientific standards Street uses to attack it), we would like to point out that there seems to be a gap in Street's argument. Street argues that if the moral realist does not want to be incompatible with science, she has to explain the relation between her thesis that evaluative truths are independent of our evaluative attitudes and the evolutionary thesis that natural selection has had an important influence on the content of our evaluative beliefs. This generates, according to Street, the following dilemma: moral realism has to deny that there is a relation between the realist thesis and the evolutionary thesis or has to accept that there is a relation between them. Street herself proposes the tracking account as an option available to moral realism for explaining such a relation. The problem with the tracking account is, according to Street, that it is unacceptable on a scientific basis because it is an explanation inferior to the adaptive link account in relation to the criteria of parsimony, clarity, and explanatory power. The strange thing about this step of the argument is that the parameters in relation to which the tracking account is inferior have nothing to do with the explanation of the relation between (1) and (2), which was supposed to be the challenge that moral realism had to meet. We believe that for two explanations to compete it is important that both are trying to explain the same phenomenon (in this case the relationship between theses (1) and (2) in Street's argument). While the tracking account does explain such a relationship, the adaptive link account does not. How then can these explanations compete if they do not try to explain the same phenomenon? The adaptive link account does not explain the relationship between (1) and (2). The criteria of parsimony, clarity, and explanatory power by which Street contrasts the tracking account with the adaptive link account revolve around explaining the tendency that we have to adopt certain evaluative judgments rather than others, why such judgments contributed to fitness, and why we observe the specific content we do in human evaluative judgments. All these aspects strictly correspond to the evolutionary thesis. But note that the explanation of the relation between the thesis of independent evaluative truths and the thesis that natural selection has influenced in important ways the content of our moral beliefs is left aside. Given this caveat, the question for Street would be: why does the fact that the tracking account is inferior to the adaptive link account with regard to the evolutionary thesis make the tracking account also unacceptable for explaining the relationship between the realist and the evolutionary theses? No doubt the result would be different if we contrasted the tracking account and the adaptive link account under the criteria pointed out by Street (not only in relation to the evolutionary thesis but in relation to the explanation of the relationship between the realist thesis (1) and the evolutionary thesis (2)), which is the core of the dilemma she stresses.
7.4 Reassessing the Superiority of the Adaptive Link Account Over the Tracking Account
So far, we have tried to contextualize moral realism and the tracking account from the basic tenets of moral realism itself: its cognitivist character, its claim that evaluative language is representational, and the thesis that evaluative judgments are true or false by virtue of corresponding properly with particular facts. So the tracking account would hold that we can describe facts of the world – facts independent of our interests and desires – by virtue of which our evaluative judgments are true or false. And some of the facts that determine the truth values of our moral judgments are evolutionary. For example, the judgment "Caring for our children is correct" is true in virtue of the evolutionary fact that taking care of our children promotes our fitness. Or the judgment "Failing to reciprocate cooperative attitudes is incorrect" is true in virtue of the fact that failing to reciprocate cooperative attitudes does not promote our fitness. In this sense, evolutionary facts are facts about fitness. But what kind of fitness are we talking about here? We take a multi-level selection approach à la Mayr (2002), Matthen (2003), Okasha (2006), and Martínez and Moya (2011), in which selection operates primarily on organisms since it "has direct effects on both a higher level (characteristics of [groups and] populations) and a lower level (characteristics of genetic pools)" (Martínez and Moya 2011, p. 5). This focus on the organismal level means that the fitness we are primarily considering is individual fitness. Now, altruism and other moral behaviors are usually explained through genetic or group fitness, so it could be argued that if we focus on individual fitness, the implication will be that in some scenarios acting in a non-altruistic way will be seen as morally permissible. For example, cheating in a prisoner's dilemma scenario could increase the individual fitness of an organism if the other organisms involved are not cheating. The cheater would be free riding on the non-cheating group by getting benefits at the expense of the others. If this translates into more offspring for the cheater, then the cheater would be fitter and we would have to conclude that cheating is morally obligatory or at least morally permissible. But as Trivers' (1971) and Axelrod and Hamilton's (1981) analyses show, mechanisms could evolve to make the non-cheaters more individually fit than the cheater. The non-cheaters could evolve mechanisms to become "reciprocal altruists," i.e., organisms that can distinguish cheaters from non-cheaters, which leads them to cooperate with other non-cheaters, but not with cheaters. This would preclude the cheaters from getting the benefits of cooperation and make the non-cheaters more fit, and, therefore, make cheating morally impermissible. But what if there are subtler cheaters, who cheat only when they realize that their cheating is not going to be discovered? This more refined cheater would be reaping the benefits of cooperation while sometimes free riding without being discovered and, therefore, without losing future opportunities to cooperate with non-cheaters. This strategy would seem to be more adaptive than that of the non-cheater and, therefore, not morally impermissible. But this strategy would
encounter problems because, as Trivers (1971) points out, a complex system of mechanisms would evolve to identify such subtle cheaters and exclude them from the benefits of cooperation. Unnoticed cheating would become so costly in terms of resources and effort that it would no longer be adaptive. However, calling attention to the points just mentioned, and to the ones we mentioned at the end of the last section, not only allows us to reexamine the moral realist position and do it greater justice, it also allows us to reevaluate another of Street's points of attack: the scientific inferiority of the tracking account compared with the adaptive link account. Recall that, for Street, the latter is a better scientific theory than the former given its alleged superiority in three aspects: parsimony, clarity, and explanatory power. Let us see how the version of moral realism that we have just proposed allows us to reconsider this issue. According to Street, the tracking account is less parsimonious than the adaptive link account because it postulates independent evaluative truths to explain why it is adaptive to make certain judgments. The tracking account is also less clear because it fails to answer the question of why grasping independent evaluative truths promotes the reproductive success of an organism. And finally, the tracking account has less explanatory power because, unlike the adaptive link account, it does not explain three key issues: (a) why the truths that the realist posits turn out to be exactly the judgments that forge adaptive links between circumstances and responses, (b) why we tend to make certain judgments that we would consider false today, and (c) why, out of all the possible moral judgments, we have the ones we have. For Street, since the adaptive link account is scientifically superior to the tracking account, which is directly linked to moral realism (remember the dilemma presented in the fourth premise), the latter must be abandoned. We believe that the tracking account is no less parsimonious, no more obscure, and no less explanatorily powerful than the adaptive link account, contrary to what Street argues. Let us see why. With regard to parsimony, it is now possible to hold that evaluative truths are not something postulated by the tracking account, much less postulated ontologically. As we have insisted, the thesis behind the postulation of independent evaluative truths is that moral judgments are true or false by virtue of describing or representing facts in an adequate manner. So the truth or falsity of judgments is independent of our evaluative attitudes. In this sense, the tracking account does not postulate independent truths to explain why it is adaptive to make certain evaluative judgments; it simply says that such judgments were adaptive because they correctly described an evolutionary and independent fact. Clearly there is nothing extra, ontologically speaking, postulated by the tracking thesis. In relation to the criterion of clarity, the problem with the tracking account was that it could not explain why grasping the independent evaluative truths promotes the reproductive success of an organism. Again, moral realism does not propose that we grasp truths but argues that moral judgments are intended to describe or represent a reality, so their truth or falsity depends on whether they manage to do so properly. The question then would be: why does correctly representing a fact through a moral judgment promote an organism's fitness?
From the tracking account perspective we could say that correctly representing an evolutionary fact
through an evaluative judgment promotes our fitness since it prevents us from adopting judgments that would work to our own detriment (e.g., because an individual or group believes that they are true). One example is avoiding taking as true the judgment "it is right not to feed your own children" simply because an individual or group believes it. The tracking account, on the other hand, would show how such a judgment is false, because it is an evolutionary fact, independent of any group's or individual's evaluative attitudes, that not feeding one's own children goes against one's fitness. In short, we are more likely to adopt judgments that promote fitness if the truth or falsity of evaluative judgments rests on independent facts (rather than on our evaluative attitudes). By simple evolutionary logic, it is more adaptive to make judgments whose truth conditions are independent of us than to make judgments whose truth rests either on fictions (see Mackie 1977) or on the sway of our evaluative attitudes, since the former are anchored in a less contingent reality. Lastly, for Street, the adaptive link account has greater explanatory power in relation to three relevant issues in dispute: (a) how to explain the remarkable coincidence that the moral truths posited by the realist are exactly equivalent to the judgments that are explained by the adaptive link account, (b) why we tend to make certain judgments that we consider false today, and (c) why, out of all the possible evaluative judgments, we have the ones we have. We think the version of moral realism (and its respective tracking account) we developed here copes with those three questions. With regard to the first one, moral realism does not postulate independent evaluative truths but facts independent of evaluative attitudes. Therefore, the first issue in dispute is not problematic, because the facts are part of a reality that has causal effects and, given the previous explanation of why representing these facts is adaptive, this explains the coincidence that Street asks us to explain: we have the evaluative judgments we have because representing those facts is adaptive. With respect to the second issue, it can be answered from the tracking account that the adaptive facts can change over time. What is adaptive in t1 may not be adaptive in t2. The fact that we tend to help only those of our group and to discriminate against strangers could have been adaptive in our ancestral evolutionary history, but it is no longer so given our current globalized context. This would explain why we tend to maintain such a judgment even if we consider it false. For the moral realism we advocate here it is enough that the truth value be determined by a fact, but if the fact changes, the conditions of truth also change (this will be important when we discuss the contingency challenge below). Finally, the third point: why, of all the logically possible evaluative judgments, do we have the ones we have? According to Street, the adaptive link account answers that we have only those judgments that were adaptive. But according to the tracking thesis we defend, which appeals to facts as conditions of truth, we have those judgments that have truth conditions based on facts. It is not factually possible that all logically possible judgments are adaptive, so we have the judgments that we have by virtue of their referring to facts that are independent of our evaluative attitudes and which
conferred fitness on those who adopted them. In short, we consider as true only those judgments whose conditions of truth are adaptive facts. For example, it is a fact that valuing plants more than human beings, or calling for the murder of children, does not confer fitness. For this reason we do not consider the corresponding judgments true, even though they are logically possible. We think the version of moral realism we defend here answers the modal and the parsimony debunking arguments we described in the first section. With regard to the latter, we just argued above that moral realism does not posit an extra ontology: facts that already exist in the world are the truthmakers of our moral judgments. In no way does this imply postulating the existence of extra moral facts of a different kind. Moreover, it is not necessary to posit a special moral faculty. With regard to the contingency challenge or the modal debunking argument, from our perspective it is possible to argue that moral realism does not need to defend the existence of a necessary relationship between our moral judgments and eternal and immutable moral facts. A more modest version that defends a simple relationship between our moral judgments and the facts that are their truthmakers is enough to answer the contingency challenge. Under this view, moral truths can change over time if the circumstances change, but that does not make them less real. As mentioned earlier with the examples of having in-group preferences and discriminating against strangers, what is adaptive in t1 may not be adaptive in t2. This would mean that in t1 a proposition like "Giving preferential treatment to members of your own group over members of other groups is morally permissible" could be true while being false in t2. The fact that the truth value of a proposition changes over time does not mean that it is not based on facts that are independent from our evaluative attitudes. Only a stronger version of realism would hold that moral truths are necessary, but this is not a view that we are defending here. Before reaching a conclusion, a clarification about cultural evolution is in order. Take the example of the Fore people of Papua New Guinea, who have the maladaptive tradition of eating their dead, including their brains, which contain infectious prions responsible for an epidemic that has led to many deaths. Think also of some Christian traditions that are not necessarily maladaptive, but are not precisely adaptive either, like crossing oneself when passing by a church.1 These traditions might create social cohesion, which could translate into a positive contribution to biological fitness. If the overall contribution to fitness of a particular conduct is positive, then it would be prescribed in our view; if the contribution is negative, it would be forbidden; and if it is neutral, it would be permissible. In the case of the cannibal Fore people, it seems that even though the tradition promotes social cohesion, its overall effect is not positive, since a considerable number of people die. So the proposition "Cannibalism is morally permissible" would be false even if it is part of the actual morality of that people. (It is important to remember that we are not concerned with what people or groups of people believe is morally good or
1 We want to thank an anonymous referee for pointing out these cases.
bad, but with what is actually good and bad). If the tradition consisted in eating the whole body except the brain, where the infectious prions are, then there would be no epidemic, and the positive effects of social cohesion might be beneficial in terms of fitness, in which case the tradition could be morally acceptable. Think now of crossing oneself when passing by a church. If it does not contribute positively to fitness, it is not prescribed in our view. And if the contribution is not negative, it is not forbidden either. In our view, this action would be simply permissible.
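As a deliberately simplified illustration of the rule just stated, the short sketch below classifies a conduct as prescribed, forbidden, or permissible from the sign of its net fitness contribution. The function name and the numeric values attached to the examples are invented purely for illustration; the chapter offers no quantitative model of fitness contributions, and nothing in the argument depends on the particular numbers.

```python
# Toy illustration (not from the chapter): classify a conduct by the sign of
# its net fitness contribution, following the rule stated in the text.

def moral_status(fitness_contributions):
    """Sum the (hypothetical) fitness effects of a conduct and apply the rule:
    positive -> prescribed, negative -> forbidden, neutral -> permissible."""
    net = sum(fitness_contributions.values())
    if net > 0:
        return "prescribed"
    if net < 0:
        return "forbidden"
    return "permissible"

# Invented numbers, for illustration only: the chapter assigns no magnitudes.
fore_cannibalism = {"social cohesion": +1.0, "prion epidemic": -3.0}
crossing_oneself = {"social cohesion": 0.0}   # neither helps nor harms fitness
feeding_children = {"offspring survival": +2.0}

for name, effects in [("Fore mortuary cannibalism", fore_cannibalism),
                      ("Crossing oneself at a church", crossing_oneself),
                      ("Feeding one's own children", feeding_children)]:
    print(f"{name}: {moral_status(effects)}")
```

On this toy rendering, what fixes the classification is the stipulated fitness facts, not what any individual or group believes, which is the point the Fore and church examples are meant to bring out.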
7.5 Conclusions
In this paper we offered a way for moral realism to avoid some of the main challenges posed by debunking arguments. By drawing attention to and developing the basic tenets of moral realism (which are ignored in the literature), we were able to respond to the seminal, modal debunking argument (or the contingency challenge), to the parsimony debunking argument, and to the debunking argument formulated by Street. Our strategy was to highlight three fundamental characteristics of moral realism that, from our perspective, cannot be ignored in the debate: its cognitivist character, its claim that evaluative language is representational, and the assumption that facts are the truthmakers of evaluative judgments. Focusing on these elements allowed us to argue that the tracking account is not inferior to the adaptive link account, the alleged inferiority being one of the main points of Street's criticism of moral realism. If our argument is correct, moral realism and the tracking account can explain the relationship between the thesis of realism and the evolutionary thesis. Therefore, moral realism is not debunked. Acknowledgments We are grateful to our anonymous referees for their valuable comments on the manuscript. Thanks also to the audiences at the ISHPSSB São Paulo meeting (July 2017), the Seminario de Investigadores at Instituto de Investigaciones Filosóficas - UNAM (February 2018), and the AIFIBI meeting in Bogotá (July 2018) for useful discussions. AM wishes to thank the Beca CONACyT para Estancias Posdoctorales Nacionales for financial support.
References
Artiga, M. (2015). Rescuing tracking theories of morality. Philosophical Studies, 172, 3357–3374.
Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211, 1390–1396.
Bedke, M. (2018). Cognitivism and non-cognitivism. In T. McPherson & D. Plunkett (Eds.), The Routledge handbook of metaethics (pp. 292–307). New York: Routledge.
Copp, D. (2008). Darwinian skepticism about moral realism. Philosophical Issues, 18, 186–206.
Darwin, C. (1859). On the origin of species. London: John Murray.
Darwin, C. (1888). The descent of man and selection in relation to sex. Princeton: Princeton University Press.
Enoch, D. (2018). Non-naturalistic realism in metaethics. In T. McPherson & D. Plunkett (Eds.), The Routledge handbook of metaethics (pp. 29–42). New York: Routledge.
Joyce, R. (2006). The evolution of morality. Cambridge, MA: MIT Press. Lillehammer, H. (2010). Methods of ethics and the descent of man: Darwin and Sidgwick on ethics and evolution. Biology and Philosophy, 25, 361–378. Mackie, J. L. (1977). Ethics. Inventing right and wrong. London: Penguin. Martínez, M., & Moya, A. (2011). Natural selection and multi-level causation. Philosophy and Theory in Biology, 3, 1–14. Matthen, M. (2003). Is sex really necessary? And other questions for Lewens. British Journal for the Philosophy of Science, 54, 297–308. Mayr, E. (2002). What evolution is. New York: Basic Books. Okasha, S. (2006). Evolution and the levels of selection. New York: Oxford University Press. Platts, M. (1983). La naturaleza del mundo moral. Análisis Filosófico, 3, 1–11. Ruse, M., & Wilson, E. O. (1986). Moral philosophy as applied science. Philosophy, 61, 173–192. Shafer-Landau, R. (2003). Moral realism. A defence. Oxford: Clarendon Press. Street, S. (2006). A Darwinian dilemma for realist theories of value. Philosophical Studies, 127, 109–166. Trivers, R. L. (1971). The evolution of reciprocal altruism. The Quarterly Review of Biology, 46, 35–57. Vavova, K. (2015). Evolutionary debunking of moral realism. Philosophy Compass, 10, 104–116. Wielenberg, E. J. (2010). On the evolutionary debunking of morality. Ethics, 120, 441–464.
Chapter 8
The Darwinian Naturalization of Teleology
Gustavo Caponi
8.1 Introduction In the theory of natural selection, the concepts of biological function, fitness, and adaptation maintain close and precise links. Those relationships, however, are not explicit, and their elucidation requires analytical work. Here, I will attempt that elucidation, showing that, in the framework of that theory, the concept of biological function must be presupposed in order to define the concept of fitness, and that both the concept of function and the concept of fitness must be assumed, and articulated, in order to define the concept of adaptation. However, to fully understand these three biological concepts and their relationships, I will present them as specifications of three notions of broader application: the general notion of function, the very broad notion of effectiveness, and the idea of design. The concept of biological function, I will argue, is a specification of the general notion of function, and the concept of fitness is a specification of the notion of effectiveness, which, in some cases, seems better expressed by terms such as "efficacy" or "efficiency." For its part, the concept of adaptation, I will also claim, is a specification of the idea of design. These three more general concepts, as I will try to show, are related to each other in a way that is isomorphic to the way in which the concepts of biological function, fitness, and adaptation are interrelated within the theory of natural selection. It is necessary to presuppose the general concept of function to understand the concept of effectiveness, and both the concept of function and the concept of effectiveness are necessary for the delimitation of the concept of design and the concept of designed object. However, the development of this conceptual concatenation, which is also an argument against the etiological conception of functional attributions, is just my immediate target.
G. Caponi (*) Department of Philosophy, Federal University of Santa Catarina, Florianópolis, SC, Brazil. e-mail: [email protected]
What ultimately interests me most is to show how that connection of ideas makes possible and legitimizes the naturalization of teleology brought about by Darwinism. Concerning this point, my position approaches those of Francisco Ayala (2004) and James Lennox (1993), but especially that of Elliott Sober, for whom: Darwin is rightly regarded as an innovator who advanced the cause of scientific materialism. However, his effect on teleological ideas was quite different from Newton's. Rather than purge them from Biology, Darwin was able to show how they could be rendered intelligible within a naturalistic framework (Sober 1993, p. 82).
The Darwinian naturalization of teleology allows us to accept the idea of biological design without challenging a purely naturalistic, even materialistic, ontology. Therefore, it must remain clear from the beginning that, although I will reject the etiological conception of functional attribution in favor of the causal role point of view, I will not accept Cummins' rejection of naturalized teleology. For Cummins (2010), the idea of naturalized teleology only makes sense if we accept the etiological conception of functional attributions, which he correctly rejects. But I will show that the Darwinian explanation of adaptation can be recognized as a naturalization of teleology without any commitment to the etiological conception of functions. Regarding that point, Cummins poses a false dilemma that will be avoided here. Assuming an elucidation of the general concept of function that is very close to that of Cummins, although a bit more liberal, I will try to show the legitimacy of the naturalization of teleology carried out by Darwin. However, unlike the etiologist Karen Neander (1999, 2018), I do not consider that the key to this naturalized teleology is the notion of function. From my point of view, that key is a more complex notion: a notion that includes the more primitive, or basic, notion of function as part of its definiens. That is the concept of evolutionary adaptation, which is equivalent to the notion of natural or biological design. For that reason, I will begin my discussion by making a preliminary delimitation of the notions of evolutionary adaptation and of design, without referring yet, in this latter case, to the notion of biological design. That notion will be examined only in the last section.
8.2 Adaptation and Design The evolutionary concept of adaptation refers to lineages' character states that are capable of being evolutionarily explained by natural selection (West-Eberhard 1998, p. 8; Griffiths 1999, p. 3). This concept is different from the physiological concept of adaptation, which refers to "short-term physiological adjustments by phenotypically plastic individuals or to a change in the responsiveness of muscle/nerve tissue upon repeated stimulation" (West-Eberhard 1998, p. 8). These adjustments are cases of "ontogenetic adaptations" (Sober 1984, p. 204) or "physiological adaptations" (Griffiths 1999, p. 3): two terms for a single biological concept that, despite its importance, we will not analyze in these pages. Here, the distinction between the concepts of evolutionary and physiological adaptation will
be much less important than establishing the equivalence between the concept of evolutionary adaptation and the notion of biological design. However, while in the literature concerning the theory of natural selection the concept of evolutionary adaptation is rather clear, the concept of biological design is still surrounded by a persistent cloud of confusion and suspicion, which we must overcome in order to understand the Darwinian naturalization of teleology. The definition given by Elliott Sober (1984, p. 208) is a very good approximation to the concept of evolutionary adaptation. In The Nature of Selection, he said: "A is an adaptation for a task T in population P if and only if A became more prevalent in P because there was selection for A, where the selective advantage of A was due to the fact that A helped perform task T". Sober (1993, p. 84) makes clear that "to say that a trait is an adaptation is to comment not on its current utility, but on its history." The evolutionary concept of adaptation, contrary to what Reeve and Sherman (1993, p. 9), or Bock and Wahlert (1998, p. 143), tried to do, should be understood as a historical notion that necessarily involves a reference to natural selection and, therefore, to the evolutionary history of the trait that is considered an adaptation. The concept of adaptation does not refer to the current usefulness or convenience of a trait (Sterelny and Griffiths 1999, p. 217). Still, in view of my objective, Sober's definition of adaptation presents a small difficulty that is not too hard to correct: it does not make explicit the appeal to the notion of function that it is, in fact, assuming. There, the expression "task T" might be replaced by "function F" without generating any change in the meaning of the whole sentence. This would make clearer the relationship that the concept of adaptation bears to the concept of function. Regarding this subject, the definition of adaptation given by Jonathan Coddington seems better than the one proposed by Sober. For Coddington (1994, p. 56), an adaptation is "an apomorphic feature that evolved in response to natural (or any other kind of) selection for an apomorphic function." Here, very clearly, the concept of biological function is used to define adaptation. This also happens in the definitions given by Douglas Futuyma (2005, p. 548). Appropriately, he distinguishes between adaptation conceived as a process and adaptation considered as the outcome of that process. In both cases, and again correctly, the notion of function is regarded as more primitive than the notion of adaptation. In the first case, adaptation is defined as a process of genetic change in a population whereby, as a result of natural selection, the average state of a character becomes improved with reference to a specific function, or whereby a population is thought to have become better suited to some feature of its environment (Futuyma 2005, p. 548).
In the second case, an adaptation is characterized as "a feature that has become prevalent in a population because of a selective advantage conveyed by that feature in the improvement in some function" (Futuyma 2005, p. 548). Thus, when referring to adaptation as a result, both Coddington and Futuyma speak of features or character states, and Coddington (1994, p. 56) appeals to the language of systematics, saying that "adaptations are always apomorphies, though not all apomorphies are adaptations."
An adaptation is always a derived, or evolved, character state: an apomorphy whose derivation from a primitive, or ancestral, character state, the plesiomorphy, resulted from a process of natural or sexual selection. Given two states (C1 and C2) of the character C (where C1 is the plesiomorphy and C2 is the apomorphy), it can be said that C2 is an adaptation if, and only if, its derivation from C1 is the outcome of a selective process resulting from the fact that, in performing a certain biological function, C2 was more efficient than C1. The webbed posterior feet of Chironectes minimus, which are apomorphic by reference to the plesiomorphic character state given by the non-webbed posterior feet of the other Didelphidae (opossums), are in fact an adaptation because this derived character state, typical of water opossums, resulted from a selective pressure associated with aquatic locomotion. The webbed posterior feet allow a more efficient performance of this function, or task (if we adopt Sober's formulation). Therefore, if, following Coddington, we adopt the language and the conceptual framework of systematics, we can reformulate both of Futuyma's definitions of adaptation that I already referred to: adaptation as process and adaptation as outcome. In the first case, we can say that adaptation is a process of change in a character state that results from natural or sexual selection. As a consequence of this process, the average state of that character is improved with reference to a specific function. At the same time, if we consider adaptation as the outcome of that process of selection, we will say that an adaptation is a derived, or apomorphic, character state that has become prevalent in a population because of a selective advantage conveyed by that apomorphy in the performance of a biological function. This duality between adaptation as process and adaptation as outcome is important because it can help us define a designing process and its outcome, a designed object. A designing process is a change in the features of an object that is ruled by an increase in the efficiency with which that object fulfills a function. Accordingly, for "designed object," we can provide the following definition: X is a designed object to the extent that its traits are the outcome of a process of change ruled by an increase in the efficiency with which that object fulfills, or fulfilled, a function. Here we are also faced with a strictly historical notion, like the evolutionary notion of adaptation. The notion of designed object refers to the fact that an object has been modified, in the past, by virtue of an increase of its functional efficiency. It does not require that this functional efficiency, or the optimized function, still be present. A broken tinfoil phonograph that remains mute on the shelf of a museum is a designed object even if it no longer performs the function for which it was conceived. Furthermore, what we should also definitely avoid is the identification of design with "convenience" or "suitability." A flat stone used to play stone skipping in a lagoon is not a designed object, although its shape is very convenient for performing that function. By contrast, a stake made by applying a few cuts to a branch or stick is, indeed, a designed object. However, the amount of design implied in the stake could be lower than the amount of design involved in a lithic ax, and the efficiency of that stake could be lower than the skipping efficiency of the flat pebble.
That is to say, the water erosion that flattens the stones, making them better at skipping on the water's surface, is not a design process. Water erosion is not a process ruled by an increase in convenience for stone skipping.
It is for the same reason that the use that makes a good old pair of boots much more comfortable than a new pair is not a designing process either. The accommodation to the user's feet, however convenient, is not ruled by the increase in comfort. Yet the four or five cuts that we make on a stick, so that it can work as a stake, are, in fact, a mild design process, and the same goes for the carving of a lithic ax. In both cases, an agent cuts and strikes while trying to make the modified object more suited to performing a specific function. Briefly, just as the notion of biological function is part of the definiens of adaptation, the notion of function is part of the definiens of design. This is so because the notion of function is more primitive than the notions of adaptation and design. Consequently, to consider the notion of function as a part of the definiens of adaptation and design without falling into circularity, it is necessary to avoid the etiological conception of functional attributions. We must adopt the notion of function as causal role. This is indispensable for defining the notions of adaptation and design.
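Before moving on, the two historical notions delimited in this section can be restated schematically. The shorthand below (the predicate labels and the efficiency function Eff) is mine, offered only as a summary sketch of the definitions given above, not as an addition to them:
Adaptation(C2, y): C2 is derived from C1; Eff(C2, y) > Eff(C1, y); and the change from C1 to C2 resulted from a selective process that rewarded that greater efficiency in performing the biological function y.
Designed(X): there is some function y such that the traits of X are the outcome of a process of change ruled by increases in Eff(X, y), whether or not X still performs y.
Note that Eff, and the function y itself, appear in both definientia; this is the sense in which function and efficiency are more primitive than adaptation and design.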
8.3 The Mistake to Avoid If we assume the etiological conception of functional attributions (Wright 1973), we should accept that saying "the function of x (in the system or process z) is y" presupposes that:
[1] X causes y.
[2] X is in z because it causes y.
Accordingly, saying "the function of the pedals (x), in the movement of the bicycle (z), is to move the rear wheel through the mediation of the chain and the pinions (y)" supposes that:
[1] The movement of the pedals causes the movement of the rear wheel through the mediation of the chain and the pinions.
[2] The pedals are in the bicycle because they cause the movement of the rear wheel.
That is to say, the pedals were placed in the bicycle to do y, and, accordingly, it can be said that, on this conception, function is equivalent to raison d'être. This equivalence restricts the domain of supposedly valid functional attributions. It clearly forbids saying that "the moon has a function in the movement of the tides," that "the sun has a function in the water cycle," or that "tobacco smoke may have a function in the development of lung tumors." However, what is most important here is the equivalence between function and raison d'être. That equivalence would render circular the definition of design that I gave above. If "function" and "raison d'être" are conflated, "to have the function y" becomes the equivalent of "to be designed for y." Wright did not talk about design, but, in fact, his elucidation of the notion of function leads to conflating this latter concept with that of design. Philip Kitcher, another enthusiast of the etiological conception, recognizes this explicitly. For him, "entities have functions when they are designed to do something, and their function is what
they are designed to do" (Kitcher 1998, p. 492), and that, as we will see at the end, is also the case for Allen and Bekoff. The superposition of function, raison d'être, and design that Kitcher (1998, p. 492) assumes has its correlate in the biological translation of the etiological conception of functional attributions proposed by Millikan (1989) and Neander (1991). This conception of biological functions leads one to admit that saying "the function of x (in the individuals of the lineage z) is y" always presupposes these two other assumptions:
[1] X produces or causes y.
[2] X is there, in z, because natural selection rewarded the performing of y in the ancestral forms of z.
Thus, to say "the function of coloration x, in the butterflies of the lineage z, is aposematic" supposes that:
[1] The coloration x causes, in fact, the warning effect implied in the notion of aposematism.1
[2] The butterflies of the lineage z have the coloration x because natural selection rewarded that warning effect (y) in the ancestral forms of z.
Thereby, "having a function y" would be the same as "having been selected for y," or as "being an adaptation to y." Nonetheless, if this equivalence is right, we should assume that the definitions of adaptation that appeal to the concept of function are circular. But not in the sense pointed out by Nanay (2010, p. 415)2: it will be so because the attribution of the predicate "adaptation" presupposes that we are considering a structure fashioned by natural selection. And this is possible if, and only if, that structure was already performing a function prior to the selection process itself. Not just any function, but precisely the one whose better performance will be selectively rewarded. In other words: functional performance is a prerequisite of selection, and the condition of "performing the function y" pre-dates that of "being selected for y."
1 A feature of a living being is aposematic insofar as it warns the possible predators of that living being about its toxic or distasteful character.
2 The circularity to which Nanay alludes has to do with the individuation of functional traits. But I am not referring to that problem, and I do not consider it a difficulty, because it only arises if we accept the etiological conception of functional attributions. If we assume the idea of functions as causal roles, the identification of the functional item should be considered as depending on the causal process that is being analyzed. On the other hand, if we want to discuss the concept of adaptation, the problem of determining which structures, or which configurations, should be considered adaptations (cf. Lewontin 2000, p. 77), or selection targets, does not need to be raised either. In that case, as Coddington pointed out, we are talking about an evolved character state that is individuated on the basis of a comparison, or contrast, with a primitive character state. The selective explanation supposes the identification of a polarity between a primitive and an evolved character state, and it attempts to explain this derivation by appealing to natural selection. Its starting point, pace Lewontin, is not a description of the organism taken as an isolated whole that can be broken down into distinct parts in a more or less arbitrary way. Its starting point is the prior verification of a difference, and derivation, between two character states (cf. Brandon 1990, p. 172).
So, selectional explanations cannot be built without appealing to functional attributions, and "adaptation" cannot be defined without appealing to the concept of function. Nevertheless, if we reject the etiological conception, considering that the concept of function is more primitive than the concept of adaptation, and a part of its definiens, we can avoid that circularity. But, in addition to the circularity to which I am referring, the etiological conception of functional attributions carries another difficulty: it clashes with the ways of validating functional attributions that effectively govern the biological sciences. This is particularly clear in the field of physiology. If we accept the etiological point of view, no physiologist should be able to make or to accept functional attributions without supposing that, at least in principle, she is able to give an evolutionary justification of that attribution. However, it is very important to observe that in physiological research, and in the validation of the results there obtained, the theory of natural selection does not play any significant role. The experiments and observations that are currently made to establish the causal role that a structure, or a reaction, plays in the whole functioning of an organism, in the operation of its subsystems, or in the realization of some of the many processes involved there, have nothing to do with natural selection. What really matters is whether an organic reaction plays, or does not play, an actual causal role in the total functioning of the organism or in the accomplishment of some of the processes that are supposed to be necessary for such functioning. Therefore, if it were concluded that this role does exist, the physiologist would maintain her functional attribution even if there were evidence showing that the structure, or structural configuration, that enables that reaction was not selected for the effect that has been characterized as functional. For that reason, to argue that the ultimate evolutionary justification of the functional attributions made in physiology need only be possible in principle does not add much to the discussion. Even in this weaker version, the etiological point of view imposes a demand on physiology without considering the methodological rules that, tacitly but effectively, guide its progress. Physiology, however, is not the only domain of biology where the exigencies of the etiological conception of function do not fit. As Nunes-Neto et al. (2014) clearly showed, if we accept such demands, much of the functional language that today permeates ecology should be avoided. From the etiological point of view, to say that a species plays an important pollination function in a given ecosystem would be just a bad metaphor; but this type of functional imputation is ubiquitous in ecology, and it has a distinct and precise meaning (Gayon 2010, p. 134). Such imputations refer to the causal role that a component of the ecosystem performs in one of the many processes involved there, and it is clear that no selectional justification of this attribution can be given. Even if we accepted the idea of species selection, we could never say that species are selected for performing those ecological functions or for making ecosystems more efficient. Natural selection does not work for the good of the ecosystem, and the selection of species, if we want to consider this concept, has nothing to do with the optimization of ecosystems.
It is also interesting to remark that, given the false synonymy between adaptation and function, to say that "an exaptation is a biological structure that fulfills a function for which it was not selected" would be sheer nonsense; and it is very easy to conceive a situation that shows how contrived that restriction is. Let us imagine a small island where, among other plants, we find a species of fern that produces an intense stench resulting from the metabolization of a toxic substance that contaminates the soil. Besides, let us also imagine that, suddenly, a plague of invading rats arrives at the island and decimates almost all its vegetation but leaves the population of this fern unharmed because of that odor. In such a case, it would seem fair to say that, from the very beginning of the rats' raid, and prior to any generational succession that would allow us to speak of a selective pressure, the stench played an important defensive function for the ferns that produced it. Moreover, if the stench had not played that defensive function, it is possible that those plants would have become extinct before they could give rise to any generational succession that would allow some new selective pressure to act on their population. Furthermore, if, once that protective role is played, natural selection can then reward, in successive generations, any hereditary modification that makes it more efficient, that selective reward will only be possible because the defensive function to be optimized is already being performed. However, according to the etiological conception of functional attributions, all this way of reasoning would be wrong, because the ability to produce the odor that protected the plants from the very onset of the rodent invasion was not selected for that protective effect. Accidental advantages, it is said, are not functions. Under the etiological point of view, in cases like that, there is just exaptation, and no adaptation; so there is no function to be attributed. However, if we accept Futuyma's definition of "exaptation," the only thing to say is that exaptation is not adaptation. "Exaptation," as Futuyma (2005, p. 547) defined it, is "the evolution of a function of a gene, tissue, or structure other than the one it was originally adapted for," something different and much easier to formulate and understand than what Elisabeth Vrba and Stephen Jay Gould (1982, p. 5)3 in fact said. Vrba and Gould, entangled in the etiological conception of functional imputations, ended up arguing that there is only legitimate functional attribution when there is an explanation by natural selection that endorses such attribution. If not, there are only "effects," some of which, although not selected for, could be beneficial for the living beings that generate them.
3 Gould and Vrba coined the term "exaptation," but not the concept to which it refers. In 1875, Anton Dohrn had already considered and discussed the phenomenon that Gould and Vrba called "exaptation." Dohrn (1994, p. 67) named it "succession of functions" and made clear that, initially, the new functions that a pre-existing structure could come to play arose as a mere byproduct of the modification that natural selection operated on such a structure by virtue of its previous functional performances. Structure S changes as a consequence of selective pressures linked to the performance of a biological function x, and, as a byproduct of such change, S begins also to perform another biological function y. Then, eventually, y can become the target of another selective pressure that produces other alterations of S, and it is possible that y comes to be the main, or in the end the only, function of S. Dohrn's way of reasoning supposes that function y appeared before the pressure that rewarded it.
In that case, according to the perspective of Gould and Vrba, we could talk of "exaptations" but never of "adaptations" or "functions." However, the notion of exaptation is easier to understand if we admit a definition of it closer to that given by Futuyma, and this alternative presupposes a conception of functional imputations far removed from the etiological point of view. It is also remarkable that the etiological conception of functional attributions does not fit very well with Niko Tinbergen's four questions. Tinbergen (1963, p. 16) clearly stated that, regarding every behavior, it was possible and imperative to ask four questions: (1) a question about the causes that triggered the occurrence of that behavior; (2) a question concerning its survival value; (3) a question related to its evolutionary history; and (4) a question concerning its ontogeny. However, if we accept the etiological point of view, we must conclude that the second question would be absorbed by the third (Godfrey-Smith 1998, p. 462). As Godfrey-Smith (1998, p. 462) correctly pointed out, "Tinbergen … uses the term 'survival value' rather than function in the official formulation of the question [2]. But generally he uses these two expressions interchangeably." Therefore, if we accept that any reference to biological function or survival value implies per se an allusion to natural selection, we should also conclude that the second and the third are just the same question. However, it is clear that this is not the case. That is what John Krebs and Nicholas Davies (1997, p. 4) show when, considering the four questions enunciated by Tinbergen, they point out the four answers that should be given to the question: "Why does the starling (Sturnus vulgaris) forage in the way it does?" If we think about causation, the answer will regard "the proximate factors which caused the bird to select a foraging site or prey type" (Krebs and Davies 1997, p. 4). By contrast, if we think in terms of function, the answer will be about "how patch choice and prey choice contribute to the survival of the bird and its offspring" (Krebs and Davies 1997, p. 4). Meanwhile, if we think about ontogeny, "this answer would be concerned with the role of genetic predispositions and learning in an individual's decision making" (Krebs and Davies 1997, p. 4). Last, but certainly not least, would come the evolutionary question concerning "how starling behavior has evolved from its ancestors" (Krebs and Davies 1997, p. 4). In this case, the "answer might include an investigation of how the starling family has radiated to fill particular ecological niches and the influence of competition from other animals in the evolution of starling behavior and morphology (e.g. bill size; body size)" (Krebs and Davies 1997, p. 4). Thus, while this last answer refers to ultimate causation and implies historical knowledge concerning the past of the lineage, the second answer, alluding to proximate factors, refers to the actual conditions of existence under which the starling is currently struggling for life. For answering the second question, it is not necessary to know what the ancestor of the starling was like. It is obvious that the question concerning the biological function of the behavior (the question about its survival value) is posed so as to be answered without addressing the evolutionary question. Contrary to Allen and Bekoff (1998, p. 575), it is interesting to see the clarity that Tinbergen himself had concerning this autonomy and anteriority of the functional question in relation to the evolutionary question. He emphasized this point when he wrote that "Even if the present-day animals were
created the way they are now, the fact that they manage to survive would pose the problem of how they do this” (Tinbergen 1963, p. 424). In addition, besides recognizing that the functional question can and should be considered independently of the evolutionary question, we must also assume that the evolutionary question cannot be considered without having raised the functional question. That was also clearly pointed out by Tinbergen (1963, p. 424): The part played by natural selection in evolution cannot be assessed without proper study of survival value. If we assume that differential mortality in a population is due to natural selection discriminating against the less well-equipped (the less “fit”) forms, we have to know how to judge fitness, and that only can be done through studies of survival values.4
In other words, for the sake of articulating and justifying a selectional explanation, it is necessary to already know the ecological conditions under which a morphological or behavioral trait is selected. If we affirm that there was a selective pressure favorable to a particular trait, we first have to know what functional performance was allowed or optimized by that trait. That is to say, the survival value, the biological function, and the functional advantage of the trait have to be previously known so that, based on them and on other data, the selectional explanation may then be articulated. However, the main problem of the etiological conception of functional imputations is that it ignores a critical point: selectional explanations suppose functional attributions. In fact, in order to understand what selective pressures are, and what a selective explanation is, we need the notion of function. In its simplest form, these explanations have this structure:
► Explanans
• [A] In population P, the states x1 and x2 of the character x are given.
• [B] There is a selective pressure favorable to x1 acting in P.
► Explanandum
In P, x1 is more frequent than x2.
Nevertheless, “[B]” simply means that “x1 is more efficient than x2 in the performance of a biological function y.” For a selective pressure favorable to a particular biological configuration could come to exist, this configuration must have a specific functional performance that is higher than the functional performance of an alternative configuration. This is a point that the so-called modern history theory of function, that was proposed by Godfrey-Smith (1998) does not come to contemplate either. This version of the etiological conception refers to the selective context “which explains the recent maintenance of a trait”; but this reference to a selective context also presupposes an allusion to biological functions that are performed with greater or lesser efficiency. Otherwise, how could a selective pressure Here, it is opportune to remember what Godfrey-Smith (1998, p. 462) said about the equivalence between the terms “survival value” and “function” in Tinbergen writings. I quoted it a little earlier. Tinbergen use the term “survival value” in a sense of ecological utility or ecological function, not in the sense of “survival rates.” Using a distinction that I will remember later, for Tinbergen the “survival value” is the ecological fitness and not the w of population genetics. 4
A selective pressure supposes variants whose efficiency in a given functional performance is unequal: one must be functionally more efficient than the alternative. Thus, and again, to understand how natural selection works, we need a notion of function that precedes the notion of adaptation, and this is also a reason for rejecting those pluralist positions that consider the etiological conception as valid in some contexts but inapplicable or unnecessary in others.5 The etiological conception is never valid, because it aims to base functional attributions on selectional explanations that, in fact, necessarily presuppose such attributions. Even when the history of a biological structure is reconstructed by identifying the selective pressures that shaped it throughout its evolution, this reconstruction is made assuming functional attributions that do not depend on the theory of natural selection. That is to say, this independence, and priority, of the functional analyses is not only present in physiology or functional anatomy but also in those evolutionary inferences that, very accurately, Daniel Dennett (1996, pp. 212–214) considered as a special kind of reverse engineering. For that reason, we need something that only the understanding of functions as mere causal roles can provide. This other understanding of the concept of function not only avoids all the difficulties brought by the etiological conception, but it also gives the basis for elucidating the notions of efficacy, adaptation, and design.
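The central claim of this section can be compressed into a schema. The rendering below is my own shorthand (Eff for efficiency in performing a function), added only as a recapitulation of the argument just given:
There is a selective pressure favorable to x1 over x2 in P only if there is some biological function y such that Eff(x1, y) > Eff(x2, y) under the ecological conditions obtaining in P.
Hence no selectional explanation of the form sketched above can be articulated unless the functional attribution "y is a biological function of x" is already available, and available independently of that very explanation.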
8.4 Functions as Causal Roles According to the causal role conception of functional imputations, to say that "the function of x – in the process z – is y" only requires supposing that:
[1] X produces or causes y.
[2] Y has a causal role in the occurrence of z.
Therefore, to say that the function of the sun (x) in the water cycle (z) is to evaporate water (y) only requires accepting that:
[1] The heat produced by x causes the evaporation of water (y).
[2] The evaporation of water (y) has a causal role in the water cycle (z).
This is, of course, an understanding of functional attributions that is very close to that proposed by Cummins (1975), but it is wider and still more tolerant. This is so because it does not require reference to anything that should be considered a system of which the functional item is a component. That may be the case, but it need not be so. If having a function is to play a causal role in a process, every causal process, even one that has occurred only once, without being
5 We can find this pluralist point of view in Amundson and Lauder (1998), Godfrey-Smith (1999), Wouters (2003), and Nunes-Neto and El-Hani (2009), but there are many other examples.
a recurrent cycle, can be functionally analyzed. That is to say, given any causal process, it is possible to identify the elements and sub-processes that are involved in it, attributing to them a causal role, a function, in the occurrence, or completion, of the main or wider process in which they participate. This way of understanding functional attributions assumes a clear and sharp separation between the notions of function and raison d'être: when it is said that the moon has a function in the movement of the tides, it is not being said that the raison d'être of the moon is to contribute to the occurrence of this phenomenon. The moon plays a role in the movement of the tides; it has a causal role in that process, but nothing in the moon was modified in order to better fulfill that function, nor is the moon there for performing that function. That is why we do not consider the moon a designed object, and why we do not consider the moon–water system as being so. The idea of raison d'être is only applicable when referring to components of designed objects: some features of these objects, precisely those that allow characterizing them as designed, have a raison d'être. This is so because those features resulted from a process directed by the increase, or optimization, of a functional performance of the object that possesses them. Moreover, if it is clearly true that this way of understanding functional imputations involves the presupposition that every causal process can be functionally analyzed, it is also true that this promiscuity is limited by the fact that functional attributions are ternary predicates. A functional item x has a function y only within the framework of a specific causal process z. Functional attributions are not binary predicates such as "x is north of y": they are ternary predicates like "y is the point equidistant from x and z." If this fact is not neglected, many of the alleged difficulties of the conception of functions as causal roles vanish immediately. I return to a classic example: the cardiac movement (M) pumps the blood (P) and makes noise (N). That is to say, M causes both P and N. However, in general, we tend to say that P is a function of M but that N is not. In other words, we establish a discrimination that tends to equate function and raison d'être: the evolutionary history of the heart was ruled by that functional performance. However, if we keep in mind the triadic character of functional imputations, we can see that this seems so because, tacitly, we presuppose that the process of reference for our functional attributions is blood circulation, or the organic functioning as a whole. For that reason, if we thought that the process of reference of our functional imputation were the operation of a polygraph, we would have to admit that the noise of the heart does have a function there – in the functioning of that device. In biology, of course, the process of reference of all functional imputations is always the life cycle or the sub-processes within it. It is for this reason, and not through the mediation of an evolutionary explanation, that we tend to think that P is a function of M but N is not. It is this same process of reference, on the other hand, that allows us to discriminate between the good and the bad functioning of an organic structure.
When we talk about the function of a biological structure, of an organic reaction, or of a behavior, we tend to assume that we are referring to the contribution that these items can make to the realization and completion of the life cycle of the living beings in which such elements are present. This provides us with an insight into the notion
of biological function: the biological function of a structure, organic reaction, or behavior (or of any other morphological, physiological, or ethological peculiarity of an organism) is just the causal role played by those items in the realization and completion of the life cycle of the living beings in which they occur. This is tantamount to what we can find in the old and noble dictionary of Abercrombie, Hickman, and Johnson (1957, p. 93): "The function of a part of an organism is the way in which that part helps maintain the organism to which it belongs alive and able to reproduce." Reproduction is part of the life cycle.6 It is clear that the concept of biological function that I am proposing here is very close to the organizational account of functions proposed by Álvaro Moreno and Matteo Mossio (2015, p. 73). But I believe there is a difference between them: the organizational concept of function seems to be more restrictive than the biological concept of function that I am proposing here. The point lies in the requirement that the existence of the functional item presuppose the functioning of the organism to whose maintenance that item contributes. If this dependence does not occur, the organizational concept of functions leads to rejecting the functional attribution. This functional attribution would be correct in the case of a heart, or of any other organ, and maybe also in the case of the rodent-repelling stench produced by the plants of the example proposed in the preceding section. But that would not be the case with a pacemaker or with any other prosthetic device whose operation may contribute to preserving the existence of a living being. Nor would it be the case of the mollusk shell carried by hermit crabs of the superfamily Paguroidea. If we accept the organizational conception, that shell, whose existence precedes the existence of the crab and may last after its death, would not be a genuine functional item. But, if we accept the simpler biological conception of function that I am trying to elucidate, we can say that this shell plays an important protective function in the life cycle of the hermit crab. However, beyond this possible difference between both ways of understanding the concept of function, what really interests me is to show that the concept of biological function is more primitive than the concept of adaptation and that we do not need the theory of natural selection to define it. The only thing we need to get that definition is to specify the general concept of function, considering that such a general concept is equivalent to the notion of causal role.
6 The notion of biological function, as it is proposed here, may look like the notion of biological role proposed by Bock and Wahlert (1998), especially in this shorter and more informal formulation: "The biological role of a faculty, and hence of the feature, may be defined as the action or the use of the faculty by the organism in the course of its life history" (Bock and Wahlert 1998, p. 131). They used the expression "biological role" because they used the term "function" for the way in which an organic structure acts or works, without any reference to its causal contribution to the life cycle of the whole living being (Bock and Wahlert 1998, p. 131). But, independently of that terminological difference, which in itself is not too important, I consider it relevant to point out that I would never say that biological functions "cannot be determined by observations made in the laboratory or under other artificial conditions" (Bock and Wahlert 1998, p. 132). In physiology, for example, the laboratory is the main place for the individuation of biological functions.
Thus, by accepting the conception of functions as causal roles, the concept of biological function can be regarded as a specification of the general notion of function, a specification in which the life cycle is taken as the reference process. Therefore, when we say that "the biological function of x, in the life cycle z, is y," we are assuming that:
[1] X is part of z.7
[2] X produces or facilitates y.
[3] Y has a causal role in the realization of z.
Hence, if we consider that the ability to endure and reproduce in a given environment defines the fitness of an individual living being, we can also accept the definition of function proposed by Futuyma (2005, p. 548): "the way in which a character contributes to the fitness of an organism." It is also remarkable that this definition of biological function is equivalent not only to the definition I am proposing here but also to the definition given by Abercrombie, Hickman, and Johnson. However, the most important point is that these three definitions of biological function can be considered as presupposed in the definitions of adaptation that I provided in the first section and in the outline of selective explanations that I introduced in the current section. Yet, for that outline and those definitions of adaptation to become completely clear, we need an additional elucidation. We have to define the concept of biological efficiency presupposed there. That elucidation should be a specification of the general notion of efficiency that remains presupposed in the definitions of designed object and designing process that I provided in the first section. However, before that, I want to highlight something that needs to remain very clear. It is obvious that I am rejecting the historical character of functional imputations that is affirmed by the etiological conception, but it is also very important to stress that the definitions of adaptation that I assumed do recognize the historical character of this latter concept. Thus, although with regard to the concept of function I differ from the historical point of view of Gould and Vrba (1982, p. 5), at the same time I totally agree with them, as with Sober, about the historical character of the concept of adaptation. What I argue, differently from Gould and Vrba, is that the concept of adaptation, while itself of a historical nature, supposes an element of a non-historical nature. This non-historical element is the concept of function. As far as this goes, my position is close to that of Michael Ghiselin. According to him, although function "is not an historical concept," adaptation is, indeed, "an historical concept" (Ghiselin 1997, p. 307). My difference with Ghiselin is that he does not recognize the legitimately teleological character of the Darwinian notion of adaptation and of natural selection explanations (cf. Ghiselin 1997, p. 294).
7 The point [1] just means that the operation of x is not independent of the occurrence of z. This clause, I think, is the only, but very important, concession that we should make to the organizational account of function proposed by Moreno and Mossio (2015, p. 69).
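Since three schemas for "the function of x (in z) is y" are now on the table, it may help to set them side by side. The compressed renderings below are my own summary of the formulations already given, not additional theses:
Etiological reading: x causes y, and x is in z because x causes y (in the biological version: because natural selection rewarded the performing of y in the ancestors of z).
Causal-role reading: x causes y, and y has a causal role in the occurrence of z.
Biological specification of the causal-role reading: x is part of the life cycle z, x produces or facilitates y, and y has a causal role in the realization and completion of z.
Only the first reading makes the functional attribution depend on selective history; the second and third leave it answerable to the causal analysis of z alone, which is what allows function to figure, without circularity, in the definientia of fitness, adaptation, and design.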
8.5 Effectiveness and Fitness I argued at the beginning that the notion of efficiency antecedes the notion of design: this is the reason why it can be used to define the notion of designed object. The case of the notion of function is different: it precedes the notion of design and, fortunately for my strategy of analysis, it precedes the notion of efficiency too. We first need the notion of function to understand the notion of efficiency, and then we need both notions, function and efficiency, to understand the notion of design. Although, in choosing pebbles to play stone skipping, we might remark that flattened pebbles are more efficient in fulfilling that function than more rounded ones, we will not say that these pebbles are designed objects. We will not say so because the profiles of those pebbles were modified by physical agents that bore no relationship to the fulfillment of any function. The easiest way to see that the notion of efficiency does not suppose the notion of design is to recall the equivalence between efficiency and effectiveness. Instead of saying that flattened pebbles are more efficient for stone skipping than rounded pebbles, we can simply say that they are more effective than rounded ones in fulfilling that function, and that would not imply any change of meaning or any loss of information as far as the capability of the flattened pebbles to skip is concerned. What happens with flat stones and their propensity to skip on water is similar to what happens when, looking for a material that is a good thermal conductor, we say that, for the fulfillment of that function, iron is more efficient than lead. What we are saying about these materials is simply that iron is a more efficient, a more effective, thermal conductor than lead. Iron fulfills this function, that causal role, faster and with lower heat loss than lead. At the same time, if we were looking for a malleable material capable of transmitting heat more slowly than iron, we would say that lead is a less efficient conductor, less effective than iron, and it is precisely for this reason that we would prefer lead as an insulating material. We would say all that without implying that those chemical elements were designed to be more or less efficient conductors of heat. The coefficient of thermal conductivity of metals measures their efficiency in fulfilling the function, the causal role, that they can fulfill as transmitters of heat; however, this does not imply that they are materials designed for that performance. The artifacts that we can build with those materials for fulfilling the function of heat transmission or heat insulation are, indeed, designed objects, and this is the case of the artificial alloys produced to achieve a predetermined coefficient of conductivity. The reference to the coefficient of thermal conductivity also makes clear the intrinsically comparative nature of the notions of efficiency and effectiveness. Judging something as being efficient for a given functional performance implies a tacit recognition that there are other alternatives less effective for performing that function. Alternatively, it can also imply the acknowledgment that this element could have a different configuration than it has and that, in that case, it would be less efficient than it in fact is. However obvious this may seem, I think it is still important to point out the difference between the eminently comparative
character of efficiency judgments and the purely analytical character of functional attributions. By attributing a function to an element within a system or process, we only allude to that element and point out its causal role within that system or process. That is why I said that it is an analytical operation: classically, one speaks of "functional analysis" (Hempel 1970; Cummins 1975). To attribute functions is not to draw tacit or implicit comparisons of efficiency or effectiveness. It is important to highlight, in addition, that the notion of effectiveness, and its derivatives, is not value-laden. We can talk about the greater or lesser effectiveness of a poison, just as about the greater or lesser thermal conductivity of a material, without our evaluation having to be biased by our preferences or goals. Both a pest exterminator concerned with the rats that she wants to kill and a doctor concerned with a substance that her patient (maybe the pest exterminator) ingested can speak of the greater or lesser effectiveness of a poison. Likewise, an engineer can talk about the greater thermal conductivity of a material both when they are interested in increasing that conductivity and when they are interested in decreasing it. In all these cases, the comparisons of effectiveness, or efficiency, are not relative to the subject who compares but to the process under analysis and to the function, or causal role, that is under consideration. The same goes for the idea of malfunction. In some cases, to say that a functional item does not work well means that it does not have the efficiency that other things, similar to that item, have in performing an analogous causal role in a comparable causal process. In other cases, it means that the functional item has lost the efficiency that it previously had in the functional performance, or the causal role, that we are considering. A change in the behavior of a pollinating species can lead us to consider that it is no longer operating well in the performance of that function. New food preferences might lead these pollinators to visit less assiduously the flowers that they used to frequent, as often happens with hummingbirds that come to prefer the sugar water that some people leave in plastic feeders hanging in their gardens. That is to say, the notion of effectiveness does not presuppose any allusion to the preferences or plans of the observer, but it does presuppose reference to a functional performance. It can be said, therefore, that the notion of function is simpler and more primitive than that of effectiveness, and that both are simpler and more primitive than design: that is why they can be part of the latter's definiens. It is here, then, that we can glimpse the scale of increasing epistemological complexity that goes from mere causal attributions to functional attributions, from there to comparisons, or estimations, of efficiency (or effectiveness), arriving, at last and only in some cases, at attributions of design, which are of an intrinsically historical nature. This is also the case with the notions of biological function, fitness, and adaptation. As I pointed out from the beginning, they maintain a relation among themselves that is isomorphic with the relation maintained by the general notions of function, effectiveness, and design. This is so because, as I have already argued, the former are just specifications of the latter.
We saw this clearly in the case of functions in general and biological functions in particular, and now we can see it in the case of fitness. Like thermal conductivity, fitness is a dispositional predicate. As Popper (1992, p. 445) once said of all dispositional predicates, a concept like fitness, or like thermal conductivity, "describes dispositions to behave in a certain regular or law-like manner."
It would be better to say that such predicates describe dispositions to behave in a certain regular way under specific conditions, and it is also very important to remark that, in many cases, this disposition can occur in different degrees. The same happens with thermal conductivity. To say that a certain material is a thermal conductor means that, under a more or less restrictive set of conditions, it will transmit heat. But, as we pointed out, in different materials this transmission can occur with greater or lesser speed and with greater or lesser heat loss. Thermal conductivity is the possible effectiveness of a material in the performance of a causal role; and, in the same way, fitness is the efficiency with which, under certain circumstances, a character may contribute to the performance of a biological function in a specific environment (cf. Mills and Beatty 2006, p. 9). But it is undeniable that fitness is a notion much broader and definitely more heterogeneous than thermal conductivity: what makes one configuration of the heart more efficient than another is quite different from what makes one variant of a mimetic color more efficient than another. Nevertheless, for every biological function that could be considered, we will find pertinent physiological and ecological remarks that will allow us to determine, under the conditions and requirements being considered in each analyzed case, what counts as a more or less efficient performance of that causal role in the realization and completion of the life cycle.8 Fitness, therefore, is a causal notion, and not a measure such as "differential reproductive success" – the w of population genetics. This last magnitude, causally null or inert, is only a common measure, but not part of the definiens, of that genuine causal notion that is ecological fitness (Ginnobili 2010, p. 45). The fitness that I am talking about is the efficiency with which a biological function may be performed under certain conditions in a specific ecological context: an efficiency that can also be predicated of the individuals that perform that function. In that case, ecological fitness can be defined as the disposition to survive and reproduce (again, under certain conditions and in a given environment) that an inheritable characteristic confers on a living being (Mills and Beatty 2006, p. 9; Rosenberg 2006, p. 175). But it is obvious that this capacity of individuals depends on the efficiency with which they are able to perform their biological functions. Ecological fitness, which Richard Burian (2005, p. 62) has called "relative engineering fitness," must always be considered in reference to a specific biological function. If we attribute it to the individual as a whole, without referring to the fulfillment of the functional demands that are at stake, the causal content of the notion weakens, and it begins to be confused with the mere w. When Sober (1984, p. 88) concludes that the notion of fitness is "causally inert," he is referring to the "overall fitness" of an individual organism and not to those features of an organism that, taken together, determine this overall fitness. The greater ability to run (which allows gazelle x to escape from the lion with more promptness and less energy waste than gazelle y) is, as Sober recognizes, a genuine disposition: a property that allows us to causally explain the higher success that x has in the avoidance of predators.
8 Bock and Wahlert (1998, p. 150) refer to a notion of biological efficiency based on energetic economy. If this notion were really operational, it would allow us to think that fitness is not as heterogeneous a notion as I have said. But I do not know whether measuring this energetic efficiency could be anything like measuring thermal conductivity.
138
G. Caponi
allows to causally explaining the higher success that x has in avoidance of predators. That property, or capacity, can be measured (though not defined) by virtue of the relative reproductive success that it can confer and that can be done with all the other characteristics of the gazelle that determine its relative reproductive success. Nevertheless, if the values so obtained are averaged, we obtain a magnitude that is null in terms of causal explanation. This magnitude, which is the effect or overall outcome of different dispositions resulting from particular properties that are causally efficacious, can be regarded as the overall fitness of x. But it is very important to see that this overall fitness is a mere global effect that finds its true cause in the several properties that make a living being able to perform with greater or lesser effectiveness the different biological functions that he must fulfill in order to survive (Sober 1984, p. 88–9). The overall fitness of x is just the w of x. It is not its ecological fitness, and if this difference is not properly seized, we might conclude that the expression “survival of the fittest” is a mere pleonasm. However, if fitness is defined as ecological efficiency, whose specific cause must be identified at each concrete situation, then the differential reproductive success, the w, can be considered as a measure and indicator of that efficiency and not as the definition of the ecological fitness. In some cases, that ecological fitness was called “adaptedness” (Brandon 1990, p. 15), which means the condition of being adapted. Nevertheless, although in ordinary language that could sound innocuous, if we are discussing the conceptual configuration of evolutionary biology, that terminological option can be harmful. The notion of fitness, as I have been saying, should be assumed to define adaptation as effectiveness must be assumed to define design. That is why, by substituting fitness by “adaptedness,” we produce an illusion of circularity that is as unnecessary as deceiving. If “adaptedness” is the condition of being an adaptation, it cannot integrate the definiens of this last notion. Fortunately, if we do not reduce ecological fitness to population genetics’ w, we can easily give out “adaptedness” and easily avoid the appearance of circularity that this expression can generate when used in the definition of adaptation. If we keep the notion of ecological fitness, we shall be able to take the last step in our analysis of the Darwinian naturalization of teleology: we will be able to establish the equivalence between not only “adaptation” and “naturally designed object” but also between “natural selection” and “natural design.”
8.6 Biological Design: Natural Selection

Far from being the exclusive patrimony of those few vestiges of natural theology that still haunt some dark aisles, the notion of biological design is not unknown in the current discourse of the biological sciences.9 It can be delimited by saying that an organic structure x can be considered as biologically designed to perform a function y if and only if the following conditions are met: [1] y is a biological function of x; [2] x is the outcome of a process of changes caused by natural selection as a consequence of x being fitter than its actual alternatives in the accomplishment of y.

9 Paradigmatic and classical cases of this normal use of the concept of design in biology can be found in the book by Wainwright, Biggs, Currey, and Gosline (1980) and in all the volumes organized by Weibel, Taylor, and Bolis (1998).

It is undeniable that this definition of the notion of a biologically designed structure is not very different from the definition of a naturally designed object given by Colin Allen and Mark Bekoff (1998, p. 578). However, behind the terminological similarity there is a crucial conceptual difference. My definition avoids the redundancy in which they, like Kitcher, incur because of their acceptance of the etiological conception of functional attributions (cf. Allen and Bekoff 1998, p. 574). The admission of this thesis, which Allen and Bekoff (1998, p. 574) call "the standard line," implies that [1] and [2] mean the same thing. Thus, as Kitcher assumed, the predicate "being a biologically designed object" is equivalent to the predicate "having a biological function." But this is not what we are saying, and it would bring back all the difficulties of the etiological conception that were already pointed out. In fact, all the preceding discussion of the concepts of function and fitness aimed precisely at avoiding this circularity, showing that a correct delimitation of these more primitive notions allows us to arrive at the definition of the designed object without incurring it.

It is undeniable, however, that my definition of a biologically designed object does involve a kind of redundancy: it is just a new definition of adaptation. This, though, is not a problem; as I have already said, the concept of a biologically designed object and the concept of adaptation are really equivalent. This redundancy is my thesis. Without rejecting the definitions of adaptation given in the first section, we can formulate another one that shows such equivalence very clearly. It can be said, in effect, that a state of character xn+1 (derived from a primitive state xn) is an adaptation to perform a function y if and only if the following conditions are met: [1] y is a biological function of x; [2] xn+1 is the outcome of a process of changes produced by natural selection because xn+1 was fitter (more efficient or effective) in the realization of y than the alternative variants of x.

Furthermore, besides conflating the concepts of adaptation and of the biologically designed object, we can also say, as Francisco Ayala (2009, p. 7) did, that natural selection is nothing but a natural process of designing. Concerning this point, we agree with Kitcher: "design can stem from the intentions of a cognitive agent or from the operation of selection," although we have rejected his etiological conception of functional attributions. Natural selection does not only modify the features of living beings, making them more efficient, more apt, or fitter in the performance of different biological functions; natural selection is also ruled by those differences in fitness. Stubbornly and mechanically, natural selection rewards and retains each peculiarity that might increase the efficiency with which a biological function is performed. It is not an intentional process of designing; but it is not alien,
or contingently linked, to functional efficiency. Even more: efficiency is the only thing that matters in it. Maybe we can compare natural selection to a blind watchmaker, but it must also be taken into account that, despite its blindness, natural selection is the strictest and most rigorous of all watchmakers. The relationship between natural selection and fitness is very different from the relationship that exists between water erosion and the convenience that certain stones have for playing stone skipping. Water erosion does indeed produce flat pebbles that are very convenient for the role they can have in that game. Nevertheless, water erosion is not ruled by that convenience: water erosion has nothing to do with stone skipping. Natural selection, on the contrary, is strictly and punctiliously ruled by differences in biological fitness. Moreover, whatever the causal roles that these stones may have in different natural processes, there is no natural process that could come to modify them as a consequence of a greater efficiency in the performance of such causal roles. This goes for all natural processes except for natural selection and for the making of objects by certain animals like us. Although an increase in humidity makes air more efficient as an electrical conductor, facilitating the occurrence of lightning, that increase in humidity is not directed by the increase in this functional efficiency. The causal role that air can have as an electrical conductor does not determine or facilitate at all its own humidification. By contrast, a change in coloration caused by natural selection, because of the optimization of an aposematic function, is tightly and clearly causally linked to this increase in functional efficiency. The increase in the conductive capacity of air is just a secondary effect of a process that has nothing to do with it, and the same goes for the increase in skipping efficiency that some pebbles can undergo as a result of water erosion. In both cases, we are considering the causal role that these objects play in two causal processes, and we verify the incidence of changes that produce an increase in the efficiency with which those objects perform such functions (or causal roles). It is clear, however, that those increases in effectiveness are definitively accidental. There is no design in either of those cases. If there is design in natural selection, it is because the increase in functional efficiency is not only an effect of the process, but also has a causal role in it. Setting aside possible constraints, which are always present in any design process, natural selection always leans towards efficiency. A cryptic coloration arises by natural selection insofar as small differences that help to hide individuals of the lineage more efficiently are rewarded with differential reproductive success. Therefore, the coloration is not only advantageous; it also owes its existence to this advantage. In other words, in that coloration there is not just aptation, which is the condition of being aptus or fit; there is also ad-aptation: the aptness, the fitness, arises because of that same aptness or fitness (Gould and Vrba 1982, p. 4). This does not happen in mere ex-aptation. In exaptations, there is biological utility but not ad-aptation (Gould and Vrba 1982, p. 5); and it is there that the naturalization of teleology effected by Darwinism resides.
When we talk about evolutionary adaptation, we are talking about a functional adequacy, a special kind of aptation, which is not configured by a series, or convergence, of fortunate coincidences, but by virtue of
that same functionality. Far away and long ago, George Gaylord Simpson (1947, p. 489) saw this with full, and never equaled, clarity: "Adaptation does exist and so does purpose in nature, if we define 'purpose' as the opposite of randomness, as a causal and not merely accidental relationship between structure and function, without necessarily invoking a conscious purposeful agency."

Acknowledgments I want to thank my colleague Jerzy Brzozowski for his invaluable help in the writing of this work.
References

Abercrombie, M., Hickman, C., & Johnson, M. (1957). Dictionary of biology. London: Penguin.
Allen, C., & Bekoff, M. (1998). Biological function, adaptation, and natural design. In C. Allen, M. Bekoff, & G. Lauder (Eds.), Nature's purposes (pp. 571–588). Cambridge, MA: MIT Press.
Amundson, R., & Lauder, G. (1998). Function without purpose: The uses of causal role functions in evolutionary biology. In C. Allen, M. Bekoff, & G. Lauder (Eds.), Nature's purposes (pp. 335–370). Cambridge, MA: MIT Press.
Ayala, F. (2004). In William Paley's shadow: Darwin's explanation of design. Ludus Vitalis, 12, 50–66.
Ayala, F. (2009). En el Centenario de Darwin. Ludus Vitalis, 17, 1–16.
Bock, W., & Wahlert, G. (1998). Adaptation and the form-function complex. In C. Allen, M. Bekoff, & G. Lauder (Eds.), Nature's purposes (pp. 117–168). Cambridge, MA: MIT Press.
Brandon, R. (1990). Adaptation and environment. Princeton: Princeton University Press.
Burian, R. (2005). The epistemology of development, evolution and genetics. Cambridge: Cambridge University Press.
Coddington, J. (1994). Homology and convergence in studies of adaptation. In P. Eggleton & R. Vane-Wright (Eds.), Phylogenetics and ecology (pp. 53–78). London: Linnean Society.
Cummins, R. (1975). Functional analysis. Journal of Philosophy, 72, 741–765.
Cummins, R. (2010). Neo-teleology. In A. Rosenberg & R. Arp (Eds.), Philosophy of biology (pp. 164–174). Malden: Wiley-Blackwell.
Dennett, D. (1996). Darwin's dangerous idea. London: Penguin.
Dohrn, A. (1994). The origin of vertebrates and the principle of succession of functions [1875]. History and Philosophy of the Life Sciences, 16, 3–96.
Futuyma, D. (2005). Evolution. Sunderland: Sinauer.
Gayon, J. (2010). Raisonnement Fonctionnel et Niveaux d'Organisation en Biologie. In J. Gayon & A. Ricqlès (Eds.), Les Fonctions: des Organismes aux Artefacts (pp. 125–138). Paris: Presses Universitaires de France.
Ghiselin, M. (1997). Metaphysics and the origin of species. Albany: State University of New York.
Ginnobili, S. (2010). La Teoría de la Selección Natural Darwiniana. Theoria, 67, 37–58.
Godfrey-Smith, P. (1998). A modern history theory of function. In C. Allen, M. Bekoff, & G. Lauder (Eds.), Nature's purposes (pp. 453–478). Cambridge, MA: MIT Press.
Godfrey-Smith, P. (1999). Functions: Consensus without unity. In D. Buller (Ed.), Function, selection, and design (pp. 185–198). Albany: State University of New York.
Gould, S., & Vrba, E. (1982). Exaptation: A missing term in the science of form. Paleobiology, 8, 4–15.
Griffiths, P. (1999). Adaptation and adaptationism. In R. Wilson & F. Keil (Eds.), MIT encyclopedia of the cognitive sciences (pp. 3–4). Cambridge, MA: MIT Press.
Hempel, C. (1970). The logic of functional analysis. In C. Hempel (Ed.), Aspects of scientific explanation (pp. 297–330). New York: Free Press.
Kitcher, P. (1998). Function and design. In C. Allen, M. Bekoff, & G. Lauder (Eds.), Nature's purposes (pp. 479–504). Cambridge, MA: MIT Press.
Krebs, J., & Davies, N. (1997). The evolution of behavioral ecology. In J. Krebs & N. Davies (Eds.), Behavioral ecology (pp. 3–14). Malden: Blackwell.
Lennox, J. (1993). Darwin was a teleologist. Biology and Philosophy, 8, 409–421.
Lewontin, R. (2000). The triple helix. Cambridge: Harvard University Press.
Millikan, R. (1989). In defense of proper functions. Philosophy of Science, 56, 288–302.
Mills, S., & Beatty, J. (2006). The propensity interpretation of fitness. In E. Sober (Ed.), Conceptual issues in evolutionary biology (pp. 3–24). Cambridge, MA: MIT Press.
Moreno, A., & Mossio, M. (2015). Biological autonomy. Dordrecht: Springer.
Nanay, B. (2010). A modal theory of function. The Journal of Philosophy, 107, 412–431.
Neander, K. (1991). Functions as selected effects. Philosophy of Science, 58, 168–184.
Neander, K. (1999). The teleological notion of function. In D. Buller (Ed.), Function, selection, and design (pp. 123–142). Albany: State University of New York.
Neander, K. (2018). Does biology need teleology? In R. Joyce (Ed.), The Routledge handbook of evolution and philosophy (pp. 64–76). London: Routledge.
Nunes-Neto, N., & El-Hani, C. (2009). O que é Função? Scientiae Studia, 7, 353–401.
Nunes-Neto, N., Moreno, Á., & El-Hani, C. (2014). Function in ecology: An organizational approach. Biology and Philosophy, 29, 123–141.
Popper, K. (1992). The logic of scientific discovery. London: Routledge.
Reeve, K., & Sherman, P. (1993). Adaptation and the goals of evolutionary research. The Quarterly Review of Biology, 68, 1–32.
Rosenberg, A. (2006). Darwinian reductionism. Chicago: University of Chicago Press.
Simpson, G. (1947). The problem of plan and purpose in nature. Scientific Monthly, 64, 481–495.
Sober, E. (1984). The nature of selection. Chicago: University of Chicago Press.
Sober, E. (1993). Philosophy of biology. Oxford: Oxford University Press.
Sterelny, K., & Griffiths, P. (1999). Sex and death. Chicago: University of Chicago Press.
Tinbergen, N. (1963). On aims and methods of ethology. Zeitschrift für Tierpsychologie, 20, 410–433.
Wainwright, S., Biggs, W., Currey, J., & Gosline, J. (1980). Mechanical design in organisms. New York: Wiley.
Weibel, E., Taylor, R., & Bolis, L. (1998). Principles of animal design. Cambridge: Cambridge University Press.
West-Eberhard, M. (1998). Adaptation: Current usages. In D. Hull & M. Ruse (Eds.), Philosophy of biology (pp. 8–14). Oxford: Oxford University Press.
Wouters, A. (2003). Four notions of biological functions. Studies in the History and Philosophy of Biological and Biomedical Sciences, 34, 633–668.
Wright, L. (1973). Functions. Philosophical Review, 82, 139–168.
Chapter 9
Drift as a Force of Evolution: A Manipulationist Account

Lorenzo Baravalle and Davide Vecchi
9.1 Introduction

Wondering about the explanatory structure of evolutionary theory – that is, roughly, the way in which it provides patterns of explanation applicable to a wide set of phenomena (Kitcher 1989) – many authors have depicted it as a dynamical theory or, in other words, as a theory of forces. According to Elliott Sober (who popularised this conception in the philosophy of biology):

A theory of forces begins with a claim about what will happen to a system when no forces act on it. The theory then specifies what effects each possible force will have when it acts alone. Then the theory progresses to a treatment of the pairwise effects of forces, then to triples, and so on, until all possible forces treated by the theory are taken into account. Since most objects in the real world are bombarded by a multiplicity of forces, this increase in complexity brings with it an increase in realism (Sober 1984, p. 31; emphasis in the original).
Taking Newtonian mechanics as the paradigmatic example, we could say that a theory of forces must include:

• A first law – the zero-force law – describing how the system behaves when no forces act on it (i.e., in the case of Newtonian mechanics, the principle of inertia)
• A (or a set of) consequence law(s) that describes the direction, the magnitude and the outcome of the forces acting in the system (exemplified in classical mechanics by Newton's second law)

• A causal characterisation of the acting forces (e.g., the laws of gravitation, buoyancy etc.)

Likewise, we might consider the Hardy-Weinberg principle as the zero-force law of evolutionary theory, the equations calculating the effects of selection, drift, mutation, recombination and migration in population genetics as its consequence laws, and specific ecological configurations instantiating these laws as the causal characterisation of the theory (Caponi 2014). According to a common conception, in Newtonian mechanics the notion of force is explanatory because it subsumes the possible causes of change in motion under specific theoretical/nomological descriptions. Analogously, according to the dynamical interpretation of evolutionary theory, evolutionary forces would conceptually subsume sets of ecological factors according to their effects, thus allowing unified causal explanations of the changes in genotypic and phenotypic distributions. In spite of its apparent plausibility (many textbooks of evolutionary biology and population genetics adopt the force analogy; see, for instance, Gillespie 2004; Rice 2004; Futuyma 2005; Hartl and Clark 2007), this view has nonetheless been repeatedly questioned. On the one hand, the so-called statisticalists (Matthen and Ariew 2002; Walsh et al. 2002) have argued that a clear-cut causal characterisation of selection and drift is unviable. Since, as we have just seen, a well-defined notion of force depends on the identification of the specific kind of causal process instantiating it, if the statisticalists were right, the dynamical interpretation would be untenable. On the other hand, Brandon (2006), who defends a heterodox version of the dynamical interpretation, has more specifically denied that drift may be considered as a force because – differently from Newtonian forces – it lacks a predictable and constant direction.

In this chapter, we aim to defend a rather standard version of the dynamical interpretation from these criticisms by showing that genetic drift can be genuinely characterised as a force. Many authors have attempted to do this before. On the one hand, against the statisticalist view, Reisman and Forber (2005) and Shapiro and Sober (2007) have argued – by invoking Woodward's (2003) manipulationist (or interventionist) theory of causation – that it is possible to offer a clear causal characterisation of drift as the process that, depending on the size of a population, may produce proportional fluctuations in genetic and phenotypic distributions. On the other hand, Stephens (2004, 2010), Filler (2009), Hitchcock and Velasco (2014), Pence (2016) and Roffé (2017) have defended the directional features of drift against Brandon's criticism. Stephens (2004, 2010), Filler (2009) and Roffé (2017) have championed the view that drift has a direction because it "pushes and pulls" populations towards homozygosity. Hitchcock and Velasco (2014) and Pence (2016), by contrast, have argued that, differently from traditional Newtonian forces, drift has a stochastic direction, like Brownian motion. These two sets of solutions are in our opinion only partially satisfactory. Reisman, Forber, Shapiro and Sober may well have succeeded in defending the causal
character of drift, but they do not clearly defend the thesis that it can be considered as a force,1 while the authors who focused on directionality do not provide an explicit characterisation of the causal features of drift. Our goal is to offer a synthesis of the two approaches in order to overcome their limitations: we aim to argue that drift is a force because it is a cause of a certain kind. In a nutshell, we shall support this claim by pointing out a connection between the notion of force and the notion of explanatory depth interpreted in manipulationist terms (see especially Hitchcock and Woodward 2003). Our argument is that, since the notion of force in Newtonian mechanics is what allows this theory to provide deep explanations, and given that the notion of drift plays an analogous role in evolutionary theory, drift can be considered as a force of evolution. Insofar as our analysis applies to other evolutionary factors – e.g., selection and mutation – it offers overall support to the dynamical interpretation.

1 Shapiro and Sober (2007) are nominally committed to the force analogy. Yet, besides their manipulationist argument for considering drift as a cause, they do not provide any further strong reason to conceive it, in addition, as a force.

The plan of the chapter is as follows. In Sect. 9.2, we spell out in more detail the statisticalists' and Brandon's criticisms of the interpretation of drift as a force. In Sect. 9.3, we outline the replies to these criticisms and situate our proposal within this context. In Sect. 9.4, we introduce the manipulationist account of explanatory depth. In Sect. 9.5, we discuss in what sense the notion of force endows Newtonian mechanics' explanations with explanatory depth. In Sect. 9.6, we show how to apply our analysis to drift. In the conclusion, we sketch out how our approach can be generalised to other evolutionary forces.
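To make the force analogy above concrete, the following minimal sketch (ours, not the authors'; written in Python purely for illustration, with invented numbers) contrasts the Hardy-Weinberg "zero-force" state with the standard one-locus viability-selection recursion playing the role of a "consequence law": with equal genotypic fitnesses the allele frequency stays put, while unequal fitnesses displace it in a definite direction.

    # Illustrative sketch only: Hardy-Weinberg as a "zero-force" state and
    # one-locus viability selection as a "consequence law".
    def next_allele_frequency(p, w_AA, w_Aa, w_aa):
        """One generation of selection at a diploid locus with random mating."""
        q = 1.0 - p
        w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa  # mean fitness
        return (p * p * w_AA + p * q * w_Aa) / w_bar

    p = 0.3
    # Zero-force case: equal fitnesses leave p (and the Hardy-Weinberg genotype
    # proportions p^2, 2pq, q^2) unchanged from one generation to the next.
    print(next_allele_frequency(p, 1.0, 1.0, 1.0))   # -> 0.3
    # "Force" case: a viability advantage for A pushes p upwards each generation.
    for _ in range(5):
        p = next_allele_frequency(p, 1.1, 1.05, 1.0)
    print(round(p, 3))                               # p has increased above 0.3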
9.2 Drift as a Force and Its Enemies

The dynamical interpretation of evolutionary theory has been under attack particularly in the last two decades (a notable exception is Endler 1986). The disagreement between dynamicalists and statisticalists covers quite a broad set of issues, but here we shall focus, for obvious reasons, just on those which concern the possibility of considering genetic drift as a force. As we have seen in the introduction, a force is a cause that, in the context of a theory, is usually characterised as acting with a certain magnitude and direction. Statisticalists are sceptical about both the claim that drift is a cause (or, more precisely, that it is a cause clearly differentiable from selection) and the claim that it has a predictable and constant direction. We shall start by introducing the criticisms levelled by statisticalists against the former claim, while reserving the criticisms concerning the latter for later in the section, in connection with Brandon's objections. Arguments against the causal interpretation of drift can be found in a number of statisticalists' papers, but we shall mainly refer to the articles by Walsh, Lewens and Ariew (2002) and Matthen and Ariew (2002). In their view, drift is nothing more
than sampling error, and thus a purely statistical (as opposed to causal) concept. In order to illustrate their point, the statisticalists compare evolutionary processes to a series of coin tosses. If we toss a fair coin a large number of times, it is overwhelmingly likely that we will obtain a ratio of heads and tails very close to 50:50 (albeit not necessarily 50:50). Likewise, in an infinitely large and panmictic population an advantageous allele will, over a certain number of generations, almost certainly reach fixation by natural selection. Nonetheless, if we reduce, in the first case, the number of tosses and, in the second, the size of the population, these outcomes will become increasingly less likely: "the law of large numbers tells us that the likelihood of significant divergence from these predictions is an inverse function of the size of the population. The small size of a population increases the chances of error" (Walsh et al. 2002, p. 459). If we toss the coin just, say, four times, any distribution of heads and tails is almost equally likely. Analogously, the change in allelic distributions in a small population over a certain number of generations will not necessarily reflect the differences in fitness between the alleles. From the statisticalist standpoint, it is precisely this deviation from the expectation that evolutionary biologists usually call genetic drift.

Now, the statisticalists stress that, although there obviously are physical factors that cause each single toss of the coin to land either heads or tails, they are irrelevant to explaining the differences in the expected outcome in short or long series of tosses. As a matter of fact, the same kind of physical factors cause a coin to land heads or tails in different series of tosses. What explains the differences in the expected outcomes is the number of tosses, which is not – properly speaking – a causal but rather a structural feature of the experimental setup. This conclusion can be easily extended to drift. Although an evolutionary outcome is obviously instantiated by a set of births, deaths and reproductions, what explains the fact that in a small population the expected allelic distributions over a certain number of generations diverge from the outcome predicted by differences in fitness is not that set of causal factors but the size of the population. The latter, in its turn, does not properly play a causal role but just defines – to use a phrase by Anya Plutynski – "a condition on the possibility of sampling error" (2007, p. 165; emphasis in the original). Since, moreover – as in the case of the coin – the causal factors supposedly acting when the sample is small or large are exactly the same (i.e., myriads of births, deaths and reproductions), it is impossible to distinguish, in any intermediate case, whether the evolutionary process is due to drift or to selection. But, if drift and selection are not clearly differentiable at a causal level – the statisticalists conclude – then they cannot count as distinct forces: instead, they both just denote statistical outcomes.

Against the dynamical interpretation, the statisticalists also observe that, due to its non-deterministic nature, and differently from Newtonian forces, drift lacks a predictable and constant direction. To be fair, Sober himself acknowledged this point, although he simply drew the moral that, when compared to selection and other evolutionary forces, drift "is a force of a different colour" (1984, p. 117).
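The coin-tossing point can be made vivid with a small simulation (ours, purely illustrative): the spread of the observed proportion of heads around the expected 0.5 shrinks as the number of tosses grows, which is exactly the behaviour the law of large numbers describes.

    import random
    random.seed(1)

    def heads_proportion(n_tosses):
        """Proportion of heads in n_tosses of a fair coin."""
        return sum(random.random() < 0.5 for _ in range(n_tosses)) / n_tosses

    for n in (4, 100, 10000):
        outcomes = [heads_proportion(n) for _ in range(1000)]
        print(n, min(outcomes), max(outcomes))
    # With n = 4 almost any proportion of heads shows up across the 1000 trials;
    # with n = 10000 every trial ends up very close to 0.5 -- the analogue of
    # sampling error becoming negligible in very large populations.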
Of a different opinion is Stephens (2004), who first pointed out – in the context of the present debate – that, relying on elementary population genetic models (basically the classic Wright-Fisher model), it is possible to say that drift’s direction is homozygosity.
Given a two-gene locus not subject to selective pressure, these mathematical models predict that, after a certain number of generations, one of the two alleles eventually reaches fixation while the other disappears – albeit we cannot predict which one. The main argument against this characterisation of drift’s direction has been provided, as anticipated, by Brandon (2006, see also Brandon 2010). Differently from the statisticalists, Brandon is not critical of the dynamical interpretation as a whole: in a sense, he also believes that evolutionary theory is a theory of forces. His point is rather that drift is equivalent to the principle of inertia of evolutionary dynamics instead of a force. Drift cannot be a force in the sense defended by Stephens because, if we take seriously the Newtonian analogy, the supposed directionality of drift (towards homozygosity) is nonsensical. To claim that, due to drift, either one homozygous genotype or the other will, in the long run, predominate in a population (i.e., without saying which of them will predominate) is problematic. Since Newtonian forces are vectors, it is not possible to simply say that a force is acting on an object, without any other qualification. The statement that drift is acting on a population, without specifying its direction, is analogously meaningless or incomplete.
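The behaviour that Stephens and Brandon are disputing can be displayed with a bare-bones Wright-Fisher sketch (ours, not part of the original text; parameter values are arbitrary): iterated binomial sampling erodes variation until one allele fixes, yet which of the two fixes cannot be said in advance.

    import random
    random.seed(2)

    def wright_fisher(p0=0.5, n_individuals=50, max_generations=5000):
        """Neutral Wright-Fisher model: resample 2N gene copies each generation."""
        p = p0
        for _ in range(max_generations):
            copies = 2 * n_individuals
            p = sum(random.random() < p for _ in range(copies)) / copies
            if p in (0.0, 1.0):       # one allele has fixed, the other was lost
                break
        return p

    runs = [wright_fisher() for _ in range(200)]
    fixed = sum(r == 1.0 for r in runs)
    lost = sum(r == 0.0 for r in runs)
    print(fixed, lost)
    # Starting from p = 0.5, roughly half of the runs fix one allele and half fix
    # the other: the push towards homozygosity is systematic, the winner is not.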
9.3 Causes and Forces

As sketched out in the introduction, two main strategies have been deployed in order to defend the dynamical interpretation of drift: against the statisticalists, Reisman and Forber (2005) and Shapiro and Sober (2007) have argued that the size of a population has not just statistical relevance but that it is a genuine causal factor too; against Brandon, some authors have insisted on the correctness of Stephens' analysis (Filler 2009; Stephens 2010; Roffé 2017) while others have attempted to liberalise the Newtonian analogy so as to include "non-canonical" forces too (Hitchcock and Velasco 2014; Pence 2016). We shall not pay much attention to this latter subset of solutions to Brandon's challenge, since they do not have much to do with our proposal; nonetheless – as we shall spell out more clearly below – we share with them the spirit of a "liberalisation" of the Newtonian analogy (a position defended also by Luque 2016).

Reisman and Forber's and Shapiro and Sober's attempts to rescue a causal interpretation of drift are grounded on the fact that evolutionary biologists do not refer to drift exclusively as an effect or as an outcome (see Ohta 2012, for instance). Basically, their goal is to show that, contrary to the statisticalists' view, the size of the population is a genuine causal factor and not merely a statistical variable (see also Millstein 2006 for a similar view). In order to do this, they need to neutralise the statisticalists' premise according to which the causes of evolution are located uniquely at the level of individuals' births, deaths and reproductions. To this aim, as anticipated, they adopt Woodward's account of causation and causal explanation (Woodward 2003). According to Woodward, given two variables X and Y, "a necessary and sufficient condition for X to be a direct cause of Y with respect to a variable set V is that there be a possible intervention on X that will change Y or the probability
distribution of Y when one holds fixed at some value all other variables in V" (2003, p. 55). Woodward's account allows non-local interactions (i.e., those that – unlike the interactions involved in specific coin tosses, or in specific sequences of births, deaths and reproductions – do not require the transmission of some physical mark or some conserved physical quantity from causes to effects) to be causal. The relevant relation in manipulationist causation is, in fact, counterfactual dependence rather than production (differently from traditional accounts of causation; cf. Salmon 1986; Dowe 2000). In this sense, although the size of a population is clearly somehow determined by the individuals that compose it, it is nonetheless a genuine causal factor in its own right. As illustrated by Reisman and Forber through an experiment carried out by Dobzhansky and Pavlovsky (1957), it is possible to intervene on the size of a population – while differences in fitness are held fixed – in order to increase or decrease the random "fluctuations" of allelic distributions.2 Since random fluctuations are counterfactually dependent on size, there is no reason – in accordance with Woodward's account – to deny that drift is a cause.

Instead of focusing on its causal features, Stephens (2010) and Roffé (2017) argue that drift is a force on the grounds that it does indeed have a specific direction. While Stephens asserts that drift is directed to "remove variation from natural populations" (2010, p. 721) and thus to push populations towards homozygosity,3 Roffé argues that, although one cannot predict the direction in which a population evolving by drift will move, "when an allele frequency is greater than 0.5, it is more likely that this frequency will go up rather than the reverse, even in the very next generation" (2017, p. 551). According to Roffé, Brandon is wrong in his criticism because, except for the case in which – for a two-gene locus – both alleles have exactly the same frequency, the prediction of the Wright-Fisher model when drift is at work is an increased probability of fixation of the most common allele.

A different – and, for our purposes, more interesting – defence of drift as a force against Brandon's attack is given by Filler (2009). That a concept may count as a force depends, in his opinion, mainly on the fact that it is able to play an appropriate unificatory role concerning a variety of phenomena – this criterion was already implicit in Sober's (1984) original proposal – and that it has a precisely mathematically specifiable magnitude. When compared with these criteria, the requirement according to which a force must have a specific direction is secondary. Insofar as drift unifies disparate phenomena – parent sampling, gamete sampling, founding of new populations, splitting of populations etc. – and has a specifiable, mathematically precise magnitude, the fact that it does not have a specific direction (but only a "disjunctive" one, i.e., either dominant homozygosity or recessive homozygosity) does not invalidate its status as a force.

Albeit sympathetic to Filler's deflationary attitude, Pence (2016) and Luque (2016) are sceptical about his specific proposal. They suspect that, by focusing exclusively on mathematical magnitude, it weakens the criteria for "forcehood" too much, thus making the concept of force trivially applicable and ultimately vacuous. We partly agree with them. In this sense, one of our goals in this chapter may be interpreted as an attempt to find better criteria for "forcehood." Like Filler, we think that one of the crucial features of the notion of force is its capacity to unify various phenomena by focusing on their shared characteristics. Yet, differently from him, we do not believe that the unificatory virtues of the notion of force are attained merely through the identification of its mathematical magnitude or its specific direction. To be sure, these are important features of a Newtonian force: if Stephens and Roffé were right and drift could indeed be conceived as having a specific direction, this would obviously strengthen the analogy between evolutionary theory and Newtonian mechanics. But, from our point of view, the most salient feature of the notion of force in Newtonian mechanics is that it captures certain shared causal characteristics of the phenomena that it unifies.4 In this sense, our analysis should be considered closer to that carried out by Reisman and Forber or Shapiro and Sober. As a matter of fact, what we want to show is that the manipulationist account of causal explanation provides the conceptual resources to characterise drift not just as a cause of evolution, but as a force as well. Forces are "deep causes," not in the sense that they are ontologically "fundamental" (like, for instance, micro-physical interactions), but because they are constitutively invoked in deep explanations, that is, explanations able to support a wide range of counterfactuals.

2 Dobzhansky and Pavlovsky separated 20 replicate populations of Drosophila into 2 groups of 10 populations each. The first group was composed of large populations, the second of very small populations (10 males and 10 females). Dobzhansky and Pavlovsky aimed to track two allelic types that, initially, were present in each population with a ratio of exactly 50:50. By reducing the size of the second group of populations they intended to simulate a founder effect (which is usually considered a common cause of drift). After a certain number of generations, they let all the populations grow to the same size. Finally, they let selection act freely and, after the number of generations necessary to reach equilibrium, they recounted type frequencies in each population. The result was that, while in large populations the degree of variance between frequencies at equilibrium was small, in small populations it was far greater.
3 Of course, purifying selection is also an eliminative process. Stephens is rather contrasting eliminative forces with forces generating variation (e.g., mutation).
4 Henceforth, unless otherwise specified, we shall use the word "force" to refer to component forces, and not to net forces. As a matter of fact, we take the expression "net force" to denote a theoretical representation of the combinatorial effects of interacting forces (as represented in the consequence laws of a theory of forces), while only the component forces are causally efficient.
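The Dobzhansky-Pavlovsky design summarised in note 2, and the manipulationist reading of it, can be mimicked in a toy simulation (ours; all numbers are arbitrary): intervening only on population size, while keeping fitness differences at zero, changes how widely the final allele frequencies scatter across replicate populations, which is exactly the counterfactual dependence that Reisman and Forber appeal to.

    import random
    random.seed(3)

    def final_frequency(pop_size, p0=0.5, generations=30):
        """Neutral reproduction: each of the next generation's 2N gene copies is
        drawn at random from the current allele pool (no fitness differences)."""
        p = p0
        for _ in range(generations):
            copies = 2 * pop_size
            p = sum(random.random() < p for _ in range(copies)) / copies
        return p

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # The "intervention": two treatments that differ only in population size.
    for n in (10, 4000):
        replicates = [final_frequency(n) for _ in range(10)]
        print(n, round(variance(replicates), 4))
    # Replicates with N = 10 end up scattered widely around 0.5; replicates with
    # N = 4000 barely move, mirroring the pattern Dobzhansky and Pavlovsky found.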
9.4 The Manipulationist Account of Explanatory Depth

The manipulationist conception of explanatory depth was initially developed by Hitchcock and Woodward (2003; see also Woodward and Hitchcock 2003) and later refined by – among others – Woodward (2006, 2010, 2016), Ylikoski and Kuorikoski (2010), Weslake (2010) and Blanchard, Vasilyeva and Lombrozo (2018). It is based on two main assumptions. The first is that to explain means to invoke invariant generalisations connecting the explanandum with its explanans. The second is that there are straightforward ways to compare the degree of invariance of two or more generalisations.

An invariant generalisation is a statement that captures the counterfactual dependence between a variable (a property, an event, etc.) and its putative causes. In accordance with what we have already seen in the previous section, in the manipulationist framework "a generalisation is invariant if it would continue to hold under an appropriate class of changes involving interventions on the variables figuring in that generalisation" (Woodward and Hitchcock 2003, p. 2; emphasis in the original). Notice that (differently from other counterfactual accounts of causation and causal explanation; e.g., Lewis 1986) it is precisely the possibility of intervening on the variables figuring in that generalisation that allows us to establish the counterfactual dependence between an effect and its causes and, thus, guarantees the explanatory power of the generalisation. The manipulationist framework does not require that interventions be "physically possible." Rather, it just requires that they be "logically possible and well-defined" (Woodward 2003, p. 128), in the sense that "we have some sort of basis for assessing the truth of claims about what would happen if an intervention were carried out" (p. 130; emphasis in the original). Explanations of the same phenomenon invoking different invariant generalisations can be "more or less explanatory" – that is, they can be deeper or less deep – depending on the range of invariance of the generalisations figuring in them. In order to understand the concept of "range of invariance," it is useful to think of an invariant generalisation as a linear regression equation:
Y = a_1X_1 + a_2X_2 + … + a_nX_n + U    (9.1)
While the coefficients a_1, …, a_n are fixed, U is a placeholder for other possible causal influences not explicitly taken into account in the equation – that is, background conditions. Interventions on the values of the Xs determine corresponding changes in the value of Y. By showing how Y covaries with the Xs in non-actual situations, invariant generalisations allow us to answer a class of so-called what-if-things-had-been-different questions. The range of invariance of a generalisation is the property of the generalisation that determines, so to speak, "how many" what-if questions the generalisation is able to answer.

The second assumption of the manipulationist conception of explanatory depth is that the range of invariance of a generalisation is mainly determined by what different authors call insensitivity (Woodward 2006; Ylikoski and Kuorikoski 2010) or stability (Woodward 2010; Blanchard et al. 2018).5 An explanatory generalisation is less sensitive, or more stable, than another if and only if it makes explicit the counterfactual dependence of the explanandum on variables treated as background conditions by the other generalisation.

5 Blanchard et al. (2018) think that stability should be better characterised as denoting two distinct explanatory virtues, that is, breadth and guidance. We do not need to enter into such details here.

In order to illustrate these concepts, let us consider an example proposed by Woodward and Hitchcock (2003). Imagine we want to explain why a plant grew to a certain height and that we know that the variable "plant height" is counterfactually dependent on the amount of water and fertiliser the plant received. We may represent this invariant generalisation as:
Y = a_1X_1 + a_2X_2 + U    (9.2)
Y is the variable "plant height", a_1X_1 is a certain amount of water, a_2X_2 is a certain amount of fertiliser, and U is a set of background conditions. (9.2) explains the actual height of the plant by supporting a certain class of counterfactuals: if the plant had received a different amount of water or fertiliser, it would have reached a different height. In this way, (9.2) allows us to answer a non-trivial set of what-if questions. Nonetheless, according to Hitchcock and Woodward (2003), explanations invoking (9.2) are quite shallow. The reason is that (9.2) can be easily disrupted:

[2] would fail if we were to spray the plant with weed killer or heat it to a very high temperature. Less dramatically, there are many possible conditions that will not destroy the plant, but which will alter the effect of water and fertiliser on plant height. There may be physical changes in the root system of the plant or the surrounding soil that would change the way in which given amounts of water affect plant height (Woodward and Hitchcock 2003, p. 5).
The problem with (9.2) is, in brief, that it makes plant height counterfactually dependent on factors (the amount of water and fertiliser) that are extremely sensitive to a change in the background conditions. Alternatively, imagine that we have a theory that describes the details of the physiological mechanisms governing plant growth by referring to a series of biochemical reactions. Such a theory would allow deriving an invariant generalisation like
Y = a_1X_1 + a_2X_2 + … + a_nX_n + U'    (9.3)
where Y is the plant's height and the Xs denote a set of variables relative to the physiological features of the plant, while U' is a subset of U. (9.3) might be thought of as "unpacking" U into a set of known variables that, being manipulable, explicitly spell out how they interfere in the causal outcome (i.e., plant height). In making those variables explicit, (9.3) allows us to explain, for instance, how the composition of the soil, temperature, air or noise pollution, the presence of pests etc. can, in terms of types of biochemical reactions, make a difference to Y. Since it is difficult to imagine a generalisation about plant height that takes into account all background conditions (consider logically possible metaphysical scenarios such as drastic changes in the physical structure of the universe – for instance, a change in the composition of matter or in the structure of space), (9.3) must still include a variable for background conditions, U'. Nonetheless, (9.3) is more stable than (9.2) because it is insensitive to many of those circumstances that disrupted (9.2). Accordingly, an explanation invoking (9.3) is deeper than an explanation invoking (9.2) because (9.3) "enable[s] inferences to more counterfactual situations" (Ylikoski and Kuorikoski 2010, p. 209) than (9.2).

Explanatory depth is not just related to the stability of the explanatory generalisations, but also to the choice of the variables taken into account. In this respect,
Hitchcock and Woodward observe that "ideally, one would like to formulate generalisations that are not sensitive at all to the ways in which the values of the variables figuring in them are produced" (2003, pp. 186–7). Remember that, in the manipulationist framework, it is precisely the (logical) possibility of interventions that allows us to establish the counterfactual dependence between an effect and its causes. Importantly, variables that can be manipulated independently of the specificities of the scenario under study make the generalisation in which they appear portable to other scenarios. An explanation invoking (9.2), besides being shallow in the sense that it does not take into account potentially disrupting background conditions, is also shallower than the explanation invoking (9.3) because it relates Y to variables quite sensitive to the way in which they are manipulated. The amount of water and fertiliser is certainly relevant to explaining the height of many plants, but it is not relevant in all cases (think, for instance, of aquatic plants): the explanation invoking (9.2) is thus not easily portable to other scenarios relatively analogous to the specific one taken into consideration, but different in some important respect. On the contrary, (9.3), which makes reference to certain types of biochemical reactions, captures features of the process of growth common to all plants and, therefore, is extremely portable. The reason for this high degree of portability is precisely that the conditions of manipulability of the invariant relation are abstracted away from the specific growth pattern of the actual plant under study, so as to cover analogous phenomena as well. In turn, portability is achieved because the choice of variables concerning biochemical reactions, so to speak, carves nature at its joints by spelling out what all phenomena concerning plant growth share.

Stability and portability of a generalisation are strictly related. Although, as far as we can see, not every stable generalisation is also portable, portable generalisations are generally stable. We suggest that highly stable and portable generalisations provide the deepest possible explanations of a natural phenomenon. This characterisation relates explanatory depth to unification. As a matter of fact, stability and portability together account for the ability of a generalisation to cover a broad set of phenomena. Differently from Kitcher's classic account of explanatory unification (Kitcher 1989), which makes a generalisation explanatorily deep insofar as it is derived from a set of theoretical statements or inferential patterns, the manipulationist account derives the unificatory virtues of a class of invariant generalisations mainly from their causal features (Woodward 2016). Stable and portable generalisations are more frequently encountered in the fundamental sciences, in which phenomena are explained by invoking physical properties assumed to be shared by all physical objects (positions, velocities, masses, charges etc.). Nonetheless, this is not necessary (Weslake 2010). On the contrary, it is precisely the possibility of finding stable and portable generalisations in different disciplines that permits us, as we shall see starting from the next section, to transpose the notion of force to them.6

6 Quite clearly, the criteria for explanatory depth adopted here are not metric, i.e., they do not allow arranging distinct explanations of a given phenomenon on a univocal scale ranging from the shallowest to the deepest. Rather, they have to be interpreted, more modestly, as comparative criteria. This does not mean, as we shall see in the next sections, that they cannot be useful as analytic tools.
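To fix ideas, here is a toy structural-equation sketch of the plant example (ours; the functional forms and numbers are invented and are not taken from Hitchcock and Woodward): an intervention sets the value of one variable while the others are held fixed, and a change in a background condition breaks the (9.2)-style generalisation while leaving the (9.3)-style one intact, because the disturbed factor is one of the latter's own variables.

    # Toy structural-equation sketch; every functional form and number is invented.
    def deep_model(water, fertiliser, root_integrity=1.0, soil_quality=1.0):
        """A (9.3)-style generalisation: height depends on physiological variables
        that register background conditions explicitly."""
        water_uptake = root_integrity * water        # damaged roots absorb less
        nutrient_uptake = soil_quality * fertiliser
        return 5.0 + 0.3 * water_uptake + 0.2 * nutrient_uptake

    def shallow_model(water, fertiliser):
        """A (9.2)-style generalisation, with coefficients that hold only under
        'normal' background conditions (root_integrity = soil_quality = 1)."""
        return 5.0 + 0.3 * water + 0.2 * fertiliser

    # Intervening on water with everything else held fixed: under normal
    # background conditions both models answer the same what-if question.
    print(shallow_model(10, 4), deep_model(10, 4))                      # 8.8 8.8
    # Disturb a background condition (root damage): the shallow generalisation
    # keeps giving the old answer, the deeper one tracks the change.
    print(shallow_model(10, 4), deep_model(10, 4, root_integrity=0.5))  # 8.8 7.3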
9.5 Force-Explanations as Deep Explanations

In the last section, we have seen that, in order to be deep, an explanation must contain stable and, possibly, portable generalisations. In this section, we aim to argue that, to the extent that they invoke generalisations that maximally integrate background conditions, the explanations of Newtonian mechanics are very deep explanations of changes in motion. We moreover want to argue that this is precisely because they employ the notion of force, which permits us to understand the counterfactual dependence between physical effects and their causes regardless of how the interventions on the causes are performed. We shall therefore conclude that, besides having a certain direction and magnitude, one of the main characteristics (in our opinion, the most fundamental) of the notion of force is the extreme portability of the generalisations in which it appears. We believe that this is the characteristic that we have to take into account when we assess the possibility of transferring the notion of force to other theoretical contexts.

Let us illustrate our claim with the help of another example presented by Hitchcock and Woodward (2003, pp. 187–8). Imagine that you drop an object from a certain height and that you want to know why it takes a certain time to fall. One way to do this is by invoking the invariant generalisation between (X) – the height from which the object has been dropped – and (Y) – the time it takes to fall – also known as Galileo's law of free fall. This would provide a perfectly acceptable explanation of the phenomenon at stake. Nonetheless, the invariant relation described by Galileo's law has a limited range of invariance: "it would fail to hold if the object were dropped from a height that is large in relation to the earth's radius or if it were dropped from the surface of a massive body of proportions different from those of earth (such as Mars)" (Hitchcock and Woodward 2003, p. 187). Alternatively, we may explain Y by invoking Newton's second law along with his law of gravitation, which describe the behaviour of the dropped object in a way that is insensitive to the above-specified potentially disrupting background conditions. As a matter of fact, Newton's law of gravitation remains invariant under changes in the mass and radius of the massive object upon which the object is dropped: "an intervention that increases the mass of earth would count as an intervention on background conditions with respect to Galileo's law, but as an intervention on a variable explicitly figuring in Newton's laws" (p. 188; emphases in the original). Leaving aside relativistic considerations, we could say that, since Newton's laws are able to take into account all variables relevant to explain the behaviour of the dropped object, the explanation invoking Newton's laws is a very deep explanation of the phenomenon.7 No matter which potentially perturbing conditions are made explicit, Newtonian laws are able to show how they are related to the outcome, thus enabling inferences concerning any counterfactual situation.

7 In the last section we noticed – regarding our hypothetical generalisations concerning plant height – that some background conditions are ineliminable. This is possibly true also of generalisations dealing with more fundamental features of the physical world, but we shall not discuss this issue here.

This, of course, does not apply just to the behaviour of falling bodies. Similar considerations can be extended to the behaviour of any material object in a Newtonian universe: planets, colliding objects, pendulums, springs etc. The explanations invoking Newton's laws are so deep not just because they are stable but also – and, perhaps, mainly – because they are portable. In turn, they are arguably portable in virtue of the fact that they employ the notion of force. As the historian of physics Max Jammer noticed, "the usefulness of the concept of 'force' is that it enables us to discuss the general laws of motions irrespective of the particular physical situation with which these motions are situated" (Jammer 1956, p. 244; see also Sklar 2013, Chap. 6). Within a manipulationist framework, this claim can be interpreted as stating that forces are variables that are maximally insensitive to the concrete circumstances in which they are manipulated. That is, while the degree of invariance of hypothetical pre-Newtonian generalisations relating – without mentioning universal forces – the motion of planets, the period and length of a pendulum, the behaviour of a billiard ball after a collision, and so forth, would be limited to changes in the specific variables appearing in them (as illustrated by less portable generalisations like, besides Galileo's law, Kepler's laws of planetary motion or Huygens' law of elastic collision), the possibility of conceiving all these phenomena as the product of the same forces allows us to capture their causes in a highly portable form. Any change in motion, in a Newtonian world, can be causally explained by invoking generalisations that virtually neglect the specificity of the physical bodies (excluding mass, which is a property shared by all of them) that we need to manipulate in order to account for the relevant phenomena. Any change in motion is simply counterfactually dependent on the specific component forces represented by the values of the manipulated variables.

Differently from Filler and other authors cited in Sect. 9.3, we think that it is the high portability of force-explanations, rather than the fact that forces have magnitude and precise direction, that should be taken as the characteristic unificatory feature of the notion of force. Again, we do not reject the idea that magnitude and direction are important in the characterisation of a force (of course, they are crucial in the representation of Newtonian forces), and certainly Stephens and Roffé are right that, if drift had a precise direction, this would reinforce the analogy between Newtonian mechanics and evolutionary theory. In spite of that, our account aims to point out a more fundamental feature of forces: their ability to provide very deep causal explanations – in the manipulationist sense – of a class of phenomena. Surely, the fact that the notion of force allows Newtonian laws to provide such deep explanations of changes in motion does not obviously entail that any concept playing an analogous role with respect to another class of phenomena is, ipso facto, a force. We think, nonetheless, that it would be reasonable to accept that a concept able to play an analogous role may be considered as a force.
Even though it seems to us that not many concepts can satisfy the conditions to be a force outside physics, we still shall argue that genetic drift is one of them.
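The contrast just drawn between Galileo's law and Newton's laws can be made numerically concrete (the sketch is ours and the planetary figures are rounded): with g fixed at 9.8 m/s², Galileo's law returns the right fall time only on Earth, whereas deriving g from Newton's law of gravitation, g = GM/R², keeps the generalisation invariant when we "intervene" on the mass and radius of the attracting body.

    from math import sqrt

    G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2

    def fall_time_galileo(height, g=9.8):
        """Galileo's law of free fall, t = sqrt(2h/g), with g treated as a fixed
        background condition (Earth's surface gravity)."""
        return sqrt(2 * height / g)

    def fall_time_newton(height, mass, radius):
        """Same relation, but with g = G*M/R^2 derived from Newton's law of
        gravitation, so that mass and radius are explicit variables."""
        g = G * mass / radius ** 2
        return sqrt(2 * height / g)

    h = 10.0                          # drop height in metres
    earth = (5.97e24, 6.371e6)        # rounded mass (kg) and radius (m)
    mars = (6.42e23, 3.39e6)

    print(fall_time_galileo(h))         # ~1.43 s: correct on Earth only
    print(fall_time_newton(h, *earth))  # ~1.43 s
    print(fall_time_newton(h, *mars))   # ~2.3 s: the generalisation still holds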
9.6 Drift as a Force Following our argument in the last two sections, we think that the minimal requirement for a concept to play the role of a force within a theory is to allow deep explanations of a certain class of phenomena. This requirement, in its turn, is satisfied just by those concepts permitting to formulate generalisations that are highly stable (thus answering a broad range of what-if questions) and portable (i.e., insensitive to the specific ways in which the variables denoted by the concept are manipulated). According to this latter criterion, drift would – or, at least, might – be reasonably considered as a force if we had a concept able to identify all the causal processes that are characteristically and systematically related to stochastic evolutionary outcomes.8 As we have seen in Sect. 9.2, Reisman and Forber (2005) and Shapiro and Sober (2007) rescue the causal character of drift against statisticalists’ criticisms by showing how, by adopting a manipulationist framework, the effects of drift, Y – that is, random fluctuations in allele frequencies and, possibly, removal of genetic variation and homozygosity – are counterfactually dependent on X, the size of the population under study. We may manipulate the size of a population by shrinking it, thus increasing the effects of drift, or vice versa we may diminish the effects of drift by increasing the size of the population indefinitely, allowing selection to act increasingly undisturbed. We think that this characterisation, albeit valuable, does not capture the features for which drift can be considered as a force. This is, at least partially, because it leaves aside cases in which the effects of drift are not caused by a change in the size of the population. Many phenomena commonly associated to drift – like parent sampling, gamete sampling, founding of new populations, splitting of populations, bottlenecks – are indeed often related to a reduction of the size of the population. In all such cases, there is a difference between the potential and actual reproducers that is captured by the distinction between census and effective population size.9 For instance, gamete sampling is the process by which only a percentage of the gametes produced is represented in the next generation because some individuals over-reproduce and others do not at all. So, for example, given a census population size N, the idea of
8 Even though mutation might also be considered a stochastic process, in traditional population genetic models it is commonly conceived as a deterministic one (see, for instance, Hartl and Clark 2007). 9 Effective population size can be conceptualized in a variety of ways. When it is estimated that the human effective population size is around 10,000 individuals (Yu et al. 2004) even though the current census size is over 7 billion, the reference is to the genetic variability in sampled genes. The reason for this discrepancy – already highlighted by Wright (Ohta 2012, p. 2) – is that a bottleneck in the history of the lineage drastically reduces effective population size. Another, more general and less refined, conceptualisation of effective population size merely draws on the difference between potential and actual reproducers. While both ways of conceptualising effective population size are legitimate (indeed there are many more, see note 11), the vernacular conceptualisation is sufficient to ground our argument.
gamete sampling is that only the genetic contribution of the effective population size Ne is represented in the next generation. Thus, the passage from N to Ne clearly involves a reduction in the size of the population. However, significantly, other potential cases might not involve a reduction of the census population size. Suppose, for instance, that in a small population of 10 organisms with equal fitness (e.g., clones) reproducing by parthenogenesis, 9 have one offspring but one has mono-ovum twins: in this case, all potential reproducers do as a matter of fact reproduce, so all members of the census population do, but one of them reproduces more. In this case, there is no reduction of the census population to the effective population, as the two coincide: in fact, there are 10 potential and 10 actual reproducers. In spite of this, because all organisms have equal fitness, we are inclined to consider the evolutionary outcome as an instance of drift. Still other cases might involve the expansion of the effective population size. For instance, in vitro fertilization allows infertile organisms (i.e., members of the census population who are not potential reproducers) to have offspring. As a consequence of these considerations, even though population size reduction is undoubtedly a good proxy for testing drift hypotheses in many cases, it cannot, in our opinion, be considered the cause of drift outcomes. As a matter of fact, we think that population size reduction is an effect, rather than a cause, of drift.10 In order to clarify this prima facie counterintuitive statement, let us introduce what we think are the causes of drift. They are, as a first approximation, chance events (in our examples, for instance, the spontaneous division of the ovum into two embryos and the availability of a certain technology). With this expression we do not refer to indeterministic events (even though they might sometimes be such), but rather to the environmental circumstances (like lightning strikes, floods, droughts, forest fires, etc.) and reproductive phenomena (like meiosis) instantiating what Millstein (2002, 2005; relying on Beatty 1984, 1992; Hodge 1987) calls "indiscriminate sampling." This characterisation has been more or less explicitly endorsed by many population geneticists (e.g., Fisher 1921; Wright 1932; Dobzhansky 1937; Fisher and Ford 1947). The rationale of this view is that drift, like natural selection, is an iterative and inter-generational process of sampling of parents or gametes in a population. While selection is a process of discriminate sampling – since parents, and thus gametes, are sampled according to their differential fitness – drift is indiscriminate insofar as the differences in fitness are irrelevant to the sampling process. In an attempt to improve Millstein's proposal, Gildenhuys suggests considering chance events as non-interactive, non-pervasive and indiscriminate causes, that is, NINPICs: [NINPICs] are (i) non-interactive insofar as they have the same sort of causal influence on the reproduction of individuals of each type in the population (most are deadly for individuals of all types); (ii) non-pervasive insofar as they affect only some population members in any given generation or time slice; and (iii) indiscriminate insofar as they are just as likely
10 This is not to say that the size of a population cannot play a causal role in evolutionary dynamics but just that it is not because of the causal role that population size plays that drift can be considered a force.
to affect one population member as any other population member, regardless of what variant types they are (Gildenhuys 2009, p. 522; emphases in the original).
Thus, for instance, a forest fire is usually considered a chance event – and, therefore, a cause of drift outcomes – because (i) potentially, it has the same reproductive effects on each type of organism in a population (combustion is virtually lethal for any organism); (ii) in most actual cases, it does not affect all population members, but only a subset of them (i.e., those that incidentally happen to live in the geographical area affected by the fire); (iii) the chances of a population member being affected by the fire are independent of whether it has any specific genomic or phenotypic property (except, of course, the property of being spatially proximal to the geographical area affected by the fire). Taking it for granted that this is a satisfactory characterisation of the notion of chance events, we can ask in what sense NINPICs are causes of the characteristic drift effects. The key to understanding this is the so-called Kolmogorov forward equation of diffusion theory (Hartl and Clark 2007, p. 106 ff.). The equation describes the diffusion of an allele as the sum of two functions M(x) and V(x) such that M(x) denotes the effects of systematic forces like selection (but also mutation and migration) and V(x) the variance in allele frequency due to binomial sampling (i.e., drift, as considered in the Wright-Fisher model). Following Gildenhuys (2009, p. 528 ff.), the equation that determines V is p(1 − p)/2Nev, where Nev is the variance effective population size. For simplicity, we shall equate, in our discussion, Nev to Ne.11 Since p (the allele frequency) is a kind of background condition, it is Nev (or Ne) that is the variable we need to know in order to solve the Kolmogorov forward equation. Let us now turn our attention to what we call a "drift generalisation"; that is – as we said at the beginning of this section – a generalisation able to identify all the causal factors that are characteristically and systematically related to stochastic evolutionary outcomes. In manipulationist terms, the "stochastic evolutionary outcome" is the Y of a linear regression equation of which we need to find the Xs, its causes. If the stochastic evolutionary outcome in the Kolmogorov forward equation is V, and Ne is what determines V values, then Y is Ne. This is why, as mentioned before, the size of a population – more precisely, the effective size of a population – is the effect rather than a cause of drift. It is the variable that, when plugged into the Kolmogorov forward equation, amplifies or reduces the stochasticity of the overall evolutionary dynamic of the allele under study. Nonetheless, when we consider drift as a cause – that is, when we want to know the Xs that produce a change in Ne – we are referring to NINPICs, that is, the set of chance events materially responsible for such a change.
11 There exist at least three ways in population genetics to conceptualise effective size: besides variance effective population size, there are inbreeding effective population size and eigenvalue effective population size. Although they have different functions within population modelling, they all represent the number of actual reproducers, in contrast with the number of potential reproducers denoted by the census population (see note 9).
We can make this claim clearer as follows. One way to understand Ne – both in the Wright-Fisher model and in diffusion theory – is as a variable that tracks the amount of actual variance in progeny number per parent (Charlesworth 2009). Without entering into excessive technicalities, we could say that a large amount of variance normally implies a reduced Ne, while a small amount implies a large Ne. In other words, variance in progeny number is large when there is a great difference between the numbers of offspring of the members of a population, while in the case in which each parent in a generation has a single offspring we have no variance, i.e., the effective population is identical to the census population. Nonetheless, the latter is an ideal case; in the real world this rarely, if ever, happens. The variance in progeny number may be due to natural selection, but we shall here focus exclusively on the variance due to non-selective factors (as is usual in population genetics models). These are the "chance" events that constitute the causes of drift. The idea is that the NINPIC characterisation captures the shared features of all of them in a way that, necessarily, if we manipulate NINPICs, the variance in progeny number correspondingly increases or is reduced and this, in turn, stochastically biases the evolutionary dynamics. A small number of NINPICs yields less variance in progeny number and allows a large Ne to be maintained, with the effect that – in accordance with the Kolmogorov forward equation – (putatively) deterministic forces, like selection, act relatively undisturbed. Conversely, a large number of NINPICs increases the variance in progeny number, thereby reducing Ne and – in the case in which the differences in fitness between the members of the population are small – producing the fluctuations in allele frequencies characteristic of drift. Generalisations counterfactually relating effective population size with NINPICs are maximally insensitive to background conditions insofar as they leave no potentially disrupting factors unconsidered. Although, quite obviously, in a specific situation it is impossible to enumerate all the ecological and reproductive factors counting as non-interactive, non-pervasive and indiscriminate causes, this is an epistemic limitation that does not affect the explanatory depth of the invariant generalisation. The invariant generalisation relating effective population size to NINPICs just says that, independently of our ability to identify and enumerate all these factors, if they have NINPIC characteristics, they are causally responsible for the fluctuations in effective population size and, if not, they are not. The generalisation is nonetheless stable because a change in the NINPICs would necessarily lead to a change of effective population size, in any specific evolutionary scenario. The stability of such a generalisation is, in turn, guaranteed precisely by the availability of a causal notion of drift like NINPICs. The NINPIC characterisation confers on the manifold causes of drift the status of a force because, paraphrasing Jammer, it enables us to discuss the general effects of drift irrespective of the particular physical situation with which these causes are situated.
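To make the quantitative relation just described concrete, the following minimal sketch – ours, not part of the original text – assumes the standard Wright-Fisher binomial-sampling model referred to above (e.g., Hartl and Clark 2007) and checks, by simulation, that the one-generation variance in allele frequency matches the V term p(1 − p)/2Ne and grows as Ne shrinks:

```python
import numpy as np

def one_generation_variance(p0, Ne, replicates=100_000, seed=1):
    """Simulate one generation of Wright-Fisher sampling of 2*Ne gametes
    (indiscriminate with respect to allele type) and return the empirical
    variance of the resulting allele frequency."""
    rng = np.random.default_rng(seed)
    counts = rng.binomial(2 * Ne, p0, size=replicates)
    return np.var(counts / (2 * Ne))

p0 = 0.5
for Ne in (10, 100, 1000):
    simulated = one_generation_variance(p0, Ne)
    predicted = p0 * (1 - p0) / (2 * Ne)   # the V term of the diffusion approximation
    print(f"Ne = {Ne:4d}   simulated Var = {simulated:.5f}   p(1-p)/2Ne = {predicted:.5f}")
```

In manipulationist terms, intervening on Ne (here, simply by changing its value) is what amplifies or dampens the stochastic term V relative to the systematic term M; the sketch merely illustrates that the fluctuations tracked by V scale as 1/Ne, as presupposed in the discussion above.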
9.7 Conclusion
In this chapter, we have defended a notion of force grounded on the manipulationist interpretation of explanatory depth. According to this interpretation, an explanation is deep if it contains in its explanans an invariant generalisation that is stable (i.e., it is not easily disrupted by background factors) and portable (i.e., it contains a description of the causes of the explanandum that is highly insensitive to the specific circumstances in which they occurred). We have argued that a "force" may be any unitary set of causes of a certain class of phenomena such that, when invoked to explain a phenomenon pertaining to this class, it is able to provide a very deep explanation. Hence, we have shown that both Newtonian forces and drift (the latter considered as NINPICs) satisfy this condition and, thus, may properly be considered forces. We have suggested that characteristic features of Newtonian forces such as mathematical magnitude and direction, albeit important for the formal representation of a force, are secondary with respect to its specific explanatory role within the theory. Accordingly, although a characterisation of drift as a directional force would surely strengthen the Newtonian analogy, a cause may be an instance of a force even without a specific direction. This liberalisation of the notion of force is, in our opinion, the first step towards a refinement of the dynamical interpretation of evolutionary theory that is free of the burden of having to be completely isomorphic to Newtonian theory (in the spirit of authors like Filler 2009; Pence 2016; and Luque 2016). The crucial characteristic of a force is its unificatory causal role. In the case of drift, this is achieved by individuating all those causal factors that act unsystematically to perturb the (putatively) deterministic evolutionary dynamic of an allele through the fluctuations of the effective size of the population under study. We do not intend here to attempt an analysis of the other forces of evolution, but it is easy to imagine how our account could be extended, for instance, to mutation and natural selection. Mutation might be considered a force if stable and portable generalisations concerning its evolutionary role were formulated. The intuitive problem at this stage is that the processes of genomic change instantiating mutation are multifarious (ranging from point mutations to chromosome duplications), even though this variability is somehow analogous to the variability of the NINPICs instantiating drift as a process. Mutation affects evolutionary dynamics in a way that is clearly conceptually different from that of selection and drift, because the focus is on the causal role of genomic change in evolution. Interestingly, it has been argued that mutation might bias the evolutionary process (Stoltzfus and McCandlish 2017). Consider for instance the preponderance of transitions (i.e., mutations from one purine to another or from one pyrimidine to another) compared to transversions (i.e., mutations from a purine to a pyrimidine or vice versa). It has been calculated that transitions are three times more common than transversions. If this bias generates a systematic evolutionary effect – for instance in the form of a predictable and constant direction – then we can consider mutation a force.
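As a toy illustration of this last point – a sketch of ours, not part of the authors' argument, and assuming a deliberately simplified neutral model – one can let every mutation, whatever its type, fix with the same small probability and observe that the accumulated substitutions still mirror the 3:1 transition:transversion input bias, i.e., that mutation alone can impart a statistical direction to molecular change:

```python
import random

TRANSITION = {"A": "G", "G": "A", "C": "T", "T": "C"}
TRANSVERSIONS = {"A": "CT", "G": "CT", "C": "AG", "T": "AG"}

def new_mutation(base, kappa=3.0):
    """Draw a mutation with transitions kappa times as frequent, in aggregate,
    as transversions (kappa = 3 is the figure cited in the text)."""
    if random.random() < kappa / (kappa + 1.0):
        return TRANSITION[base]
    return random.choice(TRANSVERSIONS[base])

def count_fixed_substitutions(n_mutations=200_000, pop_size=1_000, seed=0):
    """Neutral fixation: every mutation fixes with probability 1/(2N),
    indiscriminately with respect to its type."""
    random.seed(seed)
    p_fix = 1.0 / (2 * pop_size)
    ti = tv = 0
    for _ in range(n_mutations):
        base = random.choice("ACGT")
        mutant = new_mutation(base)
        if random.random() < p_fix:
            if mutant == TRANSITION[base]:
                ti += 1
            else:
                tv += 1
    return ti, tv

ti, tv = count_fixed_substitutions()
print(f"fixed transitions: {ti}, fixed transversions: {tv}, ratio ≈ {ti / max(tv, 1):.1f}")
```

Whether such a bias deserves the label "force" then depends, as the chapter argues, on whether the corresponding generalisations turn out to be stable and portable in the sense discussed above.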
Selective causes have the effect of increasing or reducing, in a population, the frequency of an allele depending on its overall contribution to the survival and reproduction of the individual organisms expressing it phenotypically. Such causes may be instantiated, as in the case of drift, in an impressive variety of ecological and developmental ways. It is arguably difficult to provide a satisfactory definition of the notion of ecological fitness that could be used to characterise the "discriminateness" of selection against the "indiscriminateness" of drift; nonetheless, it is precisely this shared feature of selective factors that biologists refer to when they talk about selection as a cause. Acknowledgments We would like to thank Victor Luque, Elliott Sober, Luciana Zaterka and two anonymous reviewers for useful comments. We acknowledge the financial support of the Fondo Nacional de Desarrollo Científico y Tecnológico de Chile (Grant N° 1171017 FONDECYT REGULAR). Lorenzo Baravalle acknowledges the financial support of the Fundação de Amparo à Pesquisa do Estado de São Paulo, FAPESP (Grant N° 2017/24766-3), the Conselho Nacional de Desenvolvimento Científico e Tecnológico do Brasil, CNPq (Grant N° 402619/2016-1) and the Fundação para a Ciência e a Tecnologia de Portugal, FCT (Contract N° DL57/2016/CP1479/CT0064). Davide Vecchi acknowledges the financial support of the Fundação para a Ciência e a Tecnologia, FCT (Contract N° DL57/2016/CP1479/CT0072; Grant N° UID/FIL/00678/2019; BIODECON R&D Project grant PTDC/IVC-HFC/1817/2014).
References Beatty, J. (1984). Chance and natural selection. Philosophy of Science, 51, 183–211. Beatty, J. (1992). Random Drift. In E. K. Keller & E. A. Lloyd (Eds.), Keywords in evolutionary biology (pp. 273–381). Cambridge, MA: Harvard University Press. Blanchard, T., Vasilyeva, N., & Lombrozo, T. (2018). Stability, breadth and guidance. Philosophical Studies, 175, 2263–2283. Brandon, R. N. (2006). The principle of drift: Biology’s first law. Journal of Philosophy, 103, 319–335. Brandon, R. N. (2010). A non-newtonian model of evolution: The ZFEL view. Philosophy of Science, 77, 702–715. Caponi, G. (2014). Leyes sin causas y causas sin ley en la explicación biológica. Bogotá: Universidad Nacional de Colombia. Charlesworth, B. (2009). Effective population size and patterns of molecular evolution and variation. Nature Reviews, Genetics, 10, 195–205. Dobzhansky, T. (1937). Genetics and the origin of species. New York: Columbia University Press. Dobzhansky, T., & Pavlovsky, O. (1957). An experimental study of interaction between genetic drift and natural selection. Evolution, 11, 311–319. Dowe, P. (2000). Physical causation. Cambridge, NY: Cambridge University Press. Endler, J. A. (1986). Natural selection in the wild. Princeton: Princeton University Press. Filler, J. (2009). Newtonian forces and evolutionary biology: A problem and solution for extending the force interpretation. Philosophy of Science, 76, 774–783. Fisher, R. A. (1921). Review of the relative value of the processes causing evolution. The Eugenics Review, 13, 467–470. Fisher, R. A., & Ford, E. B. (1947). The spread of a gene in natural conditions in a colony of the moth Panaxia Dominula. Heredity, 1, 143–174. Futuyma, D. J. (2005). Evolution. Sunderland: Sinauer.
Gildenhuys, P. (2009). An explication of the causal dimension of drift. British Journal for the Philosophy of Science, 60, 521–555. Gillespie, J. H. (2004). Population genetics: A concise guide. Baltimore: John Hopkins University Press. Hartl, D. L., & Clark, A. G. (2007). Principles of population genetics (4th ed.). Sunderland: Sinauer. Hitchcock, C., & Velasco, J. D. (2014). Evolutionary and Newtonian forces. Ergo, 1. Hitchcock, C., & Woodward, J. (2003). Explanatory generalisations, part II: Plumbing explanatory depth. Noûs, 37, 181–199. Hodge, J. (1987). Biology and philosophy (including ideology): A study of Fisher and Wright. In S. Sarkar (Ed.), The founders of evolutionary genetics (pp. 231–285). Dordrecht: Springer. Jammer, M. (1956). Concepts of force: A study in the foundations of dynamics. New York: Dover. Kitcher, P. (1989). Explanatory unification and the causal structure of the world. In P. Kitcher & W. C. Salmon (Eds.), Scientific explanation (pp. 410–505). Minneapolis: University of Minnesota Press. Lewis, D. K. (1986). Philosophical papers (Vol. 1). Oxford: Oxford University Press. Luque, V. (2016). Drift and evolutionary forces. Theoria: Revista de Teoría, Historia y Fundamentos de la Ciencia, 31, 397–410. Matthen, M., & Ariew, A. (2002). Two ways of thinking about fitness and natural selection. Journal of Philosophy, 99, 55–83. Millstein, R. L. (2002). Are random drift and selection conceptually distinct? Biology and Philosophy, 17, 33–53. Millstein, R. L. (2005). Selection vs drift: A response to Brandon’s reply. Biology and Philosophy, 20, 171–175. Millstein, R. L. (2006). Natural selection as a population-level causal process. The British Journal for the Philosophy of Science, 57, 627–653. Ohta, T. (2012). Drift: theoretical aspects. In Encyclopedia of Life Sciences. Chichester: John Wiley & Sons, Ltd. http://www.els.ne0; https://doi.org/10.1002/9780470015902.a0001772. pub3. (last accessed 05/09/2018). Pence, C. H. (2016). Is genetic drift a force? Synthese, 194, 1967–1988. Plutynski, A. (2007). Drift: A historical and conceptual overview. Biological Theory, 2, 156–167. Reisman, K., & Forber, P. (2005). Manipulation and the causes of evolution. Philosophy of Science, 72, 1113–1123. Rice, S. H. (2004). Evolutionary theory: Mathematical and conceptual foundations. Sunderland: Sinauer. Roffé, A. (2017). Genetic drift as a directional factor: Biasing effects and a priori predictions. Biology and Philosophy, 32, 535–558. Salmon, W. (1986). Scientific explanation and the causal structure of the world. Princeton: Princeton University Press. Shapiro, L., & Sober, E. (2007). Epiphenomenalism – The Dos and the Don’ts. In G. Wolters & P. Machamer (Eds.), Thinking about causes: From Greek philosophy to modern physics (pp. 235–264). Pittsburgh: University of Pittsburgh Press. Sklar, L. (2013). Philosophy and the foundations of dynamics. Cambridge, UK: Cambridge University Press. Sober, E. (1984). The nature of selection. Cambridge, MA: The MIT Press. Stephens, C. (2004). Selection, drift, and the “forces” of evolution. Philosophy of Science, 71, 550–570. Stephens, C. (2010). Forces and causes in evolutionary theory. Philosophy of Science, 77, 716–727. Stoltzfus, A., & McCandlish, D. M. (2017). Mutational biases influence parallel adaptation. Molecular Biology and Evolution, 34, 2163–2172. Walsh, D. M., Lewens, T., & Ariew, A. (2002). The trials of life: Natural selection and random drift. Philosophy of Science, 60, 195–220. Weslake, B. (2010). Explanatory depth. 
Philosophy of Science, 77, 273–294.
Woodward, J. (2003). Making things happen. Oxford: Oxford University Press. Woodward, J. (2006). Sensitive and insensitive causation. Philosophical Review, 115, 1–50. Woodward, J. (2010). Causation in biology: Stability, specificity and the choice of levels of explanation. Biology and Philosophy, 25, 287–318. Woodward, J. (2016). Unificationism, explanatory internalism, and autonomy. In M. Couch & J. Pfeifer (Eds.), The philosophy of Philip Kitcher (pp. 121–146). Oxford: Oxford University Press. Woodward, J., & Hitchcock, C. (2003). Explanatory generalizations, part I: A counterfactual approach. Nôus, 37, 1–24. Wright, S. (1932). Evolution in Mendelian populations. Genetics, 16, 97–159. Ylikoski, P., & Kuorikoski, J. (2010). Dissecting explanatory power. Philosophical Studies, 148, 201–219. Yu, N., Jensen-Seaman, M. I., Chemnick, L., Ryder, O., & Li, W. H. (2004). Nucleotide diversity in gorillas. Genetics, 166, 1375–1383.
Chapter 10
Laws, Models, and Theories in Biology: A Unifying Interpretation
Pablo Lorenzano and Martín Andrés Díaz
10.1 Introduction
Three metascientific concepts subject to philosophical analysis are law, model, and theory. Throughout the twentieth and twenty-first centuries, three general conceptions of scientific theories can be identified: the "classical (or received)" view, the "historical (or historicist)" view, and the "semantic (or model-theoretic)" view. For the classical view, in its most general approach, theories should be represented as deductively or axiomatically organized sets of statements. Laws, in turn, are an essential component of theories: they constitute the axioms by means of which theories are metatheoretically represented (Carnap 1939, 1956). In the beginnings of the classical view, models were conceived as marginal phenomena of science (Carnap 1939). Subsequent authors (Braithwaite 1953; Nagel 1961) strove to incorporate models into the framework of this classical view and to recognize their importance. Historicist philosophers of science, with their alternative notions to the classical concept of theory (pattern of discovery in Hanson 1958, ideal of natural order in Toulmin 1961, paradigm or disciplinary matrix in Kuhn 1970, research program in Lakatos 1970, and research tradition in Laudan 1977), hint at a certain conception
of laws different from the classical one. At the same time, alternative proposals to the classical view were developed, which highlight the function of models in scientific practice (Achinstein 1968; Hesse 1966; Harré 1970), as well as investigate what role analogies and metaphors play in the construction of models (Black 1962; Hesse 1966), or of other components, linked to these, raised by historicist philosophers, such as exemplars (Kuhn 1970). At the present time, the importance of models in scientific practice is being emphasized. The semantic view—which deals with the subject matter of models within the framework of a general conception of scientific theories—has been establishing itself as an alternative to the classical and historicist views of scientific theories,1 and model views of science are being developed—views which deal with questions of the relationship between models and experience and between models and general theories independently of a general metatheory of science (Cartwright et al. 1995; Morrison 1998, 1999; Cartwright 1999; Suárez and Cartwright 2008). According to the semantic view of theories, concepts relating to models are much more fruitful for the philosophical analysis of theories, their nature, and function than concepts relating to statements; the nature, function, and structure of theories can be better understood when their metatheoretical characterization, analysis, or reconstruction is centered on the models that they determine and not on a particular set of axioms or linguistic resources through which they do it.2 Therefore, the most fundamental component for the identity of a theory is a class (set, population, collection, family) of models. With the emphasis on models, one might think not only that the term, or the concept, of "law" can be disposed of3 but also that the issue of laws should not be discussed. However, models must be identified in some way. And in the semantic view this is usually done through the laws or principles or equations (what they are called is the least important issue) of the theory to which they belong (thus, models would constitute the semantic or model-theoretic counterpart of such laws or principles or equations). On the other hand, for model views, models do not form part of theories (in some usual, encompassing sense of the term), and they are independent—"autonomous"—of them. Yet, models would also be represented, or identified, by means of principles, equations or laws, although not universally so. The aim of this chapter is to present the explication of the concepts of law, model, and theory, and of their relationships, made within the framework of Sneedian or metatheoretical structuralism, and their application to a case from the realm of biology: population dynamics. The analysis carried out will make it possible to support, contrary to what some philosophers of science in general and of biology in
1 "Over the last four decades, the semantic view of theories has become the orthodox view on models and theories" (Frigg 2006, p. 51). 2 This idea has been developed in different particular ways, giving rise to different approaches, variants, or versions, which despite their differences constitute a family, the semantic family. For a characterization of this family and of some of its members, as well as a reference to many of them, see Lorenzano (2013) and Ariza et al. (2016). 3 For skeptical positions about any notion of law and the substitution of the term "law" by another notion, such as "(fundamental) equations" or "(basic) principles," see Cartwright (1983, 2005), Giere (1995), and van Fraassen (1989). By the way, Carnap himself had already considered the possibility of dispensing with the term "law" in physics (Carnap 1966, p. 207).
particular hold, the following claims: (a) there are "laws" in the biological sciences, (b) many of the heterogeneous and different "models" of biology can be accommodated under some "theory", and (c) this is exactly what confers great unifying power on biological theories. To begin with, the structuralist explication of the metascientific concepts of law, model, and theory and their application to population dynamics4 will be presented successively. Next, the relevance of the previous analysis to the issues of the existence of laws in the biological sciences, the place of models in theories of biology, and the unifying power of biological theories will be stressed. Finally, the chapter will conclude with a discussion of the presented analysis.
10.2 The Concept of Law from the Point of View of Metatheoretical Structuralism5
In the scientific as well as the philosophical literature, many authors speak not just plainly of laws but of natural laws, or laws of nature, on the one hand, and of scientific laws, or laws of science, on the other. Moreover, such expressions are commonly used as if the expressions belonging to one pair were interchangeable with the expressions belonging to the other pair, i.e., as if they were synonymous or had the same meaning. However, we consider it convenient to distinguish the first pair from the second one, since they correspond to different approaches or perspectives (e.g., Weinert 1995). The first pair corresponds to an approach of an ontological kind—concerning how things themselves are—while the second one corresponds to an approach of an epistemic kind—centered on what we know. Some philosophers have argued that a philosophical treatment of laws should be given only for the laws of nature and not for the laws of science, while others consider it more appropriate to refer to the laws of science than (only) to the laws of nature because, in any case, it is the laws of science that would provide important keys to understanding what a law of nature is. In what follows, when we speak about laws, we will be talking about scientific laws, or laws of science, and not about natural laws, or laws of nature.6 At least since 1930, the problem of what a law is, i.e., the problem of finding the necessary and sufficient criteria or conditions which a statement should satisfy in order to be considered, or in order to function as, a law, has been discussed. According to the classical view (Hempel and Oppenheim 1948), a law is a true lawlike statement that has the following properties: it is universal with an unlimited
4 The analysis of population dynamics is based on Díaz and Lorenzano (2017). 5 See Balzer et al. (1987) for a complete and technically precise presentation of this metatheory. 6 For a more extensive discussion about the nature of laws as well as an analysis of natural laws within the framework of metatheoretical structuralism, see Forge (1986, 1999) and Lorenzano (2014–2015).
or at least unrestricted scope of application; it does not refer explicitly or implicitly to particular objects, places, or specific moments; it does not use proper names; and it uses only predicates that are "purely universal in character" (Popper 1935, sects. 14–15) or "purely qualitative" (Hempel and Oppenheim 1948, p. 156). Despite successive and renewed efforts, there is no satisfactory set of precise necessary and sufficient conditions serving as a criterion for a statement to be considered a "(scientific) law."7 The discussions held in the field of general philosophy of science have also taken place in the special field of philosophical reflection on biology and its different areas, such as classical genetics, population genetics, evolutionary theory, and ecology, among others. Some authors deny the existence of laws in biology in general, and in ecology in particular. Two main arguments have been put forward against the existence of laws in biology. The first one is based on the alleged locality or non-universality of generalizations in biology (Smart 1963); the second one is based on the alleged (evolutionary) contingency of biological generalizations (Beatty 1995). At least three responses to these arguments can be found. The first one consists in submitting them to a critical analysis. This approach is chosen by Ruse (1970), Munson (1975), and Carrier (1995), among others. The second one is to defend the existence of laws, or principles, in biology while arguing that they are non-empirical, a priori. This strategy is followed by Brandon (1978, 1981, 1997), Sober (1984, 1993, 1997), and Elgin (2003). The third one is to defend the existence of empirical laws, or principles, in biology while arguing for a different explication of the concept of law or of non-accidental, counterfactual-supporting generalizations (Carrier 1995; Mitchell 1997; Lange 1995, 2000; Dorato 2005, 2012; Craver and Kaiser 2013). Our proposal will be of this third kind, but developed in such a manner that it will allow us to consider "theoretical pluralism," "relative significance" controversies and some kind of contingency as not exclusive to biology (agreeing with Carrier 1995 on this) and to understand better the role played by different laws or lawlike statements of different degrees of generality in biology (capturing some of the points made by Ruse 1970 and Munson 1975), as well as the "a priori" component pointed out by Brandon, Sober, and Elgin.8 With respect to the existence of laws in ecology in particular—and taking into account the classical proposal of differentiating between two types of genuine laws: on the one hand, laws of unlimited, unrestricted scope or fundamental laws and, on the other, laws of limited, restricted scope or derivative laws that would follow from more fundamental laws (Hempel and Oppenheim 1948, p. 154)—we must distinguish the claim that there are no laws in ecology at all, which is hardly tenable given at least the so-called "Malthus law" of exponential population growth (Ginzburg 1986; Turchin 2001; Berryman 2003), from the more often asserted and discussed claim
7 See Stegmüller (1983) and Salmon (1989) for an analysis of the difficulties of the classical explication of the notion of scientific law. 8 For a more detailed discussion of the first two kinds of responses, see Lorenzano (2006, 2007, 2014–2015) and Díez and Lorenzano (2013, 2015).
that there are no fundamental and/or general nomological principles in ecology (see e.g. Peters 1991; Lawton 1999). Ecologists and philosophers of ecology have taken part in the debate, resulting in a large number of publications on the subject. Following Linquist et al. (2016), we can distinguish three main reactions to the denial of the existence of fundamental laws in ecology, similar to those we distinguished in the case of biology in general: to take issue with the kind of evidence used to justify Lawton's skeptical claims (Linquist 2015), to cite examples of candidates for laws (Murray Jr. 1979, 1992, 1999, 2000, 2001; Turchin 2001; Berryman 2003; Ginzburg and Colyvan 2004), or to reject the concept of law that Lawton and other skeptics employ (Cooper 1998; Colyvan and Ginzburg 2003; Lange 2005). Our position will be, again, of this last kind. And, once more, it will be developed in such a way as to allow a better understanding of the role played by different laws or lawlike statements of different degrees of generality in ecology. With respect to the non-universality of biological generalizations, we contend that universality is too demanding a condition. What matters is not strict universality but rather the existence of at least non-accidental, counterfactual-supporting generalizations, which we take to be uncontroversially present in biology, though generally more domain-restricted and ceteris paribus than in other areas of science such as mechanics or thermodynamics. Like many philosophers of biology and of physics, we also accept a broader sense of lawhood that does not require non-accidental generalizations to be universal and exceptionless in order to qualify as laws. This minimal characterization of laws as counterfactual-supporting facts is similar to the one defended in Dorato (2012), and it is also compatible with some proposals about laws in biology in particular, such as the "paradigmatic" (Carrier 1995) and "pragmatic" (Mitchell 1997) ones. Whether one wants to call these non-accidental, domain-restricted generalizations "laws" is a terminological issue we will not enter into here. What matters, whatever one calls them, is that these non-accidental generalizations play a key role in biology in general and in ecology in particular. We will show this in the case of population dynamics (PD). Within the structuralist tradition, when dealing with the subject of laws, discussions, even from their beginnings with Sneed (1971), though not with that terminology, focus on those scientific laws which, starting with Stegmüller (1973), are called "fundamental laws" of a theory. However, given the problems involved in finding a definition of the concept of a law, when the criteria for a statement to be considered a fundamental law of a theory are discussed within the framework of metatheoretical structuralism, one tends to speak rather of "necessary conditions" (Stegmüller 1986, p. 93), of "weak necessary conditions" (Balzer et al. 1987, p. 93), or, better still, only of "symptoms", some of them even formalisable (Moulines 1991, p. 233), although in each particular case of reconstruction of a given theory it seems, as a general rule, relatively easy to agree, on the basis of informal or semi-formal considerations (for example, on its systematizing role or its quasi-vacuous character), that a given statement should be taken as the fundamental law of the theory in question (Moulines 1991, p. 233).
On the other hand, metatheoretical structuralism draws a distinction between the so-called fundamental laws (or guiding principles) and the so-called special laws. This distinction, which will be developed later, elaborates in a different way both the classical distinction between two kinds of laws with different degrees of generality and the Kuhnian distinction between symbolic generalizations — "generalization-sketches" (Kuhn 1974), "schematic forms" (Kuhn 1974), "law sketches" (Kuhn 1970, 1974) or "law-schema" (Kuhn 1970) — and their "particular symbolic forms" (Kuhn 1974), adopted for application to particular problems in a detailed way.9 Very briefly, five criteria can be mentioned as necessary conditions, weak necessary conditions or "symptoms" for a statement to be considered a fundamental law/guiding principle in the structuralist sense:
1. Its cluster or synoptic character. This means that a fundamental law should include "all the relational terms (and implicitly also all the basic sets) and, therefore, at the end, every fundamental concept that characterize such a theory" (Moulines 1991), "several of the magnitudes," "diverse functions," "possibly many theoretical and non-theoretical concepts" (Stegmüller 1986), "almost all" (Balzer et al. 1987), "at least two" (Stegmüller 1986).
2. To be valid in every intended application. According to this, it is not necessary that fundamental laws have an unlimited scope, apply every time and everywhere, and possess as their universe of discourse something like a "big application" constituting a single or "cosmic" model; rather, it suffices that they apply to partial and well-delimited empirical systems: the set of intended applications of the theory (Stegmüller 1986).10
3. Its quasi-vacuous character. This means that fundamental laws are highly abstract and schematic and contain essential occurrences of T-theoretical terms, which in the structuralist sense are terms whose extension can only be determined through the application of the theory's fundamental law(s),11 so that they can resist possible refutations, but which nevertheless acquire specific empirical content through a non-deductive process known as "specialization" (Moulines 1984).
4. Its systematizing or unifying role. Fundamental laws make it possible to include diverse applications within the same theory, since they provide a guide to and a conceptual framework for the formulation of other laws (the so-called "special laws"), which are introduced to impose restrictions on the fundamental laws and thus apply to
9 On the other hand, the expressions "fundamental law" and "special law" are not used here in Fodor's sense (Fodor 1974, 1991)—the former for laws of basic or fundamental sciences, the latter for laws of special sciences—but rather in the sense used by structuralists, i.e., for different kinds of laws within a theory. 10 The validity of laws can be regarded as exact—and thus as strict or non-interferable laws—or, rather, to the extent that they usually contain not only abstractions but also various idealizations, as approximate, as already pointed out by Scriven (1959) and more extensively by Cartwright (1983)—and so as non-strict or interferable laws, and compatible with various specific treatments of this situation, such as those referring to ceteris paribus clauses (Cartwright 1983), "provisos" (Coffa 1973 and Hempel 1988), or "normicity" (Schurz 2009). 11 For more on the structuralist T-theoretical/T-non-theoretical distinction, see Sect. 10.3.
particular empirical systems (Moulines 1984).12 It is clear that the distinction between fundamental and special laws is relative to the theory considered.
5. To possess modal import. Fundamental laws express non-accidental regularities and are able to give support to counterfactual statements (if they are taken "together-with-their-specializations" within a theory, in the sense of theory-net that we will introduce later), even though they are context-sensitive and have a local domain of application; in its minimal sense, what is attributed is not natural necessity but necessity of the laws, and, in that sense, they should be considered necessary in their area of application, even when outside such an area things need not be that way (Lorenzano 2014–2015; Díez and Lorenzano 2013).
Fundamental laws/guiding principles are "programmatic" or heuristic in the sense that they tell us the kind of things we should look for when we want to explain a specific phenomenon. But, as said before, taken in isolation, without their specializations, they say empirically very little. They can be considered, when taken alone, "empirically non-restricted." In order to be tested/applied, fundamental laws/guiding principles have to be specialized ("concretized" or "specified"). These specific forms adopted by the fundamental laws are the so-called "special laws." It is worth emphasizing that the top-bottom relationship established between laws of different levels of generality is not one of implication or derivation but of specialization in the structuralist sense (Balzer et al. 1987, Chap. 4): bottom laws are specific versions of top ones, i.e., they specify some functional dependencies (concepts) that are left partially open in the laws above. That is the reason why they are called "special laws" instead of "derivative laws", as in the classical view of laws, according to which the laws with a more restricted or limited scope are assumed to be logically derived or deduced from the fundamental laws. Actually, "special laws" are not literally derived or deduced from the fundamental laws (or at least not from them alone) without considering some additional premises. Formally speaking, the specialization relation is reflexive, antisymmetric, and transitive and does not meet the condition of monotonicity. When the highest degree of concretization or specificity has been reached, i.e., when all functional dependencies (concepts) are completely concretized or specified, "terminal special laws" are obtained. Special laws of this kind, proposed to account for particular empirical situations, can be seen as particular, testable and, eventually, refutable hypotheses to which to direct "the arrow of modus tollens" (Lakatos 1970, p. 102).
10.2.1 Laws in Population Dynamics
Population dynamics is a part of population ecology that studies the variation in the number of individuals in a population over time. Although it began during the nineteenth century with the works of Malthus and Verhulst, we can affirm that it had
12 To put it in model-theoretic terms, fundamental laws determine the whole class of models of a theory, while special laws determine only some of them, which constitute a subclass of the class of models.
its formal beginnings in the first decades of the twentieth century, when ecologists adopted a vision centered on populations, by means of which they tried to explain, using mathematical equations, their behavior over time. The most important developments of population dynamics were made when the equations of competition and predation of Lotka (1925) and Volterra (1926), and the curves of Pearl (1925), among others, were added to the exponential equations of Malthus (1798) and the logistic equation of Verhulst (1845). These authors sought to determine how the number of individuals, or size of the population, was modified based on certain characteristics of the species and factors external to the population. Population dynamics (PD) studies the changes in population size over time. It is about populations, defined as the set of organisms of the same species that inhabit a given area, whose sizes (N), i.e., the number of individuals, change (ΔN) over time (from t to t + 1) (i.e., from Nt to Nt + 1). It talks about populations (POP)—organisms (O), or, better, sets of organisms Pot(O) belonging to the same species, which make up populations, where pop symbolizes a population (pop ⊆ Pot(O)) and POP the set of populations such that pop ∈ POP—and about the size of populations (N) that changes (ΔN) over time (from t to t + 1) (i.e., from Nt to Nt + 1), depending on the demographic processes (DP) of birth (B), death (D), immigration (I), and emigration (E). The connections between its different components can be graphically represented in the following way (see Fig. 10.1). Population dynamics (PD) intends to account for changes in population size over time, i.e., for organisms that make up a population in which a set of demographic processes occur (birth, death, emigration, and immigration) that determine that the size of the population is modified in a given interval of time. Examples of cases of closed populations (i.e., of populations in which neither immigration nor emigration occur) are the following:
1. The post-glacial expansion of the Corylus avellana population in Norfolk, UK
2. The decrease of the population of the North Atlantic northern right whale (Eubalaena glacialis)
3. The post-glacial expansion of the Pinus sylvestris population in Norfolk, UK
4. The fluctuation of the Australian sheep blow-fly (Lucilia cuprina)
5. The growth of the adult carob tree (Prosopis flexuosa) population
Fig. 10.1 Connections between the different components of population dynamics: a population popt = {o1, o2, …, ok}, the demographic processes DP = {B, D, I, E} acting over the interval ΔT from t (size nt) to t + 1 (size nt+1), and the resulting population popt+1 of size Nt + B − D + I − E.
6. The competitive system of the species Drosophila willistoni and D. pseudoobscura
7. The interspecific competition between juvenile Hobsonia florida (Polychaeta, Ampharetidae) and oligochaetes
8. The cycling of the predator and prey populations of the Canada lynx (Lynx canadensis) and its principal prey, the snowshoe hare (Lepus americanus)
For every specific case, ecologists have to postulate the specific biotic and other population factors13 that act together with specific demographic factors, as well as the specific manner of their combination in the rate of population change,14 that accounts for the observed (or estimated) change in population size. This means that, in order to account for the observed (or estimated) change in population size, the following parameters have to be specified:
(i) the types and numbers of components of the rate of population change (RPC) acting together (i.e., types and numbers of demographic processes (DP) and of population factors ((Fi)i ≤ k)), and
(ii) (a) the specific mathematical form assumed by RPC and (b) the specific mathematical form assumed by the equation in which all basic concepts occur (be it continuous, discrete, exponential, linear, logistic, etc.).
Thus, ecologists propose equations, which contain these two types of specifications.
13 There are different ways of classifying the different types of factors that affect demographic processes. For instance, Krebs (2008) distinguishes two types of factors, external and internal, while Berryman (2003) distinguishes three types of factors, biotic, genetic, and abiotic. We follow this tripartite distinction but with a slightly different denomination. So, the members of the set of population factors ((Fi)i ≤ k) are the so-called set of environmental factors (Famb) (such as physical and chemical conditions of the environment and resources), the set of genetic factors (Fgene) (related to genetic properties of the distinct individuals that place them in different classes with respect to age, sex, or size), and the set of biotic (or biological) factors (Fbio) (variables or "parameters" like carrying capacity, competition coefficient, capture efficiency, and conversion efficiency of predator, which measure biotic interactions such as predation, parasitism, and intra- or interspecific competition). While the extension of the concepts of environmental and genetic factors can be determined independently of population dynamics, and these are thus PD-non-theoretical, the extension of most biotic factors can only be determined by means of PD, and in that sense a subset of them are PD-theoretical. 14 The rate of population change (RPC) is a functional whose domain is the set of demographic processes (DP), the size of the population (Nt), and the whole set of population factors ((Fi)i ≤ k) at an instant of time (T), and whose co-domain is the set of real numbers: RPC: DP × Nt × (Fi)i ≤ k × T → ℝ. It is definitely a PD-theoretical concept: its extension can only be determined by applying PD itself.
The following are the specifications introduced to account for the above examples and the respective equations to which they give rise15:
1. The post-glacial expansion of the Corylus avellana population is (i) not affected by population factors and (ii) has no population limitations, with continuous and exponential variation: dN/dt = (b − d) ⋅ nt (Bennett 1983).
2. The population of the North Atlantic northern right whale is (i) affected by genetic factors and has (ii) continuous and exponential variation: dN/dt = [ln(∑x = 0,…,k b(x)d(x))/G] ⋅ nG (Caswell et al. 1999).
3. The post-glacial expansion of the Pinus sylvestris population in Norfolk, UK, is (i) affected by environmental and biotic factors and has (ii) carrying capacity and continuous logistic variation with linear dependence between b, d and Nt: dN/dt = [(b − d)(1 − nt/K)] ⋅ nt (Bennett 1983).
4. The population of Australian sheep blow-flies is (i) affected by environmental and biotic factors and has (ii) carrying capacity and continuous logistic variation with linear dependence between b, d and the population size at an instant of time previous to t: dN/dt = [(b − d)(1 − n(t − τ)/K)] ⋅ nt (Nicholson 1957).
5. The population of adult carob trees is (i) affected by environmental and biotic factors (carrying capacity and critical population size) and follows (ii) carrying capacity and continuous logistic variation with non-linear dependence between b, d, Nt and C: dN/dt = [(b − d)(1 − nt/K)(1 − C/nt)] ⋅ nt (Aschero and Vázquez 2009).
6. The competitive system of populations of the species Drosophila willistoni and D. pseudoobscura is (i) affected by environmental and biotic factors and has (ii) carrying capacity and continuous logistic variation with linear dependence between b, d and Nt: dN/dt = [(b − d)(1 − (nt/K)θ)] ⋅ nt (Gilpin and Ayala 1973).
7. The interspecific competition between populations of juvenile Hobsonia florida and oligochaetes is (i) affected by biotic factors (populations of other species with carrying capacity and competition factors) and follows (ii) carrying capacity and continuous logistic variation with dependence of b and d on Nt1, K1, Nt2 and α: dN1/dt = (b1 − d1)[(K1 − nt1 − αnt2)/K1] ⋅ nt1 (Gallagher et al. 1990).
8. The cycling of the predator and prey populations of the Canada lynx and its principal prey, the snowshoe hare, is (i) affected by biotic factors (populations of other species with capture efficiency and conversion efficiency of predator) and follows (ii) exponential continuous variation and therefore demographic
15 It is worth noting, in relation to the equations mentioned in these examples, that different authors usually designate the same base sets or functions of population dynamics by different terms and symbols. For N and the demographic processes (b, d, i, and e) usage is general and uniform, but for the population factors there is much diversity of terms and symbols among different authors, especially for what we label the "rate of population change" (and symbolize by RPC instead of by r, as in Turchin 2001, p. 19), as well as in the presentation of the so-called "Lotka-Volterra equations." This generates the coexistence of different terminology and symbology for the same factors or rates, and the erroneous impression that the concepts, as well as the distinct formulations of the equations in which they occur, are different. In order to avoid this impression and ambiguity, we have unified the use of terms and symbols.
processes without dependence on population size: dN/dt = [(b − d) − αP] ⋅ nt (for the prey) and dN/dt = [(−d) + βV] ⋅ nt (for the predator) (Gotelli 2001).
In the terminology of metatheoretical structuralism, each specific equation, which applies to a particular case, should be considered a special law; moreover, to the extent that all concepts are completely concretized or specified, each equation should be seen as a terminal special law. In all these cases, it happens that the value of the rate of population change (RPC), calculated from the values of the chosen types and numbers of population factors ((Fi)i ≤ k) and multiplied by the size of the population at instant t (or the initial instant) (Nt), matches or fits (exactly or approximately) the values of the changes in population size over time (ΔN). So far, we have identified some of the different equations or special laws that have been proposed in population dynamics. In order to try to identify some fundamental law/guiding principle in population dynamics, the strategy we will use is to ask what all the different equations/special laws of PD have in common. It is worth noting that the key metatheoretical question is not "from what fundamental laws or general principles or equations are all specific equations or special laws of PD deduced?" but "what do all specific equations/special laws of PD have in common?" Answering this question is not only a feasible task; it will also shed light on the relationship between the laws of PD and, moreover, as we shall see later, on the relationship between the models of PD and PD as a theory, in the sense of a theory-net, as well as on the unifying power of PD in particular and of theories in general. One might respond to this question by denying that there is one particular feature (or set of features) that all equations/special laws of PD share and argue that the case of ecological laws is analogous to Wittgenstein's games (1953, § 66 and ff.): what ties different equations/special laws together and what makes them belong to PD is some kind of family resemblance between them rather than the existence of a fixed set of shared features providing necessary and sufficient conditions for membership. However, this answer begs another question, because we still want to know in what sense the different equations/special laws of PD are similar to each other. It seems unlikely that the desired similarities can be read off from their mere appearance, and this is all that the Wittgensteinians can appeal to. Moreover, what matters is not that they are similar to each other in appearance but rather that they share certain structural features: the equations/special laws of PD possess the same structure (are of the same logical type), meaning that they are all specifications/specializations of one and the same fundamental law/guiding principle of PD. And thus, as we shall see later, they form a theory or, better, a theory-net, the theory-net to which they all belong. In specific PD applications only specific laws or equations appear, and that is all that we have in standard textbooks. However, we would like to suggest that they are specific versions of a general, fundamental law or a guiding principle for
the application in question. Nevertheless, in contrast to other empirical theories, such as classical particle mechanics or thermodynamics in physics,16 the fundamental law/guiding principle of PD is not explicitly stated in the standard literature; it is only "implicit" there. Thus, population dynamics (PD) is guided by a general guiding principle implicitly presupposed in its specific applications. Roughly, this fundamental law/guiding principle (PDGP) states the following:

PDGP: The change in the size of a population in a given time interval is due to the presence of a set of (types of) demographic processes (birth, immigration, death, emigration), i.e., demographic functions, and of a set of (types of) population factors (environmental, genetic, and biotic), i.e., population factors, such that the observed (or estimated) change in the size of the population in a certain period of time corresponds to the product of the rate of population change and the size of the population at the beginning of the time interval.

All the interconnected concepts of PD can be graphically depicted as follows (see Fig. 10.2, where, besides the components already present in Fig. 10.1, a symbolic representation of the set of (types of) population factors (Fi) appears at the top and the fundamental law/guiding principle of PD (PDGP), in which the concept of rate of population change (RPC) also occurs, appears at the bottom):
[Fig. 10.2 Graphical depiction of the elements of PD. Besides the components already present in Fig. 10.1 (the population popt = {o1, o2,…, ok} of size nt at instant t, the population popt+1 of size nt+1 = Nt + B − D + I − E at instant t + 1, the time interval T from t to t + 1, and the demographic processes DP = {B, D, I, E}), the figure shows, at the top, the set of (types of) population factors Fi = {Famb, Fbio, Fgen} (environmental, biotic, and genetic factors) and, at the bottom, the fundamental law/guiding principle ΔN = RPC(⟨df1,…, df4⟩, nt, ⟨fi,…, fx⟩, t) · Nt.]
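To make the common structure of these special laws tangible, here is a minimal Python sketch, assuming the two predator-prey equations cited from Gotelli (2001) and purely hypothetical parameter values and function names; it illustrates the schema ΔN = RPC · Nt, and is not a reconstruction taken from the PD literature.

# A minimal sketch, assuming the predator-prey special laws quoted above and
# hypothetical parameter values; it only illustrates the schema Delta_N = RPC * N_t.

def rpc_prey(b, d, alpha, P):
    # rate of population change for the prey: (b - d) - alpha * P
    return (b - d) - alpha * P

def rpc_predator(d, beta, V):
    # rate of population change for the predator: -d + beta * V
    return -d + beta * V

def simulate(V0, P0, b=1.0, d_prey=0.2, d_pred=0.8, alpha=0.02, beta=0.01,
             dt=0.01, steps=5000):
    # Euler integration: at each step, Delta_N = RPC * N_t * dt for each population
    V, P = V0, P0
    trajectory = [(0.0, V, P)]
    for k in range(1, steps + 1):
        dV = rpc_prey(b, d_prey, alpha, P) * V * dt
        dP = rpc_predator(d_pred, beta, V) * P * dt
        V, P = V + dV, P + dP
        trajectory.append((k * dt, V, P))
    return trajectory

for t, V, P in simulate(V0=50.0, P0=20.0)[::1000]:
    print(f"t = {t:5.1f}  prey = {V:8.2f}  predators = {P:8.2f}")

Whatever the particular parameter values, the structural point is the same: each line computing dV or dP instantiates "rate of population change multiplied by the current population size."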
16 For an analysis of these theories from a structuralist point of view, see, among others, Balzer et al. (1987).
As mentioned before, fundamental laws/guiding principles are programmatic/heuristic in the sense that they tell us the kind of things we should look for when we want to apply the theory to a specific phenomenon. In the case of the PD fundamental law/guiding principle, its heuristic character can be read as follows:

PDGP: When population size changes over time, look for (types of) biotic and other population factors (environmental and genetic) that, acting together with (types of) demographic processes (natality, immigration, mortality, emigration), determine a rate of population change which, multiplied by the population size at the initial time, "matches"/"fits" the observed (or estimated) change in population size.

As we already mentioned, in every specific case ecologists have to look for the specific biotic and other population factors that act together with specific demographic processes, and discover the specific manner of their combination in the rate of population change that accounts for the observed (or estimated) change in population size. This means that the PD fundamental law/guiding principle guides the process of specialization, since, as we saw before, in order to obtain special laws that account for the observed (or estimated) change in population size, one has to specify (i) the types and numbers of components of the rate of population change (RPC) that act together (i.e., the types and numbers of demographic processes and of population factors), and (ii) (a) the specific mathematical form assumed by RPC and (b) the specific mathematical form assumed by the fundamental law/guiding principle (be it continuous, discrete, exponential, linear, logistic, etc.).
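A brief sketch may help to make this two-step specification concrete. In the hypothetical Python code below, the function and parameter names are illustrative choices rather than anything fixed by the PD literature; the guiding principle appears as a single schema, delta_N, and different choices of the RPC component yield the exponential and logistic special laws.

# A sketch of specialization, under the assumption that choosing the components
# and mathematical form of RPC is what turns the guiding principle into a special law.

def rpc_exponential(N, b, d):
    # density-independent choice: RPC = b - d (N is unused, kept for a uniform signature)
    return b - d

def rpc_logistic(N, b, d, K):
    # density-dependent choice: RPC = (b - d) * (1 - N/K)
    return (b - d) * (1.0 - N / K)

def delta_N(rpc, N_t, dt, **factors):
    # the schema of the guiding principle: Delta_N = RPC(...) * N_t (over an interval dt)
    return rpc(N_t, **factors) * N_t * dt

N = 100.0
print(delta_N(rpc_exponential, N, dt=1.0, b=0.5, d=0.3))        # exponential special law
print(delta_N(rpc_logistic, N, dt=1.0, b=0.5, d=0.3, K=500.0))  # logistic special law

The same schema, differently specified, thus yields different terminal special laws, which is exactly the sense in which PDGP guides specialization.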
10.3 The Concept of Model from the Point of View of Metatheoretical Structuralism

The use of the term "model" is not restricted to scientific contexts. On the contrary, it is used in all kinds of everyday situations, to refer both to the thing "painted" or modeled and to the "painting" or model of some original. In the sciences it began to be used towards the end of the nineteenth century, through the allusion to "mechanical models" or, with different terminology, "mechanical analogies," proposed and discussed, among others, by Maxwell (1855, 1861), Thomson (1842, 1904), Boltzmann (1902), and Duhem (1906), or in the context of German physics, where the term "Bild" (in the singular) or "Bilder" (in the plural) was usual (Helmholtz 1894; Hertz 1894; Boltzmann 1905, who discussed "models" and developed a "Bild conception" of physics in particular and of science in general). Its use, however, was not limited to the field of physics, but extended to other domains of science, being of central importance in many scientific contexts. But neither in colloquial contexts nor in the diverse scientific contexts is the term "model" used in a unitary way; rather, it is an ambiguous, multivocal or polysemic expression that expresses more than one concept. At the same time, as we pointed out, it must be taken into account that different terms, such as the aforementioned
"Bild" and "analogy," have been used to refer to models. The term model applies to a bewildering set of objects, ranging from mathematical structures, graphical representations, and computer simulations to specific organisms or physical objects. And the means by which scientific models are expressed range from sketches and diagrams to ordinary text, graphics, and mathematical equations, just to name some of them. In biology in general and in ecology in particular the term model is used in different ways, with "models" naming different entities: the equations themselves, idealized representations of empirical systems, organisms or physical objects, among other things. Thus, in biology, for instance, it is standard practice to speak about the Lotka-Volterra model of predator-prey interaction, the double helix model of DNA, model organisms, or models "in vivo" or "in vitro."

On the other hand, different authors have proposed different typologies and classifications (neither necessarily exhaustive nor, much less, exclusive) in order to analyze models and to understand their nature and function in science. But although the literature in philosophy of science has concentrated mainly on the so-called "theoretical models" (Black 1962), not everyone has agreed on the exact role that models play in the empirical sciences, on their relevance for them, on their relation to laws and empirical theories, or on the eventual need to take them into consideration as components of the latter. Nowadays, as Jim Bogen states on the back cover of the book Scientific Models in the Philosophy of Science (Bailer-Jones 2009), "The standard philosophical literature on the role of models in scientific reasoning is voluminous, disorganized, and confusing." Despite this, one of the axes already mentioned that would enable the organization of at least part of such a literature, and with which the book ends, is what is identified as one of the "contemporary philosophical issues: how theories and models relate each other" (Bailer-Jones 2009, p. 208). On this issue, and regardless of differences in particular developments, we have two main positions: the aforementioned model views, for which models do not form part of theories (in some usual, encompassing sense of the term) but are rather independent, "autonomous," with respect to them, and the semantic view, for which the most fundamental component for the identity of a theory is a class (set, population, collection, family) of models. But within both model and semantic views there is an attempt to understand not only what models are, but also how they work and even how they are constructed, on the basis of detailed case studies belonging to different sciences.

Although other authors had already pointed out the importance of models in biology and had tried to analyze them (Beckner 1959; Beament 1960; Holling 1964; or, later, Schaffner 1980, 1986, 1993), Levins (1966) occupies a central place in the discussion about models and model building in biology in general and in ecology in particular. Since then, his proposal of a three-way trade-off between generality, realism, and precision, such that a model builder cannot simultaneously maximize all of these desiderata, has been much discussed (e.g., Orzack & Sober 1993; Levins 1993; Orzack 2005; Odenbaugh 2003; Weisberg 2006; Matthewson & Weisberg 2009).
Returning to the issue of the relationships between models and theories, the discussion in ecology continues nowadays: some authors strongly criticize the discipline, affirming that it is an accumulation of models without any connection (as in the case of Roughgarden 2009), whereas many ecologists maintain that there is a common core of biological assumptions (Cooper 2003; Levins 1993) and that differences in particular components result in a family of mathematical equations (or "models," as ecologists call them) (Cooper 2003). The latter has led many authors to argue that theoretical unification in ecology is possible through an iterative process that includes recognizing similarities between ostensibly competing models (Cooper 2003; Lange 2005; Sagoff 2003), developing a common theoretical framework, and constructing a new super model within that framework (Fox et al. 2011).

As would be expected of a member of the semantic family, the structuralist view shares with all the other family members the fundamental thesis of the centrality of models for metatheoretical analysis. On the other hand, it can differ from other members of the semantic family in its characterization of the precise nature of these entities called models, although it may eventually share this characterization with some of them. A model, in its minimal informal meaning, is a system or structure that intends to represent, in a more or less approximate way, a "portion of reality," made up of entities of various kinds, and that makes true a series of claims, in the sense that in this system "what the claims say occurs" or, more precisely, the claims are true in this system. Models are thus conceived as systems or structures, i.e., mathematical structures. In the standard version of metatheoretical structuralism, these structures are set-theoretical or relational structures of a certain kind,17 constituted by a series of basic domains (sets of objects) and of relations (or functions) over them, i.e., entities of the form 〈D1,…, Dk, R1,…, Rn〉, where Rj ⊆ Di1 × … × Dik (the Di's represent the so-called "base sets," i.e., the "objects" the theory refers to, its ontology, whereas
17 In trying to be as precise as possible, metatheoretical structuralism prefers the use of (elementary) set theory, whenever possible, as the most important formal tool for metatheoretical analysis. However, this formal tool is not essential for the main tenets and procedures of the structuralist representation of science (other formal tools, such as logic, model theory, category theory, and topology, as well as informal ways of analysis, are also used). Besides, there are also uses of a slight variant of the Bourbaki notion of "structure species" in order to provide a formal basis for characterizing classes of models by means of set-theoretic predicates (Balzer et al. 1987, Ch. 1), and of a version of a von Neumann-Bernays-Gödel-type language including urelements for providing a purely set-theoretical formulation of the fundamental parts of the structuralist view of theories (Hinst 1996). There is even a "categorial" version of metatheoretical structuralism that casts the structuralist approach in the framework of category theory rather than within the usual framework of set theory (see Balzer et al. 1983; Sneed 1984; Mormann 1996). The choice of one formal tool or another, or of a more informal way of analysis, is a pragmatic one, depending on the context, which includes the aim or aims of the analysis and the target audience. Nonetheless, in standard expositions of metatheoretical structuralism, as well as in the one presented here, models are conceived of as set-theoretical structures (or models in the sense of formal semantics), and their class is identified by defining (or introducing) a set-theoretical predicate, just as in the set-theoretical approach of Patrick Suppes (1957, 1969, 1970, 2002; McKinsey et al. 1953).
the Rj's are relationships or functions (set-theoretically) constructed out of the base sets).18

In order to provide a more detailed analysis of empirical science, metatheoretical structuralism distinguishes three kinds of (classes, sets, populations, collections or families of) models. Besides what are usually called (the class of) "theoretical models," or simply (the class of) "models," also called (the class of) "actual models" in structuralist terminology, the so-called (class of) "potential models" and (class of) "partial potential models" are taken into account. To characterize these structuralist notions, two distinctions are to be considered: the distinction between two kinds of "conditions of definition" (or "axioms," as they are also called) of a set-theoretical predicate, and the distinction between the T-theoretical and T-non-theoretical terms (or concepts) of a theory T. According to the first distinction, the two kinds of conditions of definition of a set-theoretical predicate are (1) those that constitute the "frame conditions" of the theory and that "do not say anything about the world (or are not expected to do so) but just settle the formal properties" (Moulines 2002, p. 5) of the theory's concepts, and (2) those that constitute the "substantial laws" of the theory and that "do say something about the world by means of the concepts previously determined" (Moulines 2002, p. 5). According to the second distinction, which replaces the traditional, positivistic theoretical/observational distinction, it is possible to establish, in (almost) any analyzed theory, two kinds of terms or concepts, in the sense delineated in an intuitive formulation by Hempel (1966, 1969, 1970) and Lewis (1970): the terms that are specific or distinctive to the theory in question and that are introduced by the theory T, the so-called "T-theoretical terms or concepts," and those terms that are already available and constitute its relative "empirical basis" for testing, the so-called
18 In a complete presentation, we should include, besides the collection of so-called principal base sets D1,…, Dj or D1,…, Dk, also a second kind of base sets, namely, the so-called auxiliary base sets A1,…, Am. The difference between them is the difference between base sets that are empirically interpreted (the principal ones) and base sets that have a purely mathematical interpretation, like the set ℕ of natural numbers or the set ℝ of real numbers (the auxiliary ones). Here, auxiliary (purely mathematical) base sets are treated as "antecedently available" and interpreted, and only the properly empirical part of the models is stated in an explicit way. On the other hand, in the philosophy of logic, mathematics, and empirical science there has been intensive discussion of the best way of understanding the nature of the sets occurring in the relational structures and of the models themselves. In relation to sets, according to the standard interpretation of "sets-as-one" (Russell 1903), "the highbrow view of sets" (Black 1971), or "sets-as-things" (Stenius 1974), sets themselves, though not necessarily their elements, which may refer to concrete entities, should be considered abstract entities, while according to the interpretation of "sets-as-many" (Russell 1903), "the lowbrow view of sets as collections (aggregates, groups, multitudes)" (Black 1971), or "sets-of" (Stenius 1974), sets need not be interpreted that way. As for theoretical models, even though they are usually considered abstract entities, there is no agreement about what kind of abstract entities they are, i.e., about the best way to conceive of them: as interpretations (Tarski 1935, 1936), as representations (Etchemendy 1988, 1990), as fictional entities (Godfrey-Smith 2006; Frigg 2010), or as abstract physical entities (Psillos 2011). However, due to space limitations, we will not delve into these issues.
"T-non-theoretical terms or concepts," which are usually theoretical for other presupposed theories T′, T″, etc. According to the standard structuralist criterion of T-theoreticity (originated in Sneed 1971 and elaborated in detail in the structuralist program; see Balzer et al. 1987, Ch. II), a term is T-theoretical (i.e., theoretical relative to a theory T) if every method of determination (of the extension of the concept expressed by the term) depends on T, i.e., if all such methods are T-dependent, presupposing or making use of some law of T; otherwise, a term is T-non-theoretical, i.e., if at least one method of determination (of the extension of the concept expressed by the term) does not presuppose or make use of any law of T, i.e., if it is T-independent.

Now we are in a position to characterize these basic structuralist notions:

1. The class of potential models of the theory, Mp, is the total class of structures that satisfy the "frame conditions" (or "improper axioms"), which just settle the formal properties of the theory's concepts, but not necessarily the "substantial laws" of the theory.

2. The class of (actual) models of the theory, M, is the total class of structures that satisfy the "frame conditions" and, in addition, the "substantial laws" of the theory. If A1,…, As are certain formulas ("proper axioms" or simply "axioms") that represent the laws of the theory, models of the theory are structures of the form 〈D1,…, Dk, R1,…, Rn〉 that satisfy the axioms A1,…, As. (And that is the reason why, as mentioned before, models may be considered the model-theoretic counterparts of the theory's laws.)

3. The class of partial potential models, Mpp, is obtained by "cutting off" the T-theoretical concepts from the potential models Mp (Mpp := r(Mp), where r, the "restriction" function, is a many-one function such that r: Mp → Mpp). If potential models are structures of type x (x = 〈D1,…, Dk, R1,…, Rn〉), partial potential models Mpp are structures of type y (y = 〈D′1,…, D′j, R′1,…, R′m〉), where each structure of type y is a partial substructure of a structure x.19 (And let us call a specific structure of type y, with specific instances of the T-non-theoretical concepts, a "data model" of T.)

Now, let us identify all these kinds of models in population dynamics, starting with data models, moving on first to potential models and then to the theoretical models that result in successful applications, and ending with the classes of potential models, models, and partial potential models.
19 A structure y is a substructure of another structure x (in symbols: y ⊑ x) when the domains of y are subsets of the domains of x and, therefore, the relationships (or functions) of y are restrictions of the relationships (or functions) of x. A structure y is a partial substructure of x (also symbolized by y ⊑ x) when, besides being a substructure of x, there is at least one domain or relationship (or function) in x that has no counterpart in y. The important point is that the partial substructure y contains fewer components, i.e., domains or relationships (or functions), than the structure x. Thus, structures x and y are of different logical types. If y is a substructure (whether partial or not) of x, it is also said, conversely, that x is an extension of y.
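Before turning to population dynamics, a toy sketch may help to fix these notions. In the Python code below every structure, predicate, and "law" is invented purely for illustration and does not reconstruct any particular theory: a structure is a bundle of base sets and relations, frame conditions carve out the potential models, a substantial law carves out the actual models, and a restriction function cuts off the T-theoretical component to yield a partial potential structure.

# A toy illustration of Mp, M, and Mpp, assuming an invented mini-theory whose
# only T-theoretical concept is "mass"; nothing here reconstructs a real theory.

from dataclasses import dataclass

@dataclass
class Structure:
    domains: dict      # base sets D1, ..., Dk, keyed by name
    relations: dict    # relations/functions R1, ..., Rn, keyed by name

def is_potential_model(s):
    # frame conditions only: the right components with the right formal properties
    return ("objects" in s.domains and "mass" in s.relations
            and all(isinstance(m, (int, float)) for m in s.relations["mass"].values()))

def is_actual_model(s):
    # frame conditions plus the toy substantial law: every object has positive mass
    return is_potential_model(s) and all(m > 0 for m in s.relations["mass"].values())

def restrict(s, theoretical=("mass",)):
    # the restriction function r: cut off the T-theoretical components
    return Structure(domains=dict(s.domains),
                     relations={k: v for k, v in s.relations.items()
                                if k not in theoretical})

x = Structure(domains={"objects": {"a", "b"}},
              relations={"mass": {"a": 2.0, "b": 1.5},
                         "position": {"a": 0.0, "b": 3.0}})

print(is_potential_model(x))   # True: the frame conditions hold
print(is_actual_model(x))      # True: the toy law holds as well
print(restrict(x).relations)   # the partial potential structure: 'mass' has been cut off

The restricted structure has fewer components than x, which is the set-theoretic sense in which it is a partial substructure, and a concrete structure of this restricted type corresponds to a data model.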
10.3.1 Models in Population Dynamics

If the examples of closed population cases given above (in Sect. 10.2.1) are to be represented in the structuralist format, they should be conceived as data models of PD. That is, they should be conceived as structures of type y of the partial potential models: y = 〈O, T,