Scientific Concepts and Investigative Practice 9783110253610, 9783110253603

Recent philosophy and history of science has seen a surge of interest in the role of concepts in scientific research.


English Pages 308 Year 2012





Scientific Concepts and Investigative Practice

Berlin Studies in Knowledge Research Edited by Günter Abel and James Conant

Volume 3

De Gruyter

Scientific Concepts and Investigative Practice Edited by

Uljana Feest and Friedrich Steinle

De Gruyter

Series Editors

Prof. Dr. Günter Abel
Technische Universität Berlin
Institut für Philosophie
Straße des 17. Juni 135
10623 Berlin, Germany
e-mail: [email protected]

Prof. Dr. James Conant
The University of Chicago
Dept. of Philosophy
1115 E. 58th Street
Chicago, IL 60637, USA
e-mail: [email protected]

Figure 2, p. 113, taken from Jonkers, A. R. T., Earth’s Magnetism in the Age of Sail, p. 36, fig. 2.1. © 2003 The Johns Hopkins University Press. Reprinted with permission of The Johns Hopkins University Press.

ISBN 978-3-11-025360-3
e-ISBN 978-3-11-025361-0

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available in the Internet at http://dnb.dnb.de.

© 2012 Walter de Gruyter GmbH, Berlin/Boston
Printing: Hubert & Co. GmbH & Co. KG, Göttingen
∞ Printed on acid-free paper
Printed in Germany
www.degruyter.com

Contents

Uljana Feest & Friedrich Steinle
Scientific Concepts and Investigative Practice: Introduction . . . . 1

Vasso Kindi
Concept as Vessel and Concept as Use . . . . 23

Miles MacLeod
Rethinking Scientific Concepts for Research Contexts: The Case of the Classical Gene . . . . 47

Ingo Brigandt
The Dynamics of Scientific Concepts: The Relevance of Epistemic Aims and Values . . . . 75

Friedrich Steinle
Goals and Fates of Concepts: The Case of Magnetic Poles . . . . 105

Dirk Schlimm
Mathematical Concepts and Investigative Practice . . . . 127

Theodore Arabatzis
Experimentation and the Meaning of Scientific Concepts . . . . 149

Uljana Feest
Exploratory Experiments, Concept Formation, and Theory Construction in Psychology . . . . 167

Corinne L. Bloch
Early Concepts in Investigative Practice – The Case of the Virus . . . . 191

Mieke Boon
Scientific Concepts in the Engineering Sciences: Epistemic Tools for Creating and Intervening with Phenomena . . . . 219

Nancy J. Nersessian
Modeling Practices in Conceptual Innovation: An Ethnographic Study of a Neural Engineering Research Laboratory . . . . 245

Hanne Andersen
Conceptual Development in Interdisciplinary Research . . . . 271

List of Contributors . . . . 293
Index . . . . 297

Scientific Concepts and Investigative Practice: Introduction

Uljana Feest & Friedrich Steinle

1. Introduction

This edited volume is built on the premise that scientific concepts can make productive contributions to the investigative practices of scientists. As a rough heuristic, we will start explaining this thesis by outlining two possible contrast classes: First, the articles in this volume take concepts, rather than theories, as their primary units of analysis. Second, the main questions asked do not concern the meaning or reference of concepts, but rather their roles in scientific activities, such as reasoning, experimenting, modeling, and explaining, among others.

It needs to be stressed, though, that even though concepts are here specifically analyzed with regard to their role in investigative practices, we are not claiming that the above-mentioned contrast classes are irrelevant to the issues addressed here. Quite the contrary: as the contributions to this volume exemplify, there is a range of possible positions with regard to the question of what concepts are and by virtue of what they can perform the functions attributed to them. For example, with regard to the relationship between concepts and theories, possible positions can include anything from the claim that certain concepts are autonomous with regard to theories (i. e., that whatever conceptual changes occur, these are independent of theoretical changes), to the claim that concepts are deeply embedded in existing theories (i. e., that the dynamics of conceptual change are closely related to those of theoretical change). Likewise, it is obvious that an account of how concepts contribute to scientific practices might well be motivated or justified by a specific analysis of the meaning of scientific concepts. As argued by several articles in this volume, when we look closely at scientific practice, traditional accounts of meaning often fail.
This, in turn, is a decisive motivating factor for many of the positions argued for by the contributors to this volume.


Not surprisingly, therefore, several authors put forth alternative proposals for how to think about meaning. In pondering the significance of scientific concepts in investigative practice, two questions are especially central:

1. How should we think about the nature of concepts such that they can play a role in investigative practice?
2. How should we think about the nature of investigative practices such that concepts can play a role in them?

In this introduction, we will address each of these questions. Section 2 provides a general overview of accounts of concepts in 20th-century philosophy and philosophy of science, thereby setting the stage for some of the approaches presented in this volume. Section 3 continues this overview by discussing some more recent works that have attempted to link accounts of concepts with considerations about scientific practice. Section 4 addresses questions and problems regarding the notion of investigative practice in the philosophy of science and the history of science, respectively. Lastly, section 5 gives brief summaries of the articles collected in this volume.

2. Concepts: Some Received Views

The received view about scientific concepts has it that early 20th-century logical positivism tried to formulate criteria of cognitive meaningfulness for concepts that required that concepts be defined in terms of empirical observations, which were supposed to provide necessary and sufficient criteria of application (e. g., Salmon 1984). This assumption was demolished in several steps. First came the recognition that scientific concepts can at best be partially interpreted, not exhaustively defined, by empirical criteria of application (Carnap 1936). Then came the claim that the meanings of scientific concepts cannot be given in isolation from the theoretical networks they occur in, but are rather implicitly provided by the places they occupy within such networks (e. g., Sellars 1948, Hempel 1952). And last came the claim that the neat distinction between the implicit definition and partial empirical interpretation of concepts could not be upheld, because it appeared to presuppose the distinction between a theoretical and observational vocabulary, which was increasingly regarded as problematic (e. g., Quine 1951). What emerged was a generalized semantic holism, according to which concepts derive their meanings from the totality of “theoretical” assumptions about their referents, with no strict line between formal and empirical aspects of such assumptions.

It is worth noting that while most contemporary philosophers of science do not buy into the kind of generalized semantic holism implied by Quine’s analysis, the general gist of the development just outlined occurred as part of a broader trend within philosophy, which came to question a central methodological assumption of positivist philosophy of science. The assumption was that philosophical analyses can and should proceed by means of purely formal analyses, i. e., by spelling out the logical implications of particular definitions, such as that of explanation or confirmation. This philosophical method gave rise to some well-known paradoxes, which were due to tensions between logical implications and linguistic intuitions. Since our linguistic intuitions are typically informed by our beliefs about the world, the method of logical analysis gradually gave way to a method that included the analysis of such beliefs and the ways in which they are formed. Adding to this development a skepticism about armchair philosophy, many scholars in history and philosophy of science today agree that philosophical analysis cannot divorce formal accounts of scientific notions (such as concepts, theories, confirmation, reduction, etc.) from the study of specific contexts in which these notions are applied by scientists, the beliefs and goals scientists have when applying them, and the cognitive and material constraints that guide such applications.

Summing up the above, we can say that mid-20th-century philosophy of science saw two interrelated developments. The first was a rejection of a particular understanding of concepts as exhaustively captured by definitions. The second was a questioning of a particular understanding of philosophical methodology as consisting in the formal analysis of definitions.
If we take seriously both of these insights, what does this mean for the philosophical study of scientific concepts? Intuitively, concepts refer to classes of objects or phenomena. Spelling out what is meant by a given concept involves spelling out empirical beliefs (both ‘theoretical’ and ‘observational’) about the objects or phenomena in the extension of the concept. Since such beliefs change in the course of scientific development, this means that concepts can change. As a result, important works within the history and philosophy of science have attempted to trace specific conceptual developments and to provide philosophical accounts of those developments (e. g., Shapere 1984; Nersessian 1984), arguing that they can be explicated in terms of the dynamic cognitive possibilities inherent in the graded structures of scientific concepts themselves (e. g., Andersen et al. 2006; Nersessian 2008).

The works just mentioned were specifically constructed to answer a challenge that arose in mid-20th-century history and philosophy of science, often summarized by the names of Thomas Kuhn and Paul Feyerabend. Kuhn had provided historical examples of radical conceptual shifts, which – as is well known – shook up received views about the cumulative and progressive nature of science, putting in their place a picture of shifts so radical as to not only change the content of scientific concepts, but also the very criteria by which their adequacy was to be judged (Kuhn 1962). In turn, Kuhn and Feyerabend (1962) viewed this as giving rise to a phenomenon typically referred to as “incommensurability,” resulting from a lack of independent standards that would allow for an evaluation of the rationality of conceptual change. Perhaps due to the similarity between Feyerabend’s and Quine’s considerations about meaning, as well as the fact that Kuhn’s mature work increasingly focused on scientific concepts and theories, it has become common to view the historical turn in the philosophy of science as closely connected to a description theory of meaning within a generally holistic outlook. According to it, the intension of a concept (which is determined by the entirety of descriptions provided by the theory in which the concept figures) determines its extension. Therefore, any change in description changes the referent, making it hard to see how successive concepts could lead to an improved understanding of the same object. A natural response to the problem just outlined is to question one of the crucial premises it results from, namely the description theory of meaning. One particularly prominent approach is commonly known as the Kripke/Putnam account, or causal theories of meaning (e. g., Putnam 1975).
Roughly, particular terms pick out their referents in a stable manner even if scientists have very inadequate knowledge of, or shifting beliefs about, those referents. This move, while seemingly guaranteeing referential stability through conceptual change, has two major disadvantages. First, by positing that reference is fixed, it is almost inevitably committed to some kind of essentialism that many philosophers of science, even in the realist camp, would rather steer clear of (e. g., Bloch 2011). Second, when the reference class of a given term changes, causal theorists have no principled response to the question of whether the term referred to the new (broader/narrower) reference class all along, or whether scientists are simply mistaken in their decision to make the reference class broader or narrower (e. g., Wilson 1982).

3. Concepts and Practices

Given the problems with descriptive and causal theories of meaning, we can thus appreciate efforts to formulate accounts of concepts that can account for conceptual change while at the same time allowing for this change to be rational and gradual. A detailed reconstruction of the “nitty-gritty details of the transition between two incommensurable conceptual frameworks” (Kindi & Arabatzis 2008), furthermore, can also potentially account for differences between cases where conceptual change in fact resulted in a change in reference and cases where it resulted in a mere change in beliefs about the referent.

In this vein, Shapere (1984) pointed to the chain of reasons that connect different concepts or conceptual frameworks. Nersessian (1984) further elaborated on this approach, arguing that the incommensurability problem was an artifact of an ahistorical view of concepts. She presented a historical reconstruction of the reasoning processes underlying the development of the concept of the electromagnetic field from Faraday to Einstein, suggesting that this shift could be accounted for by a particular version of the prototype theory of concepts (Rosch), according to which concepts are characterized by clusters of features, which roughly fall into four general classes (“stuff,” “function,” “structure,” and “causal power”). Following up on this early work, Nersessian has, over the years, continued to be interested in the application of insights from the cognitive science literature. In this vein, she draws on accounts of model-based reasoning, arguing that “concepts specify constraints for generating members of a class of models” (Nersessian 2008, 187), where these models in turn facilitate further conceptual change. The relevance of analogical reasoning to concept formation (and creativity more generally) has been explored not only with regard to the physical sciences, but to mathematics as well (e. g., Schlimm 2008).
In a similar vein, Andersen, Chen, and Barker (2006) presented an account of conceptual change that is based on a psychological theory of concepts, the frame theory by Lawrence Barsalou (1992). According to it, concepts are not merely clusters of empirical attributes, but they are also framed in terms of “a hierarchy of nodes” (Andersen et al. 2006, 42), where attributes relevant to classification can each take several different values.


This accounts for the fact that two objects can be exemplars of the same concept even if they share only a few features. It also suggests how existing conceptual structures can both enable and prevent specific scientific discoveries by constraining the kinds of inferences scientists are able to make at a given point in time (see also Barker 2011). This brief overview illustrates one possible way in which theories of concepts can be linked to scientific practices, in this case, practices of reasoning about specific objects of research. The approaches just mentioned are explicitly naturalistic, insofar as they draw on psychological theories of concepts in order to account for conceptual change in terms of the cognitive processes scientists actually engage in (e. g., Nersessian 2008). But there are also other ways in which connections can be forged between scientific concepts and investigative practices, which do not have to assume the adequacy of any one psychological theory of concepts, thereby avoiding possible problems with the notion of a concept as a theoretical category in psychology (e. g., Machery 2009). Such accounts might be underwritten by non-naturalistic theories of meaning, in particular pragmatist approaches in the Wittgensteinian tradition, which (roughly) try to spell out the dictum that “meaning is use.” One such theory is Robert Brandom’s neo-pragmatist theory of content, which tries to account for concepts in terms of the inferential relations they are embedded in, and thus in terms of inferences they enable (e. g., Brandom 1994, 2000). Making inferences, clearly, is a form of human practice, and hence inferential (or causal) role semantics, insofar as it is applicable to scientific concepts, is obviously relevant to questions about the relationship between scientific concepts and investigative practice. 
In this vein, Ingo Brigandt, in a discussion of the nature of scientific reasoning, has drawn on inferential role semantics, according to which “the meaning of a term is constituted by how the term figures in inference, and the content of a concept is determined by how it figures in reasoning” (Brigandt 2010, 33/34). Brigandt extends John Norton’s material theory of induction (Norton 2003) to scientific reasoning more generally, arguing that “a material inference is licensed by the empirical content embodied in the concepts contained in the premises and conclusion” (op. cit. 31).1 One implication of this idea is that scientific inferences – insofar as they are valid – are valid only relative to the empirical (but fallible) knowledge “embodied in” our concepts. Hence, the arguments in question may well fail to be sound.

1 As we will see in section 4 below, however, for Brigandt the inferential role of a concept is only one of several components of a scientific concept’s meaning.

Other approaches in the philosophy of science have also stressed the ways in which concepts (or components of concepts) figure in processes of scientific reasoning, though they have not framed this in terms of a more general account of meaning. For example, Corinne Bloch has argued for an account of “definition,” according to which “scientific definitions … spell out causally fundamental distinguishing characteristics” (Bloch forthcoming), and as such are vital tools in the classificatory and explanatory practices of scientists. Looking at an entirely different kind of scientific practice – that of designing experiments and making inferences from experimental data – Uljana Feest (2010) has argued that operational definitions are important tools, since they provide preliminary specifications of the objects of study, formulated in terms of paradigmatic conditions of application, where these conditions of application, in turn, are cast in terms of experimental operations and the data they generate.

The link between scientific concepts and experimental practice has also been drawn by other writers, albeit in different ways. For example, Friedrich Steinle (1997) has highlighted the existence of what he termed “exploratory experiments,” thus arguing that experiments often take place in realms where no concepts are available, or at any rate, where existing theoretical/conceptual frameworks turn out to be inadequate. According to Steinle, such experiments take place in contexts of “the formation of classificatory and conceptual frameworks” (Steinle 1997, 71). Steinle’s main point was that novel concepts can get introduced and stabilized by way of exploratory experiments.
Hence, insofar as such concepts codify novel empirical insights, exploratory experiments have an important epistemic function, not previously recognized (see also Steinle 2002a, 2002b). Whereas Steinle and others working on exploratory experiments have focused on their ability to underwrite phenomenological descriptions/concepts, others have argued that experimental practices can also ground what is in the literature traditionally often referred to as “theoretical concepts,” or (if we want to avoid the theoretical/observational dichotomy) concepts of “hidden entities” (Arabatzis 2011). The motivation behind such efforts is to forge a link between scientific concepts and experimental practices in a way that overcomes the dichotomy between theory and world. Simply put, the idea is that skeptical problems, often taken to result from this dichotomy, become less persuasive once we recognize the extent to which our knowledge is based on physical interventions with the world (see also Rouse 2011).

4. Historiographical and Philosophical Challenges

The contributors to this volume all work in history and philosophy of science (HPS), with some more firmly based in the history, others in the philosophy of science. What brings the two fields together in this particular instance is (1) on the one hand an interest in the historical processes by which scientific knowledge, as encapsulated in concepts, is generated, and (2) on the other hand a desire to construct philosophical accounts that do justice to the ways in which science is actually practiced (in this case: how scientific concepts are actually used or formed). Both of these motivations give rise to methodological issues pertaining to the approaches in the history of science and the philosophy of science, respectively.

Within the historiography of science after the practical turn, it is often emphasized that the objects of study are scientific practices rather than scientific or epistemic concepts. This move is meant to express a critique of a type of intellectual history that studies conceptual change purely in terms of shifts in cognitive content, without inquiring into the practical and material circumstances that made such shifts possible. In this vein, Lorraine Daston and Elizabeth Lunbeck (2011), in a recent edited volume about scientific observation, emphasize that their focus is on the history of observational practices, not on the history of the concept of observation treated as an actors’ category. This does not mean, of course, that there can be no historical investigations of concepts. The point, rather, is that such a history will reveal the extent to which “abstract categories like ‘experiment’ and ‘classification’ were anchored in concrete practices” (Daston & Lunbeck 2011, 3). The question, then, is how concepts, in addition to being “anchored in” scientific practices, can also make a contribution to scientific practices.
In turn, philosophers of science in the 20th century have traditionally focused their attention on the finished products of scientific activities as opposed to the activities themselves. There is a complicated set of explanations for this, having to do with the distinction between discovery and justification and the fact that philosophical accounts tend to focus on formal relations rather than historical processes and/or on material circumstances. The question, then, is whether there are any persuasive reasons why philosophy either should not or cannot analyze scientific practices.

With respect to the first question, it seems that the kind of ‘history of concepts’ rejected by many contemporary historians of science is one that looks at series of intellectual achievements without taking into consideration the material, instrumental, social, etc. conditions that made them possible. In other words, it is a history that ignores any kind of practice, including scientific practice. If this is the case, it seems to us that the approach taken in this edited volume, namely one that emphasizes what one might want to call “conceptual practices,” the practices connected to concept use, is much needed. To be sure, a focus on scientific concepts will come with a focus on epistemic aspects of science. What we wish to argue, however, is that such a focus on epistemic aspects of science is not only fully compatible with a focus on the practices of science, but indeed allows for novel and original insights about such practices.

With regard to the second question, it should be emphasized that the word “practice” clearly refers to a temporal process. As such, it is not immediately obvious how it can be studied by means of the traditional philosophical methodology of mid-20th-century philosophy of science. It is perhaps for this reason that there are still very few explicit philosophical attempts to get a grip on the notion of scientific practice (but see Rouse 2002 for an exception). As already outlined in section 2 above, however, the received view about philosophical methodology has long been destabilized ‘from within.’ This is most recently evidenced by the great resonance the Society for the Philosophy of Science in Practice has met with since its founding in 2007.2 While we cannot go into great detail here, we would like to address one worry that philosophers of science might have, i. e., that a focus on practice will detract from philosophical goals like normative analysis. In response, we would like to argue that the analytical focus on concepts is especially well-designed to alleviate such worries. Most of the contributors to this volume highlight the importance of scientific concepts especially with regard to practices of reasoning. As such, the approach taken in this volume is especially amenable to traditional philosophical concerns with normative analyses.

2 See, for example, http://www.philsci.org/files/SPSP_Newsletter_-Feb_2012.pdf

5. The Papers in this Volume

In her essay, “Concept as Vessel and Concept as Use,” Vasso Kindi contrasts the idea of concepts as entities comprising sets of descriptions that sharply delineate classes of objects from one another with one that she (following Ian Hacking and Wittgenstein) refers to as “uses of words in their sites.” Versions of the former idea, which Kindi also describes as “concepts as vessels,” are traced back to various writers in the history of philosophy, such as Kant, Frege, Carnap, and James. Kindi argues that all of these writers share some kind of dichotomy between concepts as empty forms and the empirical content they are ‘filled’ with. This understanding of concepts, Kindi argues, gives rise to the problem of incommensurability as soon as two concepts are compared and it is found that they cannot be reduced to a common core of empirical content expressed in observational sentences. By contrast, the alternative conception of concepts argued for in Kindi’s paper makes this and related problems (such as that of rational theory change, or the schema/content distinction) dissolve. According to this alternative conception, concepts are equated with “technique[s] of using a word” (Wittgenstein), which are seen as “molded by history” (Hacking) and – more generally – as flexible, open, and tied to practice.

Kindi elaborates on her analysis by addressing a number of potential problems, concerning (a) the stability of concepts (are concepts always in flux?), (b) the question of whether all concepts only exhibit gradual transitions (is conceptual change never discontinuous?), and (c) the question of whether there are any in-principle ways of distinguishing between two different words for the same concept and two different concepts that attach to the same word. In her response she follows Wittgenstein’s view of concepts as “elastic,” suggesting that the three questions can only be answered by very detailed investigations and on a case-by-case basis.
This answer has far-reaching implications for the question of how to study concepts philosophically since it suggests that we need to pay attention to the ways in which words were actually used in specific scientific circumstances. This, in turn, raises much bigger issues regarding the relationship between philosophy and history of science. But Kindi’s analysis not only suggests that historical investigations of language use will be relevant to the philosophy of concepts, but also that her philosophical account of concepts has an impact on how we study the historical trajectories of specific “words-in-their-sites.”


Hence, she argues that her “concepts as use” accounts will prevent us from seeking some unchanging core. Recognizing that allusions to a Wittgensteinian notion of family resemblance is often an all-too-easy way to avoid hard philosophical and historical work, Kindi concludes by outlining how she envisions that this idea be unpacked in the course of specific historical/philosophical studies. In particular, she suggests that the way to go is not to look for resemblances between various concepts (or their referents), but rather to inquire into the criteria according to which the speakers of a given context classify them as resembling each other. Drawing on Kindi’s notion of concepts as vessels, Miles MacLeod, in his article, “Rethinking Scientific Concepts for Research Contexts: The Case of the Classical Gene,” also argues that the traditional accounts of concepts (both descriptivist and causal) are committed to a notion of concept that makes conceptual change rather mysterious. As he points out, this picture suggests that such change presents us with series of disjointed concepts. By contrast, McLeod argues that scientific change is not as discontinuous as this picture suggests, and that we in fact need a different understanding of what concepts are to get a better grip of this. The account of concepts suggested by the author downplays the representational function of concepts in favor of emphasizing that concepts (1) are open-ended and (2) possess what he calls “central epistemic attributes.” By the former the author means that the ways in which scientists use concepts are often wide enough to allow for quite different further developments. According to MacLeod, this explains why scientists are often not bothered by what seem to be deep conceptual differences between different usages. By the latter, he means features attributed to the referent of the concept which are used to reason about, or experimentally intervene in the course of investigative practice. 
As examples of such features he names (a) “conceptual principles,” which outline very basic identity conditions for the research object in question, (b) causal or explanatory principles, and (c) principles of experimental individuation.3

3 As we will see below, he thereby expresses intuitions that are common to several contributions to this volume: for example, Corinne Bloch’s article focuses especially on the explanatory principles by which definitions of scientific concepts provide identity conditions. In turn, Uljana Feest and Theodore Arabatzis each emphasize the importance of empirical/experimental conditions of individuation.

Uljana Feest & Friedrich Steinle

MacLeod illustrates both aspects of concepts by means of a case study concerning the classical gene, and then turns to a slightly more abstract discussion of both features of concepts. On this analysis, the open-endedness of a concept is characterized by the fact that the concept is considered to refer to an (as of yet not well-known) aspect of reality, but does not have to give a correct representation of it in order to be scientifically useful. What makes concepts scientifically useful, rather, is that they contain central epistemic attributes which are goal-directed, constitutive of research methods, representational, and interpretable. With this analysis, MacLeod tries to capture his idea that while concepts do make representational claims about their referents, they also figure in the larger contexts of ongoing research activities, which he seeks to analyze with his account.

Ingo Brigandt’s paper, “The Dynamics of Scientific Concepts: The Relevance of Epistemic Aims and Values,” also studies conceptual dynamics by means of examples from biology. He argues for a tri-partite account of a concept’s content, as being composed of reference, inferential role, and epistemic goal. It is the last of these three components that he highlights as making an original contribution to the literature about scientific concepts. According to the author, a chief virtue of his account is that it can explain the dynamics of conceptual change. On this approach concepts are formed to perform specific intellectual tasks, e. g., to address certain explanatory problems. Differently put, they are used to pursue what Brigandt calls “epistemic goals.” Such epistemic goals (as part of a concept) drive concept change and – going beyond previous accounts of how concept change is possible – explain why conceptual change happens.
Brigandt illustrates this by means of three concepts from biology: (1) the concept of evolutionary novelty, (2) the concept of homology, and (3) the concept of the gene. The first of these three concepts has become central to the relatively recent field of evo-devo, which aims at a synthesis of evolutionary and developmental biology. Yet, there is substantial disagreement over how to define ‘evolutionary novelty.’ As Brigandt argues, this is not a hindrance to practical research since the main function of the concept is to set a problem agenda, namely that of explaining the evolutionary origins of novelty, where this shared problem agenda plays an integrative function across different approaches and hence enables interdisciplinary research. This in turn fits with the author’s contention that integration describes a dynamic process involving individual concepts rather than the stable unification of whole fields. – In a similar vein, the author argues that while in the 19th century there were significant semantic changes between pre- and post-Darwinian understandings of the concept of homology (which refers to corresponding structures in different species), these semantic differences only concerned the concept’s inferential role, but not its epistemic goals, which are the systematic morphological description of several species, and the taxonomic classification of species. Brigandt also suggests that in the 20th century the concept came to be tied to different epistemic goals, associated with novel fields of study, which in turn led to conceptual diversification. – In the case of the gene concept, Brigandt compares the classical with the molecular conception, arguing that all three components of the concept (reference, inferential role, and epistemic aims) changed. The author goes on to highlight the amount of semantic variation that exists in contemporary biology. He tries to account for this both by positing the existence of a very broad common epistemic goal (to account for gene function) and by arguing that individual scientists may use the concept in specific contexts to pursue different epistemic goals. Brigandt concludes with a discussion of why epistemic goals should be considered part of a concept’s meaning, despite the fact that they are clearly not intended to represent scientific beliefs about the concept’s referent.

Like Brigandt, Friedrich Steinle, in his “Goals and Fates of Concepts: The Case of Magnetic Poles,” also highlights that concepts and concept use are closely tied to research goals. As he reminds the reader, there can be a great variety of goals, both of an epistemic and non-epistemic nature. The paper focuses on epistemic goals broadly construed. Steinle identifies as overarching the goal “to make sense of a domain,” where this can mean a number of different things, depending on the context, and on the more specific goals within a given research project (e.
g., the goal to create order, to mathematize, to formulate laws, etc.). Steinle connects this to the idea that concepts be regarded as tools that help attain such goals. He spells this out by means of a case study that traces the various functions the concept of magnetic poles played in the investigative practices from the middle ages through the 19th century. What emerges is a complex and dynamic picture in which the concept in question at times serves a particular function, whereas in other contexts different aspects of the same concept are emphasized, enabling the concept to serve a different function. And at times the concept’s function was simply taken over by more abstract mathematical tools. In this vein, Steinle points out that the concept of magnetic pole early on served the function of enabling the formulation of a regularity for magnetic attraction and repulsion. For a long time the theoretical foundations of the concept were not discussed, yet it was widely and successfully used both by practitioners (e. g., navigators) and by scientists. By virtue of allowing for the formulation of an empirical law, the concept was later (by the 17th century) also appealed to in explanations of electromagnetic phenomena. However, as Steinle recounts, the function of accounting for observed magnetic data on earth in mathematical terms was eventually taken over by a new mathematical tool (developed by Gauss), which made the original concepts fade into the background. Describing the shift of focus that took place between Faraday and Maxwell, Steinle points to the gradually diminishing importance of the concept, while its practical utility remains undisputed.

Dirk Schlimm’s contribution, “Mathematical Concepts and Investigative Practice,” addresses the question of which theory of concepts best captures the kinds of practices we find in mathematics – a field of inquiry where concepts are especially central. Schlimm begins by contrasting two accounts of concepts, the Fregean and the Lakatosian. According to the former, concepts are fixed and determinate, that is, they do not change, and they have clear-cut boundaries. By contrast, the Lakatosian notion assumes that concepts evolve and are revised, and hence must have a more open structure than was held by Frege. As Schlimm argues, Lakatos’s theory of mathematical concepts was specifically designed to make sense of the dynamics of mathematical practice. However, he also points out that there are mathematical practices (especially that of deriving deductive proofs) that are better understood in the light of the Fregean notion.
As Schlimm describes, Lakatos was aware of this, but thought there was a tradeoff between rigor and creativity, and therefore warned against a philosophy of mathematics that exclusively focused on the rigor of Fregean concepts, arguing that it was unable to account for creative reasoning. Drawing on several examples of mathematical concepts (point, line, number, integer), on Kitcher’s (1983) account of patterns of mathematical change, and on considerations by the 19th century mathematician Moritz Pasch, Schlimm argues that in the history of mathematics some concepts were introduced by rigorous definition and formal proofs, whereas others have evolved, shifted, been revised, etc., and that in addition we can find different types of mathematical practices during the same time period, depending on the kind of problem addressed. Consequently, Schlimm argues for a pluralism about concepts, according to which no one account can fully capture all aspects of the ways in which concepts figure in mathematical practice. Schlimm concludes by arguing that this main point also carries over to other scientific concepts. In the context of this edited volume, two points are especially interesting: first, whereas other contributors to this volume argue vehemently against a Fregean notion of concepts as inadequate for an understanding of scientific practice (e. g., Kindi, MacLeod), Schlimm’s analysis suggests that at least with regard to mathematical practice the Fregean account is not wholly inadequate. Second, like others in this book, Schlimm also addresses the issue of the introduction of new concepts, arguing that especially here the Lakatosian notion is adequate. In particular, he suggests that much of mathematical research has to do with introducing new concepts, and that this is often done on the basis of a paradigm or a not yet fully articulated understanding of their defining characteristics.

In the literature one occasionally finds another distinction between two types of concepts, one about readily accessible phenomenal features that can be explored and described by empirical means, and one about hidden entities, which are more removed from empirical access. While few will still endorse a radical distinction between theoretical and observational concepts, there is nonetheless an assumption – deeply engrained in 20th-century positivist and post-positivist philosophical discourse about “theoretical entities” – that concepts intended to refer to hidden entities are epistemologically problematic and semantically intractable. If they receive their meanings from their place in scientific theories, so it is held, this makes it hard to pin down precisely what their referents are, and this gives rise to skeptical problems.
In his paper, “Experimentation and the Meaning of Scientific Concepts,” Theodore Arabatzis challenges this assumption, arguing that it stems from a general philosophical neglect of experimentation. As a result of this neglect, he argues, philosophers have failed to see that (1) hidden-entity (H-E) concepts can function independently of theory, (2) H-E concepts can be stable across theoretical ruptures, and (3) there is an interplay between H-E concepts and experimentation in the sense that they can result from experiments and also guide experimentation. Arabatzis elaborates on these three points with a variety of examples. Trying to avoid a dichotomy between theoretical and observational language, he introduces the idea that there are different “levels” of theory, some more speculative and abstract, and some more “low-level,” i. e., tied to specific experimental circumstances and effects and shared background assumptions. Arabatzis suggests that an agreement about the empirical identity conditions of an H-E concept can be achieved even where there is disagreement at ‘higher’ levels. Arabatzis frames this point within the debate about integrated history and philosophy of science, arguing that historians can provide insights into the actual dynamics of research and thereby offer solutions to philosophical challenges that result from outmoded accounts of meaning.

Whereas Arabatzis focuses on the ways in which experimentally grounded knowledge can fix a hidden-entity concept’s reference, even in the absence of theoretical agreement about it, Uljana Feest, in her article, “Exploratory Experiments, Concept Formation, and Theory Construction in Psychology,” argues that with regard to cognitive psychology and cognitive neuroscience such a distinction between empirical and theoretical knowledge cannot very straightforwardly be drawn. She argues that both the more ‘theoretical’ and the more experimentally grounded criteria of individuation of the relevant objects of research are in flux throughout the dynamic research process. As a consequence, the thesis of her article is that concept formation and theory construction go hand in hand. Feest develops her argument in several steps. First, she explains her thesis by comparing and contrasting it to those of two mid-20th-century authors (the philosopher Carl Hempel and the psychologist Clark Hull). She shows that each of these authors captures aspects of her thesis, but fails to spell it out, due to specific intellectual limitations imposed by philosophical and psychological discourse at the time. She proceeds by situating her thesis vis-à-vis current positions in the philosophy of psychology and neuroscience, arguing that they are compatible.
And lastly, she shows how her thesis fits in with her own earlier writings about the importance of operational definitions as empirically individuating the objects of research, arguing that operational definitions figure as tools throughout the process of concept formation and theory formation.

Corinne Bloch’s article, “Early Concepts in Investigative Practice – The Case of the Virus,” is situated within the framework of other work by the author, according to which scientific definitions, in general, specify the hidden causal structures that are thought to distinguish the scientific kinds in question from other scientific kinds. Bloch argues that if we think of definitions in this way, it remains a little mysterious (a) how scientists determine that a previously unknown phenomenon warrants the introduction of a new concept, and (b) how they go about empirically identifying and studying instances of the phenomenon in question. She addresses both questions by taking up a suggestion by Uljana Feest (2010), according to which unexpected experimental effects can (conceivably in conjunction with other reasons) give rise to a hypothesis concerning the existence of a particular phenomenon. The experimental situation that gave rise to the effect in question, in turn, can be drawn on to construct a preliminary, “operational” definition of the concept in question. Bloch explores this suggestion by means of a case study about the emergence and stabilization of the concept of the virus as distinct from that of the bacterium, arguing that while this story clearly exhibits a transition from a more operational to a more causal understanding of the concept, the distinction between the two, ultimately, is one of degree, not of kind, because operational definitions are not randomly constructed, but embody some kind of already existing prior theoretical understanding. Bloch begins by laying out Robert Koch’s (1884) three criteria for a bacterial etiology, carefully showing how this definition early on ran into anomalies, i. e., cases of disease that continued to be treated as if they were caused by bacteria even though they did not fully conform to the criteria. When Beijerinck (in 1888/9) posited the existence of another kind of causal agent, this clashed with existing theoretical commitments and available experimental techniques and thus was not accepted. By the 1930s, however, ideas about the virus had begun to gain traction, and with this development came the understanding that this was a phenomenon of a separate kind. This understanding was accompanied by something like an operational definition that allowed for the empirical identification of the occurrence of viruses, which gradually came to be supplemented with other empirical techniques. By the late 1950s, the research in question allowed for a refined definition of the virus, this time formulated in much more explicitly causal terms.
The paper argues that (1) the initial hypothesis about the existence of the virus was motivated by more than the simple failure of some experimental results to conform with Koch’s criteria for bacteria, and (2) once there was an operational definition of viruses in place, this made it possible to pursue theoretical alternatives that were ultimately laid down in a more refined definition.

In her article, “Scientific Concepts in the Engineering Sciences: Epistemic Tools for Creating and Intervening with Phenomena,” Mieke Boon addresses the work concepts do in relation to the investigation and production of phenomena in the engineering sciences. In order to do so, she lays out some groundwork both with respect to the notion of a concept and with respect to the notion of a phenomenon. With respect to the former, she rejects the notion (well-known from the empiricist tradition) that concepts can be reduced to, or defined in terms of, a definite set of observable features. Instead she argues that when we understand a word, we have a lot of knowledge not immediately related to observable objects. For example, concepts allow us to recognize objects by virtue of being embedded in larger structures of (among other things) higher-order categories that enable us to say what kind of an object the concept applies to (a view not so dissimilar from that of Andersen et al. 2006). This knowledge, in turn, enables us to ask specific questions and make particular inferences. Boon argues that this surplus meaning has “epistemic content” insofar as it enables the asking of empirically testable questions. Boon situates her notion of phenomena vis-à-vis that of Bogen and Woodward (1988), but focuses less on whether there are really mind-independent phenomena and more on the question of what it takes to form a concept of a phenomenon, when our interest is spurred by the aim of designing and producing material phenomena. She argues that the kinds of empirical questions engineering scientists typically ask concern not merely phenomena as they occur in nature, but rather phenomena that they want to produce, use for technical purposes, or physically intervene in. This means that the knowledge they aim for concerns not only existing phenomena, but also phenomena that do not yet exist. Looking at the process of conceptualization (i. e., the process whereby a concept of a phenomenon is formed), she argues (a) that this process draws on already existing concepts with the aim of fitting them together in a material setting under very specific circumstances, and (b) that the result of the process (the concept) can helpfully be thought of as a design.

Like Boon, Nancy Nersessian, in her article, “Modeling Practices in Conceptual Innovation: An Ethnographic Study of a Neural Engineering Research Laboratory,” also focuses on the material, and often interdisciplinary, contexts of concept formation in the laboratory. Her article ties together the literature about the dynamics of scientific concepts with the literature on models and modeling (both physical models and computer simulations). The basic idea is that the construction of models requires conceptual work, and in turn, the novel insights enabled by means of models and simulations lead to conceptual change. Nersessian makes this connection by presenting an original account of the role of analogical reasoning in science as not merely involving cognitive transfer from an already existing model of a source phenomenon to a target phenomenon, but instead involving the construction of a source model with the explicit aim of answering a particular question regarding the target phenomenon. The author is the principal investigator in a long-term ethnographic project with a number of researchers that aims to understand the integrated cognitive-social-cultural nature of research practices in bio-engineering. In this paper she especially focuses on one case study drawn from this work, arguing that this case illustrates how concepts are both generated by investigative practices of simulation modeling and generative of such practices. In the case at hand, researchers developed a computational model of an in-vitro model (a dish), constructed to understand networks of living neurons. The simulated dish-model was specifically constructed as an analogical source for a physical dish-model system. Nersessian describes several phases of this research, which eventually resulted in a conceptual innovation in neuroscience (as Nersessian points out, she and her collaborators did not know that this would happen when they started observing this laboratory). The paper provides a detailed description of the case to support her conclusion that whereas the intention was from the outset to build an analogy, the nature of the analogy was determined incrementally and involved configurations of several models, both simulated and physical. She illustrates the iterative nature of this process by showing how the concepts of spike and burst were initially imported into physical (in-vitro) modeling from the physiology of individual cells, whereas that of noise was taken from engineering. Subsequently, these concepts were adjusted and changed, and new concepts were introduced in the course of constructing simulation models (i. e., second-order models). In turn, these new concepts allowed for inferences that were transferred to the in-vitro model and thus thought to provide novel insights into in vivo phenomena.

Hanne Andersen’s article, “Conceptual Development in Interdisciplinary Research,” also extends her previous work about the cognitive foundations of conceptual change to address the fact that much contemporary research is conducted by networks of scientists, often coming from different communities. Andersen takes as a starting point the question of how trading partners in interdisciplinary collaborations “hammer out local coordination.” The author proceeds by bringing together accounts of the type of expertise required for productive interdisciplinary research (interactional and contributory expertise) with previous work by herself and others (on the graded structure of the concepts that make up scientific lexicons), though she argues that this latter work needs to be extended in order to incorporate not only (a) the distributed character of knowledge that exists in interdisciplinary collaborations, but also (b) the element of trust required for such collaborations. Andersen starts out by summarizing the main features of her previous work, which demonstrates how the graded structure of concepts can help explain when conceptual change does (or does not) occur. She then raises the question of how this model fares when applied to communication across disciplines working on the same subject matter. The issue at stake, she argues, is “how scientists combine their cognitive resources.” Andersen attacks this question by bringing the topics of distributed cognition and social epistemology to bear on it. The basic idea of the former is that knowledge about a given scientific object is distributed across several disciplinary conceptual networks. It is this fact that enables a researcher to employ the inferential resources inherent in the concepts of a neighboring discipline. As Andersen points out, however, this idea needs to be supplemented with some account of trust in the expertise of one’s interdisciplinary collaborators, a question that has been discussed in the field of social epistemology. She thus points to a variety of important and exciting topics that are relatively new to the literature about scientific concepts.

Reference List

Andersen, H. / Barker, P. / Chen, X. (2006), The Cognitive Structure of Scientific Revolutions, Cambridge: Cambridge University Press.
Arabatzis, T. (2011), “On the Historicity of Scientific Objects.” In: Erkenntnis 75 (3), 377 – 390.
Arabatzis, T. / Kindi, V. (2008), “The Problem of Conceptual Change in the Philosophy and History of Science.” In: Vosniadou, S. (ed.), International Handbook of Research on Conceptual Change, London: Routledge, 345 – 373.
Barker, P. (2011), “The Cognitive Structure of Scientific Revolutions.” In: Erkenntnis 75 (3), 445 – 465.
Barsalou, L. W. (1992), “Frames, Concepts and Conceptual Fields.” In: Lehrer, A. / Kittay, E. (eds.), Frames, Fields and Contrasts: New Essays in Semantic and Lexical Organization, Hillsdale, NJ: Erlbaum, 21 – 74.
Bloch, C. L. (2010), “Definitions and Reference Determination: The Case of the Synapse.” In: Burian, R. / Gotthelf, A. (eds.), Concepts, Induction, and the Growth of Scientific Knowledge (forthcoming).
Bloch, C. L. (2011), “Scientific Kinds Without Essences.” In: Bird, A. / Ellis, B. / Sankey, H. (eds.), Properties, Powers and Structures: Issues in the Metaphysics of Realism (Routledge Studies in Metaphysics), Routledge, 233 – 255.
Bogen, J. / Woodward, J. (1988), “Saving the Phenomena.” In: The Philosophical Review 97 (3), 303 – 352.
Brandom, R. (1994), Making it Explicit: Reasoning, Representing, and Discursive Commitment, Cambridge, MA: Harvard University Press.
Brandom, R. (2001), Articulating Reasons: An Introduction to Inferentialism, Cambridge, MA: Harvard University Press.
Brigandt, I. (2010), “Scientific Reasoning is Material Inference: Combining Confirmation, Discovery, and Explanation.” In: International Studies in the Philosophy of Science 24 (1), 31 – 43.
Carnap, R. (1936), “Testability and Meaning.” In: Philosophy of Science 3 (4), 419 – 471.
Daston, L. / Lunbeck, E. (2011), “Introduction: Observation Observed.” In: Daston, L. / Lunbeck, E. (eds.), Histories of Scientific Observation, Chicago: University of Chicago Press, 1 – 9.
Feest, U. (2010), “Concepts as Tools in the Experimental Generation of Knowledge in Cognitive Neuropsychology.” In: Spontaneous Generations: A Journal for the History and Philosophy of Science 4 (1), 173 – 190.
Feyerabend, P. (1962), “Explanation, Reduction and Empiricism.” In: Feigl, H. / Maxwell, G. (eds.), Scientific Explanation, Space, and Time (Minnesota Studies in the Philosophy of Science, Vol. 3), Minneapolis: University of Minnesota Press, 28 – 97.
Hempel, C. G. (1952), “Fundamentals of Concept Formation in Empirical Science.” In: International Encyclopedia of Unified Science, Vol. II, No. 7.
Kitcher, P. (1983), The Nature of Mathematical Knowledge, New York: Oxford University Press.
Kuhn, T. S. (1962), The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
Kuhn, T. S. (1974), “Second Thoughts on Paradigms.” In: Suppe, F. (ed.), The Structure of Scientific Theories, Urbana: University of Illinois Press, 459 – 482.
Machery, E. (2009), Doing Without Concepts, New York: Oxford University Press.
Nersessian, N. J. (1984), Faraday to Einstein: Constructing Meaning in Scientific Theories, Boston: Kluwer.
Nersessian, N. J. (2008), Creating Scientific Concepts, Cambridge, MA: MIT Press.
Norton, J. (2003), “A Material Theory of Induction.” In: Philosophy of Science 70, 647 – 670.
Putnam, H. (1975), “The Meaning of ‘Meaning’.” In: Gunderson, K. (ed.), Language, Mind, and Knowledge (Minnesota Studies in the Philosophy of Science, Vol. 7), Minneapolis: University of Minnesota Press.
Quine, W. v. O. (1951), “Two Dogmas of Empiricism.” In: Philosophical Review 60, 20 – 43.
Rouse, J. (2002), How Scientific Practices Matter: Reclaiming Philosophical Naturalism, Chicago, IL: University of Chicago Press.
Rouse, J. (2011), “Articulating the World: Experimental Systems and Conceptual Understanding.” In: International Studies in the Philosophy of Science 25 (3), 243 – 254.
Salmon, W. (1985), “Empiricism: The Key Questions.” In: Rescher, N. (ed.), The Heritage of Logical Positivism, Lanham: University Press of America.
Schlimm, D. (2008), “Two Ways of Analogy: Extending the Study of Analogies to Mathematical Domains.” In: Philosophy of Science 75 (2), 178 – 200.
Schlimm, D. (2009), “Learning from the Existence of Models: On Psychic Machines, Tortoises, and Computer Simulations.” In: Synthese 169 (3), 521 – 538.
Sellars, W. (1948), “Concepts as Involving Laws and Inconceivable Without Them.” In: Philosophy of Science 15, 287 – 315.
Shapere, D. (1984), Reason and the Search for Knowledge: Investigations in the Philosophy of Science (Boston Studies in the Philosophy of Science 78), Dordrecht: Reidel.
Society for the Philosophy of Science in Practice, Newsletter, Winter 2012 (http://www.philsci.org/files/SPSP_Newsletter_Feb_2012.pdf).
Steinle, F. (1997), “Entering New Fields: Exploratory Uses of Experimentation.” In: Philosophy of Science 64 (Supplement: Proceedings of the 1996 Biennial Meeting of the Philosophy of Science Association, Part II: Symposia Papers), S65 – S74.
Steinle, F. (2002a), “Challenging Established Concepts: Ampère and Exploratory Experimentation.” In: Theoria. Revista de Teoría, Historia y Fundamentos de la Ciencia 17 (2), 291 – 316.
Steinle, F. (2002b), “Experiments in History and Philosophy of Science.” In: Perspectives on Science 10 (4), 408 – 432.
Wilson, M. (1982), “Predicate Meets Property.” In: Philosophical Review 91 (4), 549 – 589.

Concept as Vessel and Concept as Use

Vasso Kindi

“‘Concept’ is a vague concept,” Wittgenstein says (RFM, 433). Vagueness is usually taken to be a defect compared to an abstract and absolute ideal of exactness which has been held in high esteem and dogmatically adhered to in philosophy. There is no such absolute ideal, Wittgenstein has said (PI, 88), and vague concepts can very well serve our purposes despite the fact that we may not be able to apply them unequivocally in every possible case, especially in cases we call borderline. So, even if ‘concept’ is a vague concept, i. e., even if we cannot sharply determine it by an analytical definition, it does not follow that it is useless. The problem with the concept of ‘concept’, however, is not so much that it may be vague as that there have been different ways of understanding it. It is the contention of the present paper that a particular way of understanding concepts, i. e., understanding concepts as entities in the form of vessels, which has dominated contemporary philosophy, has significantly contributed to the development of certain philosophical problems, such as the problem of incommensurability, which dissolve once we adopt a different understanding of concepts, namely, understanding concepts as uses of words in their sites. The aim of the paper is to defend this latter approach, which suggests a corresponding historical study of concepts. I will begin with a short overview of the entity idea of concepts; I will then present the concept-as-use approach and show how certain philosophical problems dissolve once this approach is adopted. I will consider a number of difficulties associated with understanding concepts as uses of words and close with a discussion of the implications this way of thinking about concepts has for their historical study.
I argue that historians and philosophers who study concepts should not aim at identifying a recurrent common core (the content of the vessel) but rather concentrate on the particular uses of concept-words in a semantic field which also involves other concepts.


1. Concepts as Sharply Delineated Entities

In the history of philosophy the term ‘conceptum’ was used in connection to, and sometimes interchangeably with, ‘idea’ and ‘notion’. Leibniz, for instance, writes: “By ‘term’ I understand not a name, but a concept (conceptum), i. e. that which is signified by a name; you could also call it a notion, an idea” (cited in Hacking 1982, 186).[1] ‘Concept’ was the product of conception which, in the seventeenth century, consisted in abstracting ideas and notions from sensible experience (Caygill 1995, 119). In contemporary philosophy, concepts are typically taken to be either mental entities (mental representations) or abstract objects like the Fregean senses.[2] The former view, the one about mental representations, has not been very popular in the early philosophy of science of the 20th century, at least, because of its possible psychologistic and subjectivist implications. The latter, the one about abstract objects, has been more persistent. In this latter case, concepts are considered to be definition-like descriptions of properties which help us identify the objects that fall under them.[3] Concepts, in that sense, are “ring-fenced” in William James’s terminology (James 1996, 99); they have sharp boundaries. As Frege put it, this means that “every object must fall under [a concept] or not, tertium non datur” (Frege 1997, 298). If a concept does not satisfy this requirement, Frege says, it is meaningless (Frege 1979, 122). He compares an indeterminate concept to a district “whose boundary lines were not sharply drawn but lost themselves here and there in the surrounding country. This would not really be a district at all and similarly a concept which is not sharply delimited cannot truly be called a concept. Such vague ideas, though they resemble concepts, cannot be recognized by logic as concepts; it is impossible to draw up exact rules governing them”[4] (Grundgesetze der Arithmetik, ii, 69; quoted in Waismann 1997, 181).

[1] According to Hacking (1982, 186) translators commonly render Leibniz’s notio as concept. Caygill (1995, 118 f), in his Kantian dictionary, says that the term as a substantive “does not appear in the philosophical vocabulary before the late seventeenth century; prior to this it meant a ‘provisional sketch’ of a legal document or agreement, or even a poetic conceit. It was first used in a logical and epistemological context by Leibniz.” In fact, the term conceptum appears much earlier, for instance, in the works of Ockham, Suarez and Descartes.

[2] Of course, sense and concept are distinct terms for Frege. Frege uses concepts to pick out the referents of predicates. C. I. Lewis advances the view that despite historical evolution and change, concepts are similar to Platonic ideas (Frege 1956, 269).

[3] For an overview and presentation of what concepts are taken to be in psychology and philosophy see Machery 2009. This is the so-called “classical theory of concepts” (Machery 2009, 77–83). Against this theory, there developed, mostly in psychology, other theories of concepts, namely: (1) the prototype theory, according to which a concept represents typical properties of a category which are embodied in a prototype; (2) the exemplar theory of concepts, according to which concepts are sets of exemplars, each one of which is a body of knowledge about an individual member of a class; and (3) the theory of concepts according to which concepts are theories (or parts of theories) which can explain the properties of the category members (ibid., 83–108). I am not going to discuss these theories of concepts since they deal with cognitive processes involving concepts and do not bear on what played out in the history of philosophy of science and is at stake in this paper, namely, a logical consideration of concepts which involved the juxtaposition between a more historical and fluid understanding of concepts on the one hand and an abstract one on the other, which takes concepts to be sharply defined entities. A detailed account of the so-called classical view of concepts can also be found in Mark Wilson’s Wandering Significance (2006). The whole of Chapter 3 is devoted to this topic and there is an Appendix where no less than 44 ‘chief theses’ of the classical framework are listed.

[4] Frege’s comparison of concepts to districts in order to show that concepts need to have sharp boundaries is ill-conceived exactly because only artificially can one sharply delimit districts. Cartwright et al. (1996, 190), in discussing Neurath’s introduction of the term Ballungen, i. e., the imprecise and unclean data reports which form the basic material of the sciences, say: “Ballungen are literally congestions. More intuitively, we can think of a modern city depicted on a map. One sees a big mass, dense in the middle and then spread out on the edges, here and there, with its boundaries undefined. This is a Ballungsgebiet.” In fairness to Frege it should be said that he is not interested in ordinary concepts, while Neurath’s Ballungen refer to concepts of everyday life which cannot be eliminated from the data reports that are necessary, though, for scientific testing.

Based on this understanding of concepts, Frege advanced his Begriffsschrift (1980), his concept-script, which, in explicit reference to the Leibnizian project of a lingua characterica, aimed at capturing what is logical in concepts (i. e., what is relevant to logical inference). Following in his steps, the logical positivists, in their efforts to achieve the goal of unified science, set themselves to construct a total system of concepts, ordered in statements which they could manipulate mechanically. For them concepts are indistinguishable from terms, which are, according to Carnap, preferable to concepts since terms do not carry the psychological baggage that has historically accrued to concepts (Carnap 1981, 118). He also avoided speaking of concepts because he thought that the


language of concepts lends itself to the metaphysics of idealism (Carnap 1969, 10). The logical positivists’ concern in general was to cleanse concepts from the “metaphysical and theological debris of millennia” (Hahn et al. 1996, 334; 339) which had clung to them from ancient times. Carnap’s concepts are mere knots in a system of structural relations and have only formal, structural properties. This means that they belong to an empirically uninterpreted calculus and acquire content by means of bridge principles which connect them to experience and the world (cf. Feigl 1970, 6). The requirement of sharp boundaries in both Frege and the positivists (a qualification should be made here about Neurath, who eventually came to realize that the empirical basis of science cannot involve sharply delineated concepts) makes concepts sharply discrete and turns them into some kind of vessel or receptacle (an expression found in Wittgenstein; RFM, 295),[5] which waits to be filled with content from the soil of experience. They resemble empty shells. Actually, the term concept itself derives from the Latin verb concipere, which means to take in, to receive, to contain, to hold, to grasp.[6] An apposite description of this idea is that concepts are like static frames which either cut the dynamic continuity of nature and the continuous perceptual flux into discrete bits (James’s approach; 1996, 85 f; 91),[7] or bring together the disconnected manifold of our particular intuitions (Kant’s approach).[8] As idealized abstract entities, or even as mental images, concepts are supposed to chaperon our perceptual experience, casting their shadows on the “big blooming buzzing confusion” (James 1996, 50) of our sensible lives. In this way, they help us give shape and meaning to the sensible world by directing our attention and by carving out objects and events. Had we no concepts, said William James, we would be more allied to the beasts (James 1996, 54); “we should live simply ‘getting’ each successive moment of experience as the sessile sea-anemone on its rock receives whatever nourishment the wash of the waves may bring. With concepts we go in quest of the absent, meet the remote, actively turn this way or that, bend our experience, and make it tell us whither it is bound” (ibid., 64). Constellations of concepts, harmoniously connected (ibid., 67), make up different conceptual schemes (ibid., 69; 81; 85), conceptual universes (ibid., 56) or universes of thought (ibid., 52). James believed, like Kant before him, that to know and command reality we need both percepts and concepts, as we need both legs to walk (ibid., 53), but his account, just like Kant’s, could easily be seen to uphold the third dogma of empiricism, namely the scheme-content distinction. Concepts provide the scheme and percepts the content. James contended that the discreteness of concepts cannot capture the perceptual flux that confronts them: “Conceptual knowledge is forever inadequate to the fullness of the reality to be known” (James 1996, 78). Yet, he still operated with a notion of concepts that sees them as static moulds or templates. Active life and novelty escape them since concepts, in James’s words, are “post mortem preparations, sufficient only for retrospective understanding” (ibid., 99). This morbid figure of speech is along the same lines as Nietzsche’s, who claimed that “the great edifice of concepts displays the rigid regularity of a Roman columbarium” (Nietzsche 1999, 85). Hacking (1983, 1) also followed Nietzsche and said that philosophers turned the dynamic historical process of science, a process of becoming and discovering, into a mummy, unwrapping the cadaver only around 1960. Now, one might say that this is the business of philosophy: namely, to dehistoricize and mummify. Philosophers do know that the practices they study are dynamic and evolve in history but they choose to “represent a theory quick frozen at one momentary stage of what is in fact a continually developing system of ideas” (Hempel 1970, 148). Hempel and the logical positivists knew very well what actual science was like but they were not interested in describing it. Their logical reconstructions had absolutely nothing to do with the actual practice of science, as the positivists themselves recognized: “It should be stressed and not merely bashfully admitted that the rational reconstruction of theories is a highly artificial hindsight operation which has little to do with the work of the creative scientist” (Feigl 1970, 13). And Carl Hempel maintained that “the standard construal was never claimed to provide a descriptive account of the actual formulation and use of theories by scientists in the ongoing process of scientific inquiry” (Hempel 1970, 148). Note that Hempel speaks only of ‘theory’; that was what science was all about for philosophers at the time.

[5] Even the term ‘Porosität’ (porosity), used by Waismann in relation to empirical concepts, invokes the picture of a porous vessel. According to Hacker (1996, 164), the Oxford logician W. C. Kneale translated Waismann’s Porosität der Begriffe as open texture.

[6] According to Caygill (1995, 118) “The German word for concept – Begriff – translates the past participle of the Latin verb concipere: to take to oneself, to take and hold.” Gadamer (1992, 20) also discusses Begriff in the sense of grasping (cf. Wrathall 2004, 456 f, for a similar discussion by Heidegger). The Italian philosopher Mario Perniola (1995, 122 f) juxtaposes concept and Begriff: “Twentieth-century philosophy is accustomed to regarding the term ‘concept’ as the translation of the German word Begriff, which gained broad currency in philosophical thinking precisely because of the speculative complexity with which German philosophers, from Kant onwards, enriched it. So it may be that we say ‘concept’, but think Begriff: what tends to escape us is the fact that the former word, of Latin origin, has a semantic orientation that is the precise opposite of that of the German word. Begriff links the act of intention etymologically to the verb greifen, meaning to take, in the sense of reaching out and seizing. In the Latin term conceptus, the act of intention is etymologically derived from concapio, meaning to take, in the sense of gathering in, receiving. To conceive therefore does not mean to appropriate anything, but rather to make room for it: it is not the act of a subject that takes an object, but the disposition to receive something from the outside that comes, occurs, arrives.” Mauro Carbone (2004, 47), who cites Perniola, says: “[C]onceptus differs from Begriff in the following way: while the etymon of the latter, via the verb greifen, refers to grasping (the exact English equivalent of greifen), the etymon of the former refers to an entity that is concave, and that, being concave, can function as a basin. (…). Concavity or hollowness is therefore a crucial feature of the basic meaning of conceptus (…) The meaning of conceptus invokes the gesture of ‘welcoming’ rather than the gesture of ‘grasping’” (ibid., 47).

[7] The picture that James draws about concepts is similar to the one described by Waismann about sentences (1978, 141). Waismann calls this picture oversimplified: “Reality is undivided. What we may have in mind is perhaps that language contains units, viz. sentences. In describing reality, by using sentences, we draw, as it were, lines through it, limit a part and call what corresponds with such a sentence a fact. In other words language is the knife with which we cut out facts.”

[8] James (1996, 52 f, fn. 2) highlights the distinction between his approach and Kant’s. Glock (2010, 104) finds Kantian affinities in Wittgenstein. Citing RFM, 237 f, he says that according to Wittgenstein, “[t]he role of concepts and of concept formation is to ‘channel’ experience and thereby to set the ‘limits of the empirical’.”
So, the standard view of concepts takes them to be ring-fenced, that is, well-defined and circumscribed, some kind of entity in the form of vessel to be filled with content.

2. Concepts as Uses of Words

A different approach, found in Wittgenstein’s work and advocated by Ian Hacking, takes concepts to be uses of words in their sites. According to Wittgenstein, “a concept is the technique of using a word” (Wittgenstein 1988, 50) and when we consider a concept what we consider is the application of the relevant word (PI, 383). Hacking sees concepts as


“molded by history,” as “historical entities whose form and force has been determined by their past” (Hacking 1990, 358).[9]

Concepts are words in their sites … If one took seriously the project of philosophical analysis, one would require a history of the words in their sites, in order to comprehend what the concept was. (Hacking 1990, 359)

[9] Cf. Kierkegaard in The Concept of Irony: “Concepts, like individuals, have their histories, and are just as incapable of withstanding the ravages of time as are individuals. But in and through all this they retain a kind of homesickness for the scenes of their childhood” (Kierkegaard 1966, 47). One may also consider the view noted by the philosopher Morton White that only concrete substances have histories; abstract entities have natures (White 1945, 322). If concepts are not taken to be abstract entities, then, under this view, they can surely have histories. Versions of this view can be found in Vico, Augustine and Ortega y Gasset (Kelley 2005, 234).

Although Hacking speaks of entities whose shape has evolved over time, he clearly de-reifies concepts by explicitly equating them to the use of words in their sites (Hacking 1990, 359). From his perspective, studying concepts implies going over a large collection of detailed historical facts involving the use of the relevant words. It is what he calls the “Lockean imperative” (Hacking 1990, 354), a mandate which directs us “to take a look” at history. One may assume that resort to history (the historical facts of the words’ uses) may take the form of a genealogical account or of some kind of biography which traces the vicissitudes of a word’s life. A biographical approach may imply a hypostatization of concepts—as if concepts are entities which have a life that evolves—while a genealogical investigation into the origins of things may be taken as an attempt to vindicate current concepts by illustrating their pedigree (cf. Geuss 2009) or destroy them by “exposing the contingent and ‘shameful’ origins of cherished ideas and entrenched practices” (Bevir 2008, 264). Hacking does not go down this road. He is much more influenced in his retrospective accounts of concepts by a Foucauldian understanding of genealogy, which highlights difference, contingency and discontinuity in order to serve the purpose of criticism. Hacking says that he is interested in displaying the possibilities and the accidents in the course of the concepts’ history in view of discouraging grand unifying accounts (Hacking 1990, 345). One crucial question is whether, by resorting to the history of concepts, one is doing history rather than philosophy. Hacking denies it: “To use history for the understanding of philosophical problems is not


to resign one’s birthright to be a philosopher in the Present-Timeless mode” (Hacking 1990, 362). The Present-Timeless mode of doing philosophy shows no historical sensibilities and yet, Hacking thinks, it may take advantage of the histories of concepts in order to address the philosophical issues it deals with. The way concepts have been transformed and developed may give us a better understanding of philosophical problems and then, perhaps, help us solve or dissolve them. In Hacking’s view it is unimportant whether this delving into empirical facts is labeled history, anthropology or microsociology. Nothing fits, he says, but these investigations, together with the scholars who practice them, are “co-opted” into philosophy (1990, 356). Foucault advances a similar view. He advocates a “historicophilosophical practice” in which historical contents are not prepared by historians and offered “ready-made” to philosophers. He insists that it is a practice in which “philosophical labor, philosophical thought and philosophical analysis [is put] into empirical contents” (Foucault 1996, 391). In what follows I want to argue for two things: first, that the ‘concept as use’ approach, which involves the resort to history in the sense articulated above, helps us avoid or dissolve certain philosophical problems which have preoccupied us for a very long time. These problems have emerged in philosophy of science because there prevailed, I contend, the static, entity idea of concepts we have just described. Secondly, I will show what the concept as use approach implies for historically understanding concepts.

3. Solving Philosophical Problems

I will consider three examples of philosophical problems which can be dissolved if we understand concepts as the use of words in their sites: incommensurability, the scheme-content distinction and the issue of hidden entities. William James, well before Kuhn discussed incommensurability, captured what it means: “‘Incommensurable’ means that ‘you are always confronted with a remainder’” (James 1996, 62). When we compare two magnitudes by using a unit as a common measure, and the division is not perfect (i. e., it yields a remainder), then the two magnitudes are incommensurable. Kuhn and Feyerabend introduced the concept of incommensurability in philosophy of science and used it dialectically, i. e., against the defenders of the so-called received view. Feyerabend showed


that reduction and explanation cannot be carried out as conceived by the positivists since concepts differ irreducibly (i. e., reducing one to the other will leave a remainder), while Kuhn criticized the view that concepts (or a core in concepts) survive(s) unaltered the deep changes in the history of science. Both philosophers maintained that if theoretical concepts cannot be reduced to neutral or shared observation statements (the common core and common measure), then historically distant concepts, or concepts from different theories, cannot be mapped onto each other without remainder. They are incommensurable. Now, one can make such a claim, that is, speak of remainder and lack of common measure in relation to concepts and theories, only if concepts are taken to be well-defined and ring-fenced. Nancy Nersessian makes the same point: “[The classical view of concepts] is partly responsible for the famous problem of incommensurability of meaning between scientific theories” (Nersessian 1985, 180). Concepts need to be understood as determinate and sharply bound entities if they are going to be rigorously compared. David Hull concurs:

In order for two theories actually to contradict each other, they must be presented in complete, totally precise, possibly axiomatized form with all meanings sharpened to a fine point by sufficient conceptual analysis. Only then can the two theories be shown to be incommensurable. (Hull 1992, 470)[10]

[10] Of course, here Hull confuses contradictory theories with incommensurable theories.

If, on the other hand, concepts are seen as variably used words in time, as being open, flexible and fluid, incommensurability would not have the bite it now has. It wouldn’t make sense in the first place to make the analogy with mathematical incommensurability, which requires assessment and exact comparison by a common measure. One could still note deep differences between concepts (that is, between particular uses), but these would not imply the problems that incommensurability was taken to give rise to. Let’s take, for instance, the issue of rationality. According to the critics of incommensurability, if one acknowledges irreconcilable differences between concepts of different conceptual networks, then one cannot judge whether the transition from one network to another is rational. This conclusion follows and makes sense only if rationality is understood as resembling a logical inference, that is, as a methodical procedure involving sharply defined stable concepts and cogent arguments built around them. If the transition from one network


to the next is indeed represented in the form of an argument, then, of course, the concepts involved would have to be the same, in order for the argument to work. We would have fallacies of equivocation if, because of incommensurability, the words remained the same and the concepts (in the form of entities) signified by them differed radically. In this case, the transition from one network to the next, represented by these arguments, would not, of course, be deemed rational. But if concepts are not well-defined entities but particular uses of words, the move from some uses to others takes place in a stretch of time amidst a wealth of possibilities. Then, the rationality of transition is not judged by considering reconstructed abstract arguments involving sharply defined, entity-like concepts, but by attending to the particular circumstances of word use in order to assess the actual considerations and options in the range of possibilities available to the scientists. From this perspective the concept of rationality itself is not captured by an abstract ideal but it is variably adjusted with application. Incommensurability arises as an issue when concepts are seen as distinct islets and conceptual schemes as the circumscribed domes on which concepts are pinned. Only when concepts are seen in such a way, as well-determined and closed, can we say that they do not match and are, therefore, incommensurable. When concepts splinter into various uses, difference in application does not immediately imply incommensurable concepts in the standard sense of the term. Every application of a word in an extended practice of use differs somewhat from the others despite the fact that several of these different uses are taken to form one concept. For instance, we apply the term ‘chair’ to different objects of different shape and material and we have the concept of chair. 
Whether a new application can be assimilated to older uses, and so taken to form one concept with them, or whether it can be taken as the beginning of a new, radically different course of use, is a complicated matter which can only be judged, not unequivocally, and usually retrospectively, by careful and patient research of the relevant practice. This radically different course will be paved by uses of the relevant term that will, again, differ somewhat among them and may eventually lead to the formation of a new concept. Will this new concept be incommensurable with the old? If concepts are understood as extended in time and consisting of particular uses of words, incommensurability is hard to apply. Where is the common measure and the remainder, where are the well circumscribed entities to be compared? Incommensurability made sense as a polemical term against the


view which saw concepts as clear-cut entities subsisting, fully or partially, through time. Against this view, incommensurability highlighted incongruity (no common measure between entities that did not match) and steered us away from sameness and continuity to radical change of meaning. If the entity view of concepts is given up, however, incommensurability is no longer an eye-opener. When introduced by Kuhn and Feyerabend, it was a concept that was simultaneously revelatory and upsetting because it unveiled deep differences in meaning which had passed unnoticed by those who concentrated on identity and which could not be accommodated in the standard narratives of progressive scientific development. But once this understanding of concepts is left behind, incommensurability, if ever invoked, simply means a more serious case of deviance in use. This is not as alarming as before since, in the concept-as-use approach, every use differs, even if slightly, from the others, and so radical difference is only a matter of degree. What is more, when concepts are seen as uses of words, attention is drawn to what agents do rather than to the role of concepts in logical inference. This means that the threatening implications of incommensurability (for instance, the danger of irrationality) are assuaged, since now the rationality of the transition from one state of affairs to the next is not a matter of adherence to an absolute standard requiring sameness of meaning (as would have been the case if the transition were depicted as a logical inference) but is rather judged against the particular circumstances in which the agents acted. An analyst, such as a historian or a philosopher, would have to ask whether the scientists under study, given what they knew at the time, the problems they were facing, the options available to them, etc., made a reasonable or rational choice. The transition becomes a practical rather than an abstract theoretical issue.
The second example of a problem which loses its grip if we understand concepts not as vessels but as uses of words is the scheme-content distinction. It is usually believed that, against an undifferentiated stuff, one identifies and interprets phenomena by having access to some model (for instance, a concept) in one’s mind or in a third realm. Concepts serve as frames to give shape to our blind experience. Davidson, who attacked this conception as the third dogma of empiricism, attributed it to Kuhn, among others, and proceeded to show that, given this dogma, one cannot make sense of alternative conceptual networks (Davidson 1984, 9; 12). Kuhn, however, who defended the possibility of different conceptual schemes, did not endorse the third dogma. He


did not believe that empty moulds, the concepts, are filled with experience. Rather he insisted that learning to use words in particular contexts simultaneously offers us knowledge of the world:

When the exhibit of examples is part of the process of learning terms like ‘motion’, ‘cell’, or ‘energy element’, what is acquired is knowledge of language and of the world together. (…) In much of language learning these two sorts of knowledge—knowledge of words and knowledge of nature—are acquired together, not really two sorts of knowledge at all, but two faces of the single coinage that a language provides. (Kuhn 2000a, 31; cf. Kuhn 1977, 253)

In acquiring and learning a language, we do not have access to a template—the concept—which allows us to connect words and things. Rather, we acquire concepts practically by employing words in particular circumstances and, thus, learn words and things together. As Cavell put it: “In ‘learning language’ you learn not merely what the names of things are, but what a name is; not merely what the form of expression is for expressing a wish, but what expressing a wish is; not merely what the word for ‘father’ is but what a father is” (Cavell 1979, 177). So, if one follows Kuhn and Cavell in upholding that language and the world are learned together, and that concepts are not empty vessels to be filled with formless content from the world, then, the scheme-content distinction cannot really be drawn and the subsequent problems Davidson noted do not arise. Davidson showed that the very idea of a conceptual scheme is incoherent given the third dogma of empiricism, i. e., the scheme-content distinction. But if this distinction is challenged by rejecting the understanding of concepts as receptacles, the problem of making sense of conceptual schemes dissolves. Another issue which would be dealt with differently if concepts were seen as flexible and tied to practice is the hidden entities issue. Hidden entities seem problematic because, being hidden, they do not allow us to secure, across time, reference and meaning, in the entity sense of the term. But if we see concepts as formed in the use we make of the relevant concept-words, that is in their employment in particular research situations and activities, what can be called their epistemic functions, then, hidden entities concepts do not present any special difficulties in comparison to other concepts. We do not form concepts by reading off what is already there, but we form concepts by developing language for certain purposes. 
“Having a concept never means being able to recognize some feature we have found in direct experience; the mind makes concepts,” says Peter Geach (2001, 40) who has persistently


criticized the doctrine he calls “abstractionism,” i. e., the view that “a concept is acquired by a process of singling out in attention some one feature given in direct experience” (ibid., 18). He says that “abstractionism is wholly mistaken; that no concept at all is acquired by the supposed process of abstraction” (ibid., 18). One of the arguments he gives is that the same features can very well respond to different concepts, for example, ‘red’ and ‘chromatic color’. So, if we take concepts to be learned in practice, that is, by employing words in certain situations, then we do not have the particular problem of accounting for the way a concept-word refers to a hidden, as opposed to a visible, entity, for concepts are not formed by observing the entities concept-words apply to. Of course, the metaphysical issues relating to hidden entities (for instance, whether the entities referred to by the concept-words exist or not) do not go away. I have maintained that a particular conception of concepts, i. e., the one which takes concepts to be concave entities and sees them as well-circumscribed, is responsible for certain problems that have preoccupied us in philosophy of science. If we see concepts differently, as uses or techniques (Wittgenstein 1988, 50), as instruments which direct and express our interests (PI, 569 f), these problems do not arise.

4. Confronting Difficulties of the ‘Concept as Use’ Approach

Are there any disadvantages if we take concepts to be the use of words in their sites? There are certainly problems which need to be addressed:

1. How stable are the concepts that are formed by using linguistic expressions? Does the meaning of words change as we go from proposition to proposition (PO, 67)? Are concepts always evolving, always in a state of flux?

2. We may say, with Wittgenstein, that there is “transition” (LWPP I, 932) with every new application of words, a “gradual sloping of concepts” (LWPP I, 765). But is there also some radical break? Do revolutions occur in the use of concepts? Wittgenstein says that “[t]here is a continuum from an error to a different kind of calculation” (RC III, 293). Does this mean that it is arbitrary, or that it depends on the community of language users, whether a particular move is mistaken or revolutionary?

36

Vasso Kindi

3. Sometimes we may be inclined to say that two different concepts are attached to the same sign (for instance, ‘bank’), but in other cases we would rather say that different uses make up a single concept. Wittgenstein, for instance, says that understanding poetry and understanding empirical sentences make up a single concept of understanding. In the case of ‘is’, however, Wittgenstein makes a different pronouncement. He says that he prefers to say that the term ‘is’ has two different meanings rather than two different kinds of use, and so the term can be considered to name two different concepts (‘is’ as copula and ‘is’ as a sign for identity). How do we make these pronouncements? Are there essential and non-essential differences among the uses? “This distinction,” Wittgenstein says, “does not appear until we begin to talk about the purpose of a word” (LWPP II, 2).

Wittgenstein, in addressing these issues, laid emphasis on the decisions we are led to make: “I reserve the right to decide in every new case whether I will count something as a game or not” (PG, 117). He maintained that the concepts we have and use are not dictated by the world.

Do not believe that you have the concept of colour within you because you look at a coloured object—however you look. (Any more than you possess the concept of a negative number by having debts.) (Z, 332)

Would it be correct to say our concepts reflect our life? They stand in the middle of it. (LWPP II, 72)

Then is there something arbitrary about this system? Yes and no. It is akin both to what is arbitrary and to what is non-arbitrary. (Z, 358)

[P]erhaps one thinks that it can make no great difference which concepts we employ. As after all it is possible to do physics in feet and inches as well as in metres and centimetres; the difference is merely one of convenience. But even this is not true if, for instance, calculations in some system of measurement demand more time and trouble than it is possible for us to give them. (PI, 569)

Wittgenstein’s point is that we do not make arbitrary, whimsical decisions as regards the concepts we use; rather, we take huge responsibility in the moves we make, which are constrained and influenced by our interaction with the world, by our education and training, by our purposes and interests.11 “We are playing with elastic, indeed even flexible concepts. But this does not mean that they can be deformed at will and without offering resistance, and are therefore unusable” (italics in original, LWPP II, 24). So, if concepts are understood as elastic, as being “dragged away” (Wilson 2006, 330) from their initial implementation to new territories and different directions because of our diverse interests and aspirations, then their historical investigation should track the complicated paths of their development and should not aim to anachronistically discover (or construct) some pristine fixed crux which invariably appears through time. Concepts may better be seen as “the delicate filigree of patchwork arrangement” (ibid., 368) which unfolds, driven by contingent application, on the rough ground of practice.

11 Brigandt (this volume) and Steinle (this volume) unpack, with concrete examples, the significance of goals associated with the use of concepts. Brigandt says that specific investigative and explanatory aims underlie the use of concepts and shows, concentrating on concepts from biology, how epistemic goals result in semantic variation and semantic change. Steinle explicitly calls concepts tools which enable specific tasks. He presents the story of the concept of magnetic pole and maintains that if one is interested in the role of concepts in scientific practice, one should take into account the goals that shape them and the goals they serve.

5. Studying Concepts Historically

How we understand concepts affects the way we study them historically. If concepts are understood as vessel-like entities which contain a core (the necessary and sufficient conditions of proper application) that survives intact in the course of time, what we are after when we study them historically is to identify this core in every occurrence of the concept-words under study. This is how we are supposed to ensure that we are talking of the same concept. If, however, we understand concepts as uses, no such core is to be found. Every use is different, even if slightly, from others, and the unity of the concept is secured not by some common element but by the practice in which the concept lives. The language users treat certain uses of the concept-words as belonging together. The effort of the historian, under this understanding of concepts, is to understand, learn and describe the uses of words in their sites. The result of such effort is less emphasis on recurrent identity and more attention to difference.

Some scholars, having recognized that no common core is to be found intact in the various uses of concepts, have resorted to Wittgenstein’s notion of family resemblance in their effort to salvage some elements of continuity in the different applications of concept-words. Instead of features of identity, they seek similarities, criss-crossing and overlapping, in the instances of word application in order to account for the unity of concepts. Often, however, the notion of family resemblance is invoked as an uneasy substitute and an easy way out from the perplexity we find ourselves in when we are unable to furnish necessary and sufficient conditions that would allegedly justify the proper application of our concepts.12 Saying that a concept is a family resemblance concept is usually a license to avoid the hard work of justifying why specific boundaries of concepts have been drawn,13 of explaining why certain phenomena or certain uses are subsumed under the same concept. One can always find resemblances, especially when one has decided in advance what instances to include under a concept. One can find resemblances even among instances that fall under different concepts. So, to avoid begging the question or being led astray by ubiquitous overlapping similarities, one needs to do the hard work of showing and defending why one includes certain instances of use under a certain concept.14 Historians, for instance, need to explain why they take certain uses of words to belong to one concept, certain other uses to form modifications or extensions of the concept, and others still to form a completely different unity. Their decision, just like the decision of the practitioners themselves, is not arbitrary and whimsical since they are under constraints which are formed by the innumerable ramifications and complications resulting from every course of action they take. I talk about a decision in order to contrast this approach with the typical realist one, irrespective of whether this realism is essentialist or of the family resemblance type.
12 I say that necessary and sufficient conditions ‘allegedly justify’ the proper use of concepts because even if we could find necessary and sufficient conditions, we would again be unable to justify definitively proper concept application.

13 M. Forster speaks of “the fig leaf of a claim to be using a family resemblance concept” when one does not have a genuine concept at all (Forster 2010, 83). He says that an appeal to the family resemblance character of a concept may leave one wondering “whether there is anything to this appeal or whether it is not instead merely a mask for a set of arbitrary choices or preferences” (ibid., 83).

14 Paradigmatic works in that respect are Skinner’s in the area of political philosophy (for instance, the study of the concept of liberty or virtù) and Nersessian’s in philosophy of science (for instance, the study of the concept of field). Also Wilson (2006) provides a rich and detailed analysis of concepts such as ‘weight’, ‘square root’, ‘rigidity’, ‘solidity’, ‘hardness’, ‘force’, etc.

In the realist approach of either type,
it is assumed that certain features, either the overlapping similarities which are not common to all the individuals under a concept or the ones that comprise the necessary and sufficient conditions that are present in all the instances to which the concept applies, are there in the world awaiting to be recognized and picked up. It is believed that objective common features are what set limits to the application of our concepts.15 But similarities cannot be recognized irrespective of our interests and purposes. “What resemblances we are going to count as family resemblances, cannot be determined simply by looking at the objects themselves. (…) there is nothing about chess, cricket and patience that makes us call them games; any more than there is something about real and imaginary numbers which makes us call them numbers” (Tessin 1996, 65). “People call certain things games for certain purposes and not for others” (Beardsmore 1992, 142). Similarities are recognized and established by following rules. “The use of the word ‘rule’ and the use of the word ‘same’ are interwoven” (PI, 225). Following rules is not an independent condition for recognizing similarities, nor is recognizing similarities a condition for following rules. The two go together: we recognize similarities to continue to apply a rule, and we need to be led by a rule to recognize similarities.

The notion of family resemblance should not be taken as a suggestion to do empirical work and find out whether a particular concept is a family resemblance concept or not. We should not, that is, look for certain characteristics which, resembling each other across instances, mark off the family resemblance concepts. Wittgenstein used the notion of family resemblance to combat an essentialist understanding of concepts, i. e., to show that the unity of concepts is not secured by identifying some necessary and sufficient characteristics.
But, in my view, he was equally opposed to the idea that the unity of concepts is secured instead by identifying distributed similarities, overlapping and criss-crossing. It is not the world which fixes our concepts. They are not formed by abstracting from the world, they do not reflect the world, they do not even reflect our life; they stand in the middle of it (RC III, 392). They are part of it, they participate in shaping it.

Those who connect family resemblance concepts to particular similarities which are either projected or considered objective sometimes fear that family resemblance cannot explain how two instantiations of a concept, at the extreme ends of an extended historical period, may share no common characteristic and yet belong to the same concept. J.-M. Kuukkanen (2008), for instance, has this view and, to solve the problem, he proposes that all concepts have a core which all instantiations must satisfy in order to belong to “the same concept.” He believes that an intelligible history of thought requires the postulation of stable units, which comprise an invariable set of features across historical periods. “[I]f we wish to write histories of ideas or concepts, we have to assume that there are invariable ideas or concepts in history after all” (Kuukkanen 2008, 367). He thinks that this is the only way we can make sense of conceptual change. Change requires something to remain unchanged and, in order to be able to speak of change, and not replacement, we need to postulate an invariable core. This is to assume that the unity of concepts is secured, objectively or conventionally, by a set of core properties captured by a definition. But I do not think that the identity of concepts through time will be saved by hypostatizing them. A different approach, such as Wittgenstein’s, rejects the idea that the unity of concepts is secured by the essentialist supposition of an invariable core. Wittgenstein’s rule-following considerations are supposed to show that even if we had such an essentialist definition, which would serve as a rule dictating correct application, this definition would not have guaranteed the non-arbitrary subsumption of instances under concepts.

15 M. Forster, for instance, seems to be saying that a family resemblance term is “warranted in its applications not by any single common feature but instead by various features which are related in a criss-crossing or overlapping manner, and which moreover fail to be expressible in any statement of non-trivial essential necessary and sufficient conditions for the term’s applications” (Forster 2010, 76). As I will explain later, in my view, the application of a general term is not warranted by objective similar features in the individuals to which it applies, but by the following of rules which establish the proper use of the relevant terms.
The definition can be variously interpreted and only practice can secure the correct application of a rule. Rejecting essentialism, that is, the pursuit of a fixed core of properties present in all instantiations, does not mean that one should go after similarities (family resemblances) that connect, in a chain-like manner, the instantiations that fall under a concept. Overlapping similarities, which are supposed to be detected and read off from the world, are not better suited to account for the unity of concepts. The unity of concepts assumed or proposed by historians or philosophers who study the history of concepts has to be illustrated and argued for and, in borderline cases, can very well be contested. That is, historians and philosophers, after they learn the language they study, need to make a case
for a common or varied use based on the circumstances and texts they examine. The unity, development and change of concepts are the responsibility of those who employ them and those who try to understand them.

It should be noted that Wittgenstein’s considerations concerning rule-following and family resemblance apply to both ‘closed’ and ‘open’ concepts. It is usually maintained that closed concepts are defined by necessary and sufficient conditions while open ones are rather explained by offering examples of application.16 Definitions and examples, however, can equally be variously interpreted, which means that closed concepts are not better off compared to the open ones in terms of how they are applied.17 For Wittgenstein, all concepts are open in a certain sense, namely, in the sense that any new application can extend a concept to new territories without jeopardizing its unity and proper application.18 Cavell captures this idea of extension or projection to a new application when he says that “[a]ll language is metaphorical” (Cavell 1979, 190). What he means is that every new application of a term is literally a transfer into a new area. The different applications shape the trajectory of the concept. We can draw strict boundaries if we need them (PG, 117), but even in these cases our concepts are elastic and flexible (cf. LWPP I, §§ 246 f; 340).

16 Dirk Schlimm (this volume), talking about mathematical concepts, distinguishes between a Fregean and a Lakatosian account of concepts. Fregean concepts are fixed and sharply determinate and, so, adequate to enter proofs and logical inferences, while Lakatosian concepts are open and flexible, in the sense that they can evolve, be transformed, stretched or contracted. The former are introduced by a descriptive characterization while the latter via a paradigm. In my view, Lakatos’ understanding of open concepts differs from the one I attribute to Wittgenstein. In Lakatos’ case, concepts are open when, as they evolve, they become, for instance, more inclusive. This means that their defining characteristics and their definitions change. In Wittgenstein’s case, concepts are open not because their definitions are enriched but because the way they are applied is not determined by a definition even if a definition is available.

17 For an analysis of this issue see Huff (1981).

18 This understanding of openness should be distinguished from Waismann’s ‘open texture’. According to Waismann, all empirical concepts are open textured because their definitions cannot be exhaustive, that is, they cannot cover all possibilities. “[W]e can never exclude altogether the possibility of some unforeseen situation arising in which we shall have to modify our definition. Try as we may, no concept is limited in such a way that there is no room for any doubt” (Waismann 1978, 120). Waismann speaks of open concepts because the world may surprise us in unexpected ways. Wittgenstein’s understanding of concepts as open does not depend on how the world behaves. Wittgenstein takes concepts to be open because their application can be extended in ways that cannot be specified in advance. Yet, this does not imply that their application is doubtful. For the differences between Wittgenstein’s conception and Waismann’s ‘open texture’ see Baker and Hacker (1983, 170 f). Thomas Kuhn, discussing Waismann’s open texture, says that in case we have the bizarre possibilities that Waismann imagines, we would be forced to change the concept instead of speaking of open texture (Thomas Kuhn’s Papers, MC 240, box 8, Massachusetts Institute of Technology, Institute Archives and Special Collections, Cambridge, Massachusetts).

19 Begriffsgeschichte, the history of concepts as it developed in Germany, associated most prominently with the work of R. Koselleck, also places concepts in a semantic field, studying them in relation to social settings and developments. Yet, as critics such as Bevir (2000, 278) have pointed out, Begriffsgeschichte remains committed to extracting individual concepts from a synchronic linguistic context for diachronic treatment. Citing Pim den Boer, Bevir claims that Koselleck ascribes to concepts a “life span” and “vital properties” so that concepts seem to become “entities that lead a life of their own” (ibid., 279). He also thinks that Koselleck’s Begriffsgeschichte encourages a form of reductionism by assimilating individuals “all too readily to a monolithic langue or mentalité identified with a given social formation” (ibid., 281). In his view, Begriffsgeschichte “is prone to detach concepts from their settings in a way that encourages a neglect of the constitutive role of ideas and beliefs throughout our social life: social formations are in some way separated from languages and meanings” (ibid., 282).

20 This is what Wittgenstein says in his manuscripts: “If we were asked … about the essence of punishment, essence of revolution, of knowledge, of cultural decline or refined sense of music – we should not try to give something common to all cases, not what they all really are, that is an ideal which is contained in them all; but instead of this examples, as it were centres of variation” (cited in Kuusela 2008, 173).

6. Conclusion

Given an understanding of concepts as open and flexible, historians, in tracing the transposition and transformation of concepts, instead of trying to find either an invariable core that persists in all instances or the overlapping similarities that connect every instance, should see concepts as occurring in a field surrounded by others (Wittgenstein 1988, 247),19 as centers of variation.20 And when they are interested in conceptual change, they may adopt, in the same spirit, the perspective of some kind of intellectual ecology suggested by Toulmin (1970). Toulmin talks of ‘conceptual populations’ and not of isolated, individual
concepts. These populations develop by processes of conceptual variation and selective perpetuation (ibid., 564). A given repertory of established concepts is surrounded by a pool of conceptual variants, or possibilities, from which scientists select the ones they will adopt based on various, and perhaps conflicting, criteria, which may vary from science to science and from epoch to epoch (ibid., 562).

Toulmin’s ecology connects with Hacking’s, who saw the project of philosophical analysis as the study of words in their sites. This study is not the study of abstract entities floating in abstract space in isolation, or in logical relations with other similarly understood concepts. It is the study of what we do with words, the study of word uses in concrete practices. The task of historians and philosophers, then, fulfills the requirement of hard work recommended by Ian Hacking (1990, 362). Embracing a complex methodology, they display the networks of possibilities and constraints which condition the emergence, formulation and change of our concepts (ibid., 360).

Acknowledgements

I would like to thank Theodore Arabatzis, Michael Forster, the contributors and the editors of this volume for their comments and criticism, which helped me to improve the paper considerably.

Reference List

Baker, G. P. / Hacker, P. M. S. (1983), An Analytical Commentary on Wittgenstein’s Philosophical Investigations, Oxford: Blackwell.
Beardsmore, R. W. (1992), “The Theory of Family Resemblances.” In: Philosophical Investigations 15 (2), 131–146.
Bevir, M. (2000), “Begriffsgeschichte.” In: History and Theory 39 (2), 273–284.
Bevir, M. (2008), “What is Genealogy?” In: Journal of the Philosophy of History 2, 263–275.
Brigandt, I. (this volume), “The Dynamics of Scientific Concepts: The Relevance of Epistemic Aims and Values.”
Carbone, M. (2004), The Thinking of the Sensible: Merleau-Ponty’s A-Philosophy, Evanston: Northwestern University Press.
Carnap, R. (1969), The Logical Structure of the World (Der Logische Aufbau der Welt), English translation by R. A. George, Berkeley: University of California Press.
Carnap, R. (1981/1938), “Logical Foundations of the Unity of Science.” In: Hanfling, O. (ed.), Essential Readings in Logical Positivism, Oxford: Blackwell, 112–129.
Cartwright, N. et al. (1996), Otto Neurath: Philosophy between Science and Politics, Cambridge: Cambridge University Press.
Cavell, S. (1979), The Claim of Reason, Oxford: Oxford University Press.
Caygill, H. (1995), A Kant Dictionary, Oxford: Blackwell.
Cohen, M. R. (1927), “Concepts and Twilight Zones.” In: The Journal of Philosophy 24 (25), 673–683.
Davidson, D. (1984), “On the Very Idea of a Conceptual Scheme.” In: Inquiries into Truth and Interpretation, Oxford: Clarendon Press.
Feigl, H. (1970), “The ‘Orthodox’ View of Theories.” In: Radner, M. / Winokur, S. (eds.), Minnesota Studies in the Philosophy of Science, vol. IV, Minneapolis: University of Minnesota Press, 3–16.
Forster, M. (2010), “Wittgenstein on Family Resemblance Concepts.” In: Ahmed, A. (ed.), Wittgenstein’s Philosophical Investigations: A Critical Guide, Cambridge: Cambridge University Press, 66–87.
Foucault, M. (1984), “Nietzsche, Genealogy, History.” In: Rabinow, P. (ed.), The Foucault Reader, New York: Pantheon Books, 76–100.
Foucault, M. (1996), “What Is Critique?” In: Schmidt, J. (ed.), What Is Enlightenment? Eighteenth-Century Answers and Twentieth-Century Questions, Berkeley: University of California Press, 382–398.
Frege, G. (1979), “Comments on Sense and Meaning.” In: Hermes, H. / Kambartel, F. / Kaulbach, F. (eds.), Gottlob Frege: Posthumous Writings, English translation by P. Long / R. White, Oxford: Blackwell, 118–125.
Frege, G. (1980), “Begriffsschrift, a Formula Language, Modeled upon that of Arithmetic, for Pure Thought.” In: Heijenoort, J. van (ed.), Frege and Gödel: Two Fundamental Texts in Mathematical Logic, Cambridge, Mass.: Harvard University Press.
Frege, G. (1997), “Introduction to Logic.” In: Beaney, M. (ed.), The Frege Reader, Oxford: Blackwell, 293–298.
Gadamer, H.-G. (1992), “The Beginning and the End of Philosophy.” In: Macann, C. (ed.), Martin Heidegger: Critical Assessments, vol. I, London: Routledge.
Geach, P. (2001), Mental Acts, South Bend, IN: St Augustine Press.
Geuss, R. (2009), “Goals, Origins, Disciplines.” In: Arion 17 (2), 1–24.
Glock, H.-J. (2010), “Wittgenstein on Concepts.” In: Ahmed, A. (ed.), Wittgenstein’s Philosophical Investigations: A Critical Guide, Cambridge: Cambridge University Press, 88–108.
Hacker, P. M. S. (1996), Wittgenstein’s Place in Twentieth-Century Analytic Philosophy, Oxford: Blackwell.
Hacking, I. (1982), “A Leibnizian Theory of Truth.” In: Hooker, M. (ed.), Leibniz: Critical and Interpretive Essays, Minneapolis: University of Minnesota Press, 185–208.
Hacking, I. (1983), Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
Hacking, I. (1990), “Two Kinds of ‘New Historicism’.” In: New Literary History 21 (2), 343–364.
Hahn, H. / Carnap, R. / Neurath, O. (1996), “The Scientific Conception of the World.” In: Sarkar, S. (ed.), The Emergence of Logical Positivism: From 1900 to the Vienna Circle, New York & London: Garland Publishing, 321–340.
Hempel, C. (1970), “On the ‘Standard’ Conception of Scientific Theories.” In: Radner, M. / Winokur, S. (eds.), Minnesota Studies in the Philosophy of Science, vol. IV, Minneapolis: University of Minnesota Press, 142–163.
Huff, D. (1981), “Family Resemblances and Rule-Governed Behavior.” In: Philosophical Investigations 4 (3), 1–23.
Hull, D. (1992), “Testing Philosophical Claims about Science.” In: PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, vol. II, 468–475.
James, W. (1996), Some Problems of Philosophy: A Beginning of an Introduction to Philosophy, Lincoln, Nebraska: The University of Nebraska Press.
Kelley, D. R. (2005), “Between History and System.” In: Pomata, G. / Siraisi, N. G. (eds.), Historia, Cambridge, Mass.: MIT Press, 211–237.
Kierkegaard, S. (1966), The Concept of Irony, New York: Harper and Row.
Kuhn, T. S. (1977), “A Function for Thought Experiments.” In: The Essential Tension: Selected Studies in Scientific Tradition and Change, Chicago: The University of Chicago Press, 240–265.
Kuhn, T. S. (2000a), “What Are Scientific Revolutions?” In: Conant, J. / Haugeland, J. (eds.), The Road since Structure, Chicago: The University of Chicago Press, 13–32.
Kuhn, T. S. (2000b), “The Trouble with the Historical Philosophy of Science.” In: Conant, J. / Haugeland, J. (eds.), The Road since Structure, Chicago: The University of Chicago Press, 105–120.
Kuukkanen, J.-M. (2008), “Making Sense of Conceptual Change.” In: History and Theory 47, 351–372.
Kuusela, O. (2006), “Do Concepts of Grammar and Use in Wittgenstein Articulate a Theory of Language of Meaning?” In: Philosophical Investigations 29 (4), 309–341.
Leibniz, G. (1996), New Essays on Human Understanding, edited by P. Remnant / J. Bennett, Cambridge: Cambridge University Press.
Lewis, C. I. (1956), Mind and the World Order: Outline of a Theory of Knowledge, New York: Dover.
Machery, E. (2009), Doing without Concepts, Oxford: Oxford University Press.
Nersessian, N. (1985), “Faraday’s Field Concept.” In: Gooding, D. / James, F. A. J. L. (eds.), Faraday Rediscovered, 175–187.
Nietzsche, F. (1999), “On the Truth and Lies in a Nonmoral Sense.” In: Breazeale, D. (ed.), Philosophy and Truth, Amherst, New York: Humanity Books.
Perniola, M. (1995), Enigmas: The Egyptian Moment in Science and Art, English translation by C. Woodall, London: Verso.
Schlimm, D. (this volume), “Mathematical Concepts and Investigative Practice.”
Steinle, F. (this volume), “Goals and Fates of Concepts: The Case of Magnetic Poles.”
Tessin, T. (1996), “Family Resemblances and the Unity of a Concept.” In: Philosophical Investigations 19 (1), 62–71.
Toulmin, S. (1970), “From Logical Systems to Conceptual Populations.” In: PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, vol. II, 552–564.
Waismann, F. (1978), “Verifiability.” In: Flew, A. G. N. (ed.), Logic and Language, Oxford: Blackwell, 117–144.
Waismann, F. (1997), The Principles of Linguistic Philosophy, London: Macmillan.
White, M. (1945), “The Attack on the Historical Method.” In: The Journal of Philosophy 42 (12), 314–331.
Wilson, M. (2006), Wandering Significance: An Essay on Conceptual Behaviour, Oxford: Oxford University Press.
Wittgenstein, L. (1958), Philosophical Investigations, English translation by G. E. M. Anscombe, Oxford: Blackwell. [Abbreviated as PI]
Wittgenstein, L. (1965), The Blue and Brown Books, New York: Harper & Row. [Abbreviated as BB]
Wittgenstein, L. (1974), Philosophical Grammar, edited by R. Rhees, English translation by A. Kenny, Berkeley: University of California Press. [Abbreviated as PG]
Wittgenstein, L. (1977), Remarks on Colour, edited by G. E. M. Anscombe, English translation by L. L. McAlister / M. Schättle, Oxford: Blackwell. [Abbreviated as RC, cited by part and paragraph]
Wittgenstein, L. (1978), Remarks on the Foundation of Mathematics, English translation by G. E. M. Anscombe, Oxford: Blackwell. [Abbreviated as RFM]
Wittgenstein, L. (1981), Zettel, edited by G. E. M. Anscombe / G. H. von Wright, English translation by G. E. M. Anscombe, Berkeley: University of California Press. [Abbreviated as Z]
Wittgenstein, L. (1988), Wittgenstein’s Lectures on Philosophical Psychology 1946–47, edited by P. T. Geach, notes by P. T. Geach / K. J. Shah / A. C. Jackson, New York: Harvester.
Wittgenstein, L. (1990), Last Writings on the Philosophy of Psychology, vol. I, edited by G. E. M. Anscombe / G. H. von Wright / H. Nyman, Oxford: Blackwell. [Abbreviated as LWPP I]
Wittgenstein, L. (1993), Last Writings on the Philosophy of Psychology, vol. II, edited by G. H. von Wright et al., Oxford: Blackwell. [Abbreviated as LWPP II]
Wittgenstein, L. (1993), “Wittgenstein’s Lectures in 1930–33. By G. E. Moore.” In: Klagge, J. / Nordmann, A. (eds.), Philosophical Occasions, Indianapolis: Hackett, 46–114. [Abbreviated as PO]
Wrathall, M. (2004), “Heidegger on Plato, Truth and Unconcealment: The 1931–32 Lecture on the ‘Essence of Truth’.” In: Inquiry 47, 443–463.

Archival Material: Thomas Kuhn’s Papers, MC 240, box 8. Massachusetts Institute of Technology, Institute Archives and Special Collections, Cambridge, Massachusetts.

Rethinking Scientific Concepts for Research Contexts: The Case of the Classical Gene

Miles MacLeod

The point of this paper is to take some steps towards a reconsideration of the term ‘concept’ as it applies to scientific practice. If an understanding of scientific practice over periods of research is our goal, then in fact we need to broaden our notion of what we mean by concept. Problems have arisen precisely because the notions of concept we have traditionally applied have their origins in philosophy of language, not in considerations of scientific practice. They have presumed that concepts are nothing more than representations, or at least containers of them. Treating concepts in this way can at best be used to analyse the products of science, the structure and content of the theories and models that come to be presented as the fruits of research. While one cannot discount a part for this kind of analysis, it is constrained in dealing with research contexts in which the development of ideas is ongoing, and in which social, political and epistemic factors are interconnected. Such dynamic research contexts, I shall argue, cannot be explicated fully by a descriptive theory of meaning or, more to the point, a vessel or receptacle notion of concept. This criticism dovetails with the more recent moves by philosophers and historians such as Hacking (as well as some of the contributors to this book) away from a representation-centered account of science. Through an historical re-examination of the classical gene, my aim is to propose an alternative conception of ‘concept’ more reflective of historical patterns of development. Concepts do function to represent aspects of the world, but this paper will argue that in many cases this is only one of several functions of concepts, rather than capturing the entirety of what concepts do or are. By contrast, the notion proposed here construes concepts through their part in propagating and facilitating ongoing research.
Such research is characterized generally by a long-term, investigative perspective on the part of researchers towards the complex phenomena under examination: they are well aware of the limits of their knowledge and of the information they have at any point in time, and of the primary


need for conceptual, explanatory and experimental resources to pursue them further. My account shifts weight to those features of concepts that are central to their ability to contribute to ongoing research activities. These features are (1) their open-endedness and (2) what I call their central epistemic attributes: those properties and relations assigned to the purported referent that partake centrally in the formation of experimental methods, identification conditions, explanations and conceptual schemes employing the concept. I will begin with a brief account of what I think is wrong with the classical notion of concept, before fleshing out these problems in the context of the classical gene from 1900 until the 1960s. Finally I will generalize from this case the properties or features of an alternative notion of concept, in terms of open-endedness and epistemic attributes, that gives a potentially better notion of concept for understanding scientific practice.

1. What’s Wrong with the Classical Construal of Concepts?

Studying scientific change and development in terms of conceptual or theoretical development, rather than, say, experimental practices, has since the Vienna Circle been at the heart of methodology for philosophers of science. This practice has in many cases relied upon a particular construal of concepts, which supports a particular methodological approach. Philosophers have treated concepts as vessels or receptacles of representations, a treatment closely associated with a descriptive theory of meaning.1 In these vessels, which bear particular names like ‘electron’, the beliefs, claims and suppositions of scientists about elements of the world are packed.2 Unraveling this content or meaning involves a rational reconstruction of those beliefs, through attention to the concept’s broader instantiation in a network of concepts, which, in its extreme form, depends on the entirety of the theory in which it is embedded (see Kuhn 1970; Feyerabend 1958). Such networks are read off science at a particular point in time, by correlating journal articles and authoritative texts, in order to give a reconstruction that broadly fits the views of particular individuals and research communities. In this way concepts are strongly identified with, and affixed to, the substantive representational pictures of the underlying elements that can be reconstructed. Over time concepts are tracked and compared by treating them as vessels of such representations; their changing content is discussed philosophically, and conclusions about scientific practice and the way research functions are reached.

Treating concepts as essentially passive stores or repositories of information, defined at any one time by the content attributable to them, is basic to philosophy of science. The study of the history and development of a scientific concept in philosophical circles has for the most part involved tracking concepts as vessels, reconstructing their meaning at particular points in time. The realism debate, for instance, has largely functioned this way when treating unobservable-entity and natural-kind concepts; likewise have those interested in the inter-theoretic reduction of concepts. That concepts change has been generally accepted as a common point of agreement, but philosophers have argued over whether or not their reference might nonetheless remain consistent. Thus causal theories of reference accept the equation of concepts with their meanings at any one time (the stereotypes of Putnam 1973), and hence accept conceptual discontinuity, but argue that reference may stay constant due to causal relations between a use of the term and a baptismal event.

The chief problem is that reifying concepts historically as sequences of contextually defined representations in this way tends to overstress conceptual discontinuity that was never apparent or so important to the practicing researchers themselves, and loses the vital information that held these processes together in the first place. It does so by presenting scientific ideas as “quick frozen at one momentary stage” (Hempel 1970, 148) rather than analyzing them in their continual development.

1 See Frege, Russell, and also Quine as particularly famous proponents of descriptive theories of meaning. The notion of concept in the context of its historical articulations is explored by Vasso Kindi in this volume.
2 The vessel notion I’m referring to captures the idea that concepts are principally passive repositories of information that are periodically updated to better fit the world. This fits Frege’s viewpoint (‘ring-fenced’, as James referred to it) but also those, like Wittgenstein, that argue for less discrete boundaries or vagueness. I think either, applied to practice, misses a critical aspect of what concepts are in practice, which comes through their integration with processes of investigation and the exploration of phenomena. See also Feest and Nersessian in this volume, and also Schlimm in the context of mathematics. I don’t claim, however, to be speaking about basic psychological structures that characterize our general cognition, but rather about scientific constructions, which are generated according to methodological principles and the contingencies of research.


This characterization might sound like a caricature of philosophy of science, and many thinkers, like Rheinberger (1997), Arabatzis (2006), or those doing cognitive history like Nersessian (1984), have been pulling away from it. However, I think it has an element of truth in it, in that philosophers have typically approached scientific products as a series of discontinuous representations, generating a massive industry in trying to connect these representations back together through explanatory, theoretical, referential or linguistic connections. In doing so, they have tended to disregard the possibility that concepts have other functions over and above carrying representations, and may in fact be closely tied to the manner in which they resource and support ongoing investigation. This paper argues that we need a notion of ‘concept’ that better captures the ways in which concepts function in specific research contexts. These research contexts are driven by shared goals, common objects of investigation, and phenomena to be understood, which reach across the organizational boundaries of research: individuals, research groups and communities, theoreticians and experimentalists. Both Brigandt and Steinle in this volume explore more thoroughly the roles of epistemic aims and goals in the development and stability of concepts.3 These shared goals and objects are sources of continuity and coherence. The analysis I present here complements theirs, but rather than focusing on how such goals structure the dynamics of conceptual change, particularly over a longer term, I focus specifically on how concepts can become associated with certain representational elements considered central to achieving these ends.

2. Rethinking the ‘Classical Gene’ (1900s to 1960s)

In order to convince the reader of the limitations of this classical notion of concept for presenting and analyzing historical episodes in the life of a concept, and of what a reformulation should look like, I want to revisit the concept of the gene between the 1900s and 1960s. This will aspire to be neither a complete history nor a narrative account.

3 I think in fact that Love and Brigandt’s notion of concepts (see Brigandt in this volume) as establishing problem agendas driving research could add useful depth to what I say here, as generating or framing the phenomena in a particular way and providing resources for their eventual understanding.


These exist plentifully already.4 Rather, it is intended to illustrate my own account. I will argue that this account leads to an alternative perspective on this history. It is fair to say that the gene has proved one of the most challenging and controversial concepts for historians and philosophers of science alike to get a grip on. It has manifested an extraordinary amount of change and diversification, leading philosophers to argue for the existence of a plethora of different concepts in the modern context (e.g., Griffiths/Neumann-Held 1999 or Stotz/Griffiths/Knight 2004). If one uses a traditional concepts approach, reconstructing models of the gene at particular points in time before the 1960s, many discontinuities appear: from Bateson’s presence/absence unit-character concept to Morgan’s chromosomal locus concept governed by many-to-many relations between genes and traits; to Hermann J. Muller’s chemical gene concept in the 1920s; to the failure of the one-gene-one-enzyme model in the 1960s and the complexity that followed after. This is not to mention the many models of the gene that cropped up in each of these ‘paradigms’. Philosophers report these discontinuities as significant historical events and puzzle over their methodological and ontological ramifications. On the other hand, scientists like Morgan, Muller or Benzer don’t report their variations of the concept as creating significant disjunctions with the work of prior researchers. Rather they report their discoveries and reformulations of the gene as part of a continuous process of open-ended analysis, which they have been taking further.
The disconnect that philosophers and historians find, by contrast, stems, I believe, from the over-reliance on the traditional notion of concept (as a vessel of representations) mentioned above, and on the historical reconstruction of the gene concept’s history as a series of frozen historical cross-sections, without keying in to the fact that the gene concept was part of a robust process of research and investigation into heredity, development and, more broadly, biological organization. This is why we should suspect that the continuity was sustained at the methodological level, in terms of central resources provided by certain ideas about the gene, rather than by its particular representational content in the heads of researchers or groups of them at particular points in time. As I will show, the gene case study leads us away from the classical picture towards an alternative picture of concepts which sees them more as epistemic structures than vessels. Firstly, the open-endedness with which the concept was treated is a very salient part of its history. Some philosophers and historians commenting on the history of the gene have noted this. Rheinberger (2000) suggests that the classical and molecular gene concepts were treated as epistemically vague or fuzzy, not reducible to any particular definition, in order to leave space for the variation of the objects under study, namely genes. Genes needed constantly to be reconceived in new experimental contexts. Similarly Burian (1985, 2005) explains its history by relaxing the demand for referential precision on practitioners. Secondly, if we look for elements of stability of the gene concept underlying this open-endedness, we find them in certain properties and relations, which I call ‘central epistemic attributes’ of the gene. These were elements central to the application of the gene concept in the interpretation and understanding of hereditary phenomena (for composing theories and models of them) and in the construction of experimental techniques for probing and testing it. Given the complexity of heredity and development, and the ever-accumulating new phenomena and new information in this context of ongoing research, their part at the center of the formulation of theoretical and experimental techniques was more important than having a particular stable, well-developed representational account, and they were harder to shift by virtue of their role assisting, even constituting, investigation. These features included, firstly, the functional/causal relation between genes and traits and the gene’s qualities as a unit of gametic material, and, secondly, the gene’s stable invariance across generations. They combined to make the gene concept a central epistemic resource.

4 See, for instance, Carlson (1966), Sturtevant (2001), Portin (2002), and Falk (1986), as a few examples.
In the following, I’ll discuss the open-endedness of the gene concept (2.1), its central epistemic attributes (2.2), and show how these together give us a gene concept more reflective of historical developments (2.3).

2.1 Open-Endedness and the Classical Gene

One way in which the gene concept is not easily treated as simply a vessel of representations is through its open-endedness. By this I mean that the concept was employed by the community of researchers independently of the beliefs (or lack of beliefs) that they had, at various stages, about the gene’s structure, nature, or causal operation. This was nonetheless supported by a commitment to the existence of such information,


mediated by a belief in the concept’s correspondence to a reality. This information would come in the form of better representations in the course of further investigations. The concept didn’t, however, strongly fix or predetermine what they would be. As Carlson put it in 1966:

The gene has been considered to be an undefined unit, a unit character, a unit factor, a factor, an abstract point on a recombination map, a three-dimensional segment of an anaphase chromosome, a linear segment of an interphase chromosome, a sac of genomeres, a series of linear sub-genes, a spherical unit defined by a target theory, a dynamic functional quantity of one specific unit, a pseudoallele, a specific chromosome segment subject to position effect, a rearrangement within a continuous chromosome molecule, a cistron within which fine structure can be demonstrated, and a linear segment of nucleic acid specifying structural or regulatory product.

What these observations suggest is that the gene concept from 1902 up until the 1960s remained independent of such structural and causal representations of the gene, to the extent that the gene never was conceived primarily in terms of one or the other by the research community, at least not for long. This we witness in the lack of fixity and permanent acceptance of these putative representations of it. But this independence was a very strong feature of the concept from the start, in the form of a recognized incompleteness of knowledge about the structure and powers of the gene in the context of ongoing investigation. The construction of the gene concept at the hands of people like Bateson and Johannsen, from unit-factors and unit-characters through to the classical ‘gene’ of the Drosophila school, specifically disclaimed any postulation of what genes might be. As Bateson put it, “of the nature of the physical basis of heredity we have no conception at all” (Bateson 1902, 2). Johannsen famously labeled it an “Etwas” (Johannsen 1909, 123). For some this was a lack of knowledge. Others opted to avoid speculations about the gene’s chemical basis, wishing to steer clear of old metaphysical ideas about particulate units of inheritance. Likewise the specific causal operation and action of the gene was treated open-endedly in this respect. Morgan stated in 1926 that “the theory of the gene … states nothing with respect to the way in which genes are connected with the end-product or character” (Morgan 1926, 26). In the context of the limited and imprecise knowledge that characterized the exploration of hereditary phenomena during this period, the gene was never affixed strongly to, or identified with, particular causal or material representations. The only shift in this regard came early, with its more specific


identification with a particular element of germinal material, namely the chromosomes (Sutton 1902, 1903; Boveri 1904).5 But this didn’t give anything like a chemical representation of the gene, given the complexity of, and lack of knowledge about, the chromosomes themselves. It was nonetheless an important event, giving the researchers who accepted it more leverage on the study and identification of the gene. The gene was otherwise treated as open to further investigation in terms of the chemistry and structure within the chromosomes that defined it, and in terms of its causal properties and powers. Theories on these came and went. Indeed, theories about genetic operation went through many developments before the discovery of DNA, from an original one-to-one relation between genes and traits with Bateson, to a many-to-many relation with Johannsen and Morgan, through to more complex specific theories about enzymes. Carlson, as we noted, documents the various pictures of gene structure that emerged. Thus the gene concept was maintained largely independently of deeper representational accounts beyond its part in chromosomes, whether these accounts were available or not, throughout this period. Others have characterized this history not in terms of open-endedness but as the development of new concepts and conceptual change. Falk (1986, 2000) and Gayon (2000), for instance, consider that there was a significant redevelopment of the gene concept when Muller ascribed a material basis to the gene. That is, Muller decided in 1922 to investigate, through the technique of mutation, the physical and chemical properties of the gene, based on the plausible assumption that the unit of chemical structure must be self-generating. Falk asserts that at this point the phenotype stopped being the dominant basis for extracting information about genes. Muller attacked the genes themselves.
He was thus interested in treating these as units with real measurable size and location, imputing to them certain distinct chemical properties, such as auto-catalysis. Before this, genetics had developed at the hands of the Drosophila school and others around an abstract concept, locatable perhaps in the chromosomes, with properties deriving from Mendelism, such as being subject to segregation. Falk and Gayon thus identify a conceptual change, reflected in the meaning of the concept, in the shift from what they see as instrumentalism to the materialism that characterized the work of Muller and then the further ‘chemical attack on the gene’ begun by Beadle, Tatum and Ephrussi in the 1950s and onwards. Morgan had simply manipulated the gene ‘as if’ it were a material unit. With Muller, suddenly, the concept ‘gene’ had reference, and was no longer simply an intervening variable.6 I would argue, however, that the shift undertaken by Muller signifies no conceptual discontinuity, but simply investigative pursuit with an open-ended concept. There is no real evidence that earlier geneticists were in fact instrumentalists about the gene. Rather they preferred, given their lack of knowledge, to accept it independently of structural representations of it or its causal powers. Johannsen argued that “As to the nature of the genes it is as yet of no value to propose any hypothesis; but that the notion of ‘gene’ covers a reality is evident from Mendelism” (Johannsen 1911, 133). Morgan wrote in 1926 that “[i]n the same sense in which the chemist postulates invisible atoms and the physicist electrons, the student of heredity appeals to invisible elements called genes…” (Morgan 1926, 1), by which he was referring to “independent units of the germinal material.” Falk relies on a quote from Morgan’s 1933 Nobel Prize Lecture on ‘What are genes?’ which can be construed instrumentally: “it does not make the slightest difference whether the gene is a hypothetical unit, or whether the gene is a material particle” (Falk 1986, 148). But I would contend he was simply raising the possibility of an instrumentalist statement, wanting to emphasize the distinction between the properties that made the gene fundamental to investigation and representational claims about it. He was otherwise fairly consistent in attributing genes a reality. Even East, whose paper of 1912 is often cited as the most instrumentalist defense of genetics, doesn’t step away from saying “it must have a basis in reality if it is to describe a series of genetic facts” (East 1912, 632). More quotes from Bateson, Morgan and other major figures in genetics can be given in this regard.

5 The Drosophila school was most responsible for embedding this relation.
Using the successful ways in which Mendelism was able to account for hereditary phenomena, geneticists abductively inferred that the gene concept did represent something, some kind of causal, invariable unit, in gametic material.7 The treatment of it was otherwise open-ended, independent of the availability or otherwise of deeper representational accounts. Hence Muller himself saw no

6 Using MacCorquodale and Meehl’s (1948) sense of intervening, as instrumental and temporary.
7 What this might be was itself interpreted openly. Some had definite ideas of particular discrete separable parts of the germinal material; others entertained the notion that it might be a family of complex processes not so isolatable.


discontinuity. He took Mendelian mechanics and the chromosomal theory as a starting point for his biochemical investigation, which was completely consistent with these abductive beliefs, in order to obtain greater information about the gene’s structure and, more broadly, the chemical processes at work in heredity and the production of traits. Of course, more structural theories of what the gene was came and went as the concept became applied in molecular contexts. Yet it’s interesting to note that an explicitly open-ended characterization underlay the Drosophila school’s and Muller’s reliance on identification criteria for their image of the gene. This took it primarily to be identifiable by virtue of the cis-trans test, developed in the 1910s, which maps areas of chromosomes to specific functions. The cis-trans test didn’t require any commitment to the nature or structure of genes beyond their composition out of chromosomal substances and their location within this material.8 These views overlapped with L. J. Stadler’s operational gene concept. Stadler posed this in 1954, after over 50 years of genetic research, in order to distil the notion functionally useful to geneticists’ investigations, without weighing it down with further assumptions that prejudged their outcome. He appreciated that the gene concept was a research tool for engaging experimentally with complex phenomena and should foremost be described as such. His concept treated the gene as the smallest segment of a gene string that could be consistently associated with the occurrence of a specific genetic effect, which he took as sufficient and most well-established for these purposes. Otherwise he left it explicitly open-ended to further investigation what its material basis might consist of.9 Philosophers have noted to some degree the independence of the gene concept historically from deeper representational claims. Burian (2005) constructs the idea that what stabilized discourse was a schematic gene concept, which was reverted to when specific instantiations failed. The biologist Portin (2002), having himself discussed the history of the gene, characterizes it as an ‘open concept’ even up until the modern

8 The central importance of this test right through this period fits with Arabatzis’ own observations, in the history of particle concepts in physics, of the independence of fundamental experimental setups and systems from deeper theoretical perspectives.
9 Stadler in fact gives a very good illustration of how scientists themselves are often explicit in their thinking about and treatment of concepts as tools, a subject explored by many in this volume (see Steinle, Bloch and Boon). His thinking is a good illustration of how the important aspects of a concept in this regard are those that facilitate research and investigation.

day, choosing to define the gene by virtue of complementation or the cis-trans test. Falk (2000) argues that, in the way the gene concept has developed, we have somewhat travelled in a circle and have arrived back in modern practice at something similar to Johannsen’s vague 1909 notion. To me, however, what has happened, at least until the 1960s, is best characterized not as conceptual change (through reversion), which invokes a discontinuous picture, but as the consistent and continuous methodological treatment of the gene as open-ended by the genetics and biochemical research communities. The relation between Johannsen’s concept and that of the Drosophila school differs only in terms of the preciseness of the gene’s locality within gametic material. This open-endedness was reflected in the statements of Muller, Stadler, and earlier geneticists on how, for the purposes of research, we should think of the gene primarily in terms of its cis-trans or functional identity. Likewise the defeasibility of claims in the course of research, and the recognition of the incompleteness of knowledge in the face of increasingly complex phenomena, encourage us to see its treatment as largely independent of structural and material representations, which were thus open to variation.
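The functional logic of the cis-trans (complementation) test can be made concrete in a small sketch. This illustration is my own, not anything drawn from the period sources: the function name and the ‘unit’ labels are invented, and real tests of course operate on organisms, not on strings.

```python
# Hypothetical sketch of the reasoning behind the cis-trans
# (complementation) test: genes are identified purely functionally,
# with no commitment whatsoever to their material structure.

def complementation_test(functional_unit_a, functional_unit_b):
    """Cross two recessive mutants. In the trans heterozygote, each
    chromosome supplies a working copy of the unit disrupted on the
    other, unless both mutations fall in the same functional unit."""
    if functional_unit_a == functional_unit_b:
        return "mutant"      # no complementation: one gene
    return "wild type"       # complementation: two distinct genes

# Mutations in different functional units complement each other:
print(complementation_test("unit_1", "unit_2"))  # wild type
# Mutations in the same functional unit do not:
print(complementation_test("unit_1", "unit_1"))  # mutant
```

The point the sketch encodes is the one stressed in the text: nothing in the test’s logic depends on what the units are made of.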

2.2 Central Epistemic Attributes of the Classical Gene

Open-endedness can only be one side of such a story, however. If the gene concept was never connected with any particular developed representation specifying its structure and causal nature, this raises the question of which aspects of it did underwrite its stability for its practitioners. As already intimated, I suggest it was the central epistemic attributes of the gene that enabled this stability. These properties and relations are identifiable as those ascribed to the gene that were singularly productive in the construction of experimental methods applying and identifying the gene, while providing conceptual resources or ‘ways of thinking’ about the phenomena that governed theory- and model-building with the gene, and in turn the construction of explanations for the particular problems and questions that arose in the study of heredity and development. These epistemic attributes provided adaptable tools and a framework for building alternative representations. They emerge when we look at how researchers perceived and commented upon the value that particular ideas about genes provided to the advancement and propagation of their research.


To understand this it is important to note that researchers of the time shared a belief that the gene concept, and the family of Mendelian unit concepts (unit-factors, unit-characters) preceding it, would be of central importance to fathoming heredity and development.10 Bateson in 1907 wrote of the unit-factor that “it is becoming more probable that if more knowledge is to be attained the clue will be found through genetics … The recognition of the unit factor may lead—indeed must lead—to great advances in chemical physiology which without that clue would have been impossible.” Researchers understood at the outset that the gene, if it existed, would be embedded in all kinds of questions about heredity, development and evolution, and the general understanding of organismal organization. Most particularly they anticipated that the gene concept would guide them in their understanding of the chemical bases of these, once the resources existed to pursue it. In this way genetics reached across numerous previously disconnected fields of biology, like heredity, cytology and cell physiology, and biochemistry. As Wright put it, “The difficulty in the study of heredity is that characters of the germ cell must be deduced from a study of variation at the other end of developmental history,” yet, “it remains for genetics to assist embryology and biochemistry in filling the links in the chain between the germ cell and adult in specific cases” (Wright 1917, 224). Thirty years later Muller similarly described “the future task of biochemistry, combined with genetics, to unravel the whole complicated web of protoplasmic and bodily interactions, from the primary gene products to the last phenotypic effects” (Muller 1947, 26). It was an ongoing program of research, which led researchers armed with the gene concept across disciplinary boundaries into the realms of biochemistry.
The fact that the gene concept had such value to this program of research was expressed by researchers in particular properties and relations that prescribed its explanatory, conceptual and experimental value to this research. In the 1950s, looking back on the history of genetics, Goldschmidt characterized the gene as something of a ‘mind set’ of units contributing to a Weltanschauung. “Saturated as we are with the

10 The term gene, or gen, was coined by Johannsen (1909) while explicitly distinguishing genotype from phenotype. Preceding concepts like the unit-factor of Bateson, the Mendelian ‘factor’ and some notions of ‘character’, as I’ll argue in further sections, can be considered continuous with it in the respect that they shared epistemic attributes, even though their proponents disagreed over genetic structure. They certainly shared perspectives on their centrality to research through the agency of Mendelian principles.


clear, unassailable facts of Mendelian inheritance, we are conditioned to thinking in terms of discrete units which can be shifted and combined like dice in a throw, never losing their identity in a kind of splendid isolation.” (Goldschmidt 1958, 95) Of course, what Goldschmidt called conditioning, others took to be investigative insight. Bateson (1913) had early perceived that the power of his unit-character lay in its ability to frame research, and visual language was to the fore in expressing this: “With the recognition of unit-characters our general conceptions of the structure and properties of living things inevitably undergoes a change. We begin to perceive outlines where previously all was vague, nor can we doubt those outlines will very soon become clearer.” From expressions such as these we can single out three features that had distinct importance as central epistemic attributes.

2.2.1 A Unit of Heredity

One has already come up in Goldschmidt’s description. The gene concept was important conceptually for thinking and theorizing about heredity as the result of something identifiable and bounded or discrete in the germinal material, relying on at least the Mendelian principle of segregation. This unit structure or unit assumption particularly informed the conceptual calculus of the gene. It played a significant part in the way heredity was approached as a study of unit elements that could be identified functionally and then studied for their particular relations. From it formal symbolisms could be built, with which hereditary phenomena could be decomposed, studied experimentally, and then explained. Classical Mendelian research worked by starting with a set of trait phenomena and trying to conceptualize them in terms of a finite set of factors.11 The quantitative nature of this system impressed many geneticists, and was used to advertise its advantages by, amongst others, Bateson, East and Morgan.
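To give a flavor of the kind of quantitative calculus at issue, here is a minimal sketch of my own (the allele symbols and function name are invented; nothing in it is drawn from the period texts): given symbolic factors and the law of segregation, phenotype ratios follow numerically.

```python
# Illustrative sketch of the Mendelian 'conceptual calculus': symbolic
# factors plus the law of segregation yield numerical predictions.
from collections import Counter
from itertools import product

def offspring_phenotypes(parent1, parent2):
    """Each parent is a genotype such as 'Aa'; an uppercase allele is
    dominant. Segregation: each gamete carries one allele of the pair,
    and gametes combine at random."""
    counts = Counter()
    for allele1, allele2 in product(parent1, parent2):
        dominant = allele1.isupper() or allele2.isupper()
        counts["dominant" if dominant else "recessive"] += 1
    return counts

# The monohybrid cross Aa x Aa predicts the classic 3:1 ratio.
print(offspring_phenotypes("Aa", "Aa"))  # Counter({'dominant': 3, 'recessive': 1})
```

Note that the calculation nowhere depends on what the factors ‘A’ and ‘a’ materially are, which is precisely the feature of the symbolism the text goes on to describe.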
East especially, with his wish to distance the formula from its potential correspondence in reality, presents the system as a “conceptual notation as is used in algebra or in chemistry” (East 1912, 634). Morgan constructed the theory of the gene axiomatically to establish a formal system of relations between genes, such that those principles would “enable us to handle problems of genetics on a strictly

11 See R. A. Emerson (1913) for a classic case of this kind of conceptualization.

60

Miles MacLeod

numerical basis, and allow us to predict, with a great deal of precision, what will occur in any given situation” (Morgan 1926, 25). Geneticists worked at constructing schemes of factors often quite complex that could reproduce a particular trait phenomenon across generations, with systems of dominant, recessive and mutant genes, and modifying factors, represented by symbols and organized by the law of segregation. “In order to study the relation of these characters to each it has become necessary to combine many of them and in order to represent the results some system of symbols must be adopted.” (ibid., 25) With such a scheme they could claim an understanding of the phenomena, and a reduction of it, with which came the possibility of predictions but also the further analysis of the relations between these factors. Up until at least the 60s then, research operated within the framework of genetic units. This meant that this unit nature informed the methodology of biologists, seeking to frame explanation in terms of these functionally identifiable units, but also drove and focused their research towards a search for how this unit nature manifested itself in gametic material and in the processes of heredity and development. Researchers discovered certain patterns in their transmission, which were termed linkage relations (Sutton 1902, 1903; Boveri 1904). These informed and justified the chromosome theory of heredity, which in turn pursued the strategy of decomposing the chromosome into units. Once postulated as units of chromosomes their unit nature suggested properties like measurable size and precise position (over just relative position through plotting linkage relations), which became a central investigative avenue. The complexity discovered in these relations with the advent of mutations was to drive genetic theory. 
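The “strictly numerical basis” Morgan describes can be made concrete with a small illustrative sketch (mine, not the chapter’s): computing the genotype and phenotype ratios of a Mendelian monohybrid cross directly from the law of segregation. The function names and the Python rendering are my own assumptions for illustration.

```python
from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """Mendelian monohybrid cross: by the law of segregation each parent
    contributes one allele per gamete, and offspring genotypes are the
    equally likely combinations of one gamete from each parent."""
    offspring = Counter()
    for g1, g2 in product(parent1, parent2):
        # sort so that 'Aa' and 'aA' count as the same genotype
        offspring["".join(sorted(g1 + g2))] += 1
    return offspring

def phenotype_ratio(offspring, dominant="A"):
    """A genotype shows the dominant trait if it carries at least one
    dominant allele; otherwise the recessive trait appears."""
    pheno = Counter()
    for genotype, n in offspring.items():
        pheno["dominant" if dominant in genotype else "recessive"] += n
    return pheno

# Aa x Aa: genotypes AA : Aa : aa in ratio 1 : 2 : 1,
# phenotypes dominant : recessive in ratio 3 : 1
f2 = cross("Aa", "Aa")
print(f2)                    # Counter({'Aa': 2, 'AA': 1, 'aa': 1})
print(phenotype_ratio(f2))   # Counter({'dominant': 3, 'recessive': 1})
```

This is exactly the kind of prediction whose precision geneticists such as East and Morgan advertised: given a hypothesized scheme of factors, the expected ratios across generations follow by calculation.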
Postulations about how crossing over worked, for instance, or the discovery of relations between genes that did not fit these patterns universally, provoked further experimental investigation and a more sophisticated understanding of gene operation. In this context the unit nature of the gene was more broadly articulated and expressed as a unit of structure in the gametic material (chromosomes), a unit of mutation, a unit of recombination and a unit of replication, as well as a unit of function. Much of the investigation that formed the “chemical attack on the nature of the gene” (Muller 1946) was driven by finding out just what this unit nature consisted of (on the presumption that there was one); genetics was immensely complicated by Benzer’s (1961) discovery that the functional notion of a gene, at least in bacteria, did not correspond precisely to the units of recombination or units of mutation in chromosomes. For Goldschmidt the commitment to this unit structure seemed at times arbitrary, but he knew it was fundamental to geneticists’ way of thinking, and his principal break with this methodology was to deny any Mendelian unit of heredity corresponding to units of gametic material, preferring to treat the chromosome as the principal unit of heredity, composed of an ontology of positions and the interrelations between them.

2.2.2 A Causal Agent of Traits

Unit identity was intertwined, however, with the most important epistemic feature of the gene concept, which was to represent the gene as a causal agent in the production of traits, and thereby principally a unit of function. This property was central to the application of the gene concept in explanation. Moreover, it was thought to enable scientists to identify and further investigate the gene. Genes were posited to explain patterns in traits, and they were thought to do so by virtue of their unit identity and invariance. This presumed relation between genes and traits was central to the identification of genes, and also to the examination of heredity phenomena and the biochemical production of traits through their experimental manipulation. Even those who later sought to define genes chemically (as structures of DNA, for instance) still had to rely on these functional relations to sort out what the gene’s boundaries could be. Research sought in the process to find more precise elements of the phenotype, closer to the operation of genes, from which to extract information on how genes operated. In this vein, for instance, Beadle and Tatum (1941) looked directly at enzymatic products, producing the one-gene-one-enzyme hypothesis. They explicitly relied on the assumption that genes had causal effects that could be tested and filtered in or out by changing the environmental background.
Avery, MacLeod and McCarty (1944) similarly fixed the role of DNA by reasoning that either it or the protein complement of a cell could be responsible for phenotypic development, and then experimentally controlling for each. This presumed causal relation thus guided genetic research and also the identification of the gene. As illustration one can cite the complementation or cis-trans test, mentioned above, which identified, and is still used to identify, genes by their causal operation or function in generating phenotypes. It resulted from a shift in identification criteria from Bateson’s one-to-one trait correspondence to the idea of genes as difference-makers located on chromosomes, but remained nonetheless built on the assumption of a causal relation between genes and traits,
as well as its unity and invariance. Complementation identifies alleles by mutations that affect the same phenotype, and can be used to locate them at genetic loci where recombination tests lack resolution. Consider two recessive mutations a and b, where aa/bb produces no wild-type phenotype and a and b are mapped to the same general locus. The question is whether a and b affect different genes or the same gene; answering it builds up a picture of the functional units involved in the production of the phenotype. During the complementation test, if no wild-type phenotype is observed when the mutations are combined in trans (a+/+b, where ‘+’ represents a wild-type chromosome), it is concluded that these mutations are alleles of the same gene; neither allele produces a product that can restore wild-type function. However, if the double heterozygote exhibits a wild-type phenotype, it is concluded that the two mutations are alleles of different genes; each mutant has a functional version of the other gene. In general, mutation experiments were useful precisely because they interfered with the causal effect of these units and thus could be used to make finer hypotheses and inferences about how they operated.

2.2.3 A Hereditary Invariant

Like the gene’s unit nature, another property that proved central to the use of the gene in theorizing, and particularly in experimentation, was the presumption of the stability or invariance of genes and their independence of the phenotype produced. This was postulated as central early on by Johannsen (1909, 1911), in what he labeled the genotype conception of heredity, backed up by his pure-line experiments. It was itself a necessary presupposition for identifying and mapping genes through hybridization and crossing over. It was the basis on which mutations were assumed to operate as permanent transmissible marks. Again the continuity here is historically evident.
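The dependence of mapping on invariance can be made concrete with another small illustrative sketch (again mine, not the chapter’s): in a two-point test cross, reading the frequency of recombinant offspring as a map distance only makes sense if the factors being counted persist unchanged across generations.

```python
def map_distance(parental, recombinant):
    """Estimate the genetic map distance (in centimorgans) between two
    loci from a two-point test cross: distance = percentage of
    recombinant offspring. The inference presupposes that the factors
    themselves are invariant across generations -- the presupposition
    discussed above; if genes could change, recombinant counts would
    not track position."""
    total = parental + recombinant
    return 100.0 * recombinant / total

# e.g. 830 parental and 170 recombinant offspring -> 17.0 cM
print(map_distance(830, 170))  # 17.0
```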
Muller re-emphasized the importance of this invariance: [T]he finding of the starting-point in a complex course, were it observed in any other field, would be taken to imply the existence of some guide or guides, some elements that are relatively invariable and that serve as a frame of reference in relation to which the passing phases of other features are adjusted … So, too, in the organism it would be inferred that there exists a relatively stable controlling structure, to which the rest is attached and about which it refers. (Muller 1946, 1)

This captured nicely just how genetics had operated over the previous 35 years: assuming that the explanation of any complex phenotypic phenomenon lay not in the effects of phenotypes on genetic material, but in the operations and interactions of the stable genes themselves. Muller took this to be indicated by the self-reproduction of the gene, which he considered the central characterizing feature of whatever chemical or chemicals composed it. Invariance played a fundamental role in his own research and thinking about genes.

2.3 Central Epistemic Attributes, Open-Endedness and Representations of the Gene

These features provided a stable and common understanding of the gene. In many other representational aspects the concept was treated open-endedly. This becomes clearer when we examine the conflicts and continuities of genetic theory. Indeed, if we identify the concept with these attributes, which provided essential means through which the concept propagated research, we have, I believe, a better historical understanding of its persistence and development than we get when treating the gene concept as just a vessel of successive representations. It was in the first place an epistemic resource for these researchers, valued as such. The concept did represent the gene for them, but did so open-endedly, and new representations of the gene were in fact part of the productive applications of these attributes towards building up theories and models, explanations and experimental systems. As such, the assumption that multiple alleles could be responsible for a character, that any allele could come in different versions (not just a recessive/dominant pair) (Cuénot 1904; Morgan 1911), that there could be modifying genes (East 1912), that genes could interact along the chromosome (position effects) (Sturtevant 1925), and that genes had their own inner complexity (as in the genomere hypothesis (Eyster 1924) and later fine-structure theories) all presumed, relied upon, and were constrained by the gene’s unit nature, its functional relationship to the phenotype, and its invariance. These were treated as elaborations of representations of the gene for solving particular problems, not as redeterminations of how the gene informed and structured investigation. When these features seemed under challenge, researchers preferred to reconsider their understanding of ‘gene’ rather than revoke the features, thus preserving the concept’s explanatory, experimental, and theoretical or conceptual utility.
In the case of the multiple factor hypothesis of East (1910) and Nilsson-Ehle (1912), a specific explanation was sought within the Mendelian system
for the inheritance of quantitative traits, which came in grades. This was in line with the roles of these features and reinforced their epistemic function. Changes that threatened these features, however, were treated differently. Consider the dispute between Castle (1912, 1915) and geneticists over the hereditary invariance of the gene. Part of East’s (1912) and Muller’s (1914) response to Castle was to assert that allowing modified genes simply undermined the whole epistemic value and epistemic program that genetics promised. It robbed genetics of its particular epistemic advantages: “[O]ne cannot measure or describe by changing standards.” (East 1912, 646) Geneticists at the time took this debate to be partially a result of confused terminology. They remarked that the unit-character term gave the impression that characters themselves were the functional unit, and thus they argued that Castle’s work relied on a different concept, distinct from theirs, which needed to be distinguished precisely by the property of invariance.12 Castle, in turn, suspected they were simply avoiding the substance of the debate by shifting the conceptual basis of the discussion in this way. As he put it at the time, “[t]his view has been repeatedly challenged, either by those who questioned the evidence cited in support, or by those who substituted a different concept, ‘gene’, for that of ‘unit-character’, and then denied that a ‘gene’ can vary” (Castle 1919, 126). Researchers were identifying the concept of gene with invariance. As constitutive of their research program and its operation, they were reluctant to dissociate their notion from these attributes, even though the alternative interpretations could not be conclusively dismissed by the available evidence. While these revisions all threatened the central epistemic attributes, and thus the functionality, of the concept for their research, there were other changes that did not challenge but in fact relied upon them.
Those changes were treated as elaborations of the concept of the gene. For instance, in respect of Bateson’s presence-absence hypothesis and his objection to the chromosome theory, Bateson himself understood the dispute, in a review of Morgan’s central textbook (Morgan et al. 1915), as a dispute over what the gene concept represented in germinal material. Bateson’s unit-factor concept shared these epistemic features, central as they were for him, with the ways genes were to be identified and individuated through recombination. In Frost’s (1917) account of the presence/absence approach, again it is clear that what is at issue is how deeply the gene or factor concept is to be applied to germinal material. For him the presence/absence approach was a way of expressing open-endedness about the particular material assumptions that might underlie these factors, by treating them rather as properties or characters of germinal material arising out of potentially complex, not so easily separable, processes. Burian (1995) notes that by the 1910s features like invariance and unity were community properties of the concept and thus had entrenched themselves as means of evaluation. Hence Bateson’s reduplication hypothesis, which attempted to explain linkage relations through the duplication of factors in germ cells, relied on the essential methodology of Mendelism. It gave, however, poor predictions of factor distributions and was ultimately rejected. Similar things can be said about the various models of genetic structure that emerged and disappeared, like the genomere hypothesis. As already intimated, the fine-structure analysis of Benzer was interpreted by biologists like Carlson (1966, 207) as an exploration of gene structure rather than a dissolution of the concept. It preserved the functional causal relation between genes and traits with the cistron (which carried on as the ‘gene’), situating mutations and recombinations within it, now as properties of genetic structure rather than as characteristics defining it.

12 As Sturtevant (1919) expressed it, “[t]his term ‘unit character’ is an unfortunate one, as it implies the conception that characters are in some way caused by single genes—a view which is not tenable in light of our modern knowledge … It focus[es] attention upon the somatic characters of organisms rather than upon their germinal constitution; whereas the present study of genetics is tending in exactly the opposite direction.”
Not long after, Jacob and Monod (1961) demarcated two sets of genes: “one of these (the structural gene) responsible for the structure of the molecule, the other (the regulatory gene) governing the expression of the first through the participation of a repressor.” Both sets were identifiable by cis-trans tests of their function and by targeted mutations. The discovery was consistent with, and relied upon, these properties, but it precipitated a much more complex picture, in the face of which the master-molecule representation of the gene had to give way. These developments, however, were not interpreted as challenges to the concept per se in the way Castle’s had been. Rather, they were dealt with by participants as part of the ordinary course of investigation of genes, for which the gene concept, understood through these epistemic attributes, provided the resources. They are thus fashioned by Carlson (a participant at the time) in 1966 into a narrative of investigation and discovery, rather than a history
of discontinuous conceptual reformations. The role of these attributes in providing the resources and constraints for these common investigations makes them central, then, to what we attribute as the ‘concept’ of the gene that these actors shared. That continuity and representational change were organized around these features, because of their epistemic roles, has been evident to some philosophers of biology. Griesemer, with his own emphasis on process in historical accounts, invokes generational invariance or stability (independence of phenotype and environment), for instance, as the central methodological constraint: “A process historiography of the gene concept would show that challenges to the invariance or stability of the gene cut deeply because they challenged not only concepts, but the epistemological basis of experimental utility of the gene concept.” Likewise on unity: “[i]f pseudoallelism or fine structure mapping reveals that genes can no longer be thought of as units of structure, function, mutation and recombination, then perhaps genes can be interpreted more narrowly as units of structure that must team up to constitute units of function. If operon theory forces a divorce between units of structure and function, then perhaps genes can be kept as units of structure whose functions are explained in terms of cell-level regulation” (Griesemer 2000, 269 f).

3. A Concept of Concept for Research Contexts

If we strictly apply the vessel notion of concept to the gene concept, viewing the concept as reconstructed at particular points in time by the assertions of particular groups of researchers, then we get something that does in fact look discontinuous. Bateson’s gene concept looks quite different from Morgan’s. Muller’s gene concept looks different from Morgan’s through its association with chemical information. Benzer’s is distinct from Muller’s, splitting functional and structural from mutational and recombinational units. Philosophers have treated this as evidence of conceptual and referential switches, building tables to indicate the shift in property attributions and listing at each stage what they take to be the central properties of the concept (for instance, see Weber 2005). Hence when Muller defined the gene in chemical terms, this indicated a descriptive switch from a gene defined in terms of Mendelian laws and chromosomal elements, suggestive of referential and conceptual discontinuity.


As noted, these kinds of discontinuity were not necessarily apparent to the practitioners themselves as significant conceptual changes. There are two ways to reconcile this: first, by generating, in a potentially ad hoc way, a theory of reference that recovers continuity; or second, by rethinking the notion of concept we attribute in the first place. I would suggest the reasons for this continuity can be located in researchers’ joint participation in a process of research and investigation with shared goals, which determined the way they treated the gene concept. My analysis suggests that treating this research process as a sequence of representational events gives us a poor understanding of the practices in question. Research is not an event; it is an extended and ongoing process. At the frontiers of research, where concepts are shaped, formed, and relied upon, knowledge is limited and uncertain, or at least accumulating, and any beliefs about concepts are generally understood to have been evaluated with respect to that limited evidence and may well need to change in the light of more. They are defeasible. Instead we need to construct our understanding of concepts around those essential parts that facilitate ongoing research and investigation, an understanding which apprehends representational change as part of the process. This is a better place to look for a stable underlying concept suitable for research contexts. In the case of the gene concept I postulated that such an understanding should be constructed around its open-endedness and its various central epistemic attributes. What I want to do here, then, is briefly to build on the gene case by providing a more general and precise characterization of these two aspects, which structure the alternative notion of concept I am arguing for. With this alternative notion we have a tool for accounting for conceptual continuity in other cases.

3.1 Characteristics of Open-Endedness

The idea that concepts are to some extent treated open-endedly by researchers is certainly not novel. In the case of the gene concept, for instance, many philosophers have chosen to understand its development and continuity by modeling the methodology governing it this way, as we have seen. What I am referring to operates specifically on the methodological level. It involves at some level decisions by researchers about how to treat the representational aspects of their concepts in complex
research contexts. Here are the important aspects, incorporated in my gene account, of a concept being treated open-endedly:

1. The concept is considered to refer to an aspect of reality, although what this is or even what type of thing it is (property, kind, function etc.) may well be unknown.
2. Concepts display an independence of representational accounts (or the lack of them) of their underlying referents, to the extent that even deep descriptive change or disagreement across research groups occurs without perceptions of conceptual discontinuity or conceptual change.

3.2 Characteristics of Central Epistemic Attributes

As I suggested earlier, open-endedness leaves open what it is that gives a concept stability and identity. This is where central epistemic attributes can play an important role. Earlier I described central epistemic attributes as particular parcels of information, attributed by a concept to a referent, that are central to the identification of the referent and experimentation with it, as well as to organizing thinking about the phenomena for theory and model-building, and ultimately for explanations of the phenomena. They are thus treated by researchers as basic to their program of investigation and understanding. The classical gene case then suggests the following more particular characteristics of such attributes that make them ‘central epistemic’ attributes. Such attributes are:

1. goal relevant: By this I mean that they are features of concepts that researchers treat as relevant or essential to the generation of theories and the design of experiments that progress research towards the epistemic goals pursued by the researchers;
2. constitutive of research methods: By this I mean that they are elements of the concept that researchers use to build methodologies designed to obtain information relevant to these goals and to guide its interpretation;
3. representational: By this I mean that they make certain minimal claims about what the concept represents, but are also treated as a framework and starting point for building more detailed descriptions of the referent, often through their own elaboration;
4. interpretable: By this I mean that they do not prescribe the form that explanations, theories or models applying them can take. Part of their
function as ‘central epistemic attributes’ is that they can structure and facilitate alternative representations of what the concept represents.13 The last two express the link between such attributes and open-endedness, which helps give epistemic attributes independence of particular representations. As we saw, the features bearing these characteristics (the causal link to traits, unity, and invariance) prescribed persistent and deep aspects of the classical gene concept’s functionality for ongoing research, embedded as they were in the explanatory, conceptual and experimental techniques and approaches by which genetic research operated. Together they formed the ‘mind-set’ whose modus operandi of investigation Goldschmidt pictured as dice in a throw. All of them were open to interpretation, allowing the image of the gene to shift in the course of investigation.

3.3 A Trans-Historical Concept of Concept?

The part these aspects played in the very structure of genetic research up until the 1960s points to the possibility of a trans-historical representation of concepts. Such a representation constructs concepts like the gene concept in the first place as epistemic tools, structured around such attributes, from which applications are developed and with respect to which representations take shape. Because such properties and relations go through different interpretations and descriptions,13 and are used to create diverse methodologies, their operation is best visible over extended periods of research, in which one can filter out which aspects of the concept researchers were relying on and which underwrote descriptive change. These can be matched to what researchers have said when reflecting on what they themselves, at a point in time, deemed necessary to investigations and research using the concept, as in the case of the more operational definitions that emerged from Morgan, Muller and Stadler at various times. As I have also argued, epistemic features of the gene concept come to the fore during debates over whether a particular description was an elaboration within the context of these roles, to be tested empirically, or was in fact treated as a conceptual change, shifting the basis on which the concept was considered to operate in investigation.

13 In this respect my characterization of central epistemic attributes overlaps with Feest and Bloch’s exploration of operational definitions of concepts in this volume. Operational definitions of the gene, when they have come up (such as East’s, Stadler’s and Muller’s), do rely on these epistemic features as central to the formulation of investigative methods, including conditions for the identification of genes, without prejudicing them with deeper presumptions. What this approach emphasizes is that the use of these features was not free of interpretation, and took shape as new information and knowledge was acquired. Hence they cannot be identified with their part in any particular such operational definition. The complementation test, for instance, accompanied a reformulation of the causal role governing the relation between genes and traits, from a one-to-one to a many-to-many relation, while postulating that genes were discrete units of chromosomal structure. Later the identifying traits shifted inwards to enzymes and biochemical processes, and at least for a time the many-to-many relation dropped into the background, as genes were associated with, and identified as, particular structural units of DNA through the one-gene-one-enzyme principle.

4. Conclusion

I do not want to overstate my case, having relied on one example that of course suits this theory. If this restructuring of the notion of concept can be extended beyond this case, however, we have a standpoint from which to take account of the continuity of concepts through their interconnection with ongoing processes of research. In this respect the research process, characterized by uncertainty and lack of information, is the phenomenon through which we interpret, understand and define our notion of concept, by tracing the ways in which the concept helped constitute that research through open-endedness and central epistemic attributes. With this we can ascribe a concept to the actors which potentially better reflects its usage and methodological treatment, including its persistence in the face of often deep descriptive change and competition. It also reflects the preference of researchers to talk in terms of different articulations of one concept rather than a plethora of different concepts, and the importance to research of the functional elements of a concept and of the concept’s association with them. On this view, the development of better representations can be understood as one function of concept use among others. Treating concepts as simply vessels of representations hides what is really important about them, namely the confidence researchers have in their ability to provide resources that sustain open investigative activity towards epistemic goals, an activity in which variable descriptions and representations are an inevitable part of the process.


Acknowledgments

This research was funded by the Initiativkolleg (Naturwissenschaften im historischen Kontext), University of Vienna, and the Konrad Lorenz Institute for Evolution and Cognition Research, Altenberg, Austria. I would particularly like to acknowledge here the dedicated and thorough help of the editors in improving this paper.

Reference List

Arabatzis, T. (2006), Representing Electrons: A Biographical Approach to Theoretical Entities, Chicago: The University of Chicago Press.
Avery, O. / MacLeod, C. / McCarty, M. (1944), “Studies on the Chemical Nature of the Substance inducing Transformation of Pneumococcal Types.” In: The Journal of Experimental Medicine 83, 89 – 96.
Bateson, W. (1901), “Experiments in Plant Hybridization.” In: Journal of the Royal Horticultural Society 26, 1 – 32.
Bateson, W. (1902), Mendel’s Principles of Heredity, London: Cambridge University Press.
Bateson, W. (1907), “Facts Limiting the Theory of Heredity.” In: Science 26, 649 – 660.
Bateson, W. / Punnett, R. (1911), “On the Inter-Relations of Genetic Factors.” In: Proceedings of the Royal Society of London: Series B, Containing Papers of a Biological Character 84 (568), 3 – 8.
Bateson, W. (1913), Mendel’s Principles of Heredity, London: Cambridge University Press.
Bateson, W. (1914), Problems of Genetics, London: Cambridge University Press.
Beadle, G. / Tatum, E. (1941), “Genetic Control of Biochemical Reactions in Neurospora.” In: Proceedings of the National Academy of Sciences of the United States of America 27 (11), 499 – 506.
Benzer, S. (1961), “On the Topography of the Genetic Fine Structure.” In: Proceedings of the National Academy of Sciences 47 (3), 403 – 415.
Burian, R. (1985), “On Conceptual Change in Biology: The Case of the Gene.” In: Depew, D. / Weber, B. (eds.), Evolution at a Crossroad: The New Biology and the New Philosophy of Science, Cambridge, MA: MIT Press, 21 – 42.
Burian, R. (2005), “Too Many Kinds of Genes? Some Problems Posed by Discontinuities in Gene Concepts and the Continuity of the Genetic Material.” In: Burian, R., The Epistemology of Development, Evolution, and Genetics, Cambridge: Cambridge University Press, 166 – 178.
Carlson, E. (1966), The Gene: A Critical History, Philadelphia / London: W. B. Saunders.
Carnap, R. (1956), “The Methodological Character of Theoretical Concepts.” In: Feigl, H. / Scriven, M. (eds.), Minnesota Studies in the Philosophy of Science 1, Minneapolis: University of Minnesota Press, 38 – 76.
Castle, W. E. (1912), “The Inconstancy of Unit-Characters.” In: The American Naturalist 46, 352 – 362.
Castle, W. E. (1915), “Mr Muller on The Constancy of Mendelian Factors.” In: The American Naturalist 49, 37 – 42.
Castle, W. E. (1919), “Piebald Rats and The Theory of Genes.” In: Proceedings of the National Academy of Science 5, 126 – 130.
Cuénot, L. (1904), “L’hérédité de la pigmentation chez les souris.” In: Archive de Zoologie Experimentale et Generale 4 (2), xlv – lvi.
East, E. (1910), “A Mendelian Interpretation of Variation that is Apparently Continuous.” In: American Naturalist 44, 65 – 82.
East, E. (1912), “The Mendelian Notation as a Description of Physiological Facts.” In: The American Naturalist 46 (551), 633 – 655.
Emerson, R. (1913), “The Inheritance of a Recurring Somatic Variation in Variegated Ears of Maize.” In: The American Naturalist 48, 87 – 115.
Eyster, W. (1924), “A Genetic Analysis of Variegation.” In: Genetics 9, 372 – 404.
Falk, R. (1986), “What is a Gene?” In: Studies in the History and Philosophy of Science 17 (2), 133 – 173.
Falk, R. (2000), “The Gene—A Concept in Tension.” In: Beurton, P. / Falk, R. / Rheinberger, H.-J. (eds.), The Concept of the Gene in Development and Evolution, Cambridge: Cambridge University Press, 317 – 348.
Frost, H. (1917), “The Different Meanings of the Term ‘Factor’ as affecting Clearness in Genetic Discussions.” In: The American Naturalist 51, 244 – 250.
Gayon, J. (2000), “From Measurement to Organisation: A Philosophical Scheme for the History of the Concept of Heredity.” In: Beurton, P. / Falk, R. / Rheinberger, H.-J. (eds.), The Concept of the Gene in Development and Evolution, Cambridge: Cambridge University Press, 317 – 348.
Goldschmidt, R. (1958), Theoretical Genetics, Berkeley / Los Angeles: University of California Press.
Griesemer, J. (2000), “Reproduction and the Reduction of Genetics.” In: Beurton, P. / Falk, R. / Rheinberger, H.-J. (eds.), The Concept of the Gene in Development and Evolution, Cambridge: Cambridge University Press, 240 – 285.
Griffiths, P. / Neumann-Held, E. (1999), “The Many Faces of the Gene.” In: BioScience 49 (8), 656 – 662.
Hempel, C. (1970), “On the ‘Standard’ Conception of Scientific Theories.” In: Radner, M. / Winokur, S. (eds.), Minnesota Studies in the Philosophy of Science, vol. IV, Minneapolis: University of Minnesota Press, 142 – 163.
Jacob, F. / Monod, J. (1961), “Genetic Regulatory Mechanisms in the Synthesis of Proteins.” In: Journal of Molecular Biology 3, 318 – 356.
Johannsen, W. (1911), “The Genotype Conception of Heredity.” In: The American Naturalist 45 (531), 129 – 159.

Rethinking Scientific Concepts for Research Contexts

73

Johannsen, W. (1909), Elemente der exakten Erblichkeitslehre, Jena: Gustav Fischer.
Kuhn, T. (1970), The Structure of Scientific Revolutions, 2nd ed., Chicago: University of Chicago Press.
Feyerabend, P. (1958), “An Attempt at a Realistic Interpretation of Experience.” In: Proceedings of the Aristotelian Society 58, 143 – 170.
MacCorquodale, K. / Meehl, P. (1948), “On a Distinction between Hypothetical Constructs and Intervening Variables.” In: Psychological Review 55, 95 – 107.
Mendel, G. (1866), “Versuche über Pflanzen-Hybriden.” In: Verhandlungen des naturforschenden Vereines in Brünn IV, 3 – 47, English translation by C. T. Druery / W. Bateson.
Morgan, T. (1911), “The Influence of Heredity and of Environment in Determining the Coat Colors of Mice.” In: Annals of the N.Y. Academy of Science 21, 87 – 117.
Morgan, T. (1911), “Factors and Unit Characters in Mendelian Heredity.” In: The American Naturalist 47 (553), 5 – 16.
Morgan, T. et al. (1915), The Mechanism of Mendelian Heredity, New York: Henry Holt.
Morgan, T. (1917), “The Theory of the Gene.” In: The American Naturalist 51 (609), 513 – 544.
Morgan, T. (1926), The Theory of the Gene, New Haven: Yale University Press.
Morgan, T. (1933), “The Relation of Genetics to Physiology and Medicine.” Nobel Lecture in Stockholm, June 1933.
Muller, H. (1914), “The Bearing of the Selection Experiments of Castle and Philips on the Variability of Genes.” In: The American Naturalist 48, 567 – 576.
Muller, H. (1922), “Variation Due to Change in the Individual Gene.” In: The American Naturalist 56 (642), 32 – 50.
Muller, H. (1927), “Artificial Transmutation of the Gene.” In: Science 66, 84 – 87.
Muller, H. (1946), “The Gene.” In: Notes and Records of the Royal Society of London (Pilgrim Trust Lecture 137), 1 – 37.
Nersessian, N. (1984), Faraday to Einstein: Constructing Meaning in Scientific Theories, Dordrecht: Martinus Nijhoff.
Nilsson-Ehle, H. (1909), “Kreuzuntersuchungen an Hafer und Weizen.” In: Acta Univ. Lundensis 5 (New Series 2), 1 – 122.
Portin, P. (2002), “Historical Development of the Gene Concept.” In: Journal of Medicine and Philosophy 27 (3), 257 – 286.
Putnam, H. (1973), “Meaning and Reference.” In: Journal of Philosophy 70, 699 – 711.
Rheinberger, H.-J. (1997), Towards a History of Epistemic Things: Synthesizing Proteins in the Test Tube, Stanford: Stanford University Press.
Rheinberger, H.-J. (2000), “Gene Concepts: Fragments from the Perspective of Molecular Biology.” In: Beurton, P. / Falk, R. / Rheinberger, H.-J. (eds.), The Concept of the Gene in Development and Evolution, Cambridge: Cambridge University Press, 317 – 348.


Shapere, D. (1982), “Reason, Reference, and the Quest for Knowledge.” In: Philosophy of Science 49 (1), 1 – 23.
Stadler, L. (1954), “The Gene.” In: Science 120, 811 – 819.
Stotz, K. / Griffiths, P. / Knight, R. (2004), “How Biologists Conceptualize Genes: An Empirical Study.” In: Studies in History and Philosophy of Biological and Biomedical Sciences 35 (4), 647 – 673.
Sturtevant, A. (1925), “The Effects of Unequal Crossing over at the Bar Locus in Drosophila.” In: Genetics 10, 117 – 147.
Sturtevant, A. (2005), A History of Genetics, first published in 1970, Cold Spring Harbor: Cold Spring Harbor Laboratory Press.
Watson, J. / Crick, F. (1953), “The Genetical Implications of the Structure of Deoxyribonucleic Acid.” In: Nature 171 (4361), 963 – 967.
Weber, M. (2005), Philosophy of Experimental Biology, Cambridge: Cambridge University Press.
Wright, S. (1917), “Color Inheritance in Mammals.” In: Journal of Heredity 8 (5), 224 – 235.

The Dynamics of Scientific Concepts: The Relevance of Epistemic Aims and Values

Ingo Brigandt

The philosophy of science that grew out of logical positivism tended to construe scientific knowledge in terms of a set of interconnected beliefs about the world, such as theories and observation statements. Confirmation was understood as a logical relation between observation statements and theoretical statements. This was dubbed the ‘context of justification’, to be contrasted with the ‘context of discovery’, where discovery was not generally deemed to be a rational process and thus not a concern for philosophy. During the last few decades this vision of philosophy of science has changed (Brigandt 2011d; Hacking 1983). Nowadays discovery (e. g., in experimental biology) is seen as intimately tied to confirmation and explanation (Bechtel 2006; Craver 2007; Darden 2006; Weber 2005). Science is conceived not merely as a set of axiomatic systems, but as a dynamic process based on the various practices of individual scientists and the institutional settings of science (Hull 1988; Longino 2002; Brigandt 2011a, sect. 4). Two features particularly influence the dynamics of scientific knowledge: epistemic standards and aims. An existing standard (be it a methodological standard, an evidential standard, or a standard of explanatory adequacy) accounts for why old beliefs had to be abandoned and new beliefs came to be accepted. At the same time, standards are subject to change. Epistemic aims (assumptions about what issues are currently in need of scientific study and explanation) likewise influence the practice and dynamic workings of science (Brigandt 2013; Love 2008). Notice that epistemic standards and aims operate on a different dimension than scientific beliefs. Whereas scientific beliefs are representations of the world, scientific standards and aims are epistemic values. Epistemic aims (e. g., explanatory problems deemed to be important) are not descriptions of the objects of science, but values held by scientists as the actors of science. Taking such epistemic aims and values into account is, in my view, key to an epistemological understanding of the dynamics of science, and past philosophical accounts that focused exclusively on various beliefs (theoretical and observational) missed a whole aspect of scientific knowledge formation.1

The relevance of epistemic aims and values for belief change has been previously recognized. My paper intends to make a similar point for scientific concepts, both by studying how an individual concept changes (in its semantic properties) and by viewing epistemic aims and values as tied to individual concepts. In a recent publication (Brigandt 2010b), I have presented my view that a scientific concept consists of three components of content: (1) the concept’s reference, (2) the concept’s inferential role, and (3) the epistemic goal pursued by the concept’s use. In the course of history a concept can change in any of these components (possibly with one component changing while the others are stable); and at any point in time these components of content can vary across different users of the term. The first two components are well-known. Part of a concept’s content is that it has a certain referent, such as kinds of material entities, physical properties, and natural processes. But a concept also embodies beliefs about the referent, where two coreferential concepts may represent the common referent in a different way. This is often expressed by saying that a term has a sense or an intension; sometimes it is construed as the term’s inferential role, which is the way in which a term is actually used, or is properly used given the rules of language (Brandom 2000; Boghossian 1993). A concept’s inferential role embodies some of one’s beliefs about the referent by connecting the concept to other concepts. How a term’s meaning or a concept’s content (which embodies beliefs about the referent) is actually construed matters less for my purposes (as my concern is to highlight a different aspect of concepts), but in what follows I use the notion of ‘inferential role’.
According to my approach, a concept’s inferential role consists of those beliefs that are important for the application of the concept and that underwrite the term’s successful scientific use. While there can be more to a term’s successful use than the definitions put forward by scientists, a term’s definition is a part of the term’s inferential role, so that a revision of a term’s definition is also a change of its inferential role and thus an instance of semantic change (for more detail, see Brigandt 2010b).2 Despite its name, inferential role includes not only the inferences supported by a concept, but also the explanations made possible by the concept. A synonymous term found in the philosophical literature is ‘conceptual role’ (Block 1986, 1998; Field 1977; Harman 1987)—which is more easily seen as including explanations—but I explain below my preference for ‘inferential role’ (to avoid any conflation with what I call a concept’s epistemic goal). A concept’s inferential role and even its reference can change in the course of history. For instance, Nancy Nersessian (1984) has studied the concept of an electromagnetic field in detail by breaking down this concept’s content—inferential role in my terminology—into different parts (e. g., function, structure, and causal power) and tracking the historical change of each such part, while viewing different historical stages of each part as connected by ‘chain-of-reasoning connections.’ While this offers a detailed study of how this concept’s inferential role changed over time, my focus here is on a philosophical account of why such change occurred and why it was rational. To be sure, Nersessian (1984, 2008) views conceptual change as a problem-solving enterprise, but to fully explain the dynamic change of conceptual representations (or inferential roles) one has to make epistemic values—such as the aim of solving a particular problem—an additional and explicit part of one’s philosophical framework. I do so by introducing the epistemic goal pursued by a concept’s use as a third component of content in addition to reference and inferential role.

1 Even when using a post-positivist framework that, in addition to statements and theories, acknowledges models and accounts of mechanisms, it is important to bear in mind that all the former are representations that must be distinguished from epistemic aims. While my discussion focuses on epistemic values in science, I do not rely on a distinction between epistemic and other values. In current (commercialized) biomedical research, aims and values that are intuitively epistemic and intuitively non-epistemic are so entangled in the generation of knowledge that they have to be studied together. The question is not so much whether a value is epistemic or non-epistemic but whether it is licit (including socially desirable).
It is well-known that scientists pursue various epistemic goals, such as confirming particular claims, explaining certain phenomena, or making discoveries of a certain kind. A particular epistemic goal (e. g., explaining cell-cell interaction) is specific to a scientific discipline (e. g., cell biology) in that this discipline but no other is concerned with this scientific aim. While there are often several concepts used to address a particular epistemic goal, my point here is that there are instances where an epistemic goal is tied to a specific concept insofar as the rationale for introducing the term and for continuing to use it is to pursue this epistemic goal. For instance, the epistemic goal pursued by the concept of natural selection is to account for evolutionary adaptation. Taking this third component into philosophical consideration is essential because it accounts for semantic change and variation, i. e., for why a term’s inferential role and possibly reference has changed in history, or why a term’s inferential role and possibly reference varies across different contemporary users of the term. Among other things, the epistemic goal pursued by a term’s use sets standards for when the redefinition of a term (a change in a term’s inferential role) is rationally warranted. The notion of a concept’s epistemic goal is thereby important for understanding the epistemic dynamics of science and how concepts figure in investigative practice. It can do so because this third component of conceptual content is not about what a concept represents (reference) or how a concept represents (inferential role), but it is an epistemic value—what scientists attempt to achieve when using a concept. For this reason, it is vital to distinguish the concept’s epistemic goal from its inferential role. Both are determined by language use, and in this sense inferential role and epistemic goal are aspects of a concept’s use. My approach is consistent with the common idea that ‘meaning is use’ (Kindi in this volume), yet use has usually been identified with how a term is used (inferential role), though what a term is used for (epistemic goal) is likewise to be taken into account.

2 Due to this component of conceptual content, there is a close relation between a concept and a mental theory. It is a difficult question as to which of one’s beliefs about a referent is part of the inferential role (and thus what distinguishes a concept and one’s total beliefs about a referent). For some thoughts on the issue see Brigandt (2010b, sect. 2) and Brigandt (2006, sect. 3.3). I do not discuss it in this paper, as I deem my focus on epistemic aims and values to make a more fruitful contribution to understanding the use of scientific concepts than by revisiting longstanding debates about concept individuation and the analytic-synthetic distinction.
Most importantly, terms such as ‘concept use’, ‘function of a concept’ and ‘conceptual role’ could be seen as ambiguously referring to both inferential role and epistemic goal, even though the two must be clearly distinguished.3 My tenet that a concept consists of three components (reference, inferential role, epistemic goal) is not so much to be understood as a metaphysical doctrine about what a concept is; rather, it is a methodological guideline about how actual scientific concepts are to be studied—all three components, their change, and their interaction have to be considered (Brigandt 2011c). In what follows, I explain and illustrate this general approach in concrete cases by discussing three biological concepts that exhibit some interesting conceptual dynamics—the concept of evolutionary novelty, the homology concept, and the gene concept. In the final section, I will compare and contrast the three cases and draw some general conclusions.

3 Accounts of ‘function’ in biology have pointed out that there are different notions of functions used by scientists (Wouters 2003). The function of a biological trait can refer to what it does (the activity it performs), but it also can refer to what the trait is for (what it is designed to do for a larger system). These two notions of function mirror the difference between a concept’s inferential role and its epistemic goal.

1. The Concept of Evolutionary Novelty

An evolutionary novelty (also called evolutionary innovation) is a structure in a group of species that was not present in any ancestors of this group (Müller/Wagner 2003). An example of a novelty is the vertebrate jaw, which evolved in the transition from jawless vertebrates to jawed vertebrates (among extant vertebrates, hagfish and lampreys are jawless). The evolution of fins in fish and the transformation of fins into limbs are other examples. The origin of bird feathers is an evolutionary novelty.

The concept of evolutionary novelty is central to current evolutionary biology, in particular to the emerging field of evolutionary developmental biology, typically dubbed ‘evo-devo’ (Hall/Olson 2003). While accounting for the evolution of novel structures is an important scientific task, evo-devo biologists contend that traditional, neo-Darwinian evolutionary biology is ill-equipped to do this. Neo-Darwinism, having population genetics at its theoretical core, can explain how the frequency of an existing trait increases within a population, but it does not provide the tools to account for the very origin of morphological structures. The explanation of evolutionary novelty is a core item on the agenda of evo-devo, and there is widespread agreement that knowledge from developmental biology is essential in explanations of novelty (Müller/Newman 2005; Wagner 2000). Despite the intimate connections of both disciplines in the second half of the 19th century, developmental biology was irrelevant to evolutionary biology for most of the 20th century. As a result, current evolutionary developmental biology is often hailed as forging a (re-)synthesis of evolutionary and developmental biology in the near future (Brigandt/Love 2010; Love 2003).

Despite the fact that, as the central item on the agenda of evo-devo, the concept of evolutionary novelty contributes to defining the intellectual identity of this new discipline, there is substantial disagreement on how to define novelty (Brigandt/Love 2010; Moczek 2008). Whereas some construals of novelty focus on the new adaptive capacities generated by some novel traits, excluding issues pertaining to adaptation and considering structure alone is important to many other accounts of novelty. Some assert that upon its evolution a novel structure qualifies as such (if it was not present in the ancestor), while others argue that novelty means new evolutionary potential, so that a structure can count as a novelty only if upon further evolution it has actually resulted in a wide array of new structural variants. Most importantly, debates about different proposals of how to define ‘evolutionary novelty’ stem from the difficulty of deciding which morphological changes are mere quantitative variants (and thus not novelties), and which are qualitative differences (and thus genuine novelties). Some define a novelty as a structure that is not homologous to any ancestral structure (Hall 2005; Müller/Wagner 1991), but this may be of no help given that it has been argued that ‘being homologous’ is not an all-or-nothing affair but a matter of degree (Minelli 2003). For any structure there are some precursors; at least some components of a novel structure (e. g., tissues, cell differentiation patterns) were already present in the ancestor. Indeed, we may be surprised by how much novelty was generated by small developmental changes and minor rearrangements of existing features (Moczek 2008). As a result, there is possibly nothing but a continuum between non-novelty and novelty. Some cynics maintain that the concept of novelty does not admit of any precise definition and does not have a real scientific significance, though it is advantageous to use the label ‘novelty’ in grant applications. Admittedly, the concept of evolutionary novelty does a poor job at distinguishing novel from non-novel structural changes.
But this would be a drawback only if the central function (epistemic goal) of this concept were to make precise distinctions, for instance, if the concept were a tool of classification. In contrast, I follow Alan Love in arguing that the primary function of the concept of novelty is to set a problem agenda, i. e., to point to a phenomenon in need of explanation (Brigandt 2010a; Brigandt/Love 2010; Love 2005, 2006, 2008).4 In this case the problem is the explanation of the evolutionary origin of novelty, and given the nature of this particular problem, it is clear that knowledge from different biological disciplines is required—developmental biology, paleontology, phylogeny, and evolutionary genetics, among others. As a result, the problem of novelty motivates intellectual integration across disciplines. Darden and Maull (1977) have already observed that the integration of fields can be effected by the existence of a problem that cannot be solved by the resources of any field in isolation. But the further philosophical point can be made that a problem agenda can structure intellectual integration by foreshadowing how the intellectual contributions from different fields are to be coordinated. The reason for this is twofold. First, a problem agenda is associated with criteria of explanatory adequacy (Love 2008), which specify what considerations have to be adduced to yield a satisfactory explanation. Second, a problem agenda is a complex problem, consisting of several interrelated problems (Love introduced the term problem agenda for this reason). A problem agenda such as the explanation of evolutionary novelty consists of component questions that stand in systematic or hierarchical relations. This problem structure indicates how the different explanatory ingredients provided by different fields (e. g., answers to particular component questions) are to be related and integrated.

To illustrate this idea in the context of evolutionary novelty, the first basic step in accounts of novelty (encompassing several smaller component questions) is to lay out a sequence of structural changes leading up to a novelty, showing that and how the novelty qualitatively differs from structures that existed earlier, what aspects or parts of the overall structure have precursors in ancestral species, and how related structures changed in this period. Apart from detailed morphological studies of the relevant structures in extant species, the field of paleontology and its fossil data is particularly important for this task. Likewise, the discipline of phylogeny (which sets up phylogenetic trees) is needed to get an idea of at which phylogenetic junctures certain morphological transitions occurred. A second basic step in the explanation of novelty is a causal-mechanistic account of how the morphological transformations came about. Here developmental biology is necessary to understand how ancestral developmental systems could have been modified and reorganized so as to result in the advent of the novel morphology. Such an account has to address several levels of organization (genes, cells, tissues, morphological structures), so that different areas of developmental biology (broadly construed) and other related fields are often involved. The need to address the activities of genes, cells, and tissues across developmental time and relevant changes in such developmental processes across evolutionary time (corresponding to particular phylogenetic junctures and structural intermediates) yields a conceptual template to relate the various explanatory inputs from different disciplines. Sometimes the novel feature to be explained is not just a single structure but an anatomical function, i. e., the relative articulation, movement, and interaction of several structures, for instance the origin of flight in birds. In this case functional morphology is another discipline whose resources are needed, and the problem agenda makes plain that the articulation and interaction of the structures involved and the evolutionary origin and change of such interactions has to be addressed. The scenario of how the novelty arose also has to be consistent with the mechanisms of genetic change in populations, and the environmental conditions and forces of natural selection that existed in this historical period, calling for an involvement of the disciplines of population genetics and paleoecology. In my recent work, I have argued that integration in biology is not the stable theoretical unification of different fields, but the dynamic coordination of various epistemic units (explanations, models, concepts, methods) across several fields (Brigandt 2010a).

4 Some may wonder how the concept of novelty can point to a phenomenon in need of explanation (various evolutionary novelties), if it is not clear exactly which structures are novelties. However, a mechanistic explanation of a morphological transformation is an important achievement even if this structural change does not qualify as a novelty on some definitions of ‘novelty’. The idea that the concept of novelty sets a problem agenda shifts the focus away from the identification of novelty to the more important issue of the explanation of morphological change.
Rather than several disciplines merging into a unified whole, disciplines often retain some relative autonomy (based on various intellectual and institutional factors), while at the same time engaging in various relations to other disciplines. These intellectual relations can be problem-relative: Given one problem addressed by a discipline, one set of relations to other fields is operative; for the purposes of another problem the discipline currently maintains relations to other fields.5 Due to their internal structure, problem agendas coordinate interdisciplinary research—as discussed in the case of evolutionary novelty. A problem agenda specifies a particular epistemic aim, and its associated standards of explanatory adequacy are epistemic standards. In the introduction I have mentioned that epistemic aims and standards generally account for the epistemic dynamics of science, and the same holds in this context, where taking up a problem agenda leads to the emergence of novel epistemic relations across different ideas and fields. A change in the problems currently addressed by a discipline or in the criteria of explanatory adequacy results in further epistemic change. To return to the concept of evolutionary novelty, I have suggested that the primary function of this concept is to set a particular problem agenda, so that this concept motivates interdisciplinary research and coordinates intellectual integration. In this fashion, the concept of novelty generates some epistemic dynamics, including exploratory experimental and theoretical research that is part of attempts to account for specific evolutionary novelties. Using the terminology of my framework on concepts sketched above, a major epistemic goal pursued by the use of the concept of evolutionary novelty is to set a problem agenda (the explanation of the evolutionary origin of novelty). Biologists clearly state that one of the aims of evolutionary developmental biology is the explanation of evolutionary novelty, though they may not explicitly talk about the function of scientific concepts. Yet the fact that the concept of evolutionary novelty is used to pursue a certain epistemic goal is implicit in the practice of many evolutionary developmental biologists using the concept, so that philosophers can articulate this concept’s epistemic goal to make the operation of scientific practice intelligible and possibly contribute to science by making the relevant scientists more aware of and reflective about the functions of their concepts.

5 Kitcher (1999) argues that while genuine unification cannot be achieved (as nature is too complex), unification is still a regulative ideal. From my perspective, unification/integration is not at all an aim in itself; rather, a certain kind and degree of integration may be needed for the aim of solving a scientific problem (while at the same time some degree of disciplinary specialization may be required as well).
By setting a problem agenda, the concept of evolutionary novelty fulfills an important function in science, despite definitions of novelty being contested and it being unclear exactly which structures are novel. The concept’s most fruitful epistemic goal is not to classify objects or make precise distinctions. This is at odds with standard philosophical views of scientific terms, which assume that a term refers to certain objects, and that a scientifically useful term has a relatively precise definition which determines which objects fall under the term. Given disagreement on how to define novelty, the reference of the concept of evolutionary novelty is vague and what I call its inferential role (definition) may shift depending on who uses the concept. Still, by taking the epistemic goal pursued by the concept’s use into account (which in this case is not to classify and make precise distinctions), one can understand its role in and positive contribution to science. By setting a problem agenda, the concept of evolutionary novelty guides the generation of an explanatory framework, which is to bring together several concepts that are needed to successfully explain the origin of novelty.

2. The Homology Concept

The notion of homology has been crucial to the practice of comparative biology, including evolutionary biology (Brigandt 2006, 2011b; Brigandt/Griffiths 2007). Homologous structures are the corresponding structures in different species. For instance, the right arm in humans, the right wing in bats, the right forelimb of horses, and the right flipper in whales are homologous. Even some of the individual bones of the forelimb (such as the radius and ulna) reoccur in different species. Though the shape of such a homologous structure varies among different species, it is identified as the same structure and typically given the same name across species. In addition to bones, all types of anatomical structures and bodily parts can be homologous, including individual muscles, nerves, and tissues. Molecular structures such as particular genes and proteins are likewise identified as homologous across different species. Unsurprisingly, the reason why homologous structures occur in different species is that these structures have been inherited from the species’ common ancestor. This is reflected by modern definitions of homology: Two structures in two species are homologous if they have been derived from one and the same structure in the ancestor.

While homology is an evolutionary phenomenon, the homology concept was actually introduced well before the advent of Darwin’s theory of evolution. Up to the 18th century anatomical structures were often referred to by a description of the structure’s composition, shape, position, or function, with practices varying across countries. Where shorter names were used by an anatomist, a common name was applied to structures in different species insofar as these structures were of similar shape and function, so that the same name was only used for structures in taxonomically closely related species (e. g., different mammals).
The homology concept was established in early 19th century comparative anatomy and embryology, based on the recognition that the same structure can be found in taxonomically less closely related groups, such as reptiles and mammals, or even fish and mammals. This was possible due to the use of two basic criteria of homology. One was the relative position of one structure to other structures of the same organism, such as the relative position of adjacent bones, or a nerve innervating a particular muscle. A structure can substantially vary in its length and shape across species, while keeping its relative position to and articulation with other structures. The other criterion of homology was the idea that homologous structures have the same embryonic origin, i. e., develop out of the same tissues and embryonic precursors in different species. While the homology concept was already an important part of the practice of comparative biology in this pre-Darwinian period, different non-evolutionary accounts of the nature of homology were put forward. One idea was that different species are governed by the same laws of development, resulting in corresponding structures in different species. Another account appealed to abstract geometric body plans (or possibly to blueprints in the mind of God), so that structures in actual species were defined to be homologous in case they corresponded to the same element in the abstract body plan.

The fact that the advent of evolutionary theory paved the way for the later definition of homology in terms of common ancestry raises the following issue: Do the pre-Darwinian and the post-Darwinian uses of the term ‘homology’ amount to two different concepts, so that the Darwinian revolution led to the replacement of the pre-Darwinian concept of homology by a separate concept? The worry is that a change in definition makes the pre-Darwinian and post-Darwinian concepts of homology incommensurable (meaning incommensurability in the sense of Kuhn 1962).
86

Ingo Brigandt

While not addressed by other authors in the case of homology, the issue has been discussed in a related context, namely, the question of whether the pre- and post-Darwinian accounts of the nature of species amount to two distinct concepts (Beatty 1986). In the case of the homology concept, some semantic change did occur with the advent of evolutionary theory. The change in definitions and accounts of homology is what I call a change in the concept's inferential role. But on my philosophical framework, inferential role is only one component of a concept. The epistemic goal pursued by the use of the homology concept did not shift with the origin of Darwinism, so there was a major element of conceptual continuity. Before the advent of evolutionary theory, biologists used the homology concept for two epistemic aims: (1) the systematic morphological description of several species, and (2) the taxonomic classification of species. Individuating anatomical structures in terms of homology proved to be very conducive to both goals. Another possible scheme of individuating structures is in terms of analogy, where analogous structures are structures having the same function. The wings of birds and insects are analogous, but not homologous. Homologous structures need not be analogous: as the above example of the mammalian forelimb (human arm, bat wing, whale flipper) shows, the function of a homologous structure can be very different in different species. Homology individuates structures by breaking down an organism into its natural anatomical units. What these units are is not always obvious, as what appears to be one bone can actually be several fused bones (which can be uncovered by a study of the skeletal structure's development, or by comparison with other species where the bones are not fused). Homology also individuates by relating structures across species as the same ones. First, this yields unified morphological descriptions (far more unified than other, earlier approaches permitted). Many anatomical and developmental descriptions that apply to a structure in one species also hold for the corresponding, homologous structure in other species. To the extent that a homologous structure varies substantially across species, dissimilarities (and similarities) become meaningful if they pertain to actually corresponding structures, so that homology provides a reference system to which descriptions across species have to attach. The comparative practice using the homology concept made possible a unified morphological account of the vertebrate skeleton even before the advent of Darwinian evolutionary theory (Owen 1849).
Regarding the concept’s second epistemic goal, pre-Darwinian taxonomists aimed at grouping species into higher taxa not in an arbitrary or artificial fashion, but in a manner that revealed the species’ so-called natural affinities. Before the advent of evolutionary theory it became clear that while analogies were similarities independent of taxonomic relatedness, homologies across species reflected their natural affinities and were thus to be used as guides to taxonomic relatedness. Despite its introduction of a new perspective for biology, the advent of Darwin’s evolutionary theory did not change what comparative biologists such as anatomists and taxonomists attempted to achieve when using the homology concept—the epistemic goals were still systematic morphological description and the classification of species. Biologists gradually came to adopt the new definition of homology in terms of common ancestry precisely because they realized that the new con-

The Dynamics of Scientific Concepts

87

strual permitted them to meet their traditional epistemic goals in an improved fashion. Once homologous structures are defined as structures inherited from an ancestral structure and taxonomic groups are seen as branches of the tree of life stemming from an ancestral species, it is clear why homologous structures are to be compared in the classification of species—whereas analogous structures are similarities independently of phylogenetic relatedness and for this reason are not to be used for taxonomic purposes. A phylogenetic definition of homology permitted a better resolution of controversial claims about particular homologies. A theoretically more sound morphology based on phylogenetic principles led to more adequate and unified anatomical descriptions encompassing different species, as breaking an organism down into structural units by means of homology means to pick out units of morphological evolution across species. (For more details on the history of the homology concept, see Brigandt 2006.) In the terminology of my framework of concepts, the change in the homology concept’s definition and thus its inferential role was scientifically warranted because it permitted biologists to meet the concept’s epistemic goals to a greater extent (where the two epistemic goals were stable). To be sure, the continued presence of an unchanging epistemic goal alone cannot trigger change in a term’s inferential role. Relevant are also novel empirical findings (which can lead to the endorsement of new beliefs or the abandonment of previously held beliefs), in this case the idea of the common ancestry of species and anatomical structures. But note that in addition to a change in beliefs, what philosophers have to account for in this case is a change in meaning, a change in the very definition of the term ‘homology’. 
This is possible because the epistemic goal pursued by a concept's use provides the required standard: A change in the concept's inferential role (definition) is rationally warranted if the new inferential role meets the concept's epistemic goal to a greater degree than the previous inferential role. Some semantic change occurred with the Darwinian revolution, but there is no need to consider it as resulting in incommensurability.6

6 Given the change in definition, some may notice that I have not answered the question as to whether the term 'homology' as used by pre- and post-Darwinian biologists is the same concept or different concepts. Since on my account a term has three semantic properties (reference, inferential role, epistemic goal) and can change in each of them, I do not think that there is a unique account of concept individuation. Whether this particular instance of semantic change is viewed as an enduring homology concept (undergoing internal change) or as one concept giving rise to a different concept, in either case the rationality of the change in the term's inferential role has to be justified. I consider it philosophically more important to account for change in any of a term's semantic properties than to debate whether this amounts to a separate concept being used (Brigandt 2010b).

In this fashion, the epistemic goal pursued by a concept's use guides scientists to revise the definition of a term, and the notion of a concept's epistemic goal enables philosophers to account for the rationality of semantic change in the course of history. In addition to this, the notion of epistemic goal also bears on understanding semantic variation across different users of a term, if the term's epistemic goal varies. In addition to the homology concept's traditional use in comparative and evolutionary biology, in the second half of the 20th century this concept came to be used in two novel disciplines—molecular biology and evolutionary developmental biology. As I have argued earlier (Brigandt 2003), each of these two new fields came to use the homology concept for somewhat different epistemic goals. This subsequently resulted in semantic variation across fields and in conceptual divergence, where nowadays homology is construed differently in contemporary systematics / evolutionary biology, in molecular biology, and in evolutionary developmental biology. A diversification of the epistemic goals for which the term 'homology' is used (among several biological fields) led to a diversification of the term's inferential role. In much of molecular biology (yet not in molecular evolution and molecular phylogeny), 'homology' simply refers to similarity of gene and protein sequences. From the point of view of evolutionary biology, this fails to distinguish similarities that are and that are not due to common ancestry, where on a phylogenetic definition only the former are instances of homology. Evolutionary biologists have criticized the construal of molecular homology as sequence similarity for this reason (Reeck et al. 1987).

Yet in molecular biology, merely knowing that a gene or protein sequence (not yet studied) is similar to a sequence whose role in molecular mechanisms has been established permits an inference regarding which experimental techniques can be effectively used to investigate the new sequence. Thus, the term 'homology' is used in most of molecular biology for the epistemic goal of experimental discovery. The starting point for homology as approached in evolutionary developmental biology is that an account of homology in systematics and traditional evolutionary biology does not explain what makes parts of parent and offspring the corresponding (homologous) characters, and it does not explain how the same structures developmentally reappear in different generations. The epistemic goal pursued by the use of the homology concept in evolutionary developmental biology is to developmentally explain how homologues are units of morphological transformation, which can appear in different generations as the same morphological unit while being able to undergo change and structural modification. Here the epistemic goal is causal-mechanistic explanation as opposed to the unified descriptions of comparative biology. As a result, the notion of a concept's epistemic goal also accounts for why semantic variation emerged (variation in inferential role), if the latter results from a term being used for different concrete epistemic goals by different scientific approaches. Whether or not such semantic variation creates problems depends on the particular case. If a term is used to pursue quite different epistemic goals in different fields (where a single inferential role cannot be used to meet different goals at the same time) and the scientists are not aware of this, communication across these fields can be hampered. This is, up to a point, the case for 'homology' as nowadays used, as some biologists criticize the account of homology of another field without being aware that this field pursues aims different from their own when using the same term. For example, working within the perspective of comparative biology, Cracraft (2005) rejects the approach to homology found in evolutionary developmental biology.

3. The Gene Concept

My account of the gene concept is in some ways similar to my discussion of the homology concept, involving both semantic change in the course of history (see also MacLeod in this volume) and semantic variation at present. The latter situation is of particular interest, as the use of the term 'gene' in contemporary molecular biology can vary from context to context, so most of my discussion is devoted to this issue (for my detailed treatment of the gene concept, see Brigandt 2010b). Philosophers typically distinguish between the classical gene concept and the molecular gene concept (Waters 1994). The classical gene concept emerged around 1900 and was well established by the 1920s. Classical genetics was concerned with the study of patterns of inheritance across generations, where phenotypic patterns of inheritance were mathematically explained based on the underlying transmission of genes. On my account, the epistemic goal pursued by the use of the classical gene concept was the prediction (and statistical explanation) of phenotypic patterns of inheritance, i. e., the distribution of phenotypes in the offspring generation. This aim was achieved by an account of classical genes—in my terminology, the inferential role of the classical gene concept. Even though genes were often deemed to be concrete material entities, the classical gene concept did not embody a structural construal of the nature of genes apart from the fact that genes were tied to specific chromosomal locations (Sarkar 1998; Waters 1994). Instead, the concept's inferential role contained knowledge about how genes and chromosomes behave in processes of inheritance and sexual reproduction, including meiosis and crossing over, which sufficed for setting up chromosomal maps (showing the relative position of various genes on a chromosome) and for predicting and statistically explaining patterns of genotypic and phenotypic inheritance. The molecular gene concept grew gradually out of the classical gene concept in the 1950s and 1960s. Despite this historical continuity, the classical concept (of the 1920s) and the molecular concept (of the 1970s) differ in important respects. Molecular genetics is not in the business of studying patterns of inheritance across generations; instead, it addresses processes taking place within organisms, in fact within single cells. The epistemic goal pursued with the molecular gene concept is to account for how a gene codes for a specific molecular product, usually a protein. For this reason, a structural characterization of genes is essential.
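The predictive use of the classical gene concept described above can be made concrete with a toy calculation. This sketch is mine, not from the text: the function name and genotype labels are invented, and the example is the standard monohybrid cross (Aa × Aa with allele A dominant), where gene transmission alone predicts the classical 3:1 phenotypic ratio.

```python
# A toy illustration (not from the text) of prediction via the classical gene
# concept: phenotype distributions follow from gene transmission alone, without
# any structural account of what genes are.

from itertools import product
from collections import Counter

def predict_phenotypes(parent_1, parent_2, dominant_allele="A"):
    """Predict offspring phenotype frequencies from parental genotypes alone."""
    # Each parent transmits one of its two alleles; enumerate all combinations.
    offspring = ["".join(sorted(pair)) for pair in product(parent_1, parent_2)]
    return Counter(
        "dominant" if dominant_allele in genotype else "recessive"
        for genotype in offspring
    )

print(dict(predict_phenotypes("Aa", "Aa")))  # {'dominant': 3, 'recessive': 1}
```

The point of the sketch is that nothing in it refers to gene structure; only transmission behavior (which allele a parent passes on) does predictive work, matching the chapter's claim about the classical concept's inferential role.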
The inferential role of the molecular gene concept includes the idea that a gene is a so-called open reading frame, which is a stretch of DNA bounded by a start and a stop codon and preceded by a promoter sequence. In combination with knowledge about how genes as structural units figure in molecular processes, this explains gene function, i. e., the production of gene products. Molecular entities bind to the promoter and thereby initiate the transcription of a gene's DNA sequence into an RNA sequence. In a second step, this RNA sequence is translated into a protein as a sequence of amino acids, where the particular amino acid sequence is determined by the gene's DNA sequence. (Three adjacent DNA nucleotides code for one amino acid, and the nucleotide–amino acid mapping is called the genetic code.) In contrast to the classical gene concept, whose function is to predict (and offer statistical explanations), the molecular gene concept is a tool of causal-mechanistic explanation. As a result, all three components of content (reference, inferential role, epistemic goal) changed in the transition from the classical to the molecular gene concept. The inferential role of the term 'gene' changed since only the molecular gene concept offers a structural account of genes. This even led to a change in reference. Since classical genes are individuated in terms of their phenotypic effects and molecular genes are defined as particular structural units coding for proteins, these two concepts may offer a different account of how many genes there are at a genetic region in the case of regions with a complex organization (Weber 2005, ch. 7).7 A change in reference has traditionally been seen as threatening incommensurability of meaning, and the causal theory of reference has been invoked by philosophers of science to show how a term's reference can be stable despite major theory change. However, the gene concept is one of the cases where a scientific concept underwent rational change in meaning despite a change in reference (Brigandt 2010b; Burian et al. 1996; Kitcher 1982). In the case of the homology concept, I have accounted for the redefinition of this concept based on the concept's stable epistemic goal, which sets standards for when a change in inferential role and a possibly correlated change in reference is rationally warranted. However, this option does not seem to be available in the present context, as in the transition from the classical to the molecular gene concept the very epistemic goal pursued by the use of the term 'gene' changed. Still, a philosophical account is possible, based on the fact that the change in epistemic goal was gradual. The reader is referred to Brigandt (2010b, sect. 3) on this issue.8
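The transcription–translation picture just described can be rendered as a schematic sketch. This is my own illustration, not the chapter's: the DNA sequence is invented, and the four-entry code table is a stand-in for the real 64-codon genetic code.

```python
# A schematic sketch (not from the chapter) of the "open reading frame" account:
# DNA is transcribed into RNA, which is read three nucleotides (one codon) at a
# time from the start codon, each codon specifying an amino acid via the genetic
# code, until a stop codon ends translation.

TOY_GENETIC_CODE = {  # hypothetical mini-table; the real code maps 64 codons
    "AUG": "Met",  # start codon (also codes for methionine)
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": None,   # stop codon: no amino acid, translation ends
}

def transcribe(dna):
    """Transcription: the DNA sequence is copied into RNA (T becomes U)."""
    return dna.replace("T", "U")

def translate(rna):
    """Translation: read codons from the start codon until a stop codon."""
    protein = []
    start = rna.find("AUG")
    if start == -1:
        return protein  # no start codon, no product
    for i in range(start, len(rna) - 2, 3):
        amino_acid = TOY_GENETIC_CODE.get(rna[i:i + 3])
        if amino_acid is None:  # stop codon (or codon missing from toy table)
            break
        protein.append(amino_acid)
    return protein

print(translate(transcribe("ATGTTTGGCTAA")))  # ['Met', 'Phe', 'Gly']
```

On this simple one gene–one product picture, the DNA stretch between start and stop codon just is the gene; the complications discussed below (splicing, trans-splicing) are exactly what breaks this neat mapping.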

7 While detailed classical studies carried out in the 1970s had suggested five classical genes at the achaete-scute gene complex, molecular research of the 1980s instead revealed four molecular genes that are responsible for the phenomena observed by prior classical studies. Weber (2005) argues that what geneticists were tracking when studying 'genes' was not a single structural kind, but that there are several kinds with overlapping extensions, to which biologists can and did refer. He introduces the useful notion of 'floating reference' for the idea that the reference of the gene concept has changed constantly during its history, though in a gradual fashion from one category to another category overlapping with the former.

8 Another complication is that the advent of the molecular gene concept did not eliminate the classical gene concept. Even though both concepts are still in use, it is important to account for how the molecular concept growing out of and largely replacing the classical concept was an instance of rational semantic change.

In this section I want to devote more discussion to how the molecular gene concept has changed in the last few decades, and to the associated origin of substantial semantic variation. While the molecular gene concept was well established by the 1970s, novel findings in molecular genetics and genomics have led to semantic change. Originally, it was assumed that all genes have the same structure (a stretch of DNA delineated by a start and a stop codon and preceded by a promoter sequence), where one such structural unit codes for a single product and every gene product results from one such DNA unit. However, it has been discovered that gene structure and function are far more complicated in eukaryotes (Griffiths/Stotz 2007; Stotz 2006a, 2006b). It turns out that genes form a structurally heterogeneous kind and that the relation between DNA elements and their products is many–many. This led to revised construals of what molecular genes are, resulting in a historical change of both the inferential role and the reference of the molecular gene concept. At the same time, the molecular gene concept's epistemic goal has been stable—the concept is still used to explain how genes code for their products (but see the refined account below). The new use of the molecular gene concept came about through those findings about gene structure that bear on gene function. Thereby it was an instance of rational semantic change, as current construals of what molecular genes are provide an improved account of how DNA elements function by coding for gene products—meeting the molecular gene concept's epistemic goal to a higher degree. This semantic change in the last few decades has also led to a significant degree of semantic variation. Nowadays, different molecular biologists may offer different construals of genes.

These scientific developments have recently triggered philosophical discussions of the molecular gene concept, addressing such questions as whether there is a unified concept underlying the varying uses of 'gene' or whether there are two or more distinct gene concepts used in molecular biology (Beurton et al. 2000; Griffiths/Stotz 2007; Moss 2003; Stotz/Griffiths 2004; Waters 2000). In my study of the homology concept, I have pointed out that nowadays the term 'homology' is used for different scientific purposes (epistemic goals) in three different biological fields (systematics / traditional evolutionary biology, molecular biology, and evolutionary developmental biology), so that one could argue that these three are different, though related, concepts. In the case of the term 'gene' as employed across molecular biology, the situation is that there is still a shared epistemic goal underlying different uses of the term. Its usage is context-sensitive, where the term is used in slightly different ways by different molecular researchers (or by the same person in different scientific contexts). In any case, rather than trying to determine whether semantic variation corresponds to one shared or several distinct concepts, I view it as philosophically more fruitful to study and explain the presence of semantic variation (as an instance of conceptual dynamics), in particular showing why a context-sensitive use of a term can be beneficial to scientific practice. For the purpose of this essay I mention only one major reason for the current semantic variation, namely, the many–many relation between DNA elements and gene products. A continuous DNA segment can give rise to an RNA transcript, where in a process called splicing only some chunks of the RNA are selected and fused to be translated into the protein product (so that only certain chunks of the DNA segment actually code for the product). In the case of alternative splicing, different parts of a gene's RNA transcripts can be selected in different cells of an organism or in one cell at different points in time, leading to the situation where one DNA element produces many protein products with distinct amino acid sequences. One could consider this DNA element to be a gene, which happens to code for many distinct products. On the other hand, one could postulate a gene for each product, where these genes happen to physically overlap or be identical. There is also a many–one relation between DNA elements and gene products. In the case of trans-splicing, several non-contiguous DNA elements (possibly located on different chromosomes) are independently transcribed to RNAs, which are then fused together to generate a single protein product.
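The many–many relation just described can be sketched schematically. This illustration is my own, not the author's: the exon names and peptide labels are invented placeholders.

```python
# A schematic sketch (not from the chapter) of why the DNA-product relation is
# many-many: alternative splicing lets one DNA element yield several proteins,
# while trans-splicing fuses transcripts from separate DNA elements into one.

EXONS = {"A": "MET", "B": "GLY", "C": "PHE"}  # hypothetical exon -> peptide chunk

def splice(selected_exons):
    """Splicing: the selected exons are fused into a single product."""
    return "-".join(EXONS[e] for e in selected_exons)

# Alternative splicing: one DNA element (exons A, B, C), two splice variants,
# hence two distinct proteins from the same element (one-many).
print(splice(["A", "B"]))  # MET-GLY
print(splice(["A", "C"]))  # MET-PHE

# Trans-splicing: transcripts from two separate DNA elements (possibly on
# different chromosomes) are fused into one product (many-one).
element_1, element_2 = ["A"], ["C"]
print(splice(element_1 + element_2))  # MET-PHE
```

Note that the same product MET-PHE can arise either from one element spliced alternatively or from two elements spliced together, which is precisely why counting "the genes" in such a region depends on one's criteria of individuation.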
This raises the question of whether each of these non-contiguous DNA elements is a separate gene (though each such gene does not code for a protein in isolation), or whether they jointly form a gene (that happens to be physically spread out over the genome). Due to such many–many relations between DNA elements and gene products, it is unclear which DNA elements (and their mereological sums) count as a gene, as a mere part of a gene, or as a collection of several genes. As a result, different scientists may use different criteria for individuating genes, which also entail a different reference of the term 'gene'. This is aggravated by the fact that the relation from DNA elements to RNA products is largely one–one, but the relation between DNA elements and protein products is many–many (due to alternative splicing and trans-splicing of RNA transcripts). Nowadays it is clear that both RNAs (originally assumed to be mere intermediates) and enzyme-forming proteins fulfill important cellular functions. Researchers focusing on RNAs or on proteins as the molecular gene products of interest are likely to individuate different DNA elements as independent genes. Both the use and the reference of the term 'gene' in contemporary molecular biology can vary across utterances, and this variation is determined by two basic factors. First, genes form a heterogeneous kind, so that different structural and functional features can be used to characterize genes. Some geneticists assume that only DNA elements with distinct promoters can count as distinct genes; others do not make this requirement. Some permit that a gene may have different products, yet count genetic elements that are trans-spliced together as distinct genes. Other relevant considerations are whether all separable genetic elements are translated, whether a genetic element that forms a product in conjunction with other DNA elements (trans-splicing) also produces another product on its own in other cellular contexts, how far apart the different DNA segments involved are, and how chemically diverse the different products are. Several such considerations can be combined into various specific characterizations of what a gene is. Each way of individuating genes picks out a different category (though the categories overlap extensionally), so that genes are not a unique kind, but a set of several overlapping categories. Second, when using the gene concept on a certain occasion, a biologist has particular investigative or explanatory aims in mind. A geneticist is typically interested in quite specific aspects of gene structure or gene function in her research.
The research question that is pursued when using the term 'gene' influences which of the possible structural or functional features of genes are relevant for this instance of term use. As a result, two biologists may employ different construals of precisely what defines a gene when addressing one and the same complex genetic region. For example, one scientist may be interested in the RNA produced from a DNA segment, while another may focus on the protein as the gene product of interest. Usually, this semantic variation is pronounced across different branches of molecular biology (RNA researchers as opposed to protein biochemists), but occasionally one and the same person can use the term 'gene' differently in different scientific contexts. On my philosophical account, there is a common generic epistemic goal pursued with the use of the molecular gene concept, namely, to account for gene function. Yet in concrete contexts this can be spelled out in different ways, resulting in different specific epistemic goals underlying actual uses of the term, e. g., focusing on RNA or protein as the gene product of interest. The variation in the (specific) epistemic goal pursued explains why there is semantic variation (i. e., variation in inferential role and reference), and why a context-sensitive use of the term 'gene' is conducive to scientific practice. For different epistemic goals are legitimate, and a unique construal of what genes are cannot do justice to various epistemic goals and the complexities of genetic structure. This semantic variation does not lead to communication failure, as the variation is small and the particular context disambiguates the particular use in play. In this fashion, small and context-dependent variation in the epistemic goal pursued with a term's use accounts for conceptual dynamics across utterances.

4. Conclusions

A theme common to all three case studies was that scientific concepts are used to pursue particular epistemic goals, and that these epistemic goals influence the epistemic dynamics of science. One basic difference between the concept of evolutionary novelty, on the one hand, and the homology concept and the gene concept, on the other, is that it is only in the latter two cases that the very concept under consideration is meant to meet the epistemic goal specified by this concept. The molecular gene concept, for instance, is used to account for how DNA segments produce their molecular products—the epistemic goal pursued by the concept's use. This concept sets out a phenomenon to be explained, and its inferential role (as one part of the concept's content) ideally offers an explanation of this phenomenon.9 The concept of evolutionary novelty, in contrast, sets out a problem agenda; however, it is not the concept of novelty, but several other biological concepts, that are assumed to account for the origin of novelty. Some such concepts are notions pertaining to the structure of gene regulatory networks, the concepts of epigenetic interaction, thresholds in morphogenesis, developmental reprogramming, and heterochrony—a successful explanatory framework is a task yet to be achieved in the future. As a result, the epistemic dynamics that is at stake in this case is not a change of the concept of evolutionary novelty.10 The concept fulfills a stable function by setting out a problem agenda, but as argued above, this substantially influences the operation of evolutionary biology, as this particular problem agenda (consisting of hierarchically related component questions and associated criteria of explanatory adequacy) coordinates research across several biological subdisciplines, foreshadowing how various intellectual resources (models, explanations, concepts, and methods) are to be related and integrated. Thereby, the concept of evolutionary novelty influences the epistemic dynamics of several biological fields in general, and the behavior of other concepts in particular. In the case of the homology concept and the gene concept, the dynamic behavior of these very concepts was at issue (even though other concepts related to them have changed as well). The definition of 'homology' changed during the 19th century in the transition from pre-evolutionary biology to evolutionary theory. Likewise, basic accounts of what a molecular gene is have changed since the advent of the molecular gene concept in the late 1960s. Both are changes in inferential role on my account, and the stable epistemic goal of the respective concept motivated biologists to revise its definition (once new empirical knowledge became available); furthermore, the notion of epistemic goal philosophically justifies why the redefinition was legitimate. Nowadays, the terms 'homology' and 'gene' also exhibit semantic variation, as a consequence of variation in the precise epistemic goal pursued by different users of the respective term.

9 Other terms pertaining to gene structure and function (such as 'exon', 'transcription unit', and 'splicing') are involved in explanations of how genes produce their products, so that the term 'gene' is not the only one tied to the goal of explaining gene function. But the term 'gene' is central in this context, and the other terms are tied to it as part of the gene concept's inferential role.
The homology concept came to be used within different branches of biology, and for different epistemic purposes and aims among these branches. The molecular gene concept is universally used for a generic epistemic goal (accounting for how DNA segments produce their products), but this generic goal can be spelled out differently by different researchers and in different research contexts (e. g., focusing on RNAs or on proteins as the gene product of interest); thus, there is variation in the specific epistemic goal tied to the use of the term 'gene', resulting in context-dependent construals of what genes are. My case studies mentioned various kinds of epistemic goals tied to concept use. The homology concept used in molecular biology is purely a tool of discovery. The homology concept in comparative biology (and traditional evolutionary biology) is used to yield unified descriptions, and the classical gene concept aims at predictions. Beyond inference, prediction, and classification, causal-mechanistic explanation can be an epistemic goal, as witnessed by the homology concept used in evolutionary developmental biology, the molecular gene concept, and the concept of evolutionary novelty. Even if the epistemic goal pursued by a concept is to arrive at a scientific explanation (rather than to discover certain phenomena), this may influence investigative practice in an essential way. The molecular gene concept clearly guides discovery in molecular biology, and the concept of evolutionary novelty motivates and structures exploratory experimental and theoretical research. Concepts refer to the world and represent the world in a certain fashion. Consequently, concepts have usually been construed as consisting of some beliefs about the concept's referent: an intension, an inferential role, a definition, or an analytic statement. However, note that the epistemic goal pursued by a concept's use operates on a different dimension than reference and inferential role. For the epistemic goal does not consist in a belief about states of the world—not even in a desire as to what aspects of the world studied by science should be like. Instead, it is a goal for scientific practice, or a desire as to what a scientific community should achieve. Such goals have to be taken into account to understand the dynamic operation of science, including the epistemology of scientific concepts.

10 This leaves out the fact that traditional evolutionary biology did not see the explanation of novelty as a distinct challenge for evolutionary theory, so that historically, with the advent of evolutionary developmental biology, the concept of novelty has exhibited some change; likewise, its dynamic behavior across different parts of evolutionary biology is contingent upon how seriously this concept is taken.
It has been observed that a tentative definition of a term can be revised once a new definition becomes available which is explanatorily more fundamental (Bloch in this volume). However, in order to adjudicate whether one definition is explanatorily more fundamental than another, one has to know what particular issues are in need of explanation in the context of this concept, and this is provided by the concept’s epistemic goal. Some concepts are used to pursue several explanatory aims; some are not used for the purpose of explanation at all, so that other considerations apart from explanatory fundamentality determine the appropriateness of a definition. Scientific concepts are open-ended in that scientists are never hostage to the definitions they once favored and are free to change their concepts (MacLeod in this volume). Nevertheless, to understand this phenomenon it is not enough to point to the fact that the meaning (inferential role) of some terms is not clearly delineated, as in the case of a Wittgensteinian family resemblance, and thus easier to change. Apart from a flexible inferential role, one needs an independent standard that motivates the inferential role’s change, and thus has to consider a property on a different dimension than inferential role, namely the epistemic goal of a concept.

It is for this reason that my claim that the epistemic goal pursued by a concept’s use is a part of this concept’s content is controversial. Reference and inferential role (or some equivalent property) have generally been deemed to constitute mental content, and reference (extension) and inferential role (intension) are semantic properties of terms.11 But many will resist my suggestion that the epistemic goal pursued by a term’s use is also a semantic property of a scientific term, as it is not part of ‘what is said’ (the truth-conditional meaning of an expression containing terms). Still, I maintain that the epistemic goal pursued by a concept’s use is a component of this concept, because this component accounts for the rationality of semantic change and variation, and thus fulfills a semantic task—even if this task has not been recognized by traditional accounts of concepts. In fact, all three components of conceptual content have to be studied together. A stable epistemic goal causally determines and rationally justifies historical change in inferential role and reference, and variation in a concept’s epistemic goal (across different persons) accounts for variation in inferential role and reference. Likewise, changes in inferential roles and scientific beliefs can transform the epistemic goals that scientists deem worth pursuing.
My framework of concepts is not so much to be construed as a metaphysical account (or the only account) of what a concept is, but as a methodological guideline for how philosophers should study scientific concepts. Such a methodological framework is to be defended in terms of its fruitfulness for understanding the behavior of actual concepts (Brigandt 2011c). Ascribing a certain reference, inferential role, and epistemic goal to a term is justified if it sheds light on the use of this term and on the change and variation in that use. One may wonder whether every concept (or at least every scientific concept) has an epistemic goal. While there are very generic epistemic goals common to most concepts, for instance referring to a referent or ensuring cognitive economy, more specific epistemic goals that are particular to a concept may exist only for scientifically central concepts, such as the ones discussed above. Nonetheless, this is unproblematic, as a unique epistemic goal has to be ascribed to a concept only if the concept exhibits semantic change or variation, which needs to be philosophically explained.

Associated with epistemic goals are standards of adequacy that specify what would count as meeting the epistemic goal—what method is suitable for an investigative goal, what evidential standards obtain for an inferential or inductive aim, or what criteria of explanatory adequacy underlie an explanatory goal. Both epistemic goals and standards are epistemic values. Values are not beliefs about the object of scientific study and are thus not part of scientific theories and models—they operate on a quite different dimension. Yet epistemic aims and values are part of scientific practice and essential determinants of the epistemic dynamics of science, including scientific discovery and belief change.

The central purpose of this essay has been to argue that (1) not only do epistemic aims and values influence theory change, but more specifically they influence the dynamic behavior of individual concepts, and (2) epistemic aims and values can be embodied by specific scientific concepts, so that such concepts influence the dynamics of science. As a result, the epistemic aims and values underlying the use of individual concepts have to be taken into account by any epistemology of science.12

11 The exception is conceptual atomists (and direct reference theorists), who claim that concepts are individuated in terms of reference only (Fodor 2004; Laurence/Margolis 1999).

Acknowledgements

I thank the other contributors to this volume for comments on previous drafts of this essay, and Megan Dean for proofreading the manuscript. I am indebted to Uljana Feest and Friedrich Steinle for financial support to attend the project’s author meeting. The work on this paper was also supported by the Social Sciences and Humanities Research Council of Canada (Standard Research Grant 410-2008-0400).

12 In Brigandt (2011c), I suggest that philosophical concepts, too, should be related to aims. Against the armchair method of analyzing philosophical concepts by using intuitions, I argue that, in analogy to scientific concepts, philosophical concepts should be viewed as being tied to philosophical goals. The specific goal pursued by a philosophical term’s use determines what philosophical account of the term is the most adequate one.


Reference List

Beatty, J. (1986), “Speaking of Species: Darwin’s Strategy.” In: Kohn, D. (ed.), Darwin’s Heritage: A Centennial Retrospect, Princeton: Princeton University Press, 265–283.
Bechtel, W. (2006), Discovering Cell Mechanisms: The Creation of Modern Cell Biology, Cambridge: Cambridge University Press.
Beurton, P. J. / Falk, R. / Rheinberger, H.-J. (eds.) (2000), The Concept of the Gene in Development and Evolution, Cambridge: Cambridge University Press.
Block, N. (1986), “Advertisement for a Semantics for Psychology.” In: French, P. A. / Uehling, T. E. Jr. / Wettstein, H. K. (eds.), Studies in the Philosophy of Mind (Midwest Studies in Philosophy, vol. 10), Minneapolis: University of Minnesota Press, 615–678.
Block, N. (1998), “Semantics, Conceptual Role.” In: Craig, E. (ed.), Routledge Encyclopedia of Philosophy, vol. 8, London: Routledge, 652–657.
Boghossian, P. A. (1993), “Inferential Role Semantics and the Analytic/Synthetic Distinction.” In: Philosophical Studies 73, 109–122.
Brandom, R. B. (2000), Articulating Reasons: An Introduction to Inferentialism, Cambridge, MA: Harvard University Press.
Brigandt, I. (2003), “Homology in Comparative, Molecular, and Evolutionary Developmental Biology: The Radiation of a Concept.” In: Journal of Experimental Zoology (Molecular and Developmental Evolution) 299B, 9–17.
Brigandt, I. (2006), A Theory of Conceptual Advance. Explaining Conceptual Change in Evolutionary, Molecular, and Evolutionary Developmental Biology, Dissertation, University of Pittsburgh (http://etd.library.pitt.edu/ETD/available/etd-08032006-145211).
Brigandt, I. (2010a), “Beyond Reduction and Pluralism: Toward an Epistemology of Explanatory Integration in Biology.” In: Erkenntnis 73, 295–311.
Brigandt, I. (2010b), “The Epistemic Goal of a Concept: Accounting for the Rationality of Semantic Change and Variation.” In: Synthese 177, 19–40.
Brigandt, I. (2011a), “Critical Notice of Evidence and Evolution: The Logic Behind the Science by Elliott Sober, Cambridge University Press, 2008.” In: Canadian Journal of Philosophy 41, 159–186.
Brigandt, I. (2011b), “Homology.” In: The Embryo Project Encyclopedia (http://embryo.asu.edu/view/embryo:1249).
Brigandt, I. (2011c), “Natural Kinds and Concepts: A Pragmatist and Methodologically Naturalistic Account.” In: Knowles, J. / Rydenfelt, H. (eds.), Pragmatism, Science and Naturalism, Berlin: Peter Lang Publishing, 171–196.
Brigandt, I. (2011d), “Philosophy of Biology.” In: French, S. / Saatsi, J. (eds.), The Continuum Companion to the Philosophy of Science, London: Continuum Press, 246–267.
Brigandt, I. (2013), “Explanation in Biology: Reduction, Pluralism, and Explanatory Aims.” In: Science & Education 22(1), DOI: 10.1007/s11191-011-9350-7.


Brigandt, I. / Griffiths, P. E. (2007), “The Importance of Homology for Biology and Philosophy.” In: Biology and Philosophy 22, 633–641.
Brigandt, I. / Love, A. C. (2010), “Evolutionary Novelty and the Evo-Devo Synthesis: Field Notes.” In: Evolutionary Biology 37, 93–99.
Burian, R. M. / Richardson, R. C. / Steen, W. J. van der (1996), “Against Generality: Meaning in Genetics and Philosophy.” In: Studies in History and Philosophy of Science 27, 1–29.
Cracraft, J. (2005), “Phylogeny and Evo-Devo: Characters, Homology, and the Historical Analysis of the Evolution of Development.” In: Zoology 108, 345–356.
Craver, C. F. (2007), Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience, Oxford: Oxford University Press.
Darden, L. (2006), Reasoning in Biological Discoveries: Essays on Mechanisms, Interfield Relations, and Anomaly Resolution, Cambridge: Cambridge University Press.
Darden, L. / Maull, N. (1977), “Interfield Theories.” In: Philosophy of Science 44, 43–64.
Feyerabend, P. (1962), “Explanation, Reduction, and Empiricism.” In: Feigl, H. / Maxwell, G. (eds.), Scientific Explanation, Space, and Time (Minnesota Studies in the Philosophy of Science, vol. 3), Minneapolis: University of Minnesota Press, 28–97.
Feyerabend, P. (1970), “Against Method.” In: Radner, M. / Winokur, S. (eds.), Analyses of Theories and Methods of Physics and Psychology (Minnesota Studies in the Philosophy of Science, vol. 4), Minneapolis: University of Minnesota Press, 17–130.
Field, H. (1977), “Logic, Meaning, and Conceptual Role.” In: Journal of Philosophy 74, 379–408.
Fodor, J. A. (2004), “Having Concepts: A Brief Refutation of the Twentieth Century.” In: Mind and Language 19, 29–47.
Griffiths, P. E. / Stotz, K. (2007), “Gene.” In: Hull, D. L. / Ruse, M. (eds.), The Cambridge Companion to the Philosophy of Biology, Cambridge: Cambridge University Press, 85–102.
Hacking, I. (1983), Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
Hall, B. K. (2005), “Consideration of the Neural Crest and Its Skeletal Derivatives in the Context of Novelty/Innovation.” In: Journal of Experimental Zoology (Molecular and Developmental Evolution) 304B, 548–557.
Hall, B. K. / Olson, W. M. (eds.) (2003), Keywords and Concepts in Evolutionary Developmental Biology, Cambridge, MA: Harvard University Press.
Harman, G. (1987), “(Non-Solipsistic) Conceptual Role Semantics.” In: Lepore, E. (ed.), New Directions in Semantics, London: Academic Press, 55–81.
Hull, D. L. (1988), Science as Process: An Evolutionary Account of the Social and Conceptual Development of Science, Chicago: University of Chicago Press.
Kitcher, P. (1982), “Genes.” In: British Journal for the Philosophy of Science 33, 337–359.
Kitcher, P. (1999), “Unification as a Regulative Ideal.” In: Perspectives on Science 7, 337–348.


Kuhn, T. S. (1962), The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
Laurence, S. / Margolis, E. (1999), “Concepts and Cognitive Science.” In: Margolis, E. / Laurence, S. (eds.), Concepts: Core Readings, Cambridge, MA: MIT Press, 3–81.
Longino, H. E. (2002), The Fate of Knowledge, Princeton: Princeton University Press.
Love, A. C. (2003), “Evolutionary Morphology, Innovation, and the Synthesis of Evolutionary and Developmental Biology.” In: Biology and Philosophy 18, 309–345.
Love, A. C. (2005), Explaining Evolutionary Innovation and Novelty: A Historical and Philosophical Study of Biological Concepts, Dissertation, University of Pittsburgh (http://etd.library.pitt.edu/ETD/available/etd-05232005-142007).
Love, A. C. (2006), “Evolutionary Morphology and Evo-Devo: Hierarchy and Novelty.” In: Theory in Biosciences 124, 317–333.
Love, A. C. (2008), “Explaining Evolutionary Innovations and Novelties: Criteria of Explanatory Adequacy and Epistemological Prerequisites.” In: Philosophy of Science 75, 874–886.
Minelli, A. (2003), The Development of Animal Form: Ontogeny, Morphology, and Evolution, Cambridge: Cambridge University Press.
Moczek, A. P. (2008), “On the Origins of Novelty in Development and Evolution.” In: BioEssays 30, 432–447.
Moss, L. (2003), What Genes Can’t Do, Cambridge, MA: MIT Press.
Müller, G. B. / Newman, S. A. (2005), “The Innovation Triad: An EvoDevo Agenda.” In: Journal of Experimental Zoology (Molecular and Developmental Evolution) 304B, 487–503.
Müller, G. B. / Wagner, G. P. (1991), “Novelty in Evolution: Restructuring the Concept.” In: Annual Review of Ecology and Systematics 22, 229–256.
Müller, G. B. / Wagner, G. P. (2003), “Innovation.” In: Hall, B. K. / Olson, W. M. (eds.), Keywords and Concepts in Evolutionary Developmental Biology, Cambridge, MA: Harvard University Press, 218–227.
Nersessian, N. J. (1984), Faraday to Einstein: Constructing Meaning in Scientific Theories, Dordrecht: Kluwer.
Nersessian, N. J. (2008), Creating Scientific Concepts, Cambridge, MA: MIT Press.
Owen, R. (1849), On the Nature of Limbs: A Discourse, edited by R. Amundson, preface by B. K. Hall, introductory essays by R. Amundson / K. Padian / M. P. Winsor / J. Coggon, Chicago: University of Chicago Press, 2007.
Reeck, G. R. et al. (1987), “‘Homology’ in Proteins and Nucleic Acids: A Terminological Muddle and a Way out of It.” In: Cell 50, 667.
Sarkar, S. (1998), Genetics and Reductionism, Cambridge: Cambridge University Press.
Stotz, K. (2006a), “Molecular Epigenesis: Distributed Specificity as a Break in the Central Dogma.” In: History and Philosophy of the Life Sciences 28, 527–544.


Stotz, K. (2006b), “With ‘Genes’ like That, Who Needs an Environment? Postgenomics’s Argument for the ‘Ontogeny of Information’.” In: Philosophy of Science 73, 905–917.
Stotz, K. / Griffiths, P. E. (2004), “Genes: Philosophical Analyses Put to the Test.” In: History and Philosophy of the Life Sciences 26, 5–28.
Wagner, G. P. (2000), “What Is the Promise of Developmental Evolution? Part I: Why Is Developmental Biology Necessary to Explain Evolutionary Innovations?” In: Journal of Experimental Zoology (Molecular and Developmental Evolution) 288, 95–98.
Waters, C. K. (1994), “Genes Made Molecular.” In: Philosophy of Science 61, 163–185.
Waters, C. K. (2000), “Molecules Made Biological.” In: Revue Internationale de Philosophie 4, 539–564.
Weber, M. (2005), Philosophy of Experimental Biology, Cambridge: Cambridge University Press.
Wouters, A. (2003), “Four Notions of Biological Function.” In: Studies in History and Philosophy of Biological and Biomedical Sciences 34, 633–668.

Goals and Fates of Concepts: The Case of Magnetic Poles

Friedrich Steinle

1. Concepts as Tools in Research Practice

It is a truism that concepts do not have a truth value. We do not, and cannot, say of a tree, flower, fish, mammal, chemical bond, metal, magnet, line of force, electron, or the like, that it is true or false, right or wrong. We do say such things of propositions that involve concepts – “this is a tree”, “electrons do have a mass about 2000 times smaller than protons”, or the like. What we say of concepts, by contrast, is rather that they are appropriate or inappropriate, useful or useless. “Vegetable” is a useful concept for greengrocers, but not for botanists, while the concept of “rose-family” is useful for botanists, but not for florists. Turned epistemologically, this means that concepts cannot be proved, or be confirmed or disconfirmed as such; rather, they have to ‘prove themselves’: they have to demonstrate whether they are appropriate or not, useful or not. This brings a central point into focus: what exactly the purposes are in any given case, and what exactly “appropriate” means, may vary widely and is a matter of the specific epistemic situation at hand. One of the fundamental characteristics of concepts is their directedness towards specific goals (as also emphasized by Brigandt in this volume), and for any attempt at understanding the role of concepts in scientific practice and the processes of concept formation and revision, this aspect of concepts is crucial.

We might say that while concepts cannot be judged as true or false, they may well be said to be valid or not (or valid to a greater or lesser degree). This again makes clear their directedness: there may be – and indeed are – a large variety of values that provide the criteria along which a process of validation might be conducted. In scientific practice, this property is reflected in the use of concepts as tools, in their character as doing and enabling specific work for specific tasks.
As the tasks of research may change over time, concepts change too, and often in fundamental ways. Whether or not, in specific historical cases, this directedness and capacity of concepts is also the motivation for introducing them is a different question that has no general answer. But in dealing with the role of scientific concepts in research practice, it seems to me to be fruitful to shift emphasis from questions of reference to questions of use or, in other words, to an analysis of what work concepts actually do (or are supposed to do) in research practice (for similar arguments, see also Feest 2010 or MacLeod and others in this volume). It would be ridiculous to say, of course, that questions of reference are of no interest for scientific practice or for our general understanding of concepts. However, my intention in this paper is to shift focus. My claim is, first, that looking for a concept’s referent is but one of the many possible goals (and often enough not the most important one) that are connected with concepts in research practice. Second, I claim that the development of concepts is connected to the development of goals, and hence an understanding of the role and development of concepts is not possible without taking those goals into account. Third, I wish to highlight that many of the goals attached to concepts are quite independent of questions of reference. To illustrate these points I shall trace the fate of a specific concept through time and try to see how its changes were related to changing uses, i.e., the changing work it was supposed to do. First, however, I shall sketch out a more general outlook.

2. A Variety of Epistemic Goals

Scientific activities can have a variety of different goals: not only is it often the case that practical goals are involved (such as that of producing superconducting materials with a high critical temperature, or genetic variations that produce specific poisons, or new molecules that provide specific pigments), but the range of ‘purely’ epistemic goals is also wide. Across different historical periods and research constellations, the goals of research activities typically undergo a wide variety of changes, even if we focus our attention on the epistemic aspects of science, i.e., on goals that are pertinent to knowledge in a wide sense. ‘Making sense of’ a domain1 can mean a great many different things, such as creating an order within a multitude of objects (with concepts such as fish, mammal, eucaryotes), allowing the quantification of properties (temperature, e.g.2), formulating empirical laws (molar weight, two electricities, ‘magnetic curves’, e.g.3), mathematizing a domain (intension and extension of physical properties, Newton’s light ray), or explaining by reference to hidden processes (Boyle’s corpuscles, Dalton’s atoms).

The specific goals pursued in a research enterprise play out centrally in processes of concept formation. They direct the formation process, and they provide the criteria against which any concept has to be judged appropriate or not. While concepts cannot be said to be true or false, it is quite possible to judge how well they serve the purpose they were intended to fulfil. In this vein, some concepts may even be regarded as ‘failed’, i.e., as useless for the purpose they were intended to serve. A pharmacist will classify flowers and plants by concepts that are different from those used by a botanist, a chemist, a druggist, or an apothecary. A mechanic will conceptualize mechanical action differently from a mathematical physicist. Anthropologists may use different concepts for classifying colours than physiologists, physicists, colour metricians, painters or dyers. Moreover, even within one and the same field, epistemic goals may change over time, and then the concepts might change accordingly, even while the word remains the same. The story of the next sections will provide an example of such a process.

A general remark is in order, concerning the talk of goals and of functions of concepts. What we can often reconstruct in historical cases is the functions that concepts have exerted in specific constellations of knowledge. Whether the aim to fulfil these functions was instrumental in driving the process of concept formation, i.e., whether these functions were indeed the goals pursued by the historical actors, is a different question that can be decided only in rare cases. While I am convinced that processes of concept formation are generally driven by goals, the reconstruction of those goals requires the (rare) existence of specific types of sources that allow insight into the researchers’ motivations. Hence in what follows I shall mainly focus on the functions that concepts exert (or on the work they do or enable) rather than on goals.

1 As a well-chosen book title of E. F. Keller has it: Fox Keller 2003.

2 Chang 2004.
3 Steinle 2006.


3. The Story of Magnetic Poles

The concept I shall highlight is the concept of magnetic poles – a concept that is basic to all considerations of and dealings with magnetism, but that nevertheless has a peculiar status. We learn to use it already in elementary lessons on magnetism at school, and it shapes our everyday handling of magnets. In science, however, its status and use are quite different: it has in no way a fundamental status; in many cases it does not occur at all. In analyzing the force distribution of an electromagnet, the properties of a magnetic lens, or the magnetic properties of the earth, of a star, or of an atom or a nucleus, the concept of poles does not play a role. Researchers and engineers would agree that the concept is useful in some cases of handling, but that, when it comes to quantitative analysis and sharp notions, it has no place, or at any rate no significant place. In particular, it does not reflect any fundamental properties of magnetism.

Such a peculiar status – successful use in some respects, no use in many others, and differing concomitant ideas about the ‘real’ properties of the object in question (magnets in our case) – illustrates my above thesis about concepts as tools. As such, the concept of magnetic poles may well be illustrative of concepts more generally. For this reason it will be instructive to follow its history, tracing the ways in which its status developed and shifted over time in connexion with changing uses and expectations.

3.1 A New Concept and Its Function: Magnetic Poles and the Law of Attraction

The story of magnetic poles starts in the 13th ct., i.e., in a period when knowledge about magnets was already age-old. In Greek and Latin texts, magnets were mentioned for their property of attracting iron. Pliny had described in detail the various kinds and uses of magnets, and in the 13th ct. it had become known in the West that iron needles touched with a magnet had the property of directing themselves in a north-south direction. Mediterranean sailors had started to use those needles as compasses, and a practice of magnetic navigation had developed, of which we, however, have only very few sources.4 It is not by accident that we find the first western treatise on magnetism in that very period.

4 Smith 1992.


In 1269, the French nobleman Pierre de Maricourt (Petrus Peregrinus), who was in contact with Roger Bacon’s circle, authored a text in the form of a letter. It deals with the properties of magnets and with specific instruments, for philosophers and navigators alike. The letter circulated widely in the Middle Ages, was printed in 1558,5 and eventually became very important for the Early Modern period. The letter is highly significant in many respects: it combines natural philosophy with practical use, it includes to a high degree experimental research conducted by the author, and it contains the first description of a magnetic compass suspended on a pivot. The relevant point for my paper, however, is that Peregrine’s text was the very first ever to mention and use the concept of magnetic pole.

The notion of pole was ancient and had its place in astronomy: celestial poles denoted the points where the axis of rotation of the celestial sphere met that sphere. The striking point in Peregrine’s work is that the concept was transferred into a totally different realm: from the poles of the heavens (poli mundi) to the poles of the magnetic stone (poli lapidis). It is perhaps indicative of the lack of attention to conceptual practices that this remarkable shift has not been studied so far in the history of science. Historical analysis is difficult due to the shortage of sources. We cannot even say with certainty whether Peregrine formed the concept himself or took it from some unknown scholar or practitioner before him. We cannot reconstruct the way by which Peregrine, or whoever it was, may have arrived at the new concept. What we can do, however, is to analyze the context of knowledge and practices of his time on the one hand, and the way in which the concept showed up in the text on the other hand.
As to the first point, three contexts should be mentioned: first, the domain of astronomy, well known to Peregrine, in which celestial poles formed an essential part of the system of coordinates and allowed for the differentiation of the northern and southern hemispheres. Second, there was the practical use of magnetized needles in navigation, where those needles were used to indicate the north-south direction. However, and unhappily, we do not know the concepts used by navigators in dealing with that property of the needle. Third, the effects of attraction and repulsion of loadstones, well known since antiquity, played an important role. No regularity had been formulated for them so far, nor were they connected to the property of ‘direction’, i.e., the alignment in north-south direction. Peregrine’s text was the one to bring them together.

As to the way the concept showed up in Peregrine’s text, it is remarkable that he did not give any definition, but just used it as given (which might indicate that he was not its creator) and right away described two procedures for finding the poles. Both of those procedures worked with a loadstone in the form of a globe, and both used a combination of geometrical reasoning (great circles on the sphere) and physical experiment, using small iron needles on the globe’s surface. The fact that Peregrine took it for granted that these two procedures should render the same result indicates that the astronomical analogy played a central role for him.

It is most striking, then, that poles showed up in his text at only one further point, an absolutely key point: Peregrine did not only give a regularity for the north-south alignment of the loadstone, but also, and this was completely new, for its attraction and repulsion: “Hence note, as a regularity (“regula”), that the northern part of the stone attracts the southern part of another stone, and the southern the northern.”6 The northern and southern parts of the stone were defined in terms of the location of the two poles. Hence the concept of magnetic poles and the regularity for magnetic attraction and repulsion were most closely connected: the new concept formed an essential part of the law and enabled its very formulation. While we don’t have firm sources about the motivation to introduce the concept, the situation indicates that the major function of the concept was exactly to enable the formulation of a law. Whether or not this was also the goal that drove the process of forming that concept in the first place has to be left open for lack of sources.

5 Peregrinus 1558, Peregrinus 1904. For a first analysis, cf. Balmer 1956.

3.2 Concepts and Fact: Magnetic Poles as Properties of the Earth

Peregrine’s new concept was very successful, though in quite a specific manner. In the later literature on magnetism, it was not problematized, but simply used, in a most successful way. Both for navigators and for magnetic experimenters (two largely distinct groups), magnetic poles (of the magnetized needle or of the magnetic stone) formed a basic property, to which all actions and properties could be referred. Poles became a firm and unquestioned, if not unquestionable, property of all magnets and magnetized bodies. Such an attitude can well be found in 16th ct. considerations of magnetism (della Porta and Cardano, among others) and is illustrated perfectly in William Gilbert’s monumental De Magnete of 1600, the first monographic treatise on magnetism ever. Gilbert introduced magnetic poles right at the start and as a fact: “Any magnet has its north and south poles.”7

Gilbert expanded the notion very consistently. The central claim of his study, highlighted in its very title, was that the globe of the earth itself was a gigantic magnetic stone. This meant in particular that it had its poles, like any other magnetic stone. The terrestrial magnetic poles were considered as two opposite points on the surface. Hence the possibility, opened up already in Peregrine, of relating magnetic attraction and magnetic direction materialized in a very direct manner: the effect of direction was just the same effect as could be seen in the alignment between two magnets. This transferred immediately to the effects of inclination, discovered half a century before Gilbert, which now found a most straightforward explanation enabled by the concept (or rather the fact) of magnetic poles of the globe of the earth (Figure 1).

Figure 1. Gilbert 1600, lib. V, cap. II.

It is worth noting that Gilbert, while he regarded the earth as a huge magnet, took the inhomogeneous distribution of iron materials as the cause of the observed deviation of the compass needle from the real (i.e., astronomical) north direction, and hence gave a general explanatory scheme for the phenomenon of geographically varying declination of the needle. The function of the concept of pole was to denote a property of the earth that was not directly observable, but that provided a unified explanation of the various geomagnetic properties. The concept even opened up the possibility of conceiving of the magnetic poles of the earth separately from its geographic poles. Even if Gilbert was sure the two coincided, the conceptual distinction was made. With the discovery of the secular variation of declination by Gellibrand in 1635, the point was rendered in sharp light. The aspect of time became essential and the concept of poles became more open: it was no longer clear that the terrestrial magnetic poles did not migrate. Still they were properties of the earth, but the simple picture of the earth being just a huge magnet had to be modified. Rather, the magnetic properties and their changes became somehow detached from the perceivable properties of the globe.

6 Peregrinus 1904, cap. VI.
7 Gilbert 1600, p. 13.

3.3 Poles as Sources of Magnetic Force and Its Distribution

Indeed, in the 17th and 18th cts., increasingly wide measurements of the magnetic declination and its variation over time were undertaken – an enterprise of utmost importance for the expanding oceanic traffic. At the same time, the data became ever more complex and resisted any simple account or generalization. As a result, various schemes were developed that allowed a comprehensive understanding of the measured data. As Jonkers has shown, the development of those 'magnetic hypotheses' may be summarized in four steps, of which Figure 2 gives a modern illustration:8 starting with Gilbert's idea of a stationary magnet in north–south direction, the next step was to suppose a stationary but oblique magnet. In a third step a rotating oblique magnet was assumed, and finally two independently rotating magnetic poles (prominently proposed by Euler). In the early 19th ct., C. Hansteen even widened that approach by adding a second pair of poles.9 Different as these accounts were, they had in common that they were founded on the concept of magnetic poles. These poles, regardless of whether there were two or four of them and whether they were stationary or wandering, were considered as point-like sources and determinants of the magnetic force and its distribution. Such a general approach was underpinned by precision measurements in which Coulomb in the 1780s showed an inverse-square law for the force exerted by a single magnetic pole10 and thereby brought geomagnetic research into close connexion with the general physical concept of central forces, exerted by point-like centres of force. In 1804, Biot and Humboldt used that result to show that, in order to explain the patterns of terrestrial magnetic inclination, the magnetic poles of the earth had to be considered close to each other, deep within the earth, and near its centre.11

8 Jonkers 2003, p. 36.
9 Hansteen 1819.
10 Coulomb 1785.
11 Biot, Humboldt 1804.

Goals and Fates of Concepts: The Case of Magnetic Poles

Figure 2 (taken from Jonkers, A. R. T., Earth's Magnetism in the Age of Sail, p. 36, fig. 2.1. © 2003 The Johns Hopkins University Press. Reprinted with permission of The Johns Hopkins University Press.)

While the function of the concept of poles in those accounts was essentially the


same as in Gilbert, the ever more complex constellations of poles that had to be imagined weakened the analogy to common magnets and hence the straightforward realistic commitment presented in Gilbert. Rather, mathematical precision and the potential to encompass as many data as possible became more prominent. In what can be seen as the culmination and turning point of that development, Gauss gave it a decisive turn. Still pursuing the idea of a mathematical structure that could account for the observed magnetic data of the earth, he not only proposed new systematic campaigns of worldwide precision measurement (his "Magnetischer Verein", soon to be matched by the British "magnetic crusade"),12 but also developed a new mathematical tool of analysis: spherical harmonic functions, a sort of three-dimensional Fourier analysis that had the potential, by carrying the series expansion stepwise further, to match the observed data with unprecedented precision.13 At the same time, the tool was much more abstract than anything earlier and could no longer be easily connected to magnetic poles. For the enterprise of matching geomagnetic data, mathematics took over; the concept of magnetic poles faded into the background and its importance was reduced.
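In modern notation – anachronistic for both authors, and offered here only as a gloss – the two stages can be sketched as follows: Coulomb's result gives the force between two point-like poles of strengths p₁, p₂ at distance r, and Gauss's analysis expands the geomagnetic scalar potential in spherical harmonics, with the fit to the data improving as the series is carried to higher degree N:

```latex
% Coulomb (1785), modern rendering: inverse-square force between point poles
F \propto \frac{p_1\, p_2}{r^2}

% Gauss's spherical-harmonic expansion of the geomagnetic potential
% (a: radius of the earth; P_n^m: associated Legendre functions;
% g_n^m, h_n^m: coefficients fitted to the observations)
V(r,\theta,\varphi) = a \sum_{n=1}^{N} \left(\frac{a}{r}\right)^{n+1}
  \sum_{m=0}^{n} \left(g_n^m \cos m\varphi + h_n^m \sin m\varphi\right)
  P_n^m(\cos\theta)
```

Developing the series "stepwise further" corresponds to truncating at ever higher N; no term in the expansion refers to poles, which is one way of seeing why the concept loses its grip at this point.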

3.4 Magnetic Poles as Reference System

While the development discussed so far mainly concerned geomagnetism and the poles of the globe of the earth, there was another significant development in the early 19th century concerning common magnets. The discovery of an electromagnetic action by Ørsted in 1820 posed a most serious challenge to established physical thinking, since the strange properties of the electromagnetic effect could not be reconciled with the concept of attractive and repulsive central forces. That concept had become fundamental to all approaches to physical phenomena and lay at the core of the successful mathematization program of the late 18th and early 19th centuries. In that situation, the first task of most researchers was to formulate laws of the relative motions of magnetic needle and electric wire. That task was difficult precisely because of the lack of appropriate concepts, in particular the non-applicability of the fundamental notion of central force. Faced with this, researchers went

12 Cawood 1979.
13 Gauß 1839.


back, on the one hand, to elementary notions like compass directions and tried, on the other hand, to develop new concepts, such as Ørsted's "electric conflict," Biot's "circular magnetic force," or Wollaston's "electro-magnetic current."14 Given all this 'conceptual turmoil', it is significant to see that the concept of magnetic pole remained stable and unquestioned, even while magnets were central parts of the new domain: obviously, conceptual revisions were proposed on the relatively new side of galvanic currents rather than on the age-old side of magnetism. However, a closer look makes clear that even while there was no explicit discussion, the concept of magnetic poles was not left totally untouched. Its function indeed underwent a significant switch of focus: from denoting the source of magnetic force to forming a means of representing magnetic geometry. That switch can best be seen in the early work of Faraday, one of those most creative in forming new concepts and at the same time the one least committed to any former tradition. Faraday, after having elaborated a full survey of the young domain, focussed his own research on electromagnetic motions in asymmetric constellations (as opposed to the highly symmetric arrangements that Biot and Ampère had studied).15 Typical experiments included vertical wires, connected to a battery, and horizontal magnetic needles in the manner of boussoles. His attempts to formulate a law for the observed motions became successful when he referred them to the stretched wire on the one hand and to the individual poles of the needle on the other. While he had to assume those poles to be located at some point between the end and the middle of the needle (and not right at its ends, as had traditionally been assumed), he was able to detect a pattern of circular motion, either the wire rotating around one of the poles (# 1 in Figure 3) or a pole around the wire (# 6 and 7 in Figure 3).
From here, he explained straight motions as superpositions of circular motions (# 8 and 10 in Figure 3), and at the same time designed and proudly presented a rotation apparatus (Figure 4), the first electric motor (which made him famous enough to be elected FRS soon after). The conceptual constellation here is highly significant: with his idea of conceiving circular electromagnetic motions as "simple", and straight motions as (derived) effects of superpositions, Faraday deviated dramatically from the tradition of central forces. On the other hand, however, the concept of magnetic poles remained central in providing the essential reference frame for describing magnetic motions. With this second point, Faraday indeed represented a common attitude of all electromagnetic research of that early period. There was no other way to think about magnets at his time, even if one did not regard poles as centres of attractive and repulsive forces. And, after all, the use of poles as reference system in electromagnetic constellations (i. e. as denoting specific points within a magnet) turned out to be successful. It is worth noting that Faraday from early on was essentially interested in establishing laws that comprised as many data as possible rather than committing himself to ontological claims. Given that general attitude, the switch of focus from real centres of force towards means of representation was an easy move for Faraday. But it was exactly that specific use of the pole concept that would give rise to serious challenges. The main actor was again Faraday, who not only expanded the scope of electromagnetic phenomena more than anyone else between 1830 and 1850, but also was always interested in establishing general laws of those phenomena. Things became problematic already ten years later, when Faraday resumed his research into electromagnetism and discovered the (long sought-for) effect of electromagnetic induction.16 The new effect had puzzling features, and Faraday invested much effort in finding a law, even delaying his publication to find time to do so. The essential role of motion in induction effects became clear quickly, and Faraday started again with using poles as reference system for the relative motion of wire and magnet. These attempts failed, however, and he was not able to formulate a law that covered his experimental results.

14 For an overview, cf. Steinle 2005, ch. 4 & 5, or Steinle 2011.
15 For detailed studies of that episode, see Gooding 1990 and Steinle 1995.
In that situation he was ready to question the appropriateness of that reference system and tried out – a highly unconventional move – the system of "magnetic curves" that had long been known. This attempt succeeded, and he could formulate an induction law ("very simple, but rather difficult to express")17 with reference to magnetic curves (Figure 5). Poles, by contrast, only showed up to denote the two different ends of the magnet, i. e., in a much reduced meaning. After a great deal of work and the publication of two papers on the new effect, Faraday, in looking back, sketched a highly significant consideration in his laboratory notebook (Figure 6):

16 For a detailed account, cf. Steinle 1996.
17 ERE, 1st ser., cf. Faraday 1839 – 55, vol. I, 1 – 41.


Figure 3. Faraday 1821b.


Figure 4. Faraday 1821a.

Figure 5 (Faraday 1839 – 55, vol. I, plate I).

Figure 6. Faraday's Diary, 26 March 1832 (Martin 1932, vol. I, 25).

"The mutual relation of electricity, magnetism and motion may be represented by three lines at right angles to each other, …. Then if electricity be determined in one line and motion in another, magnetism will be developed in the third; or if electricity be determined in one line and magnetism in another, motion will occur in the third. Or if magnetism be determined first then motion will produce electricity or electricity motion. Or if motion be the first point determined, magnetism will evolve electricity or electricity magnetism."18

What we see here is the first attempt to grasp and solve the vexed problem of relating three directions – electricity, magnetism, and motion – in full generality. The problem had been there from the beginning of electromagnetism, and partial solutions for special cases had been proposed,19 but only here do we find it formulated explicitly and in full generality, and a general solution proposed. The striking point for my present discussion, of course, is that the direction of magnetism was no longer given as the line between the poles, but completely independently of the poles, as the direction of the magnetic curves. There could be no more significant indication of a conceptual shift: magnetic poles, with their specific function of providing a reference frame, lost this function – and hence their significance – to the new concept of curves. However, these were the considerations of a single researcher in his private notebook. And the very fact that Faraday did not publish this account at the time may indicate that he himself was far from sure about his conceptual shift. Indeed, in his research of the following years, Faraday continued to use the concept of poles time and again, but the fundamental nature of poles had definitely been put in question.

18 Diary, 26 March 1832, No. 403: Martin 1932, vol. I, 25.
19 Steinle 2011.
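Read with modern vector algebra – a formalism Faraday did not have, so the rendering below is a gloss, not his own apparatus – the diary note describes three mutually perpendicular directions of which any two determine the third, as in the cross product that later entered electrodynamics:

```python
# A modern vector-algebra gloss on Faraday's 1832 diary note (anachronistic:
# the Lorentz-force terminology came much later). The three "lines" --
# electricity, magnetism, motion -- are mutually perpendicular, and fixing
# any two determines the third via the cross product.

def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# "if electricity be determined in one line and magnetism in another,
#  motion will occur in the third":
electricity = (1, 0, 0)   # direction of the current
magnetism   = (0, 1, 0)   # direction of the magnetic curves
motion      = cross(electricity, magnetism)

print(motion)                    # (0, 0, 1): the third, perpendicular line
print(dot(motion, electricity))  # 0: perpendicular to the current
print(dot(motion, magnetism))    # 0: perpendicular to the magnetic curves
```

The structural point of the gloss: in this representation the "direction of magnetism" is the direction of the lines (curves) of force, with no reference to poles at all.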


3.5 Putting Magnetic Poles out of Focus

Another two decades later, finally, Faraday arrived at a distinct view. Step by step, he had realized that the reference system of "lines of force" (as the magnetic curves were now renamed) did much better work than the concept of poles, i. e. it enabled a much wider variety of magnetic and electromagnetic effects to be encompassed by regularities and laws. Already in his discovery of the magneto-optic effect in 1845, he had successfully formulated the set of necessary conditions in terms of "lines of (magnetic) force".20 When he (re-)discovered diamagnetism in the same year, he cast it in terms of lines of force. His explicit discussion of whether it could also be treated in terms of magnetic polarity led him to a negative conclusion.21 In the early 1850s, finally, he felt safe enough to develop a general view of the concept of magnetic lines of force,22 introducing, among other things, a sort of conservation law for those lines, and also the notion of various "conductivities" of magnetic lines of force, by which the behaviour of various materials in the magnetic field, plus their mutual motions, could be accounted for (see Figure 7). Not surprisingly, it was in the context of that general approach that he discussed the concept of magnetic polarity explicitly and went through various experimental domains to check what work it did compared with the account in terms of lines of force. His result was clear: while the concept of poles was useful in handling paramagnetic bodies, it was not apt to account for diamagnetic action, and even in the paramagnetic case it could be replaced by the notion of lines of force and their conductivities.23 The traditional notion of magnetic polarity could, in terms of the broader lines-of-force approach, be rephrased as "conduction polarity".24 The concept of magnetic poles lost its last basic function and was reduced to pragmatic utility.
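In retrospective modern terms – again a gloss, not Faraday's own formulation – his "conservation law" for lines of force and their "conductivities" correspond roughly to two field-theoretic statements:

```latex
% Lines of magnetic force neither begin nor end: there are no
% isolated point sources ('poles') of the field:
\nabla \cdot \mathbf{B} = 0

% The 'conductivity' of a material for lines of force corresponds
% roughly to its permeability \mu (paramagnetic: \mu > \mu_0,
% diamagnetic: \mu < \mu_0):
\mathbf{B} = \mu \mathbf{H}
```

On this reading, both paramagnetic and diamagnetic behaviour fall under one parameter, which is exactly the unification the pole concept could not deliver.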
What Faraday summarized here were the conceptual foundations of field theory. When J. C. Maxwell, from the late 1850s on, took on the aim of giving that approach a mathematical form that was accessible to his contemporaries, he would adopt that conceptual constellation, and

20 ERE, 19th ser., 1845.
21 ERE, 20th ser.; see also his debate with Weber in this matter, cf. Darrigol 2000, ch. 3.4.
22 ERE, 26th and 28th ser., plus some other papers in the Philosophical Magazine.
23 ERE, 28th ser., 3154 ff.
24 ERE, 26th ser., § 2818 – 2827.


Figure 7 (Faraday ERE, 26th series: Faraday 1839 – 55, vol. III, p. 204 and 212)

also the specific status of magnetic polarity: the concept was of practical use in some specific domains, but did not work in others, and could in principle be replaced by the more general concepts of lines of force and susceptibilities (as conductivities were renamed). In no way was it fundamental, let alone did it express a basic property of magnetism. This also held for the refined forms of Maxwell's account by Lodge and Heaviside, and in the complex reception path of field theory,25 it was always clear that accepting field theory meant at the same time giving up the concept of magnetic poles as grasping anything fundamental about magnets. Despite all the changes that field theory has undergone since then, this point holds up to this day, and the situation that I sketched at the beginning of my paper is essentially the one that Faraday framed. The main difference in our modern account is perhaps that the status of lines of force has also been substantially weakened. In times of immense calculation power, that imagery is no longer needed for most practical purposes, and its place has become mainly restricted to qualitative reasoning and introductory courses.

25 Darrigol 2000, ch. 5.


4. Towards a Dynamic History of Concepts

To summarize the story rather crudely, one might highlight several strands in the history of the concept of magnetic pole, partly in sequence, partly overlapping. The concept's first function, and perhaps the motivation for its introduction, was to supply a means for formulating the law of magnetic attraction and repulsion. It was quickly taken as presenting fundamental properties of any magnet, and in particular extended (by Gilbert) to the globe of the earth, considered as a gigantic magnet. Here its function was quickly expanded to being a central instrument for comprehensively understanding the various features of terrestrial magnetism, first qualitatively, then in ever more detail with mathematical rigor. Magnetic poles, successively detached from geographic poles and from geographic properties, were taken as sources of magnetic force, and hence the concept served as a means of deducing mathematically the distribution of terrestrial magnetism. That use, however, with its tendency to represent the growing body of data with ever higher comprehensiveness and precision, led to increasing abstraction of the tool until Gauss, finally, did away with the concept of terrestrial poles altogether, replacing it with a rather abstract mathematical account. In the same period, electromagnetism opened a new field of magnetism in which the concept of magnetic pole had a peculiar status from the beginning. While the connotation of poles with attractive/repulsive forces was problematic and hence left aside, the concept served to provide a firm reference system to which all electromagnetic motions could be referred. This use was at first successful, but step by step failed to work with respect to new phenomena within the rapidly expanding experimental domain.
For Faraday, the concept was seriously put in doubt when he realized that his attempts to formulate an induction law were successful only at the point when he gave up poles as reference system in favor of magnetic curves – the only function left for poles was to form the anchor points of magnetic curves. In the 1850s, against the background of more such failures, Faraday drew an even sharper consequence and proposed to give up the concept of magnetic poles altogether. It was still of high practical use as a sort of shorthand device in dealing with attraction and repulsion of common magnets, but it could no longer be regarded as presenting fundamental properties of magnetism. With the slow acceptance of field theory, this stance became commonly accepted.


The story gives rise to some general observations. First, it illustrates to what a high degree concepts are shaped by the goals and expectations connected to them. It also illustrates that they may, by their successes and failures, affect, redirect, and shift those goals in turn, sometimes to the degree that the concepts in question become marginalized or even obsolete. A dynamic history of concepts cannot but include those goals. Second, and more particularly for the concept of magnetic poles, the study makes visible that the concept did specific work (or had specific functions), and that these functions were the important aspect for research practice. The concept allowed the formulation of experimental results and was essential in designing new experiments. At the same time, researchers were convinced over long periods that the concept grasped fundamental properties, even the essence of magnetism. Those essentialist commitments faded gradually away, and the tool-like character of the concept came more to the forefront. It has not yet been analyzed in detail, but it would be an interesting study to see exactly what effect the ontological commitment or non-commitment had on research practice. Finally, we see an important point concerning "discovery" and "facts". Peregrine is sometimes presented as having "discovered" the magnetic poles. The picture conveyed by such talk – the picture of facts waiting to be discovered – is often used in historiography in the context of the formation of empirical concepts. But as the story shows, it is misleading and blocks an appropriate understanding of those processes. On a more sensitive analysis, we might say that what Peregrine discovered is that the many effects of magnetic attraction and repulsion could be well formulated in a law when using the new concept of magnetic poles.
In the same way, Gauss discovered that for a mathematical structure supposed to fit geomagnetic data, poles were not the best concept (despite the fact that everyone before him had used them), and Faraday discovered that the best reference frame for electromagnetic phenomena was not poles, but lines of force. In the historical development, however (and even in common historical talk up to this day), such reflectivity often disappeared. The new concepts were quickly taken for granted and presented in a factual mode, the process of their active formation forgotten, and the world newly ordered with them. Processes like these indicate how deeply the formation of


empirical concepts is connected to what Ludwik Fleck had termed in 1935 the "genesis and development of scientific facts".26 While Fleck has often been read for his emphasis on social aspects, our view of his work can be much enriched from the standpoint of concept formation. And one of the most intriguing promises of studying the dynamics of concepts in the context of their specific goals is to obtain an enriched understanding of the knowledge claims of the empirical sciences, and in particular of scientific facticity.

Acknowledgements I want to thank Uljana Feest for intriguing comments and critique of former versions of this paper, and all the participants of the two Berlin meetings for stimulating discussion.

Reference List

Balmer, H. (1956), Beiträge zur Geschichte der Erkenntnis des Erdmagnetismus, Aarau: Sauerländer & Co (Veröffentlichungen der Schweizerischen Gesellschaft für Geschichte der Medizin und der Naturwissenschaften).
Biot, J.-B. / Humboldt, A. v. (1804), "Sur les variations du magnétisme terrestre à différentes latitudes." In: Journal de physique, de chimie, et de l'histoire naturelle 59, 429 – 450.
Cawood, J. (1979), "The Magnetic Crusade. Science and Politics in Early Victorian Britain." In: Isis 70, 493 – 518.
Chang, Hasok (2004), Inventing Temperature. Measurement and Scientific Progress, New York: Oxford University Press.
Coulomb, C.-A. (1785), "Premier mémoire sur l'électricité et le magnétisme." In: Histoire et Mémoires de l'Académie Royale des Sciences, 569 – 577.
Darrigol, O. (2000), Electrodynamics from Ampère to Einstein, Oxford: Oxford University Press (Studies in History and Philosophy of Science).
Faraday, M. (1821a), "New electro-magnetic apparatus." In: Quarterly Journal of Science 12, 186 – 187.
Faraday, M. (1821b), "On some new electro-magnetical motions, and on the theory of magnetism." In: Quarterly Journal of Science 12, 74 – 96.
Faraday, M. (1839/1844/1855), Experimental Researches in Electricity, Vol. 1 – 3, London: Taylor.

26 Fleck 1979.


Feest, U. (2010), "Concepts as Tools in the Experimental Generation of Knowledge in Cognitive Neuropsychology." In: Spontaneous Generations: A Journal for the History and Philosophy of Science 4(1), 173 – 190.
Fleck, L. (1979), Genesis and Development of a Scientific Fact (With assistance of Fred Bradley, Thaddeus J. Trenn), Chicago: University of Chicago Press.
Fox Keller, E. (2003), Making Sense of Life, Cambridge MA: Harvard University Press.
Gauß, C. F. (1839), Allgemeine Theorie des Erdmagnetismus.
Gilbert, W. (1600), De magnete, magneticisque corporibus, et de magno magnete tellure; physiologia noua, plurimis & argumentis, & experimentis demonstrata, London: Petrus Short.
Gooding, D. C. (1990), Experiment and the making of meaning. Human agency in scientific observation and experiment, Dordrecht: Kluwer.
Hansteen, Ch. (1819), Untersuchungen über den Magnetismus der Erde.
Jonkers, A. R. T. (2003), Earth's magnetism in the age of sail, Baltimore and London: Johns Hopkins University Press.
Martin, T. (ed.) (1932), Faraday's Diary. Being the various philosophical notes of experimental investigation made by Michael Faraday, DCL, FRS, during the years 1820 – 1862 and bequeathed by him to the Royal Institution of Great Britain, 7 vols. + index, London: G. Bell & Sons.
Peregrinus, P. (1558), Petri Peregrini Maricurtensis de Magnete seu rota perpetui motu, libellus, per Achillem P. Gasserum L: nunc primum promulgatus. Augsburgi in Suevis, Ulhart d.Ä.
Peregrinus, P. (1904), The letter of Petrus Peregrinus on the magnet, A.D. 1269, transl. by Brother Arnold, M.Sc., with introductory notice by Brother Potamian, D.Sc., New York: McGraw.
Smith, J. A. (1992), "Precursors to Peregrinus. The early History of Magnetism and the Mariner's Compass in Europe." In: Journal of Medieval History 18, 21 – 74.
Steinle, F. (1995), "Looking for a 'simple case'. Faraday and electromagnetic rotation." In: Hist. Sci. 33, 179 – 202.
Steinle, F. (1996), "Work, Finish, Publish? The formation of the second series of Faraday's 'Experimental Researches in Electricity'." In: Physis 33, 141 – 220.
Steinle, F. (2005), Explorative Experimente. Ampère, Faraday und die Ursprünge der Elektrodynamik, Stuttgart: Steiner (Boethius, 50).
Steinle, F. (2006), "Concept formation and the limits of justification. 'Discovering' the two electricities." In: Schickore, J. / Steinle, F. (eds.), Revisiting discovery and justification. Historical and philosophical perspectives on the context distinction, Dordrecht: Springer (Archimedes, 14), 183 – 195.
Steinle, F. (2011), "Die Entstehung der Feldtheorie: ein ungewöhnlicher Fall der Wechselwirkung von Physik und Mathematik?" In: Schlote, K. H. / Schneider, M. (eds.), Mathematics meets physics. A contribution to their interaction in the 19th and the first half of the 20th century, Frankfurt/Main: Verlag Harri Deutsch GmbH (Studien zur Entwicklung von Mathematik und Physik in ihren Wechselwirkungen), 441 – 485.

Mathematical Concepts and Investigative Practice

Dirk Schlimm

1. Introduction

One way in which mathematics differs prima facie from empirical science is in its subject matter. Mathematicians do not seem to investigate the empirical world, but rather an abstract, conceptual realm. Consequently, mathematical concepts are not only tools for investigative practice, but are also constitutive of the subject matter itself. In the following section, two notions of mathematical concepts are presented and discussed. According to the first, concepts are definite and fixed; in contrast, according to the second notion they are open and subject to modifications. I will refer to these two notions as 'Fregean' and 'Lakatosian', because they are based on ideas that have been emphasized by Frege and Lakatos respectively, and I shall use the writings of these two philosophers to illustrate and motivate them. Nevertheless, it should be kept in mind that these two notions are to a certain extent idealizations and thus are not necessarily those held by Frege and Lakatos themselves. After presenting some historical developments in geometry, arithmetic, and algebra (sect. 3), I discuss how such conceptual changes can be accounted for by employing Fregean and Lakatosian concepts. I argue that both notions capture important aspects of mathematical reasoning and, thus, that an adequate account of investigative practices in mathematics must involve features from each. This insight is not completely novel, as it has already been formulated by the 19th-century mathematician Moritz Pasch and others; but philosophers of mathematics, by focusing exclusively on particular mathematical activities (namely proving theorems and creating new mathematics), have been led to consider notions of concepts that are too narrow to account for all aspects of mathematical practice.
Finally, I indicate that these considerations are not exclusive to mathematics, but that they also apply to scientific reasoning in general.


2. Two Notions of Concepts in Philosophy of Mathematics

In this paper I am going to contrast what I regard as the two main notions of concepts that have been put forward and discussed in philosophy of mathematics. Several authors have contributed to these discussions, but the authors who come closest to a clear formulation of these views are Frege and Lakatos, to whom I turn next.

2.1 Fregean Concepts

Gottlob Frege's tremendous influence on 20th-century analytic philosophy is well known. For shaping the mainstream of 20th-century philosophy of mathematics, his anti-psychologist stance and his views on concepts have been of particular importance. These views are expressed clearly in the introduction to his Grundlagen der Arithmetik (1884):

No, sensations are absolutely no concern of arithmetic. No more are mental pictures, formed from the amalgamated traces of earlier sense-impressions. All these phases of consciousness are characteristically fluctuating and indefinite, in strong contrast to the definiteness and fixity of the concepts and objects of mathematics. (Frege 1884, V–VI; quoted from Frege 1980a, V f)

Frege's conception of mathematics as being completely independent of human sensations and mental images, and his explicit demand to always "separate sharply the psychological from the logical, the subjective from the objective" (Frege 1884, X), kept psychological considerations strictly outside the realm of philosophy of mathematics. Furthermore, Frege promoted a view of mathematics according to which its subject matter is regarded as static, representable by fixed and definite concepts. Frege's understanding of a concept as being fixed can be interpreted to mean that its extension does not change over time. If an object a falls under a concept P at some point in time, then it always falls under it, eternally.1 That a concept is definite means that it is determined for every object, whether it falls under the concept or not.2 Concepts that are not definite are also referred to as vague (Black 1937) or fuzzy concepts (Zadeh 1965). Such fixity and definiteness of concepts are necessary for a formal account of valid inferences. If the extension of a concept fluctuated over time, the truth value of universal statements could change during the course of a single proof, thus not allowing for sound rules of inference. Consider the situation where we take as premises 'Pluto orbits around the sun' and 'If Pluto orbits around the sun, then Pluto is a planet'. This licenses us to infer by modus ponens that 'Pluto is a planet'. However, if the concept of planet changed its extension in the time between the formulation of the premises and the statement of the conclusion in such a way that Pluto was no longer a planet, then the premises were true, but the conclusion false, i. e., the inference would be invalid! Similarly, the syllogistic inference from 'All swans are black' and 'All black things are pretty' to 'All swans are pretty' would be invalid if the concept of swan changed its meaning between the formulation of the premises and the statement of the conclusion in such a way that it was extended to include an ugly white swan. A very similar argument was adduced by Poincaré in response to the question regarding the necessary conditions for the application of the rules of logic.3 The conclusion he reached was that "the classification which is adopted be immutable" (Poincaré 1909, 461; quoted from Poincaré 1963, 45). What he means by "immutable" is that the extensions of the concepts remain fixed, as he illustrates with the following example. Since a brigade of soldiers is constituted by regiments, it follows logically that two soldiers who are in the same regiment are also in the same brigade.

1 The extension of certain concepts, like 'being tired', is certainly relative to a specific time, but Frege is restricting his attention to mathematical concepts, which are not intrinsically tied to space and time; Quine formulates this restriction by referring to eternal sentences, 'whose truth value stays fixed through time and from speaker to speaker' (Quine 1960, 192).
every object, whether it falls under the concept or not.2 Concepts that are not definite are also referred to as vague (Black 1937) or fuzzy concepts (Zadeh 1965). Such fixity and definiteness of concepts are necessary for a formal account of valid inferences. If the extension of a concept fluctuated over time, the truth value of universal statements could change during the course of a single proof, thus not allowing for sound rules of inference. Consider the situation where we take as premises ‘Pluto orbits around the sun’ and ‘If Pluto orbits around the sun, then Pluto is a planet’. This licenses us to infer by modus ponens that ‘Pluto is a planet’. However, if the concept of planet changed its extension in the time between the formulation of the premises and the statement of the conclusion in such a way that Pluto was no longer a planet, then the assumptions were true, but the conclusion false, i. e., the inference would be invalid! Similarly, the syllogistic inference from ‘All swans are black’ and ‘All black things are pretty’ to ‘All swans are pretty’ would be invalid if the concept of swan changed its meaning between the formulation of the premises and the statement of the conclusion in such a way that it was extended to include an ugly white swan. A very similar argument was adduced by Poincaré in response to the question regarding the necessary conditions for the application of the rules of logic.3 The conclusion he reached was that “the classification which is adopted be immutable” (Poincaré 1909, 461; quoted from Poincaré 1963, 45). What he means by “immutable” is that the extensions of the concepts remain fixed, as he illustrates with the following example. Since a brigade of soldiers is constituted by regiments, it follows logically that two soldiers who are in the same regiment are also in the same brigade. 
But this requires the concepts to remain fixed during the course of an argument:

We learn that two soldiers are members of the same regiment, and we want to conclude that they are members of the same brigade; we have the right to do this provided that during the time spent carrying on our reasoning one of the two men has not been transferred from one regiment to another. (Poincaré 1963, 45; italics added)

2 Formally, the notions under consideration are: Ps(a) ↔ ∀t Pt(a), i. e., the extension of a concept does not change over time, and ∀t∀x(Pt(x) ∨ ¬Pt(x)), i. e., at each point in time, an object either falls under P or it does not (for s and t ranging over points in time, an object a, and a concept P, with Ps(a) meaning that the object a falls under concept P at time s).
3 I am grateful to Michael Hallett for pointing out this passage to me. For a more detailed discussion of Poincaré’s position, see Hallett 2011, 194 – 200.
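The failure of modus ponens under a shifting extension can be made concrete in a few lines of code. The sketch below is my own illustration, not from the text: the concept ‘planet’ is modeled as a time-indexed extension, with Pluto falling under it in 2005 but not in 2007 (after the 2006 IAU redefinition), so that premises evaluated before the shift no longer support the conclusion stated after it.

```python
# Time-indexed extensions of the concept 'planet'.  Pluto falls
# under the concept at t=2005 but not at t=2007.
planets = {
    2005: {"Mercury", "Venus", "Earth", "Mars", "Jupiter",
           "Saturn", "Uranus", "Neptune", "Pluto"},
    2007: {"Mercury", "Venus", "Earth", "Mars", "Jupiter",
           "Saturn", "Uranus", "Neptune"},  # after the IAU redefinition
}

def is_planet(x, t):
    """Does x fall under the concept 'planet' at time t?"""
    return x in planets[t]

# Premises, both evaluated at t=2005:
p = True                                           # 'Pluto orbits around the sun'
p_implies_q = (not p) or is_planet("Pluto", 2005)  # 'If Pluto orbits..., Pluto is a planet'

# Conclusion, stated at t=2007, after the extension has shifted:
q_later = is_planet("Pluto", 2007)

print(p and p_implies_q)  # True: both premises held when formulated
print(q_later)            # False: the 'valid' inference produced a falsehood
```

This is exactly the situation the fixity requirement rules out: with a Fregean concept the extension may not depend on t at all.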


Dirk Schlimm

The definiteness of concepts guarantees the law of the excluded middle, a logical principle formulated by Aristotle and accepted by Frege. Moreover, as Black has argued, concepts that are not definite lead to logical contradictions (Black 1937, 436). Thus, without keeping the extension of concepts fixed and without concepts being definite, no general relationships between them could reliably be expressed once and for all. In Frege’s words: [A]s regards concepts we have a requirement of sharp delimitation; if this were not satisfied it would be impossible to set forth logical laws about them. (Frege 1891, 20; quoted from Beaney 1997, 141; see also Frege 1884, § 1)

For Frege, the development of a “logically perfect system” is the ultimate goal of mathematics, as he explained in a letter to Hilbert (Frege 1980b, 44). To be able to express conceptual relations unambiguously Frege invented the language of predicate logic or, as he called it, the “concept script” (Begriffsschrift). This new language was intended to overcome the ambiguities and limitations of natural language and to allow for rigorous, gap-free arguments that could be carried out without any recourse to intuition. The determinateness (encompassing both fixity and definiteness) of concepts was a requirement, indeed the only requirement, for these logical investigations: All that can be demanded of a concept from the point of view of logic and with an eye to rigour of proof is only that the limits to its application should be sharp, that we should be able to decide definitely about every object whether it falls under that concept or not. (Frege 1884, § 74; quoted from Frege 1980a, 87)

Here Frege introduces a further subtlety regarding the definiteness requirement for concepts. Not only must it be determined which objects fall under a concept and which do not, but we must be able to ascertain which one of these is the case. In the second volume of Grundgesetze der Arithmetik (1903) Frege formulates these requirements somewhat differently as a demand for definitions, and calls it the “Principle of completeness,” the first of his “Principles of definition”:

A definition of a concept (of a possible predicate) must be complete; it must unambiguously determine, as regards any object, whether or not it falls under the concept (whether or not the predicate is truly ascribable to it). Thus there must not be any object as regards which the definition leaves in doubt whether it falls under the concept; though for us human beings, with our defective knowledge, the question may not always be decidable. We may express this metaphorically as follows: the concept must have a sharp boundary. (Frege 1903, § 56; quoted from Beaney 1997, 259; see also 298 for a similar formulation)

Now Frege clearly distinguishes between properties of mathematical concepts and our abilities to check whether these properties hold or not. Definiteness is a requirement of concepts, not of our access to them. This understanding of the nature of concepts as being independent of human cognition and epistemological concerns is still widespread in contemporary philosophy of mathematics. The contexts of Frege’s considerations were the developments in late 19th century mathematics, in particular the emergence of non-Euclidean geometries and Weierstrass’ definition of continuous but nowhere differentiable functions, which defied all expectations and intuitions. As a result, mathematicians felt the need to provide some kind of secure foundation for mathematics. This led on the one hand to the axiomatizations of arithmetic by Dedekind and Peano, and on the other hand to Cantor’s, Weierstrass’, and Dedekind’s work on rigorizing the treatment of the continuum. Michael Friedman interprets Frege’s general project regarding concepts as being analogous to that regarding mathematics: Just as Cantor, Weierstrass, Dedekind, and Peano had finally uncovered the ‘true’ logical forms of the concepts of infinity and continuity, Frege and his followers could now embark on an analogous project of uncovering the ‘true’ logical forms of all the concepts found in the mathematical-physical sciences—including, especially, the radically new science of space, time, motion, and matter emerging in the context of Einstein’s general theory of relativity. (Friedman 2010, 539)

The followers of Frege are the logical positivist philosophers of the 20th century, who set out to give logical, or rational, reconstructions of concepts, i. e., to give “an explication of the conceptual contents of scientific concepts ex post facto, as it were, which shows, among other things, how truly rigorous and objective (that is, intersubjective) conceptualization is possible in principle” (Friedman 2010, 539). This program, in particular in the hands of Rudolf Carnap, played a paradigmatic role in 20th century philosophy of science and mathematics.


2.2 Lakatosian Concepts

A radical break with the static conception, or reconstruction, of mathematics presented in the previous section was put forward by Imre Lakatos in Proofs and Refutations (1976a). Lakatos positions himself squarely in opposition to the logical positivists’ aim—inherited from Frege—of presenting reconstructions of science and mathematics in formal languages. Instead, he advocates a dynamic view of mathematics based on proof-generated concepts. Lakatos considers this in opposition to the “deductivist approach,” which “hides the struggle, hides the adventure” of mathematics (Lakatos 1976a, 142); for him such a conception of mathematics not only covers up the creative work that goes into mathematical investigations, but is also completely inadequate to account for those investigations. After a reference to Carnap’s Logische Syntax der Sprache (1934), Lakatos writes: “Science teaches us not to respect any given conceptual-linguistic framework lest it should turn into a conceptual prison” (Lakatos 1976a, 93, fn. 1).

In support of an alternative view of scientific progress, Lakatos presents—skillfully put into the setting of a classroom discussion—the history of Euler’s formula for regular polyhedra, according to which the relation between the number of vertices, edges, and faces is V − E + F = 2. After starting with the naive conjecture that “All polyhedra are Eulerian” (i. e., that they satisfy the above formula), various criticisms and counterexamples lead to a series of reformulations or refinements of the conjecture and the involved concepts. Local counterexamples attack particular steps in the argument, while global counterexamples, or ‘monsters’ in Lakatos’ terminology, are directed against the conclusion. According to Lakatos’ account, in reaction to such counterexamples the statement “All polyhedra are Eulerian” was initially retained, but the concept of polyhedron acquired a different meaning to exclude the counterexamples.
Faced with further objections and challenges the statement was changed to “All convex polyhedra are Eulerian” and finally to “All simple polyhedra with simply-connected faces are Eulerian” (Lakatos 1976a, 41). Of particular importance for the present discussion is the development of the concept of polyhedron, which involved the stretching and contracting of the concept, as well as exploring the relations between what was initially intended by the concept and later definitions that were put forward as responses to criticisms. Understood initially in an informal way, the characteristics of the concept of polyhedron were made more and more explicit in reaction to the various proofs and their criticisms. However, Lakatos notes that there is a high price to be paid for this: “[P]roof-analysis, when increasing certainty, decreases content. […] Increasing rigour is applied to a decreasing number of polyhedra” (Lakatos 1976a, 57; italics in original). Thus, Lakatos points out a trade-off between conceptual content and rigor, and he is vehemently opposed to a view of mathematics that emphasizes the latter at the cost of the former.

While Lakatos seems to acknowledge in places the importance of sharp mathematical concepts for the purpose of allowing for rigorous deductions, he does not consider this to be of much significance, since he objects to the deductive, ‘Euclidean’ view of theories. In the following exchange Lakatos even goes so far as to doubt that such sharply defined concepts exist at all:

Delta: Rationality, after all, depends on inelastic, exact, concepts!
Kappa: But there are no such concepts! Why not accept that our ability to specify what we mean is nil? If you want mathematics to be meaningful, you must resign of certainty. (Lakatos 1976a, 102; italics in original; Delta and Kappa are students in the imaginary classroom discussion)
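The arithmetic behind Euler’s conjecture is easy to check for particular solids. The sketch below is my own illustration, not from the text; the vertex, edge, and face counts are the standard ones, and the ‘picture frame’ is the classic global counterexample (a Lakatosian monster: a genuine polyhedron that is not simply connected and fails the formula).

```python
# Check Euler's formula V - E + F = 2 against (V, E, F) counts.

def euler_characteristic(v, e, f):
    """Return V - E + F for a polyhedron with v vertices, e edges, f faces."""
    return v - e + f

solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "picture frame": (16, 32, 16),  # torus-like solid, not simply connected
}

for name, (v, e, f) in solids.items():
    chi = euler_characteristic(v, e, f)
    status = "Eulerian" if chi == 2 else "not Eulerian"
    print(f"{name}: V - E + F = {chi} ({status})")
```

The first two solids yield 2; the picture frame yields 0, which is why the conjecture had to be restricted to simple polyhedra with simply-connected faces.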

That Kappa here indeed presents a position that is similar to Lakatos’ own becomes clear in a footnote, in which he objects to the view that “‘clarification’ or ‘explication’ of concepts [is] a preliminary to any scientific discussion” (Lakatos 1976a, 90; italics in original). Both components of Frege’s determinateness requirement for concepts, namely fixity and definiteness, are thus explicitly rejected by Lakatos, who stresses the importance of the elasticity and inexactness of concepts. In What does a mathematical proof prove? (1978b) Lakatos elaborates on the relation between proofs and the development of mathematics. Here he distinguishes between three kinds of mathematical proof: Pre-formal proofs, which are open-ended and not just formal proofs with gaps; formal proofs, which do add rigor to pre-formal proofs, but not much else; and finally, post-formal proofs, which are meta-mathematical results about formalized theories, like the duality principle in projective geometry or the undecidability proofs. Again, Lakatos’ arguments are directed against an idealized picture that considers formal theories to be the essence of mathematics. His main claim is that such an approach is unnecessarily restrictive and unable to account for mathematical progress.

While in an informal theory there really are unlimited possibilities for introducing more and more terms, more and more hitherto hidden axioms, more and more hitherto hidden rules in the form of new so-called ‘obvious’ insights, in a formalized theory imagination is tied down to a poor recursive set of axioms and some scanty rules. (Lakatos 1978b, 160)4

Instead of focusing on proofs and the accumulation of timeless truths, Lakatos considers mathematics as driven by the search for solutions to problems and thus as being in a state of continual growth and permanent revolution. As such, mathematical practice is much closer to scientific practice than has commonly been held, and its results are fallible ‘quasi-empirical’ theories. These theories can be falsified by ‘heuristic falsifiers’, i. e., informal views about the subject matter in question: “The crucial role of heuristic refutations is to shift problems to more important ones, to stimulate the development of theoretical frameworks with more content” (Lakatos 1976b, 218). For Lakatos, a view of mathematics that is based solely on Fregean concepts cannot adequately account for these developments.

3. Mathematical Investigations

In the history of mathematics many concepts were introduced informally and went through considerable changes, redefinitions, splittings, etc. However, other concepts were introduced by formal, determinate definitions from the start. As a consequence, sweeping generalizations regarding the use of concepts in mathematics are difficult to make and, in any case, will most likely fail to account for some aspect of mathematical practice. To get an impression of the diversity of conceptual developments in mathematics, I shall now briefly present some historical episodes. These will be useful in the later discussion of the merits of the different notions of mathematical concepts.

4 Similar arguments against the use of formal and axiomatic theories were put forward by the mathematician Felix Klein (1926, 335 f).

3.1 Geometry

Geometric points and lines are mathematical concepts that have been around for well over two thousand years. In Euclid’s famous Elements they are introduced as follows:

1. A point is that which has no part.
2. A line is a breadthless length. (Heath 1956, 153)

Their relations are specified by postulates (and common notions), the first one of which reads “To draw a straight line from any point to any point” (Heath 1956, 154). While Euclid and his fellow Greek geometers were able to derive astonishing consequences from these initial assumptions (e. g., the famous Pythagorean Theorem), the nature of the concepts involved and of our epistemic access to them was a matter of much philosophical debate (Mueller 1981). Nonetheless, geometry thrived and bold investigations regarding the dependence relationships among Euclid’s axioms eventually led to the emergence of non-Euclidean geometries, i. e., theories in which one of Euclid’s postulates, the Parallel Postulate, is replaced by an incompatible one.5

While the origin of the concepts of Euclidean geometry can be traced back to empirical observations, the concepts of non-Euclidean geometries were based from the beginning on axiomatic characterizations. In fact, visualizations or intuitive models of these geometries were developed only decades after the theories had been developed to a considerable degree. In the wake of these achievements, mathematicians were suddenly faced with the problem of distinguishing between different kinds of geometric points and lines (Euclidean and non-Euclidean). But that wasn’t the end of it. In the 19th century another kind of geometry was also studied in great detail, namely projective geometry, in which a curious relationship (called the ‘principle of duality’) holds between points and lines: In any theorem of projective geometry, if the terms ‘point’ and ‘line’ are interchanged and the relations are changed accordingly, the result is again a theorem of projective geometry. Mathematicians were perplexed about this behavior, since, prima facie, points and lines do not seem to be interchangeable. These developments made it even more difficult to understand the basic concepts of points and lines.
At first these concepts had seemed to be very tightly connected to drawn figures, possibly obtained from such figures through a process of abstraction or idealization. But this relation between geometrical concepts and visual figures became thinner and thinner with the emergence of new and mutually incompatible notions of points and lines. In Hilbert’s celebrated Grundlagen der Geometrie (1899), the referents of the terms points and lines were finally allowed to be any objects of thought that satisfied the axioms. In other words, only the relational structure that was characterized implicitly by the axioms was taken to matter for geometry; the concepts of points and lines vanished and all that remained were roles in a system of concepts that define a geometric space. This is a structuralist conception of mathematics (Reck/Price 2000). As a philosophical position it is currently held in high esteem, although it is certainly not accepted by everybody—in particular, if connected to a realist view regarding the ontology of mathematical structures; but as a methodological account of contemporary mathematics it seems to be largely uncontroversial. In any case, our focus here is not the metaphysics of mathematics, but the role of concepts in mathematical investigations. It is also worth pointing out that this development of geometry was by no means a smooth one: It took many decades of heated debates before non-Euclidean geometries were accepted as genuine mathematical theories, and Hilbert’s account was heavily criticized—most famously by Frege (Hallett 2010).

5 See Bonola 1955 and Gray 2007 for an account of these developments.

3.2 Arithmetic and Algebra

With regard to the basic concepts of arithmetic, like number, zero, integer, etc., a similar story to that of geometric concepts can be told. Again, in Euclid’s Elements (Book VII) we find the definition of ‘number’ as “a multitude composed of units” and “[a] unit is that by virtue of which each of the things that exist is called one” (Heath 1956, 277). Definition 11 introduces prime numbers as “that which is measured by a unit alone” (Heath 1956, 278). So, according to these definitions the concept of number applies only to 2, 3, 4, …, since the unit is not itself a number. The origins of the concept of zero are difficult to trace back, because according to our common conception of it, it is the symbol for an empty place-value in the decimal system (e. g., in ‘101’) as well as the symbol denoting the absence of a magnitude (e. g., in ‘the number of round squares is zero’). These notions (and others that are now associated with zero) are, however, independent from each other and, historically, were not always all represented by the same symbol (Schlimm/Skosnik 2011). After the incorporation of zero and one into the number system, the concept of number was gradually, and not without objections, extended further to cover also the negative numbers, fractions, real numbers, and complex numbers. Each of these steps required a reconceptualization that involved giving up some of the properties that had previously been considered essential for numbers.

The status of Cayley’s octonions and Hamilton’s quaternions is still ambiguous: While they are sometimes simply classified as algebras, one can also find them referred to as ‘hypercomplex numbers’. The concept of integer is sometimes used in an absolute sense to denote the positive and negative whole numbers (…, –2, –1, 0, 1, 2, 3, …), but it is also used in algebraic number theory as being relative to a given number field. This is related to investigations in the 19th century that led to a change regarding the defining characteristics of the concept of prime numbers. Within the natural numbers, Euclid’s definition stated above is equivalent to the following, second definition: x is prime if, whenever x divides a product ab, then x divides either a or b. However, in the ring Z[√5 i] = {a + b√5 i | a, b ∈ Z}, the number 2 is a prime according to Euclid’s definition, but not according to this second definition, since 2 divides 6, but it neither divides 1 − √5 i nor 1 + √5 i, which are also factors of 6. Faced with this dilemma, mathematicians had to decide which property to regard as the definition of primality and they settled for the second one, possibly due to its greater explanatory power (Tappenden 2008, 268). They renamed the property determined by Euclid’s definition ‘irreducibility’.
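The divisibility facts in this example can be checked mechanically. The sketch below is my own illustration, not from the text: elements a + b√5 i of the ring Z[√5 i] are represented as integer pairs (a, b), and divisibility is decided by a brute-force search for a quotient, bounded via the multiplicative norm N(a + b√5 i) = a² + 5b².

```python
# Arithmetic in Z[sqrt(-5)], with elements a + b*sqrt(-5) stored as pairs (a, b).

def mult(x, y):
    """(a + b√-5)(c + d√-5) = (ac - 5bd) + (ad + bc)√-5."""
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    """N(a + b√-5) = a^2 + 5b^2; multiplicative, so it bounds any quotient."""
    a, b = x
    return a * a + 5 * b * b

def divides(x, y):
    """Does x divide y in Z[√-5]?  Search quotients with coefficients
    bounded by sqrt(N(y)), which any actual quotient must satisfy."""
    if norm(y) % norm(x) != 0:
        return False
    n = int(norm(y) ** 0.5) + 1
    return any(mult(x, (c, d)) == y
               for c in range(-n, n + 1) for d in range(-n, n + 1))

two, six = (2, 0), (6, 0)
f1, f2 = (1, -1), (1, 1)        # 1 - √-5 and 1 + √-5

assert mult(f1, f2) == six       # 6 = (1 - √-5)(1 + √-5)
print(divides(two, six))         # True: 2 divides 6
print(divides(two, f1), divides(two, f2))  # False False: so 2 is not prime here
```

Since 2 divides the product 6 but neither of its factors 1 ± √5 i, the second (divisibility) definition of primality fails for 2, even though 2 remains irreducible in this ring.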

4. Notions of Concepts and Conceptual Change

4.1 Patterns of Conceptual Change

Rather than analyzing the particular episodes of conceptual changes from the history of mathematics presented above in more detail, I would like to focus our attention on some general patterns and discuss these in terms of the two notions of concepts introduced in Section 2 above. A useful starting point for discussing conceptual change is Kitcher’s study of patterns of mathematical change in terms of rational transitions between mathematical practices (Kitcher 1983). Such a practice is characterized by Kitcher as consisting of five components: a language, metamathematical views (including standards of proof and definition, and claims about the scope and structure of mathematics), a set of questions selected as important, a set of accepted reasonings, and a set of accepted statements. For Kitcher, transitions between practices result from answering questions, generating new questions, generalizing, rigorizing, and systematizing. Answering questions that are deemed relevant by the mathematical community often involves the acceptance of new statements (theorems) and also the introduction of novel forms of reasoning. The study of projective geometry in order to provide a unified treatment of geometric proofs, as well as the extension of the real numbers to the complex numbers, are examples of developments that resulted from the desire to answer open questions. New questions can be introduced into mathematics from practical considerations in everyday life or other sciences (Wilson 2006), but also from within mathematics itself. In the transfer of questions from one domain to another, analogical reasoning plays an important role. When a new theory is developed to go beyond the scope of previous ones, Kitcher speaks of generalization (e. g., the introduction of negative numbers to allow for subtraction between any two numbers, and Cantor’s extension of arithmetic to transfinite numbers). Rigorization mainly involves a reconsideration of the accepted modes of reasoning, while systematization aims at reorganizing a given body of knowledge. Kitcher distinguishes between systematization by axiomatization (e. g., Euclid’s axiomatization or the axiomatizations of group theory) and systematization by conceptualization, where new definitions and questions are applied to some previously developed mathematics (e. g., Lagrange’s identification of the form of equations that are amenable to certain techniques of root finding). In Kitcher’s analysis, mathematical concepts are expressed by terms in the given language, so that any change in the language has effects on the concepts involved.
Thus, from the historical episodes sketched in the previous section together with reformulations of Kitcher’s interpractice transitions in terms of how they effect changes in the underlying language, we can extrapolate a few general patterns of conceptual changes in mathematics: Clarifications of informal concepts, systematizations of concepts and results, investigations of sharp concepts (defined by axiom systems), and generalizations and abstractions. Each of these patterns can result in new or modified mathematical concepts.

4.2 Conceptual Change: Fregean and Lakatosian

Given the examples from mathematics (sect. 3) and the patterns of conceptual changes discussed in Section 4.1 above, we can now ask: Which of the two notions of mathematical concepts, the Fregean or the Lakatosian, is better suited for an adequate account of these changes? Let us take for granted that at some point in time the term ‘number’ was used only for the natural numbers greater than one, but that later one and zero were included, and that later still the concept was extended to cover rational, real, and complex numbers as well.

According to the Fregean notion of concepts every concept is definite and fixed, i. e., it is clearly determined which objects fall under it and which do not, and the extension of a concept does not change over time. Thus, the concept that includes only the natural numbers greater than one is different from the concept that also includes one, which is again different from the concept that includes one and zero. For the sake of clarity, one should thus distinguish between the three concepts number>1, number>0, and number≥0. Since concepts are fixed, none of these concepts changed in any way in the course of history; rather, what changed was our usage of the term ‘number’: while Euclid used the term (or a Greek translation of it) to refer to the concept number>1, it later came to refer to the concepts number>0 and number≥0. In fact, these different concepts can still be referred to in the current language of mathematics, e. g., by speaking of ‘the positive whole numbers’ or ‘the positive integers’ to refer to number>0. Since the Fregean account does not consider concepts as being relative to the mathematician or mathematical community at any particular time, it allows for a description of mathematics that is independent of any human agents and their contingent properties. Particular mathematicians simply grasp one concept or the other. Under the assumption that mathematics per se corresponds to the current state of mathematics, we can speak of somebody grasping the ‘right’ concept just in case it corresponds to the currently accepted one. Such a view is flattering to current mathematicians and philosophers of mathematics and it allows for clear identifications of mathematical heroes, namely those who were able to grasp the ‘right’ concepts first, but it also makes most mathematicians of the past look like they were mistaken most of the time.
A more charitable interpretation of the past in terms of Fregean concepts would be one in which the past mathematicians grasped the ‘right’ concepts from the start, but had difficulties in expressing them properly. From this perspective, Euclid had the concept of prime numbers (the same we use nowadays), but he failed to latch onto the correct defining characteristic since he was misled by the fact that for the natural numbers, the irreducible numbers and prime numbers are co-extensional. Another possible interpretation would be that Euclid grasped a concept that was ‘in the vicinity’ of our current concept of primality, but that he had difficulties in narrowing down exactly which concept primality was.6 The problem of picking the ‘right’ concept is indeed one that is frequently expressed in accounts of mathematical research.

From a Lakatosian perspective on concepts, changing a concept poses no difficulties. With regard to the development of the concept of number mentioned above, it simply changed in such a way that it became more and more inclusive. With regard to the notion of primality, the concept changed its defining characteristics from the Euclidean ones to those now used in modern algebra. Such an account of the history of mathematics in terms of the open notion of concept that was put forward by Lakatos tends to highlight certain continuities in the development, rather than the discrete changes from one Fregean concept to another.

Closely related to a Lakatosian notion of concepts are evolutionary accounts of conceptual change. The mathematician Raymond Wilder put forward such an account based on the metaphor of mathematics as an organism (Wilder 1953, 438). He presents an organic, evolutionary view of the development of mathematical concepts that are conceived, have a certain life-span, and then die. Wilder emphasizes the mathematical community as the fertile ground in which concepts thrive: Previous concepts are synthesized using available tools, and collaborations promote cross-fertilization. His main example, the development of the concept of curve, is similar in many respects to Lakatos’ account of the history of Euler’s theorem for polyhedra. More recently Madeleine Muntersbjorn (2003) has also pointed out the evolutionary character of concept development in mathematics by speaking of the ‘cultivation’ of concepts. While historical accounts of the development of mathematics are likely to focus on different issues depending on the notion of concepts that is employed, there do not seem to be any a priori reasons for preferring one to the other.
A history of mathematics can be formulated using either Fregean or Lakatosian concepts. However, metaphysical views regarding the nature of mathematics may well influence the notion of concepts that one employs. For example, a Platonist, who thinks that mathematical objects exist in some eternal, acausal realm, will be inclined to analyze historical developments using a Fregean notion of concept, while somebody who links the possession of concepts to sensible experiences and who has no difficulty admitting changes to the concepts themselves might prefer an analysis in terms of Lakatosian concepts.

6 I am grateful to Brian van den Broek for suggesting this interpretation to me.

4.3 Towards a Pluralistic Approach to Mathematical Concepts

The above discussions brought out different reasons in favor of both a Fregean and a Lakatosian notion of mathematical concepts. On the one hand, adherence to classical conceptions of logical inference requires concepts to be definite and fixed. Moreover, mathematicians themselves do sometimes define their concepts (often understood as higher-order relational concepts, e. g., that of a Euclidean space) by systems of axioms and thereby provide clear-cut conditions for whether something falls under a concept or not. On the other hand, the history of mathematics is full of episodes that appear to be characterized best by an account according to which definitions and extensions of concepts change over time. This is also encouraged by the frequent reuse and adaptation of mathematical terminology. Influenced by his analyses of the evolution of the concepts of function, continuity, integrability, series summation, and group, Kitcher summarizes:

Originally the reference of the associated terms was fixed through paradigms. Later discussions show a sequence of attempts to give a descriptive characterization of the entities which had previously been picked out. (Kitcher 1983, 171)

As an example he discusses the development of the concept of function. For Leibniz, functions of a curve were given paradigmatically as its length and area. But the concept was developed further, leading to Euler’s ‘partial descriptive characterization’ of a function as any expression made up of variables and constants, and eventually resulting in the modern set-theoretic characterization (Kitcher 1983, 172). A concept that is introduced via a paradigm is more like a Lakatosian concept, while one that is defined by a descriptive characterization is more like a Fregean one. Thus, Kitcher’s characterization can be interpreted as a move from Lakatosian to Fregean concepts in mathematics. Similarly, the general trend of the historical development of mathematics is interpreted by Buldt and Schlimm (2010) as leading from a ‘bottom-up’ approach to mathematical concepts—based on an Aristotelian understanding of abstraction and resulting in Lakatosian concepts—to a ‘top-down’ introduction of determinate mathematical structures by means of axiom systems, resulting in Fregean concepts.

It is interesting to note that a related distinction was also made by the 19th century mathematician Moritz Pasch, who was interested both in the dynamic character of mathematics as well as in the need for fixed and determinate mathematical concepts. Unlike the authors mentioned above, Pasch did not draw the line diachronically, but between different mathematical activities. He distinguished between the “rigid” (starr) part of mathematics, which is governed by the rules of logic, and the “pliable” (biegsam) part of mathematics, which is open-ended and creative:

In the latter, more libertine, part of geometry and of mathematics generally, all are free to do as they will. In the former part, every move is constrained by the iron laws of deductive logic. I call these the pliable and the rigid parts of mathematics. (Pasch 1920; quoted from Pollard 2010, 100)

For the correct application of deductive reasoning, Pasch is very clear that it must be decidable whether something falls under a concept or not (Pasch 1914, 153 – 159; see also Schlimm 2010, 105). Thus, in the terminology introduced above, Pasch requires Fregean concepts for rigorous, rigid mathematics in which giving proofs is the main activity. But he acknowledges the usefulness of open, Lakatosian concepts for mathematical reasoning that leads to new discoveries. However, Pasch does not consider the development of mathematics to necessarily involve replacing Lakatosian concepts by Fregean ones. Instead, the use of these kinds of concepts can alternate:

I distinguish between the rigid and pliable parts of mathematics. The pliable part includes the 'pre-mathematical' discussions that precede purely mathematical work or are interpolated between purely mathematical contributions. (Pasch 1926; quoted from Pollard 2010, 237; italics added)

Thus, for Pasch, we do not have to decide between a Fregean and a Lakatosian notion of concepts once and for all, nor is mathematics characterized by a development from Lakatosian to Fregean concepts. Rather, we will understand mathematics better if we consider mathematical practice to alternate between these notions, depending on the context of the investigations.7

7 The observation that concepts are often framed with a particular purpose in mind is also addressed in the studies of Brigandt and Steinle in this volume.


4.4 Notions of Scientific Concepts

While historians and philosophers of science often understand scientific concepts differently from the two notions of mathematical concepts presented above, the main point of the previous considerations carries over to scientific concepts in general. For in science, too, there is a tension between the need for determinate concepts in reasoning and for open concepts to accommodate conceptual change. While logical positivists adhered mainly to a Fregean notion of concepts, the historical turn in philosophy of science, with its emphasis on scientific creativity and discovery, emphasized a rather different notion. Lakatos' and Kitcher's talk of 'revolutions' and 'paradigms' seen above clearly recalls Kuhnian ideas, particularly the distinction between normal and revolutionary science. A Fregean notion of concepts fits well with an account of the more structured developments within normal science, while a Lakatosian understanding might appear better suited for times of fundamental change. In this vein, Hofstadter explicitly opposes 'Platonic' (Fregean) concepts to 'fluid' (Lakatosian) concepts and argues that the latter are necessary for creative reasoning, e. g., by analogy. For him fluid concepts are "concepts with flexible boundaries, concepts whose behavior adapts to unanticipated circumstances, concepts that will bend and stretch—but not without limit" (Hofstadter 1995, 307; see also 119). The importance of such underspecified or 'open-textured' predicates for analogical reasoning has also been noted recently by Bartha. Whether such a predicate applies is something that needs to be argued for and that frequently "involves a nontrivial comparison to paradigm cases or prototypes" (Bartha 2010, 9).
However, Bartha's nuanced analysis of analogical reasoning also reveals the tension between fixed and open concepts, and the need for both notions when reasoning and creativity are combined. He writes:

[…] analogical arguments in science depend upon basic similarities that are supposed to be unproblematic. The relevant concepts have a clear meaning over a range of cases including those under discussion; they are not open-textured in the context of that argument. By contrast, it is not uncommon for the conclusion of an analogical argument in science to extend an open-textured predicate to a new case. (Bartha 2010, 10; italics in original)

These considerations are directly tied to Lakatos’ historical reconstruction discussed in Section 2.2 above, which Bartha analyzes as follows, again invoking both notions of concepts:


Here, too, open-textured concepts are the ones under investigation. They figure in the conclusions of analogical arguments. The starting similarities, however, are independently acceptable and the relevant predicates are not treated as open-textured. (Bartha 2010, 10)

We see here that Bartha’s considerations are remarkably similar to those of Pasch, emphasizing the need for both open-textured, Lakatosian concepts and closed, Fregean concepts, depending on the particular context of reasoning.

5. Conclusion

Much of mathematical research has to do with introducing new concepts, and this is often done on the basis of a paradigm or a not yet fully articulated understanding of their defining characteristics. This insight has been vividly brought to the attention of philosophers by Lakatos. For the purpose of deductive arguments, however, concepts have to be assumed to be definite and fixed, as was pointed out repeatedly, e. g., by Frege, Pasch, and Poincaré. By focusing exclusively on only one of these two mathematical activities one is easily led to opposing conceptions of mathematical concepts: the notions of closed, Fregean concepts and open, Lakatosian concepts. I have argued that while both can be used for describing the history of mathematics, each of them also captures an important aspect of mathematical practice. Thus, for a more complete and rich understanding of the development of mathematics and science as human activities, a pluralistic approach, which allows both Fregean and Lakatosian notions to play a role depending on the particular context in which a concept is used, appears to be most fruitful.

Acknowledgements

I would like to thank the organizers and participants of the workshop Scientific Concepts and Investigative Practice, held January 7 – 8, 2011, at Technische Universität Berlin, Germany, as well as Tyler Call, Mona Ghassemi, Oran Magal, Michael Hallett, Rachel Rudolph, Brian van den Broek, and Andy Yu for helpful comments on earlier versions of this paper.


Reference List

Bartha, Paul F. A. (2010), By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments, New York: Oxford University Press.
Beaney, M. (ed.) (1997), The Frege Reader, Oxford: Blackwell Publishers.
Black, M. (1937), "Vagueness: An Exercise in Logical Analysis." In: Philosophy of Science 4 (4), 427 – 455.
Bonola, R. (1955), Non-Euclidean Geometry: A Critical and Historical Study of Its Development, New York: Dover Publications.
Buldt, B. / Schlimm, D. (2010), "Loss of Vision: How Mathematics Turned Blind while It Learned to See More Clearly." In: Löwe, B. / Müller, T. (eds.), Philosophy of Mathematics: Sociological Aspects and Mathematical Practice, London: College Publications, 87 – 106.
Carnap, R. (1934), Logische Syntax der Sprache (Schriften zur wissenschaftlichen Weltauffassung 8), Vienna: Springer. Rev. and exp. English translation (1939), London: Kegan Paul, Trench, Trubner & Co.
Frege, G. (1884), Grundlagen der Arithmetik, Breslau: Verlag von Wilhelm Koebner. English translation by J. L. Austin: Frege (1980a).
Frege, G. (1891), Funktion und Begriff, Jena: Hermann Pohle. English translation by P. Geach: "Function and Concept." Reprinted in Beaney (1997).
Frege, G. (1903), Grundgesetze der Arithmetik, vol. 2, Jena: Verlag Hermann Pohle. Reprinted Hildesheim: Georg Olms Verlagsbuchhandlung (1962). English translation of excerpts: Beaney (1997).
Frege, G. (1980a), The Foundations of Arithmetic: A Logico-Mathematical Enquiry into the Concept of Number, 2nd rev. ed., English translation by J. L. Austin, Evanston, Ill.: Northwestern University Press.
Frege, G. (1980b), Philosophical and Mathematical Correspondence, edited by G. Gabriel et al., Chicago: University of Chicago Press.
Friedman, M. (2010), "Logic, Mathematical Science, and Twentieth Century Philosophy: Mark Wilson and the Analytic Tradition." In: Noûs 44 (3), 530 – 544.
Gray, J. (2007), Worlds Out of Nothing: A Course in the History of Geometry in the 19th Century, London: Springer.
Hallett, M. (2010), "Frege and Hilbert." In: Potter, M. / Ricketts, T. (eds.), The Cambridge Companion to Frege, Cambridge: Cambridge University Press, 413 – 464.
Hallett, M. (2011), "Absoluteness and the Skolem Paradox." In: DeVidi, D. / Hallett, M. / Clark, P. (eds.), Logic, Mathematics, Philosophy: Vintage Enthusiasms (The Western Ontario Series in Philosophy of Science 75), New York: Springer, 189 – 218.
Hallett, M. / Majer, U. (eds.) (2004), David Hilbert's Lectures on the Foundations of Geometry 1891 – 1902, Berlin / Heidelberg / New York: Springer.
Heath, T. L. (ed.) (1956), The Thirteen Books of Euclid's Elements, vol. 1, Introduction and Books I, II, 2nd ed., translated from the text of Heiberg with introduction and commentary by Sir T. L. Heath, New York: Dover Publications.
Hilbert, D. (1899), Grundlagen der Geometrie, Leipzig: Teubner. Reprinted in Hallett/Majer (2004), 436 – 525. English translations: E. J. Townsend (1902), The Foundations of Geometry, Chicago: Open Court, and L. Unger (1971), La Salle, Ill.: Open Court.
Hofstadter, D. R. (1995), Fluid Concepts & Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, New York: Basic Books.
Kitcher, P. (1983), The Nature of Mathematical Knowledge, New York: Oxford University Press.
Klein, F. (1926), Vorlesungen über die Entwicklung der Mathematik im 19. Jahrhundert, vol. 1, Berlin: Springer.
Lakatos, I. (1976a), Proofs and Refutations, edited by J. Worrall / E. Zahar, Cambridge: Cambridge University Press.
Lakatos, I. (1976b), "A Renaissance of Empiricism in the Recent Philosophy of Mathematics." In: British Journal for the Philosophy of Science 27 (3), 201 – 223.
Lakatos, I. (1978a), Mathematics, Science and Epistemology (Philosophical Papers 2), edited by J. Worrall / G. Currie, Cambridge: Cambridge University Press.
Lakatos, I. (1978b), "What Does a Mathematical Proof Prove?" In: Lakatos (1978a), 61 – 69. Reprinted in Tymoczko (1998), 153 – 162.
Mueller, I. (1981), Philosophy of Mathematics and Deductive Structure in Euclid's Elements, Cambridge, Mass.: MIT Press.
Muntersbjorn, M. (2003), "Representational Innovation and Mathematical Ontology." In: Synthese 134, 159 – 180.
Pasch, M. (1914), Veränderliche und Funktion, Leipzig / Berlin: B. G. Teubner.
Pasch, M. (1920), "Die Begründung der Mathematik und die implizite Definition: Ein Zusammenhang mit der Lehre vom Als-Ob." In: Annalen der Philosophie 2 (2), 144 – 162.
Pasch, M. (1926), "Die axiomatische Methode in der neueren Mathematik." In: Annalen der Philosophie 5, 241 – 274.
Poincaré, H. (1909), "La logique de l'infini." In: Revue de métaphysique et de morale 17, 462 – 482. Reprinted in Dernières Pensées, Paris: E. Flammarion (1913), 7 – 13. English translation: Poincaré (1963), 45 – 74.
Poincaré, H. (1963), Mathematics and Science: Last Essays, French original: Dernières Pensées, Paris: E. Flammarion (1913), English translation by J. W. Bolduc, New York: Dover Publications.
Pollard, S. (ed.) (2010), Essays on the Foundations of Mathematics by Moritz Pasch (The Western Ontario Series in Philosophy of Science 83), New York: Springer.
Quine, W. V. O. (1960), Word and Object, Cambridge, Mass.: MIT Press.
Reck, E. H. / Price, M. P. (2000), "Structures and Structuralism in Contemporary Philosophy of Mathematics." In: Synthese 125, 341 – 383.
Schlimm, D. (2010), "Pasch's Philosophy of Mathematics." In: Review of Symbolic Logic 3 (1), 93 – 118.
Schlimm, D. / Skosnik, K. (2011), "Symbols for Nothing: Different Symbolic Roles of Zero and Their Gradual Emergence in Mesopotamia." In: Cupillari, A. (ed.), Proceedings of the 2010 Meeting of the Canadian Society for History and Philosophy of Mathematics, May 29 – 31, Montreal, vol. 23, 257 – 266.
Tappenden, J. (2008), "Mathematical Concepts and Definitions." In: Mancosu, P. (ed.), The Philosophy of Mathematical Practice, New York: Oxford University Press, 256 – 275.
Tymoczko, T. (ed.) (1998), New Directions in the Philosophy of Mathematics, rev. and exp. ed., Princeton, N.J.: Princeton University Press.
Wilder, R. L. (1953), "The Origin and Growth of Mathematical Concepts." In: Bulletin of the American Mathematical Society 59, 423 – 448. Reprinted in Campbell, D. M. / Higgins, J. C. (eds.) (1984), Mathematics: People, Problems, Results, Belmont, Cal.: Wadsworth Int., 239 – 254.
Wilson, M. (2006), Wandering Significance: An Essay on Conceptual Behavior, New York: Oxford University Press.
Zadeh, L. A. (1965), "Fuzzy Sets." In: Information and Control 8, 338 – 353.

Experimentation and the Meaning of Scientific Concepts

Theodore Arabatzis

1. Introduction: Concepts and &HPS

There are encouraging signs that, after a long period of "withering on the vine" (Fuller 1991), integrated history and philosophy of science has begun to pick up steam. Scientific concepts can play a significant role in achieving a synthesis of historical and philosophical perspectives on science, because they are of interest to both fields. On the philosophical side, ever since the heyday of logical positivism concepts have been at the center of scholarly discussions in philosophy of science (Arabatzis/Kindi 2008). This is not surprising, since concepts mediate our cognitive access to the world. Furthermore, ever since the early 1960s and the historicist turn in philosophy of science, concepts have loomed large in philosophical debates about the nature of scientific change. The historical character of concepts, their dependence on the context in which they are formed and their change over time, has cast doubt upon the rationality of scientific change and has been among the main challenges faced by scientific realism. Realists favor ontological stability, and it is not prima facie clear that concepts, qua historically evolving entities, continue to pick out the same referents throughout their historical development (Arabatzis 2007). On the historiographical side, a focus on concepts may enhance our understanding of how local intellectual, material, and cultural resources are brought to bear on the production of scientific knowledge (Nersessian 2008). Moreover, because of their historicity concepts lend themselves to becoming "central subjects" of historical narratives (Hull 1975). Here, however, one has to face the same thorny issue that philosophers have been struggling with: the evolving character of concepts. If concepts change (often beyond recognition), how can we construct coherent historical narratives around them?
What keeps together different uses of a term over time if the beliefs and practices associated with it change? If a concept does not retain its identity over time, what is its history the history of (Arabatzis 2006; Dear 2005; Kuukkanen 2008)?

Thus, a focus on concepts has the potential of becoming a vehicle for integrating history and philosophy of science. On the one hand, to deal with the historiographical issue of how to frame a historical narrative we need to engage with philosophical accounts of concepts and conceptual change. Conversely, to come to terms with the philosophical challenge posed by conceptual change we need to do historical research.

One of the pillars of the revival of integrated HPS has been the attempt to redress 'the neglect of experiment' and to scrutinize its intricate relationship to theory. Concepts can provide a fruitful means to that purpose, because they play an important role in experimentation and mediate its interplay with theory. The very detection and stabilization of experimental phenomena goes hand in hand with concept formation (Gooding 1990; Steinle 2005; Andersen 2008; Feest 2010). Furthermore, the explanation of experimentally produced novel phenomena often requires new concepts of the entities and processes that underlie those phenomena. The refinement and articulation of those 'theoretical' concepts play, in turn, an important role in experimental research.

Most of the discussion on concepts in philosophy of science, however, has been theory-oriented. In what follows, I will attempt to redress this imbalance. I will start with a brief review of three salient approaches to concepts: two theory-oriented ones, associated with the 'orthodox' view in philosophy of science and with early Kuhn and Feyerabend; and their rival, the causal approach put forward by Kripke and Putnam. I will suggest that the causal approach, despite certain shortcomings, opens up space for rethinking concepts from the perspective of the philosophy of experimentation.
Philosophers of experiment have come up with important insights about the relationship between theory and experiment and have stressed the autonomy of experimental practice. Those insights may shed new light upon long-standing puzzles about the identity and evolution of concepts. To show this, I will discuss the role of experiment in the formation of new concepts and in the articulation of antecedently available concepts. The focus of my analysis will be on hidden-entity (H-E) concepts, that is, concepts referring to entities that are not accessible to unmediated observation. I will round off the paper with a discussion of the significance of experimentation for tracking the referents of such concepts.


2. Theory-Oriented Approaches to Concepts

A lot of philosophical ink has been spilled on spelling out the meaning of scientific concepts in terms of their location within a systematic theoretical framework.1 In the "orthodox" view (Feigl 1970), the culmination of logical empiricism, there were two kinds of scientific concepts: observational and theoretical. The meaning of the former was fully specified by their direct association with observable entities, properties, and processes. The meaning of the latter, on the other hand, derived partly from the system of "postulates" in which they were embedded and partly from "correspondence rules" which linked those postulates with a domain of phenomena. Thus, the meaning of theoretical concepts was determined, indirectly, by their links via scientific laws with other theoretical concepts and by their connections, via correspondence rules, to observational concepts. Given that new laws or correspondence rules could always be discovered, it followed that the meaning of theoretical concepts was always "partial" or incomplete (Carnap 1956, 48; 67; cf. Feigl 1970, 5 ff).2

The contribution of correspondence rules to the meaning of theoretical concepts allows for some input from experiment. Meaning is partly shaped by experimental procedures and operations. I think, though, it would be fair to say that, despite its empiricist orientation, the orthodox view downplayed the connection between theoretical concepts and observation and experiment. For instance, in "The Methodological Character of Theoretical Concepts" Carnap admitted, "in agreement with most empiricists, that the connection between the observation terms and the terms of theoretical science is much more indirect and weak than it was conceived … in my earlier formulations" (Carnap 1956, 53; cf. Feigl 1970, 7). Furthermore, as some of its critics pointed out, the orthodox view neglected the use of theoretical concepts in experimental contexts.
Theoretical concepts, such as the concept of the electron, are often used in "observation sentences" describing the outcome of experimental interventions. Think, for instance, of experimental reports of positron tracks in a cloud chamber (cf. Feyerabend 1960/1999, 18 ff; Putnam 1962/1975, 217; Hempel 1973/2001, 212).

The tenuous connection between scientific concepts and experience was loosened further with the rise of historicist philosophy of science. Feyerabend, for instance, claimed that "the fact that a statement belongs to the observational domain has no bearing upon its meaning" (1962/1981, 52). Rather, "the interpretation of an observation language is determined by the theories which we use to explain what we observe, and it changes as soon as those theories change" (1958/1981, 31). As regards the meaning of scientific terms, Feyerabend opted for "regarding theoretical principles as fundamental and giving secondary place … to those peculiarities of the usage of our terms which come forth in their application in concrete and, possibly, observable situations" (Feyerabend 1965/1981, 99). Thus, the meanings of scientific concepts (observational and theoretical alike) are "dependent upon the way in which … [they have] been incorporated into a theory" (Feyerabend 1962/1981, 74).

The theory dependence of concepts implies that theory change leads to conceptual change. Moreover, according to Feyerabend and Kuhn, the older concepts and their descendants refer to completely different entities. The very subject matter of scientific investigation shifts along with conceptual change. I would like to stress that the fluidity of scientific ontology over time is not a straightforward consequence of the historical record concerning the development of science. Rather, it follows from an explicit decision to ignore the stability of concept use at the observational (and, I would add, experimental) level and to focus exclusively on the theoretical frameworks in which concepts are embedded.

1 For a historical survey of the philosophical literature on the meaning of scientific concepts see Arabatzis/Kindi 2008.
2 Note, however, that in his earlier work Carnap had suggested that the meaning of scientific concepts derived from their conditions of application in experimental situations (Carnap 1936, 1937).
Here is a striking passage from Feyerabend's "On the 'Meaning' of Scientific Terms":

It may be readily admitted that the transition from T to T' [classical mechanics to general relativity] will not lead to new methods for estimating the size of an egg at the grocery store or for measuring the distance between the points of support at a suspension bridge. But … we have already decided not to pay attention to any prima facie similarities that might arise at the observational level, but to base our judgment [concerning stability or change of meaning] on the principles of the theory only. It may also be admitted that distances that are not too large will still obey the law of Pythagoras. Again we must point out that we are not interested in the empirical regularities we might find in some domain with our imperfect measuring instruments, but in the laws imported into this domain by our theories. (Feyerabend 1965/1981, 100)


This exclusive preoccupation with ‘high’ level theory was bound to overemphasize the unstable characteristics of scientific concepts at the expense of their stable features, associated to a significant extent with ‘low’ level methods of measurement and identification of the referents of scientific concepts in experimental contexts. I will have more to say about this below.

3. Problems of Theory-Oriented Approaches to Concepts

I see three problems with this exclusive preoccupation with theory. First, concepts often have priority over theories. Especially at the frontiers of research, concepts are formed and used in the absence of a fully developed, or even consistent, theoretical framework. For instance, in the early 18th century, well before the development of a systematic theory of electrical phenomena, the concept of two electricities was formed by Charles Dufay in the process of detecting and stabilizing various regularities. The formation of that concept and the genesis of facts about (not a theory of) electricity went hand in hand (Steinle 2005, 2009a). Another more recent example can be found in the investigation of atomic structure during the 1910s and early 1920s. In deciphering the riddle of the atom physicists made heavy use of the concept of the electron, even though they lacked a consistent and systematic theory of the electron and its behavior inside the atom (Arabatzis 2006).

A second problem for theory-centered accounts of concepts is created by the synchronic and diachronic stability of concepts. Scientific concepts have a trans-theoretical character and enable the formulation of different contemporary theories about their referents. Sometimes they even have a trans-disciplinary dimension, as testified to by the existence of "boundary objects" that are coveted by different disciplines (Star/Griesemer 1989; Arabatzis 2006, ch. 7). Furthermore, scientific concepts persist across theoretical change, transcending the theoretical frameworks in which they are embedded at particular times. This is an insight that we owe to several philosophers of science, including Hilary Putnam, Dudley Shapere, and Nancy Nersessian.

A third problem is that the exclusive preoccupation with the role of fundamental theories in concept formation has led to a neglect of the interplay between concepts and experimentation.
On the one hand, experimental interventions are often crucial for the formation, articulation, and sometimes the failure of scientific concepts (cf. Steinle 2009b). On the other hand, concepts frame and guide experimental research. Furthermore, experimentation is often crucial for the identification of the referents of H-E concepts. Experimental procedures, robust across changes in high-level theory, enable the identification and measurement of hidden entities on the basis of their (purported) manifestations in experimental settings.

4. Coming to Terms with the Trans-Theoretical Character of Concepts

The most notable response to theory-oriented approaches to concepts has been the causal theory of reference (CTR). It was first suggested by Saul Kripke as an account of proper names, but shortly afterwards it was extended by Kripke and Hilary Putnam to natural kind concepts.3 While Putnam acknowledged the theory-dependence of those concepts, he stressed their persistence across theory change, a persistence that he attributed to the stability of their reference. This stability derives from the way the reference of a natural kind concept is picked out: not by the full theory in which the concept is embedded, but through a specification of the phenomena that are causally associated with its referent. For instance,

[n]o matter how much our theory of electrical charge may change, there is one element in the meaning of the term 'electrical charge' that has not changed in the last two hundred years … and that is the reference. 'Electrical charge' refers to the same magnitude even if our theory of that magnitude has changed drastically. And we can identify that magnitude in a way that is independent of all but the most violent theory change by, for example, singling it out as the magnitude which is causally responsible for certain effects. (Putnam 1975, ix)

The CTR has its problems. It makes mastery of a scientific concept dependent on "contact" with its counterpart in nature (Putnam 1973/1975, 205). This requirement is problematic when a concept refers to a hidden or fictitious entity, where the required 'contact' is either indirect or altogether missing. In the latter case, the lack of contact between the users of a concept and its purported referent would seem to undermine their linguistic competence. Thus, the realist character of the CTR introduces an implausible double standard in the semantics of scientific concepts: the competence of a concept user depends on whether that concept has a referent (see Arabatzis 2007, 53 ff; Arabatzis/Kindi 2008, 360 f).

Having said that, the CTR opens up space for examining the role of concepts in experimental research, where their purported referents become objects of investigation and manipulation. A focus on experimentation may, in turn, elucidate the 'contact' requirement that, as we saw above, is necessary for the CTR to get off the ground. In the laboratory sciences the presumed contact is achieved, if at all, in artificially produced experimental situations.4

3 For some salient differences between Kripke's and Putnam's versions of the causal theory of reference see Hacking 2007.

5. Experimentalism and Its Implications for Understanding Scientific Concepts

For some time now, experimentation has become the object of sustained philosophical scrutiny. Philosophy of experiment has focused on the validation of experimental knowledge by means of a variety of epistemological strategies. However, despite some notable exceptions,5 the importance of experimentation for concept formation and concept articulation has not received the attention it deserves.

Perhaps the main lessons of experimentalist philosophy of science have been the relative autonomy of experimentation and its complex non-reductive relationship with various levels of theoretical knowledge, from specific models of phenomena to phenomenological laws to 'deep' unifying principles. One of the manifestations of this autonomy, I would like to suggest, is the relative independence of the concepts employed in experimental settings from the wider theoretical environment in which they 'live'. To put it another way, concepts have a life in experimentation. They frame experimental research and are shaped by it. Sometimes they even fail, by becoming incoherent as a result of experimentally obtained information.

4 I should stress though that the 'contact' in question is a fallible assumption. As the examples of phlogiston and caloric show, even concepts which have played a fruitful role in experimental research may turn out to have no counterpart in nature.
5 Besides the works I've already mentioned, these exceptions include Jed Buchwald's and Hasok Chang's contributions (Buchwald 1992; Chang 2004).


The failure of concepts can be particularly instructive as to the surplus content they obtain when they are used in experimental contexts and outside the theoretical context in which they originally obtained their meaning.6 When new experimental phenomena are discovered, their very description and explanation are often achieved in terms of antecedently available concepts. This process, however, may sometimes lead to tensions and paradoxes that indicate the limitations of those concepts and the need for their revision.

Take descriptive failure first. A fascinating example can be found in the history of low-temperature physics. Helium was liquefied by Heike Kamerlingh Onnes in 1908. Some years later, during the 1930s, there were attempts to describe the behavior of liquid helium by using the established concept of viscosity. That concept was associated with the internal friction of fluids and had an operational dimension. There were two distinct methods for measuring viscosity: rotating a disc and observing the rate of its deceleration, and letting a liquid pass through tiny capillaries. Up to that point those methods had led to identical results. However, in the case of liquid helium below a certain temperature (2 K), it turned out that those two methods led to fantastically different results: the "first gives a value that is a million times larger than the second" (Gavroglu 2001, 165). This discrepancy undermined the coherence of the theoretical and the operational dimensions of the concept of viscosity. The internal friction of liquid helium manifested itself only under the specific circumstances associated with the first method of measuring it. Under different circumstances, such as those associated with the second method, it vanished without a trace. This paradoxical situation indicated that liquid helium was not a normal fluid; rather, it had to be reconceptualized as a "superfluid" (cf. Gavroglu/Goudaroulis 1988).7

The failure of the viscosity concept to provide a coherent description of low-temperature phenomena can be understood if we take into account the two-dimensional character of scientific concepts. Scientific concepts have a theoretical dimension (a description of the characteristics of their referents) and an operational dimension (specific ways of measuring those characteristics). Of course, these dimensions are not independent; rather, the latter is the "material realization" (Radder 1995, 69) of the former. Furthermore, if different material realizations are associated with the same concept, they should lead to the same results. For instance, there shouldn't be a discrepancy between two different ways of measuring temperature, using mercury and resistance thermometers. If that happened, the coherence of the concept of temperature would be undermined. The emergence of incoherence in a concept is a sign of its failure to be applicable to "situations that are too distant from the kind of situation for which they were designed" (Kroon 2011, 182). As a result, the concept may split into two, or more, different concepts.

Concepts may also fail in the process of explaining new experimental results. An instance of this type of failure is provided by the 'discovery' of spin in 1925. Spin was suggested by the Dutch physicists Samuel Goudsmit and George Uhlenbeck in an attempt to make sense of the 'anomalous' Zeeman effect, the patterns of magnetic splitting of spectral lines that did not conform to the predictions of the classical theory of electrons. Those patterns could be accommodated by the 'old' quantum theory of the atom if one assumed that the electron was a tiny charged sphere whose internal rotation (spin) gave it magnetic properties. The experimentally indicated magnitude of the electron's magnetic moment, however, was such that any point on the electron's surface would have to travel with a velocity about ten times the velocity of light! In other words, the new property attributed to the electron in the process of interpreting experimentally obtained information was incompatible with relativity theory. Thus, the concept of spin failed to meet certain theoretical constraints. That failure led to a reinterpretation of spin as a quantum mechanical property with no classical counterpart.

These examples show that experimentation is crucially involved in the formation and articulation of concepts. Even 'theoretical' concepts, such as the concept of the electron, are 'laden' with information obtained through observation and experiment.

6 Here I'm drawing on Gavroglu and Goudaroulis (1988).
7 I would like to thank Kostas Gavroglu for a helpful discussion on this point.
A focus on the experimental content of concepts will make it possible to understand their transtheoretical character, the extent to which their meaning is independent from theory. Before I proceed, a distinction is in order. Scientific concepts come in, at least, two varieties. The first variety comprises concepts that are formed in the early, exploratory stages of the development of a field with a primarily descriptive and classificatory aim, namely to impose order on a domain of natural or experimentally produced phenomena. I have in mind concepts such as Dufay’s ‘two electricities’ or Faraday’s ‘lines of force’. The generation of these concepts and the establishment of observable facts and regularities are two aspects of a single process.


For lack of a better term, we could call them ‘phenomenal’ concepts. A number of scholars, David Gooding and Friedrich Steinle among others, have ably discussed how such concepts are born, stabilized, and refined. So my focus here will be on a second variety of concepts that emerge in later, and perhaps more mature, stages of the investigative process. Their purpose is primarily explanatory, namely to account for previously established facts and regularities. Typically, they refer to hidden entities and processes that lie deeper than (and give rise to) the observable realm. Their articulation goes hand in hand with the construction of theories specifying the mechanisms or laws that govern the hidden realm in question.8

In the rest of this paper I will chart the various roles of experimentation in the articulation of the meaning of such hidden-entity (H-E) concepts and in the stabilization of their reference. Departing from traditional philosophical accounts of the semantics of scientific terms, I will argue that essential aspects of the meaning and reference of H-E concepts can be understood only by examining their role in experimentation.

6. Experimentation and the Meaning of H-E Concepts

Experimentation plays a significant role in the formation and articulation of H-E concepts. New H-E concepts are introduced to make sense of experimentally produced phenomena and are, in turn, shaped by the information provided by the latter. For instance, after J. J. Thomson had put forward the concept of ‘corpuscle’ in order to account for various phenomena observed in the discharge of electricity in gases at low pressures, he inferred on the basis of quantitative information extracted from those phenomena that the corpuscle had a minute mass-to-charge ratio (three orders of magnitude smaller than that of the atom). Similar inferences were drawn from other experiments (e. g., by Zeeman on the magnetic influence on spectral lines) for related concepts, such as H. A. Lorentz’s concept of ‘ion’. The convergence of such experiment-driven results led to a unification of the ‘corpuscle’ and the ‘ion’, under the umbrella term ‘electron’, and a unified understanding of different phenomena as manifestations of electrons (Arabatzis 2006).

8 Sometimes, of course, the functions of phenomenal and hidden-entity concepts may overlap. The former may play an explanatory role and the latter may facilitate the detection and description of novel phenomena.


Furthermore, when H-E concepts are created for theoretical (explanatory, predictive) purposes, they are not fully articulated, either qualitatively or quantitatively.9 The qualitative features that an H-E must have in order to bring about its purported effects are specified only to the extent required for them to play their explanatory role in the given context. Furthermore, the magnitude of those features is not determined in advance; rather, it is inferred from the magnitude of the effects under investigation. Thus, H-E concepts are (forever?) incomplete and provisional in at least three ways. First, they do not specify exhaustively all the properties of their referents. When new experimental information is obtained, it often turns out that the concepts in question have to be either refined or enriched in order to fulfill their explanatory function.10 As an example of refinement consider Lorentz’s ‘ion’. Originally it referred to both positively and negatively charged particles, but as a result of Zeeman’s magneto-optic investigations ‘ions’ were endowed solely with negative charge. Various examples of enrichment can be provided by the career of the concept of the electron in the ‘old’ (pre-1925) quantum theory. The challenges of experimental spectroscopy led to the incorporation of new properties, such as spin, into the concept of the electron. Second, H-E concepts are incomplete with respect to their quantitative characteristics. Experimental research often provides the information used to articulate quantitatively the concepts in question. For instance, as I already mentioned, experiments in magneto-optics and with cathode rays enabled the calculation of the charge-to-mass ratio of corpuscles/ions and their identification with electrons.11 Third, experiment may lead to the retraction of some established properties of a H-E and, thus, to the adjustment of the associated concept.
One may argue, for instance, that the experiments on electron diffraction in the late 1920s undermined the electron’s particulate character.

Thus, experimentally obtained phenomena direct the articulation of H-E concepts by indicating, under certain theoretical assumptions, the kinds of properties that the referents of those concepts should have. Various features of experimental phenomena eventually find their counterparts in the putative properties and behavior of the H-E associated with them. For instance, when Niels Bohr put forward his model of the hydrogen atom, he made several assumptions about the properties of the electrons’ orbits (e. g., only certain orbits, corresponding to the discrete structure of the hydrogen spectrum, were allowed). As Robert Millikan noted, “if circular electronic orbits exist at all, no one of these assumptions is arbitrary. Each of them is merely the statement of the existing experimental situation” (Millikan 1917, 209). To put it another way, on the assumption that a H-E exists, scientists construct the associated concept with an eye to the particularities of experimentally obtained information. In that sense H-E concepts are constructions from experimental data (see Arabatzis 2006, ch. 2).

9 Cf. Carnap’s claim that the meanings of concepts become more fully specified as science develops (Carnap 1959 / Psillos 2000, 171).
10 Cf. Radder 2006, 121: “[T]he meaning of concepts needs to be articulated when they are being extended or communicated to a novel situation.” Cf. also Rouse 2011.
11 Cf. van Fraassen (2008; 2009) on the “empirical grounding” of scientific theories.

7. The Experimental Life of H-E Concepts

Extending a point made by experimentalist philosophers of science about the relative independence of experimentation from theory, I would now like to suggest that the experimentally derived part of the meaning of a H-E concept is, to a significant extent, independent from theory. The experimentally produced components of H-E concepts are often remarkably immune to changes in theoretical perspective. This is because the experimentally obtained information that is incorporated in H-E concepts can be robust across theory change. In the case of the electron, for example, its experimentally determined properties, such as its charge and mass, remained stable features of a concept which, in other respects, was in flux.

To understand the autonomy of the experimental life of H-E concepts, we need to understand the complexity of ‘theory’ and its relation to experiment. As philosophers of experiment have insisted, ‘theory’ is an accordion term covering various kinds of knowledge, ranging from general principles or laws that unify entire domains of nature to particular models of a phenomenon or an instrument. In the case of H-E concepts we should distinguish, I think, the following three levels of ‘theory’. First, there is the high-level theoretical framework in which those concepts are embedded. For instance, the concept of the electron was originally embedded within the framework of classical electromagnetic theory (Maxwell’s laws plus Lorentz’s force). The second level of theory concerns the representation of H-E, which provides an account of their nature. To stick with the same example, electrons were originally represented as sub-atomic singularities in the ether. Finally, the third level of ‘theory’ consists of the low-level knowledge that makes possible the identification of H-E in different experimental situations and the purported manipulations performed on (and with) them in the laboratory.

This final level of ‘theory’, whose robustness has been stressed by experimentalist philosophers of science (Hacking 1983; Cartwright 1983), is crucial for my argument about the autonomy of the experimental life of H-E concepts. Its significance for experimentation on (and with) H-E can be revealed by a mere glance at scientists’ experimental reports. One is struck, for instance, by the paucity of high-level theory in C. T. R. Wilson’s reports of his early 20th century experimental work on β-rays and X-rays. Wilson associated different cloud-chamber tracks with different particles on the basis of low-level considerations concerning the effect that the velocity of a particle and its scattering by atoms would have on its trajectory (Wilson 1923). In other cases cloud-chamber tracks of positrons were distinguished from those of protons by the width and length of their paths after they had been slowed down by lead. The “length [of a positron track] above the lead was at least ten times greater than the possible length of a proton path of this curvature” (Millikan 1947, 330). The inference from the manifest characteristics of a track to the identity of the underlying H-E was enabled by low-level facts about the differential deceleration of differently sized particles by a dense substance.

In the remaining part of this paper, I will suggest that we can capitalize on the experimental life of H-E concepts to resolve some of the long-standing difficulties concerning their synchronic and diachronic identity. Experiment may provide the key for stabilizing the referents of evolving H-E concepts.

8. Experimentation and the Identity of H-E Concepts: A Role for History

Let me start with an observation that can find ample documentation in the historical record. Scientists who disagree about the ultimate nature of a H-E may come to agree about its experimental manifestations and its experimentally determined properties. J. J. Thomson and H. A. Lorentz, for instance, disagreed about the ultimate nature of the H-E they had postulated; that is why they gave them different names (corpuscles versus ions). Thomson thought that corpuscles were structures in the ether, whereas Lorentz believed that ions were ontologically distinct from the ether. They both came to agree, however, on their experimental manifestations (e. g., in magneto-optics and in cathode-ray tubes) and on their experimentally determined properties (their charge and their mass).

The differentiation between the different levels of theory involved in the description and identification of a H-E can help us understand how this partial agreement between otherwise disagreeing scientists is possible. Disagreement is limited to some of those levels, whereas agreement is made possible by shared, usually low-level, background knowledge. In the Thomson and Lorentz case, their high-level disagreement concerned the deeper nature of electrons and did not preclude a low-level agreement about their observable effects. Their high-level disagreement about the irreducible or merely epiphenomenal character of, say, charge was compatible with (or even made possible by) their agreement on how charged particles would behave under the influence of electromagnetic forces. This latter agreement made possible the unproblematic identification of electrons in different experimental situations.

An agreement about the identity of a H-E in an experimental setting can be possible even when there is disagreement about the high-level theoretical framework in which the associated H-E concept should be embedded. Walther Kaufmann’s early 20th century experiments on β-rays provide a particularly instructive example of this possibility. Kaufmann’s experiments aimed at resolving a disagreement among theoreticians about the laws obeyed by the electron. Different laws had been proposed about the precise effect of the electron’s velocity on its mass and its shape. This disagreement, however, did not extend beyond a certain level.
In particular, it did not involve the (rest) mass and charge of the electron, and the alteration of its motion by electromagnetic forces. Again, this shared background was responsible for the agreement of all concerned parties about the identity of the H-E in Kaufmann’s apparatus (Staley 2008; Arabatzis 2009).

The different levels of ‘theory’ involved in experimentation and their different temporalities can elucidate a condition I have suggested for the referential stability of H-E concepts (Arabatzis 2006, 2011; cf. Psillos 2001, 85). According to that condition, an evolving H-E concept continues to refer to the same ‘thing’ as long as its experimental manifestations remain stable or grow (more or less) cumulatively.12 This stability or cumulative expansion, even across ruptures in high-level theory, can be explained along three different lines. First, it may result from the robustness of low-level knowledge of the behavior of a H-E in various experimental settings. Second, it may be related to the enduring attribution of some key properties to a H-E. And, third, it may be the outcome of high-level theoretical continuity, where the theory that specifies the laws obeyed by a H-E is preserved, as a limiting case, in subsequent theories (Bartels 2010). Which of these possibilities obtains has to be investigated on a historical, case-by-case basis.

Be that as it may, I hope to have shown that experiment provides various ways of establishing ‘contact’ with the referents of H-E concepts. In contrast with the CTR, however, these experimental ways do not fix the reference of H-E concepts once and for all. Future developments may always reveal flaws in the knowledge that kept together different phenomena as manifestations of the same H-E. If the bonds between those phenomena dissolve, the H-E that purportedly gave rise to them will turn out to have been a merely useful fiction (cf. Feest 2011).

12 The qualifier “more or less” is meant to allow some room for error. A mistaken attribution of an experimental phenomenon to a H-E need not throw doubt on its identity.

Acknowledgments

This paper was developed out of two workshops on scientific concepts, organized by Uljana Feest and Friedrich Steinle. I am grateful to Uljana and Friedrich for their invitation to participate in the workshops and to all the workshop participants for perceptive questions and suggestions. An earlier version of this paper was presented at the Institut für Philosophie, Universität Bonn. I am indebted to Andreas Bartels for his helpful commentary and to the audience for probing questions. I am also thankful to Kostas Gavroglu for reading a penultimate draft and offering constructive comments. I would like to thank the State Scholarships Foundation for supporting my work through the IKYDA program. This research has been co-financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF) – Research Funding Program: THALIS – UOA.

Reference List

Andersen, H. (2008), “Experiments and Concepts.” In: Feest, U. et al. (eds.), Generating Experimental Knowledge (MPIWG preprint 340), 27 – 37.
Arabatzis, T. (2006), Representing Electrons: A Biographical Approach to Theoretical Entities, Chicago: University of Chicago Press.
Arabatzis, T. (2007), “Conceptual Change and Scientific Realism: Facing Kuhn’s Challenge.” In: Vosniadou, S. / Baltas, A. / Vamvakoussi, X. (eds.), Reframing the Conceptual Change Approach in Learning and Instruction, Amsterdam: Elsevier, 47 – 62.
Arabatzis, T. (2009), “Electrons.” In: Weinert, F. / Hentschel, K. / Greenberger, D. (eds.), Compendium of Quantum Physics: Concepts, Experiments, History and Philosophy, Dordrecht: Springer, 195 – 199.
Arabatzis, T. (2011), “On the Historicity of Scientific Objects.” In: Erkenntnis 75, 377 – 390.
Arabatzis, T. / Kindi, V. (2008), “The Problem of Conceptual Change in the Philosophy and History of Science.” In: Vosniadou, S. (ed.), International Handbook of Research on Conceptual Change, London: Routledge, 345 – 373.
Bartels, A. (2010), “Explaining Referential Stability of Physics Concepts: The Semantic Embedding Approach.” In: Journal for General Philosophy of Science 41, 267 – 281.
Buchwald, J. Z. (1992), “Kinds and the Wave Theory of Light.” In: Studies in History and Philosophy of Science 23, 39 – 74.
Carnap, R. (1936/1937), “Testability and Meaning.” In: Philosophy of Science 3, 420 – 471; 4, 2 – 39.
Carnap, R. (1956), “The Methodological Character of Theoretical Concepts.” In: Feigl, H. / Scriven, M. (eds.), Foundations of Science and the Concepts of Psychology and Psychoanalysis (Minnesota Studies in the Philosophy of Science, vol. 1), Minneapolis: University of Minnesota Press, 38 – 76.
Carnap, R. (1959) / Psillos, S. (2000), “Rudolf Carnap’s ‘Theoretical Concepts in Science’.” In: Studies in History and Philosophy of Science 31, 151 – 172.
Cartwright, N. (1983), How the Laws of Physics Lie, Oxford: Oxford University Press.
Chang, H. (2004), Inventing Temperature: Measurement and Scientific Progress, Oxford: Oxford University Press.
Dear, P. (2005), “What Is the History of Science the History Of?” In: Isis 96, 390 – 406.
Feest, U. (2010), “Concepts as Tools in the Experimental Generation of Knowledge in Cognitive Neuropsychology.” In: Spontaneous Generations: A Journal for the History and Philosophy of Science 4, 173 – 190.
Feest, U. (2011), “Remembering (Short-Term) Memory: Oscillations of an Epistemic Thing.” In: Erkenntnis 75, 391 – 411.


Feigl, H. (1970), “The ‘Orthodox’ View of Theories.” In: Radner, M. / Winokur, S. (eds.), Theories and Methods of Physics and Psychology (Minnesota Studies in the Philosophy of Science, vol. IV), Minneapolis: University of Minnesota Press, 3 – 16.
Feyerabend, P. K. (1958/1981), “An Attempt at a Realistic Interpretation of Experience.” Reprinted in: Feyerabend, P. K., Realism, Rationalism and Scientific Method: Philosophical Papers, vol. 1, Cambridge: Cambridge University Press, 17 – 36.
Feyerabend, P. K. (1960/1999), “The Problem of the Existence of Theoretical Entities.” Reprinted in: Feyerabend, P. K., Knowledge, Science and Relativism: Philosophical Papers, vol. 3, Cambridge: Cambridge University Press, 16 – 49.
Feyerabend, P. K. (1962/1981), “Explanation, Reduction and Empiricism.” Reprinted in: Feyerabend, P. K., Realism, Rationalism and Scientific Method: Philosophical Papers, vol. 1, Cambridge: Cambridge University Press, 44 – 96.
Feyerabend, P. K. (1965/1981), “On the ‘Meaning’ of Scientific Terms.” Reprinted in: Feyerabend, P. K., Realism, Rationalism and Scientific Method: Philosophical Papers, vol. 1, Cambridge: Cambridge University Press, 97 – 103.
Fraassen, B. C. van (2008), Scientific Representation: Paradoxes of Perspective, Oxford: Oxford University Press.
Fraassen, B. C. van (2009), “The Perils of Perrin, in the Hands of Philosophers.” In: Philosophical Studies 143, 5 – 24.
Fuller, S. (1991), “Is History and Philosophy of Science Withering on the Vine?” In: Philosophy of the Social Sciences 21, 149 – 174.
Gavroglu, K. (2001), “From Defiant Youth to Conformist Adulthood: The Sad Story of Liquid Helium.” In: Physics in Perspective 3, 165 – 188.
Gavroglu, K. / Goudaroulis, Y. (1989), Methodological Aspects of the Development of Low Temperature Physics: Concepts out of Contexts, Dordrecht: Kluwer.
Gooding, D. (1990), Experiment and the Making of Meaning: Human Agency in Scientific Observation and Experiment, Dordrecht: Kluwer.
Hacking, I. (1983), Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
Hacking, I. (2007), “Putnam’s Theory of Natural Kinds and Their Names Is Not the Same as Kripke’s.” In: Principia 11, 1 – 24.
Hempel, C. G. (1973/2001), “The Meaning of Theoretical Terms: A Critique of the Standard Empiricist Construal.” Reprinted in: Fetzer, J. H. (ed.), The Philosophy of Carl G. Hempel: Studies in Science, Explanation, and Rationality, Oxford: Oxford University Press, 208 – 217.
Hull, D. L. (1975), “Central Subjects and Historical Narratives.” In: History and Theory 14, 253 – 274.
Kroon, F. (2011), “Theory-Dependence, Warranted Reference, and the Epistemic Dimensions of Realism.” In: European Journal for Philosophy of Science 1, 173 – 191.
Kuukkanen, J.-M. (2008), “Making Sense of Conceptual Change.” In: History and Theory 47, 351 – 372.


Millikan, R. A. (1917), The Electron: Its Isolation and Measurement and the Determination of Some of Its Properties, Chicago: The University of Chicago Press.
Millikan, R. A. (1947), Electrons (+ and –), Protons, Photons, Neutrons, Mesotrons, and Cosmic Rays, rev. ed., Chicago: The University of Chicago Press.
Nersessian, N. J. (2008), Creating Scientific Concepts, Cambridge, Mass.: The MIT Press.
Psillos, S. (2007), “Reflections on Conceptual Change.” In: Vosniadou, S. / Baltas, A. / Vamvakoussi, X. (eds.), Reframing the Conceptual Change Approach in Learning and Instruction, Amsterdam: Elsevier, 83 – 87.
Putnam, H. (1962/1975), “What Theories Are Not.” Reprinted in: Putnam, H., Mathematics, Matter and Method: Philosophical Papers, vol. 1, Cambridge: Cambridge University Press, 215 – 227.
Putnam, H. (1973/1975), “Explanation and Reference.” Reprinted in: Putnam, H., Mind, Language and Reality: Philosophical Papers, vol. 2, Cambridge: Cambridge University Press, 196 – 214.
Putnam, H. (1975), “Introduction: Philosophy of Language and the Rest of Philosophy.” In: Putnam, H., Mind, Language and Reality: Philosophical Papers, vol. 2, Cambridge: Cambridge University Press, vii–xvii.
Radder, H. (1995), “Experimenting in the Natural Sciences: A Philosophical Approach.” In: Buchwald, J. Z. (ed.), Scientific Practice, Chicago: The University of Chicago Press, 56 – 86.
Radder, H. (2006), The World Observed / The World Conceived, Pittsburgh: University of Pittsburgh Press.
Rouse, J. (2011), “Articulating the World: Experimental Systems and Conceptual Understanding.” In: International Studies in the Philosophy of Science 25, 243 – 254.
Star, S. L. / Griesemer, J. R. (1989), “Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907 – 39.” In: Social Studies of Science 19, 387 – 420.
Staley, R. (2008), Einstein’s Generation: The Origins of the Relativity Revolution, Chicago: University of Chicago Press.
Steinle, F. (2005), “Experiment and Concept Formation.” In: Hájek, P. / Valdés-Villanueva, L. M. / Westerståhl, D. (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the Twelfth International Congress, London: King’s College Publications, 521 – 536.
Steinle, F. (2009a), “Scientific Change and Empirical Concepts.” In: Centaurus 51, 305 – 313.
Steinle, F. (2009b), “How Experiments Make Concepts Fail: Faraday and Magnetic Curves.” In: Hon, G. / Schickore, J. / Steinle, F. (eds.), Going Amiss in Experimental Research (Boston Studies in the Philosophy of Science 267), Dordrecht: Springer.
Wilson, C. T. R. (1923), “Investigations on X-Rays and β-Rays by the Cloud Method, Part I: X-Rays.” In: Proceedings of the Royal Society of London, series A, vol. 104, no. 724, 1 – 24.

Exploratory Experiments, Concept Formation, and Theory Construction in Psychology

Uljana Feest

1. Introduction

For the past 30 years, it has been a guiding assumption of the epistemology of experimentation that “experimentation has many lives of its own” (Hacking 1983, 165). There are several prominent ways in which this statement was taken up in the subsequent literature. One, which we will treat as a point of departure for this article, is that the function played by scientific experiments cannot be reduced to theory testing: experiments also play other important roles, such as the exploration of phenomena, the calibration of instruments, or the robust detection of entities or phenomena (e. g., Woodward 1989; Franklin 1999; Hacking 1983). To this list we might add the formation of concepts about the entities or phenomena that are thought to be the objects of the investigative endeavors in question (e. g., Steinle 1997; Feest 2010). For the purposes of this paper, I will assume that when the concepts used to describe a given domain of experimental research are still very much in flux, we can refer to the corresponding experiments as ‘exploratory’ (Burian forthcoming).

I am in complete agreement with the assertion that there is more to scientific experiments than can comfortably be described as ‘theory testing.’ Moreover, especially in the context of exploratory experiments, we may assume that there are no ‘finished,’ full-fledged theories waiting to be tested (see also Arabatzis in this volume). From this latter point it does not follow, however, that theories play no role in such experiments, as might be claimed by a simplistic account of Baconian inductive method. In this vein, some recent work about exploratory experimentation has taken up the question of the extent to which theories enter into experimental processes, even if the experiments are not designed to test theories (e. g., Franklin 2005; Waters 2007; Elliott 2007; Burian 2007).
If we assume, then, that exploratory experiments often aim at the formation of concepts of specific objects of research, the question arises how we should relate the process of forming a concept of a given object or phenomenon to that of constructing a theory of it. In the philosophical literature about experiments, there has been a tendency to argue that these two processes can be separated. This has the welcome consequence of being able to argue that it is possible to attain robust knowledge about objects or phenomena in the world while avoiding problems of theory-ladenness (e. g., Bogen/Woodward 1988; Woodward 1989). A similar argument is also presented by Arabatzis (this volume). If we want to evaluate the accuracy and scope of these kinds of arguments, we need to take a close look at specific cases and analyze not only the notion of theory in play, but also ask about the nature of the phenomena or objects towards which research efforts are directed in any given domain. In this paper I plan to do both of these things by narrowing down the question as one concerning concept formation and theory construction in exploratory research in cognitive psychology.

When there is talk of something being constructed or formed, one obvious question is what kinds of tools are used in the process. In an earlier paper (Feest 2010) I argued that in certain kinds of experimental contexts the scientific concepts of cognitive neuroscience are best understood as tools. In hindsight, a more accurate formulation of this (my own) point would have been that particular kinds of definitions of concepts (i. e., operational definitions) are best understood as tools, namely tools that structure the kinds of experimental interventions that are performed.1 They do so by articulating what are thought to be typical conditions of application for the concept in question, where these ideas are expressed in terms of experimental operations that one would perform in order to investigate the objects thought to be in the extension of the concept.2 While I previously referred to the goal of using such definitions as that of generating knowledge, this paper argues that the process of knowledge generation should be unpacked as one of concept formation and theory construction. My thesis is that—at least in cognitive psychology—these two activities are often closely intertwined.

1 While I focused my attention on the productive function of operational definitions, I did not mean to reduce scientific concepts to definitions. Nor did I mean to say that operational definitions are the only productive factor in the research process. For example, it is conceivable, as argued by several contributions to this volume, that other aspects of concepts can be productive, too, in that they structure the kinds of questions and hypotheses pursued about a given research object.
2 I am using the term “object” in a deliberately broad sense to include the kinds of entities, processes, or phenomena that can be objects of scientific research.

The thesis just stated is at odds with at least some of the arguments from the philosophical literature about experimentation mentioned above. It is not my aim in this article to provide a detailed analysis of whether and how my thesis is really in conflict with those arguments. Providing such an analysis would involve discussing whether my account of concept formation in psychology generalizes to other scientific disciplines.3 Instead, my aim in this paper is to develop my own proposal in more detail, for the time being focusing on cognitive (neuro-)psychology. I will do so by explaining it in relation to (a) a couple of articulations of similar theses in 20th century philosophy and psychology and (b) recent trends within the philosophy of psychology and neuroscience. I will begin (in sections 2 and 3) by outlining the positions of the philosopher Carl Hempel and the psychologist Clark Hull, both of whom articulated views that are prima facie quite similar to the thesis argued for here. In doing so, I hope not only to elucidate the content and scope of my thesis, but also to highlight some aspects of 20th-century philosophical discourse about science which, by setting certain problem agendas and providing certain methods and fundamental presuppositions, have until recently stood in the way of pursuing the question addressed here. Section 4 will briefly summarize my analysis of the epistemic function of operational definitions, highlighting both convergences and disagreements with Hempel and Hull’s views.
Section 5 will argue that, especially with regard to psychology and neuroscience, the recent literature about mechanisms is congenial to the way I want to think about knowledge generation in cognitive neuropsychology: as a process whereby concept formation and theory construction are closely intertwined. Section 6, finally, returns to the role of operational definitions and experiments in this process.

3. Theo Arabatzis (in conversation) has suggested to me that the seeming conflict is perhaps due to the specific nature of the phenomena studied in psychology.
2. Carl Hempel and the Demand that Concepts Have Systematic Import

Going back to the logical empiricists’ distinction between the contexts of discovery and justification, we do not typically think of logical empiricists as having had much to say about the dynamics of the research process qua process. Hence, it may come as a bit of a surprise to see that Carl Hempel formulated a thesis about this process, namely that “[c]oncept formation and theory formation in science are so closely intertwined as to constitute virtually two aspects of the same procedure” (Hempel 1952, 1/2). This thesis was repeated some 14 years later, in Hempel’s Philosophy of Natural Science, where he stated that “[i]n scientific inquiry, concept formation and theory formation go hand in hand” (Hempel 1966, 97). Given the topic of this volume (scientific concepts and investigative practice), it is worth considering Hempel’s argument for this thesis, which is in fact closely related to his analysis of the epistemic function of scientific concepts in investigative practice. According to this analysis, it is not enough that concepts have (partial) empirical content. In addition, they have to have systematic import. In this vein Hempel wrote that “it must not be forgotten that good scientific constructs must also possess theoretical, or systematic, import; i.e., they must permit the establishment of explanatory and predictive principles in the form of general laws and theories” (Hempel 1952, 46). Such laws connect the concept in question with other characteristics, thereby allowing for predictions and explanations. Hempel’s point was that a relatively ‘unformed’ empirical concept will not have a lot of systematic import, because even if we have well-defined conditions of application for the concept, we cannot infer much from this (i.e., we cannot use it to derive any interesting predictions).
According to Hempel, concepts gain such import to the extent that they are increasingly well embedded in networks of law statements. This, in turn, means that the more mature the concept, the more formed the theory that provides it with systematic import. This point, also familiar from his 1958 “The Theoretician’s Dilemma” (Hempel 1958), can be illustrated by means of the example of a concept that has empirical, but (prima facie) no systematic, import: hage of a person = (height in mm) x (age in years). As Hempel points out, the systematic import of this concept is doubtful, “for we have no general laws connecting the hage of a person with other characteristics” (Hempel 1952, 46). In other words, in the absence of more general laws and theories we lack criteria for determining whether an empirically defined concept is scientifically fruitful. Another way of putting this might be to say that even though it is possible to specify any number of empirical concepts in this fashion, we have an intuition to the effect that they will not all be equally projectible, and that projectibility is ultimately underwritten by laws or regularities. Hempel recognized, of course, that such laws may not yet be available “at the early and exploratory stages of research” (Hempel 1952, 49). This remark makes it clear why Hempel viewed concept formation and theory construction as so closely related: The elaboration of a given scientific concept will involve showing that it is projectible, and this in turn will involve constructing a theory about the entities or phenomena in the extension of the concept. As I will explain in more detail in section 4 below, it seems to me that operational definitions, typically used at the early stages of exploratory experimentation (but not only then!), have a status not so dissimilar from Hempel’s definition of “hage”: They provide instructions for how to apply an empirical concept, but the question of whether it is a concept with genuine scientific import cannot be answered until the concept’s referent is better understood.4 The follow-up question, then, concerns the processes whereby such an understanding is gained. I shall follow Hempel’s analysis, according to which this process of forming the concept involves formulating statements that describe, predict and explain the behavior of the phenomena in the extension of the concept. (The formulation of such statements is what Hempel calls “theory construction”). In referring to tentative definitions as providing a starting point for scientific concept formation, I do not wish to suggest that they are formulated on a blank slate (as it were).
In psychology and the social sciences especially, concepts are often imported from everyday language, and these everyday meanings typically inform the ways in which scientists think about the concepts’ referents.5 The process of scientific concept formation, then, can be understood as one in which these everyday intuitions are put on a more rigorous and explicit footing.

4. I do not mean to suggest that the concepts typically introduced by means of operational definitions are as arbitrary as the concept of hage. See also Block (this volume). On the contrary, ‘genuine’ scientific concepts, even if their empirical conditions of application are formulated in terms of operational definitions, are scientifically productive by virtue of a number of conceptual assumptions not contained in the ‘definition’ (see also Boon this volume).
5. Here, too, Hempel is right on the mark when he observes that researchers tend to start out with concepts that already have everyday meanings. It is by virtue of this everyday meaning that scientists determine whether a given concept is likely to have systematic import, and hence, whether the research in question is worth pursuing. This point is also emphasized by Kindi (this volume).

Hempel himself did not view the status of operational definitions in the way just outlined. Part of the explanation for this is probably that his primary interest did not lie in accounting for the dynamics of knowledge generation.6 Rather, he aimed at analyzing scientific theories and their empirical interpretation. By 1950, it had become a commonplace of logical empiricism that the empirical content of a theory could not be provided by means of empirical definitions for each and every concept (Hempel 1965).7 By the time Hempel specifically wrote about operationism, this lesson had sunk in. He thus rejected the view, commonly (though perhaps not accurately) ascribed to Percy Bridgman, according to which a concept’s meaning can be exhaustively defined in terms of operations and their observable effects, and held instead that if the notion of operationism was to be useful at all, it needed to be broadened so as to be “just another formulation of the empiricist requirement of testability for the theories of empirical science” (Hempel 1952, 43), a requirement he saw addressed, for example, in Carnap’s 1936 notion of a reduction sentence (see also Hempel 1956; 1966). In this vein, he offered a reading according to which operational definitions are reduction sentences, which provide partial empirical interpretations to axiomatic systems. As I will explain in section 4 below, this reading is quite compatible with my analysis of their function.

6. Hempel’s assertion, thus, was not based on a specific thesis about how the practices of concept formation and theory construction are intertwined. It was rather the conclusion of an argument that takes as a premise that scientific concepts, by virtue of having a particular epistemic function in the practices of scientific reasoning, have to be interconnected with scientific theories.
7. The article goes back to two earlier ones that were written in 1950 and 1951.

Summarizing this section, we may say that Hempel presented an argument in support of his thesis about the close relationship between concept formation and theory construction. However, he did not have much to say about how this dynamic process unfolds. In particular, he did not address the function of scientific concepts or definitions during the process of theory construction. By contrast, the practicing experimental psychologist Clark Hull (1884–1952), working only a few years before Hempel, not only addressed this issue, but also did so in a way that is compatible with my thesis about the role of operational definitions. It is his account that we turn to next.

3. Clark Hull and the Bottom-Up Construction of an Axiomatic System

Hull, who shared with Hempel the conviction that theories are axiomatic systems and was firmly committed to the ideal of psychology striving for the formulation of such axiomatic theories, is best known for his book Principles of Behavior: An Introduction to Behavior Theory (Hull 1943a). This book was supposed to be the first in a series of three volumes. It presented the fundamental axioms from which hypotheses about mammalian behavior were to be deduced.8 Between 1929 and 1942, Hull published a series of theoretical papers, in which he laid down the foundations of the ideas which were to appear in his books (these papers are reprinted in Amsel/Rashotte 1984). He also co-authored a book, A Mathematico-Deductive Theory of Rote Learning, in which the authors tried to cast their theory into a rigorously logical form (Hull et al. 1940). The reason why Clark Hull is of interest to us here is that he attempted to combine a neo-behaviorist outlook with the very ambitious project of formulating an axiomatic theory of behavior. The former commitment meant that he wanted to ground his theoretical concepts as firmly as possible in the empirical data. The latter commitment meant that he thought of scientific theories as highly abstract formal structures, from which empirical predictions were to be derived deductively. With regard to both of these ideas, his outlook was quite compatible with that of Carl Hempel and other logical empiricists. Moreover, like Hempel he thought that while theoretical concepts had to be securely tied to observational contexts, this alone was not sufficient for their status as genuinely theoretical concepts. Thus he stated that “[a] term to be at all useful ordinarily represents a class or group of phenomena all of which under specified conditions behave according to the same natural principles or laws” (Hull et al. 1940, 4).
8. The second volume appeared in 1952.

I take this to mean (in accordance with some of the views we attributed to Hempel above)
that the corresponding concept would have to be defined or characterized in such a way as to embed it in all the relevant law statements, such that predictions and explanations of regularities about the objects or phenomena in the extension of the concept could be derived. There was one decisive difference between Hull and Hempel, however. As will be recalled, Hempel was interested in the analysis and empirical interpretation of theories, and he viewed operational definitions (conceived of as reduction sentences) as providing partial empirical interpretations of theories, conceived of as already existing axiomatic systems. By contrast, it was Hull’s aim to construct such an axiomatic system by empirical means from the bottom up. However, since he was also committed to a hypothetico-deductive method, he proceeded by using axioms already containing “primitive” theoretical terms to derive empirical predictions, which would be tested experimentally. As I will argue now, these primitive terms were in fact introduced by means of operational definitions, which Hull derived from empirical regularities that were already known to occur as a result of particular experimental manipulations. Let me briefly explain this with an example, which also illustrates the development of his own thinking about this. In trying to account for rote-learning phenomena, Hull’s initial ambition (in his earlier papers) was to formulate hypotheses, and to derive empirical predictions in support of these hypotheses from a number of (a) operational definitions and (b) postulates, where only the latter were meant to express factual knowledge. But a closer look reveals that the distinction between the two was not so clear, and that in fact both were based on empirical regularities.
For example, in a paper from 1935, Hull defined “[a]n ‘excitatory tendency’, as emanating from a stimulus,” as “a tendency for a reaction to take place and (sometimes) more vigorously, all things being equal, after organism has received stimulus at other times” (Hull 1935, 501, definition 3). This definition is ‘operational’ (in the sense outlined above), in that it specifies a particular experimental set-up (exposing subjects to stimuli and taking vigorous response as indicative of excitatory tendency). Such definitions were complemented by what Hull called “postulates,” which were meant to be factual statements about the phenomena picked out by the definitions. For example, “[t]he period of delay of trace conditioned reflexes possesses a power to inhibit (temporarily) to a certain extent the functional strengths of excitatory tendencies, the reactions of which would otherwise take place during such period” (ibid., postulate 3). However, as Hull soon came to realize, the formulations of both definitions and postulates were based on previously observed empirical regularities (e.g., by Ebbinghaus and Pavlov), and hence the decision when to refer to a statement as an operational definition of a concept and when to refer to it as an empirical postulate about the phenomena in the extension of the concept seemed somewhat arbitrary (Hull et al. 1940, 3). In his subsequent work, Hull therefore abandoned the distinction between operational definitions and postulates, instead distinguishing between undefined (or implicitly defined) primitive terms and postulates, where “excitatory potential” now figured as one of the former. The fact that he called this term “undefined” does not mean that he thought it devoid of meaning. All he meant to indicate was that its meaning is not provided by explicit definitions. It was implicitly defined by the axiom system it occurs in, i.e., by virtue of the postulates. However, as we saw, the distinction between postulates and operational definitions was not so clear, in that some of Hull’s postulates made empirical claims, which referred to typical conditions of application. Based on this fact, some of his contemporaries interpreted his approach as operationalist (e.g., Koch 1941; Bergman/Spence 1941). Even though he was, at the peak of his career, one of the best-known American psychologists, Hull’s methodology and theoretical system did not really get off the ground, and was not adopted or developed by other scientists. So, what stood in the way of Hull’s approach? Part of the answer may be that his theoretical system had too many variables to be manageable; another part may be that even though Hull was one of the few theoretically and methodologically inclined psychologists, the psychological community as a whole was rather hostile to excessive theorizing (see Amsel/Rashotte 1984).
However, I suggest that there was yet another reason, which had to do with the way Hull conceived of the very notion of a theory as a set of axioms that allow for deductive derivations of explanations and predictions, based on mathematically describable functional relationships among the different variables of the system. Hence, while Hull differed from Hempel and the logical empiricists in the kind of interest he took in theories (they were interested in analyzing theories, he was interested in constructing theories), he did not question the very notion of theory they presupposed. In fact, he was quite clear about his admiration for formal and deductive structures as a model for psychology (see Smith 1986). While I do not mean to suggest that mathematical psychology has no place in contemporary research, I argue that the typical way in which psychological theories came to be represented with the advent of cognitive psychology (and to this day) is in terms of flow charts, not in terms of mathematical formulae. This means that overall the main mode of psychological theorizing is causal-mechanical, not mathematical. It follows that when thinking about the relationship between concept formation and theory construction in psychology, it will be useful to try to explicate a notion of theory that takes seriously the importance of causal claims in psychological theorizing. Such explications have been provided by the recent literature about mechanisms, taking its cues from the by now seminal article by Machamer et al. (2000). This suggestion will be developed in section 5 below. First, however, I will provide a more detailed explication of my thesis about the role of operational definitions in the process of knowledge generation, already sketched above.

4. Operational Definitions as Tools of Knowledge Generation

Operation(al)ism—to the extent that it is discussed at all in contemporary philosophy—is typically viewed as a position closely related to logical positivism and its theory of meaning, verificationism, which famously stated that a sentence is meaningful only if it is either a tautology (i.e., analytically true by virtue of its meaning) or empirically verifiable (Carnap 1931), and that, moreover, the meaning of a concept could be reduced to its empirical conditions of application. Given Percy Bridgman’s famous statement that “in general, we mean by a concept nothing more than a set of operations; the concept is synonymous with the corresponding set of operations” (Bridgman 1927, 5), it is easy to see why many philosophers have taken operationism to be yet another version of verificationism. Thus interpreted, all the things that were subsequently judged to be wrong with verificationism (as laid down, for example, in Hempel 1950) would also pose problems for operationism. In this vein, the doctrine of operationism has over the years been severely attacked both in philosophy (e.g., Suppe 1977, 19) and psychology (e.g., Green 1992). Such critiques would lose some of their bite, however, if it could be shown that operationism, as intended by Bridgman, or as applied within psychology, was never really (or primarily) aimed at a formulation of an empiricist criterion of meaning, but rather at the generation of knowledge. In a previous publication I have argued that this can indeed be shown (Feest 2005). Based on an analysis of the experimental work of early advocates of operationism in psychology, I argued that the aim of the researchers in question was not to reduce the meanings of their central concepts (such as that of experience or of hunger) to empirical conditions of application. Quite to the contrary, they acknowledged that the meanings of such concepts are not exhausted by testing procedures, but stressed that in order to conduct scientific experiments one had to be able to specify testing procedures that one would use to detect the states referred to by such concepts.9 Deriving a philosophical lesson from this historical analysis, I have more recently argued that operational definitions should be viewed as tools in an ongoing process of knowledge generation by experimental means (Feest 2010). On my construal, an operational definition specifies an experimental operation, which, when conducted, produces experimental data that are treated as indicative of the mental state or psychological phenomenon under investigation. For example, if my object of study is short-term memory, my operational definition might state that if a particular simple span test is administered then short-term memory is present if and only if the subject’s performance gets worse with increasing numbers of items that have to be remembered (Feest 2011b). In addition, I argue that since operational definitions specify paradigmatic conditions of experimental application for a given term, they are closely tied to experimental paradigms typically used for the investigation of particular objects of research (see Sullivan 2009 for a closely related analysis of the notion of a paradigm). Notice that operational definitions, as construed here, have empirical presuppositions engrained in them: Not only is their formulation typically derived from the effects produced by applying a specific experimental paradigm, but in addition these effects are then treated as indicative of the—often not very well-understood—object of research.
9. Clark Hull did not figure in these case studies, but I argue that his work is also a case in point.

It is this latter fact that explains both why the concept of the object of research (to be formed in the course of investigating the object) cannot be reduced to the operational definition, and why the definition nonetheless has an important epistemic function, in that it allows for the experimental manipulation of the purported object under varying conditions. However, since any empirical presupposition may turn out to be false, it follows that operational definitions do not have the timeless or a priori status we sometimes associate with definitions. They are used for the practical purpose of investigating a purported phenomenon, and
they can be revised or discarded if they are judged to no longer serve this purpose. In this vein, Clark Hull writes that “[i]f it is discovered that some of the individual phenomena included within a definition behave according to a different set of principles or laws, the concept must either be abandoned or redefined” (Hull et al. 1940, 4). Notice that contrary to a common perception of operationism as closely tied to antirealist positions in philosophy of science, the account provided here is compatible with a robust ontological realism. However, even if we assume a realist stance, according to which there are mind-independent and stable objects and phenomena out there, to be discovered and described by psychology, the point highlighted by operationism is that especially in fields like psychology and neuroscience the exact identity conditions of those phenomena are often not known. This means that the question of whether we are entitled to a scientific realism about the referents of specific scientific concepts should be answered on a case-by-case basis, but is often simply premature. To summarize, on my account operational definitions have the following features: First, they do not exhaust the meanings of the concepts thus ‘defined.’ Second, they make empirical presuppositions about the (purported) objects or phenomena in the extension of the concept in question. Third, they function as tools in the process of exploring those objects or phenomena by experimental means, and they do so by specifying paradigmatic conditions of application for the concepts in question. Fourth, the tools (which can be revised or discarded in the process) are tools of knowledge generation. 
This analysis somewhat shifts the kind of focus philosophers typically have when talking about concepts and definitions: Rather than asking whether the meaning of a given concept can be fully explicated by means of an operational definition, the issue addressed here concerns the epistemic function of such definitions. Of course it is conceivable that a theory of meaning could be devised according to which the epistemic functions of operational definitions (and of other components of concepts) are crucial ingredients of a concept’s meaning. This possibility will not be pursued here.10 The account presented thus far provides some insights into the process of the exploration of a purported object or phenomenon of interest. Presented in this way, we may say that operational definitions play an important role in the formation and elaboration of a concept of the object/phenomenon in question. What I have not shown yet is that this process is intertwined with that of theory construction. It is this part of the argument that I turn to next. In doing so, I will pick up a suggestion I made at the end of section 3 above, namely that part of the reason for Clark Hull’s failure to really make his account of theory construction work was his desire to formulate a formal mathematical theory. This was somewhat at odds with his commitment to the idea that biological organisms are at base causal-mechanical machines (Hull 1962 [1929], 828). I will (a) suggest that by looking at the recent literature on mechanistic explanations we can sharpen our analysis of theory construction in psychology, while (b) using this analysis to provide an alternative (or supplementary) argument for Hempel’s thesis that concept formation and theory construction go hand in hand.

10. However, see Brigandt (this volume) for an attempt to argue that a concept’s epistemic function should be included in an analysis of its content.

5. (Mechanistic) Theory Construction and Concept Formation

The notion of theory presupposed by both Hempel and Hull is much in accordance with a still dominant philosophical tradition, which seeks to identify formal properties common to all theories. We may contrast this type of analysis with one that pays more attention to non-formal patterns in theories (see Craver 2002). One recent example of such a non-formal analysis holds that scientific theorizing, especially in neuroscience and the biological sciences, proceeds by way of constructing models of mechanisms, where mechanisms are conceived of as entities and activities organized such as to realize (exhibit or cause) instances of a phenomenon of interest. There is by now an extensive literature about this notion, which was anticipated in the works of Glennan (1997) and Bechtel and Richardson (1993), and found a much-cited formulation in Machamer et al. (2000). We do not need to go into the details of this literature here to appreciate the basic descriptive point made by advocates of mechanistic approaches, namely that in many domains of the biological sciences the explanation statements provided make reference to mechanisms rather than laws. As Carl Craver lays out, a mechanistic approach is attractive for philosophy of science because it seems to be able to account for “theories in the wild,” which “are sometimes written in a natural language; they are also charted, graphed, diagrammed, expressed in equations, explicated by exemplars, and (increasingly) animated in the streaming images of web pages” (Craver 2002, 58).


Space does not permit me to cover the mechanism literature in any great detail, but two features are of particular interest for our purposes. They are (a) that “in the wild” scientific descriptions of mechanisms are often partial and inadequate, and (b) that mechanisms are individuated by means of causal manipulations (Craver 2007, 94 ff.). The first point highlights the construction of mechanistic theories as an ongoing process, whereas the second point highlights questions about experimental approaches to the study of mechanisms (and thereby, presumably, to the dynamics of constructing mechanistic explanations). With respect to the processes by which more and more comprehensive models of mechanisms are constructed, friends of mechanisms have distinguished between various stages of the construction of a mechanistic explanation (sketch, schema, and model of a mechanism), and have moreover proposed that the entities and processes invoked by a mechanistic explanation can be discovered by an iterative top-down and bottom-up process, whereby phenomenal (behavioral) descriptions of the activities of a hypothetical mechanism guide the search for the neural correlate and vice versa (e.g., Craver/Darden 2001; Bechtel 2008). Now, in terms of our question about the relationship between concept formation and theory construction, how should we construe this kind of situation? The first issue we need to confront here is whether a description of a mechanism is a concept or a theory. On the one hand, it would seem that insofar as we can refer to individual mechanisms, for example the mechanism of long-term potentiation (LTP), we must surely have a concept of the mechanism. On the other hand, since mechanisms are meant to be explanatory, there is also a sense in which a description of a mechanism plays the role traditionally associated with theories.11

11. I am aware that there is some disagreement over what actually does the explaining: the mechanism itself, or the description of the mechanism (e.g., Craver 2007 vs. Bechtel 2008). However, for the purposes of this paper, this question is not relevant. Regardless of what a mechanistic explanation ‘really’ is, models or descriptions of mechanisms take over the cognitive role traditionally associated with theories.

Hence, if we think of the process by which more and more robust knowledge about a given mechanism is acquired as one of concept formation, it is not obvious how such a process could be distinguished from that of theory construction. There are two possible replies to the point just made. First, it could be objected that this is a purely terminological issue: Whether we refer to a description of a mechanism as a concept or a theory is irrelevant, so long as it is possible to provide empirical criteria that clearly delineate any one particular brain mechanism (such as that of LTP) from other brain mechanisms, thereby demonstrating how one could have robust knowledge about it even in the absence of an adequate theory of learning. This argument would thus state that the formation of a concept of the mechanism in question can be separated from the construction of a theory of learning or of the brain. The other possible response could be to say that my above construal confuses questions about the explanans and the explanandum of mechanistic explanations. In this vein, it might be suggested that it is possible to form an empirical concept of the explanandum phenomenon, and that the knowledge engrained in this concept will remain robust even if the scientific account of the explanatory mechanisms (the explanans) shifts. However, if we look closely at both objections, they support rather than weaken the position I wish to argue for here. I will look at both points in turn. We will begin with the question of what might be empirical criteria that individuate a particular mechanism as distinct from others. This is where causal interventions come into play. Drawing on James Woodward’s manipulationist account of causality (Woodward 2004), Craver argues that we should think of the identity conditions of a causal mechanism as provided by an “ideal intervention.” Such an intervention “I on X with respect to Y is a change in the value of X that changes Y, if at all, only via the change in X” (Craver 2007, 96). The idea, then, is that a mechanism is individuated by stating (counterfactually) the conditions under which it would be executed. For example,
For example, [w]hen one asserts a causal relevance relation between the firing rate of the pre-synaptic cell and the strength of the synapse, one asserts [that] when one alters the firing rate of the pre-synaptic cell in specified ways using an ideal intervention, then one either changes the strength of the synapse or changes the probability that he strength of the synapse would be strengthened. (Craver 2007, 98)

I find this to be an adequate characterization of the ways in which an experimental scientist might reason about causal claims. However, for our purposes, the difference between an ideal intervention and a real intervention is crucial, since after all our interest here is not in providing a conceptual analysis of what constitutes a causal explanatory theory, but in understanding what is involved in constructing such a theory. The important point is that when conducting a real intervention, one cannot be sure that one has in fact been able to control all the variables that
could conceivably confound the results. Craver acknowledges this when he states that “experimental situations are in many ways not ideal,” adding that this “is an important insight about our epistemic situation with respect to the causal structure of the world” (Craver 2007, 97). He does not, however, develop this important insight further. This allows me to rephrase the question of this article by saying that if we try to understand the processes by which knowledge is generated in practice, then we need to take seriously the epistemic situation a scientist is in at any given point in time. In particular, scientists may not have been able to control all the potentially confounding variables for the simple reason that they do not know what they are. This means that the more that is known about the mechanisms operative in a complex domain like the brain, the more confident we can be that a given experimental intervention really does individuate one particular mechanism, rather than being confounded by others. To my mind, this is tantamount to saying that the generation of robust knowledge about one particular mechanism cannot be separated from the construction of a mechanistic theory of the domain in which it is thought to occur.
Let us turn to the other potential objection to my claim about the close relationship between concept formation and theory construction, namely that I conflate the question of what it takes to form a concept of an explanandum phenomenon with the question of what it takes to construct an explanatory theory for it. In order to respond to this charge, we need to be clearer on what exactly the types of explanandum phenomena in psychology and cognitive neuroscience are. A few words need to be said about the usage of the term “phenomenon” in play here. In general, I am sympathetic to Bogen and Woodward’s analysis, according to which phenomena are stable and regular features in the world (Bogen/Woodward 1988).
Within the philosophy of psychology, the term is also sometimes used, though in a slightly different way. For example, William Bechtel (2008), in his analysis of the way in which explanatory mechanisms are individuated, talks about the interplay between descriptions of empirical regularities and mechanistic explanations of them, where only the former are referred to as “phenomena.” In turn, Craver (2007), when discussing the possibility of providing a mechanistic explanation for the phenomenon spatial memory, does not give a very clear analysis of what he takes this phenomenon to
be, and more generally what he means by a “phenomenon.”12 It seems to me that this points to a largely unexplored question within the philosophy of psychology: there has been a lot of work on the question of what constitutes a mechanistic explanation, but relatively little on how to think about the very notion of an explanandum phenomenon in that domain. The objection that I conflate explanans and explanandum presupposes that scientists typically have a clear-cut notion of an empirical explanandum phenomenon, awaiting an explanation in terms of a mechanism. By contrast, I argue that the very question of what constitutes the relevant explanandum phenomenon can shift in the course of research, and that this research is best understood as an ongoing process of theory construction, where accounts of phenomena at various levels are mutually adjusted to one another, and where especially the causal and evidential relations between them are discussed and evaluated (see also Feest 2011a).13 Space does not permit me to go into this point in much detail, but my basic contention is that when cognitive scientists approach complex objects of scientific inquiry they bring to their study a number of loosely related conceptual assumptions about phenomena thought to be relevant to the objects of study, where the phenomena in question include prima facie quite divergent kinds of processes and entities, such as behavioral regularities, neural and molecular mechanisms, brain regions, etc. Operational definitions, on my account, are used in the process of attempting to integrate these various features of the presumed objects of research.

12 See also Sullivan 2010 for a critique of Craver’s treatment of ‘the phenomenon’ of spatial memory.
13 I take this conclusion to be compatible with Craver’s account of lumping and splitting errors (Craver 2007, 123) and also with Bechtel and Richardson’s account of the reconstitution of phenomena (Bechtel/Richardson 1993).

5. Operational Definitions and the Experimental Generation of Knowledge

Summarizing the previous section, I argue that mechanisms as phenomena and as explanations of phenomena have an intriguing dual status that makes it hard to see how the formation of a concept of a psychological phenomenon could proceed independently of the construction of an explanatory theory for it. However, this analysis still needs to be tied
up with my account of the role of operational definitions. After all, my thesis was not only that concept formation and theory formation go hand in hand, but also that preliminary, ‘operational,’ definitions of the relevant concepts can function as tools in the process of constructing both the concept and the theory. The question, then, is how this can be done. We will begin by reviewing some of the complications introduced by the previous section.
One way of rephrasing the two arguments of the previous section is to say that (a) given the lack of knowledge about the domain, scientists face the challenge of having to demonstrate that specific experimental effects really are indicative of the phenomenon under investigation, and (b) in cognitive psychology and neuroscience we typically do not deal with atomistically delineated entities, but rather with clusters of presumed or empirically observable phenomena thought to be related to, or subsumable under, a given general concept (e. g., the concept of spatial memory). The question, then, is how operational definitions can contribute to the tasks of (a) showing that any given empirical regularity (e. g., the navigational behavior of mice), tied to a given experimental paradigm and treated as indicative of a given phenomenon (e. g., spatial memory), really is indicative of it rather than something else, and (b) showing that different presumed phenomena (e. g., the ability of humans to find their way home; the neural mechanisms responsible for the ability in question; the brain area in which the relevant information is thought to be encoded, stored and retrieved) can in fact be relevant to, or subsumed under, the same concept. My answer, briefly, is that operational definitions, taken by themselves, cannot show either of these things. To expect this of them, however, would be to misconstrue the analysis I have provided of them.
What I am trying to argue is not that operational definitions can ‘single-handedly’ validate particular conceptual or theoretical assumptions. What I would like to argue, rather, is that any attempt to construct and/or validate such assumptions by empirical means will inevitably have to rely on operational definitions, i. e., on some presuppositions regarding the ways in which the data generated by an experiment are relevant to the phenomena under investigation. In the case of cognitive neuropsychology, these presuppositions are going to pertain to the ways in which various phenomena are related and the ways in which various cognitive mechanisms might conceivably contaminate the experimental data. It follows that the operational definitions – and the corresponding concepts – are going to be more and more refined (i. e., less and less
preliminary) insofar as they are constrained by a growing body of theoretical knowledge about the complex interplay of mechanisms in a given domain.

6. Conclusion

In this article I have narrowed down the question about scientific concepts and investigative practice in two ways. First, the types of scientific concepts I considered were concepts referring to the subject matter of cognitive neuropsychology. Second, the type of investigative practice I considered was that of conducting experiments, especially experiments aimed at exploring the object of research. With respect to scientific concepts, my question was both how they are formed as a result (and in the course) of experimentation, and how they themselves contribute to the ways in which experiments are designed and carried out. I argued that (1) in the context of experimental psychological research, preliminary and evolving, ‘operational’ definitions of concepts function as tools of knowledge generation, where (2) that process must be conceived of as a joint process of concept formation and theory construction. My argument for this thesis ultimately (in sections 4 and 5) turned on the specific nature of (mechanistic) theorizing in experimental cognitive (neuro-)psychology as well as the close relationship between explanations for, and identity conditions of, phenomena in that domain.
I approached this topic (in sections 2 and 3) by way of an analysis of two historical authors for two reasons. First, I believe that both Hempel and Hull, each in their own way, raised points that remain relevant for our purposes. Carl Hempel did so by highlighting the crucial importance of a concept’s systematic import to the scientific practices that use it. Clark Hull did so by drawing attention to the role of operational definitions in one particular such practice, i. e., that of constructing theories. Second, it seems to me that the limitations of each of these analyses with regard to our question point to some systematic blind spots in 20th century philosophy of science.
With regard to Hempel, this blind spot concerned his utter lack of interest in providing an analysis of the actual dynamics of theory construction and concept formation in the sciences. With regard to Hull, the blind spot concerned his preoccupation with the idea of a scientific theory as a formal axiomatic structure.

I suggested that we take both the insights and blind spots of these two authors as starting points for a positive account of the role of operational ‘definitions’ in knowledge generation. It is often helpful to turn to classical philosophical writers such as Hempel for a better understanding of the origins and limits of today’s philosophical discourse, but also in order to appreciate that they may still have some thought-provoking theses and insights to offer to more contemporary debates. Second, when debating questions about scientific practice, the ways in which scientists themselves reason about their practice are often a step ahead of philosophical accounts. In this vein, I argue that the methodological approach of operationism, as it was articulated and practiced by scientists from the 1930s onwards, is vital to an understanding of key issues in psychological experiments. With regard to the analysis provided in section 5, it is clear that it points to a number of far-reaching questions in the philosophy of psychology, touching in particular on the question of what constitutes a phenomenon in this domain, and how to give a philosophical account of the processes that lead to the construction of a mechanistic story of how various interrelated phenomena are connected. While I do not claim to have provided a conclusive argument for my claim about the close relationship between concept formation and theory construction in experimental cognitive neuropsychology, I do hope to have raised enough questions to drive home the necessity of pursuing this topic in more detail.

Acknowledgements

I would like to thank the other contributors to this volume for helpful discussions of a much earlier version of this paper. Particular thanks go to Dirk Schlimm. I am also grateful to Jim Bogen and Friedrich Steinle for enlightening comments on a recent version of this paper.

Reference List

Amsel, A. / Rashotte, M. (1984), Mechanisms of Adaptive Behavior: Clark L. Hull’s Theoretical Papers, with Commentary, New York: Columbia University Press.
Arabatzis, T. (this volume), “Experimentation and the Meaning of Scientific Concepts.”
Bechtel, W. (2008), Mental Mechanisms, New York: Routledge.
Bechtel, W. / Richardson, R. (1993), Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research, Princeton: Princeton University Press.
Bergmann, G. / Spence, K. (1941), “Operationism and Theory in Psychology.” In: Psychological Review 48, 1 – 14.
Bogen, J. / Woodward, J. (1988), “Saving the Phenomena.” In: The Philosophical Review XCVII, 303 – 352.
Boon, M. (this volume), “Scientific Concepts in the Engineering Sciences: Epistemic Tools for Creating and Intervening with Phenomena.”
Bridgman, P. (1927), The Logic of Modern Physics, New York: Macmillan.
Brigandt, I. (this volume), “The Dynamics of Scientific Concepts: The Relevance of Epistemic Aims and Values.”
Burian, R. (2007), “On MicroRNA and the Need for Exploratory Experimentation in Post-Genomic Molecular Biology.” In: History and Philosophy of the Life Sciences 29 (3), 285 – 312.
Burian, R. (forthcoming), “Experimentation, Exploratory.” In: Dubitzky, W. / Wolkenhauer, O. / Cho, K.-H. / Yokota, H. (eds.), Encyclopedia of Systems Biology, Berlin: Springer (in press).
Carnap, R. (1932), “Psychology in Physical Language.” In: Erkenntnis 3, 107 – 142. Reprinted in: Ayer, A. (ed.), Logical Positivism, Glencoe, Ill.: Free Press, 165 – 198.
Craver, C. F. (2002), “Structures of Scientific Theories.” In: Machamer, P. K. / Silberstein, M. (eds.), Blackwell Guide to the Philosophy of Science, Oxford: Blackwell.
Craver, C. F. (2007), Explaining the Brain, Oxford: Oxford University Press.
Craver, C. F. / Darden, L. (2001), “Discovering Mechanisms in Neurobiology: The Case of Spatial Memory.” In: Machamer, P. K. / Grush, R. / McLaughlin, P. (eds.), Theory and Method in Neuroscience, Pittsburgh, PA: University of Pittsburgh Press, 112 – 137.
Elliott, K. (2007), “Varieties of Exploratory Experimentation in Nanotoxicology.” In: History and Philosophy of the Life Sciences 29 (3), 311 – 334.
Feest, U. (2005), “Operationism in Psychology—What the Debate is about, What the Debate Should Be about.” In: Journal of the History of the Behavioral Sciences 41 (2), 131 – 149.
Feest, U. (2010), “Concepts as Tools in the Experimental Generation of Knowledge in Cognitive Neuropsychology.” In: Spontaneous Generations: A Journal for the History and Philosophy of Science 4 (1), 173 – 190.
Feest, U. (2011a), “What Exactly Is Stabilized When Phenomena Are Stabilized?” In: Synthese 182 (1), 57 – 71.
Feest, U. (2011b), “Remembering (Short-Term) Memory: Oscillations of an Epistemic Thing.” In: Erkenntnis 75 (3), 391 – 411.
Franklin, A. (1999), “How to Avoid the Experimenters’ Regress.” In: Can That Be Right? Essays on Experiment, Evidence, and Science, first published in 1994, Dordrecht: Kluwer, 13 – 38.
Franklin, L. (2005), “Exploratory Experiments.” In: Philosophy of Science 72, 888 – 899.
Glennan, S. (1996), “Mechanisms and the Nature of Causation.” In: Erkenntnis 44, 49 – 71.
Green, C. (1992), “Of Immortal Mythological Beasts: Operationism in Psychology.” In: Theory & Psychology 2, 291 – 320.
Hacking, I. (1983), Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
Hempel, C. G. (1952), “Fundamentals of Concept Formation in Empirical Science.” In: International Encyclopedia of Unified Science, vol. II, no. 7.
Hempel, C. G. (1956), “A Logical Appraisal of Operationism.” In: Frank, P. (ed.), The Validation of Scientific Theories, Boston: The Beacon Press.
Hempel, C. G. (1958), “The Theoretician’s Dilemma: A Study in the Logic of Theory Construction.” In: Minnesota Studies in the Philosophy of Science 2, 173 – 226.
Hempel, C. G. (1965), “Empiricist Criteria of Cognitive Significance: Problems and Changes.” In: Aspects of Scientific Explanation, and Other Essays in the Philosophy of Science, New York: The Free Press, 101 – 119.
Hempel, C. G. (1966), Philosophy of Natural Science, Englewood Cliffs, NJ: Prentice Hall.
Hull, C. (1935), “The Conflicting Psychologies of Learning—A Way Out.” In: Psychological Review 42, 491 – 516.
Hull, C. (1943), Principles of Behavior: An Introduction to Behavior Theory, New York: Appleton-Century.
Hull, C. (1952), A Behavior System: An Introduction to Behavior Theory Concerning the Individual Organism, New Haven: Yale University Press.
Hull, C. et al. (1940), Mathematico-Deductive Theory of Rote Learning: A Study in Scientific Methodology, New Haven: Yale University Press.
Kindi, V. (this volume), “Concept as Vessel and Concept as Use.”
Koch, S. (1941), “The Logical Character of the Motivation Concept I.” In: Psychological Review 48, 15 – 38; “The Logical Character of the Motivation Concept II.” In: Psychological Review 48, 127 – 144.
Machamer, P. / Darden, L. / Craver, C. (2000), “Thinking about Mechanisms.” In: Philosophy of Science 67, 1 – 25.
MacLeod, M. (this volume), “Rethinking Scientific Concepts for Research Contexts: The Case of the Classical Gene.”
Smith, L. (1986), Behaviorism and Logical Positivism: A Reassessment of Their Alliance, Stanford: Stanford University Press.
Steinle, F. (1997), “Entering New Fields: Exploratory Uses of Experimentation.” In: Philosophy of Science 64, 65 – 74.
Sullivan, J. (2009), “The Multiplicity of Experimental Protocols: A Challenge to Reductionist and Non-Reductionist Models of the Unity of Neuroscience.” In: Synthese 167, 511 – 539.
Sullivan, J. (2010), “Reconsidering ‘Spatial Memory’ and the Morris Water Maze.” In: Synthese 177, 261 – 283.
Suppe, F. (1977), The Structure of Scientific Theories, 2nd edition, Urbana: University of Illinois Press.
Waters, C. K. (2007), “The Nature and Context of Exploratory Experimentation.” In: History and Philosophy of the Life Sciences 29 (3), 275 – 284.
Woodward, J. (1989), “Data and Phenomena.” In: Synthese 79, 393 – 472.
Woodward, J. (2003), Making Things Happen: A Theory of Causal Explanation, Oxford/New York: Oxford University Press.

Early Concepts in Investigative Practice—The Case of the Virus

Corinne L. Bloch

1. Introduction

Uljana Feest points to the following puzzle, with regard to scientific concepts at the early stages of a scientific investigation:

In order to investigate a given phenomenon, one has to be able to empirically individuate instances of it. In order to be able to do so, one has to possess some concept of the phenomenon. The possession of a concept is generally taken to imply knowledge about the class of phenomena that it applies to. But how do we make sense of concept use in cases where scientists investigate phenomena or objects they don’t know much about, or perhaps aren’t even sure really exist? Clearly they must use concepts when conducting their research. (Feest 2010, 173)

This gives rise to two separate but intertwined questions. First, what do scientists have to know about a phenomenon in order to decide it warrants a new concept? That is, at what point do they decide that the phenomenon under investigation is a stable kind that should be treated separately from other kinds?1 How much do they have to know about the causal nature of the phenomenon in order to make such a judgment? Second, assuming that scientists decide that the new phenomenon indeed warrants a concept of its own, how do they individuate and define the phenomenon when so little is known about it? What would such a definition look like? Any definition scientists construct at the very beginning of the investigation of the phenomenon could not reflect its causal structure.2

1 I do not mean to imply here that the kind must be a natural kind. Rather, that it is a category that contains a phenomenon that is, in some important respects, uniform, and different from other phenomena.
2 An exception would be a theoretical concept for which a definition is formulated in terms of a hypothetical cause. While this is a useful tool in investigative practice, I would like to focus here on definitions that reflect actual scientific knowledge, and their involvement in the expansion of such knowledge.

Thus, such a definition would be nothing like the scientific definitions described by the modern scientific essentialist, who argues that scientific definitions specify the hidden causes of things as they appear to us; that they specify those characteristics that give rise to the nature of things qua members of a kind, distinguishing the members of one kind from other kinds (e. g., Ellis 2001, 35; Copi 1954; Dubs 1943). Putting aside the question of whether scientific kinds actually possess such eternal essences and whether later definitions indeed specify them (see Bloch 2011), it is clear, at the very least, that at the preliminary stages of a scientific investigation into a phenomenon, the scientist is not in a position to form definitions in terms of essential characteristics.
Feest addresses this problem by showing that scientists form what she calls an operational definition, which is a specification of “paradigmatic conditions of application for the concepts in question. These are cast in terms of a description of a typical experimental set-up thought to produce data that are indicative of the phenomenon picked out by the concept” (Feest 2010, 178). Thus, this operational definition serves to experimentally isolate instances of the phenomenon in question, without the requirement that the scientist already possesses a deep, causal understanding of that phenomenon. Accordingly, the definition does not serve to encompass the meaning of the newly formed concept, and it is likely to be revised as scientists learn new facts about the phenomenon.
This solution, in turn, gives rise to an additional question. What warrants the scientists’ assumption that the preliminary definition, which is formulated in experimental terms, consistently succeeds in picking out the kind under investigation? If the phenomenon under investigation is assumed to be stable and not idiosyncratic to particular experimental contexts (see Bogen/Woodward 1988), while the operational definition is dictated by the available experimental practices, the assumption that the definition reliably picks out the phenomenon may seem arbitrary. If scientists know nothing about the causal structure of the kind, what warrants their expectation that their experimental parameters will succeed in repeatedly isolating its instances?
In this paper, I will use the case study of the virus, focusing on its early conceptualization, to address these questions. In section 2, I will give a very brief overview of the historical development of the virus concept, focusing on the earliest stages in virus research. I will discuss
the role of the early definition in the experimental and theoretical development of the concept, showing that the early definition of the virus facilitated the experimental isolation and the empirical investigation into the nature of the virus, leading to the formulation of a more elaborate and a more causally-fundamental definition. In section 3, I will examine the formation of the virus concept, to provide insight into the theoretical background giving rise to a new concept. I will discuss the junctures, in the accumulation of scientific knowledge about the virus, in which scientists stopped treating it as a form of bacteria that presents different experimental results, and started treating it as a unique kind—with everything that this entails—requiring a separate concept. In section 4, I will discuss the characteristics of the early definition within its context of knowledge, showing that these characteristics have theoretical significance and are not arbitrarily chosen experimental parameters. This analysis will shed light on the way in which operational definitions get at the causal nature of the kind, and pick out its instances in a stable manner, distinguishing it from other kinds. It will also show how these preliminary definitions, despite the lack of scientific knowledge about the kind at the early stages of the investigation, condense and integrate the theoretical knowledge that scientists already possess. I will advocate a view of definitions under which there is no tension between a definition formulated in an experimental language and one that conveys insight into the essential nature of the kind, and I will therefore argue that the difference between the initial “operational definition” and a later definition is a difference in degree and not in kind.

2. The Development of the Virus Concept

Viruses are submicroscopic, obligate intracellular parasites. Unlike bacteria, they are unable to generate metabolic energy or to synthesize proteins. Before launching into the history of their discovery, I should specify the different ways in which the term has been used by scientists in the past. Until the first decades of the twentieth century, the term ‘virus’ was used broadly, to designate all infectious agents, including bacteria. By the 1940s, however, it was used more narrowly, to designate more or less the same group of parasites we now refer to as viruses.3 In this paper I usually use the term in this narrow manner. An exception is any instance of the quoted phrase ‘filterable virus’, in which the term virus is used in its earlier, broader meaning, making ‘filterable virus’ a specific subgroup of that broad class, i. e., those infectious agents that are able to pass through a filter.
Having clarified the terms, we are now set for a very brief sketch of the history. The germ theory, which was developed in the 19th century, maintained that infectious diseases were caused by microorganisms. The theory became widely accepted only after 1877, when independent studies by Koch and Pasteur demonstrated the etiology of anthrax, establishing a bacterial cause for the disease. A few years later, Koch formulated three conditions which were necessary to establish a bacterial etiology (Koch 1884):
1. The specific microbe must be demonstrated in all cases of the disease.
2. The microbe must be isolated and cultured in a pure state on an artificial medium.
3. The pure culture must produce the disease when inoculated into healthy, susceptible animals.
With the development of techniques for the isolation of bacteria, and the formulation of criteria for the determination of bacteriological etiology, bacteriological research began to advance rapidly. The germ theory became so widespread that it was assumed that all infectious diseases were caused by bacteria. Accordingly, the agents of infectious diseases were commonly described as microorganisms detectable with a light microscope, retained by bacterial filters, and cultured on artificial media.
In the following years, however, scientists working on some infectious diseases failed to isolate, visualize, and grow the causal agents in vitro. For example, in the 1880s, Louis Pasteur could not detect microscopically the infectious agent of rabies, nor could he grow it in cultures.
These failings were thought to be a consequence of technical difficulties due to the small size of the infectious agents, rather than these agents being fundamentally different from the known bacteria.4 Pasteur wrote: “The anthrax of cattle, malignant pustule of man, is produced by a microbe; croup is produced by a microbe … The microbe of rabies has not been isolated as yet, but judging by analogy, we must believe in its existence. To resume: every virus is a microbe” (Pirie 1948, 329).

3 I say more or less because, as research progressed, certain groups were excluded from this category, while others became included.
4 For a review, see Hughes 1977.

Adolf Mayer, who showed that tobacco mosaic disease (hereafter ‘TMD’) is infectious, hypothesized in 1882 the existence of a “soluble, possibly enzyme-like contagium, although almost any analogy for such a supposition is failing in science” (Bos 1999, 676; italics in original). Here, he postulated that the agent that causes TMD is a protein rather than a corpuscular organism, although it was difficult for him to conceive of what such a contagious agent would be like. A few years later, he gave up this idea, and reported that while he had not been able to isolate the causal agent, “the mosaic disease of tobacco is a bacterial disease” (Mayer 1886, 24).
In 1892, Dimitri Ivanovsky reported that the sap from plants infected with tobacco mosaic disease remained infectious even after filtration through two layers of filter paper. He attributed the inability of the agent to develop on artificial media to a technical difficulty that could in principle be resolved. He concluded that either the infectious agent was a bacterial toxin which passed through the filter, or it was very small bacteria that passed through the pores (Ivanovsky 1892, 29 – 30). While the discovery of the virus is often attributed to Ivanovsky, he did not, at that point, conceive of the causal agent of the tobacco mosaic disease as a new form of infectious agent.
In 1898, Loeffler and Frosch, bacteriologists trained by Koch, published their report on foot-and-mouth disease. They found that the disease is transferred in lymph from epidermal vesicles and, to their surprise, filtration of the lymph did not diminish its infectivity, indicating that the causal agent is able to pass through the filter.
Like Ivanovsky, the authors could only conceive of two explanations for these results: either the disease is caused by a toxin that was present in the lymph, or by minute bacteria that are too small to be retained by the filter. Calculating the dilution of the causal agent in their experiments, however, they contended that a soluble toxin must have a level of activity that is “truly unbelievable” in order to maintain the level of infections they observed. Thus, they argued that the infectious agent must be capable of reproduction and could not be a toxin (Loeffler/Frosch 1898). Although Loeffler and Frosch did not observe reduced infectivity of the lymph after filtration, could not grow the agent in culture, and could not detect it under the microscope, they maintained their hypothesis that the agent is a small bacterium. Loeffler’s later observation that the infectivity
of diluted lymph was lost after repeated filtration through the very fine-pored Kitasato filter—indicating that the causal agent was finally retained by the filter—further convinced him that the causal agent of the disease was cellular and not soluble (see Bos 1999, 679; Hughes 1977, 61 – 64). Loeffler and Frosch concluded that it is a minute organism.
In the same year that Loeffler and Frosch published their report, Martinus Beijerinck found, independently of Ivanovsky, that sap from plants infected with TMD remained infectious after filtration (Beijerinck 1898; 1899). Based on his finding that an infinite number of healthy plants can be infected by the sap in series, Beijerinck concluded—as did Loeffler and Frosch with regard to the agent of foot-and-mouth disease—that the infectious agent had the capacity to multiply within living plants and therefore cannot be a toxin.5 Beijerinck, who was very familiar with Mayer’s early work on TMD, gave this finding a different interpretation from that of his colleagues. Several results indicated to him that the causal agent cannot be a contagium fixum (corpuscular bacterium) in the usual sense of the word. The observation that the agent can pass through the filter was not conclusive proof that it is not bacterial, as it was possible that the filter was not completely bacteria-proof. Therefore, Beijerinck designed an experiment which he thought would definitively answer the question of whether the agent is a cellular organism. Beijerinck placed filtrate containing the agent on agar for ten days, after which he examined the infectivity of the deeper layers. He hypothesized that, if the agent is cellular in nature, it would not be able to diffuse through the agar. If, on the other hand, it is soluble, it would be able to diffuse into the deeper layers of the agar. The agent diffused several millimeters into the agar, and Beijerinck viewed this as conclusive proof that the agent is liquid and not corpuscular.
Beijerinck then set out to determine the nature of the agent's inability to reproduce outside the plant. As mentioned above, the scientists who were unable to grow the agent in culture assumed that the problem was a technical one—a proper medium had not yet been found. In order to test this assumption, Beijerinck mixed the agent with the fresh sap of young tissues of healthy tobacco plants. Had the agent merely required nutrients from the plant for its reproduction, the sap should have been sufficient to enable the agent to reproduce in vitro. However, adding the sap had the same effect on the agent as adding water to it; i.e., the infectivity of the solution was reduced, indicating that the agent was simply diluted, and did not reproduce despite the presence of nutrients. This led Beijerinck to assert that, while the agent can exist outside the tobacco plant, it cannot reproduce under these conditions. The agent does not merely require nutrients from the plant for reproduction; it has to be present in the living plant in order to reproduce.

But what form of reproduction of an infectious agent could be dependent on its host? Beijerinck studied the patterns of the spread of the disease within the plant, observing that infection only occurs in growing portions of the plant. Since he thought that a non-cellular being is not likely to be able to reproduce independently, he hypothesized that it reproduces passively, by incorporation into the living protoplasm of the host cell. He proposed that "[w]ithout being able to grow independently, [the agent] is drawn into the growth of the dividing cells and here increased to a great degree without losing in any way its own individuality in the process" (Beijerinck 1898, 39). While the suggested process was not completely comprehensible to him, he saw parallels between this mode of replication and that of amyloplasts and chromoplasts, "which likewise only grow along with the cellular protoplasm but nonetheless lead an independent existence and function on their own" (Hughes 1977, 52). Beijerinck's findings that the agent not only passes through a filter but is also soluble, and that its dependence on the plant for reproduction is not merely nutritional, indicated that the agent is not cellular. On the other hand, his observation that the agent is capable of multiplying within living cells indicated that the agent cannot be a toxin.

5 Ivanovsky later also abandoned the idea that the agent causing TMD might be a toxin, agreeing that it multiplies in the living plant. See Hughes 1977, 52.
Beijerinck named the agent contagium vivum fluidum (translated as 'living germ that is soluble'), characterizing it as a non-cellular, submicroscopic, infectious agent that can only reproduce within living cells (Beijerinck 1898; see also Helvoort 1994, 191). The properties of a transmittable living agent on the one hand, and of a non-corpuscular agent on the other, conflicted with two prevailing theories—the cell theory and the germ theory.6 They were therefore not immediately accepted by the scientific community.

6 As several authors have noted, Beijerinck's training in chemistry probably helped him think "outside the bacteriological box" when interpreting his observations.

It was the very originality of Beijerinck’s concept of the virus which made its acceptance difficult around the turn of the century. His idea of the contagium vivum fluidum appeared to be contradicted by two doctrines of late nineteenth century science: the germ theory of infectious diseases and the cell theory … For two decades the germ theory had guided the ideas and the experimental methodology for research on infectious diseases … Beijerinck’s view of a living, reproducing, non-cellular material was also difficult to reconcile with the theory promulgated by Virchow that the cell is the fundamental unit of all living things … Thus scientists accepting the cell as the basic unit of life and the germ theory as the explanation of the cause of infectious diseases were frequently reluctant to endorse the concept of the contagium vivum fluidum … (Hughes 1977, 57)

Similarly, Lute Bos writes:

Beijerinck's entirely new concept, launched in 1898, of a filterable contagium vivum fluidum which multiplied in close association with the host's metabolism and was distributed in phloem vessels together with plant nutrients, did not match the then prevailing bacteriological germ theory. At the time, tools and concepts to handle such a new kind of agent (the viruses) were non-existent. Beijerinck's novel idea, therefore, did not revolutionize biological science or immediately alter human understanding of contagious diseases. (Bos 1999, 675)

In a section that demonstrated how widespread the germ theory was and how difficult it was for researchers to accept a causal agent that does not conform to Koch's postulates, Thomas Rivers wrote in 1937:

After a real struggle that occurred not so many years ago, certain maladies were shown to be induced either by small animals or minute plants, e. g., protozoa, fungi, bacteria and spirochetes. Indeed, the victory was so great that most workers in time began to consider that all infectious diseases, including those whose incitants had not been discovered, must be caused by agents similar to those already recognized. According to them, there could be no infections that were not caused by protozoa, fungi, bacteria or spirochetes, and to intimate that some infectious agents might be inanimate constituted heresy of the first order. (Rivers 1937, 1)

While accepting Beijerinck's theory required scientists to drastically revise their basic premises, alternative explanations that did not demand such far-reaching revisions remained available. For example, although Loeffler's and Frosch's experiments produced results very similar to those of Beijerinck, these students of Koch did not see their results as undermining the generality of the germ theory. Their view that the causal agent of foot-and-mouth disease was a minute organism, which required as yet undiscovered conditions to grow in vitro, enabled them to maintain the accepted theory.

Another problem Beijerinck's theory posed to the scientific community was that his concept not only clashed with their deep theoretical commitments, but was also immune to their experimental tools. They did not have the techniques to deal with an invisible agent that cannot be retained by a filter and cannot be grown in vitro. The fact that scientists had so far been unable to grow the virus using their standard procedures did not preclude the possibility of future success using other bacteriological techniques, especially in light of previous experience with bacterial cultures that required conditions that were difficult to attain. Accepting the contagium vivum fluidum concept, on the other hand, meant that bacteriological techniques would never succeed in cultivating the infectious agents.

In the face of the failure to visualize some viruses and grow them in vitro, several works continued to provide support for the accepted view that "filterable viruses" (i.e., infectious agents that can pass through a filter) are microbes. One such report was made by Edmond Nocard and Emil Roux, who researched bovine pleuropneumonia, now known to be caused by a mycoplasma and not a virus. Nocard and Roux observed that, while the causal agent of bovine pleuropneumonia passes through a filter, it could be demonstrated under the microscope and—utilizing new techniques—grown in artificial cultures (Nocard et al. 1896).7 They wrote:

Discovery of the agent of pleuropneumonia virulence is not only of interest because of the difficulty overcome; its significance goes beyond. It raises the hope of also being successful in the study of other viruses whose microorganism has heretofore remained unknown. What made the determination of this microbe so difficult was, on the one hand, its extreme tenuity, and, on the other, in particular, the extremely limited culture conditions in artificial medium.
Now it is quite conceivable that even smaller microbes exist, which instead of being within the limits of visibility, as is the case for this one, are beyond such limits; in other words, we can accept the fact that microbes exist which are invisible to the human eye. Well, even for these microbes, study remains possible provided a favorable culture medium is found. (Nocard et al. 1896, 357)

Accordingly, in a highly influential 1903 review entitled "On the So-Called 'Invisible Microbes'", Roux discusses not only the causal agent of bovine pleuropneumonia, but also other infectious agents, including that of Beijerinck's TMD, suggesting that the agent of pleuropneumonia "forms a transition between the ordinary bacteria and those which the microscope is incapable of showing" (Bos 1999, 680). Indeed, since the causal agent of bovine pleuropneumonia was almost submicroscopic, it seemed reasonable to assume that there are even smaller living cellular organisms that are invisible under the microscope. Furthermore, the fact that the causal agent could only be grown in culture using very specific bacteriological methods gave researchers hope that—with some technical improvements—other causal agents could eventually be isolated and grown using similar methodology.8,9

Support for the accepted theory also continued to come from Ivanovsky, who insisted that the causal agent of TMD is a minute bacterium. Ivanovsky responded to Beijerinck's publication by asserting that in 1892 he had already successfully evoked the disease by inoculation of a bacterial culture, and that this had strengthened his hope "that the entire problem will be solved without such a bold hypothesis" (Bos 1999, 679). Despite his inability to conclusively isolate and identify the causal factor, Ivanovsky later continued to insist that "the contagium of the mosaic disease is able to multiply in the artificial media" (Bos 1999, 679). Furthermore, in an experiment meant to disprove Beijerinck's conclusions about the solubility of the infectious agent, Ivanovsky showed that ink particles placed on the surface of an agar plate moved into its interior, asserting that the form of the infectious agent is that of solid particles. He further showed, in fixed and stained cells, the presence of minute amoeba-like structures which he believed might be the causal agent of TMD. Ivanovsky reconciled his conclusions with the infectivity of the filtered sap by hypothesizing that the TMD agent is spore-forming, and that the spores are able to pass through the filter.

7 The paper does not discuss the filterability of the virus, but see Hughes 1977, 65, for Nocard's determination that the causal agent can pass through a filter.
He also argued that his theory explains the agent's resistance to heat and desiccation, and that if the spores germinate only under very specific conditions, this would also explain the failure to grow the agent in vitro (Hughes 1977, 54).

At the turn of the century, Beijerinck turned to bacteriology—perhaps because there were no experimental tools to verify his view and to further investigate the nature of the contagium vivum fluidum (see Bos 1999, 683)—and the germ theory remained dominant for the first decades of the 1900s. During those years, most researchers still viewed the difference between 'filterable viruses' and ordinary bacteria as primarily a matter of size, rather than seeing the pathogen as a fundamentally different type of infectious agent. Accordingly, they assumed that these extremely minute creatures still possess the properties of ordinary bacteria. They further attributed the failure of these agents to grow in vitro to inappropriate experimental conditions. During this period, progress in the field was very slow, as scientists, insisting on adhering to Koch's postulates, repeatedly failed to grow viruses by bacteriological techniques. Reflecting on this period, Rivers writes:

[I]n regard to certain diseases, particularly those caused by viruses, the blind adherence to Koch's postulates may act as a hindrance instead of an aid … The idea that an infectious agent must be cultivated in a pure state on lifeless media before it can be accepted as the proved cause of a disease has also hindered the investigations of certain maladies, inasmuch as it denies the existence of obligate parasitism, the most striking phenomenon of some infections, particularly those caused by viruses. (Rivers 1937, 4 – 5)

8 See also the discussion in Wilkinson 1974, 213.
9 For additional examples of the resistance by scientists to the idea of contagium vivum fluidum, see Hughes 1977; Waterson/Wilkinson 1978; and Bos 1999.

Although a tissue culture technique for growing viruses in vitro was developed as early as 1913 (Steinhardt et al. 1913), tissue cultures were not widely used until several decades later (see Parker 1950), due to the technical difficulties involved in this method, along with the prevailing assumption that viruses could, in principle, be grown in lifeless media. Since viruses could not be visualized or grown in vitro, research focused on viral diseases rather than on the causal agents themselves. Indeed, during these years, many additional diseases were found to be caused by filterable agents. The nature of these agents, however, was still a mystery.

After a couple of decades in which bacteriologists repeatedly tried—and failed—to grow the virus in vitro, it became difficult to sustain the view that these failures were only due to temporary technical difficulties. In 1927, Rivers wrote:

In general it can be said that no worker has proved that any of the etiological agents of the [viral diseases] is susceptible to cultivation in the absence of living cells. A satisfactory explanation of the difficulty experienced in cultivating the viruses on artificial media is not easily found. Their small size alone should not make them insusceptible to cultivation. Nor does it seem to be a question of delicacy or sensitiveness, because many of them are extremely resistant to chemical and physical agents. Therefore, the viruses appear to be obligate parasites in the sense that their reproduction is dependent upon living cells. (Rivers 1927, 228)

The repeated failure of traditional bacteriological techniques to deal with the virus increased the confusion and disagreement in the field with regard to the nature of the virus. Although the germ theory was still dominant, some alternatives started to gain more attention. For example, some researchers took the filterability and invisibility of the infectious agent, along with its inability to grow on artificial media, its ability to diffuse through agar, and its reaction to various conditions, to indicate that viruses are "products of cellular perversion" rather than exogenous agents (Rivers 1932, 431). By the 1930s, most scientists still viewed the virus as a living organism. It became clear, however, that it is different from bacteria in some essential respect, and that the nature of the virus was not known (ibid., 437 – 438).10

While the field was still far from achieving a consensus with regard to the nature of these agents, scientists at this point (mostly) agreed that they were dealing with a new class of infectious agents with some common properties.11 How should this new category be classified? The discovery of inclusion bodies in the host cells in many viral diseases suggested the possibility that viruses could be grouped on the basis of common biological properties (Hughes 1977, 83 – 84). Confusion was still abundant, however, about what these bodies are and how they relate to the virus (Rivers 1932, 431 – 432). Thus, inclusion bodies could not, at this point, serve as biological characteristics for differentiating the viruses from 'ordinary bacteria'. Despite the lack of differentiating biological characteristics, scientists agreed that this distinct class of infectious agents shared several physical characteristics with observable experimental manifestations.
The new category was classified as viruses, and, in accordance with Beijerinck's original description, they were defined by:

(D1): "invisibility by ordinary microscopic methods, failure to be retained by filters impervious to well-known bacteria, and inability to propagate themselves in the absence of susceptible cells" (Rivers 1932, 423).12

The formulation of D1 marked a turning point in the development of virus research, both because it finally reflected a consensus among most researchers that viruses are different in nature from other contagious agents, and because it indicated that scientists had realized that the virus needed to be treated with different experimental approaches. Thus, D1 provided explicit criteria for the experimental individuation of viruses. As I discussed earlier, prior to the 1930s, advancement was impaired by researchers treating viruses as a sub-category of bacteria. Thus, they continued to expect that viruses would be able to grow in lifeless cultures if only the right medium were found. With the understanding—reflected in D1—that viruses are obligate parasites that reproduce only within a living cell, virus research began to advance very rapidly.

As the fundamental differences between viruses and bacteria became clear, several new techniques were developed specifically for viral research, and quickly became widespread. In 1931, Woodruff and Goodpasture were able to grow fowl pox viruses on chorio-allantoic membranes of chick embryos (Woodruff/Goodpasture 1931). Reflecting the growing consensus expressed by D1, that the virus must be grown inside living cells, the usefulness of this technique was immediately appreciated. Instead of continuing attempts to grow viruses in lifeless media, the method was rapidly adapted by many laboratories for use with other viruses.13 Rather than focusing on virus-inflicted pathology, research now focused on determining the nature of viruses and the ways in which they operate in the host cell. Rather than being limited to the microbial approach, research was directed toward the biochemical composition of viruses, now realized to be different from that of ordinary bacteria, as viruses lack the apparatus that would enable them to reproduce independently. Wendell Stanley conducted multiple biochemical studies in which he measured the inactivation of the virus upon exposure to various agents. The results indicated that the virus is a protein (Stanley 1934; 1935a; 1935b; 1936).

10 See also Stanley 1937, 59.
11 Some researchers, however, suggested that the group may not be a homogenous one. See Rivers 1932, 440.
12 Rivers' description was taken to be a representative definition, reflecting the consensus in the 1930s. See Helvoort 1994, 186; Norrby 2008, 1118.
As a result, scientists focused on the development of techniques for the isolation, concentration, and purification of proteins. In 1935, Stanley first crystallized the TMD virus (Stanley 1935c), for which he was awarded the Nobel Prize in Chemistry in 1946. Further research following the crystallization of the virus established that the virus is, in fact, a nucleoprotein (Bawden/Pirie 1937), and its amino acid composition was studied.

13 For a few examples of adaptations of this technique in the 1930s for work with various viruses, see Higbie/Howitt 1934; Burnet 1935; Harrison/Moore 1937; Smadel/Wall 1937; Hoffstadt/Pilcher 1937.

An additional technique stemming from the insight expressed in D1 was the plaque assay, used to isolate and purify viruses and to measure their infectivity, based on the visible changes produced in the host cells. These and other developments—such as the visualization of viruses under the electron microscope—shed light on the morphology and function of viruses.14 In 1957, Andre Lwoff, who a few years later was awarded the Nobel Prize in Physiology or Medicine, provided a definition that is based on these advances, and which is often considered the first to reflect causal understanding of the virus.15 He defined viruses as follows:

(D2): "strictly intracellular and potentially pathogenic entities with an infectious phase, and (1) possessing only one type of nucleic acid, (2) multiplying in the form of their genetic material, (3) unable to grow and to undergo binary fission, (4) devoid of a Lipmann system" (Lwoff 1957, 246; italics in original).

Thus, the insights expressed in D1 facilitated the advancement of research techniques that enabled the experimental isolation of viruses and the investigation into their structure and physiology, leading to the formulation of D2.

14 For a review of the progression of research see Hughes 1977, ch. 6.
15 For example, Norrby writes "Lwoff's (Fig. 2) definition of viruses in 1957 [9] is often referred to as the first comprehensive and resilient description of their nature" (Norrby 2008, 1110).

3. The Virus as a Novel Kind

As we have seen, experimental differences between what we now call viruses and bacteria were observed as early as the late nineteenth century, when Pasteur and others could not isolate, visualize, and grow these infectious agents in vitro as they did with bacteria. However, at that point, the size of these agents was not taken to indicate that they are fundamentally different from other microbes. Indeed, Pasteur wrote: "Although these beings are of infinite smallness, the conditions of their life and propagation are subject to the same general laws which regulate the birth and multiplication of the higher animal and vegetable beings" (Pirie 1948, 329). A few years later, the ultrafilterability of these agents was established, but, again, this was not enough to indicate a difference in nature between the virus and bacteria. Ton van Helvoort writes, "to what extent anomalies like filterability and invisibility had to be viewed as indicating a fundamental distinction from other micro-organisms like bacteria, fungi and protozoa was not immediately obvious" (Helvoort 1994, 191; italics in original).

At what point was the virus viewed as different in nature from bacteria, rather than as a subgroup of bacteria with somewhat different experimental characteristics? Let us go back to Beijerinck. Unlike Ivanovsky and others,16 Beijerinck did not take his inability to grow the virus in vitro to be an experimental limitation, but rather a result of the special nature of the entities he was dealing with. His inability to remove the virus from the sap with a filter reflected the fact that "the virus must really be regarded as liquid or soluble and not as corpuscular" (Beijerinck 1898, 36), a conclusion that he argued was proven by the ability of the virus to diffuse through agar. His inability to produce cultures reflected the fact that the infectious agent "in the plant is capable of reproduction and infection only when it occurs in cell tissues that are dividing…" This conclusion was supported by Beijerinck's observation that the virus attacks the tissues of the tobacco plant that are "not only in a state of active growth, but in which the division of cells is still in full progress" (ibid., 38). Hence, Beijerinck did not take his unsuccessful attempts to visualize and grow the virus as indicative of experimental failings, but rather as stemming directly from the nature of the entity he was dealing with. As Lute Bos wrote:

Beijerinck's virus was an entity fundamentally different from microorganisms, as it was present systemically in plants, passively moving, together with the plant's metabolites; it multiplied in growing tissue; and it retained infectivity in expressed sap after filtration and alcohol precipitation, as well as after storage in desiccated leaves and dry soil.
Beijerinck clearly indicated that the virus became part of the cell’s metabolism: ‘Without being able to grow independently, it is drawn into the growth of the dividing cells and here increased to a great degree without losing in any way its own individuality in the process’. Beijerinck’s awareness that the virus required an active host metabolism was crucial. Beijerinck’s biographer stated that ‘Throughout the paper, Beijerinck expresses a firm belief in the existence of an autonomous sub-microscopical (that is, sub-cellular) form of life’. (Bos 2000, 83; italics added)

Similarly, Hughes wrote that Beijerinck

formulated a concept of the causal agent which for the first time clearly differentiated it from pathogenic micro-organisms. He presented an entirely new view of the infectious agent; his contagium vivum fluidum was a microscopically invisible and filterable substance which was alive and multiplied only within living cells. This interpretation anticipated in several aspects the modern concept of the virus, namely the fact that it is an infectious, reproducing entity, which is an obligate parasite of living cells … Hence it would appear that Beijerinck rather than Ivanovski should be credited with the formulation of a concept which agrees to some extent with the modern definition of the virus. Particularly remarkable is Beijerinck's suggestion that the agent reproduced by passive inclusion in the protoplasm of dividing cells. This property of obligate, intracellular replication, rather than submicroscopic size or filterability, is now recognized to be one of the crucial differences between viruses and micro-organisms. (Hughes 1977, 57)

16 See Ivanovsky 1892, 29; and also Rivers 1932, 433; Hughes 1977, 63, 72.

Beijerinck interpreted his findings as indicative of a new type of etiological agent, rather than of a subgroup of bacteria with specific experimental characteristics. It is because Beijerinck recognized that this infectious agent is different in nature from bacteria that he named it contagium vivum fluidum instead of using the previously current term 'filterable virus' (Hughes 1977). Andre Lwoff wrote in 1957:

Although [Pasteur and Roux] were unable to see the agent [of rabies], they quite naturally considered it to be a small microbe. When Iwanowsky discovered that the juice of tobacco plants showing the symptoms of mosaic disease remained infectious after filtration, he also concluded that the infectious agent was a small microbe. Then came Beijerinck who confirmed the filterability of tobacco mosaic virus. He also discovered that the infectious power was not lost by precipitation with ethanol and that the infectious agent could diffuse through agar gels. The infection, wrote Beijerinck, is not caused by microbes but by a fluid infectious principle. This intuition of genius about a difference of nature between tobacco mosaic virus and micro-organisms makes Beijerinck the real founder of conceptual virology. (Lwoff 1957, 240 f; italics added)

While Beijerinck's conclusions were not immediately accepted, and disagreement about the nature of viruses was still abundant in the first decades of the twentieth century, by the time D1 was formulated most researchers had come to view viruses as unique entities. Even those theorists who still held that viruses are minute living organisms considered them to be different from 'ordinary' bacteria not only in size but also in their intimate type of parasitism, and in other related characteristics.17 Thus, even though the nature of the virus had not yet been elucidated, it was by now viewed as a separate class of entities, classified as viruses, and defined in the spirit of Beijerinck by D1.

We are now in a position to answer the first question I posed in the introduction—what did scientists have to know about the virus in order to justify grouping its instances under a new concept, differentiating it from all other agents? When does a scientist stop treating a new phenomenon as a subtype of a known kind, albeit with different experimental characteristics, and start treating it as a unique kind requiring a concept of its own? We have seen that the first to treat the virus as a new kind of entity was Beijerinck. Beijerinck did not know the structure of the virus, or the way in which it affects the host cell. But he concluded that, despite the virus being an autonomous entity (or living, as he put it), it has two interrelated characteristics that differentiate it from bacteria—it is non-corpuscular and it is able to reproduce only within living cells. Beijerinck understood that these two differences give rise to many other characteristics that differentiate the virus from a bacterium. These would have implications for the virus's interaction with the host cells, for the dynamics of the spread of the disease within an individual plant and its transmission between plants, and for possible treatments against the disease.18 Thus, without having knowledge of the structure and function of the virus, Beijerinck understood that the differences he observed are indicative of differences in both the structure and function of the virus, as well as in its pathology.

A second point is that Beijerinck's account could explain other experimental phenomena reported around the same time, indicating that his new concept might have greater applicability than just to the TMD virus. Around the time of publication of Beijerinck's paper, there were several diseases that could be explained by Beijerinck's concept. These may have served as incentives for his investigation (Bos 1999, 680). For example, Loeffler and Frosch's 1898 publication on the foot-and-mouth disease in cattle detailed experimental results very similar to those of Beijerinck. As mentioned earlier, these authors reported that the disease is transferred in lymph from epidermal vesicles, and that the lymph remains infectious even after filtration. They were further unable to grow the infectious agent in artificial media or view it under a microscope (Loeffler/Frosch 1898). Beijerinck disagreed with Loeffler and Frosch's conclusion, which adhered to the germ theory, that the causal agent of foot-and-mouth disease is corpuscular in nature, as he regarded his contagium vivum fluidum as a better explanation of their results.19 Beijerinck devoted a portion of his paper to a discussion of other infectious diseases that were likely to be caused by other types of contagium vivum fluidum. Although there were some differences between these viruses (e.g., differences in their transmission between plants and in their ability to exist, even if not multiply, outside the plant for a certain period), he believed it was highly probable that they too, like the TMD virus, were a type of contagium vivum fluidum. If concepts enable us to make generalizations and inductions about members of a category,20 then this assumption of Beijerinck's can be seen as a significant factor in his decision to form a new concept. If the new phenomenon is a representative of a larger category whose instances share various characteristics, forming a concept to designate the larger kind would enable scientific knowledge to be generalized to all instances of that kind.

In conclusion, it seems that to form a new concept, a scientist does not have to possess deep causal knowledge about the nature of the phenomena. On the other hand, a mere experimental difference between the new phenomenon and the old one (e.g., the virus's invisibility and filterability) is not sufficient for the formation of a new concept. What is required is that the observed differences—even if not themselves causally fundamental—indicate that such a causally fundamental difference is probable, and that there are therefore likely to be other differences between the phenomenon in question and others.21 Last, the phenomenon under investigation is taken to represent a general kind rather than a unique instance.

I have argued here that scientists can form a concept—and even define it—before they have a detailed understanding of the causal nature of the kind under investigation. As we have seen in the previous section, the formation of the concept and its definition at such an early stage facilitates the progression of research, and the eventual formulation of a more causally fundamental definition. In the next section, I shall analyze the characteristics specified in the early definition of the virus, and discuss the way in which even a definition that is formulated in experimental terms is able to pick out a phenomenon in a stable manner.

17 See Rivers 1932, 423, for some examples. It should be noted, however, that the consensus was not complete. For example, Ledingham (1932, 953) wrote that "viruses are living organisms distinguished from the visible bacteria only by their small size."
18 Beijerinck discussed some of these issues in his paper; see Beijerinck 1898.
19 Addressing Loeffler's and Frosch's finding that the agent of the foot-and-mouth disease lost infectivity after repeated filtration with the Kitasato filter, Beijerinck pointed out that soluble substances could be adsorbed by bacterial filters; thus the experiment was not sufficient as proof that the agent is cellular. For Beijerinck's response to Loeffler's and Frosch's results, see Bos 1999, 680 and Hughes 1977, 73, fn. 1.
20 See, for example, Machery 2009.
21 As I will discuss in the following section, invisibility and filterability do have some implications for the nature of viruses. However, by themselves, they were not enough to indicate a difference in nature between the virus and bacteria.

4. The Early Definition of the Virus—Arbitrary or Functional?

Ton van Helvoort viewed the operational definition of the virus as completely different from definitions attempting to decipher its nature. He wrote:

In creating the category of ‘filterable virus’ (or ‘virus’) researchers were implicitly confronted with the question: What is a Virus? In fact, this question was answered in two ways. On the one hand, it was defined operationally, namely as an agent which was filterable and invisible with light-microscopic techniques. On the other hand, the question arose as to the nature of these filterable agents. The answers to this query were divergent, and ranged from an ultramicrobe, a globulin and a colloid to a ‘free gene’. (Helvoort 1991, 557)

In this section, I will argue that there is no tension between the experimental isolation of a kind and a definition that attempts to elucidate its nature. The project of individuating a phenomenon empirically is not distinct from the project of conveying something about its causal structure. Thus, I will show that the difference between D1 and D2 is a difference in degree and not in kind. In doing so, I will answer the question I posed in the introduction: How can a definition formulated in experimental terms reliably pick out a stable kind which is independent of any specific set of techniques?

Van Helvoort argued that D1 was not a proper definition because it depended on the specific techniques which were used in microbiology, and thus did not provide insight into the nature of the virus (Helvoort 1994, 188). This criticism is, from the outset, inapplicable to the virus’s inability to grow in lifeless media, which is one of the characteristics that D1 specifies. This is because the bacteriologists’ inability to grow the virus in culture was not due to a technical difficulty, but rather to the reproductive mechanism of the virus. It is easy to see, however, how such criticism could be directed toward the criterion of filterability. It was established, before D1 was formulated, that the filterability of viruses depends on multiple factors, many of them idiosyncratic to the particular experimental protocol used in the lab. In 1903, Roux wrote: “In order to classify a microbe amongst those which traverse filters, it is not sufficient to assert that it has traversed the [filter] wall; it is necessary to know under what conditions [it had done so]” (Hughes 1977, 66). Rivers elaborated some of the relevant experimental conditions:

[T]he most one can say regarding the viruses is that under given experimental conditions they either pass or do not pass through certain filters. The failure to pass through a filter, however, is certainly not determined in every instance by the size of the virus. The electrical charge on the virus, the electrical charge on the filter, the adsorption of the virus by aggregates of protein or by cell detritus, the amount of protein or other substances in the virus emulsion, the temperature at which the filtration is conducted, the amount of negative or positive pressure employed, the duration of filtration, and other factors, not mentioned or not known, serve to influence the results of all filtration experiments. (Rivers 1927, 225; italics added)

Since some viruses could be retained by some filters but not by others, filterability cannot indicate any intrinsic property of the virus. Sir Henry Dale, who later won the Nobel Prize in Physiology or Medicine, wrote in 1931: “The crude qualitative distinction between the filterable and non-filterable agents of infection has long since ceased to have any real meaning. There is no natural limit of filterability. A filter can be made to stop or to pass particles of any required size” (Dale 1931, 600; italics added). Like the criterion of filterability, visibility under the microscope also depends, to some extent, on the specific experimental methods employed in the lab, such as the resolving power of the microscope, the staining technique used, etc. Thus, it is easy to see how one could make the objection that, since (some of) the characteristics of D1 depend on specific experimental techniques, they cannot reveal the nature of the virus.

A second, related critique stems from the fact that D1 is cast in terms of three negative properties, all contrasting it with bacteria: the virus’s invisibility by ordinary microscopic methods, its failure to be retained by filters, and its inability to grow in cell-free media. Lute Bos argued that the early definitions of the virus—which he calls the ‘dark-age definition’—reflected the haunting of virology by Koch’s Postulates, and that since, “in practice, a virus was something that Koch’s Postulates did not apply to,” the virus concept did not refer to a homogenous kind, but rather was a wastebasket for a variety of mysterious pathogens (Bos 1981, 97). Indeed, if the criteria in the definition of the virus are merely the negation of the characteristics of bacteria, how can they reveal anything about the fundamental nature of the virus? As I discussed above, if the virus is indeed a stable kind and not idiosyncratic to particular experimental setups, while D1 is dictated by the experimental techniques common in bacteriology and not by knowledge of the nature of the virus, the assumption that D1 reliably picks out instances of the virus may seem arbitrary. Yet, as we have seen in section 2, scientists repeatedly made this assumption. In this section, I shall explore the three characteristics specified in D1, examining whether they were entirely dependent, in this manner, on experimental techniques, or whether the defining characteristics were chosen according to additional, theoretical considerations that made the early definition a reliable tool for picking out instances of the virus.

The first two defining characteristics in D1 are the invisibility and filterability of viruses. Although these characteristics depend on additional factors and not on size alone, together they are still a good indication of the small size of the agent. The size, in turn, has implications for the structure and function of viruses. Beijerinck clearly grasped these implications, as evident in his argument that his inability to remove the virus from the sap with a filter reflected the fact that the virus is a fluid rather than a cellular structure (Beijerinck 1898, 36).
In 1932, Rivers wrote about the functional significance of the size of the virus:

If the figure of 210 mμ for the diameter of vaccine virus is accurate, there is no reason as far as size is concerned to suppose that the virus is not a living organism. On the other hand, if the figures of 1.2 mμ, 5.5 mμ, and 8 mμ for the bacteriophage, mosaic virus, and foot-and-mouth disease virus, respectively, are correct, it is obvious that these agents cannot be highly organized, because it is impossible that with such a magnitude they can consist of more than one, or at most several, molecules of protein. (Rivers 1932, 426)

Along the same lines, Dale wrote that “the dimensions assigned to the units of some viruses, representing them as equal in size to mere fractions of a protein molecule, might well make one hesitate to credit them with the powers of active self-multiplication” (Dale 1931, 601). Similarly, in his Nobel lecture, Wendell Stanley wrote: “It was soon realized that the acceptance of a virus 10 mμ in size as a living organism presented certain inherent difficulties, especially with respect to metabolic activity. Grave doubts were expressed that the complicated processes of respiration and digestion and the general metabolic functions usually associated with life could be contained within structures as small as 10 mμ, especially since protein molecules larger than 10 mμ were known” (Stanley 1946, 140). Finally, Lwoff wrote, “[v]iruses are often opposed to bacteria because of their size … If dimensions have any meaning it is not by the astrological virtue of a number, but because of a correlation between size and some essential properties which are responsible for fundamental differences” (Lwoff 1957, 244 f; italics added).22

We see, therefore, that although the specific experimental conditions may affect the filterability and visibility of the infectious agent, these properties are still causally related to the size of the agent, which is, in turn, causally related to many other properties. The effect of the various experimental conditions does not break the causal chain between the (yet unknown) fundamental properties of the agent and those properties that are manifested in the lab. This chain is preserved by the standard scientific practice of detailing as many of the relevant experimental factors as possible.

The third criterion in D1 is the inability of the virus to grow in lifeless media. Like the other criteria, this negative, experimental characterization actually reflects the fundamental distinction between the virus and bacteria—a distinction based on the dependence of the virus on the host cell. We have seen that, unlike Pasteur, Beijerinck viewed his inability to produce cultures as indicating (along with other evidence) that the infectious agent was capable of reproduction only within dividing host cells (Beijerinck 1898, 38).
Similarly, in 1924 Andre Philibert wrote that the exclusive affinity of the virus to the living cell is the fundamental characteristic of viruses, which distinguishes them from all microbes (Hughes 1977, 87). Last, Rivers wrote that this characterization of viruses, which emphasizes the intimate relation that exists between them and their host cells, “implies much, not only as concerns their biological nature which is still a moot question, but as regards their activities about which something is definitely known” (Rivers 1932, 423; italics added).

We are now in a position to take stock of D1 and consider it within its context. Van Helvoort argues that in contrast to D2, D1 is not stated in functional terms (Helvoort 1994, 187). Indeed, compared to 1957, not much was known in 1932 about the function of the virus. As the above discussion shows, however, D1 specifies the characteristics that—based on causal theoretical background (e. g., about the relation between size and cellular structure)—are thought to have implications for various aspects of the structure and function of the virus. This makes D1, preliminary as it may be, a useful tool for the condensation and integration of all that was known about the viruses at the time, as it implies an array of other characteristics. Hence, D1 not only facilitates the experimental individuation of viruses and the empirical investigation into their nature, but also the theoretical development of the concept, its unification and its differentiation from bacteria.23 While D1 was formulated in experimental terms derived from bacteriological techniques, the defining characteristics were caused—and therefore implied—by properties of the virus that are essential to its nature, namely, that it is non-corpuscular and that it depends on the host cell for reproduction.

22 I should note that today, some viruses are known which are larger than some bacteria. Partly for this reason, the size of the agent is no longer a component of the definition of the virus. Such revisions of defining characteristics are certainly not unique to operational definitions. My focus here is on the physiological significance that the scientists attributed to the size of the virus, within their context of knowledge, and the relation of its size to the early definition.
In addition, while the formulation of D1 certainly reflects the objective of distinguishing the virus from bacteria, and is composed of negative characteristics set against the background of bacteriology, the fact that the characteristics specified in D1 have multiple implications for the structure and function of the virus makes D1 a tool for individuating a fairly homogenous kind, rather than a mere collection of phenomena that did not fit into the existing kinds at the time.24 That with D1 the concept of the virus became a concept in its own right, rather than a wastebasket for eclectic phenomena with nothing in common, was nicely asserted by Rivers, who wrote in 1937:

Regardless of lack of complete knowledge of their nature, it is decidedly incorrect to say that these agents are unknown … Thus, to the initiate the term virus used in connection with an infectious agent has lost its old indefinite meaning and has acquired a new significance similar in exactness to that borne by the words bacterium and spirochete. (Rivers 1937, 2)

23 I discuss at length the integrative role of definitions in Bloch 2011.
24 It is certainly true that some infectious agents were first included under the concept of virus based on D1, and were later excluded from it (see, for example, Helvoort 1994), but such reclassifications occasionally take place as science progresses, and are not unique to definitions formulated in experimental, or negative, terms.

Feest points out that an early scientific definition is often formulated in terms of specific experimental outcomes. What D1 shows is that a definition formulated in experimental language and even in negative terms is able to pick out instances of a kind in a stable manner, by virtue of these characteristics being part of the causal structure of the kind, indicative of its various other differentiating characteristics.

Having discussed D1 at length, it is worth noting that D2, as well, has elements that are formulated in negative terms, against a bacteriological background. For example, bacteria multiply by a process called binary fission. In this process, the bacterium grows and divides into two identical daughter cells. As a result of binary fission, the sizes of the individual bacteria in a culture of multiplying bacteria show a continuous distribution. Physicochemical studies in the 1930s, which showed, in contrast, the homogeneity of the virus material, excluded the possibility that the virus multiplies by binary fission (Eriksson-Quensel/Svedberg 1936). Lwoff chose to include the criterion “unable to grow and to undergo binary fission” in the D2 definition. The virus’s inability to grow and to undergo binary fission is an important characteristic, reflecting a crucial difference between the multiplication of bacteria and that of the virus, causally explaining various aspects of the physiology of the virus. At the same time, the specification of the virus’s inability to undergo binary fission—just like its inability to propagate without susceptible cells, specified in D1—is a negative one, derived from and contrasted with bacteria. D2 is, of course, more causally-fundamental than D1, and reveals much more about the virus’s structure and its physiology.
I contend, however, that D1 and D2 show that the difference between what might be called an operational definition and a definition in terms of more causally-fundamental characteristics is a difference in degree and not in kind. I have argued elsewhere that scientific definitions are contextual (they depend on the scientists’ knowledge of the causal structure of the kind they wish to define and their knowledge of the other kinds they wish to differentiate it from), and that they are revised into more causally-fundamental definitions as scientific knowledge expands and deepens, explaining a growing number of differentiating characteristics of a kind (see Bloch 2011). I contend here that an operational definition is the first on a continuum of changing definitions of a kind throughout scientific development, a continuum in which new definitions are more causally-fundamental than the old ones.25

To go back to our case study, Hughes writes that at the beginning of the 20th century: “It was recognized that filterability and ultrascopic size were technique-determined physical characteristics which provided little information about intrinsic biological properties” (Hughes 1977, 87). This tension between definitions that are dependent on techniques (and may even be negations of characteristics of other kinds) and definitions that are derived from scientific knowledge about the nature of the kind in question only exists if one views scientific definitions as aiming to specify eternal metaphysical essences. Such essences cannot depend on the procedures scientists use in the lab or on what they happen to know about other kinds. But if we view definitions as a means of integrating our knowledge—using characteristics that are dependent on the scientists’ context of knowledge but are causally-fundamental within that context—this tension disappears. Accordingly, even with very preliminary knowledge about a kind, scientists can form a new concept and properly define it in a way that latches on to the causal structure of the kind, thus succeeding both in individuating it—experimentally and conceptually—in a stable manner and in relating it to its additional distinguishing characteristics. At any stage of scientific development, scientific definitions must adhere not only to the causal structure of the phenomenon, but also to a range of epistemic requirements, such as the differentiation of the phenomenon from others. Scientists cannot—indeed they must not—step outside their experimental tools to understand nature. Rather, they should use these tools to do so.

25 This is not to say that operational definitions are not used at a later stage of scientific development, for purposes of identifying instances of the phenomenon under specific experimental settings. But at that point they no longer perform the explanatory roles I have discussed here, mainly because they no longer reflect the structure of the scientific concept.

Acknowledgements

I am grateful to each of the contributors to this volume for their helpful suggestions during two workshops on Scientific Concepts and Investigative Practice. I especially thank Uljana Feest, Friedrich Steinle, Eva Jablonka, Allan Gotthelf, James Lennox and Patrick Mullins for detailed and insightful comments. I would also like to thank James Lennox for bringing to my attention the case of the virus as an interesting case study.

Reference List

Bawden, F. C. / Pirie, N. W. (1937), “The Isolation and some Properties of Liquid Crystalline Substances from Solanaceous Plants Infected with Three Strains of Tobacco Mosaic Virus.” In: Proceedings of the Royal Society of London: Series B, Biological Sciences 123 (832), 274 – 320.
Beijerinck, M. W. (1898), “Concerning a Contagium Vivum Fluidum as Cause of the Spot Disease of Tobacco Leaves.” English translation (1942) in: Phytopathological Classics, St. Paul, MN: American Phytopathological Society, 33 – 54.
Beijerinck, M. W. (1899), “On a Contagium Vivum Fluidum Causing the Spotdisease of the Tobacco-Leaves.” In: KNAW, Proceedings, Amsterdam 1, 170 – 176.
Bloch, C. L. (2011), “Scientific Kinds Without Essences.” In: Bird, A. / Ellis, B. / Sankey, H. (eds.), Properties, Powers and Structures: Issues in the Metaphysics of Realism (Routledge Studies in Metaphysics 5), London: Routledge, 233 – 255.
Bogen, J. / Woodward, J. (1988), “Saving the Phenomena.” In: The Philosophical Review 97 (3), 303 – 352.
Bos, L. (1981), “Hundred Years of Koch’s Postulates and the History of Etiology in Plant Virus Research.” In: Netherlands Journal of Plant Pathology 87, 91 – 110.
Bos, L. (1999), “Beijerinck’s Work on Tobacco Mosaic Virus: Historical Context and Legacy.” In: Philosophical Transactions of the Royal Society of London: Series B, Biological Sciences 354, 675 – 685.
Bos, L. (2000), “100 Years of Virology: From Vitalism via Molecular Biology to Genetic Engineering.” In: Trends in Microbiology 8 (2), 82 – 87.
Burnet, F. M. (1936), “Immunological Studies with the Virus of Infectious Laryngotracheitis of Fowls using the Developing Egg Technique.” In: Journal of Experimental Medicine 63 (5), 685 – 701.
Copi, I. M. (1954), “Essence and Accident.” In: The Journal of Philosophy 51 (23), 706 – 719.
Dale, H. H. (1931), “The Biological Nature of the Viruses.” In: Nature 128, 599 – 602.
Dubs, H. H. (1943), “Definition and Its Problems.” In: The Philosophical Review 52 (6), 566 – 577.
Ellis, B. D. (2001), Scientific Essentialism (Cambridge Studies in Philosophy), Cambridge, U.K. / New York: Cambridge University Press.


Eriksson-Quensel, I. / Svedberg, T. (1936), “Sedimentation and Electrophoresis of the Tobacco-Mosaic Virus Protein.” In: Journal of the American Chemical Society 58 (10), 1863 – 1867.
Feest, U. (2010), “Concepts as Tools in the Experimental Generation of Knowledge in Cognitive Neuropsychology.” In: Spontaneous Generations: A Journal for the History and Philosophy of Science 4 (1), 173 – 190.
Harrison, R. W. / Moore, E. (1937), “Cultivation of the Virus of St. Louis Encephalitis.” In: American Journal of Pathology 13 (3), 361 – 375.
Helvoort, T. van (1991), “What Is a Virus? The Case of Tobacco Mosaic Disease.” In: Studies in History and Philosophy of Science 22 (4), 557 – 588.
Helvoort, T. van (1994), “History of Virus Research in the Twentieth Century: The Problem of Conceptual Continuity.” In: History of Science 32 (2), 185 – 235.
Higbie, E. / Howitt, B. (1935), “The Behavior of the Virus of Equine Encephalomyelitis on the Chorioallantoic Membrane of the Developing Chick.” In: Journal of Bacteriology 29 (4), 399 – 406.
Hoffstadt, R. E. / Pilcher, K. S. (1938), “The Use of the Chorio-Allantoic Membrane of the Developing Chick Embryo as a Medium in the Study of Virus Myxomatosum.” In: Journal of Bacteriology 35 (4), 353 – 367.
Hughes, S. S. (1977), The Virus: A History of the Concept, London / New York: Heinemann Educational Book / Science History Publications.
Ivanovsky, D. (1892), “Concerning the Mosaic Disease of the Tobacco Plant.” English translation (1942) in: Phytopathological Classics, St. Paul, MN: American Phytopathological Society, 27 – 30.
Koch, R. (1884), “The Etiology of Tuberculosis.” English translation in: Thomas, D. B. (ed.) (1999), Milestones in Microbiology: 1546 to 1940, Washington, DC: American Society for Microbiology, 116 – 118.
Ledingham, J. C. G. (1932), “A Paper on Tissue Changes in Virus Diseases.” In: The British Medical Journal 2 (3751), 953 – 957.
Loeffler, F. J. / Frosch, P. (1898), “Report of the Commission for Research on the Foot-and-Mouth Disease.” English translation in: Thomas, D. B. (ed.) (1999), Milestones in Microbiology: 1546 to 1940, Washington, DC: American Society for Microbiology, 149 – 153.
Lwoff, A. (1957), “The Concept of Virus.” In: Journal of General Microbiology 17, 239 – 253.
Machery, E. (2009), Doing without Concepts, Oxford: Oxford University Press.
Mayer, A. E. (1886), “Concerning the Mosaic Disease of Tobacco.” English translation (1942) in: Phytopathological Classics, St. Paul, MN: American Phytopathological Society, 11 – 24.
Nocard, E. I. E. et al. (1896), “The Microbe of Pleuropneumonia.” English translation (1990) in: Reviews of Infectious Diseases 12 (2), 354 – 358.
Norrby, E. (2008), “Nobel Prizes and the Emerging Virus Concept.” In: Archives of Virology 153, 1109 – 1123.
Parker, R. C. (1950), Methods of Tissue Culture, 2nd ed., New York: Hoeber.
Pirie, N. W. (1948), “Development of Ideas on the Nature of Viruses.” In: British Medical Bulletin 5 (4 – 5), 329 – 333.


Rivers, T. M. (1927), “Filterable Viruses: A Critical Review.” In: Journal of Bacteriology 14 (4), 217 – 258.
Rivers, T. M. (1932), “The Nature of Viruses.” In: Physiological Reviews 12 (3), 423 – 452.
Rivers, T. M. (1937), “Viruses and Koch’s Postulates.” In: Journal of Bacteriology 33 (1), 1 – 12.
Smadel, J. E. / Wall, M. J. (1937), “Elementary Bodies of Vaccinia from Infected Chorio-Allantoic Membranes of Developing Chick Embryos.” In: Journal of Experimental Medicine 66 (3), 325 – 336.
Stanley, W. M. (1934), “Chemical Studies on the Virus of Tobacco Mosaic, Part II: The Proteolytic Action of Pepsin.” In: Phytopathology 24, 1269 – 1289.
Stanley, W. M. (1935a), “Chemical Studies on the Virus of Tobacco Mosaic, Part III: Rates of Inactivation at Different Hydrogen-Ion Concentrations.” In: Phytopathology 25, 475 – 492.
Stanley, W. M. (1935b), “Chemical Studies on the Virus of Tobacco Mosaic, Part IV: Some Effects of Different Chemical Agents on Infectivity.” In: Phytopathology 25, 899 – 921.
Stanley, W. M. (1935c), “Isolation of a Crystalline Protein possessing the Properties of Tobacco-Mosaic Virus.” In: Science 81 (2113), 644 – 645.
Stanley, W. M. (1936), “The Inactivation of Crystalline Tobacco-Mosaic Virus Protein.” In: Science 83 (2165), 626 – 627.
Stanley, W. M. (1937), “Crystalline Tobacco-Mosaic Virus Protein.” In: American Journal of Botany 24, 59 – 68.
Stanley, W. M. (1946), “The Isolation and Properties of Crystalline Tobacco Mosaic Virus.” Nobel Lecture, Stockholm, December 1946.
Steinhardt, E. / Israeli, C. / Lambert, R. A. (1913), “Studies on the Cultivation of the Virus of Vaccinia.” In: The Journal of Infectious Diseases 13 (2), 294 – 300.
Waterson, A. P. / Wilkinson, L. (1978), History of Virology, London / New York / Melbourne: Cambridge University Press.
Wilkinson, L. (1974), “The Development of the Virus Concept as Reflected in Corpora of Studies on Individual Pathogens, Part 1: Beginnings at the Turn of the Century.” In: Medical History 18 (3), 211 – 221.
Woodruff, A. M. / Goodpasture, E. W. (1931), “The Susceptibility of the Chorio-Allantoic Membrane of Chick Embryos to Infection with the Fowl-Pox Virus.” In: American Journal of Pathology 7 (3), 209 – 222.

Scientific Concepts in the Engineering Sciences

Epistemic Tools for Creating and Intervening with Phenomena

Mieke Boon

1. Introduction

In philosophy of science, phenomena are understood as observable and unobservable objects, events or processes that have two important roles to play in science. They make us curious and thus point at theories, and they serve as evidence for theories (e. g., Hacking 1983; Bogen/Woodward 1988). Similarly, in the engineering sciences, phenomena are understood as both observable and unobservable objects, processes and properties. Yet, while the philosophy of science focuses on their epistemic role—that is, the role phenomena serve in constructing and testing theories—phenomena in the engineering sciences also are important for their own sake. This is because one of the ultimate purposes of these scientific research practices is technologically reproducing, newly creating, preventing, intervening with and detecting phenomena, especially those that could play a productive or obstructive role in the technological functioning of artifacts. The formation of scientific concepts of phenomena in those domains cannot be understood as merely defining or describing pregiven things, because phenomena of interest usually are not observable in an unproblematic manner. Instead, our conceptualization of such phenomena often is entangled with the development of technological devices that produce, intervene with and/or detect phenomena. Taking this fact as a point of departure, the thesis of this paper is that in the engineering sciences, scientific concepts of phenomena function as epistemic tools for creating and intervening with phenomena that are of technological relevance. I will argue for this thesis by providing an account of (a) how phenomena of technological interest are conceptualized; (b) in what manner their conceptualization is entangled with the development of technological


devices that produce them; and (c) how such conceptualizations are entangled with already existing concepts, such that the latter enable epistemic uses with regard to the technological production of phenomena.

Elsewhere, Knuuttila and I have argued that scientific models of phenomena enable epistemic uses, not because they are first and foremost correct representations of the phenomenon, but rather because they have been constructed in a specific way that makes them suitable as epistemic tools (Boon/Knuuttila 2009; Knuuttila/Boon 2011). In a similar fashion, I will propose to consider scientific concepts of phenomena as epistemic tools.

Similarly, Feest (2010) has analyzed how concepts can figure in the generation of knowledge. She proposes that concepts figure as tools for the investigation of objects. Important in her account is the idea that operational definitions of concepts function as tools by providing paradigmatic conditions of application for the concepts in question (also see Feest in this volume). “[Operational definitions] are cast in terms of a description of a typical experimental set-up thought to produce data that are indicative of the phenomenon picked out by the concept” (Feest 2010, 178). As a result, scientific concepts are tools which allow for experimental interventions into the domain of study, thereby generating knowledge about the phenomenon. I will follow Feest in her emphasis on the role of paradigmatic experiments in the formation of concepts. Also, I side with her idea that concepts are tools for the investigation of the phenomenon. Furthermore, I acknowledge that in real scientific practices, scientists may have preliminary scientific concepts that are tentative representations of purported objects, as Feest (2010) suggests. Yet, my account aims to dig a bit deeper into the formation of concepts.
Where my account differs from hers is that I am interested in the ways in which phenomena are conceptualized prior to their existence and with the aim of technologically producing them. Metaphorically speaking, I am interested in the design phase of a novel phenomenon, and I think of that design phase as one of conceptualization. It is for this reason that I will pursue the metaphor of design to describe the role of concepts. It will be argued that in scientific practice the successful functioning of scientific concepts has to do with specific features of conceptualizing. Conceptualizing a phenomenon (e. g., a phenomenon that we wish to create or intervene with in order to perform technological functions) involves fitting together relevant but heterogeneous content, much as heterogeneous content is fitted together in designing. In this way, heterogeneous epistemic content is introduced, which enables further investigation of the phenomena.

Scientific Concepts in the Engineering Sciences


The paper is structured as follows: I begin with a brief explanation of how I view the relationship between phenomena and concept formation, comparing my account of the role of phenomena in scientific practice with the way this role is typically understood in philosophy of science (2.1). I then argue that Bogen and Woodward’s (1988) analysis of phenomena, if it is to be useful for an analysis of the engineering sciences, must be supplemented with the idea that phenomena have a material function (2.2), and I illustrate this with a case study (2.3). In section 3, I draw on these ideas to give a more detailed account of the engineering sciences and to elaborate on my thesis that the epistemic role of concepts has to do with specific features of conceptualizing. Section 4 returns to the question of how concepts can perform the epistemic function my analysis attributes to them, highlighting in particular the extent to which heterogeneous concepts are involved throughout the entire process of conceptualizing phenomena, and the design-character of concepts.

2. What Are Phenomena?

2.1 The Epistemic Roles of Data and Phenomena in Experimental Practices

Traditionally, philosophers of science have held that the word ‘phenomenon’ denotes observable objects, events, or processes (e. g., Fraassen 1980; Hacking 1983, 220). The primary epistemic role attributed to phenomena is that of making us curious about the world and allowing for the testing of scientific theories. Philosophers have also emphasized the role of phenomena as the explanandum, that is, as objects, events or processes that attract our attention and are explained or predicted by the theory. Bailer-Jones (2009, 167) even suggests “to identify a phenomenon with recognizing that something has the potential to be theoretically explained.” Traditional and contemporary empiricists who rejected the idea that science aims at explanations nevertheless agree on the epistemic role of phenomena in the construction of theories. They assume that theories must be constructed such that they ‘save the phenomena’, that is, make correct predictions of what has been observed or measured (e. g., Duhem [1906] 1962; Fraassen 1980). Thus, according to most of the philosophy of science, the role of phenomena in scientific practices is epistemic.


Mieke Boon

In the accounts just mentioned, phenomena are observable regularities, events or processes, but also measured data-patterns. Bogen and Woodward (1988) agree that the role of phenomena is epistemic, but object to the idea that phenomena must be observable. They propose a conceptual distinction between data and phenomena. According to them, data are observable but idiosyncratic to the experimental set-up. Phenomena, in most cases, are not observable in any interesting sense of that term, but are detected through the use of data. Concerning the epistemic role of phenomena, they characterize phenomena as stable, repeatable effects or processes that are potential objects of prediction and systematic explanation by general theories, and which can serve as evidence for such theories. Empirical data play a different epistemic role: they are manifestations of phenomena and evidence for their existence (see also Woodward 2000, 163; Woodward 2011). In this manner, Bogen and Woodward reinterpret phenomena against a ‘thinner’ notion, typical of the empirical tradition, according to which phenomena are observed regularities (see also Massimi 2011).

My position is that in doing scientific research, scientists need phenomena in the broader sense that Bogen and Woodward (1988) have suggested. Also, they need conceptions of phenomena in order to investigate them, as Feest (2010) puts forward. However, when considering the question of how concepts of phenomena are formed in experimental practices, my difficulty with these accounts concerns the suggestion that phenomena are pre-given, ontologically independent entities.
In this paper, I will not elaborate on why this view is problematic in general, but instead focus on the fact that a distinguishing feature of the engineering sciences is that in those areas of research genuinely novel phenomena are produced. This suggests that the above-mentioned accounts of phenomena and of concept formation about phenomena fail to illuminate the ways in which phenomena figure in the engineering sciences. My point is not to deny the existence of a world out there. Surely, scientists more or less think this way, but this cannot be appealed to as an explanation of how concepts are formed in scientific practice and why they enable the generation of knowledge. Let me put things straight: On my analysis, data can be manifestations of phenomena, but they are also idiosyncratic to the experimental set-up, for two reasons. Firstly, scientists acquire data of an unobservable phenomenon by means of the contingent measuring instruments and techniques they have at their disposal. This allows for the idea that other (e. g., past or future) instruments may produce a different set of data from the same experimental set-up.


Secondly, stable, repeatable patterns of data are produced by stably and reproducibly functioning experimental set-ups, that is, by nomological machines, as Cartwright (1983; 1989) put it. Conceptualization of phenomena draws on patterns of data and, as Feest (2010) proposes, on the paradigmatic experimental set-ups that produce those data. Consequently, conceptualizations of phenomena are also idiosyncratic to the instruments, measuring apparatuses and procedures that generate the data, and to the specific experimental set-ups. Following this line of reasoning, I argue that when we ask how concepts are formed (that is, when we inquire about the processes whereby phenomena are conceptualized), the primary question ought not to be whether concepts line up with independently existing, mind-independent, pre-given phenomena. The question rather is how conceptualizations arise in relation to manifestations of phenomena that are produced under the highly idiosyncratic circumstances of particular experimental conditions and instruments. My point is that conceptualizing phenomena, as well as generating knowledge about them, is inescapably entangled with their technological production and measurement. In other words, concept formation goes hand in hand with the construction of a theory of the domain of the phenomenon, as Feest (this volume) suggests, but also with producing an experimental set-up for investigating it (see also Nersessian in this volume).

2.2 The Material Roles of Phenomena in the Engineering Sciences

In the engineering sciences, what we want to know about the world is closely related to wanting to know how to intervene with the world.[1] Therefore, our interest in phenomena concerns not only their role as objects, events and processes that guide the production of scientific knowledge about the world, but also their role as the objects, etc., that we create and intervene with.[2] By intervening with phenomena we create other phenomena. What is more, we purposefully aim at creating phenomena, not only because of their epistemic role, but also because we want these phenomena for specific technological functions. I will call this the material role of phenomena. Hence, in these scientific practices, in addition to their epistemic role, phenomena have a material role to play. By ‘material role’ I mean to point to the fact that the phenomena of interest in the engineering sciences literally are productive of physical effects and functions (or malfunctions, as the case may be). Hence, phenomena can play material roles in two ways: (1) as physical processes or regularities that are manifestations of technological functions, and (2) as objects, processes or properties by means of which the proper and/or improper functioning of technological artifacts is produced. As a consequence, observable phenomena and data are not only manifestations of the world ‘behind’ them in an epistemic sense (i. e., ‘data’ in the sense of Bogen/Woodward 1988); they are also manifestations in a material sense, for instance, of technological (mal)functions. It should be clear, then, that while I adopt some aspects of Bogen and Woodward’s distinction between the epistemic roles of data and phenomena, I argue that to understand the role of data and phenomena in the engineering sciences, the analysis must be differentiated somewhat further. I will illustrate this in section 2.3 below.

On my account, scientific research involves two kinds of entangled activities (also see Hacking 1992). One is the development of scientific knowledge that enables thinking about the creation of and/or intervention with phenomena. The other is the development of technological devices that produce the phenomena under study and/or allow for interventions with those phenomena. In scientific research, the development of these technological devices also involves the (qualitative or quantitative) detection of data (in the sense of Bogen and Woodward), both of the phenomena that are manifestations of the technological function (as mentioned under (1)) and/or of those that are manifestations of the ‘underlying’ phenomena (as mentioned under (2)).

[1] Engineering sciences are scientific practices, which must be distinguished from engineering practices. In Boon 2011a, I explain the character of the engineering sciences, and the epistemic relationship between these scientific practices and practices of engineering.

[2] The idea that phenomena sometimes are created is not new. Hacking (1983, 220) has argued that phenomena often are created by means of technological instruments, “which then become the centerpieces of theory.” He uses the notion of ‘creating phenomena’ to stress that many of the phenomena in physics do not exist outside of certain kinds of apparatus. Hacking suggests that they emerge by mere chance in experiments, or that they are predicted by the theory, after which they are experimentally produced. My paper points at yet another possibility. New phenomena are conceptualized by means of theories or otherwise, rather than deduced from them. Subsequently, the conception of a new phenomenon functions as an epistemic tool in the production of scientific knowledge about it, to such an extent that this tool enables its (technological) creation.


2.3 The Case of Paint

The above-mentioned roles of phenomena can be made somewhat more concrete by means of a very simple example, such as the scientific research and development of paint. The technological function(s) of paint include qualities such as protecting a surface, workability in its application, durability and esthetic qualities. The manifestations of these technological functions involve perceivable and/or quantifiable properties of paint such as its color, its viscosity, its speed of drying, its adherence to a surface, its smoothness, its shininess, its hardness, and the stability of these properties. Hence, these are the phenomena that manifest (or display) the technological function. Examples of manifestations of technological dysfunctions of paint are properties (i. e., phenomena of the first kind mentioned in section 2.2) such as the tendency to retain ripples, the increase of viscosity when applied at higher temperatures, the tendency to capture air-bubbles, the toxicity of the solvent, poor scratch-resistance, the formation of cracks in hardened paint, loss of color, and the tendency to turn yellowish under the influence of sunlight. Hence, for a technological artifact to perform its technological function(s), we aim at producing the phenomena that are manifestations of its proper functioning, and at preventing or changing the occurrence of those that are manifestations of its improper functioning. A common-sense approach to the improvement of technological functioning would be trial-and-error intervention with the technological artifact at hand, for instance, by systematically testing the effects of different kinds of solvents or pigments or filling materials in the case of paint. Sometimes this is considered a typical engineering approach.
In addition to the trial-and-error approach, the engineering sciences focus on creating and intervening with the phenomena that supposedly produce the proper and/or improper functioning of technological artifacts (i. e., phenomena in the material role of the second kind mentioned in section 2.2). Examples of such phenomena are the evaporation of solvent, the molecules responsible for the color of paint, the degradation of color-molecules under the influence of heat or light, chemical or physical properties of pigments, and properties such as viscosity, diffusivity, hydrophobicity and surface-tension of the substance. Hence, the engineering sciences aim at creating or intervening with the phenomena that are manifestations of technological (mal)functioning. That is, they aim to produce, change, control, or prevent these observable or measurable phenomena. Also, they aim at creating or intervening with ‘underlying’ phenomena that are held responsible for those just mentioned—that is, the phenomena that supposedly produce the proper or improper functioning of an artifact. Nevertheless, these scientific practices usually investigate phenomena of interest in ways that are very similar to the approaches of experimental practices in the natural sciences (e. g., Hacking 1983, 1992; Franklin 1986, 2009)—yet with the difference that the ‘ultimate’ purpose of these research practices is the phenomena and their technological production, rather than theories.

3. How Are Phenomena Conceptualized in the Engineering Sciences?

As already explained, scientific researchers in the engineering sciences, in aiming to contribute to the improvement of technological functions, usually think in terms of interventions with phenomena held responsible for the (mal)functioning of a technological device. Technological functions are physically embodied and exerted through technological artifacts such as (assemblies of) materials, processes, apparatus and instruments, which supposedly contain or generate the phenomena responsible for the functioning of these devices. Examples of improving the functioning of existing technological artifacts are: enhancing the energy efficiency of engines, preventing the production of side-products of chemical processes, reducing the degeneration of paint by sunlight, and improving the mechanical properties of biodegradable fibers used in medicine. These are examples of ways in which technological (mal)functioning manifests itself. Scientific research aims at discovering and explaining the causes or contributing factors of, e. g., the limitation of the efficiency, the formation of side-products, the degeneration of compounds, and the physical or chemical relationships between mechanical strength and biodegradability. These causes or contributing factors are what we call ‘phenomena’. Through first discovering or determining, and then explaining and/or modeling, these causes or contributing factors, scientists and engineers generate knowledge by means of which they may find ways of intervening that improve the technological artifact.

Additionally, the engineering sciences often aim at creating new ways of performing existing technological functions and even at creating newly imagined technological functions. This is done by aiming at the technological creation of phenomena that may not exist as yet, but which have been conceptualized in concert with imagining new technological possibilities. An example of the conceptualization of a new phenomenon is ‘artificial photosynthesis’. Scientific researchers imagine that artificial photosynthesis could make possible new ways of generating specific technological functions such as ‘producing electrical energy from sunlight’ and ‘producing fuels from carbon dioxide containing flue gasses and sunlight’. Other examples of technological functions that at some point have been imagined in concert with conceptualizing a phenomenon by means of which these functions could be produced are: measuring toxic levels of compound X in air (the technological function) by utilizing components of the biochemical pathways in lichen L sensitive to X (the phenomenon); separation of pollutant P from waste-water Z (the technological function) by means of a membrane that selectively carries P through its surface (the phenomenon); conversion of sunlight to electrical energy (technological function) by means of a ‘light-harvesting molecule’ (phenomenon); super-conductivity at relatively high temperatures (technological function) by means of a ceramic material that is super-conductive at high temperatures (phenomenon); and a chemical process that produces exclusively one of the isomeric forms of a drug that has chemical composition M (technological function) by means of a catalyst that converts compounds A and B exclusively to the desired isomeric form of M (phenomenon). In these examples, scientists have developed a (preliminary) conception of a phenomenon in view of its role in performing a technological function. Conceptualizing a phenomenon for performing technological functions doesn’t start from scratch, but utilizes and combines already existing concepts.
When struggling with a technological problem, existing concepts are crucial in structuring and articulating the preliminary ideas that come to mind, for instance, ideas concerning the kind of phenomenon by means of which we could possibly produce a new technological function or improve existing ones in ways that solve the problem at hand. Different kinds of concepts play a role. Firstly, there are established scientific concepts that guide the search for the kind of phenomenon we might be able to (technologically) utilize or intervene with in producing the desired technological function. Examples of such phenomena are ‘membrane transport’, ‘bio-toxicity of compounds’, ‘photosynthesis’, ‘catalytic reactions’, ‘energy-loss in electricity transport’ and ‘electricity-conduction by ceramics’. Additionally, there are concepts that enable thinking about physical and/or technological interventions with phenomena, often in connection with how a phenomenon may bring about (the manifestation of) a technological function. These latter concepts concern types of operations, that is, specific kinds of physical or technological interventions with phenomena. For instance, phenomena (objects, properties and processes) can be physically or technologically created, produced, reproduced, re-built (and repeated or re-created under other circumstances), isolated, separated, harvested, singled out, amplified, enhanced, composed, decomposed (in space, time, or matter), joined together, summed up, added up, transformed, transferred, converted, suppressed, repressed, restrained, and reversed. In brief, conceptualizing technologically producible phenomena involves utilizing scientific concepts of existing phenomena as well as concepts of types of operations.

4. How Do Scientific Concepts Enable Epistemic Uses?

In the previous section I have provided a brief overview of the engineering sciences, and in particular of the importance of what I called ‘conceptualizations’. I have argued that the conceptualization of phenomena is crucial for the possibility of thinking about them, in particular about how to determine, quantify, create, reproduce, control or otherwise intervene with them. Moreover, I have indicated how my view about the function of conceptualizations fits with the thesis stated in the introduction that scientific concepts are epistemic tools. Additionally, I have illustrated that conceptualizing a phenomenon for performing technological functions doesn’t start from scratch, but utilizes and combines already existing concepts. This makes room for a position according to which existing concepts play a vital role in an ongoing process of conceptualization, i. e., of conceptual change. I will now elaborate my views on this matter.

4.1 Concepts as Epistemic Tools

It needs to be explained how we get from perceptions (in daily life or experiments) to concepts of observable or unobservable phenomena, and why these concepts function as epistemic tools. I will propose


that the formation of concepts of phenomena involves adding epistemic content such as empirical and theoretical knowledge, analogies, and other concepts (also see Nersessian 2009). It is through this additional content that concepts of phenomena are formed. Moreover, as I will argue next, it is by means of this additional epistemic content that these concepts function as epistemic tools.

The notion of concepts as epistemic tools will be explained by starting from common-sense ideas about the use of concepts of perceivable objects, processes or properties in ordinary language, such as ‘apple’, ‘automobile’, ‘storm’, ‘tides’, ‘heavy’ and ‘fluidic’. When a child asks, “What is ‘apple’?” we can explain it by pointing at an apple. We can also explain it by specifying its features: apples can be eaten; they are round, red or green or yellow, sweet and sour, and crisp; they weigh about 200 grams; they are grown on trees; they can become rotten; etc. Additionally, an adult can use abstract concepts—i. e., different kinds of categories—for finding out more systematically what the word ‘apple’ means: Is it about an object, property or process? What kind of object (property or process) is it? How can we use an apple? What kinds of functions do apples have? How is the object (etc.) produced, or how does it come about? Is the object (etc.) stable or transient? Is it natural or artificial? Is it organic or inorganic? Which are its perceivable properties? What are its shape, size, weight, color, smell, taste, and texture? In this way, humans have learned to recognize an apple, and to use the word ‘apple’, as well as to think about doing things with apples. Moreover, we expect that the child has learned to recognize manifestations of apples by perceiving only some of their features (e. g., their smell or shape only). Also, we expect that the child will recognize uses of apples (e. g., in an apple-pie).
Furthermore, when grasping a concept of perceivable things, events or properties, humans are able to recognize representations of apples, for instance in texts (e. g., when the author speaks of a juicy, sweet and sour, crispy thing) or in pictures (e. g., in photographs, paintings and drawings). In addition, craftsmen such as the fruit grower or the cook are able to ask sensible questions about apples, and to think of interventions with growing apples that may change specific features. Why do apples get a musty taste? Can we grow apples that are bigger, sweeter, less sensitive to insect damage, keep longer, and taste of apricots? How should we store apples to keep them? Can we use apples as a thickener in home-made jam? Clearly, the concept of apples that craftsmen have in mind is epistemically richer than the understanding of apples that the child has. Or, putting it the other way round, by utilizing the concept for asking questions that are relevant in a specific application context, these craftsmen are enriching its epistemic content.

These examples aim to illustrate that concepts of things (or, more exactly, phenomena) play a role in diverse epistemic activities that concern, for instance, doing something with these things. I will suggest that thinking about things such as apples in the mentioned kinds of ways—which includes asking sensible questions about them—is guided and enabled by the concept of them that humans have in mind.

4.2 How Do Concepts of Perceivable Objects, Processes and Properties Enable Epistemic Use?

Philosophical accounts often have focused on the idea that concepts are definitions, which provide the necessary and sufficient conditions for applying the concept. On this account, the meaning of, say, ‘apple’ is given by specifying a conjunction of properties. If P1, P2, …, Pn are all the properties in the conjunction, then anything with all of the properties P1, …, Pn is an apple (also see Putnam 1970). Wittgenstein opposed this view, arguing that we learn the meaning of a concept by its use rather than by presenting its definition. While refraining from taking a position in this debate, I wish to emphasize that the ability to use a concept not only concerns its correct application, whether learned by means of a definition or by its use. Grasping a concept also involves the other mentioned kinds of epistemic uses. Putting it this way emphasizes the epistemic function of concepts. It is assumed that concepts enable the mentioned variety of epistemic activities. Therefore they can be called epistemic tools. Using a concept as a definition is just one way of being an epistemic tool.

The question remains why concepts enable the different kinds of epistemic uses just mentioned. I propose that this has to do with several characteristics of conceptualizing perceivable objects, properties and processes. In what follows, I will present a systematic account of the formation of concepts that aims at explaining how concepts enable and guide epistemic uses. Saying that it is ‘systematic’ means to emphasize that it is a philosophical account, rather than a historical or empirical description of how concepts are formed; nor is it a scientific theory, such


as a theory that describes the psychological mechanism of the formation or learning of concepts.

Above, it has been suggested that explaining or learning a concept involves two cognitive activities, namely, perceiving the object and conceptualizing it. I assume that explaining or learning the use of a concept is similar to how new concepts of perceivable objects are formed, and also to how already existing concepts are enriched or improved. This is why the given example of learning, explaining and enriching a concept is used for developing a systematic account of the formation of concepts that explains in what manner these concepts enable and guide different kinds of epistemic activities. The example of how ‘apple’ is explained illustrates that explaining a concept involves using categories—that is, more abstract concepts are used that enable one, for example, to articulate that ‘apple’ refers to an object (rather than to an event or a property, etc.), and that it is a fruit (rather than an animal, etc.). Categories are more abstract concepts, which are ‘applied to’ perceptions of things (objects or properties or processes). Another way of putting this is to say that perceptions are subsumed under specific kinds of concepts, which are applied because these categories suit the perceived thing and the general kinds of epistemic uses of the resulting concept.[3] In this way, more abstract concepts (such as ‘object’ or ‘fruit’) become part of the concept that is being formed (e. g., ‘apple’)—they are, so to speak, built into the concept. For the sake of simplicity, suppose that we start off by forming a concept of what we perceive by means of only the two kinds of abstract concepts (object and fruit) just mentioned. The idea defended here is that the use of these abstract concepts introduces conceptual and epistemic content that goes beyond what is perceived. This additional content enables epistemic uses, both in the further development of the concept and in thinking about, say, apples. Saying that A is an object informs us about properties that objects typically have, without our actually having seen that A has them. For this reason, calling something an object introduces knowledge about it, such as that it is solid; that it has a size, shape, weight, etc.; that it can be transported from one location to another; and that it has specific properties. Calling something a fruit tells us that it is organic, edible, tender, etc.

The conception thus formed enables diverse epistemic activities. It guides us in asking specific kinds of questions about the object under study, such as what are its size, shape, weight and taste, by means of which the content of the concept is developed further. In sum, conceptualization involves the use of categories by means of which we introduce conceptual and epistemic content that enables creative thinking and asking empirically testable questions about the object, because these categories articulate significant distinctions (e. g., a specific kind of entity or property), contrasts and dissimilarities, or analogies and similarities, which thus adds to what is merely empirically given when perceiving something. This is not to say that the abstract concepts applied in the formation of a concept are automatically adequate. It may very well turn out that we come to agree that the object is more like a kind of vegetable than a fruit, or that these things are apples albeit pear-shaped, or that apples tasting of apricots must be classified as a different kind of fruit. Nevertheless, the idea of concepts as epistemic tools accounts much better for the situation that a concept may turn out to be inadequate than does the idea that concepts are first and foremost definitions providing the sufficient and necessary conditions for their correct application. When considered as epistemic tools, concepts typically entail empirical knowledge characteristic of the object, as well as abstract conceptual content and (hypothetical) epistemic content. The abstract conceptual content is introduced by means of categories that we expect to suit our perceptions. The epistemic content is derived from the conceptual content and concerns aspects that can be empirically tested or determined.

[3] My account of concepts heavily leans on Kant’s ideas. My interpretation lines up with Allison’s (2004) interpretation of Kant’s transcendental idealism. Furthermore, my understanding and application of Kant’s ideas is deeply indebted to Neiman’s (1994) explanation of Kant’s philosophy of science, which she interprets in terms of his third critique. Also see Massimi (2008; 2011), Rouse (2011) and Boon (2009).
Imagine, for instance, that we call this object a fruit. Through applying this category, we are enabled to infer that this object must be sweet and sour. But this inference may be proven empirically wrong. The latter observation may either lead to a revision of the category ‘fruit’ (e. g., not all fruits are sweet and sour), or to a new category (e. g., veggiefruit), or to revising the conception of this object (e. g., it is not a fruit), etc. Hence, there is no warrant that the epistemic content that has been added to the concept by applying certain categories is true about the object. Instead, conceptual content, and the hypothetical epistemic content derived from it, enable investigation of the object—that is, this content enables articulating hypotheses and asking questions, by


means of which new things are learned about it, for instance, whether the epistemic content is correct about the object, and whether the concept sufficiently suits the object.

The idea of concepts as epistemic tools is meant to be an alternative to the idea that concepts of objects, events and properties are first and foremost definitions. This is not to say that I reject the idea that definitions can serve a purpose. Once a concept has been firmly established, important functions in ordinary language-use are made possible—for instance, to facilitate economy of words and refinement of meaning, to recognize an object or occurrence as of a particular kind, and to enable adequate descriptions and explanations of a situation. These kinds of uses require that language-users agree on the proper uses of concepts, which implies that concepts must have a definitional character as well. Accordingly, their use as a definition is one of the ways in which concepts function as epistemic tools. Yet, the disagreement between the traditional idea that the meanings of concepts are provided by definitions and the idea proposed here that definitions also function as epistemic tools concerns suppositions about the epistemic function of concepts. Philosophers may have specific assumptions on how definitions are or should be established. Significant to the idea of concepts as epistemic tools is that concepts entail conceptual and epistemic content transcending the mere empirical information given through perception. Conversely, the empirical tradition has been striving for an account of concepts as definitions that are strictly faithful to what is empirically given. In that account, a concept is a description or representation of, say, properties that every apple has. In other words, in traditional views the correctness of a concept involves its correctly describing or representing perceivable characteristics of the thing it defines.
However, the idea that the content of a concept is restricted to empirically correct descriptions or representations—e. g., descriptions of the characteristic perceivable features of the thing it is a concept of— cannot account for several of the epistemic uses of concepts. Most importantly, it cannot account for epistemic uses that are made possible by means of the conceptual content added when subsuming a perception under a category such as discussed above. Another important epistemic use of concepts is that they enable us to recognize that something is a representation of something else. Someone may object that it doesn’t require the concept of an apple to recognize that this drawing is a representation of it. Kant’s famous saying that “perceptions without concepts are blind, whereas concepts without perceptions are empty,” may help

234

Mieke Boon

in explaining my point. Consider, for instance, how humans are able to recognize a one-line drawing of an apple or a rabbit or a duck, say, in a Picasso. Imagine that we present a fox with such a drawing of a duck. Although the fox may be very hungry and would love to eat a duck, we can be pretty certain that the fox won't recognize a duck on this paper. Kant's insight applies as follows: Concepts of perceivable objects enable humans to recognize the object when looking at a 'representation' of it. More precisely, "when looking at a 'representation' of it" means "when humans look at what they are able to recognize as a representation of an object." The point is that concepts are the tools by means of which the ability of such recognition is exercised. Without concepts humans would respond similarly to other animals—they would not recognize the represented object. Hence, the concept of a duck or a rabbit enables recognizing descriptions or representations of them. So far, I have aimed to make plausible that the formation of concepts of perceivable objects, properties or processes involves the interplay between existing concepts and perception. In this way, heterogeneous content is put together, which is the content of the concept. The formation of a concept is described more systematically as follows:
(a) Conceptual content is added through the application of relevant categories. These categories can be very abstract, such as 'object' or 'property' or 'process', or more concrete, such as 'fruit'.
(b) Epistemic content is added empirically, that is, by means of perception. Yet, the epistemic content added to the concept of 'apple' as characteristic of apples by means of perception is selected by means of the mentioned categories; for instance, calling something a fruit brings more specific categories with it (e. g., taste and shape) that guide which empirical information we gather through perception (e. g., that its taste is sweet and sour, and that its shape is round), and thus become part of the concept. What is more, these specific categories also guide which information possibly given in our perceptions of apples we ignore when forming a concept.
It is proposed that concepts can function as epistemic tools because of this heterogeneous conceptual and epistemic content, which must be fitted together, thereby drawing coherent, consistent and relevant relationships by means of which the concept is developed into a whole. In order to clarify somewhat further why concepts thus understood enable epistemic uses, the notion of epistemic tools as designs will be proposed as a metaphor of how concepts are formed and why they enable epistemic uses.

4.3 Conceptualization of Properties in Experimental Practice: The Case of Elasticity

I side with Feest's idea of concepts of phenomena as epistemic tools (or 'research tools', as she calls them) for examining them in experimental set-ups. Her emphasis is on the interrelated activity of experimentation and concept formation. Yet, it needs to be explained how scientists get from an operational definition—which on her account is a description of significant aspects of a paradigmatic experiment—to a (preliminary) concept. Feest's (2010) account seems to suggest that preliminary concepts coincide with operational definitions. I will propose that the formation of concepts of unobservable or 'underlying' phenomena involves descriptions of significant aspects of paradigmatic experiments, as Feest (2010) suggests, but also involves adding epistemic content similar to how the concept of 'apple' is formed. Again, it is through this additional epistemic content that these concepts function as epistemic tools. Examples of phenomena in the engineering sciences that have been conceptualized by means of paradigmatic experiments are material properties such as 'elasticity', 'specific weight', 'viscosity', 'specific heat-content', 'melting-point', 'electrical resistance', 'thermal conductivity', 'magnetic permeability', 'physical hysteresis', 'crystallinity', 'refractivity', 'chemical affinity', 'wave-length', 'chemical diffusivity', 'solubility', 'electrical field strength', 'super-conductivity', and 'atomic force'. Each of these properties is related to paradigmatic experiments by means of which they were initially defined. Hooke's experimental set-up, for instance, in which he measured the extension of a spring as a function of the weight, can be regarded as a paradigmatic experiment by which 'elasticity' was operationally defined.
The description of the paradigmatic experiment is something like, ‘the reversible (and proportional) extension of a spring by a weight’, which is the observable phenomenon. The preliminary operational definition of elasticity derived from it is roughly: ‘the qualitative and quantitative property of a spring to reverse its stretch when extended by a weight’. This definition enables proposing a more abstract definition of elasticity as: ‘the measurable property of an object to reverse a deformation imposed by a force’. Hence, by means of the description of a paradigmatic experiment researchers infer to an operational definition of a phenomenon. In turn, this definition can be applied to situations different from the paradigmatic experimental set-up: In any case

where reversible deformation of an object occurs, we attribute the property ‘elasticity’ to the object and assume that it is quantifiable, independent of the kind of object, the kind of matter and the kind of force involved. In this way, such concepts acquire a definitional character, which enables their epistemic uses in new situations. This example of conceptualizing properties such as ‘elasticity’ illustrates that concepts of phenomena expand on the operational definitions by means of which they were originally formed. Similar to the case of ‘apple’, conceptualizing ‘elasticity’ involves subsuming descriptions of observations under a more abstract concept. As a result, the concept ‘elasticity’ refers to a qualitative and quantifiable property (rather than an object or process).
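The move from Hooke's paradigmatic experiment to the abstract definition of elasticity can be glossed in formulas. The following is only a sketch in modern notation (not Hooke's own, and not part of Boon's text):

```latex
% Operational level: Hooke's spring experiment.
% The extension x is proportional to the applied weight F and
% vanishes again when the weight is removed (reversibility):
F = k\,x, \qquad k > 0 \quad \text{(a constant characterizing this particular spring)}

% Abstracted level: 'elasticity' as a measurable property of any object,
% independent of the kind of object, matter, and force involved,
% e.g. in the stress--strain form
\sigma = E\,\varepsilon,
% where the elastic modulus E quantifies the property itself.
```

The shift from the constant $k$, tied to one experimental set-up, to the modulus $E$, attributed to materials in general, mirrors the shift from operational definition to abstract, quantifiable concept described above.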

4.4 Scientific Concepts as Epistemic Tools in the Sense of a Design

The question at hand is how scientific concepts of (novel or unobservable) phenomena are formed; how it is possible that these concepts guide the generation of knowledge about phenomena that cannot be perceived in any interesting sense of that word, and which often concern (aspects of) phenomena that do not even exist as yet. So far, it has been proposed to understand the functioning of scientific concepts of phenomena as epistemic tools. Accordingly, conceptualization must be considered in terms of making epistemic tools. Scientific concepts of phenomena are epistemic tools because they enable thinking about determining, creating, reproducing and intervening with the phenomenon under study. In addition, they point at research questions relevant to the ongoing production of knowledge about the phenomenon. In other words, scientific concepts are epistemic tools because they enable the mentioned kinds of epistemic activities. Here, it is proposed to consider these epistemic tools as similar to how we understand a design. Accordingly, the formation of a concept is considered as making a design. The metaphor of epistemic tools as designs will be utilized in developing an account of how we acquire knowledge of phenomena that cannot be observed in any direct manner, and of phenomena that may not even exist.

A design is, first of all, an epistemic tool for an object, rather than a definition or a representation of it. A design is not produced by representing an object that is observed or has been discovered somewhere. If, for instance, someone were to draw the Deposition Church of the Moscow Kremlin, we would not consider this representation a design.4 Instead, a design is made by constructing a representation, either of an object that does not exist as yet, or of what an existing object will look like when it undergoes certain structural interventions. A design must be constructed such that it can function as an epistemic tool in the sense that it enables thinking about how to build a non-existing object or how to intervene with an existing object in order to change it. Additionally, the design of an object functions as an epistemic tool because it enables us to ask significant questions about the designed object, thus guiding the refinement of our knowledge and understanding of it. Scientific concepts of phenomena function in a similar fashion. As will be explained in more depth, the (partly hypothetical) epistemic content introduced through their conceptualization enables us to ask research questions which guide the refinement of our knowledge and understanding of the phenomenon.

Another important characteristic of a design is that it puts together heterogeneous aspects. In fact, the ultimate challenge of designing is not a correct representation of the designed object, but an adequate fit of heterogeneous epistemic content concerning 'real world aspects' of the object, such as its structure, the materials used, the construction techniques needed, the measures taken for its safety and robustness, etc. In designing an object, these different kinds of aspects must be chosen and molded such that they fit together. They must both serve the epistemic functions of the design and agree with the requested practical functions of the designed object.5 As a result of its heterogeneous epistemic content, the relationships drawn between heterogeneous parts of the design, and the relationships drawn between the epistemic content and 'real world aspects', the design can be used as an epistemic tool. It enables thinking about how to actually build (= create) and intervene with the designed object. Also, it may point at aspects that need to be investigated or elaborated in more depth. As a result of this latter role, a design is not only an epistemic tool for thinking about the designed object—it is also an epistemic tool of its own making.6

The notion of scientific concepts as epistemic tools in the sense of a design is also suited to explaining why a scientific concept is accepted, and why at some point scientific practices may discard it. The adequacy of a design does not first and foremost consist in how well it depicts the resulting object, but in its capacity to facilitate the desired epistemic functions. Similarly, a scientific concept of a phenomenon is adopted because it enables thinking about the phenomenon and facilitates investigating it. At the same time, using a scientific concept does not necessarily imply that it is about an existing phenomenon. Rather, a scientific concept makes the phenomenon thinkable in ways that agree with relevant empirical, technological and scientific knowledge of the world (see also Rouse 2011). Conversely, we discard a scientific concept when it appears to disagree too much or when it has become redundant. Similar to how a design enables thinking about aspects of the world that are unthinkable without it, scientific concepts enable thinking about phenomena that may not exist, and that cannot be thought of without them. What is more, they enable thinking about phenomena that cannot be perceived directly. Through scientific concepts we get an epistemic handle for investigating and intervening with the world. In this account, we avoid the idea that the successful epistemic functioning of scientific concepts can only be explained by the apparently miraculous fact that concepts correctly depict phenomena. Instead, scientific practices have the capacity to make scientific concepts that function as epistemic tools, not because scientific concepts correctly depict phenomena, but because their functioning is more like the functioning of a design.

There is, however, a conceptual pitfall in the use of 'design' as a metaphor for describing scientific concepts as epistemic tools. In the case of a design, once the object is built, the real object can be visually compared with the design. At that point, the design is reduced to a picture of the real object—which is not to say that the design has entirely lost its epistemic function, as it may very well be reused in thinking about the real object, such as in making calculations about specific performances, and in thinking about improving its performance. A scientific concept, on the other hand, cannot be directly compared with the phenomenon. Therefore, the comparison falls short in this respect: An established scientific concept does not become a representation of the phenomenon similar to how a design 'finally' becomes a picture of the real object.

4 Nevertheless, as has been argued, concepts—such as the concept of a cathedral—enable humans to recognize a drawing of it as a representation of a cathedral.
5 When the idea of 'matching heterogeneous content in designing an object' is applied to how concepts of phenomena are formed, it resembles Hacking's (1992) idea of the self-vindication of scientific practices.
6 The same point has been made about scientific modeling of a phenomenon: "… models function also as epistemic tools of their own making. Scientists develop a model step-by-step, building in new aspects by which the content of the model becomes richer and more advanced. As an epistemic tool it 'affords and limits' also its own further development …" (Knuuttila/Boon 2011, 699).

5. Conclusions

I have proposed to consider scientific concepts of phenomena in the engineering sciences as epistemic tools in the sense of a design for them, rather than a definition or picture of them. Accordingly, we can think of conceptualizing a phenomenon as the activity of designing an epistemic tool. This activity involves adequately choosing and matching heterogeneous epistemic content concerning 'real world aspects' of the phenomenon relevant to its determination, behavior and functioning, such as: its empirical manifestation, its matter, its physical properties, and its physical connections with technological devices. This is done by applying more abstract concepts (i. e., categories) to perceptions (or empirical data)—in other words, perceptions (or empirical data) are subsumed under categories. As has been explained, a scientific concept thus produced entails empirical, conceptual and epistemic content relevant to the (imagined) phenomenon. It also entails relationships amongst this content, and between its content and 'real world aspects'.

The question might be raised, however, how the presented account of scientific concepts of phenomena explains their functioning as epistemic tools. In other words, why does the heterogeneous content drawn together in the concept enable investigating how to actually create and intervene with the conceptualized phenomenon? In particular, how can my analysis of concepts like 'apple', which have been formed on the basis of observed phenomena and empirical data, be transferred to the formation of scientific concepts of unobservable phenomena? In this paper I have suggested that through the conceptualization of a phenomenon epistemic content is introduced that enables us to ask research questions which guide the refinement of our knowledge of the phenomenon, and also our understanding of how to create it or intervene with it.
Firstly, abstract conceptual content is added through the application of abstract categories such as ‘object’, ‘process’, or ‘property’. In this way, the unobservable phenomenon is considered as an object, process, or property. The above example of the concept of elasticity illustrates that abstract categories introduce conceptual and epistemic content that is partly hypothetical. Again, calling something an object introduces knowledge about the purported phenomenon, such as that it is


solid; that it has a size, shape, weight, etc.; and that it can be transported from one location to another. Conversely, when calling something a property we know that it cannot be transported, but that we may be able to produce it in other objects, or that we may change (e. g., improve) it in a quantitative sense. It has been argued that conceptual and epistemic content thus added to the concept enables specific ways of thinking and asking questions about the phenomenon—for instance, about how it is produced, and whether it can be reproduced in other kinds of materials (in the case of a property such as 'elasticity'), or whether it can be transported (in the case of an object). Importantly, applying the category 'object', 'process', or 'property' in conceptualizing an observable phenomenon differs from applying these categories for interpreting empirical findings of experiments, thus inferring to unobservable phenomena. On the one hand, these abstract conceptual notions guide the same kind of thinking and asking questions about the phenomenon. At the same time, we may be wrong in thinking that the unobservable phenomenon has all the aspects that we attribute to it due to calling it an object, etc. Therefore, by calling it an object, we have introduced hypothetical epistemic content to the scientific concept. This hypothetical epistemic content enables epistemic uses—for instance, concluding (e. g., by means of empirical tests) that the phenomenon does not accord in every respect with the ideas introduced by means of this abstract category (see Knuuttila/Boon 2011 for an example).
The addition of this content enables epistemic uses similar to those just mentioned, namely, adding partly hypothetical epistemic content and asking sensible questions that guide further research. Finally, another class of concepts that may be used in the conceptualization of concepts of unobservable phenomena concerns types of operations (such as those mentioned in section 3), which enable creative thinking about intervening with them or creating them. Carnot (1824), for instance, conceived of reversing the processes that he had conceptualized when explaining how heat produces motive power (also see Knuuttila/Boon 2011). Similarly, concepts of phenomena such as 'photosynthesis' enable contemporary scientists to creatively think about technological possibilities, such as artificially producing useful parts of


this phenomenon for technological applications (e. g., Pandit et al. 2006; Huskens et al. 2010); and creating artificial molecules for harvesting light in ways that could become technologically applicable (e. g., Savolainen et al. 2008). If my account of the formation of scientific concepts of phenomena in the engineering sciences is correct, it implies that scientific concepts of phenomena do not present us with certain knowledge about what the world is like. I have aimed to make plausible that without adding conceptual content to empirical data in ways such as explained in this paper, we would never be able to get beyond what is empirically given. This idea agrees with Kant's important insight that the capacity to produce knowledge of the world requires the capacity to go beyond experience (also see Neiman 1994, 59). This idea also agrees with Rouse (2011), who challenges the assumption that the aim of science is true or empirically adequate theories. Rouse argues in favor of an image of science that would place conceptual articulation at the heart of the scientific enterprise. According to him, "[c]onceptual articulation enables us to entertain and express previously unthinkable thoughts, and to understand and talk about previously unarticulated aspects of the world." Aiming at strict certainty—that is, avoiding any content that goes beyond what is empirically given—would drastically reduce our ability to develop knowledge that enables us to think about interventions with the world that are unthinkable without this knowledge.

Acknowledgments

This research was supported by a grant from the Netherlands Organisation for Scientific Research (NWO Vidi grant). I am very grateful to Uljana Feest for her numerous suggestions and critical comments. Also, I wish to thank Friedrich Steinle and Henk Procee for their constructive contributions.

Reference List

Allison, H. E. (2004), Kant's Transcendental Idealism: An Interpretation and Defense, New Haven / London: Yale University Press.
Bailer-Jones, D. M. (2009), Scientific Models in Philosophy of Science, Pittsburgh: University of Pittsburgh Press.


Bogen, J. / Woodward, J. (1988), "Saving the Phenomena." In: The Philosophical Review 97 (2), 303 – 352.
Boon, M. (2004), "Technological Instruments in Scientific Experimentation." In: International Studies in the Philosophy of Science 18 (2&3), 221 – 230.
Boon, M. (2009), "Understanding in the Engineering Sciences: Interpretative Structures." In: Regt, H. W. de / Leonelli, S. / Eigner, K. (eds.), Scientific Understanding: Philosophical Perspectives, Pittsburgh: Pittsburgh University Press, 249 – 270.
Boon, M. / Knuuttila, T. T. (2009), "Models as Epistemic Tools in the Engineering Sciences: A Pragmatic Approach." In: Meijers, A. (ed.), Handbook of the Philosophy of Technological Sciences, Amsterdam: Elsevier Science, 687 – 719.
Boon, M. (2011a), "In Defence of the Engineering Sciences: On the Epistemological Relations between Science and Technology." In: Techné: Research in Philosophy and Technology 15 (1), 49 – 71.
Boon, M. (2011b), "Two Styles of Reasoning in Scientific Practices: Experimental and Mathematical Traditions." In: International Studies in the Philosophy of Science 25 (3), 255 – 278.
Boon, M. (2012), "Understanding Scientific Practices: The Role of Robustness Notions." In: Soler, L. et al. (eds.), Characterizing the Robustness of Science: After the Practice Turn of the Philosophy of Science (Boston Studies in the Philosophy of Science 292), Dordrecht: Springer Netherlands, 289 – 316.
Carnot, S. (1986 [1824]), Reflections on the Motive Power of Fire, translated and edited by R. Fox, New York: Manchester University Press.
Cartwright, N. (1983), How the Laws of Physics Lie, Oxford: Clarendon Press / Oxford University Press.
Cartwright, N. (1989), Nature's Capacities and Their Measurement, Oxford: Clarendon Press / Oxford University Press.
Cartwright, N. (1999), The Dappled World: A Study of the Boundaries of Science, Cambridge: Cambridge University Press.
Duhem, P. (1962 [1906]), The Aim and Structure of Physical Theory, English translation by P. P. Wiener, New York: Atheneum.
Fraassen, B. C. van (1980), The Scientific Image, Oxford: Clarendon Press.
Franklin, A. (1986), The Neglect of Experiment, Cambridge: Cambridge University Press.
Franklin, A. (2009), "Experiment in Physics." In: The Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/entries/physics-experiment/).
Feest, U. (2008), "Concepts as Tools in the Experimental Generation of Knowledge in Psychology." In: Feest, U. et al. (eds.), Generating Experimental Knowledge, Berlin: MPI-Preprint 340, 19 – 26.
Feest, U. (2010), "Concepts as Tools in the Experimental Generation of Knowledge in Cognitive Neuropsychology." In: Spontaneous Generations: A Journal for the History and Philosophy of Science 4 (1), 173 – 190.
Feest, U. (2011), "What Exactly Is Stabilized When Phenomena Are Stabilized?" In: Synthese 182 (1), 57 – 71.
Glymour, B. (2000), "Data and Phenomena: A Distinction Reconsidered." In: Erkenntnis 52, 29 – 37.


Hacking, I. (1983), Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, Cambridge: Cambridge University Press.
Hacking, I. (1992), "The Self-Vindication of the Laboratory Sciences." In: Pickering, A. (ed.), Science as Practice and Culture, Chicago: University of Chicago Press, 29 – 64.
Huskens, J. et al. (2010), "Nanostructured Solar-to-Fuel Devices." Research proposal in the FOM/ALW-program Towards BioSolar Cells.
Kant, I. (1998 [1787]), Critique of Pure Reason, Cambridge: Cambridge University Press.
Knuuttila, T. T. / Boon, M. (2011), "How Do Models Give Us Knowledge? The Case of Carnot's Ideal Heat Engine." In: European Journal for Philosophy of Science 1 (3), 309 – 334.
Massimi, M. (2008), "Why There Are No Ready-Made Phenomena: What Philosophers of Science Should Learn from Kant." In: Royal Institute of Philosophy Supplement 83 (63), 1 – 35.
Massimi, M. (2011), "From Data to Phenomena: A Kantian Stance." In: Synthese 182 (1), 101 – 116.
McAllister, J. W. (1997), "Phenomena and Patterns in Data Sets." In: Erkenntnis 47, 217 – 228.
McAllister, J. W. (2011), "What Do Patterns in Empirical Data Tell Us about the Structure of the World?" In: Synthese 182 (1), 73 – 87.
Neiman, S. (1994), The Unity of Reason: Rereading Kant, Oxford: Oxford University Press.
Nersessian, N. (2009), Creating Scientific Concepts, Cambridge, MA: MIT Press.
Pandit, A. / Groot, H. de / Holzwarth, A. (eds.) (2006), Harnessing Solar Energy for the Production of Clean Fuel: White Paper by an International Task Force under the Auspices of the European Science Foundation (http://ssnmr.leidenuniv.nl/files/ssnmr/CleanSolarFuels.pdf).
Putnam, H. (1970), "Is Semantics Possible?" Reprinted in: Margolis, E. / Laurence, S. (eds.) (1999), Concepts: Core Readings, 177 – 188.
Rheinberger, H.-J. (1997), Toward a History of Epistemic Things, Stanford: Stanford University Press.
Rouse, J. (2011), "Articulating the World: Experimental Systems and Conceptual Understanding." In: International Studies in the Philosophy of Science 25 (3), 243 – 255.
Savolainen, J. et al. (2008), "Controlling the Efficiency of an Artificial Light-Harvesting Complex." In: Proceedings of the National Academy of Sciences of the United States of America 105 (22), 7641 – 7646.
Woodward, J. F. (2000), "Explanation and Invariance in the Special Sciences." In: British Journal for the Philosophy of Science 51, 197 – 254.
Woodward, J. F. (2011), "Data and Phenomena: A Restatement and Defense." In: Synthese 182 (1), 165 – 179.

Modeling Practices in Conceptual Innovation:
An Ethnographic Study of a Neural Engineering Research Laboratory

Nancy J. Nersessian

A concept is not an isolated, ossified changeless formation, but an active part of the intellectual process, constantly engaged in serving communication, understanding, and problem solving.
Lev Vygotsky, Language and Thought

1. Introduction

Vygotsky's statement captures the idea that mundane concepts are dynamic and socio-cultural in nature. As such, they are neither completely fixed units of representation nor solely mental representations, but arise, develop and live in the interactions among the people that create and use them. This idea is quite compatible with the notion of concepts as participants in the investigative practices of scientists.1 As much research has demonstrated, concepts do not arise fully formed in the head of a scientist but are created in historical processes, which can extend for considerable periods and even span generations of scientists. As I have argued previously (Nersessian 1984; 2008), novel scientific concepts arise from the interplay of attempts to solve specific problems, use of conceptual, material and analytical resources provided by the problem situation, and often through model-based reasoning processes. In such reasoning processes, models are dynamical constructions through which scientists make inferences and solve problems that sometimes require conceptual innovation and change. In the conceptual modeling practices I have studied,

1 It is also compatible with the view of "concepts in use" articulated by Kindi in contrast with the standard philosophical notion of concepts as solely mental entities or abstract objects (Kindi in this volume).


analogical, visual, and simulative processes are used incrementally to construct models that embody various constraints drawn from the domain under investigation (the target), from analogical source domain(s) and, importantly, from constraints that arise in the constructed model itself, which can lead the reasoner towards a problem solution. Nersessian 2008 details how novel concepts can arise from this kind of 'bootstrapping process' in which hybrid models that abstract and integrate constraints from both the domain of the target problem and selected analogical source domains are constructed, analyzed, and evaluated incrementally towards the solution of the target problem. One of the most interesting aspects of this process is that in abstracting and integrating constraints from diverse domains (including constraints that arise from the models themselves), heretofore unrepresented structures or behaviors can emerge and lead to the formation of novel concepts. Although it has long been known that analogy plays an important role in creating novel concepts, all the cases I have examined from several data sources—historical, think-aloud protocols, ethnographic studies—point to a significant facet of analogy in the modeling practices of scientists that is neglected in both the philosophical and cognitive science literatures: Often in cutting-edge research, there is no ready-to-hand problem solution that can be retrieved and adapted analogically from a source domain. Rather, analogical domains only provide some constraints that are incorporated into models which are constructed in accord with the epistemic goals of the scientist explicitly to serve as analogical sources for the target domain. That is, the constructed model is built explicitly to provide a comparison to the target phenomena based on analogy.
Thus, the core of the problem-solving process consists in building models that embody constraints from the target phenomena and possible analogical source domain(s), solving problems in the constructed models, and then transferring the solution as a hypothesis to the target phenomena. This point will be elaborated in a fascinating case of conceptual innovation that emerged during an ethnographic study my research group was conducting. Until recently, research into scientific concepts has drawn exclusively from historical records. Now observational and ethnographic studies of concepts in use and development are being added to the mix and can increase significantly our understanding of their role in investigative practices. For several years, I have been conducting ethnographic research, which in part aims to investigate how concepts are used, created, and articulated in research laboratories in the bio-engineering sciences. These


frontier areas are interesting for investigating the role of concepts in practice because the nature of the research requires some measure of interdisciplinary synthesis, and thus such areas are likely to provide a good source for cases of concept transfer and adaptation and, possibly, the formation of novel concepts. Further, during the period of our investigation, practices in these labs did indeed demonstrate the centrality of modeling to conceptual innovation. However, their modeling practices include not only conceptual models, but physical and computational models as well. Thus the ethnographic studies serve to extend historical accounts of conceptual innovation. Extending the account of model-based reasoning to encompass these kinds of models was the primary reason for venturing into a program of ethnographic research. We know that physical models have been used throughout the history of science (de Chadarevian/Hopwood 2004) and many sciences now make extensive use of computational models. Although historical records might note that such models were developed, the archival records of these artifacts are scant, as are detailed records of how they were made, of the various versions and considerations that went into their making, and of what they afford as embodied practices. As for computational models, records of the processes through which they were developed are even more scant, and consequently most of the philosophical literature focuses on the representational relations between the completed model and the world. Ethnographic research can be a valuable means for investigating model-based research in action. In this paper I will develop a case, drawn from ethnographic studies of bioengineering research labs, that can help us understand how concepts are both generated by investigative practices of simulation modeling—physical and computational—and generative of such practices.

2. Methods: Cognitive-Historical Ethnography

Science studies researchers are most familiar with ethnography as a means of investigating the social and material practices of scientists. Over the last 20 years, researchers in cognitive science have adapted ethnographic methods from anthropology to study cognitive processes, such as reasoning and problem solving, in naturally situated practices (Hutchins 1995; Hollan/Hutchins/Kirsch 2000; Goodwin 1995; Lave 1988). In line with this research, we conducted „cognitive ethnographies“ (Hutchins 1995) of bio-engineering sciences research laboratories. Since 2001,
I have been leading an interdisciplinary research group that has been investigating cognitive and learning practices in five research labs in bioengineering fields: tissue engineering, neural engineering, bio-robotics, and, in on-going data collection, two in integrative systems biology, one that does only computational modeling and one that conducts experiments as well as modeling. Engineering sciences are emerging interdisciplinary fields where basic scientific research is conducted in the context of complex application problems. A major objective of our research is to develop integrative accounts of problem-solving practices as embedded in social, cultural, and material contexts. Physical or computational modeling is the principal means through which research problems are addressed. The tissue and neural engineering labs we studied between 2001 and 2008 both construct, primarily, physical models that serve as a means of conceptual and experimental exploration. Issues of control and, often, of ethics make it impossible to experiment on the target in vivo phenomena, and so research in these labs is conducted by means of what they call simulation ‘devices’—in vitro physical models that are designed and built to serve as structural, behavioral, or functional analogs to selected aspects of in vivo phenomena. These devices participate in experimental research in various configurations called ‘model-systems’. As one researcher put it: „I think you would be safe to use that [notion] as the integrated nature, the biological aspect coming together with the engineering aspect, so it’s a multifaceted model system.“ Research is conducted with these in vitro devices and outcomes are transferred as candidate understandings and hypotheses to the in vivo phenomena. That is, a simulation device is designed „to predict what is going to happen in a system [in vivo]. Like people use mathematical models … to predict what is going to happen in a mechanical system?
Well, this [model-system she was designing] is an experimental model that predicts—or at least you hope it predicts—what will happen in real life.“ Intensive data collection was conducted in each laboratory for two years, with follow-up of the participants, their research, and questions pertaining to our research for an additional two years. Several members of our research group became participant observers of the day-to-day practices in each lab. The ethnographic part of the study (observations and open, unstructured interviews) sought to uncover the activities, tools, and meaning-making that support research as these are situated in the ongoing practices of the community. We took field notes on our observations, audio-taped interviews, and video- and audio-taped research
meetings (full transcriptions are completed for 148 interviews and 40 research meetings). As a group we estimate our (six) ethnographers made over 800 hours of field observations. Early observations directed our attention to the custom-built simulation models as ‘hubs’ for interlocking the cognitive and cultural dimensions of research. Because of this, the research meetings, though useful, assumed a lesser importance than they have in other research on cognitive practices in laboratories (see, esp., Dunbar 1995). We needed, rather, to elicit from researchers their understanding of and perceived relation to the simulation artifacts, and to see how these functioned within the life of the labs; such questions were better addressed through interviewing and extensive field observation. Significantly, these laboratories are evolving systems, where the custom-built technologies are designed and redesigned in the context of an experiment or from one research project to another. Researchers (who are mostly students) and simulation artifacts have intersecting historical trajectories. To capture this and other historical dimensions of research in these laboratories we also used interpretive methods of cognitive-historical analysis (Nersessian 1992; 1995; 2008). In this investigation, cognitive-historical analysis examines historical records to advance an understanding of the integrated cognitive-social-cultural nature of research practices. Data collection for this part of our study included publications, grant proposals, dissertation proposals, PowerPoint presentations, laboratory notebooks, emails, technological artifacts, and interviews on lab history.

3. A Case Study of Conceptual Innovation in Neural Engineering

In this paper I develop a case from the neural engineering „Lab D“ in which we collected data over four years. It involves investigative practices of both physical and computational modeling as a means of creating basic scientific understanding, and the interaction of these in a rather spectacular case of conceptual innovation. This case involves the research projects of three graduate student researchers who were brought together through the development of a computational model of an in vitro model—what might be considered a second order model—constructed initially to understand what they were calling „burst“ phenomena in a physical simulation model—a cultured network of living neurons, locally
called „the dish,“ which is the focal model-system designed by the lab. Novel insights about the in silico (computational) dish were mapped and transferred to the in vitro dish and led to the development of what could prove to be a significant conceptual innovation in neuroscience. This case demonstrates the interactions of concepts and modeling practices that can lead to conceptual innovation through transfer of concepts as well as novel concept formation. Although there is overlap in the research being conducted in the lab in the period of interest, I divide the research into phases for clarity of exposition.

3.1 Phase I: „Playing with the Dish“

Lab D had just begun its existence when we started collecting data. The Lab director („D6“) was a new assistant professor who had spent an extended postdoctoral period developing the main technologies he would need to conduct research on living neural networks. He had been interested in computational neural network modeling as an undergraduate, and during graduate school in biochemistry he continued „moonlighting as a cognitive scientist“: reading, attending conferences, and doing neural network modeling for fun, plus taking courses on the psychobiology of learning and memory. The current neuroscience paradigm for studying the fundamental properties of living neurons used single-cell recordings. D6 believed that to study learning there needed to be a way to study the properties of networks of neurons, since it is networks that learn in the brain. He recounted to us that somewhere around 1990 (in the middle of graduate school, where he was engaged in what he felt was uninteresting research), he had the idea that „perhaps you could make a cell culture system that learns.“ Such a culture would more closely model learning in the brain, which is a network phenomenon, and also enable emergent properties to arise. Learning requires feedback, so the in vitro system would need to have sensory input or „embodiment“ of some kind. Having read the proceedings of a conference about the simulation of adaptive behavior in computational animals or in robots using the computer as the „brain,“ which were called „animats“ by that community, he thought: „hey, you could do this in vitro—have an animat that is controlled by neurons and somehow embody it.“ Lab D was founded (twelve years later) to pursue his insight and the general hypothesis that advances could be made in the overarching problem of understanding the mechanisms of learning in the brain by investigating the network properties of living neurons. Figuring out the control structure for supervised learning in the living network, that is, how the dish could be trained to control the embodiment systematically using feedback, posed a significant and multifaceted problem, the solution to which would involve conceptual innovation. The case developed here is sketched in Figure 1, which indicates what each of the three researchers was doing during the three-year period of interest. At the time we were collecting data, we had no foreknowledge that we would be capturing what could prove to be highly significant conceptual innovations for the field. Fortunately, we had collected sufficient and relevant data as the process was unfolding and then conducted follow-up interviews with the Lab D members as their major publications and dissertations on this research were being written. The three graduate students involved in the case were all recruited within a few months of one another. Much of the first year was directed towards constructing the dish model-systems and developing technology and software to interface with the neuron culture and record dish activity—what researchers called „talking to the dish.“ Building the in vitro dish model involves extracting cortical neurons from embryonic rats, dissociating their connections, and plating them (15 – 60 K, depending on the desired kind of network) on a specially designed set of 64 electrodes called a multi-electrode array (MEA) where the neurons regenerate connections to become a network. The design of the dish model-system incorporates constraints from neurobiology, chemistry, and electrical engineering. Given the technical challenges of creating the dish, keeping it alive for extended periods (as long as two years), and controlling it, the group decided to start with a simple model of a single layer of dissociated cortical neurons.
The dish model-system was constructed to provide a means of exploring whether learning could be induced in a system of neurons with just the network properties of the brain, abstracted from other brain structures. What the dish models was a subject of on-going discussion among the lab members. Some maintained that the dish is „a model of cortical neurons,“ while others claimed it „is a model of development [of the brain],“ and, when pressed, some retreated to „it may just be a model of itself.“ However, all agreed with D6’s belief that studying the dish will yield understanding of the basic workings of network-level cortical neurons: First of all, it is a simplified model; I say that because the model is not—it’s artificial, it’s not how it is in the brain. But I think that the model would answer some basic questions, because the way the neurons interact should be

Figure 1. Our representation of the approximate time line of the research leading to the conceptual innovations and development of the control structure for supervised learning with the robotic and computational „embodiments“ of the in vitro dish (years 2 – 4 of the existence of Lab D). The dashed line represents the period after D11 moved back into the main part of physical space of the lab and all three researchers began to actively collaborate on exploiting the findings stemming from the in silico dish.

the same whether it’s inside or outside the brain … I think the same rules apply.

The dish model-system is designed to provide a basic understanding of how neurons communicate and process information such that learning takes place. The intention of the director continues to be that after developing this understanding, the lab will move on to building and investigating more complex models such as „studying cultures with different brain parts mixed together, or specific three-dimensional pieces that are put together.“ What is of interest for the case I develop is that in this early period, the dish was an object of interest in its own right. When this research started, there were no established models of neuron communication. This pioneering research began with importing some concepts from single neuron studies to start to develop an understanding of the dish and work towards the goal of getting it to learn. As an indicator of learning, they assumed the standard psychology concept, which the director (D6) stated is „a lasting change in behavior resulting from experience.“ Various interfaces were developed to be able to record and stimulate (i. e., provide experiences to) the dish, including a suite of software programs they called MEAbench and two forms of embodiment: computationally simulated „animat worlds“ (extending the computational concept of animat to include simulation worlds in which the behavior of the computational creature is determined by being connected to living neural networks), and robotic devices („hybrots,“ which stands for „hybrid robots“) that could be connected to the dish. Both embodiments enabled closed-loop feedback experiments. They operationalized learning in terms of what is known in neuroscience as the „Hebbian“ notion of learning as plasticity (basically, changing the brain by adding or removing neural connections or adding new cells in response to experience), using the mathematical formulation known as the Hebbian Rule (basically, „neurons that fire together wire together“) as a guide.
As D4 recounted later, from Hebb’s postulate—which talks about learning between two neurons— we thought our data will show that something has to be added to the known equation in order for it to manifest in a population of neurons. Our idea was to figure out that manifestation … So it has gone from what Hebb said for two neurons to what would translate into a network.
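Hebb’s postulate, which the group used as a guide, has a standard rate-based textbook form for a single synapse. The following is a gloss in that standard notation, not the lab’s own equation, and the network-level term D4 alludes to is left unspecified:

```latex
% Rate-based Hebbian Rule for the synapse from neuron i to neuron j:
% the weight grows in proportion to the correlated activity of the
% two neurons ("neurons that fire together wire together").
\Delta w_{ij} = \eta \, x_i \, x_j
% \eta : learning rate;  x_i , x_j : activities (firing rates) of neurons i and j.
```

D4’s point is that some additional term would be needed for this pairwise rule to manifest in the recorded activity of a whole population.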

For recording and displaying the dish activity, they transferred and modified the notion of a spike from single cell electrophysiology. There it refers to the electrical trace left behind when an individual neuron fires

Figure 2. The MEAscope per-channel visualization of in vitro dish activity showing spontaneous bursting across the channels of the dish. Spontaneous bursting activity is represented by the spikes appearing in the channels. A relatively ‘quiet’ dish would have no spikes in the channels, with all channels looking closer to channel 15.

and is part of a well-established understanding that in neural firing there is a steep jump in voltage potential as the neuron de-polarizes and a proportional drop in potential as it recovers. In the multicellular case of the dish, the researchers estimate that the electrical activity on a single electrode comes from an average of three to five neurons, so what records as a spike can come from several neurons firing together, and it is impossible to differentiate among the firing of one or of many. The group developed a software model of a spike that identifies MEA dish spikes according to specified criteria and keeps a record of the electrical activity immediately around the spike. Open-loop electrophysiology research begins with these filtered spike data, which are represented visually in the format of an oscilloscope by the MEAscope program as output per channel as in Figure 2. We are now ready to focus on the two borrowed concepts at the center of the significant developments in their research: the notion of burst, transferred from single neuron studies, and the engineering notion of noise. D4 (electrical engineering), who was the first of the graduate students, helped to construct protocols for the dish model-system before moving to open-loop electrophysiology research. D11 (life sciences and chemistry) entered a few months later and worked on the spike sorting
software and then moved to closed-loop research on animats and hybrots. D2 (mechanical engineering and cognitive science) started about a month later and worked on developing some of the MEAbench software and then started closed-loop research on animats and a specific hybrot: a robotic drawing arm. At the start of our case, they were all involved in „playing with the dish,“ which is their term for exploring the problem space by stimulating the dish with various electrical signals and tracking the outputs. D4 then began trying to replicate a plasticity result reported by another research group, but was unable to do so, largely because the dish was exhibiting spontaneous synchronous network-wide electrical activity—she called this phenomenon „bursting,“ extending the meaning from single neuron studies where it means the spontaneous activity of one neuron. This dish-wide phenomenon was visualized in Figure 2 as the spike activity for each electrode per recording channel, across all channels. She first attempted to introduce the term „barrage“ into the community to focus attention on the network-wide nature of the phenomenon, but soon reverted to „burst“ when her term did not catch on. Bursting created a problem because it prevented the detection of any systematic change that might arise due to controlled stimulation; that is, it prevented detection of learning. The whole group believed that „bursts are bad“ and thought of them in terms of the engineering concept of noise—as a disturbance in the signal that needs to be eliminated: „it’s noise in the data—noise, interference … so it’s clouding the effects of learning that we want to induce.“ The group hypothesized that the cause of bursting was lack of sensory stimulation—„deafferentation“—and D4 decided to develop different patterns of electrical stimulation to see if she could „quiet“ the bursts.
After about a year, she managed to quiet the bursts entirely and initiated plasticity experiments, but for nearly a year she was unable to make any progress. A new problem arose with the quieted dish: activity patterns provoked by a constant stimulation did not stay constant across trials, but „drifted“ to another pattern. During the same period, D2 was trying to use various probe responses to produce learning through closed-loop embodiments. He also spent considerable time traveling the world with an art exhibit featuring the mechanical drawing arm, controlled via satellite by a dish living in Lab D. As a research project, he was trying to get the dish to learn to control the arm systematically, but as a mechanical art exhibit, its creativity required only that it draws, not that it draws within the lines! Early in the burst-quieting period, D11 decided to build a computational model of the dish model-system and moved out of the physical lab space to a cubicle with a
computer. He felt that the „dish is opaque“ and what was needed to make progress was more control and measurement capabilities: „I feel that [computational] modeling can tell us something because the advantage of modeling is that you can measure everything, every detail of the network.“ The Lab director doubted the computational modeling would lead to anything interesting, but gave his consent to work on it.

3.2 Phase II: Computationally Modeling the Dish Model-System

Similar to the kinds of bootstrapping processes detailed in my earlier research, this second order model—the computational simulation of a generic dish model—was developed and optimized through a bootstrapping process (see Figure 3) comprising many cycles of abstraction, construction, evaluation, and adaptation that included integrating constraints from the target (their dish model) and analogical source domains (a wide range of neuroscience literature), as well as constraints of the computational model itself (the modeling platform CSIM and those that arose as the model gained in complexity). This computational dish model was built to serve as an analogical source for the physical dish model-system. That is, D11 hoped that insights derived from it could eventually be mapped and transferred over to the target problem: creating supervised learning in the dish model-system. As it turned out, the computational model proved a source for both novel concepts for understanding dish phenomena and a control structure for supervised learning that were successfully transferred to the physical dish, solving the original problem. I highlight only the most significant constraints. As a modeling platform D11 chose what he called „the simplest neuron model out there—leaky-integrate-fire“ (the CSIM modeling platform), to see if they could replicate network phenomena without going into too much detail, such as including synaptic models. The only constraints taken from their in vitro dish were structural: the 8X8 grid, 60 electrodes, and random location of neurons („I don’t know whether this is true though, looking under the microscope they look pretty random locations“). In the model he used only 1 K neurons, which he believed would produce sufficiently complex behaviors. To connect the neurons he took parameters from a paper about the distribution of neurons.
Figure 3. Our representation of the bootstrapping processes involved in constructing, evaluating, and adapting the computational dish through numerous iterations. Once the in silico dish was able to replicate the in vitro dish behavior and the novel concepts were developed for it, the analysis was mapped (adapted to the specifics of its design) and transferred to the in vitro dish, and evaluated for it.

All the parameters of the model—such as types of synapses, synaptic connections, synaptic connection distance, percentage of excitatory and inhibitory neurons, conduction velocity and delay, noise levels, and action potential effects—were taken from the neuroscience literature on single neuron studies, brain slices, and other in vitro dish experiments. He then just let the model run for a while to see what would emerge. For validating the model he first followed the same experimental protocol used with other in vitro dishes to see if he could replicate those data, and used the data (including the burst data) from their own dish only after he had succeeded with the literature replications (an outcome he called „striking“ given the simplicity of the model). By early year three, he had developed the model network sufficiently to begin „playing with the [computational] dish“ (seeing how the computational network behaves under different conditions) and had started getting what he called „some feeling about what happens actually in the [simulated] network.“ Sometime during this period, he moved back into the physical space of Lab D and all three researchers began working together. Part of getting a feeling for the model involved developing a visualization of the dish activity that proved to be highly significant in solving the supervised learning problem by means of articulating a cluster of conceptual innovations. As he noted, „I am sort of like a visual guy—I really need to look at the figure to see what is going on.“ It is important to realize that computational visualizations are largely arbitrary; he could have visualized the simulated dish in any number of ways, including the one the group was accustomed to: a per-electrode spike representation from MEAscope (Figure 2).
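The leaky integrate-and-fire unit underlying D11’s simulation can be sketched in a few lines. This is an illustrative sketch only: the parameter values below are common textbook placeholders, not the literature-derived values of the in silico dish, and the actual CSIM implementation differs.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler integration.
# Placeholder parameters (volts, seconds, ohms); not the lab's values.

def simulate_lif(current, duration=0.5, dt=1e-4, tau=0.02,
                 v_rest=-0.065, v_thresh=-0.050, resistance=1e8):
    """Integrate the membrane voltage for a constant input current (A)
    and return the list of spike times (s). Between spikes the voltage
    leaks back toward rest; crossing threshold emits a spike and resets."""
    v = v_rest
    spikes = []
    for step in range(int(duration / dt)):
        dv = (-(v - v_rest) + resistance * current) / tau
        v += dv * dt
        if v >= v_thresh:            # threshold crossing counts as a spike
            spikes.append(step * dt)
            v = v_rest               # instantaneous reset after the spike
    return spikes

print(len(simulate_lif(2e-10)) > 0)   # supra-threshold drive: fires repeatedly
print(simulate_lif(1e-10))            # sub-threshold drive: no spikes → []
```

The simplicity is the point of D11’s choice: the network phenomena of interest were to emerge from coupling roughly 1 K such units with literature-derived synaptic and noise parameters, not from the detail of any single unit.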
However, he imagined the dish as a network and visualized it that way: „I can visualize these 50 K synapses and so you can see—after you deliver a certain stimulation you can see those distributions of synaptic weight change—or synaptic state change.“ This network visualization (Figure 4) is possible because the in silico dish affords control and visualization at the level of individual neurons, whereas the in vitro dish affords control and visualization only at the electrode level (clusters of neurons). D11 made movies of the dish visualization as it ran (which he showed the others and us, so we, too, could „come away with the same thing“); these showed the movement of activity patterns across the network over time. He began to notice something interesting: there were structurally similar looking bursts and there seemed to be only a small number of „patterns of propagation“ of these. This led him to conclude that „you get some feeling about what happens in the network—and what I feel is that … the spontaneous activity or spontaneous bursts are very stable.“ The next step was to attempt to develop a means of tracking the activity of the possibly „stable“ bursts across the network.

[Figure 4: two panels, a raster plot (neuron index 0 – 1000 versus time in seconds) and a network view.]

Figure 4. A screen shot of the network visualization of bursting in the in silico dish.

3.3 Phase III: Controlling the In Vitro Dish Model-System

From this point, things developed rapidly in the lab as the group worked together on statistical analyses, experimentation to see whether the „drift immune“ measures developed for the computational network could be transferred to the in vitro dish, and whether the „burst feedback“ in the in vitro dish could be used for supervised learning with the embodiments. This phase of research began with the idea that „bursts don’t seem as evil as they once did“ (D4). Most importantly, they began to develop the concept of bursts as signals (rather than only noise) that might be used to control the embodiments. Articulating the notion that bursts can be signals took the form of several interconnected novel concepts—burst type: one of a limited number of burst patterns (10); burst occurrence: when a type appears; spatial extent: an estimation of burst size and specific channel location; and CAT (‘center of activity trajectory’): a vector capturing the flow of activity at the population scale. With the exception of ‘spatial extent’ all of these concepts were developed for the simulated network first and then mapped to the in vitro dish and modified as required. Although each of these concepts is important, they are quite complex, conceptually and mathematically, and so I will only provide some details of the development of ‘CAT’, which is an entirely novel concept for understanding neural activity and could prove to be of major importance to neuroscience. D2 recounted during the final stages of analysis: [T]he whole reason we began looking at the center of activity and the center of activity trajectory is because we are completely overwhelmed by all this data being recorded on the 60 electrodes—and we just can’t comprehend it all. The big motivation to develop this is to actually have something—a visualization we can understand.

The mathematical representation of the CAT concept was articulated by making an analogy to the physics notion of center of mass and by drawing from three resources within the group: 1) D11’s deep knowledge of statistical analyses from the earlier period in which he had tried to create sensory-motor mappings between the dish and the embodiments; 2) an earlier idea of another graduate student at D6’s old institution (who had worked remotely with the group) that it might be possible to capture „the overall activity shift“ in the in vitro dish by dividing the MEA grid into four quadrants and using some subtraction method; and 3) the idea that bursts seem to be initiated at specific sites, as shown in a new graphical representation for the in vitro dish (spatial extent of a burst) developed by D4 after the computational model had replicated her in vitro dish results. D4 had been trying to see if she could get at some of the information the computational visualization was providing by graphing more specific information about bursts, in particular, their location and frequency over time. This became what they conceptualized as ‘spatial extent’: „the number of times any neuron near an electrode initiated a burst in 30 minute segments of a 1.5 hour spontaneous recording.“ Spatial extent of bursts is represented by the color and size of the circles in Figure 5, which clearly represents different information than the MEAscope representation of bursts as spikes per channel across the channels (Figure 2). However, it does not represent activity as it propagates across the network, which is what CAT does. The mathematical representation of CAT includes a temporal as well as spatial dimension: „I not only care about how the channel’s involved in the burst, I also care about the spatial information in there and the temporal information in there—how they propagate“ (D11).
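The bookkeeping behind ‘spatial extent’ is a per-electrode tally of burst initiations in 30-minute segments. A minimal sketch follows; the data format, and the criterion by which a burst and its initiation site are identified, are assumptions for illustration, not the lab’s code:

```python
# Tally burst-initiation sites per electrode in 30-minute segments of a
# 1.5-hour recording, as in the 'spatial extent' analysis. Each burst is
# assumed to be given as (initiation_time_s, (row, col)) on the 8x8 grid.
from collections import Counter

def spatial_extent(bursts, segment_s=1800, total_s=5400):
    """Return one Counter per segment, mapping the initiating electrode
    (row, col) to the number of bursts it initiated in that segment."""
    counts = [Counter() for _ in range(total_s // segment_s)]
    for t, electrode in bursts:
        if 0 <= t < total_s:
            counts[int(t // segment_s)][electrode] += 1
    return counts

bursts = [(12.0, (3, 4)), (900.0, (3, 4)), (2000.0, (7, 1)), (4000.0, (3, 4))]
segments = spatial_extent(bursts)
print(segments[0][(3, 4)])   # → 2: electrode (3,4) initiated two bursts in 0-30 min
```

Each segment’s counts correspond to one panel of Figure 5, with the count at an electrode rendered there as the color and size of a circle.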
CAT tracks the spatial properties of activity as it moves through the network; that is, the flow of activity at the population scale, as displayed in the visualization screen shot (Figure 6c). It is an averaging notion similar to the notion of population vectors, which capture how the firing rates of a group of

261

Modeling Practices in Conceptual Innovation

30 min - 1 hour

1 2

1 2

3 4

3 4

Row

Row

0 - 30 min

5

5

6 7

6 7

8

8

1 2 3 4 5 6 7 8

1 2 3 4 5 6 7 8

Column

Column 1 - 1,5 hours

20

3 4

10

5

Count

Row

1 2

6 7 8

1 2 3 4 5 6 7 8

0

Column Figure 5. Spatial extent captures the location and frequency of bursts over time in the in vivo dish per channel. The figures show the burst initiation sites and the number of times (count) any neuron near an electrode initiated a burst in 30 minutes segments in 1.5 hour of spontaneous recording. The color and size of the circles represent the number of times any electrode initiated a burst in the 30 minutes of recording. The per channel representation of the MEAscope visualization (Figure 2) is kept, but much different information is analyzed and displayed.

neurons that are only broadly tuned to a stimulus, when taken together, provide an accurate representation of the action/stimulus. CAT differs from a population vector and is more complex because it tracks the spatial properties of activity as it moves through the network. That is, if the network is firing homogeneously or is quiet, the CAT will stay at the center of the dish, but if the network fires mainly in a corner, the CAT will move in that direction (see Figure 6, 6c is the CAT representation).
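The chapter does not give Lab D’s actual formula for the CAT. Taking the stated analogy to a center of mass at face value, a minimal reconstruction computes an activity-weighted centroid over the electrode grid at each time step; the function name, array shapes, and the quiet-network convention below are my assumptions.

```python
import numpy as np

def center_of_activity_trajectory(activity, eps=1e-9):
    """Activity-weighted centroid of an MEA grid at each time step.

    activity: array of shape (T, n_rows, n_cols), e.g. spike counts per
    electrode of the 8x8 grid in each time bin.
    Returns an array of shape (T, 2): the (row, col) "center of
    activity" per time bin.
    """
    n_rows, n_cols = activity.shape[1], activity.shape[2]
    rows, cols = np.mgrid[0:n_rows, 0:n_cols]
    center = np.array([(n_rows - 1) / 2.0, (n_cols - 1) / 2.0])
    cat = []
    for frame in activity:
        total = frame.sum()
        if total < eps:
            cat.append(center)  # quiet network: stay at the dish center
        else:
            cat.append(np.array([(rows * frame).sum() / total,
                                 (cols * frame).sum() / total]))
    return np.array(cat)
```

This reproduces the behavior described in the text: homogeneous firing keeps the centroid at the center of the grid, while firing concentrated in a corner pulls it toward that corner, and the sequence of centroids over time traces the trajectory.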

Nancy J. Nersessian

[Figure 6 panels: (a) network, (b) network, (c) center of activity trajectory (CAT).]

Figure 6. The two screen shots of the computational visualization of the network show the flow of burst activity in the simulated dish at (a) burst time 1 and (b) burst time 2; (c) shows the corresponding CAT from T1 to T2. The CAT tracks the spatial properties of activity of the population of neurons as the activity moves across the network. That is, if the network is firing homogeneously or is quiet, the CAT will stay at the center of the dish (on analogy with a center of mass), but if the network fires from one direction to another as in (a) to (b), the CAT will move in that direction.

Thus, CAT tracks the flow of activity (not just activity) at the population scale, and much faster than population vectors. It is a novel conceptualization of neuronal activity. What the CAT analysis shows is that in letting the simulation run for a long time, only a limited number of burst types (classified by shape, size, and propagation pattern) occurs—approximately 10. Further, if a probe stimulus is given in the same channel, „the patterns are pretty similar.“ Thus the CAT provides a „signature“ for burst types.


D11 was unsure whether it would be possible to transfer the concept to the in vitro dish: he had a „feeling“ for what the simulated dish was doing, „but the problem is that I don’t think it is exactly the same as in the living network—when our experiment worked in the living network I am surprised—I was surprised.“ But the group decided to try a mapping by replacing individual neurons by individual electrodes where there are clusters of neurons for the in vitro CAT and began a range of experiments with the in vitro dish (open loop), the dish connected to an animat version of a robotic drawing arm, and then the dish connected to the real robotic drawing arm. D4 summed up the difference between CAT and the way they had been thinking of the in vitro dish prior to D11’s simulation: „[H]e [D11] describes a trajectory … we weren’t thinking of vectors with direction … so think of it as a wave of activity that proceeds through the network. So he was thinking like a wave, while we were thinking of a pattern.“ The spatial extent analysis tracks a pattern of bursting across the channels (Figure 5), but the CAT analysis tracks a wave of bursting activity across the network (Figure 6). Notably, she continued, „we had the information always … the information was always there.“ What CAT enabled them to do is to tap into the behavior of the system and, eventually, to control its learning. D4 kept working with open loop experiments, did some preliminary research with someone in a medical school that transferred her new understanding of bursts as signals (derived from the computational and in vitro models) to epilepsy, and graduated. D2 and D11 stayed for an additional year and combined CAT and techniques D4 had developed for burst quieting to develop a range of stimulation patterns for the in vitro dish that led to supervised learning for the embodied dish.
They got the mechanical arm to draw within the lines, and wrote and successfully defended dissertations on different aspects of this work. Interestingly, the control structure is unlike the customary structure for reinforcement learning, where the same stimulation is continually repeated. Their control structure consists of providing the network with a patterned stimulation, inducing plasticity, followed by providing a random background stimulation to stabilize its response to the patterned stimulation. Again, this method runs counter to existing notions of reinforcement learning, and emerged only in the context of the group’s building and playing with the two different kinds of simulation models.


4 Discussion

There are several features of this case that are important for thinking about concepts in investigative practices. As Steinle (this volume) reminds us, a wide range of epistemic aims participate in the dynamics of concept formation in science (see also Brigandt in this volume). In frontier biomedical engineering sciences labs, chief among the aims is developing an understanding of novel in vivo phenomena sufficient to enable some degree of intervention and control. The primary investigative practice in many areas is constructing physical models that adequately exemplify the phenomena of interest so as to be able to conduct controlled experiments with the models and transfer outcomes to in vivo phenomena. The case examined here also included developing a computational simulation of the physical model. Such physical and computational models are built towards becoming analogical sources/bases. From the outset, the intention is to build an analogy, but the nature of that analogy is determined incrementally, over time, with only certain features of it selected at the time of building. Often, building an analogy requires configurations of more than one model, comprising both engineered artifacts and living matter. These „model-systems“ are dynamical entities that perform as structural, functional, or behavioral analogs of selected features of the in vivo systems. Through experimenting with them, researchers develop hypotheses that they „hope [will] predict … what will happen in real life.“ Such physical and computational simulation model-systems are enactive systems with emergent behavioral possibilities, and the need to represent novel behaviors can promote conceptual innovation (Chandrasekharan et al. in press; Chandrasekharan/Nersessian 2011). Since these areas are conducting ground-breaking frontier research, there is little understanding of the phenomena.
So, initial ways of thinking about processes taking place in the in vitro model-systems are often provided by transferring concepts from what are thought to be related domains. In the case of Lab D, we saw, for instance, that several concepts were transferred from the well-developed area of single neuron studies to provide an initial understanding of „the dish“—a living neural network. The physical dish model is a hybrid construction, merging constraints, methods, and materials from biology, neuroscience, and engineering. The initial understanding of phenomena exhibited by the in vitro model was in terms of concepts borrowed from single neuron studies (‘spike’, ‘burst’) and engineering (‘noise’). They understood that the emergent properties of the network might require modification of these concepts. In practice,


these transferred concepts both facilitated and impeded the research. The concept of ‘spike’ facilitated developing stimulation and recording methods and interpretations of the output of clusters of neurons surrounding an electrode. The concept of ‘burst’, when extended to spontaneous dish-wide electrical activity and categorized as ‘noise’ (in the engineering sense), and thus something to be „quieted,“ impeded the research for an extended period. To deal with the impasse, D11 introduced a new modeling method into the lab research—computational simulation of a physical model—which led to the formation of a cluster of novel concepts (‘CAT’, ‘spatial extent’, ‘burst type’, ‘burst occurrence’) which together enabled them to understand that bursts could be signals (as well as noise). This second-order model was constructed to eventually provide an analogy to the physical model; that is, once it had sufficiently replicated in vitro dish behavior, inferences made about the phenomena taking place in it were to be transferred to the in vitro model, and potentially from there to the in vivo phenomena. The computational model merges modeling constraints, intra-domain constraints from other areas of neuroscience (brain slices, single neuron studies), and dish constraints. Constructing the simulation model facilitated D11’s thinking of the dish at the level of individual neurons and in terms of populations of neurons and how these interact dynamically to produce behavior. The network visualization reinforced this idea and provided a dynamical simulation (captured in movies that could be examined more carefully) of the real-time propagation of the activity across the network that allowed D11 to literally see that there were similar looking burst patterns—and these seemed to be limited in number. Further, he could show these to the others who could also see these phenomena.
It enabled them, as they said, „to look inside the dish.“ The visualization could have taken numerous forms. However, the network visualization and simulation exemplify (Goodman 1968; Elgin 2009) the network features of the phenomena, whereas the per-channel representation of the MEAscope graph does not. This enabled the group to develop a better grasp of the behavior of the network. The spatial extent graphs only capture structural information, whereas the CAT captures behavioral information as it unfolds over time. This notion could not have been formalized without the interaction of all members with the simulated dish. The information might have been „always there“, but the computational simulation and visualizations make it accessible. These physical and computational simulation models are enactive systems with emergent behavioral possibilities that can lead to novel insights which can promote conceptual innovation (Chandrasekharan et al. in press; Chandrasekharan/Nersessian 2011). This interaction underscores that explaining how the conceptual innovations arise requires an analysis of the interacting components within an evolving system of researchers, artifact models, and practices. Physical and computational simulation models embody researchers’ current understandings and suppositions and thus serve as simulations of these as well as of the phenomena they are constructed to model. The case provides an exemplar of what cognitive scientists would call distributed cognition. Concepts and models exist not only as representations ‘in the head’ of the researchers (mental models), but also as representations in the form of physical and computational simulation models, and in inferential processes there is a coupling of human and artifact representations (Nersessian 2009). Further, the manifest nature of the in silico dish network through its visualization enabled the group to exploit it communally. In particular, it facilitated making joint inferences about what might map to the in vitro dish, rejecting false leads, developing extensions, and coming to consensus. Once they had the new concepts, they could think about a range of new investigations, such as how to control the embodied dish model-systems (hybrot and animat) and how to control epilepsy in patients.

In concluding, what do we learn about modeling practices around concepts by studying on-going research beyond what historical analysis can provide? For one thing, the investigations of on-going research establish that model-based reasoning has been contributing to conceptual innovation and change across a wide range of sciences and historical periods and on into present-day science.
Of course, the specific kinds of modeling possibilities have enlarged over the history of science, bringing with them new affordances, for instance, those of dynamical simulation and visualization of the sort afforded by computational modeling. Still, as I noted at the outset, the model-based bootstrapping to conceptual innovation in this and other cases from our ethnographic studies exhibits the same kinds of processes in the abstract as in the historical cases:
- analogical domains: sources of constraints for building models;
- imagistic representations: facilitate perceptual inference and simulation;
- simulation: inference to new states via model manipulation;
- cycles of: construction, simulation/manipulation, evaluation, adaptation;


- emergent analogical relation between the model and the target.

I did not go into the ethnographic research with the intent to apply the analysis of modeling derived from cognitive-historical research. However, the features of physical and computational modeling processes that emerged were parallel to my earlier analysis of model-based reasoning in the respects noted above. To use a notion drawn from ethnographic analysis, this kind of conceptual innovation process transfers robustly across different time periods and also across several sources of data and methods of analysis. Thus, the science-in-action studies also lend support to the interpretations of the less rich historical records. Further, as has been established in historical cases, the ethnographic cases of conceptual innovation and articulation underscore that model-based reasoning (in general) is closely connected with analogical reasoning. This has implications for both the analogy and models literatures. The built models are designed to share certain relations with the in vivo phenomena, so it does not matter that in many respects they are ‘false models’, which the philosophical literature has puzzled about. What matters is whether these relations are of the same kind as those they are meant to exemplify. For instance, the per-channel visualization does not capture the network features possessed by the in vitro dish and by the in vivo phenomena, and the spatial extent visualization captures only a pattern or structure, but not behavior. The network visualization captures the network structure and behavior of both dish models. Perhaps most importantly, though, collecting field observations and interviews surrounding cognitive-social-cultural practices during research processes provides a wealth of insight into creative processes and helps those who study science fathom aspects of such practices that would never make it into the historical records.
Most prominent are the evolving dynamics of the interactions among the members of the research group, between them and the modeling artifacts, and the evolution of those artifacts. Even for physical records there are many pieces that are unlikely to be archived, since researchers rarely keep detailed records of process. The computational visualization that enabled them (and me) literally to see the burst patterns provides an example: A sentence in a publication remarking that „burst patterns were noted“ does not convey its cognitive impact or the change it sparked in group dynamics which led to integration across the three research projects. My research group was, of course, not able to make all the observations and collect all the records that are pertinent to these conceptual innovations since ethnographic data


collection is complex and time-consuming, and of necessity selective. Once it became apparent that major scientific developments were coming out of the Lab D research (nearly two years into our research), however, we did have sufficient data to mine and could go back and collect additional data to develop the most salient aspects of the innovation processes, which are analyzed in outline form in this paper.

Acknowledgments

I thank the members of the Cognition and Learning in Interdisciplinary Cultures research group (www.clic.gatech.edu) who worked on this project, Wendy Newstetter (co-PI), Lisa Osbeck, Ellie Harmon, Chris Patton, and Sanjay Chandrasekharan. The Lab D case analysis, specifically, was developed together with Chris and Sanjay. I thank also the Lab D researchers for being generous with their time in letting us observe and interview them about their research over an extended period. The research was conducted in accordance with Institute Review Board criteria on human subjects research and the identities of the researchers cannot be revealed. This research was made possible with the support of the US National Science Foundation grants DRL0411825 and DRL097394084.

Reference List

Brigandt, I. (this volume), „The Dynamics of Scientific Concepts: The Relevance of Epistemic Aims and Values.“
Chadarevian, S. de / Hopwood, N. (eds.) (2004), Models: The Third Dimension of Science, Stanford, CA: Stanford University Press.
Chandrasekharan, S. / Nersessian, N. J. / Subramanian, V. (in press), „Computational Modeling: Is this the End of Thought Experiments in Science?“ In: Brown, J. / Frappier, M. / Meynell, L. (eds.), Thought Experiments in Philosophy, Science and the Arts, London: Routledge.
Chandrasekharan, S. / Nersessian, N. J. (2011), „Building Cognition: The Construction of External Representations for Discovery.“ In: Carlson, L. / Hölscher, C. / Shipley, T. (eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society, Austin, TX: Cognitive Science Society, 267 – 272.
Dunbar, K. (1995), „How Scientists Really Reason: Scientific Reasoning in Real-World Laboratories.“ In: Sternberg, R. J. / Davidson, J. E. (eds.), The Nature of Insight, Cambridge, MA: MIT Press, 365 – 395.


Elgin, C. Z. (2009), „Exemplification, Idealization, and Understanding.“ In: Suárez, M. (ed.), Fictions in Science: Essays on Idealization and Modeling, London: Routledge, 77 – 90.
Goodman, N. (1968), Languages of Art, Indianapolis: Hackett.
Goodwin, C. (1995), „Seeing in Depth.“ In: Social Studies of Science 25, 237 – 274.
Hollan, J. / Hutchins, E. / Kirsch, D. (2000), „Distributed Cognition: Toward a New Foundation for Human-Computer Interaction Research.“ In: ACM Transactions on Computer-Human Interaction 7(2), 174 – 196.
Hutchins, E. (1995), Cognition in the Wild, Cambridge, MA: MIT Press.
Kindi, V. (this volume), „Concept as Vessel and Concept as Use.“
Lave, J. (1988), Cognition in Practice: Mind, Mathematics, and Culture in Everyday Life, New York: Cambridge University Press.
Nersessian, N. J. (1984), Faraday to Einstein: Creating Meaning in Scientific Theories, Dordrecht: Kluwer Academic Publishers.
Nersessian, N. J. (1992), „How Do Scientists Think? Capturing the Dynamics of Conceptual Change in Science.“ In: Giere, R. (ed.), Minnesota Studies in the Philosophy of Science, Minneapolis: University of Minnesota Press, 3 – 45.
Nersessian, N. J. (1995), „Opening the Black Box: Cognitive Science and the History of Science.“ In: Osiris 10, 194 – 211.
Nersessian, N. J. (2008), Creating Scientific Concepts, Cambridge, MA: MIT Press.
Nersessian, N. J. (2009), „How Do Engineering Scientists Think? Model-Based Simulation in Biomedical Engineering Research Laboratories.“ In: Topics in Cognitive Science 1, 730 – 757.
Steinle, F. (this volume), „Goals and Fates of Concepts: The Case of Magnetic Poles.“

Conceptual Development in Interdisciplinary Research

Hanne Andersen

1. Introduction

Much scientific activity today is based on interdisciplinary collaborations in which multiple scientists with different areas of expertise share and integrate their cognitive resources in producing new results that cut across disciplinary boundaries. In this chapter, focus will be on how scientists involved in interdisciplinary collaborations link concepts originating in different disciplines or research fields and develop new concepts that cut across disciplinary boundaries.1 In recent publications, Collins, Gorman and others (Collins/Evans 2002; Collins/Evans/Gorman 2006; Gorman 2010) have argued that interdisciplinary collaborations can be analyzed as trading zones varying along two different dimensions: a cultural dimension according to the degree of linguistic homogeneity or heterogeneity, and a power dimension according to the degree to which power is used to enforce the collaboration. On their account, only some trading zones, what they call inter-language trading zones, result in a truly merged culture in which a full blown creole language is the ideal end process. Other forms of trading zones are the enforced trading zone in which the expertise of an elite group is black-boxed for other participants, the subversive trading zone where one language overwhelms that of the other, and the fractionated trading zone which may either be a boundary object trading zone mediated by material culture or an interactional expertise trading zone mediated by language.

1 One may add that there is no clear cut boundary between interdisciplinary research and disciplinary research. If looking closely at traditional disciplines, much research may consist in collaborations between scientists from different subdisciplines. See also Collins, Evans and Gorman (2006, 662) or Abbott (2001) for ‘fractal’ descriptions of disciplinary structures.


Based on this typology, Collins, Evans and Gorman (2006) have argued that interactional expertise trading zones are the norm for much new interdisciplinary work and that it will usually be the first step before an inter-language trading zone develops.2 Thus, a common developmental pattern will start from a heterogeneous collaboration, and as members of the trading zone become more interested in each other’s work they will develop interactional expertise, that is, sufficient mutual knowledge of each other’s fields to be able to interact in interesting ways, but without possessing the contributory expertise necessary to make original contributions outside of one’s own field.3 As collaboration intensifies, cultural differences will be reduced and the fractionated trading zone gradually transforms into an inter-language trading zone. However, this analysis exclusively focuses on how a collaboration may move between different kinds of trading zones over time, while it provides no answer to the question of how in detail “trading partners hammer out local coordination” (Collins/Evans/Gorman 2006, 658). In order to answer this question several issues need to be addressed, among them, first, how scientists with different areas of expertise combine their cognitive resources, and second, how they come to agree on conceptual changes in overlapping areas. The rest of this chapter will be focused on these questions. First, I shall provide an account of conceptual development within a community. Second, based on this account I shall then describe how concepts from different disciplines may be linked or in other ways combined. In my analysis of how cognitive resources are combined I shall draw both on work in distributed cognition on the interlocking of mental models and on work in social epistemology on mutual trust and joint acceptance in groups.

2 Interdisciplinary research may also exist in the form of an enforced or a subversive trading zone. For example, Rossini and Porter (1979, 1981) report that one of four common ways of organizing interdisciplinary research is integration by leader, where interaction is only essential between individual researchers and the leader that integrates all incoming results, similar to the idea of an enforced trading zone in which the expertise of the elite group / group leader is black-boxed for other participants. Likewise, reductive theory relations can be seen as a particular form of a subversive trading zone where one language overwhelms that of the other. However, in this chapter focus will be on the collaborative rather than the coerced relations.
3 For the notions of interactional and contributory expertise, see, e. g., Gorman (2002) and Collins and Evans (2002).


2. Disciplines and (In)Commensurability

Most work on trading zones builds on the assumption that the relation between different disciplines can be characterized by Kuhnian incommensurability, and that there is therefore a deep problem of communication when entering an interdisciplinary collaboration. The trading zone is seen as the special location in which communication can nevertheless take place because trading partners are here able to establish local coordination, despite vast global differences (see, e. g., Galison 1997, 783). Although the notion of incommensurability was introduced to describe the relation between paradigms before and after a scientific revolution, later in his life Kuhn seemed to think that the relation between different specialties could also be characterized by this notion. Hence, describing the development of science as an evolutionary tree in which specialties keep proliferating as new subspecialties split off from their parent trunk, Kuhn suggested that it was incommensurability that separated the different specialties from each other:

… [W]hat makes these specialties distinct, what keeps them apart and leaves the ground between them as apparently empty space. To that the answer is incommensurability, a growing conceptual disparity between the tools deployed in the two specialties. Once the two specialties have grown apart, that disparity makes it impossible for the practitioners of one to communicate fully with the practitioners of the other. And those communication problems reduce … the likelihood that the two will produce fertile offspring. (Kuhn 2000, 120)

However, when analyzing interdisciplinary research it is important to realize that the conceptual disparity between two different specialties—in terms of Kuhn’s evolutionary metaphor, disciplines placed at different branches of the evolutionary tree of the sciences—is very different from the conceptual disparity between the two specialties at each side of a revolutionary divide.4 Specialties at each side of a revolutionary divide address in some way the same domain and compete on offering the better account of their common domain. This is a relation of incommensurability that may imply severe communication difficulties based on, for example, disagreement on which entities exist in the world. On the contrary, although different disciplines may address domains that are in some ways partially related, they usually do not compete on offering the better—and in the end only—account.5 While scientists may here also posit different entities, they will not deny the existence of entities posited by other disciplines, they may just not be interested in them. Hence, focusing on communication between different disciplines it is important to bear in mind that in case there is no or little communication between different specialties, this does not necessarily reflect incommensurability but may simply reflect the fact that these specialties address issues that are in some way or other unrelated. Further, the overly strong claim about conceptual disparity ignores exactly the kind of conceptual developments we are interested in here, namely conceptual developments across different disciplines.

4 For an analysis and criticism of Kuhn’s view on this point, see Andersen 2006.
5 For a discussion and overview of pluralist and reductionist views on the intellectual relationship between different disciplines, see Mitchell et al. (1997).

2.1 Conceptual Structures as Categorization Modules

Before turning to the question of how scientists from different disciplinary communities can combine conceptual resources, I shall first provide an analysis of how concepts and conceptual structures develop within an individual community. Adopting Kuhn’s analysis of lexicons as the cognitive resources that the members of a scientific community share, I shall focus on conceptual structures and the phenomenal world that they carve.6 Thus, I will treat a conceptual structure as a general sort of categorization module that divides objects into groups according to similarity and dissimilarity between perceived objects or problem situations. This account of concepts is basically a family resemblance account according to which conceptual structures build on relations of similarity and dissimilarity between perceived objects. On this account, special importance is ascribed to dissimilarity and the properties which differentiate between instances of contrasting concepts, that is, concepts whose instances are more similar to one another than to instances of other concepts and which can therefore be mistaken for each other (see, e. g., Kuhn 1979, 413). Because the instances of contrasting concepts are more similar to one another than to instances of other concepts, the set of contrasting concepts also forms a family resemblance class at the superordinate level, and family resemblance concepts therefore form hierarchical structures in which a general concept decomposes into more specific concepts that may again decompose into yet more specific concepts, and so forth—in other words, taxonomies. In this respect, Kuhn’s account is very similar to accounts developed by cognitive scientists in the 1970s as part of the Roschian revolution.7 In one of his unpublished talks, Kuhn illustrated this with a “fragment of a lexicon for physical things” where he showed by use of his favorite example on waterfowls how a superordinate concept decomposes into a group of contrasting concepts, and how this decomposition is determined by sets of features (figure 1). This lexical structure is what the members of a scientific community share. As Kuhn pointed out:

What members of a language community share is homology of lexical structure. Their criteria need not be the same, for those they can learn from each other as needed. But their taxonomic structures must match, for where structure is different, the world is different, language is private, and communication ceases until one party acquires the language of the other. (Kuhn 1983, 683)

However, the possibility that different members of the community may draw on different features enables latent differences between individuals with respect to so-called graded structures of concepts. Since category membership is determined by the degree of similarity to other instances of the concept and difference to instances of contrasting concepts, instances which are similar with regard to many properties may be seen as better instances than one that shares only a few of these. Similarly, instances which are similar with regard to properties that are central for theoretical explanations may be seen as better instances than one that displays less central features. In this way, family resemblance concepts have the capacity to form graded structures, that is, the phenomenon that different instances of the same concept are not necessarily equally good examples of this concept. Differences between individual members of the community with respect to the features used in the categorization will here be reflected as differences in graded structures.

6 Others have taken similar approaches, arguing for the importance of classification in science and in society at large, see, e. g., Bowker/Star 1999.
7 Although these were not affinities that Kuhn realized when he started his work on concepts, in his unpublished material the whole Roschian set of issues, most notably the existence of a privileged basic-level in the taxonomy, is explicitly discussed. For a detailed analysis of Kuhn’s family resemblance account of concepts, see, e. g., Andersen 2000. On the relation between Kuhn’s work and the cognitive revolution in general, see Nersessian (1998).

[Figure 1: taxonomy fragment ANIMALS → BIRDS → WATERFOWL → {DUCKS, GEESE, SWANS}, with feature sets attached at each node: feathers, beaks, number of legs, …; web feet, beak shape, size, …; neck/head-length ratio, body width/length ratio, color, ….]

Figure 1. To each node in the taxonomy is attached the name given to the category as well as a set of features useful for distinguishing among instances of the subordinate categories. In this way, the features function as differentiae for the subordinate level (adopted from T. S. Kuhn: An Historian’s Theory of Meaning. Talk to Cognitive Science Colloquium, UCLA 4/25/90).

2.2 Anomalies as Triggers of Conceptual Change

An important implication of Kuhn’s account of family resemblance is that contrasting categories cannot overlap.8 Kuhn termed this the ‘no-overlap’ principle and explicated that

… [n]o two kind terms, no two terms with the kind label, may overlap in their referents unless they are related as species to genus. There are no dogs that are also cats, no gold rings that are also silver rings, and so on; that’s what makes dogs, cats, silver and gold each a kind. (Kuhn 1991, 4)

8 From classical logic it is a well-known restriction of taxonomic division that concepts formed by division, first, do not overlap, second, are subordinates to the same superordinate concept, and third, together exhaust the superordinate concept. Kuhn referred explicitly only to the first of these principles, but a discussion of the other principles in relation to Kuhn’s account of concepts can be found in Andersen/Barker/Chen 2006.

If an instance is discovered that violates this principle, this will be seen as an anomaly. This is connected to the function of the Kuhnian categorization module as embedding “certain sorts of expectation about the world” (Kuhn 1990 MS, 8): both assumptions about what exists and does not exist (what Hoyningen-Huene (1993) has termed quasi-ontological knowledge) and assumptions about which properties are found empirically to correlate (what Hoyningen-Huene has termed knowledge of regularities). Hence, an instance that violates the no-overlap principle will run counter to expectations about what exists in the world (quasi-ontological knowledge) or expectations about the characteristics that known objects or phenomena have (knowledge of regularities) and will be perceived as an anomaly that calls the conceptual structure into question.9 However, it is important to note that findings that run counter to expectations are not made easily. It is difficult to discover something that was never anticipated to exist, because there is no category by which to classify it. Until such a category has been formed which enables the recognition of an actual object or phenomenon, it can only be perceived as ‘something wrong’. Further, as described by Andersen, Barker and Chen (2006), the existence of graded structures can explain why anomalies are not equally severe and why different scientists may judge anomalies differently. If an object is encountered that, judged by means of different properties, seems to be an instance of two contrasting concepts, it violates the expectations regarding which objects exist and how they behave. If the encountered object, judged from different properties, is a good example of two different concepts in a contrast set, this will be a severe anomaly that calls the adequacy of the conceptual structure into question. Conversely, if an object is encountered that, judged from different properties, is a poor example of two different concepts in a contrast set, it may not call the conceptual structure into question, but merely suggest that further research is necessary to find out whether, for example, an additional category exists or whether the existing concepts may show some additional properties that allow the objects to be unequivocally assigned to one of them. Further, different speakers may judge category membership from different properties. Thus, if different members of the scientific community have different graded structures of their concepts, they may have different views on which anomalies are severe and which are not. In such cases, only some members of the community will react to an anomaly by changing (parts of) their conceptual structure, while others may instead choose to ignore the anomaly or draw on additional properties to develop a more unequivocal classification into the known categories.10

9 See also Andersen/Barker/Chen 2006, chapter 4.3. As Nersessian and Andersen (1997) have noted, this provides a kinematics of conceptual change. A full dynamics of conceptual change must also include accounts of the reasoning practices involved in arriving at a new conceptual structure. A detailed account of this can be found in Nersessian 2009. Further, there may be other triggering mechanisms apart from such empirical anomalies. For example, Brigandt (this volume) argues that epistemic aims and values may play important roles in conceptual change. Similarly, in his case study, Steinle (this volume) shows how changes in the concept of magnetic poles are related to changes of epistemic goals.

10 For a historical case study on how differences in graded structures can lead scientists to different assessments of anomalies, see, e. g., Andersen 2009.

2.3 Gradual Conceptual Development within a Lexicon

Much conceptual development consists in gradual refinements that do not imply major restructuring. Such refinements of existing lexicons may consist in adding detail to existing concepts or in adding new concepts that can be accommodated into the structure without violating the hierarchical principles. New concepts may be formed on the basis of emerging differentiae, for example, when instruments or experiments indicate how to distinguish between new kinds. On this view, instruments or experiments are sorting devices which distinguish instances of contrasting concepts by determining specific properties which differ for instances of contrasting concepts. For example, Buchwald has argued from a case study on the development of the wave theory of light that “experimental work divides the elements of the [taxonomic] tree from one another: sitting at the nodes or branch-points of the tree, experimental devices assign something to this or that category” (Buchwald 1992, 44). Similarly, Chen argues that “instruments practically designate concepts in a lexical taxonomy by sorting their referents under different categories” (Chen 1997, 269). Thus, a central idea in conceptual development is the idea of introducing new, rudimentary concepts that initially capture a general idea but are still in need of further articulation—what Carey (2009) refers to as ‘placeholder concepts’ or ‘placeholder structures’ (see also Nersessian in this volume).11 This kind of conceptual development is what takes place in what Steinle has called ‘exploratory experimentation’, that is, experimentation that is “driven by the elementary desire to obtain empirical regularities and to find out proper concepts and classifications by means of which those regularities can be formulated” (Steinle 1997). According to Steinle, this kind of experimentation can often be characterized by definite guidelines and epistemic goals, including the systematic variation of experimental parameters and the formulation of empirical regularities. In the process of further articulation of such empirical regularities, additional differentiae may be introduced,12 and as more such details are added, the concept will be perceived as more and more ‘robust’. Thus, although a novel taxonomy may emerge as someone attempts to grapple with a particular device, it will usually require more than just different behaviors of a single device to posit new categories. This idea of the ‘strength’ or ‘robustness’ of concepts has been discussed by several scholars. Among others, Buchwald has suggested that “a taxonomy’s strength—its ability to assimilate and to fabricate new apparatus—depends to a very large extent on its device independence, the ease with which it can be separated from the device” (Buchwald 1992, 44). In other words, “a robust taxonomy is also compatible with many other devices that do what the taxonomy considers to be the same thing that the first one does but in entirely different ways” (ibid.).13 Similarly, Gooding has noted with respect to new experimental possibilities, or, as he terms them, construals, that “until the significance of novel information has been sketched out, construals of it retain the provisional and flexible character of possibility … The fact that an array of effects can be construed tends to support their facticity, and lends credibility to the construal” (Gooding 1985, 219). But although explorative research in a new area may initially be focused simply on the empirical examination of various possible correlations of properties, what often follows in later stages of research is the attempt to develop theoretical explanations of these correlations. Such explanations will obviously also contribute to the strength of a conceptual structure and hence also to the strength of scientists’ expectations about the objects and phenomena they observe.14

11 Similar views on the development of concepts by refining similarities can also be found in the work of Quine, who summarized the overall development such that “[l]iving as he does by both bread and science, man is torn. Things about his innate similarity standards that are helpful in one sphere can be a hindrance in the other. Credit is due to man’s inveterate ingenuity, or human sapience, for having worked around the blinding dazzle of color vision and found the more significant regularities elsewhere. Evidently natural selection has dealt with the conflict by endowing man doubly: with both a color-slanted quality space and the ingenuity to rise above it. He has risen above it by developing modified systems of kinds, hence modified similarity standards for scientific purposes. By the trial-and-error process of theorizing he has regrouped things into new kinds which prove to lend themselves to many inductions better than the old” (Quine 1969, 14 f). Carey also refers to the process through which new representations are created that are not entirely grounded in antecedent representations as ‘Quinian bootstrapping’ (Carey 2009, 20 ff).

12 Obviously, experiments not only act as sorting devices in creating new concepts but also serve to establish additional differentiating properties of existing concepts.

13 This criterion of ‘strength’ or ‘robustness’ is closely related to the issue of entity realism. For example, Arabatzis has discussed how scientists become convinced of the reality of a hidden entity and lists as one of the important factors that “the over-determination of a hidden entity’s properties in different experimental systems is often an important reason in favor of its existence” (Arabatzis 2008, 14). A similar, Kuhnian position in the realism debate can be described in terms of the projectibility of bundles of features; see Andersen 2001 and 2006.

14 This urge to derive theories explaining the correlation of features is not specific to scientific concepts, but has been the topic of general discussions in cognitive science.
Cognitive scientists such as Murphy and Medin (1985/1999) have argued that, generally, people tend to deduce causal explanations for feature correlations, a view that has become known as the Theory-Theory of concepts (Laurence/Margolis 1999, 43 – 51). But there is considerable divergence in the literature on the goals and characteristics of the Theory-Theory, and on what a ‘theory’ might be (for an overview see Laurence/Margolis 1999). Some take it that the ‘theories’ introduced in the Theory-Theory restore essentialism (e. g., Laurence/Margolis 1999, 47), but there is no overall agreement among theory-theorists on the issue of essentialism. Although the focus on underlying causal explanations of the correlation of surface attributes might seem to encourage essentialist views, Murphy and Medin have emphasized that the features that appear essential are not so because of the structure of the world, but because they are the features that are most central to our current understanding of the world (cf. Murphy/Medin 1985/1999, 454). Later versions of the Theory-Theory, such as the causal status interpretation of attribute centrality developed by Ahn, also point out that people tend to weight features that are seen as the causes of other features more than their effects (cf. Ahn 1998, 138). However, Ahn also makes clear that this view differs from essentialism in that causal features need not be defining features and, by the same token, that features need not be dichotomized into essential and non-essential features.

So far this account has been focused on concepts and the development of concepts within a particular lexicon and therefore in relation to a particular discipline. I shall now move to the question of how lexicons are combined in interdisciplinary research.

3. Interdisciplinary Combination of Taxonomies

In his analyses of taxonomies as a categorization module shared by a community, Kuhn tacitly assumed a mutual conceptual independence of disciplines. This may be surprising, given the obviously unrealistic character of the assumption, yet as Campbell (1969) already spelled out, the view of disciplines as separated by gaps has been widespread. Giving up this assumption, we arrive at a much more complicated picture of scientific lexicons and of how they relate. First, taxonomies may integrate into each other like a nest of Chinese boxes. Thus, features used to differentiate between instances of contrasting categories are themselves concepts that are also embedded in lexicons with their own features, correlations between features, and theories to explain feature correlations. Further, the same concepts may form part of multiple lexica concerned with, for example, different aspects of a phenomenon (or set of phenomena). One set of attempts to describe concepts cutting across different disciplines is the currently very popular work within science studies on boundary objects (Star/Griesemer 1989). However, where the notion of boundary objects was introduced to describe situations in which an object resides between communities of practice, where it is ill structured and has a vague identity (Star 2010), I shall turn the description around and focus on how scientists collaborating in interdisciplinary research combine their conceptual resources, and how they adopt structures and constraints provided by collaborators from other areas of expertise. Thus, rather than focusing on plasticity and vagueness, I shall focus on constraints and precision.

Analyzing how scientists combine their cognitive resources and how they come to agree on conceptual change in overlapping areas needs to be informed by current work in both distributed cognition and social epistemology. Thus, work in distributed cognition analyzes the collective cognition that happens when several people combine individual knowledge to produce a cognitive output that neither could produce alone (see, e. g., Giere/Moffat 2003; Giere 2007; Kurz-Milcke/Nersessian/Newstetter 2004; Nersessian 2004; Nersessian et al. 2003). As described by Nersessian (2009), much of this work has focused primarily on the artifacts that participate in cognitive processes. However, Nersessian’s own work has provided a useful complement in focusing on “the nature of the representations within the bounds of the human components which facilitate their participation in a cognitive system” (Nersessian 2009, 745; italics added), where she has found that interlocking mental models constitute distributed inferential systems. This notion of interlocking mental models helps to explicate the relations across conceptual structures developed within different disciplines and shows how instruments, experiments, objects or interfield theories can serve as the ‘hubs’ that bridge between these different conceptual structures. Work in distributed cognition has also been concerned with cognitive partnership (e. g., Osbeck/Nersessian 2006), but it has not gone into detail about cognitive partners’ mutual commitment to the distributed cognitive processes. This issue has instead been discussed within social epistemology. Recently, work in social epistemology has started analyzing how groups arrive at a joint acceptance of particular claims (Gilbert 1987, 2000, 2003; Rolin 2008; Thagard 1997; Wray 1999, 2006) and the role played by trust in the production of scientific knowledge by groups of scientists (Goldman 2001; Hardwig 1991; Thagard 2005), but without going deeply into the analysis of how differences in cognitive resources, including conceptual structures and the interlocking mental models of which they form part, influence these processes and relations.
Combinations of insights on cognitive partnership, interlocking mental models and joint commitment seem fruitful for explaining how knowledge can be distributed among scientists in a group and how groups of collaborating scientists with different cognitive resources may create new concepts as a joint collaborative activity. To indicate how a combination of these analytical tools may be unfolded, I shall turn to a case study of interdisciplinary collaboration in science.



4. A Case Study

To illustrate this extended account of scientific lexicons and how it includes distributed cognition between jointly committed scientists, let me present a brief case study on induced radioactivity in heavy nuclei during the 1930s.15 An important part of this research was to bombard heavy nuclei with neutrons to see whether that resulted in radioactive decay and, in the case of beta-decay, which increases the atomic number, whether elements beyond Uranium in the periodic table could be produced. The groups working in this area included both nuclear physicists and analytical chemists, who worked closely together to examine the decay series and to identify the elements that were produced in the process. While nuclear physics informed about the likely decay processes, analytical chemistry informed about the identification of the elements. And whereas physicists and chemists had some knowledge of each other’s fields—physicists knowing the basics of the periodic system, chemists knowing the basics of which disintegration processes to expect—there were also non-shared areas, such as the nuclear physicists’ detailed computations of tunneling effects or the analytical chemists’ detailed knowledge of precipitation processes. In this research, the physical lexicon of decay processes and the chemical lexicon of chemical elements are interlocking. Each decay process produces a daughter element, and this element can be identified by its chemical characteristics. In this way, the produced daughter elements serve as hubs that bridge between the two lexicons. Each of the two lexicons includes models that have been developed to explain various features and their correlations. For example, Gamow’s droplet model of the nucleus explained why only small particles could be emitted from the decaying nucleus. Likewise, models of electron configurations explained why the various elements had different chemical properties.
These models were distributed among the researchers, and together they formed a set of interlocking models that worked as a distributed inferential system. This inferential system enabled some scientists to infer what kind of process to expect, given, for example, the energy of the neutrons used to bombard the element, and enabled others to infer which chemical properties were to be expected of the element produced in the process. At the same time, new experiments informed both physicists and chemists about additional details of the process of induced radioactivity, the character of the nucleus and the conditions for its transmutation, and the characteristics of new, transuranic elements.

15 For more detailed accounts of this case, see Andersen 1996, 2009, and 2010.

[Figure 2 near here: the taxonomy of nuclear transmutations. Under the superordinate node ‘nuclear transmutations’ sits the contrast set α emission, p emission, n capture and β emission; the daughter elements they produce (Z–2, Z–1, Z and Z+1, respectively) belong in turn to the contrast set of chemical elements.]

Figure 2. An important feature in the lexicon of decay processes is the produced element. This feature distinguishes between the different processes in the contrast set of nuclear transmutations, but at the same time the elements are also contrasting categories in the contrast set of chemical elements.

The production of new knowledge, both ‘quasi-ontological knowledge’ about the existence of new elements and processes, and ‘knowledge of regularities’ about the characteristics of these new elements and processes, required the combination of the knowledge of several researchers. Thus, on the one hand, chemists would draw on the knowledge of physicists about likely processes to narrow down the range of chemical analysis; on the other hand, physicists would draw on the chemists’ identification of the produced elements. In this way, the researchers were epistemically dependent on each other. As described by Hardwig (1985, 1991), dependence on other experts pervades any complex field of research, for example because the process of gathering and analyzing data would take too long for any individual researcher to accomplish, or because, as in this case, no one knows enough to be able to do the work alone. In such situations scientists need to trust the testimony of other researchers.16 However, this trust is not unqualified. As Hardwig has argued, the principle of testimony—that A has good reasons to believe that p if A has good reasons to believe that B has good reasons to believe that p—requires not only that the trusted researcher is truthful, but also that the trusted researcher is competent in the area, conscientious in his or her work, and capable of adequate epistemic self-assessment when reporting results. But how do researchers in an interdisciplinary collaboration evaluate these aspects of each other’s epistemic character? The closer their areas of expertise, the more they will be able to perform direct calibration, where they may draw on their own knowledge to evaluate the competence and conscientiousness of each other. Conversely, for remote areas of expertise they may need to perform indirect calibration, where they draw on past track records, the opinion of other scientists, etc.17 Hence, an overlooked aspect of interactional expertise is that it may also play an important role in providing the basis for the calibration of trust among researchers who interact in the production of new results at the forefront of research. Either one must have the contributory expertise required in both fields to argue convincingly for suggested changes, or one must have sufficient interactional expertise to be able to convey to others with contributory expertise why results from one’s own area of expertise call for changes in their area. This can be illustrated by the case of Ida Noddack,18 an analytical chemist who had worked for several years searching for the still missing elements in the periodic table, especially elements number 43 and 75, which belonged to the same group as the one to which Fermi and his collaborators expected element number 93 to belong. The chemical properties of elements of this group were therefore very much within her area of expertise, while nuclear physics was not.

16 I shall here be concerned primarily with trust between researchers in the early phases of research when new results are advanced, not in later phases when the produced results have become an established part of handbooks, textbooks, etc. Focusing on the early phase when results are initially advanced, I shall also deal primarily with the relation between researchers who contribute at the forefront of research within the same or closely related areas and who are therefore likely to build directly on each other’s results.

17 See, e. g., Kitcher 1993 and Goldman 2001 for discussions of calibration.

18 See Andersen 2009 for details of this case.

Shortly after the publication of the Fermi group’s results, she published a paper in which she pointed out that several known elements would behave like Fermi’s new element in the precipitation processes they had used to identify it. But these were all light elements and could not be the product of any of the artificially induced disintegration processes contained in Fermi’s taxonomy of nuclear transmutations. Noddack therefore suggested two different processes which could lead to the production of light elements: either a long series of successive transformations, or the division of the nucleus into several large fractions. Both these suggestions ran counter to the physicists’ model of nuclear transmutation, but either Noddack did not know this or she did not care. She made neither an attempt at explaining how her suggestion would make sense from a physical perspective—she did not have the contributory expertise in physics to do so—nor did she engage with physicists directly in order to understand their constraints and address them. This lack of interactional expertise seems to have been one important aspect of why her suggestion was simply discarded by the physicists as vague speculations that merely pointed to a lack of rigor in their argument (Amaldi 1984, 277).

While Hardwig’s account of trust and epistemic dependence provides a description of how scientists combine knowledge claims across areas of expertise, it does not address whether there are any additional commitments between collaborating scientists apart from the relation of trust. However, Gilbert (1987, 2000) has advanced an account of scientific knowledge that builds on group members’ mutual commitments. According to her analyses, members of a group jointly accept the claim that p if and only if it is common knowledge in the group that the group’s individual members have openly expressed a conditional commitment jointly to accept together with the other members that p (cf. Gilbert 1987, 195). One of the important implications of a commitment jointly to accept some claim that p is that “group members are personally obliged not to deny that p or to say things that presuppose the denial of p in their ensuing interactions with each other” (Gilbert 1987, 193 f; italics in original). In this way, the collective acceptances act as constraints on the possibility for group members to change their view or in other ways deny that p. If they speak contrary to the collectively accepted propositions, they have violated obligations that they had with others and they will be regarded as acting out of line. The other members of the party have the standing to take responsive measures, from mild rebuke to ostracism (cf. Gilbert 2000, 42).
This indicates that a joint commitment can only be rescinded by the parties to it acting together to form a new joint acceptance of a revised view (cf. Gilbert 2000, 40). This mutual obligation not individually to renounce the accepted claim seems to go at least some way toward explaining the stability of normal science, but as indicated by Gilbert (2000) in her paper on joint belief and scientific change, this leaves us with the question of how to account for scientific change. One key to answering this question has been laid out by Wray, who argues that “one who seeks to change the view of a plural subject must show how the obligation that the group has to accept the view is inappropriate. This may involve providing evidence that suggests that the collectively accepted view is false, but it need not” (Wray 2001, 326). Thus, an important aspect of understanding scientific change is to understand how new evidence that challenges a jointly accepted claim is received in a group that has jointly accepted it. Let us therefore examine the mutual commitments between the scientists in a group towards a shared conceptual structure and the quasi-ontological knowledge and knowledge of regularities that it contains.

During the period from 1934 to 1938 the groups working on induced radioactivity conducted a large number of experiments, and it can be followed step by step how new features and new concepts were introduced in the taxonomies, and how new theories were developed to explain discovered correlations.19 But a major conceptual restructuring was made when the two chemists Hahn and Strassmann in December 1938 discovered that a decay product that they thought was Radium could not be separated from Barium in their precipitation process; for chemists a very clear violation of the no-overlap principle in the taxonomy of chemical elements. Hahn wrote to their former group member, the physicist Lise Meitner, that, although he knew this was impossible from a physical viewpoint, as a chemist he had to conclude that the element was Barium. This would mean that the nucleus had split, and he asked whether she could figure out an explanation. In this way, he transmitted a situation that had been perceived as a severe anomaly in the lexicon of chemical elements, while transforming it into an anomaly in the closely related lexicon of decay processes. However, he did not himself finally draw the conclusion that the nucleus had split and that a new decay process had to be introduced; he was dependent on additional knowledge from Meitner about the physical lexicon of decay processes in order to introduce such a new concept. But in writing the letter and emphasizing that he saw how his results seemed to violate physical constraints, he showed that although he did not possess the contributory expertise within physics to advance a physical explanation of his result, he did have the interactional expertise to enter a partnership with Meitner on finding a solution. Meitner discussed Hahn’s report with another physicist, her nephew Otto Frisch. Frisch first questioned whether Hahn’s result could be correct, but Meitner made clear that she knew him as a good chemist whose work could be trusted. And soon Meitner and Frisch realized that the resources were available to give a physical explanation of this new fission process by drawing on another model of the nucleus recently developed by Bohr. Having communicated this back to Hahn, Hahn and Strassmann published their result that the nucleus had split, while Meitner and Frisch published the physical explanation of this new phenomenon.

19 See Andersen 1996 for details.

Hence, the new concept of nuclear fission was not introduced by any individual person or within a single taxonomy. It was only introduced through the joint acceptance, by both the chemists Hahn and Strassmann and the physicists Meitner and Frisch, of an ontological claim about the existence of the process as it could be incorporated in the two interconnected taxonomies of decay processes and chemical elements, respectively, as well as of a number of claims about which regularities hold for this new process and how these can be theoretically explained. Further, the introduction of the new concept, which had originally been triggered by what would otherwise have been a severe violation of the no-overlap principle in the lexicon of chemical elements, implied changes in the lexicon of decay processes, and these changes required new theoretical explanations. In this process of introducing a new concept, the physicists and chemists brought together various parts of the conceptual structures in which the new concept was embedded, parts that they each individually possessed and which were jointly necessary to introduce the new concept without violating any of the expectations entailed by the interlocking taxonomies describing this aspect of the world. In combining their individual knowledge, the involved scientists had to trust each other’s expertise. Thus, the introduction of a new concept in the intersection between two overlapping conceptual structures seems to have worked in the same way as social epistemologists describe joint acceptance, that is, as a conditional commitment jointly to accept the introduced concept. As this brief case study illustrates, conceptual development involving researchers from different areas of expertise requires that their lexicons interlock.
These interlocking lexicons can involve multiple models developed to explain the existence of categories and the correlations of features, and the whole conceptual system therefore involves a multitude of interlocking models that work as a distributed inferential system. In working together by means of such a distributed inferential system, however, the researchers involved need both to be able to interact in meaningful ways across disciplinary divides and to trust each other's expertise. When scientists from one discipline produce results that have implications elsewhere in the inferential system, they must be able to convey to scientists from other disciplines that their results can be trusted. One ingredient in this activity may be a display of interactional expertise sufficient to communicate meaningfully about the consequences in both disciplines and sufficient to engage in a joint acceptance of changes involving the hubs that bridge between interlocking lexicons.

Reference List

Abbott, A. (2001), The Chaos of Disciplines, Chicago: University of Chicago Press.
Ahn, W.-K. (1998), "Why Are Different Features Central for Natural Kinds and Artifacts? The Role of Causal Status in Determining Feature Centrality." In: Cognition 69, 135 – 178.
Amaldi, E. (1984), "From the Discovery of the Neutron to the Discovery of Nuclear Fission." In: Physics Reports 111, 332.
Andersen, H. (1996), "Categorization, Anomalies, and the Discovery of Nuclear Fission." In: Studies in the History and Philosophy of Modern Physics 27, 463 – 492.
Andersen, H. (2000), "Kuhn's Account of Family Resemblance: A Solution to the Problem of Wide-Open Texture." In: Erkenntnis 52, 313 – 337.
Andersen, H. (2001), "Reference and Resemblance." In: Philosophy of Science 68 (Proceedings), 50 – 61.
Andersen, H. (2006), "How to Recognize Introducers in Your Niche." In: Andersen, H. B. et al. (eds.), The Way through Science and Philosophy: Essays in Honour of Stig Andur Pedersen, London: College Publications, 119 – 136.
Andersen, H. (2009), "Unexpected Discoveries, Graded Structures, and the Difference between Acceptance and Neglect." In: Meheus, J. / Nickles, T. (eds.), Models of Discovery and Creativity, Dordrecht: Springer, 1 – 27.
Andersen, H. (2010), "Joint Acceptance and Scientific Change: A Case Study." In: Episteme 7 (3), 248 – 265.
Andersen, H. / Barker, P. / Chen, X. (2006), The Cognitive Structure of Scientific Revolutions, Cambridge: Cambridge University Press.
Arabatzis, T. (2008), "Experimenting on (and with) Hidden Entities: The Inextricability of Representation and Intervention." In: Feest, U. et al. (eds.), Generating Experimental Knowledge. Preprint 340, Berlin: Max-Planck-Institut für Wissenschaftsgeschichte, 7 – 18.
Bowker, G. C. / Star, S. L. (1999), Sorting Things Out: Classification and Its Consequences, Cambridge, MA: MIT Press.
Buchwald, J. Z. (1992), "Kinds and the Wave Theory of Light." In: Studies in History and Philosophy of Science 23, 39 – 74.
Campbell, D. T. (1969), "Ethnocentrism of Disciplines and the Fish-Scale Model of Omniscience." In: Sherif, M. / Sherif, C. W. (eds.), Interdisciplinary Relationships in the Social Sciences, Chicago: Aldine, 328 – 348.
Carey, S. (2009), The Origin of Concepts, Oxford: Oxford University Press.
Chen, X. (1997), "Thomas Kuhn's Latest Notion of Incommensurability." In: Journal for General Philosophy of Science 28, 257 – 273.
Collins, H. / Evans, R. / Gorman, M. (2006), "Trading Zones and Interactional Expertise." In: Studies in History and Philosophy of Science 38, 657 – 666.
Collins, H. M. / Evans, R. (2002), "The Third Wave of Science Studies: Studies of Expertise and Experience." In: Social Studies of Science 32, 235 – 296.
Galison, P. (1997), Image and Logic: A Material Culture of Microphysics, Chicago: University of Chicago Press.
Giere, R. N. (2007), "Distributed Cognition without Distributed Knowledge." In: Social Epistemology 21, 313 – 320.
Giere, R. N. / Moffat, B. (2003), "Distributed Cognition: Where the Cognitive and the Social Merge." In: Social Studies of Science 33, 301 – 310.
Gilbert, M. (1987), "Modelling Collective Belief." In: Synthese 73, 185 – 204.
Gilbert, M. (2000), "Collective Belief and Scientific Change." In: Gilbert, M. (ed.), Sociality and Responsibility, Lanham: Rowman & Littlefield, 37 – 49.
Gilbert, M. (2003), "The Structure of the Social Atom: Joint Commitment as the Foundation of Human Social Behavior." In: Socializing Metaphysics: The Nature of Social Reality, Lanham: Rowman & Littlefield, 39 – 64.
Goldman, A. I. (2001), "Experts: Which Ones Should You Trust?" In: Philosophy and Phenomenological Research 63, 85 – 110.
Gooding, D. (1986), "How Do Scientists Reach Agreement about Novel Observations?" In: Studies in History and Philosophy of Science 17, 205 – 230.
Gorman, M. (2002), "Levels of Expertise and Trading Zones: A Framework for Multidisciplinary Collaboration." In: Social Studies of Science 32, 933 – 938.
Gorman, M. E. (2010), Trading Zones and Interactional Expertise, Cambridge, MA: MIT Press.
Hardwig, J. (1985), "Epistemic Dependence." In: Journal of Philosophy 82, 335 – 349.
Hardwig, J. (1991), "The Role of Trust in Knowledge." In: Journal of Philosophy 88, 693 – 708.
Hoyningen-Huene, P. (1993), Reconstructing Scientific Revolutions: Thomas S. Kuhn's Philosophy of Science, Chicago: University of Chicago Press.
Kitcher, P. (1993), The Advancement of Science, Oxford: Oxford University Press.
Kuhn, T. S. (1979), "Metaphor in Science." In: Ortony, A. (ed.), Metaphor and Thought, Cambridge: Cambridge University Press, 409 – 419.
Kuhn, T. S. (1983), "Commensurability, Comparability, Communicability." In: PSA 1982 (2), 669 – 688.
Kuhn, T. S. (1990), "An Historian's Theory of Meaning." Lecture at UCLA.
Kuhn, T. S. (1991), "The Road since Structure." In: PSA 1990 (2), 3 – 13.
Kuhn, T. S. (2000), "The Trouble with the Historical Philosophy of Science." In: Conant, J. / Haugeland, J. (eds.), The Road since Structure, Chicago: University of Chicago Press, 105 – 120.
Kurz-Milcke, E. / Nersessian, N. / Newstetter, W. (2004), "What Has History to Do with Cognition? Interactive Methods for Studying Research Laboratories." In: Journal of Cognition and Culture 4, 663 – 700.
Laurence, S. / Margolis, E. (1999), "Concepts and Cognitive Science." In: Margolis, E. / Laurence, S. (eds.), Concepts: Core Readings, Cambridge, MA: MIT Press, 3 – 81.
Mitchell, S. et al. (1997), "The Whys and Hows of Interdisciplinarity." In: Weingart, P. et al. (eds.), Human by Nature: Between Biology and the Human Sciences, Mahwah, NJ: Erlbaum.
Murphy, G. / Medin, D. (1985/1999), "The Role of Theories in Conceptual Coherence." In: Margolis, E. / Laurence, S. (eds.), Concepts: Core Readings, Cambridge, MA: MIT Press, 425 – 458.
Nersessian, N. (1998), "Conceptual Change." In: Bechtel, W. / Graham, G. (eds.), A Companion to Cognitive Science, Oxford: Blackwell, 157 – 166.
Nersessian, N. (2004), "Interpreting Scientific and Engineering Practices: Integrating the Cognitive, Social and Cultural Dimension." In: Gorman, M. et al. (eds.), Scientific and Technological Thinking, Mahwah, NJ: Erlbaum, 17 – 56.
Nersessian, N. / Andersen, H. (1997), "Conceptual Change and Incommensurability." In: Danish Yearbook of Philosophy 32, 111 – 152.
Nersessian, N. et al. (2003), "Research Laboratories as Evolving Distributed Cognitive Systems." In: Alterman, R. / Kirsch, D. (eds.), Proceedings of the Twenty-Fifth Annual Conference of the Cognitive Science Society.
Nersessian, N. J. (2009), "How Do Engineering Scientists Think? Model-Based Simulation in Biomedical Engineering Research Laboratories." In: Topics in Cognitive Science 1, 730 – 757.
Osbeck, L. M. / Nersessian, N. J. (2006), "The Distribution of Representation." In: Journal for the Theory of Social Behaviour 36, 141 – 160.
Quine, W. V. O. (1969), "Natural Kinds." In: Rescher, N. (ed.), Essays in Honor of Carl G. Hempel, Dordrecht: Kluwer, 5 – 23.
Rolin, K. (2008), "Science as Collective Knowledge." In: Cognitive Systems Research 9, 115 – 124.
Rossini, F. A. / Porter, A. L. (1979), "Frameworks for Integrating Interdisciplinary Research." In: Research Policy 8, 70 – 79.
Rossini, F. A. / Porter, A. L. (1981), "Interdisciplinary Research: Performance and Policy Issues." In: Journal of the Society of Research Administrators 13, 8 – 24.
Star, S. L. (2010), "This is Not a Boundary Object: Reflections on the Origin of a Concept." In: Science, Technology, and Human Values 35, 601 – 617.
Star, S. L. / Griesemer, J. R. (1989), "Institutional Ecology, 'Translations' and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907 – 39." In: Social Studies of Science 19, 387 – 420.
Steinle, F. (1997), "Entering New Fields: Exploratory Uses of Experimentation." In: Philosophy of Science 64 (Proceedings), 65 – 74.
Thagard, P. (1997), "Collaborative Knowledge." In: Noûs 31, 242 – 261.
Thagard, P. (2005), "Testimony, Credibility, and Explanatory Coherence." In: Erkenntnis 63, 295 – 316.
Wray, K. B. (1999), "A Defense of Longino's Social Epistemology." In: Philosophy of Science 66, 538 – 552.
Wray, K. B. (2001), "Collective Belief and Acceptance." In: Synthese 129, 319 – 333.
Wray, K. B. (2006), "Scientific Authorship in the Age of Collaborative Research." In: Studies in History and Philosophy of Science A 37 (3), 505 – 514.

List of Contributors

Hanne Andersen is professor of philosophy of science and head of the Centre for Science Studies, Department of Physics and Astronomy, at Aarhus University, Denmark. Her research interests include scientific concepts and scientific change, the philosophy of Thomas Kuhn, the philosophy of interdisciplinarity, and social epistemology. Among her publications are On Kuhn (Wadsworth 2001) and The Cognitive Structure of Scientific Revolutions (co-authored with Peter Barker and Xiang Chen, Cambridge University Press 2006).

Theodore Arabatzis is Associate Professor in the Department of Philosophy and History of Science at the University of Athens, Greece. He has written on the history of the late 19th- and early 20th-century physical sciences and on philosophical and historiographical issues concerning conceptual change, scientific realism, and experimentation. He is the author of Representing Electrons: A Biographical Approach to Theoretical Entities (University of Chicago Press, 2006), co-editor of Kuhn's The Structure of Scientific Revolutions Revisited (Routledge, 2012), and co-editor of the journal Metascience.

Corinne Bloch has recently submitted her dissertation in Philosophy of Science to Tel Aviv University and her dissertation in Animal Science (neuro-endocrinology) to the Hebrew University of Jerusalem. She is currently a post-doctoral fellow at the Fishbein Center for the History of Science and Medicine at the University of Chicago. Her research focuses on theories of concepts and the dynamics of scientific theories.

Mieke Boon is a full professor of Philosophy of Science in Practice in the Philosophy Department at the University of Twente (The Netherlands), a department that focuses on the Philosophy of Technology. Boon has a firm background in scientific research in the engineering sciences. In 1987, she received an MSc degree (cum laude) in Chemical Engineering at Twente University of Technology. In 1996, she received a PhD degree (cum laude) in Biotechnology at Technical University Delft with the thesis Theoretical and Experimental Methods in the Modelling of Bio-oxidation Kinetics of Sulphide Minerals. Her current interest is philosophy of science for the engineering sciences, which involves topics such as scientific models, instruments, experiments, concepts, phenomena, robustness and epistemic values, and 'styles of reasoning'.

Ingo Brigandt is Associate Professor of Philosophy at the University of Alberta (Canada). His research pertains to the epistemology and practice of biology, including its history, with a focus on scientific concept use and biological explanations. The biological domains he studies are evolutionary biology, developmental biology (in particular their intersection, evolutionary developmental biology), and molecular biology. Brigandt has longstanding interests in philosophically understanding how biological concepts, e.g. the concept of homology and the gene concept, have changed in the course of history. His most recent project has been on integration (as an alternative to reduction), i.e. the question of how the generation of integrative explanations using ideas from different fields works; one such case of interdisciplinary research across different biological subdisciplines is the ongoing attempt to account for the evolutionary origin of novel structures. Brigandt is currently working on the nature of mechanistic explanations, in particular on how molecular-mechanistic explanation and mathematical modeling are combined in such areas as systems biology.

Uljana Feest is assistant professor at the Technische Universität Berlin. She received her first degree in Psychology at the University of Frankfurt and a PhD in History and Philosophy of Science from the University of Pittsburgh. Her research focuses on the philosophy and history of the cognitive and behavioral sciences, with a special emphasis on the experimental and observational practices of those sciences. She is currently finishing a book manuscript about the ways in which preliminary, "operational," definitions of concepts figure in the dynamics of research. Other ongoing interests include the history of the philosophy of science (especially the institutional and intellectual relations between psychology and philosophy in the decades before and after 1900) as well as the epistemic status of first-person reports in cognitive neuroscience.

Vasso Kindi is Associate Professor in the Department of Philosophy and History of Science at the University of Athens, Greece. She is the author of Kuhn and Wittgenstein: Philosophical Investigation of the Structure of Scientific Revolutions (Smili 1995, in Greek) and co-editor, with Theodore Arabatzis, of Kuhn's The Structure of Scientific Revolutions Revisited (Routledge, forthcoming 2012). She has published on Kuhn, Wittgenstein, conceptual change, philosophy of history, and ethics.

Miles MacLeod obtained a DPhil from the University of Vienna in history and philosophy of science in 2010 and spent two years as a postdoctoral research fellow at the Konrad Lorenz Institute for Evolution and Cognition Research. He is now a US National Science Foundation postdoctoral researcher with the Cognition and Learning in Interdisciplinary Cultures (CLIC) research group at the Georgia Institute of Technology, School of Interactive Computing. His interests include philosophy of biology and general philosophy of science, principally the application of philosophical principles and cognitive science to understanding scientific practice, including the relevance and informativeness of discussions about natural kinds to biology. He is currently pursuing a research project on the novel epistemic and cognitive practices of model-based reasoning in systems biology.

Nancy J. Nersessian is Regents' Professor of Cognitive Science at the Georgia Institute of Technology. Her research focuses on the creative research practices of scientists and engineers, especially on modeling and on how the processes of constructing models lead to conceptual innovations. She has numerous publications on these topics, including Creating Scientific Concepts (MIT, 2008; 2011 Patrick Suppes Prize in Philosophy, American Philosophical Society) and Science as Psychology: Sense-Making and Identity in Science Practice (Cambridge, 2010, co-authored with L. Osbeck, K. Malone, and W. Newstetter; 2012 William James Book Award, American Psychological Association). She is a Fellow of the Cognitive Science Society, a Fellow of the American Association for the Advancement of Science, and a Foreign Member of the Royal Netherlands Academy of Arts and Sciences.

Dirk Schlimm is Assistant Professor in the Department of Philosophy and Associate Member in the School of Computer Science at McGill University, Montreal. He received his PhD from Carnegie Mellon University and studied previously at Trinity College Dublin and the Technical University of Darmstadt. His research interests fall into the areas of history and philosophy of mathematics and science, epistemology, and cognitive science. In particular, he is interested in the developments in the 19th and early 20th century that led to the emergence of modern mathematics and logic, and in systematic investigations regarding axiomatics, analogical reasoning, concept formation, the use of notation, and theory development. He is currently involved in a research project on empiricism in mathematics as well as in editorial projects on the works of Bernays, Hilbert, and Carnap.

Friedrich Steinle is professor of history of science at the Technische Universität Berlin. His research focuses on the history and philosophy of experiment, the history of electricity and magnetism, the history of colour research, and the dynamics of empirical concepts in research practice. His books include Newtons Manuskript 'De gravitatione' (1991) and Explorative Experimente: Ampère, Faraday und die Ursprünge der Elektrodynamik (2005); he is co-editor of Experimental Essays – Versuche zum Experiment (1998, with M. Heidelberger), of Revisiting Discovery and Justification: Historical and Philosophical Perspectives on the Context Distinction (2006, with J. Schickore), and of Going Amiss in Experimental Research (2009, with G. Hon and J. Schickore).

Index of Names

Abbott, Andrew 271
Ahn, Woo-kyoung 280
Allison, Henry 231
Amaldi, Edoardo 286
Ampère, André-Marie 115, 296
Amsel, Abram 173, 175
Andersen, Hanne 4 f., 18 – 20, 150, 271, 273, 275 – 278, 280, 283, 285, 287, 293
Arabatzis, Theodore 5, 7, 11, 15 f., 43, 50, 56, 149 – 151, 153, 155, 158, 160, 162, 167 – 169, 280, 293, 295
Aristotle 130
Augustine 29
Avery, Oswald 61
Bacon, Roger 109
Bailer-Jones, Daniela 221
Baker, Gordon 41
Balmer, Heinz 109
Barker, Peter 5 f., 276 f., 293
Barsalou, Lawrence 5
Bartels, Andreas 163
Bartha, Paul 143 f.
Bateson, William 51, 53 – 55, 58 f., 61, 64 – 66
Bawden, Frederick Charles 203
Beadle, George 54, 61
Beaney, Michael 130 f.
Beardsmore, Richard 39
Beatty, John 85
Bechtel, William 75, 179 f., 182 f.
Beijerinck, Martinus Willem 17, 196 – 200, 202, 205 – 208, 211 f.
Benzer, Seymour 51, 60, 65 f.
Beurton, Peter 92
Bevir, Mark 29, 42
Biot, Jean-Baptiste 113, 115
Black, Max 129 f.

Bloch, Corinne 4, 7, 11, 16 f., 56, 69, 97, 191 f., 213 f., 293
Block, Ned 77, 171
Bogen, Jim 18, 168, 182, 186, 192, 219, 221 f., 224
Boghossian, Paul 76
Bohr, Niels 160, 287
Bonola, Roberto 135
Boon, Mieke 17 f., 56, 171, 219 f., 223, 231, 238, 240, 293
Bos, Lute 195, 196, 198, 200, 205, 207, 208, 210, 211
Boveri, Theodor 54, 60
Bowker, Geoffrey 274
Boyle, Robert 107
Brandom, Robert 6, 76
Bridgman, Percy 172, 176
Brigandt, Ingo 6, 12 f., 36, 50, 75 – 77, 79 f., 82, 84, 87 – 89, 91, 98 f., 105, 142, 178, 264, 277, 294
Broek, Brian van den 140, 144
Buchwald, Jed 155, 278 f.
Buldt, Bernd 141
Burian, Richard 52, 56, 65, 91, 167
Burnet, Frank Macfarlane 203
Call, Tyler 144
Campbell, Donald 281
Cantor, Georg 131, 138
Carbone, Mauro 26
Cardano, Gerolamo 111
Carey, Susan 279
Carlson, Elof 51, 53 f., 65
Carnap, Rudolf 2, 10, 25 f., 131 f., 151, 159, 172, 176, 296
Carnot, Sadi 240
Cartwright, Nancy 25, 161, 223
Castle, William 64 f.
Cavell, Stanley 34, 41 f.
Cawood, John 114


Caygill, Howard 24, 26
Cayley, Arthur 136
Chadarevian, Soraya de 247
Chandrasekharan, Sanjay 264, 266, 268
Chang, Hasok 107, 155
Chen, Xiang 5, 276 – 278, 293
Collins, Harry 271 f.
Copi, Irving 192
Coulomb, Charles Auguste de 113
Cracraft, Joel 89
Craver, Carl Frederick 75, 179 – 183
Cuénot, Lucien 63
Dale, Sir Henry 210 f.
Dalton, John 107
Darden, Lindley 75, 81, 180
Darrigol, Olivier 120 f.
Darwin, Charles 84, 86
Daston, Lorraine 8
Davidson, Donald 33 f.
Dean, Megan 99
Dear, Peter 150
Dedekind, Richard 131
Descartes, René 24
Dubs, Homer 192
Dufay, Charles 153, 157
Duhem, Pierre 221
Dunbar, Kevin 249
East, Edward 55, 59, 63 f., 69
Ebbinghaus, Herman 175
Einstein, Albert 5, 131
Elgin, Catherine 265
Elliott, Kevin 167
Ellis, Brian 192
Emerson, Rollins Adams 59
Ephrussi, Boris 54
Eriksson-Quensel, Inga-Britta 214
Euclid 134 – 139
Euler, Leonhard 112, 132, 140 f.
Evans, Robert 271 f.
Eyster, William 63
Falk, Raphael 51, 54 f., 57
Faraday, Michael 5, 14, 115 – 123, 157, 296

Feest, Uljana 1, 7, 11, 16 f., 48, 69, 99, 106, 124, 150, 163, 167 f., 176 f., 183, 191 f., 214 f., 220, 222 f., 235, 241, 294
Feigl, Herbert 26, 28, 151
Fermi, Enrico 285
Feyerabend, Paul 4, 30, 33, 49, 150, 152
Field, Hartry 77
Fleck, Ludwik 124
Fodor, Jerry 98
Forster, Michael 38 f., 43
Foucault, Michel 30
Fox Keller, Evelyn 106
Fraassen, Bas van 159, 221
Franklin, Allan 167, 226
Frege, Friedrich Ludwig Gottlob 10, 14, 24 – 26, 48, 127, 128, 130 – 133, 136, 144
Frisch, Otto 287 f.
Frosch, Paul 195, 196, 198, 207, 208
Frost, Howard 65
Fuller, Steve 149
Gadamer, Hans-Georg 26
Galison, Peter Louis 273
Gamow, George 283
Gauß, Carl Friedrich 14, 114, 122 f.
Gavroglu, Kostas 156, 163
Gayon, Jean 54
Geach, Peter 34
Gellibrand, Henry 112
Geuss, Raymond 29
Ghassemi, Mona 144
Giere, Ronald 282
Gilbert, Margaret 282, 286
Gilbert, William 111, 112, 114, 122
Glennan, Stuart 179
Glock, Hans-Johann 27
Goldman, Alvin 282, 285
Goldschmidt, Richard 58 f., 61, 69
Gooding, David 115, 150, 158, 280
Goodman, Nelson 265
Goodpasture, Ernest 203
Goodwin, Charles 247
Gorman, Michael 271 f.


Gotthelf, Allan 216
Goudaroulis, Yorgos 156
Gray, Jeremy 135
Green, Christopher 176
Griesemer, James 66, 153, 281
Griffiths, Paul 51, 84, 92
Hacker, Peter 26, 41
Hacking, Ian 10, 24, 28 – 30, 43, 47, 75, 154, 161, 167, 219, 221, 223 f., 226, 237
Hahn, Hans 26
Hahn, Otto 287, 288
Hall, Brian 79 f.
Hallett, Michael 129, 136, 144
Hamilton, William 136
Hansteen, Christopher 112
Hardwig, John 282, 284, 286
Harman, Gilbert 77
Harmon, Ellie 268
Harrison, Robert 203
Heath, Thomas 135 f.
Heaviside, Oliver 121
Hebb, Donald Olding 253
Heidegger, Martin 26
Helvoort, Ton van 197, 202, 204 f., 209, 213
Hempel, Carl Gustav 2, 16, 28, 49, 152, 169 – 176, 179, 185 f.
Higbie, Elizabeth 203
Hilbert, David 130, 135 f., 296
Hoffstadt, Rachel 203
Hofstadter, Douglas 143
Hollan, James 247
Hooke, Robert 235
Hopwood, Nick 247
Howitt, Beatrice 203
Hoyningen-Huene, Paul 277
Huff, Douglas 41
Hughes, Sally Smith 194, 196 – 200, 202, 204 – 206, 208, 210, 212, 215
Hull, Clark 16, 169, 172 – 175, 177 – 179, 185
Hull, David 31, 75, 149
Humboldt, Alexander von 113
Huskens, Jurriaan 241
Hutchins, Edwin 247

Ivanovsky, Dimitri 195 f., 200, 205

Jablonka, Eva 216
Jacob, François 65
James, William 10, 25, 27 f., 30, 48
Johannsen, Wilhelm 53 – 55, 57 f., 62
Jonkers, A.R.T. 112 f.
Kamerlingh Onnes, Heike 156
Kant, Immanuel 10, 26 f., 231, 233 f., 241
Kaufmann, Walther 162
Kelley, Donald R. 29
Kierkegaard, Søren 29
Kindi, Vasso 5, 10 f., 15, 23, 48, 78, 149, 151, 155, 172, 245, 294
Kirsch, David 247
Kitcher, Philip Stuart 14, 82, 91, 137 f., 141, 143, 285
Klein, Felix 134
Kneale, William 26
Knight, Rob 51
Knuuttila, Tarja 220, 238, 240
Koch, Robert 17, 194 f., 201, 211
Koch, Sigmund 175, 198
Koselleck, Reinhart 42
Kripke, Saul 4, 150, 154
Kroon, Frederick 157
Kuhn, Thomas 4, 30 f., 33 f., 41, 46, 49, 85, 150, 152, 273 – 277, 281, 293 – 295
Kurz-Milcke, Elke 282
Kuukkanen, Jouni-Matti 40, 150
Kuusela, Oskari 42
Lagrange, Joseph-Louis 138
Lakatos, Imre 14, 41, 127 f., 132 – 134, 140, 143 f.
Laurence, Stephen 98, 280
Lave, Jean 247
Ledingham, John 206
Leibniz, Gottfried Wilhelm 24, 141
Lennox, James 216
Lewis, Clarence Irving 24
Lipmann, Fritz 204
Lodge, Oliver 121
Loeffler, Friedrich 195 f., 198, 207 f.


Longino, Helen 75
Lorentz, Hendrik Antoon 158 – 162
Love, Alan 50, 75, 79 – 81, 234
Lunbeck, Elizabeth 8
Lwoff, André 204, 206, 212, 214
MacCorquodale, Kenneth 55
Machamer, Peter 176, 179
Machery, Edouard 6, 24, 208
MacLeod, Miles 11 f., 15, 47, 61, 89, 97, 106, 295
Magal, Oran 144
Margolis, Eric 98, 280
Martin, Thomas 119
Massimi, Michela 222, 231
Maull, Nancy 81
Maxwell, James Clerk 14, 120 f., 160
Mayer, Adolf 195 f.
McCarty, Maclyn 61
Medin, Douglas 280
Meehl, Paul 55
Meitner, Lise 287 f.
Millikan, Robert 160 f.
Minelli, Alessandro 80
Mitchell, Sandra 274
Moczek, Armin 80
Moffat, Barton 282
Monod, Jacques 65
Moore, Elizabeth 203
Morgan, Thomas Hunt 51, 53 – 55, 59 f., 63 f., 66, 70
Moss, Lenny 92
Mueller, Ian 135
Müller, Gerd 79 f.
Muller, Hermann 51, 54 – 58, 60, 62 – 64, 66, 69 f.
Mullins, Patrick 216
Muntersbjorn, Madeleine 140
Murphy, Gregory 280
Neiman, Susan 231, 241
Nersessian, Nancy 3 – 6, 18 f., 31, 38, 48, 50, 77, 149, 153, 223, 229, 245 f., 249, 264, 266, 275, 277, 279, 282, 295
Neumann-Held, Eva 51
Neurath, Otto 25 f.
Newman, Stuart 79

Newstetter, Wendy 268, 282, 295
Newton, Isaac 107, 296
Nietzsche, Friedrich 28
Nilsson-Ehle, Herman 63
Nocard, Edmond 199
Noddack, Ida 285
Norrby, Erling 202, 204
Norton, John 6
Ockham, Wilhelm von 24
Olson, Wendy 79
Ørsted, Hans-Christian 114 f.
Ortega y Gasset, José 29
Osbeck, Lisa 268, 282, 295
Owen, Richard 86
Pandit, Anjali 241
Parker, Raymond 201
Pasch, Moritz 14, 127, 142, 144
Pasteur, Louis 194, 204, 206, 212
Patton, Chris 268
Pavlov, Ivan 175
Peano, Giuseppe 131
Peregrinus, Petrus (Pierre de Maricourt) 109 – 111, 123
Periola, Mario 26
Philibert, Andre 212
Pilcher, Stephen 203
Pim den Boer, W. 42
Pliny 108
Poincaré, Henri 129, 144
Pollard, Stephen 142
Porta, Giovanni Battista della 111
Porter, Alan 272
Portin, Petter 51, 56
Price, Michael 136
Procee, Henk 241
Psillos, Stathis 159, 162
Putnam, Hilary 4, 49, 150, 152 – 154, 230
Pythagoras 152
Quine, Willard van Orman 2 – 4, 48, 128, 279
Radder, Hans 156, 159


Rashotte, Michael 173, 175
Reck, Erich 136
Reeck, Gerald 88
Rheinberger, Hans-Jörg 50, 52
Richardson, Robert 179, 183
Rivers, Thomas 198, 201 f., 205 f., 210 – 214
Rolin, Kristina 282
Rosch, Eleanor 5
Rossini, Frederick Anthony 272
Rouse, Joseph 8 f., 159, 231, 238, 241
Roux, Emil 199, 206, 210
Rudolph, Rachel 144
Russell, Bertrand 48
Salmon, Wesley 2
Sarkar, Sahotra 90
Savolainen, Janne 241
Schlimm, Dirk 5, 14 f., 41, 48, 127, 136, 141 f., 186, 295
Sellars, Wilfrid 2
Shapere, Dudley 3, 5, 153
Skinner, Quentin 38
Skosnik, Katherine 136
Smadel, Joseph 203
Smith, Julian 108
Smith, Laurence 175
Spence, Kenneth 175
Stadler, Lewis 56 f., 69 f.
Staley, Richard 162
Stanley, Wendell 202 f., 211 f.
Star, Susan Leigh 153, 274, 281
Steinhardt, Edna 201
Steinle, Friedrich 1, 7, 13 f., 36 f., 50, 56, 99, 105, 107, 115 f., 119, 142, 150, 153, 158, 163, 167, 186, 215, 241, 264, 277, 279, 296
Stotz, Karola 51, 92
Strassmann, Fritz 287 f.
Sturtevant, Alfred 51, 63 f.
Suarez, Francisco 24
Sullivan, Jacqueline 177, 183
Suppe, Frederick 176, 295
Sutton, Walter 54, 60
Svedberg, Theodor 214
Tappenden, Jamie 137
Tatum, Edward 54, 61
Tessin, Timothy 39
Thagard, Paul 282
Thomson, Joseph John 158, 161 f.
Toulmin, Stephen 42 f.
Uhlenbeck, George 157
Vico, Giovanni Battista 29
Virchow, Rudolf 198
Vygotsky, Lev 245
Wagner, Günter 79 f.
Waismann, Friedrich 25 – 27, 41
Wall, M. J. 203
Waters, Kenneth 89 f., 92, 167
Waterson, Anthony Peter 200
Weber, Marcel 66, 75, 91, 120
Weierstrass, Karl 131
White, Morton 29
Wilder, Raymond 140
Wilkinson, Lise 200
Wilson, Charles Thomson Rees 161
Wilson, Mark 2, 25, 37 f., 138
Wittgenstein, Ludwig 10, 23, 26 – 28, 35 – 37, 39 – 42, 48, 230, 294 f.
Wollaston, William Hyde 115
Woodruff, Alice Miles 203
Woodward, James 18, 167 f., 181 f., 192, 219, 221 f., 224
Wouters, Arno 78
Wrathall, Mark 26
Wray, K. Brad 282, 286 f.
Wright, Sewall 58
Yu, Andy 144
Zadeh, Lotfi 129
Zeeman, Pieter 157 – 159