Synthese Library
Studies in Epistemology, Logic, Methodology, and Philosophy of Science
Volume 479
Editor-in-Chief
Otávio Bueno, Department of Philosophy, University of Miami, Coral Gables, USA

Editorial Board Members
Berit Brogaard, University of Miami, Coral Gables, USA
Steven French, University of Leeds, Leeds, UK
Catarina Dutilh Novaes, VU Amsterdam, Amsterdam, The Netherlands
Darrell P. Rowbottom, Department of Philosophy, Lingnan University, Tuen Mun, Hong Kong
Emma Ruttkamp, Department of Philosophy, University of South Africa, Pretoria, South Africa
Kristie Miller, Department of Philosophy, Centre for Time, University of Sydney, Sydney, Australia
The aim of Synthese Library is to provide a forum for the best current work in the methodology and philosophy of science and in epistemology, all broadly understood. A wide variety of different approaches have traditionally been represented in the Library, and every effort is made to maintain this variety, not for its own sake, but because we believe that there are many fruitful and illuminating approaches to the philosophy of science and related disciplines. Special attention is paid to methodological studies which illustrate the interplay of empirical and philosophical viewpoints and to contributions to the formal (logical, set-theoretical, mathematical, information-theoretical, decision-theoretical, etc.) methodology of empirical sciences. Likewise, the applications of logical methods to epistemology as well as philosophically and methodologically relevant studies in logic are strongly encouraged. The emphasis on logic will be tempered by interest in the psychological, historical, and sociological aspects of science. In addition to monographs Synthese Library publishes thematically unified anthologies and edited volumes with a well-defined topical focus inside the aim and scope of the book series. The contributions in the volumes are expected to be focused and structurally organized in accordance with the central theme(s), and should be tied together by an extensive editorial introduction or set of introductions if the volume is divided into parts. An extensive bibliography and index are mandatory.
Ramón Alvarado
Simulating Science
Computer Simulations as Scientific Instruments
Ramón Alvarado
Philosophy Department
University of Oregon
Eugene, OR, USA
ISSN 0166-6991    ISSN 2542-8292 (electronic)
Synthese Library
ISBN 978-3-031-38646-6    ISBN 978-3-031-38647-3 (eBook)
https://doi.org/10.1007/978-3-031-38647-3

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To Isaac, Alejandra, Blanca and John
Contents
1 Introduction: Instruments of Futures Past
  1.1 Galileo, Kepler and the Telescope in Science
  1.2 The Electronic Oracle and Computer Simulations
  1.3 The Philosophy of Computer Simulations
  References

2 Computer Simulations in Science
  2.1 Equation-Based Simulations
  2.2 Agent-Based Simulations
  2.3 Supercomputing and Simulations
  2.4 Machine Learning and Simulations
  References

3 The Rise of a Dichotomy
  3.1 Establishing the Dichotomy
  3.2 Somewhere In-between Model and Experiment
  3.3 Experimental Models, Modeling Experiments and Simulating Science
  3.4 Simulating Experiments
  3.5 The Pipeline
  References

4 The Via Negativa: Computer Simulations as Distinct
  4.1 Simply Distinct
  4.2 Distinct from the Context in Which They Are Deployed
  4.3 Functionally Distinct: They Do Different Things or They Do Things Differently
  4.4 How Distinct, Really? An Objection
  References

5 Technical Artifacts, Instruments and a Working Definition for Computer Simulations
  5.1 Defining Instruments
  5.2 Defining Scientific Instruments
  5.3 Taking Stock and Defining Computer Simulations
  References

6 Hybrid All the Way Down
  6.1 A False Dichotomy: The In-betweenness of Computer Simulations Explained
  6.2 Hybridity and Functional Novelty
  6.3 Other Instrument Taxonomies
  References

7 Implications of the Instruments View of Computer Simulation
  7.1 Epistemic Entitlements and Computer Simulations
  7.2 Transparent Conveyers, Expert Testimony and Computer Simulations
  7.3 Computer Simulations Are Not Transparent Conveyers
  7.4 Warrants for One Thing Are Not Necessarily Warrants for Another
  7.5 Computer Simulations Are Not Themselves Sources of Expert Testimony
  7.6 Epistemic Technologies, Epistemic Trust and Epistemic Opacity
  7.7 Scientific Inquiry Versus Everyday Epistemic Practices
  References

8 Conclusion
  References

References
Index
Chapter 1
Introduction: Instruments of Futures Past
1.1 Galileo, Kepler and the Telescope in Science

As Galileo left Bologna in what was, by several accounts of the time, an embarrassed haste, those who had the opportunity to interact with his newly minted "looking glass" were completely underwhelmed by both its performance and its promise. Some of them even wrote to Kepler to state that several of the highly esteemed and educated invitees to Galileo's telescope sightings openly "acknowledged the instrument deceived" (Van Helden, 1994). Galileo later tried to convince scholars of the remarkable power of his instrument by shipping some of his prototypes to persons of influence (Van Helden, 1994; Zik, 1999, 2001; King, 2003). However, just having access to the instrument and knowing the maker's reputation was not enough to generate the trust required to deem the device a trustworthy technology and to include it in the canon of rigorous inquiry methods of the time. Galileo had to further provide a methodology to go along with the instrument whenever he shipped it. In these instructions, he provided a detailed manual on how to position the instrument, noted when the best viewing times were, and even included star maps to find locations in space more easily. Often, these instructions were provided with the aim of corroborating sightings that he himself had already published (Van Helden, 1994; King, 1955). This, again, was not enough. Some of the users of the instrument were simply unable to work with the telescope or were unable to see the things that Galileo was trying to corroborate.

When we think of the kinds of technical artifacts that belong to that exclusive category called scientific instruments, the classical telescope undoubtedly qualifies as one. It was precisely crafted and perfected through rigorous processes, and it was trusted as a reliable scientific instrument—within its limited parameters—up until computationally enhanced alternatives showed up in the latter half of the twentieth century. However, this was not always the case.
While some of the particular instances of skepticism towards Galileo's instruments could be explained by citing several factors of the social dynamics of the time, as some historians and philosophers of science have suggested—factors such as professional animosity, elitist skepticism, power dynamics of one sort or another, worries about appropriation, etc. (Biagioli, 2010)—the events surrounding the eventual sanctioning of the telescope as a scientific instrument in the following years and decades tell a different and perhaps deeper story about what it takes for a technical artifact to be part of the exclusive category of scientific instruments. At the very least, the efforts by Galileo and others elucidate the fact that the introduction of novel technologies into scientific inquiry is not something that scholars and practitioners took lightly even back then. Galileo himself thought that something else besides mere pragmatic reasons[1] or practical applications was necessary to accept an instrument as a serious tool for inquiry. Consider the following quote:

Thus, we are certain the first inventor of the telescope was a simple spectacle-maker who, handling by chance different glasses, looked, also by chance, through two of them, one convex and the other concave, held at different distances from the eye; saw and noted the unexpected result; and thus found the instrument. On the other hand, I, on the simple information of the effect obtained, discovered the same instrument, not by chance, but by the way of pure reasoning.[2]

[1] I engage with basic pragmatic assumptions in more detail later in the book; for now, I take the term to mean considerations of use and/or usefulness in the context of problem solving. This is in contrast to views that value knowledge and knowledge acquisition without giving such considerations a central role.
[2] In Il Saggiatore, as cited by King (1955, 2003), p. 36.

Of course, Galileo did not invent the telescope, but he is the one credited with turning it into a scientific instrument (Zik & Hon, 2017), and he explicitly noted the difference in value between something being invented by accident and something being the product of "the reasoning and intelligent mind" (King, 1955). It is the repeated emphasis on the word 'chance' in the quote above that betrays the sense of superiority he held about his own process. Of course, regardless of its status at the time, pure reasoning alone was nevertheless not enough to claim that an instrument could tell us something about the real world. It is worth mentioning that at a point not too far removed from this time in the history of science, the mere use of instruments was in itself seen as epistemically suspect. Even as natural philosophy was coming of age, there was a strong distinction between what scholars called 'merely' mathematical instruments and what were known as philosophical apparatuses—devices that offered some access to features of the world beyond pure formal methods (Warner, 1990, p. 83). So, while Galileo could claim that his instruments were more reliable and perhaps more scientific than others on the grounds that they were constructed with the help of his theoretical understanding of the principles of optics, convincing others of this was not so straightforward. Rather, novel sanctioning methods had to be created. Kepler, for example, in his efforts to validate both Galileo's instruments and astronomical findings, had to devise a way to ensure that any doubts from critics,
skeptics and cynics could be assuaged. Historian of science Albert Van Helden describes the recipe for certifying telescope observations designed by Kepler the following way: "make the observatory a public space by enrolling fellow observers of high social status or other excellent credentials, have them draw what they see independently, and then compare results, thus confirming the observations by means of witnesses" (1994, p. 13). However, contrary to contemporary sociological accounts of scientific norms, Kepler knew that group consensus, by itself, could reasonably be judged inadequate and insufficient evidence for the purposes of instrument validation. Rather, independently certified procedures and tests had to be devised and included in the sanctioning process. Kepler himself notes:

We followed the procedure whereby what one observed he secretly drew on the wall with chalk, without its being seen by the other. Afterwards we passed together from one picture to the other to see if we agreed. (Kepler, 1611, as cited by Van Helden, 1994, p. 12)
As we can see from this, the socially authoritative invitees had to be guarded from themselves in order to truly and legitimately validate the process and perhaps the instrument and its output. Van Helden further notes that Kepler "even went so far as to tell the reader of his published tract on the subject that Prague was his witness that these observations were not sent to Galileo" (Van Helden, 1994, p. 13). Thus, Kepler's methods included not just ways to sanction the instrument itself but also ways to sanction the processes by which the instrument was sanctioned and legitimized as a scientific instrument: invite authoritative figures to judge for themselves, but do not let their authoritative status be the decisive feature on the matter; have them independently draw their observations, but do not let the independent observations by themselves determine one's trust in the instrument; rather, have them compared through a semi-blind process by peers of similar epistemic status; and finally, do not let those with a direct interest in the outcome of the process know any details about the process ahead of time or about the outcomes during the process, but do let the world know this was part of the process.

So, as the precautions taken by Kepler in the passage above show, the validating procedures designed and adopted to sanction the telescope as a scientific artifact—i.e., as an instrument that could be accepted to reliably elucidate true aspects of the world—were not just independent of the creators of the instrument. These efforts also had to guard against the possible biases of those with direct interests in the use of such a tool. Throughout the decades after these initial steps were taken by Kepler, the communities surrounding these efforts became quite heterogeneous: they included statesmen, mathematicians, philosophers, and merchants (Drake, 1984; Van Helden, 1994; Malet, 2003). While astronomers, mathematicians, natural philosophers and glass-makers (of course) were always close to parts of the development process of the telescope, its ultimate validation as a reliable scientific instrument also involved experts from unexpected fields, such as paper-makers and even font designers. This is because certain papers absorbed ink in ways that created easily recognizable patterns, while certain fonts could be easily recognized despite suboptimal lens crafting, independently of the instrument's prowess. Similarly, if you have seen the random
assortment of letters at an eye doctor's exam, you can understand why early telescope testers hired by Prince Leopold, in Florence, also had to replace the use of Dante's well-known texts in these tests: they could be guessed or completed from memory by the often well-read participants, rather than clearly seen through a telescope. These issues, it was recognized at the time, undermined the credibility of any reliability assessment performed on the instrument.

In short, the validation of the telescope as a genuinely scientific instrument both gave birth to and was the product of an independent process of validation akin to an industry that ensured the quality and reliability of the instrument on its own merits rather than through simple reliance on a chain of epistemic entitlements connecting to either its creator, its users, or any one group or individual in particular. Ultimately, it was not up to Galileo to say whether the telescopes built or used by him were reliable. It was not up to Kepler to determine whether the findings associated with its use were trustworthy, though he had some innovative ideas concerning how to go about it. And it was also not up to Hevelius, Huygens, Fontana, Torricelli, Prince Leopold or even the whole institution of the Accademia del Cimento—which was instituted in large part to join these efforts. It was up to an open process. So open, in fact, that in certain circles it became a kind of spectator sport in which instrument makers and users competed to prove the prowess of their devices over existing ones (Van Helden, 1994).

The history of how the telescope came to be accepted as a scientific instrument implies a series of norms and practices that, contrary to modern interpretations in the sociology of science, emerged from an inclination to provide appropriate ways to assess the epistemic standing of the material things with which we seek to understand the world around us. Such a history shows us that, while seeking to corroborate theories and validate experimental practices in science is important, there is also a significant need to ensure that the technical artifacts involved in this process are themselves reliable in a manner that can be assessed and accessed by any interested party. At the very least, histories such as the telescope's serve to demonstrate that something other than mere community consensus (or even more sophisticated sociopolitical forces and structures) is also at play in the sanctioning of novel technologies, and that this other thing is at least equally decisive in the establishment of technological developments in scientific practice.[3]

[3] Importantly, the lesson to be drawn from this is not so much that standards of practice change and evolve over time—as Van Helden (1994) suggests and which is simply true. Rather, the lesson here is that even the existing standards of the time were already not enough, that there was a drive to devise new standards where none existed, and that these were thought to have to transcend both existing standards and any attachment that contemporary communities of experts had to them.

This piece of the history of science, as we will see in this book, has important epistemic and philosophical ramifications for the inclusion of computer simulations in scientific practice. The ultimate sanctioning of the telescope as a scientific instrument, and the story of how this came to be, shows, for example, that many of the arguments conventionally deployed to justify our reliance on novel computational methods are simply inadequate and that more philosophical work needs to be
done to properly address their status and to understand their epistemic import. For example, as we will see in detail later in this book, the history of the telescope suggests that conventional strategies to justify the use of novel computational methods, such as the appeal to epistemic entitlements—the non-justificatory, non-evidentiary rights to a belief without which ordinary epistemic practices would not be possible—are simply not enough to accept the introduction of a novel technology into scientific inquiry as if it were already reliable or immediately trustworthy. After all, we are talking about the rejection of Galileo and his telescope, someone with much authoritative capital at the time. Yet this credibility surplus did not simply transfer to the device. The later involvement of Kepler in advocating for the reliability of Galileo's instrument, as we will see in this book, also shows that besides the critical involvement of an expert community, important infrastructural arrangements have to be in place to ensure the verification and validation of novel technologies as apt for scientific inquiry. This is in sharp contrast to views of scientific practice that account for scientific norms as either fully dependent on or determined by the social construction of in-group coherence, community consensus, or the deliberation of privileged stakeholders alone (Kuhn, 1962). Rather, the history of the telescope's sanctioning suggests that such consensus only has epistemic import if it maps onto norms and procedures that are accessible to any agent inquiring about them. This is particularly the case, as we shall see throughout this book, when it comes to the introduction of novel technologies into scientific practice.

Of course, history does not in and of itself constitute an argument. In particular, just because some things have been the way they are does not justify that they should continue to be that way or even that they should have been the way they were. However, the arguments in this book will not infer from historical fact that things ought to be one way or another. Rather, drawing from diverse sources beyond the philosophy of science—such as the philosophy of technology, the philosophy of engineering and the history of science—the arguments in this book will elucidate from examples such as the one above the reasons why things were one way rather than another. In doing so, the book will in turn elucidate the fact that these reasons were not merely contingent—or simply idiosyncratic whims of an imperial age—but rather conceptually and perhaps even normatively justified. More importantly, in detailing historical trajectories such as the telescope's, the aim is to ultimately show that these reasons may be reasons worth defending and preserving as we move forward in a scientific context that includes novel computational technologies.

Sabina Leonelli (2021) uses an eliciting term for these conceptual frameworks that emerge after envisioning or projecting into the future the ways in which technological developments could go or be. She calls these visions 'imaginaries' and defines them as the "imagined and projected" contributions or responses of a given technology—in her case data science models—to a given issue (2021, p. 4). Imaginaries, she says, are "linked to specific expectations about what technical, human, and institutional resources (including methods, skills, and supportive socioeconomic conditions) should ideally be developed and combined in order to effectively use [a technology]" (ibid) in a particular context. The history of the
telescope discussed here, although taken from events in the past, can be interpreted as such an imaginary. Such a history directly concerns the gradual construction of the expectations and norms surrounding the establishment of idealized or aspirational technical, human, and institutional resources—including, as Leonelli suggests, the methods, skills, and supportive socioeconomic conditions—required to effectively rely on a given technology in a particular context: namely, the telescope in and for scientific inquiry. Accordingly, this history also provides a contextual framework with which to understand the influential imaginary by which scientific and technical progress was to be informed, not just at the time, but also in the years, decades and even centuries to come. Yet such an imaginary can also help us elucidate some aspects of both scientific practice and technological development worth safeguarding as we deliberate towards a technological future that includes computational methods such as computer simulations.

As an argumentative strategy, the question related to these historical accounts is the following: to what extent have computer simulations been through a sanctioning process similar to the one other canonically sanctioned scientific instruments have gone through? This comparison can only be made, as I will argue, once we accept that computer simulations are indeed most closely related to instruments and not, as most of the literature suggests, to extensions of formal mathematical methods such as models (Weisberg, 2012) or to special, deflated cases of experimentation (Morrison, 2015; Lenhard, 2019). Once we acknowledge that computer simulations are more closely related to the instruments we use in science than to these other elements of scientific inquiry, we will be able to adequately compare and assess them in light of this question. Before we get there, however, perhaps more needs to be said to illustrate what is at stake in such a debate. In order to do this, the following section explores yet another imaginary. This time, however, the visions and projections are of an instrument found not in the past but in a not-too-distant future.
1.2 The Electronic Oracle and Computer Simulations

In order to better understand the philosophical—particularly epistemological—significance of the historical imaginary depicted above, which will be described in more detail later in the book, we can contrast it with a scenario that seems to be its extreme opposite. Consider the following fictional scenario, first posited by John Symons and myself (2019) in our work regarding the inadequacy of epistemic entitlements for the sanctioning of novel technologies. This scenario, set somewhere in the not-too-distant future, involves a piece of equipment whose design, development, originally intended usage and inner processes are unknown and inaccessible to its users.

Imagine a civilization in which people had outsourced the task of predicting and controlling their natural and social environments to electronic black boxes with blinking lights. We can imagine a scenario in which the history of how these boxes
appeared was ultimately lost through the passage of time. Imagine, for example, the post-apocalyptic discovery of a semi-functional weather-forecast machine with a broken display. Let us say that the display can only show seven or eight vertical lines that change subtly in color every day. Unbeknownst to those who originally found it, these lines on the display represent each of the days of the week to come. Yet as far as those discovering the boxes know, these are just changing lights of an otherwise inscrutable representation. We can imagine similar 'boxes' with similarly minimal or only partially accessible displays that were made for other purposes (the position of planets, market fluctuations, etc.), the details of which are not available to this future civilization.[4]

[4] While the detail of a broken screen may make the point more obvious concerning the representational opacity of such 'black boxes', we can imagine a future in which even conventional representational depictions, such as graphs or cursive writing, are simply unintelligible to others. A picture of planet Earth from afar in the solar system, for example, may be nothing more than a strange pale blue dot to those in a plausible future cultural landscape in which such exploratory endeavors have been forgotten.

Reasonably, at first, people might have approached these boxes hesitantly, for they would have been unsure of whether the boxes could be trusted. After all, to them, they are just boxes with blinking lights or indecipherable color-changing displays. Now, we can imagine these boxes and their broken yet colorful monitors being brought to the center of this future community for exhibition as a curious remnant of some mysterious past. Some of the buttons attached to the boxes remained accessible to those visiting the exhibit, if only because of the colorful changes they brought on to the displays. Sometimes the colors would change when the buttons were pressed, sometimes they would not, and sometimes they would just disappear and reappear in a different order. Over time, some within the community began to figure out that some of the patterns in the displays correlated to phenomena around them. In the case of the weather-forecast machine, we can imagine that some lines in the displays would turn a darker shade of blue when rain or clouds were coming. They would take on a lighter tone when clearer days came. Eventually, almost everyone consulted these boxes and asked them questions about their environment with the expectation of receiving predictively accurate answers, while the inner workings of these machines remained a mystery. With some more time, those who trusted the boxes gained an advantage over those who did not, and hierarchies emerged. Importantly, in this scenario, when users attempted to inquire into the inner processes of the machines, they were unable to gain any further knowledge about them. Because some fundamental knowledge survived in such a society, users simply assumed that the inner workings of the boxes obeyed conventional physical laws and that they worked according to mathematical principles of computation. However, all this was the product of assumptions. It was simply part of a seemingly commonsensical convention and not something directly related to
the inner workings of the blinking boxes.[5] By now it should be clear that the boxes are, for our imagined community, for all intents and purposes, functioning as oracles, and that neither the details of the software engineering process that produced the oracles nor the computational processes behind their operations are accessible.

[5] Notice that the justificatory assumptions on the part of the first people to trust the oracle are not necessarily a theoretical component of their reliance on the oracle. Trust in the oracles could have emerged by default, accident or even superstition (see Skinner's 1948 study "'Superstition' in the Pigeon", in which pigeons continued to perform behaviors they equivocally associated with food rewards solely because the behavior and the reward distribution had coincided successfully in the past).

We can further imagine that a consensus was gradually reached that the boxes rarely made serious errors with respect to important matters such as the paths of hurricanes, the occurrence and development of epidemics, or the behavior of the economy. They simply worked the majority of the time, and that was all that was required of them by the users who benefitted from their predictive prowess: users posed questions to the machines, took past successes as evidence that future answers would be reliable, and granted the boxes significant influence in decision-making for matters such as policy and planning. At some point, social norms deemed it both unethical and irrational to ignore or not use the boxes in times of crisis. Thus, they were integrated into all matters of public and private interest in such a society.

That is the scenario. As noted earlier, this scenario is also an imaginary in that it provides us, just as the history of the telescope did, with a vision of the specific expectations concerning the technical, human, and institutional resources (i.e., methods, skills, including supportive socioeconomic conditions) developed and combined in such a world in order to effectively use such devices. It is also an imaginary in that it suggests a way in which things could be or go regarding the adoption of technical artifacts. Now three questions arise:

1. How different is this scenario from the scenario depicting the sanctioning processes of the introduction of the telescope into scientific inquiry?
2. How much does our current use of computer simulations resemble the oracle believers' use of their boxes?
3. More importantly, for our purposes in this book, which of the two scenarios should more closely resemble our adoption and use of computer simulations in scientific inquiry as we move forward into the future?

We can begin by addressing the first question. How different is this last scenario from the historical account above? As we can see, this latter scenario represents, in many ways, the opposite of the arduous and gradual adoption of the telescope. First, there is no principled understanding that speaks either to the functioning of the instrument or to the phenomena the instrument is meant to capture. This is precisely the kind of difference that Galileo highlighted when constructing his own instrument—and it is the kind of understanding that both he and, later, other practitioners had or sought: understanding about optics, astronomy and even lens crafting. Second, there are no detailed efforts to justify the results of the instrument by any means other than simple observation of predictive power, i.e., output
accuracy—or, more correctly put, strength of output correlation with some events in the world. There is also no testing of the instrument against theoretical assumptions or even against plausible confounding errors or biases in this scenario. Furthermore, because there are no tests, besides gradual observation of correlated patterns, there is also no testing of the tests and hence no effort to provide independent warrants that speak to the veracity of the instrument's findings or the reliability of the testing with which the instrument may be evaluated. The instrument is simply gradually trusted through a series of epistemic entitlements and output correlations and yet, importantly, still effectively used by this society as they leverage it against their predicaments.

At this point we can say, as French technologist Gilbert Simondon did (1958), that this is the way most technical artifacts are actually integrated into our everyday lives. In fact, he argued, this is the only way they can be fully integrated into our everyday lives.[6] The more knowledge about the inner workings of a technical artifact that is required, so the argument goes, the less widespread its use will be. Conversely, the more opaque a technology is, the more accessible it will be to an everyday user. In fact, this understanding of technical progress was already taking shape by the start of the twentieth century. Hence, to some, opacity was seen not as a bug to fix but rather as a realistic feature of technological progress. Whitehead, for example, thought that opacity was at the center of human flourishing since, according to him, "civilization advances by extending the number of important operations which we can perform without thinking about them" (1911, pp. 45–46).

[6] In saying this, Simondon is actually anticipating extremely similar observations made by yet another French technologist of greater fame, Bruno Latour (1990). Latour argued that technology is basically norms made into artifact and that often these norms continue operating long after they are explicitly recognizable.

Of course, to most of us, these descriptive accounts seem accurate. One has only to think of the advent of the personal computer to attest to the veracity of this sociotechnical dynamic. When it was no longer necessary to know how to assemble its distinct components, and the knowledge required to utilize it decreased, more people were able to integrate the personal computer into their everyday lives. This was, as twentieth-century marketing lore has it, the key visionary insight of Steve Jobs regarding the possibilities for widespread household use of the personal computer.

Nevertheless, there is a key difference between our discussion and Simondon's point, particularly as it relates to the context, the purpose and the use of the technology under consideration. While Simondon's observation seems to track the integration of technical artifacts into everyday or even industrial use, the issue is simply not the same when it comes to the sanctioning of technical artifacts for scientific inquiry. Why this is the case will become clearer by the end of the book; for the purposes of our current discussion, however, it suffices to say that alluding to the ways in which technical artifacts are integrated into everyday life says little about the ways in which technical artifacts are or, importantly, ought to be incorporated into scientific inquiry. This is in large part because, as we will see in later chapters, these are distinct epistemic contexts, the latter of which is, by design, constituted in virtue of a set of distinctly aspirational epistemic norms. Briefly and without getting
into much detail, one can immediately grasp the inadequacy of the comparison if one substitutes the set of norms at play in one context for those at play in the other and asks whether the nature of the tasks and the associated aims of inquiry in either context are thereby undermined. Can we require laboratory-grade precision, rigor and exhaustiveness in the kitchen and still get dinner ready efficiently? Can we accept the use of glassware or other tools in a laboratory with the same laxness we do in our kitchen and still maintain experimental control? While the first question is not as important in that it is merely a practical question, the second one is about the possibility of conceptual consistency. Can we still be said to be doing science without doing what science is epistemically distinguished for, without doing what science is meant to do? Besides the practical examples that make our two scenarios different from one another, this last conceptual and contextual difference will prove to be a key distinction in our discussion going forward.

Yet what about the second question in the list above? How much does our current use of computer simulations in science and industry resemble the oracle believers' use of their boxes with blinking lights? It is tempting to analogize the situation of the oracle-believers and our own relationship to computer simulations. At first glance it seems that in several important contemporary decision-making contexts we already depend heavily on systems that fit the profile of our imagined electronic oracles. As we will see later in this book, for example, Paul Humphreys described computational systems in general as epistemically opaque (2004). In the case of computer simulations in particular, Humphreys suggests that they are in fact essentially epistemically opaque (2009a, b). That is, they are opaque in a manner that may be insurmountable and in a way that is perhaps even independent of an epistemic agent's resources: independently of how much time, knowledge and money an agent has, for all practical purposes, some or all of the epistemically relevant elements of the system will remain inaccessible (Alvarado, 2021). Furthermore, given the complexity of modern software, even if agential resources were not the problem, in many cases of interest we cannot fully survey the operation of computer simulations, nor can we be sure that they operate correctly all the time (Symons & Horner, 2014; Horner & Symons, 2014).

More details will become apparent throughout the book as to why this is the case, but for now we can briefly identify three major challenges that are immediately evident. First, the exponential growth of the paths taken even by a medium-size piece of software to reach its results makes the process intractable and practically impossible to survey exhaustively in order to check its reliability (Horner & Symons, 2014); to get a rough sense of the scale, a program containing just 300 independent two-way branch points already admits up to 2^300 (roughly 10^90) distinct execution paths, more than the estimated number of atoms in the observable universe. Second, probabilistic models, as well as their stochastic results, which can be multiply realizable, cannot in principle be reverse-engineered to recover their actual causal path—i.e., several equally plausible paths could have been taken by the machine, such that the actual path taken to produce the results in question becomes in principle unknowable (Symons & Boschetti, 2013). And third, given the non-random, conditionally dependent, cascading nature of error in software, conventional statistical inference theory—which requires random distributions and statistical independence—could prove to be illegitimate when applied to error assessment in computational methods (Horner & Symons, 2014; Symons & Horner, 2017, 2019). As such, simply
sampling fragments of the code in complex software to derive rates of error, and hence to build a reliability assessment from these, is at best inadequate and at worst strongly misleading. Because of this, the opacity of computational methods, and of computer simulations in particular, may prove unique in its severity. Computational methods are, I suggest, unprecedentedly opaque not only because the details of their proper functioning are inaccessible, but because the nature and source of the error in them is inaccessible as well. In fact, with the increasing use of machine learning, the situation is likely to become more opaque rather than more transparent (Alvarado, 2020, 2021, 2022a).

What do these challenges tell us about the resemblance between our use of computer simulations and the attitude of the oracle believers in the second imaginary? If we are ignorant of the method by which certain simulations generated their results, then indeed it seems that we do not have a rational, or epistemic (Dretske, 2000), basis to believe such results. Furthermore, if we can only access the rate of error of a system but cannot truly access the nature and source of that error, can we ever claim such a system to be reliable? If we are talking about technologies that we use to help us with the acquisition of knowledge, then, given our discussion above, it seems that we cannot trust an epistemic technology that is epistemically opaque (Alvarado, 2022b). Additionally, as we briefly saw above and will see in more detail in later chapters, non-evidential warrants, such as epistemic entitlements—which would grant an epistemic agent a non-evidentiary right to accept an instrument as reliable and its results as believable simply on the basis of a chain of trust extending from the user to the disciplines of engineering, physics, and mathematics involved in the instrument's construction, devoid of any empirical evidence (Burge, 1993; Barberousse & Vorms, 2014; Duede, 2022)—seem extremely inadequate for scientific purposes. Hence, it is not at all unreasonable to think that we share a lot in common with the users of the electronic oracles described above. Given our discussion so far, it is not surprising that reliance on software-intensive systems in science has struck some thinkers as questionable and perhaps even unscientific (Newman, 2015).

And yet, there is one key distinction between the oracle imaginary and our current state of affairs: as recounted, the situation and the attitude of our electronic-oracle worshipers are settled in their future. They have incorporated the technology into their daily lives as is, and embraced, rather than fought, its opaque status as they go about their consultations. We, on the other hand, are not there just yet.

By now, mentioning the ubiquity of computational methods in the opening lines of academic works on the subject is deemed not just trivial but trite. Nevertheless, that computational methods are ubiquitous, both in science and in social contexts such as policy-making, is still worth mentioning—perhaps even more so now than ever before. For philosophers of computer simulation such as Eric Winsberg, for example, the events surrounding the start of the third decade of the twenty-first century, particularly the management of the COVID-19 pandemic, marked an unprecedented era in human history in which computer simulations played a previously unmatched role, through policy and scientific rhetoric, in the shaping of human lives. Limited and severely flawed computer simulations of the trajectory and spread of the virus from Imperial College London, for example, were at the center of decision-making in several countries
around the world (Winsberg & Harvard, 2022; Harvard et al., 2021). Chances are that, given the magnitude and timescales of phenomena such as climate change, as well as their relevance to contemporary policy-making endeavors, computer simulations will continue to heavily affect the way we live and govern for the coming decades. This is particularly the case as some of the modeling techniques common in the natural sciences begin to make their way into modeling endeavors that involve human behavior, such as the migration patterns of displaced individuals, the impact of allocating healthcare resources, etc.

In industry and other pragmatic settings, such as disaster response, computer simulations are used to forecast possible states of a system that we are concerned with. Mapping out the possible trajectories of a storm system via computer simulation, for example, is a critical part of how emergency responders strategize and prepare. Across a range of enterprises and institutions, computer simulations are deployed for assessing risks and opportunities. In a discipline like economics, for example, given its reliance on mathematical models and the complexity of controlled experimentation, it makes sense that computer simulations also figure as a central instrument. But they are also increasingly used in other social sciences, from psychology to sociology and anthropology. Furthermore, computer simulations are playing an increasingly substantial role in political and corporate decision-making processes.

Importantly, given the complexity of the phenomena that are often the subject of study in contemporary scientific inquiry, computer simulations happen to represent some of our best, and sometimes only, access to the world around us: consider the amount of time one would have to wait to empirically observe the development of galaxies or the path of evolutionary processes. Furthermore, computer simulations seem to actually furnish novel insights and epistemic content about the world around us, in part due to their necessary material constituency (Parker, 2009) and in part because of their capacity to compute beyond what humans are capable of. Consider the difficulty of observing the intricate and dynamic behavior of biochemical and subatomic phenomena: something other than the senses we are naturally equipped with is necessary.

Hence, computer simulations are not just in widespread use in modern science. They are in fact often indispensable to it, and have been so for a few decades now. In all these uses, computer simulations provide us with novel and important insights. In many of these instances, in fact, computer simulations are the best or only scientific tool we have at our disposal to approach a phenomenon. We cannot wait thousands or millions of years to empirically observe the evolution of distant galaxies. We cannot observe protein behavior that happened millions of years ago. With a few modeling assumptions and a lot of computational power, computer simulations can be the only approximate access we have to events and causes of extremely large or extremely small magnitudes and lengths of time. And yet, they remain an epistemic novelty (Humphreys, 2009a, b; Frigg & Reiss, 2009). That is, in spite of their widespread use, our philosophical understanding of the epistemic status of computer simulations is far from settled. Computer simulations are, after all, a fairly new addition to human inquiry. In this sense they are importantly different from the oracles described above.

When it comes to computer simulations, we are still perhaps in the same circumstances that telescope makers in
the time of the Accademia del Cimento were: still trying to figure out how and when to appropriately rely on such technology. And like them, we are still trying to figure out not just their reliability but also the ways in which we can best assess it and attest to it. We are, so to say, on the verge of choosing between imaginaries. This brings us to our third question: in sanctioning the use of computer simulations, will we, should we, be more like the instrument makers involved in the telescope races or like the oracle believers in our fictitious scenario?
1.3 The Philosophy of Computer Simulations

As we will see, this last question informs this book's overarching inquiry. Yet it is the exploration of ontological and epistemological issues regarding the nature and the understanding of computer simulations in the philosophical literature that will guide us through. For the most part, philosophers of science accept that computer simulations play some role in scientific inquiry. Many of the efforts of those who engage in this discussion go into formalizing this role, as we will learn in this book, within a very sedimented dichotomy in the philosophy of science. Following the extant literature, one can see that the question is often, for example, whether computer simulations ought to be understood as mere extensions of mathematical models or whether they should be understood as more closely related to empirical experiments. Computer simulations, it is often argued, ought to be like one or the other. These are, as we will see, the two main extremes of a dichotomy that has dominated the literature for the past quarter of a century. Sometimes such debate is presented in nuanced ways whilst still remaining within the dichotomy. Some, like Johannes Lenhard, for example, think that computer simulations are a special or new kind of mathematical modeling, yet a kind of mathematical modeling nonetheless. Others—such as Mary Morgan and Margaret Morrison—believe that computer simulations represent a special, new kind of experiment, but a kind of experiment nonetheless. Still others, like Peter Galison, think that computer simulations are something more like a pragmatic space in between both theoretical principles and experiments where multidisciplinary actors exchange and negotiate expertise. He calls computer simulations and their surrounding environment a 'trading zone' of practices and methods.

Still, as we will see, it is undoubtedly the literature on scientific models that has shaped the discussion of computer simulations—often, as I will argue, in ways that are unhelpful. Even in views such as Galison's—where the concept of computer simulations is broadened to the point where it includes the environmental factors related to the production of such artifacts—the reference points are inherited from older debates concerning the nature of models and the nature of experimentation in the philosophy of science. Because of the influence of these debates, one frequently cannot escape a dichotomy between these two positions in the philosophical discussion of computer simulation. On one side of the debate, computer simulations are seen as belonging to the abstract realm of elements of inquiry such as
mathematical/theoretical models. Under this view, insofar as computer simulations are used to provide solutions to mathematical models or to represent dynamic processes that are specified by theoretical principles and assumptions, their content and operations can be taken to be just electronic or mechanical extensions of these formal elements of inquiry. The other side of the debate, as I will show, is a direct response to the position just described. This second side of the debate, best exemplified by philosophers such as Eric Winsberg (2010), maintains that computer simulations are seldom a direct result of formal methods and that many additional non-theoretical and non-mathematical factors come into play when running a simulation successfully. Under this view, computer simulations are not merely model solvers. Those who advocate an understanding of computer simulations as 'something more than' formal methods, including philosophers such as Francesco Guala (2002) and Wendy Parker (2009), seek to provide a view of computer simulations as an element of scientific inquiry that is capable of generating new insights about the system being simulated. That is, computer simulations, under this view, are deemed capable of yielding information about their target phenomena beyond what is contained in the assumptions and input of a mathematical or theoretical model, i.e., a posteriori. Therefore, a division appears between those who inherited an understanding of computer simulations as a machine-implemented continuation of formal methods, and those who respond to this position by suggesting that we should place computer simulations on a par with—or close to—empirical practices such as scientific experiments.

Importantly, however, what this division and these debates show is that each of these options has significantly different implications. Hence, understanding whether computer simulations are more closely aligned with one side of the debate or with the other will yield different answers to important questions about their status in science. These debates show that understanding what computer simulations are is essential to understanding what role they can play in scientific inquiry and, importantly, how adequate they are for such a role. They also show that simply looking at computer simulations through their use and their pragmatic success will not suffice to provide a robust understanding of their nature, properties, promises and limitations. They show that knowing what computer simulations are is important. They show, in short, that an epistemology of computer simulations is not complete without a proper ontological account of what their nature is.

If computer simulations are not models or special types of models; if they are not experiments or special kinds of experiments; if they are not expertise 'trading zones' or practices such as medicine and engineering, then what exactly are they? The main aim of this book is to provide a decisive answer to this question. Computer simulations, I argue, are instruments. Hence, they ought to be understood as instruments in scientific inquiry. As a result, they ought to be treated and sanctioned as such. While it is true that computer simulations can implement scientific models, and it is also true that computer simulations involve and realize the many problem-solving techniques often ascribed to them, they are not the experimental techniques they carry out, nor are they the models they implement.
They are the instruments with which the tasks of experimentation and modeling are executed.
While it is obviously true that computer simulations involve hardware and software, and that these are human artifacts of a particular kind, by understanding them as scientific instruments we can move beyond the dichotomy of modeling and experimentation that has dominated the debate to date. Such an understanding also shows that there is an already existing branch of scientific inquiry that can accommodate computer simulations without having to postulate a sui generis, special kind of category for them: namely, the category of scientific instruments. Although this category has been historically neglected by the philosophy of science in general and the philosophy of modeling in particular, its essential place in scientific inquiry is nevertheless undeniable and its acknowledgment, as Davis Baird (2004) noted fifteen years ago, is long overdue.

Given the debate and long-standing dichotomy briefly described above, which I will treat in detail in the next chapters, treating computer simulations as scientific instruments is a non-trivial departure from existing positions in the philosophical literature on the subject. Nevertheless, through the course of this book, it will become apparent that this view is preferable to the existing alternatives. In particular, treating computer simulations as instruments will help us answer some central questions in the epistemology of computer simulation: What role do they play in science? What is unique to computer simulations vis-à-vis other elements of scientific inquiry? How and when should we trust them? Are they nevertheless novel or special vis-à-vis other practices and devices in the sciences? Can they still be considered as in-between theory and experiment? Accounting for computer simulations as instruments provides a framework that offers intuitive responses to these questions. At the same time, this view also offers a unificatory understanding of computer simulations which is compatible with much of the literature and which requires only one particular ontological commitment regarding their nature: understanding them as the instruments that they are.

Importantly, this ontological commitment is precisely that: an ontological commitment. Hence, the use of the term 'instrument' here is not meant to be part of an argument by analogy, as some philosophers have done (see Boge, 2021), and it is definitely not meant to be a metaphorical use of the term 'instrument'. Rather, it is an ontological claim about the nature of computer simulations as technical artifacts used in scientific inquiry, an ontological commitment about their materiality that can provide ample explanation of the ways we actually use computer simulations, the epistemic imports that they can bring to inquiry, the legitimate limitations of their functioning, and even the extent of their possible promise as genuine, sanctioned scientific instruments. Furthermore, as Baird (2004) would remind us and as I will argue throughout this book, as instruments, computer simulations themselves may both encapsulate knowledge and be a direct source of knowledge, independently of their relationship to theory or to experimental practices. This is why understanding them as such is central to any epistemology of computer simulations in science.
Hence, while an important part of the task of the epistemology of computer simulation is to explain the difference between the contemporary scientist’s position in relation to epistemically opaque methodologies and the oracle believers’ relation to their electronic black boxes in the story above, another task of the epistemology of computer
simulation as we move forward is to explain how, why and when computer simulations belong in scientific inquiry. This, as I will argue, is a non-trivial matter and will strongly depend on our adherence to certain aspirational norms in scientific practice that closely resemble the ones depicted in the first imaginary. This is particularly the case, as we shall see, since science is simply not an ordinary epistemic practice and is not guided by the same norms as everyday epistemic practices. In fact, whatever science is—and there is ample debate surrounding this important question—it is at the very least meant to overcome ordinary epistemic limitations. Through these considerations, this book aims to answer the third question above, namely: which of the two scenarios should more closely resemble our adoption and use of computer simulations in scientific inquiry as we move forward into the future?
References

Alvarado, R. (2020). Opacity, big data, artificial intelligence and machine learning in democratic processes. In Big data and democracy (p. 167). Edinburgh University Press.
Alvarado, R. (2021). Computer simulations as scientific instruments. Foundations of Science, 27, 1–23.
Alvarado, R. (2022a). Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Bioethics, 36(2), 121–133.
Alvarado, R. (2022b). What kind of trust does AI deserve, if any? AI and Ethics, 1–15. https://doi.org/10.1007/s43681-022-00224-x
Baird, D. (2004). Thing knowledge: A philosophy of scientific instruments. University of California Press.
Barberousse, A., & Vorms, M. (2014). About the warrants of computer-based empirical knowledge. Synthese, 191(15), 3595–3620.
Biagioli, M. (2010). How did Galileo develop his telescope? A "new" letter by Paolo Sarpi. In Origins of the telescope (pp. 203–230). Royal Netherlands Academy of Arts and Sciences.
Boge, F. J. (2021). Why trust a simulation? Models, parameters, and robustness in simulation-infected experiments. British Journal for the Philosophy of Science, 75. https://doi.org/10.1086/716542
Burge, T. (1993). Content preservation. The Philosophical Review, 102(4), 457–488.
Drake, S. (1984). Galileo, Kepler, and phases of Venus. Journal for the History of Astronomy, 15(3), 198–208.
Dretske, F. (2000). Entitlement: Epistemic rights without epistemic duties? Philosophy and Phenomenological Research, 60(3), 591–606.
Duede, E. (2022). Deep learning opacity in scientific discovery. arXiv preprint arXiv:2206.00520.
Frigg, R., & Reiss, J. (2009). The philosophy of simulation: Hot new issues or same old stew? Synthese, 169(3), 593–613.
Guala, F. (2002). Models, simulations, and experiments. In Model-based reasoning: Science, technology, values (pp. 59–74). Springer US.
Harvard, S., Winsberg, E., Symons, J., & Adibi, A. (2021). Value judgments in a COVID-19 vaccination model: A case study in the need for public involvement in health-oriented modelling. Social Science & Medicine, 286, 114323.
Horner, J., & Symons, J. (2014). Reply to Angius and Primiero on software intensive science. Philosophy & Technology, 27(3), 491–494.
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.
Humphreys, P. (2009a). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
Humphreys, P. (2009b). Network epistemology. Episteme, 6(2), 221–229.
King, H. C. (1955). The history of the telescope. Dover/Griffin.
King, H. C. (2003). The history of the telescope. Courier Corporation.
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Latour, B. (1990). Technology is society made durable. The Sociological Review, 38(S1), 103–131.
Lenhard, J. (2019). Calculated surprises: A philosophy of computer simulation. Oxford University Press.
Leonelli, S. (2021). Data science in times of pan(dem)ic. Harvard Data Science Review.
Malet, A. (2003). Kepler and the telescope. Annals of Science, 60(2), 107–136.
Morrison, M. (2015). Reconstructing reality. Oxford University Press.
Newman, J. (2015). Epistemic opacity, confirmation holism and technical debt: Computer simulation in the light of empirical software engineering. In International conference on history and philosophy of computing (pp. 256–272). Springer.
Parker, W. S. (2009). Does matter really matter? Computer simulations, experiments, and materiality. Synthese, 169(3), 483–496.
Skinner, B. F. (1948). 'Superstition' in the pigeon. Journal of Experimental Psychology, 38(2), 168.
Symons, J., & Boschetti, F. (2013). How computational models predict the behavior of complex systems. Foundations of Science, 18(4), 809–821.
Symons, J., & Horner, J. (2014). Software intensive science. Philosophy & Technology, 27(3), 461–477.
Symons, J., & Horner, J. (2017). Software error as a limit to inquiry for finite agents: Challenges for the post-human scientist. In T. Powers (Ed.), Philosophy and computing: Essays in epistemology, philosophy of mind, logic, and ethics (Philosophical Studies Series, Vol. 128, pp. 85–97). Springer.
Symons, J., & Horner, J. (2019). Why there is no general solution to the problem of software verification. Foundations of Science, 25, 1–17.
Van Helden, A. (1994). Telescopes and authority from Galileo to Cassini. Osiris, 9, 8–29.
Warner, D. J. (1990). What is a scientific instrument, when did it become one, and why? The British Journal for the History of Science, 23(1), 83–93.
Weisberg, M. (2012). Simulation and similarity: Using models to understand the world. Oxford University Press.
Whitehead, A. N. (1911). An introduction to mathematics. Courier Dover Publications.
Winsberg, E. (2010). Science in the age of computer simulation. University of Chicago Press.
Winsberg, E., & Harvard, S. (2022). Purposes and duties in scientific modelling. Journal of Epidemiology & Community Health, 76(5), 512–517.
Zik, Y. (1999). Galileo and the telescope: The status of theoretical and practical knowledge and techniques of measurement and experimentation in the development of the instrument. Nuncius, 14, 31–69.
Zik, Y. (2001). Science and instruments: The telescope as a scientific instrument at the beginning of the seventeenth century. Perspectives on Science, 9(3), 259–284.
Zik, Y., & Hon, G. (2017). History of science and science combined: Solving a historical problem in optics – The case of Galileo and his telescope. Archive for History of Exact Sciences, 71, 337–344.
Chapter 2
Computer Simulations in Science
As we will see below, a main function of a computer simulation is to allow us to trace, track, map or predict possible states of a system. Through iterations and a sequential incorporation of these states, computer simulations track the temporal, spatial or numeric development of a system of interest. Oftentimes this process results in a visual product. When this happens, a simulation is deemed to be the graphic display of a system's dynamic character. Other times the simulation yields purely numeric outcomes that can be later graphed and compared. Though a visual component can later be tacked onto these simulations for presentation purposes, a researcher can 'see' the development of a system through the simulation process by following the numeric results alone.

In other simulations, the visual aspect of the dynamic character of a system is the starting point of the simulation. That is, as I will explain below, rather than attempting to elicit the behavior of a system from the solutions of a mathematical model, some simulations start from inferences and hypothetical rules that can mimic the visual character of a system in order to gain a more thorough understanding of its mathematical dynamics. By identifying coarse-grained details in a system, such as agents of interest and observed spatial behavior, a computer simulation can be put together without a purely formal (explicitly mathematical) component or a fully integrated theoretical understanding of the phenomenon being simulated. Examples of this type of simulation are early flock behavior simulations. In order to mimic flock behavior in birds, these simulations were programmed by giving particular agents (birds) on a grid simple update rules. When computed, these rules prompt complex spatial patterns, resembling flock behavior, from a set of agents (Reynolds, 1987). In contrast to other simulation techniques, there are no theoretically principled equations at play and no equations to solve.

Below I offer a brief description of the two most commonly known kinds of computer simulations. I also provide a short description of the role of supercomputers in the implementation of computer simulations. When it comes to the simulation
of complex phenomena, supercomputers are often used. This more recent development is important because, as we will see in later sections, the complexity of the facilities in which these computers are housed and maintained as well as the number of experts and components necessary for their use in computer simulations are at the basis of arguments that resist the understanding of computer simulation as a mere device and hence as an instrument. Overall, the following short discussions and descriptions are meant to provide the reader with an understanding of what computer simulations are, what they do, and how they do it. These brief descriptions are also meant to provide a conceptual framework with which to compare and contrast the different kinds of computer simulations to the different interpretative approaches to simulation in the philosophical literature that will follow in later chapters of this book. More importantly, following our brief discussion above, these descriptions aim to reveal a view of computer simulations as the computational instruments they are by elucidating what they do in scientific inquiry, how they do it, and with what aims they do so.
2.1 Equation-Based Simulations

Equation-based simulations are the product of several transformations of complex mathematical procedures into something that can be machine implemented and displayed in a humanly intelligible manner. That is, they are meant to provide a more accessible way to determine the changes in a system than the fully explicit and complete series of numerical steps that it would take to solve a long series of mathematical operations. An early motivation to use a computer simulation, for example, was to render intractable mathematical problems in somewhat understandable terms for the engineers and mathematicians trying to solve them (Humphreys, 2004). Hence, the process of solving equations often culminates in computational devices also turning the solutions to these equations into interpretable visual dynamics, such as moving pictures, that are meant to represent the distinct and discrete states of a system of interest (Gehring, 2017).

Importantly, computational systems implement numerical solutions under a critical set of constraints. Strictly speaking, for example, computers do not perform division. Rather, they perform a set of addition and subtraction processes that approximates the mathematical function of division. One of the first steps in making a machine that solves an equation is to craft specifications for such an equivalent process given constraints of this kind. That is, any multiplication or division is turned into a sequence of addition and subtraction processes that yield equivalent or approximate results. These mathematical equivalences are then translated into code that can specify the instructions for such processes to a machine. Hence my use of the term 'transformations' in the first line of the paragraph above.
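The kind of reduction at issue can be illustrated with a minimal sketch. The following Python fragment is an illustration of mine, not a description of how any actual processor implements arithmetic; real hardware uses binary shift-and-subtract circuitry rather than a loop like this one:

```python
def divide_by_subtraction(dividend, divisor):
    """Toy integer division built from subtraction and addition alone.

    Illustrates how a 'higher' arithmetic operation can be specified
    as a sequence of more primitive ones. Assumes non-negative inputs.
    """
    if divisor <= 0 or dividend < 0:
        raise ValueError("toy example assumes dividend >= 0 and divisor > 0")
    quotient, remainder = 0, dividend
    while remainder >= divisor:
        remainder -= divisor   # repeated subtraction stands in for division
        quotient += 1          # repeated addition accumulates the result
    return quotient, remainder

print(divide_by_subtraction(17, 5))   # -> (3, 2), i.e., 17 = 3 * 5 + 2
```

The point of the sketch is only that the operation named in the mathematical model and the operations actually specified to the machine need not be the same.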
Further transformations include the fact that solutions of this kind are necessarily discretized. That is, the computational processes provide solutions within a specified parameter and then iterate the process for sequential stages. So, rather than provide a full solution to a continuous equation, for example, the process provides solutions to segments of the equation at each run and then uses the output of the previous step to calculate the next. When a computer solves an equation for the motion of a planet, for example, rather than providing a complete solution to an equation that represents a continuous line (or, more precisely, an ellipse), the computer provides solutions at multiple points along a grid. Hence the term 'discrete'. In the case of an equation of this kind, which is supposed to simulate a natural phenomenon, the parameters indicating the number of iterations necessary to simulate the phenomenon are provided by comparison with empirical data and not by the original equation or the theoretical principles behind it (Petersen, 2012).

Although the term 'equation-based' is supposed to be self-explanatory, these simulations are not mere equation solvers.1 Rather, they are devices which include a broad and motley (Winsberg, 2010) set of processes capable of yielding results/values that are close enough to those arrived at via an equation. These discretized mathematical solutions are translated into a series of logical steps—a computational model of the mathematical model—that can be coded into a computer language so that a computational device can carry out the simulation. They are specified as instructions to the machine on how best to achieve the operations that can yield the results sought after in the discretized mathematical model. As we will see in later chapters, the processes and procedures that are necessary to render the mathematical or theoretical model into a discretized model, then into a programming language, and then into an interpretable form (such as visuals) are filled with practical ad hoc solutions and with extra-mathematical considerations. These considerations often include engineering constraints that do not follow any of the theoretical principles embedded in the original equations or models.
1 It is important to note that even some equation-based simulations are not, strictly speaking, the product of straightforward solutions to differential equations. Some phenomena, such as scramjets, are of such particular complexity that they require fine-grained numerical simulations as well as simulations of disruptive flow patterns within the larger simulations. These simulations are elements added to the Navier-Stokes equations conventionally thought to be sufficient to characterize fluid dynamics (Durán, 2018, p. 18).
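The discretization described above can also be made concrete with a deliberately simplified sketch. The fragment below, again an illustration of mine rather than code drawn from any actual scientific simulation, steps a two-dimensional orbit forward with the explicit Euler method: instead of solving the continuous equations of motion, it computes discrete states, each step feeding on the output of the previous one:

```python
import math

# Toy Euler integration of a body orbiting a central mass.
# All constants are illustrative; nothing here is calibrated
# against empirical data.
GM = 1.0            # gravitational parameter (arbitrary units)
x, y = 1.0, 0.0     # initial position
vx, vy = 0.0, 1.0   # initial velocity (yields a roughly circular orbit)
dt = 0.001          # the discretization step: a 'grid' in time

for step in range(10_000):
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # gravitational acceleration
    # Explicit Euler: the next state is computed from the previous one.
    x, y = x + vx * dt, y + vy * dt
    vx, vy = vx + ax * dt, vy + ay * dt

print(f"position after 10,000 steps: ({x:.3f}, {y:.3f})")
```

Note that the step size dt and the number of iterations are imposed on the mathematics from the outside: as noted above, in practice such parameters are tuned against empirical data rather than derived from the original equation.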
2.2 Agent-Based Simulations

Although the most common interpretations of computer simulation are directly linked to some mathematical procedure, there are other simulation methods that instead follow general rules of behavior and not, strictly speaking, mathematical specifications. Agent-based simulations, for example, do not generally solve the equations of fully developed mathematical models. Rather, these simulations are set to mimic the behavior of a target phenomenon through the implementation of explicit rules by which the various agents defined in the system are supposed to behave. Understanding of a system is therefore sought by 'playing out' hypothesized dynamics and observing the behavioral patterns of a system, and not by seeking solutions to a theoretically principled equation.

A commonly cited early example of agent-based simulation is the Schelling (1971) model of segregation, in which individual agents—dots of one color or another—are placed on a grid and assigned preferential dispositions that dictate whether they stay or move in the next iteration. In Schelling's particular simulations the agents were meant to choose their position relative to other agents near them in a simulated neighborhood made of neighbors with similar and dissimilar tendencies and properties. After a few iterations in which these rules played out and the 'agents'—marked squares on a grid—moved, patterns could be discerned and interpreted as a model of segregation based on simple preference rules of affinity and vicinity. That is, based on the affinity of marked squares and their relative closeness to other marked squares on the grid, the general spatial organization of these squares would either form clusters of neighboring similar squares, disappear, or move towards a general direction in which similar 'neighbors' could be found.

A more technical, yet useful, description of these kinds of simulations can be found in Symons (2008). Following the work of Hu and Ru (2003), Symons offers the following description of a general cellular automata model as a quintuple {Cells, Cell Space, Cell State, Neighborhoods, Rules} (Symons, 2008, p. 478), in which:

cells are the basic objects or elements of the CA each having some individual state depending on the rules of the CA. Cell space is defined as the set of all cells and their values at some time. Neighbors are the set of cells surrounding any center cell and rules are the transition functions of cell states, mapping cell spaces to cell spaces. The rules of the CA are defined as being maximally general with respect to the cells in the model and the application of rules updates each cell synchronically.
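A minimal sketch of a Schelling-style update rule may help fix ideas. The grid size, tolerance threshold and update scheme below are simplifications of mine, chosen for brevity, and are not Schelling's original specification:

```python
import random

SIZE, EMPTY, TOLERANCE = 20, 0.2, 0.5   # illustrative parameters

# Each cell holds 'A', 'B', or None (empty).
grid = [[None if random.random() < EMPTY else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """An agent is unhappy if fewer than TOLERANCE of its occupied
    neighbors share its type: a simple preference rule of affinity."""
    agent = grid[r][c]
    neighbors = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    return bool(occupied) and \
        sum(n == agent for n in occupied) / len(occupied) < TOLERANCE

for _ in range(50):   # iterations in which the rules 'play out'
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] is None]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and unhappy(r, c) and empties:
                nr, nc = empties.pop(random.randrange(len(empties)))
                grid[nr][nc], grid[r][c] = grid[r][c], None
                empties.append((r, c))

for row in grid:   # crude text display: clusters of As and Bs emerge
    print("".join(cell or "." for cell in row))
```

Even in this toy version there is no equation being solved: clustering, when it emerges, is the product of nothing more than the iterated application of the preference rule.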
Today, many agent-based simulation approaches are hybrid in the sense that, while each of the agents active in the simulation is merely responsive to a series of rules, these rules are themselves derived from theoretically principled equations (Dubucs, 2002). Most of the following discussion will directly address computer simulations that are closer to the equation-based type described in the previous section. Yet, the general point concerning their epistemic standing and their role as the instruments with which the implementation of abstract specifications is made possible applies to both types of computer simulations.
2.3 Supercomputing and Simulations

At this point, it is important to note a few details about the engineering components (hardware, connectivity, optimization and compatibility efforts) that underlie large-scale modern simulation projects of complex phenomena such as weather. This is because, as the phenomena being simulated grow in complexity, it is not always possible to consolidate the simulation process in a single computational device. Simulations of large-scale phenomena require the work of multiple computational processes, but also, often, the work of multiple full simulations of diverse phenomena. In this section I briefly describe the hardware architecture often required to carry out simulations of large and/or complex phenomena or systems. This description is meant to further elucidate the tangible constraints as well as the artifactual nature of computer simulations. As I will show further below, this description also points towards the fact that complex devices, which are constituted by a multiplicity of other devices and require a broad range of experts to carry out a unified function, can also be singled out as single independent artifacts in and of themselves (Simoulin, 2017).2

Carrying out simulations of weather prediction systems is often an endeavor of such complexity and scale that multiple computing devices, coordinated to execute the necessary operations, are required. In large projects, these coordinated devices are grouped together to form computer clusters. In even larger simulation projects—which, given the amounts of data available in today's scientific landscape, are more and more common—these computer clusters are linked to other computer clusters. This is in part because the amount of computational procedures required to render these processes intelligible to researchers in a reasonable time is too large for any one computer. Parallel computational processes allow researchers to compute, but also to track, say in a visual manner, the dynamics of a system within a reasonable time and within reasonable efficiency constraints. The interconnected clusters of networked devices that make this possible are called supercomputers.

These supercomputing clusters introduce a layer of additional complexity regarding not only the number and variety of computing components but also the combination of multiple simulations into one set of consolidated outputs. The simulation in such cases is the implementation of all of the elements above in a supercomputing infrastructure. Computer simulations of weather or climate, for example, are the result of 'stitching' together simulations of more specific atmospheric phenomena such as cloud formation, air and water current patterns, etc. (Norton & Suppe, 2001; Winsberg, 2010). Consider the following description of the elements required to follow the detailed dynamics of a climate system (Petersen, 2012):
2 In essence, and in virtue of their imperative properties, algorithms and software specifications are also artifacts (Turner, 2018) and/or are an integral part of the artifactual nature of computer simulations. This is something I explore elsewhere. In this book the hardware standpoint suffices for my view to work.
Comprehensive climate models are based on physical laws represented by mathematical equations that are solved using a three-dimensional grid over the globe. For climate simulation, the major components of the climate system must be represented in sub-models (atmosphere, ocean, land surface, cryosphere and biosphere), along with the processes that go on within and between them. … Global climate models in which the atmosphere and ocean components have been coupled together are also known as Atmosphere–Ocean General Circulation Models (AOGCMs). In the atmospheric module, for example, equations are solved that describe the large-scale evolution of momentum [of atmospheric 'particles', acp], heat and moisture. Similar equations are solved for the ocean. Currently, the resolution of the atmospheric part of a typical model is about 250 km in the horizontal and about 1 km in the vertical above the boundary layer. The resolution of a typical ocean model is about 200 to 400 m in the vertical, with a horizontal resolution of about 125 to 250 km. Equations are typically solved for every half hour of a model integration [the time-step in the model is half an hour, acp]. Many physical processes, such as those related to clouds or ocean convection, take place on much smaller spatial scales than the model grid and therefore cannot be modelled and resolved explicitly. Their average effects are approximately included in a simple way by taking advantage of physically based relationships with the larger-scale variables. This technique is known as parametrization. (IPCC, 2001; Petersen, 2012)
What is interesting in this quote is that while it mentions all the different types of systems, as well as their associated models and the equations that need to be solved for the models to yield useful results at each level of the simulation—from ocean currents to cloud dynamics—little is said about how each of the results is incorporated into the whole to provide a "comprehensive climate model." However, it is precisely at this stage of incorporating results that much of interest happens besides the calculation of numerical results or the solving of equations.

It is at the incorporating stage that much extra-mathematical and extra-theoretical work is needed (Winsberg, 2010, 2019). Although equations are at the core of the simulation processes, the results of each computation still have to be coupled together and/or extrapolated for use at lower or higher levels of the modeling process. This fact alone makes earlier philosophical approaches to computer simulations, and in particular the definitions provided by these approaches, somewhat incomplete. It is true that an important part of computer simulation includes the solving of equations or the formal requirements of computable specifications, as we saw above. However, this is only part of the story. Simulations of both regional weather patterns and global trends in large systems incorporate daily terabytes of data produced and processed by ocean models, land models and ice surface models of planet earth. They also take into consideration parameterized information from cloud formation models, temperature models and past and future climate simulations. As we will see below, the fact that large-scale complex computer simulations are constituted by multiple and diverse components and processes, and are put together through the efforts of similarly motley expert communities, will serve as the basis for arguments not just against narrow definitions of computer simulations, but also against the idea that 'instrument' is the adequate term to capture the nature of computer simulations.
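The bare logic of this 'stitching' can be sketched schematically. The fragment below is a toy of mine: real coupled climate models exchange far richer state through dedicated coupler software, and the update rules here are placeholders rather than physics:

```python
# Schematic coupling loop: two toy sub-models exchange state each step.

def step_atmosphere(temp_air, temp_sea, dt):
    # the atmosphere relaxes quickly toward the sea-surface temperature
    return temp_air + 0.1 * (temp_sea - temp_air) * dt

def step_ocean(temp_sea, temp_air, dt):
    # the ocean responds far more slowly to the atmosphere above it
    return temp_sea + 0.01 * (temp_air - temp_sea) * dt

temp_air, temp_sea = 10.0, 15.0
dt = 0.5   # cf. the fixed time step in the quoted description

for step in range(100):
    # Each sub-model is advanced separately; only then are the results
    # coupled, the output of one becoming boundary input to the other.
    new_air = step_atmosphere(temp_air, temp_sea, dt)
    new_sea = step_ocean(temp_sea, temp_air, dt)
    temp_air, temp_sea = new_air, new_sea

print(f"coupled state after 100 steps: air={temp_air:.2f}, sea={temp_sea:.2f}")
```

The philosophical point survives the simplification: nothing in either sub-model's own equations dictates how the coupling itself is to be carried out; that is a further, extra-theoretical decision.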
It is precisely this tension regarding exactly what counts as a relevant epistemic and material constituent of computer simulations that, as we will see in the next chapter, has given rise to the central epistemological debates about their role in scientific inquiry.
2.4 Machine Learning and Simulations

Computer simulations are in principle quite distinct from more recent data science technologies such as machine learning. They function through very different principles, have different properties and are often applied to different kinds of problems. In particular, equation-based computer simulations, as mentioned earlier, are frequently employed in situations where meticulously collected data and theoretical principles related to causal connections are presumed to be established. This approach aims to provide a robust mathematical framework for modeling the dynamics of a simulated system. Machine learning techniques, on the other hand, are often used when such causal relationships are not present or known. Furthermore, rather than solving for an already-established, theoretically principled mathematical model, as computer simulations do, machine learning algorithms are often used to explore plausible models with which to understand the trends and patterns within a data set.

Nevertheless, recent developments in both practices have led to systems that integrate the two methods and apply them to computational problems. After all, at a high-enough level of abstraction both methods can be said to have a similar aim: to predict the possible future states of a system through computational means (Von Rueden et al., 2020; Symons & Boschetti, 2013). This integration happens in at least three ways: machine learning-assisted computer simulations, computer simulation-assisted machine learning, and cases in which the assistance goes both ways. Given the extra layer of challenges that the use of machine learning in computer simulation represents—such as challenges related to opacity, reproducibility, etc. (Resch & Kaminski, 2019)—it is important for the purposes of our discussion to have at least a basic understanding of the elementary features of these integration efforts.

Let us begin by looking at machine learning-assisted computer simulations. In these integrations the role of machine learning can come at many different levels and stages of the production pipeline of computer simulations. This integration can happen, for example, at the model generation stage, where scientists are still working out the details of the best possible model to represent the dynamics of a system. In other words, machine learning algorithms can be used "with the intention to support the solution process or to detect patterns in the simulation data" (Von Rueden et al., 2020, p. 554). This use can provide researchers with more efficient models in cases where an existing solution is too costly in terms of computing resources, or it can suggest several alternative ways of solving for the patterns in the data.
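The surrogate-model idea behind such uses can be sketched in miniature. The example below is illustrative only; real surrogate modeling uses far more sophisticated learners and sampling schemes (see Cozad et al., 2014), and the 'expensive simulation' here is a stand-in function of mine:

```python
import math
import random
import numpy as np

def expensive_simulation(x):
    """Stand-in for a computationally costly simulation component."""
    return math.sin(3 * x) * math.exp(-x)

# 1. Run the costly simulation at a modest number of sampled inputs.
xs = np.array([random.uniform(0.0, 2.0) for _ in range(40)])
ys = np.array([expensive_simulation(x) for x in xs])

# 2. Fit a cheap surrogate: here a least-squares polynomial, though in
#    practice a neural network or Gaussian process might be used instead.
coeffs = np.polyfit(xs, ys, deg=5)

# 3. Query the surrogate where re-running the simulation would be costly.
x_new = 1.3
print("surrogate prediction:", np.polyval(coeffs, x_new))
print("full simulation:     ", expensive_simulation(x_new))
```

The trade-off the sketch makes visible is the one at issue in the text: the surrogate is cheaper to evaluate, but everything it 'knows' comes from the finite sample of simulation runs used to train it.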
Machine learning algorithms can analyze the data, extract certain patterns and hence suggest possible models with which to solve for, i.e., simulate, approximate values in the dynamic behavior of a system. Machine learning algorithms can also provide a less resource-intensive way of solving certain parts of a larger model. Neural networks, for example, can be deployed to solve some partial differential equations and can thereby replace more resource-intensive elements of a broader simulation model.3 Though some in the literature also suggest that machine learning can be used for broader contexts such as scientific insight and discovery (Roscher et al., 2020), a lot depends on the assumption that such technology can be made explainable—that is, on the degree to which machine learning and the processes by which it yields results can be made accessible to researchers. This, as we will see later in this book, is an assumption that has little to no grounds in the state of the art of machine learning.4 So, while machine learning may be deployed to facilitate certain aspects of computing processes and some initial exploratory research considerations, it remains to be seen to what extent such methods can be legitimately used for deeper aspects of scientific inquiry.

Computer simulation-assisted machine learning, on the other hand, fulfills a different, perhaps more epistemic, function. While the uses of computer simulation to assist machine learning algorithms are as diverse as the uses of machine learning in computer simulations, and can also be found at multiple stages of the production pipeline, one of the main and most straightforward uses of computer simulations in machine learning is as an additional source of information for an algorithm to learn from. In this sense, the simulation can provide a machine learning algorithm with both extra data and, often, better data to train on. Ideally, if produced via an established simulation procedure, this data will be generated from rich and proven sources in the sciences. That is, the data will come from a theoretically informed source and from a more carefully curated process. An example of this is the use of physics engines—computer simulations that approximate the dynamics of physical systems such as fluids, gases and solids, often used to generate physically realistic scenarios—for a machine learning algorithm to learn from, for purposes such as autonomous driving. For image recognition systems this approach can also prove useful, although there are some limitations, since photo-realistic simulators that "can generate large-scale automatically labeled synthetic data" can also "introduce a domain gap negatively impacting performance" (Lee et al., 2018). Another use of this kind of integration is for simulations to test machine learning outputs. If a machine learning algorithm suggests a possible material compound for a given engineering project, a simulation can explore its soundness, viability or material implications under different constraints.

And finally, although some of the literature mentions them, when it comes to genuinely hybrid systems—in which both machine learning and simulation methods and technologies are simultaneously at work—the state of the art is in its infancy and such systems are, for now, rather non-existent. There are, however, accounts of how they may work in the future (Von Rueden et al., 2020). The idea is that such systems, called learning simulation engines, would combine computer simulations and machine learning algorithms in such a way as to "automatically decide when and where to apply surrogate models [cost-efficient alternatives to computationally heavy simulations] or high-fidelity simulations" (ibid). Issues surrounding when and where such an approach would be both viable and desirable are incredibly complex even in contexts in which established scientific practices and practitioners are present. Hence, the viability and reliability of such an approach in scientific inquiry remains in the domain of speculation (ibid, p. 557).

3 For a thorough account of the ways in which these surrogate models can be used in computer simulations, see Von Rueden et al. (2019, 2020, 2021) as well as Cozad et al. (2014).

4 Much of the discussion of explainable methods in machine learning takes it for granted that some post hoc accounts of the possible trajectory of a system count as genuine explanations of the processes by which such a system yields particular results. The arguments behind this widespread approach to explainability often hinge on the suggestion that it should suffice given that many explanations of human decision-making seem to be equally formulated, i.e., in a post hoc manner. This is highly contentious at both the psychological and the normative level. In particular, even if it were the case that humans do provide such problematic accounts to each other, and that sometimes there is nothing we can do but accept them, such occurrences say nothing about whether or not we can, should or do accept this type of reasoning in scientific inquiry or from scientific devices and processes. Furthermore, so far, the opacity of machine learning methods, particularly deep neural networks, is such that there is nothing close to explainable AI.
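The data-generation use of simulation described above can likewise be sketched in miniature. The 'physics engine' and learner below are toys of my own devising; real pipelines pair photo-realistic simulators with deep networks:

```python
import math
import random

def simulate_landing_distance(v0, angle_deg):
    """Toy projectile 'physics engine': launch speed and angle in,
    landing distance out. A stand-in for a real simulator."""
    g = 9.81
    return v0**2 * math.sin(2 * math.radians(angle_deg)) / g

# 1. The simulation generates labeled synthetic data...
dataset = []
for _ in range(1000):
    v0, angle = random.uniform(5, 50), random.uniform(10, 80)
    label = simulate_landing_distance(v0, angle) > 100.0   # 'long shot'?
    dataset.append(((v0, angle), label))

# 2. ...from which a (deliberately crude) learner is trained: a simple
#    threshold on launch speed, standing in for a real learned model.
best_t, best_acc = None, 0.0
for t in range(5, 51):
    acc = sum((v0 > t) == label for (v0, _), label in dataset) / len(dataset)
    if acc > best_acc:
        best_t, best_acc = t, acc

print(f"learned rule: v0 > {best_t} (training accuracy {best_acc:.2f})")
```

Even this crude pipeline exhibits the feature the text emphasizes: every label the learner sees is only as good as the simulation that produced it.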
References

Cozad, A., Sahinidis, N. V., & Miller, D. C. (2014). Learning surrogate models for simulation-based optimization. AIChE Journal, 60(6), 2211–2227.
Dubucs, J. (2002). Simulations et modélisations. Pour La Science, 300, 156–158.
Durán, J. M. (2018). Computer simulations in science and engineering. Springer.
Gehring, P. (2017). Doing research on simulation sciences? Questioning methodologies and disciplinarities. In The science and art of simulation I (pp. 9–21). Springer.
Hu, R., & Ru, X. (2003). Differential equation and cellular automata models. In IEEE proceedings of the international conference on robotics, intelligent systems and signal processing (pp. 1047–1051). IEEE.
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.
IPCC. (2001). Climate change 2001: Synthesis report. In Contribution of working groups I, II, and III to the third assessment report of the intergovernmental panel on climate change. Cambridge University Press.
Lee, K. H., Ros, G., Li, J., & Gaidon, A. (2018). SPIGAN: Privileged adversarial learning from simulation. arXiv preprint arXiv:1810.03756.
Norton, S., & Suppe, F. (2001). Why atmospheric modeling is good science. In Changing the atmosphere: Expert knowledge and environmental governance (pp. 67–105).
Petersen, A. C. (2012). Simulating nature: A philosophical study of computer-simulation uncertainties and their role in climate science and policy advice. Chapman & Hall/CRC.
Resch, M., & Kaminski, A. (2019). The epistemic importance of technology in computer simulation and machine learning. Minds and Machines, 29, 9–17.
Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. Computer Graphics, 21(4), 25–34. ACM.
Roscher, R., Bohn, B., Duarte, M. F., & Garcke, J. (2020). Explainable machine learning for scientific insights and discoveries. IEEE Access, 8, 42200–42216.
Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143–186.
Simoulin, V. (2017). An instrument can hide many others: Or how multiple instruments grow into a polymorphic instrumentation. Social Science Information, 56(3), 416–433.
Symons, J. (2008). Computational models of emergent properties. Minds and Machines, 18(4), 475–491.
Symons, J., & Boschetti, F. (2013). How computational models predict the behavior of complex systems. Foundations of Science, 18(4), 809–821.
Turner, R. (2018). Computational artifacts: Towards a philosophy of computer science. Springer.
Von Rueden, L., Mayer, S., Garcke, J., Bauckhage, C., & Schuecker, J. (2019). Informed machine learning – Towards a taxonomy of explicit integration of knowledge into machine learning. Learning, 18, 19–20.
Von Rueden, L., Mayer, S., Sifa, R., Bauckhage, C., & Garcke, J. (2020). Combining machine learning and simulation to a hybrid modelling approach: Current and future directions. In Advances in intelligent data analysis XVIII: 18th international symposium on intelligent data analysis, IDA 2020, Konstanz, Germany, April 27–29, 2020, proceedings (pp. 548–560). Springer International Publishing.
Von Rueden, L., Mayer, S., Beckh, K., Georgiev, B., Giesselbach, S., Heese, R., et al. (2021). Informed machine learning – A taxonomy and survey of integrating prior knowledge into learning systems. IEEE Transactions on Knowledge and Data Engineering, 35(1), 614–633.
Winsberg, E. (2010). Science in the age of computer simulation. University of Chicago Press.
Winsberg, E. (2019). Computer simulations in science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2015 edition). http://plato.stanford.edu/archives/sum2015/entries/simulations-science/. Accessed 20 Dec 2018.
Chapter 3
The Rise of a Dichotomy
The importance of the thesis of this book—that computer simulations are and ought to be treated as instruments—can be easily understated without first having an overview, a genealogy if you wish, of the other things computer simulations have been compared and contrasted to since their advent. As we saw in the introduction to this book, philosophers of science trying to make sense of the nature and role of this new technology have been comparing it to other elements of scientific inquiry. Conventional views of computer simulations are situated in a landscape marked by two major reference points: either computer simulations are like mathematical and scientific models, or they are like experiments. The influence of this dichotomy can be found all over the spectrum of positions in the contemporary philosophical debate on computer simulations. Some of the reference points in this literature are inherited from older philosophical debates concerning the nature and role of models in science (Frigg & Reiss, 2009). For some, as we saw above, computer simulations are a set of problem-solving techniques (Hartmann, 1996; Durán, 2018). They are just model and equation solvers. To others, computer simulations are a new form of experimentation that encompasses broad and heterogeneous practices capable of enriching empirical understanding of the world (Winsberg, 2010). This dichotomy can be visualized by plotting each approach along a line whose opposing extremes are the two reference points:

Models <------------------------------------> Experiments
As we will see, many approaches are sophisticated enough not to go all the way to either extreme. Nevertheless, the respective assumptions and implications of each approach will place it nearer to one extreme or the other. Trying to get away from this dominant dichotomy is not easy. Part of it involves understanding what these views entail, what challenges they pose to the instrument thesis of computer simulations, and clearly delineating how computer simulations
are a distinct thing which these conventional views are incapable of capturing. This is the main aim of this chapter: to elucidate the intuitions and assumptions behind these views in order to best understand why the thesis of this book is in such direct contrast to them. Hence, this chapter provides a broad overview of the conceptual landscape in which computer simulations have been positioned throughout the years. While some critical work will be done at this stage, the main criticism of the limitations of these conventional approaches will become clearer as we move along and as the instrument thesis of computer simulations, almost on its own, gradually emerges as a natural alternative to this dichotomous narrative.
3.1 Establishing the Dichotomy

In order to best understand the breadth of the spectrum that these conventional views represent, let us first take a look at the views that interpret the formal aspects of computer simulation as direct implementations of models, and those that interpret them as special kinds of models. Looking at the spectrum line drawn above, these views can be found closer to the left side of the line. Thinking of computer simulations as either models or special kinds of models is understandable. Computer simulations started mainly as programmable solvers for dynamic models, some of which were too hard to solve by hand (Humphreys, 2004). So, first and foremost, according to some of the views that strongly associate computer simulations with these early uses, a computer simulation's main task is to manipulate formal content such as numeric values. These operations are in turn specified through equations, and these equations belong to dynamic models abstracted from the quantified behavior of a target system. Hence, we can understand that, under this view, the operations computer simulations undertake to yield results, and the content which they manipulate to yield these results, are in many ways simply mathematical in nature. Secondly, even when these numerical values are embedded in manipulations of higher-level complexity—i.e., operations that go beyond simple arithmetic procedures and involve complicated equations and dynamic assumptions—these manipulations are, rightly, understood to be highly formal and abstract in themselves. Accordingly, early definitions of computer simulations reflected this.1

What is interesting about the adoption of definitions of computer simulations that solely reference their formal characteristics is not just the question of whether they are accurate. Another interesting aspect of these definitions is the kind of assumptions
1 See Hartmann (1996) and Humphreys (2004), which will be treated in detail in Chap. 4 of this book, for examples of these definitions.
and implications that they bring with them concerning the ultimate nature of computer simulations. There are ontological commitments, for example, regarding the nature of abstract objects that we bring along when we understand computer simulations as subsets of mathematical models or as akin to equations. Consider, in particular, that abstract objects need not be instantiated in any material sense. As Durán (2018) points out, abstract objects are simply not subject to change: there is, for example, no wearing down of an equation due to use. Furthermore, a model or an equation remains a model or an equation even when you do not run it, even if you do not go through the trouble of solving or implementing it. You could write it or draw it on a napkin, specify its dynamic properties, point at it and call it a model or an equation without doing anything else to it or with it. Commitments to these kinds of properties of abstract objects get extrapolated onto our understanding of computer simulations when we see them as such.

Weisberg (2012), for example, suggests that computer simulations are simply a subset of mathematical models and that their implementation in a computer is simply a matter of ease of calculation. Most contemporary mathematical models, he says, are explored using computers simply "because computers are excellent at doing mathematical calculations" (2012, p. 31). Similarly, given that computer simulations are sometimes understood as special cases of modeling, some variations of these views suggest they need not be instantiated in a computer at all. In a similar vein, Grüne-Yanoff, relying on Humphreys' early work on the subject, defines simulations as "any dynamic model that represents a target and that is solved through some temporal process" (2017). Along with this definition, and after acknowledging that most simulations have a representational function, he goes on to echo Weisberg's observation and say that although many are implemented on a computer, "this is not an essential property" of them (p. 86). That is, their implementation is only a contingent element.

One may note that perhaps the problem here is that these authors are putting the cart before the horse in deeming computer simulations identical (or very closely related in nature and import) to models. One can say, for example, that Weisberg is not talking about computer simulations per se. Rather, he is talking about mathematical models. Mathematical models, he suggests, even the ones that are computational to a certain extent—such as Schelling's segregation model—may be explored without the use of a computer. And this may very well be the case, but then we are not talking about computer simulations; and if so, then views such as those of Grüne-Yanoff are simply begging the question by equating computer simulations with models. It is in instances such as these that we can see that some of these views are far from being able to take into consideration the machinery and engineering involved in the process of creating a computer simulation. They do not have the conceptual framework to admit or account for such features and are therefore evidently far removed from the instrument view of computer simulations that I will be advocating in this book.
As we will see, it is not clear that the assumptions made by the views described above make sense when we are dealing with computer simulations. It is evident, one can say, that when we are dealing with the multiple and diverse components of computer simulations, there is something more than mere calculation or formal abstraction. If machinery is involved, then what we are talking about is simply not an abstract model or an equation. This machinery may very well be an implementation of an equation or a model, but even then, there is obviously something more involved.

This is the case even when one considers views such as Stephan Hartmann's (Bruce, 1999; Hartmann, 1996), whose definition of computer simulation has served for decades as the starting point for many narrow accounts of computer simulation. Narrow accounts of computer simulations are those that, according to Durán (2018), see computer simulations merely as equation solvers. For Hartmann, computer simulations—even as solvers—can be understood as a series of methodological procedures followed by practitioners; they can offer representational insight towards future inquiry; and they can also act as substitute experiments in cases where both the understanding of a phenomenon and the data are closely related to numerical values. They can function as substitutes for experiments, Hartmann claims, in cases in which the only access to the phenomena is through indirect formal means, as in the case of astronomy. This more nuanced account of computer simulations is able to accommodate some of the motley features briefly discussed above. This account also points to a more complex view of computational methods than the purely mathematical view. Hence, some within this latter camp have come to view computer simulations more as enriched implemented models and less like conventional mathematical models. Under this latter view, computer simulations are not only the result of simple arithmetic operations. Besides addition and subtraction, they can also encompass more sophisticated operations related to calculus and statistical analysis. This interpretation moves the balance towards a richer understanding of computer simulations and gets the discussion away from their simplistic characterization as mere "number crunchers" while still maintaining a commitment to what some deem their fundamental formal character. Hence, it is still within the camp that considers simulations to be mainly solvers. They are just richer kinds of solvers.

On the opposite extreme of the debate, the side of those that advocate a broader definition of computer simulation, several positions arose that take computer simulations and the practices associated with them to be more akin to scientific experimentation. Under this view, although computer simulations often include, use or are constituted by the numerical tools exemplified in their origins, they are a few steps removed from these first uses. Advocates of this view understand simulations as more of a patchwork of diverse processes by which we seek to recreate quantifiable states of a system. These diverse processes often include measurement practices, representational features, and the developing and testing of hypotheses and theories. Eric Winsberg, for example, suggests that computer simulations are more like "comprehensive methods to study systems" (2010). These methods, through a series of transformations by motley processes, have to be implemented. This implementation
process includes choosing a model, finding a way of implementing that model in a form that can be run on a computer, calculating the output of the algorithm, and visualizing and studying the resultant data, amongst other things. According to this view, the method of computer simulation encompasses this entire process as well as the procedures used to sanction the inferences made with such processes. He calls these processes 'simulation studies' and describes them in the following way:

Successful simulation studies do more than compute numbers. They make use of a variety of techniques to draw inferences from these numbers. Simulations make creative use of calculational techniques that can only be motivated extra-mathematically and extra-theoretically (Winsberg, 2019).
By taking into account these extra elements of the practice of computer simulation, Winsberg sheds light on its more complex aspects. Just like experiments, each computer simulation is different from every other, and its epistemic import must be independently assessed. In a supercomputer, for example, the implementation of computer simulations comprises several steps that integrate components and procedures of many different kinds and at many different levels. This process of integration is often described as a simulation pipeline, in which researchers start by identifying a phenomenon to be simulated and then gradually move from the formalization of a model (mathematical/computational/logical) of that phenomenon to its implementation on a computing architecture capable of distributing the work in an appropriate manner.2

In practice, this process involves arduous collaboration between scientists, engineers, and programmers. This is particularly the case when the phenomenon in question is something as complex as the variation of wind and sea currents in large portions of an ocean. As we saw in the previous chapter, not only is a mathematical model not sufficient by itself to establish a simulation process; rather, there are several mathematical models related to distinct phenomena that must be integrated. Furthermore, none of the elements of the integration remain untouched by the conversation, i.e., the negotiations, between experts. Engineers handling the computing machinery have to adapt the architecture for the distribution of the necessary computing processes associated with the model. Programmers have to ensure the code is written in a way and in a language that is compatible with such architecture; but even after this is done, the code still has to be "optimized" for efficiency and other considerations. Often, in this process, the numerical elements of the computational model that discretize the original continuous equations of the mathematical model also have to be adapted to ensure that the approximate results are still achievable under the logical configuration of the architectural distribution of computing power. They have to be optimized. That is, they have to be reconfigured, recoded, to best suit the distribution strategy used to compute the many and diverse elements of the simulation.
2 The representational heuristic of the simulation pipeline will be treated in more detail in a later section of this chapter.
Furthermore, the description above also signals that in simulations of this scale each of the computational processes is the result of a completely different model, which in turn signals the possibility of there being different assumptions built into each step. There are also distinct machines operating at distinct levels of computation. A supercomputer may not be needed for smaller-scale dynamics, but it is definitely needed to consolidate the distinct models and results into a single simulation. Hence, simulations are in fact composed of an amalgam of functions, methods, practices, and constraints, which often include practical extra-theoretical and ad hoc engineering and programming techniques to overcome these constraints (Winsberg, 2010). Solving numerical problems, that is, providing approximate discrete values that correspond to continuous equations, is but one function of the simulation. Computer simulations of complex phenomena incorporate very informal elements in their procedures. As Eric Winsberg puts it, computer simulations are made with the indispensable help of “assumptions about what parameters to include or neglect, rules of thumb about how to overcome computational difficulties – what model assumptions to use, what differencing scheme to employ, what symmetries to exploit – graphical techniques for visualizing data, and techniques for comparing and calibrating simulation results to known experimental and observational data” (2010, p. 45). These practices are integrating efforts that fall well beyond both the scientific theory and the mathematical model of what researchers are trying to simulate. In other words, these practices incorporate extra-mathematical and extra-theoretical elements. In these extra-theoretical elements computer simulations seem to be more like experiments: their performance must be sanctioned independently of the formal methods they implement (Winsberg, 2010; Symons & Alvarado, 2019); they ought to be understood as trials of both the model assumptions and the robustness of observed values; and they require a physical implementation (Humphreys, 1994; Norton & Suppe, 2001; Parker, 2009). So, here we have two views of computer simulations: as models and as experiments. Due to the influence of these debates, philosophers of computer simulation like Durán (2018) and Winsberg (2019) identify two different viewpoints on computer simulations: the narrow and the broad views of computer simulation. The narrow view, as explained above, understands computer simulations in terms of their formal and mathematical elements alone. The broad view accepts that extra-mathematical and extra-theoretical considerations, processes, and components enter the picture. Hence a dichotomy arose in the discourse related to the nature and role of computer simulations in science.
3.2 Somewhere In-between Model and Experiment
It is important to note, however, that the debate does not stop with the simple dichotomy between simulations as models and simulations as experiments. As has likely already become evident given our previous introductory discussion,
computer simulations are not easily boxed into the categories conventionally discussed in the literature on the epistemology of science: they do not easily fit in the theoretical toolbox, nor do they fit in the empirical toolbox. They are not exactly like experiments, and they are not exactly like mathematical models. In his book Extending Ourselves (2004), Paul Humphreys lists several excerpts from “internal commentators” that explicitly signal this ‘in-betweenness’ of computer simulations and the epistemic impact of this ambiguity on scientific methodology. Here, I will quote some of these remarks in full because they represent rich examples of the views in question. The first remark below, which is also quoted by Winsberg in his book Science in the Age of Computer Simulation (2010, p. 39), addresses this ‘in-betweenness’ in a manner that is a bit less contentious, in that the author takes care to qualify the statement by noting that computer simulations may not be the only methodology that elicits this ambiguous feature in scientific inquiry: Computer simulation provides (though not exclusively) a qualitatively new and different methodology for the physical sciences, and […] this methodology lies somewhere intermediate between traditional theoretical physical science and its empirical methods of experimentation and observation. (Rohrlich, 1990, p. 507 as cited in Humphreys, 2004, p. 51)
Still within the context of physics, Rohrlich further thought that computer simulation “involves theoretical model experimentation in a qualitatively new and interesting way” (Rohrlich, 1990, p. 507, as cited by Keller, 2003, p. 200). Here is yet another quote, from Physics Today (1984), used by Humphreys to speak to this in-betweenness of computer simulations: Science is undergoing a structural transition from two broad methodologies to three—namely from experimental and theoretical science to include the additional category of computational and informational science. (Nieuwpoort, 1985 as quoted in Humphreys, 2004, p. 51; italics are mine)
And lastly, the following third comment casually, yet boldly, reflects the ingrained neglect of the role of instrumentation in scientific inquiry throughout the history and philosophy of science: For nearly four centuries, science has been progressing primarily through the application of two distinct methodologies: experiment and theory… The development of digital computers has transformed the pursuit of science because it has given rise to a third methodology: computer simulations. (Kaufmann & Smarr, 1993, p. 4 as quoted in Humphreys, 2004, p. 51)
In short, by the start of the twenty-first century, the notion that computer simulations are something in between two major elements of scientific inquiry, and that this status had the power of leading certain sciences such as physics into “a place paradoxically dislocated from the traditional reality that borrowed from both experimental and theoretical domains […] a netherland that was at once nowhere and everywhere on the methodological map” (Galison, 1996, p. 120), had become familiar and was “generally taken to be uncontroversial” (Keller, 2003).
Humphreys notes that all these characterizations, as they stand, may prove too strong, and he criticizes them accordingly.3,4 Winsberg too is critical of some of the implications of the views expressed in the quotes. In particular, Winsberg sees little philosophical benefit in this characterization of computer simulations as ‘in-between.’ Yet he deems this characterization to be “a natural perspective for both historians and sociologists” (p. 40) and a good starting point for analyzing the role and epistemic status of computer simulations in science. This is because the dichotomy can have social and rhetorical value as we try to understand the many and diverse inputs that are required for a computer simulation to exist. As it stands, however, the unresolved dichotomy offers little insight, according to Winsberg. What is needed, he argues, is a clearer account of what is meant by ‘theory’ and what is meant by ‘experimentation’ in order to fully grasp the role and place of computer simulations and their relationship to either. In particular, Winsberg argues that once we are clear on what is meant by either of these two concepts, it becomes evident that computer simulations share more in common with experimentation than with conventional accounts of modeling.
3 He mentions three possible objections against the supposed sui generis nature of computer simulations in these claims: (1) one could say that computer simulations are just extensions of numerical methods already in existence; therefore, they are not an entirely additional methodology, as suggested by the first claim; (2) computational techniques do not qualify as a method apart because they do not directly access empirical content; if so, then it is hard to account for what exactly their epistemic import in scientific inquiry is; and (3) examining the reliability of computational methods faces challenges serious enough that they may prevent such methods from ever meeting the requirements of rigor in scientific settings. Hence, even if computer simulations were to qualify as an additional category of scientific inquiry, it is highly questionable that they would be on a par with existing methodologies.
4 Importantly, the question of whether or not simulations in fact represent any novel epistemic challenges gave rise to an important debate about the nature and role of computer simulations in science. In particular, Frigg and Reiss (2009) wrote an important rebuttal of views, such as those expressed by Humphreys, which implied that computer simulations were something new: in their view, not only did simulations not represent the advent of a new methodological paradigm, as some of the quotes above suggest, but, philosophically speaking, they also did not pose any new epistemically interesting questions. And if they posed any epistemically interesting questions at all, these were questions that could be readily addressed by, or whose context could already be found in, the rich literature in philosophy of science concerning the role of models. It is within this debate on the possible epistemic novelty of computer simulations that important questions and insights began to emerge regarding the close relationship of computer simulations with models. Those with the intuition that there was indeed something epistemically particular, and therefore interesting, about the use of computer simulations in science responded to Frigg and Reiss. Amongst them was Humphreys’ own reply (2009a, b), in which he states that, at the very least, computer simulations do in fact represent novel epistemic challenges due to the unprecedented epistemic opacity of their processes. While other methods may be generally opaque, Humphreys argues, computer simulations are essentially epistemically opaque.
In contrast, but in a similar vein, Lenhard’s (2019) view of computer simulations is on the side of those who understand them to be a special, and novel, kind of modeling technique, and not simply ‘in between’ simpliciter. Nevertheless, as we have seen, the strength of the original dichotomy—that there are only two possible positions to take and that these are relative to the only two elements of scientific inquiry—is such that even positing an in-betweenness pushes philosophers of simulation like Morrison (2015) and Lenhard (2019) to suggest alternatives that are still within the spectrum drawn at the start of this chapter. If computer simulations are something, they must be either like models, like experiments, or like something in between. When the differences of computer simulations relative to either side become too obvious to ignore, a few argumentative strategies emerge in the literature. As we will examine in detail, some of these strategies postulate computer simulations as a novel and sui generis way of doing science that does not belong to either category. However, the most conservative, and hence most widespread, amongst them still entails carving out a special or novel status for simulations within one of the two extremes.5 Hence, the dichotomy is perpetuated. That is, while it may be problematic to identify computer simulations as belonging strictly to the category of mathematical models or to that of empirical experiments, perhaps it would be less so—or so the argumentative strategy goes—if we consider them special cases of either. Hence, philosophers like Johannes Lenhard (2019), for example, suggest not only that the debate concerning the role of models in science is the appropriate framework within which to understand computer simulations but also that simulation “is a new type of mathematical modeling”. Similarly, Margaret Morrison (2015), following her analysis of conceptual models as mediators (Morgan et al., 1999), suggests that we think of computer simulation practices as special kinds of experiments: namely, experiments in measurement.6 The next section begins by examining these latter, more nuanced views.
5 As we will see, another argumentative strategy is to acknowledge the distinctness of computer simulations from either category and to push the discourse to its natural conclusion: accepting that computer simulations must in fact be a strange and novel element in scientific inquiry altogether. A sui generis way of doing science. In other words, by suggesting that computer simulations are neither exactly identical to formal methods like mathematical modeling nor identical to established experimental practices, philosophers and practitioners have tried to make sense of their epistemic role in scientific inquiry by suggesting that computer simulations are, epistemologically speaking, somewhere in between experiment and theory because they are completely novel additions to science, or because they are a distinct kind of science or a distinct kind of scientific practice. While there is much to be said about these other strategies, let us consider the first argumentative strategy first.
6 As we will also see, the view that computer simulations are instruments does not belong to any of these strategies. Yes, the instrument view accepts that computer simulations do not belong to either category; yet it does not deem them to be special cases of either. And no, it does not deem them to be a novel sui generis element of scientific inquiry either.
3.3 Experimental Models, Modeling Experiments and Simulating Science
Consider the view of Johannes Lenhard (2007). In trying to account for this ‘in-betweenness’ of computer simulations, Lenhard acknowledges that using them “makes it necessary to run through a process of repeated reciprocal comparisons between experiment and model” (2007, p. 182). These comparisons—a process of calibration and validation—are, according to Lenhard’s view, a manifestation of the bridge-like nature of computer simulations. As bridges between model and experiment, they automate the process of running multiple tests on the possible fit between models and empirical observations. Lenhard puts it the following way: “one could say that the traditional hypothetical-deductive testing of model assumptions is transformed into a quasi-empirical process. Hence, simulation introduces a new methodology of modeling that opens up a specific perspective on how models manage to be both autonomous and mediators” (2007, p. 182). What Lenhard is trying to point out to us—by echoing Morgan et al.’s (1999) Models as Mediators—is that computer simulations are a way of testing many different models and seeing which ones fare better vis-à-vis our theoretical or observational assumptions. The simulations are autonomous because they automate the process of testing. They are mediators because they stand between our theoretical values and the models with which we test them. Hence, the mediating relationship that Lenhard is alluding to does not run in a single direction from theory to model to simulation to empirical observation. In some instances of simulation, in fact, such as those used for early atmospheric dynamics, it is the exploration of the simulation model—in what Lenhard calls a quasi-empirical way—that makes it possible to construct a theoretical model of the system of interest in the first place. In other words, sometimes the direction goes the other way: from simulating many possible models, we can arrive at a theoretical model that best fits the phenomenon we are interested in. Hence, Lenhard suggests, the reason why computer simulations are mediators is that they allow model formation to take place through a hypothetical analysis: construct a model, run it (and/or a myriad of variations of it) in a simulation, compare the model’s attributes against the known features of the target system, fine-tune, and then repeat. Lenhard uses this description to explain the peculiar—and possibly novel—role of computer simulations as a method of inquiry. Computer simulations are experiments on possible theoretical models and, in some successful instances, allow us to understand the model and the target system by automating the testing of one against the other. Hence, they are in between both model and experiment. And hence, he concludes, they are special types of modeling. They are a new, experimental way of modeling.
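Lenhard’s quasi-empirical cycle can be pictured schematically. The following sketch (in Python; the toy growth model, the ‘observed’ values, and the tuning rule are all invented for illustration, not Lenhard’s own example) automates the construct, run, compare, fine-tune, repeat loop just described:

```python
import random

# Hypothetical observed features of the target system (illustrative values).
observed = [2.0, 4.1, 7.9, 16.2]

def run_simulation(growth_rate, steps=4, initial=1.0):
    """Toy simulation model: iterate a simple growth rule."""
    state, trajectory = initial, []
    for _ in range(steps):
        state *= growth_rate
        trajectory.append(state)
    return trajectory

def misfit(simulated, observed):
    """Compare the model's output against known features of the target."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

# Construct a model variation, run it, compare, fine-tune, repeat.
best_rate, best_score = None, float("inf")
for _ in range(1000):
    candidate = random.uniform(1.0, 3.0)                 # construct a variation
    score = misfit(run_simulation(candidate), observed)  # run and compare
    if score < best_score:                               # fine-tune: keep the better fit
        best_rate, best_score = candidate, score

print(f"best-fitting growth rate: {best_rate:.3f} (misfit {best_score:.3f})")
```

The point of the sketch is only structural: the ‘experiment’ here is run on candidate models, not on the world, which is exactly the sense in which Lenhard’s testing is quasi-empirical.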
This is not the only way to go for this type of strategy. Just like one can carve out a special status for simulations on the side of modeling, one can also do something similar on the side of experiments.7 Consider the following. Margaret Morrison (2015) suggests that we think of computer simulation practices as requiring both a computational method and a mathematical model of a target phenomenon. According to her, we can understand the computer simulation as computing a mathematical model’s specifications but also as carrying out the task of empirically analyzing the behavior of the implemented model. The target of our investigation, in this view, is not a phenomenon out there in the world, but rather the model of that phenomenon or, more precisely, the properties of the many variations and iterations of the model as they unfold through the iterated simulation processes. Consequently, according to her, models are experimental tools that can function as measuring instruments. For her, this last point explains the in-betweenness, the mediator aspect, of computer simulations. They are the kind of device that, just like a measuring instrument, is strongly reliant on theoretical values whose internal cohesion can be investigated by the manipulation of numerical values alone, without the need for direct external causal manipulation of a phenomenon. After all, it is not likely, for example, that we will soon intervene in the workings of a star or a similarly unreachable/impenetrable system, and yet, through measurements of the kind just described, we are sufficiently confident that researchers are managing some sort of inquiry into such systems. While this may sound similar to Lenhard’s view, Morrison’s conclusion is not that the practice of simulation is a special kind of modeling. Rather, she suggests, computer simulations are like special kinds of experiments—measurement experiments, to be more precise—in which we only deal with theoretical values. Computer simulations are like these experiments, she says, because simulation “allows us to create the kind of controlled environment where one can vary initial conditions, values of parameters, and so on” (2015, p. 219) and lets us test how these variations map onto known principles and dynamics of a target model. To her, what we are doing when it comes to simulation is rather like the ‘manual’ measuring practices of comparing theoretical values such as initial conditions, system parameters, etc. Experiments of this kind, she says, are often found in particle physics. For example, though one cannot have immediate access to subatomic phenomena, one can draw inferences from the comparison of theoretical values produced by the quantified and modeled detection of quantitative properties that come from the observation of indirectly related phenomena. This is what happens when we use a cloud chamber to infer the presence of subatomic particles.
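It may help to fix ideas with a toy illustration of an ‘experiment’ conducted entirely on theoretical values, the kind of activity at issue here. The sketch below (in Python) uses a pendulum model; the choice of model and every numerical value in it are my own illustrative assumptions, not Morrison’s example. We vary an initial condition and ‘measure’ a property of the model itself, without any causal intervention in the world:

```python
import math

def simulate_pendulum(theta0, length=1.0, g=9.81, dt=1e-4, t_max=10.0):
    """Integrate the full (nonlinear) pendulum equation and 'measure'
    the oscillation period from the simulated trajectory."""
    theta, omega, t = theta0, 0.0, 0.0
    prev_theta, crossings = theta, []
    while t < t_max:
        omega -= (g / length) * math.sin(theta) * dt  # semi-implicit Euler step
        theta += omega * dt
        t += dt
        if prev_theta < 0.0 <= theta:  # upward zero crossing, once per period
            crossings.append(t)
        prev_theta = theta
    return crossings[-1] - crossings[-2] if len(crossings) >= 2 else float("nan")

# 'Experiment': vary the initial condition (a theoretical value) and compare
# the measured period against the small-angle prediction T = 2*pi*sqrt(L/g).
predicted = 2 * math.pi * math.sqrt(1.0 / 9.81)
for theta0 in (0.1, 0.5, 1.0, 2.0):
    measured = simulate_pendulum(theta0)
    print(f"theta0 = {theta0:.1f} rad: measured T = {measured:.3f} s "
          f"(small-angle prediction: {predicted:.3f} s)")
```

The deviation of the measured period from the small-angle prediction at large amplitudes is the kind of model-directed, value-comparison insight that such ‘measurement experiments’ are meant to deliver.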
7 In fact, it is precisely the fact that computer simulations require the kind of calibration or tuning that Lenhard mentions that makes Florian Boge (2021) think that computer simulations are more like precision instruments. This view is in many ways like that of Margaret Morrison explained below; in particular, it deploys a metaphorical use of the term ‘instrument’ that makes few ontological commitments, and it is ultimately deflationary vis-à-vis both precision and instrumentation (Morrison, 2015).
Despite the fact that these measuring practices do not interact directly with the target phenomenon, they can still yield insights into the models with which we understand the world, and therefore they can be regarded as special types of experiments. Importantly, she argues, while there may be a direct detection exercise by means of an apparatus, much of the experimental part is carried out when values and parameters are examined formally via models and equations. This is intuitive when one considers large-scale, temporally separated, and rare events, like astronomical phenomena. While observations about a star’s behavior may come directly from an event perceived through a telescope, for example, much of the inference and explanation related to such an event happens once these observations have been quantified and their dynamics specified in equations that can be integrated into large-scale theoretical frameworks. Once these values are integrated and both the assumptions and the dynamics specified, they can be changed, played with—in short, experimented upon. In this sense, the experiments are value comparisons. Eran Tal (2012) suggests a similar turn for the concept of accuracy in metrology. A couple of centuries ago, the accuracy of a measuring device was compared to a real-world instance of such a measure. If one wanted to know whether the meter-long ruler used in instances that required precision was indeed a meter long, one way of testing this was to compare it with one of the few vetted and controlled instances of a meticulously standardized meter. With time, however, the real-world reference faded into the background, and the measure of accuracy took on a statistical nature involving probabilities and differences relative not to a real-world instance but to expected outcomes, changes, and averages; in short, it came to involve the manipulation, comparison, and adjustment of highly theoretical values. As an example of similar processes, Morrison uses the manipulation of physical models by nineteenth-century British scientists to determine features of, or quantities in, electromagnetic fields, which led to the modification of Maxwell’s equations. This is important because it signals that experiments which are focused on theoretical values and/or principled elements of inquiry can and do provide useful knowledge that in turn can be used to fine-tune either the empirical experiments (conventionally understood) or the theories upon which these are based. If this is the case, and this is also what computer simulations do, then there is no reason to think that computer simulations are epistemically inferior to these established ‘experimental’ practices. Hence, according to Morrison, computer simulations are just like measuring practices, particularly those carried out in high-energy physics and other branches of science in which the analysis of a phenomenon comprises many indirect formal steps. And, if so, they can be understood as experiments. All of the above may sound rather promising for the instrument view that I will be arguing for here. After all, Baird (2004), whose views have strongly influenced the framework advocated in this book and which we will survey in more detail below, also understands models as instruments in their own right. However, closer inspection reveals that terms such as ‘device’ or ‘measuring instrument’ are used by Morrison as mere placeholders for yet another account of abstract methods of inquiry, and not to refer to any aspect of the necessary material implementation of such methods.
These terms, in other words, are being used mainly as metaphors. This is in
sharp contrast to what the instrument view of computer simulations is designed to do. Consider, for example, that Morrison’s strategy for explaining the seemingly ambiguous epistemic role of computer simulations in inquiry is a deflationary one. That is, Morrison’s view deflates empirical practices to suggest that they are nothing more than the manipulation of theoretical values. So, it is not so much that she is raising computer simulations up to the status of experimental practice, and hence closer to the realm of instruments. Rather, she is choosing to focus on a broader and more abstract notion of experiments, one which includes processes such as measurement. Morrison’s influential move also includes analogies to instrumentation, particularly measuring instruments, as she tries to make sense of this in-betweenness of computer simulations and their capacity to provide information that is not solely contained in the principled assumptions of mathematical models. However, her choice of scientific experimentation as the analogy for computer simulations is ultimately a deflationary one: for Morrison, it is not so much that computer simulations are like conventional empirical experiments, but rather that many special cases of measuring experiments are like computer simulations in that they merely manipulate formalized theoretical values. Hence, when Morrison uses the analogy of measuring instrumentation to understand computer simulations, her view of instruments is also somewhat deflationary. In particular, she deems abstract models that deal with abstract content to be a kind of measuring instrument. If this is the case, then it is not so much that simulations are like conventional instruments, but rather that some instruments, mainly models, are like computer simulations. If we call these abstract manipulations of abstract content ‘experiments’ or ‘instruments’, and computer simulations are engaged in the same type of activity in the same ways, then we have no good reason not to deem computer simulations either experiments or instruments in this sense, so the argument goes. This argument has several layers of relevance to our discussion because of what Morrison is trying to do with it. One important aspect of her project is to provide a broad conceptual framework that synthesizes the two intuitions at the center of the dichotomy we have been exploring concerning the epistemic role and import of computer simulations in science: the first intuition is that there are epistemically relevant formal and theoretical (i.e., abstract) elements of simulation that ought always to be taken into consideration; the second is that computer simulations are able to provide genuine, and genuinely novel (i.e., a posteriori), insights about target phenomena beyond the implications of their formal elements. That is, in simulation, models can prove to be empirically insightful. This is in obvious contrast to views of simulation that simply deny this possibility. But it is also in contrast to views that seek to explain such a possibility in virtue of the materiality associated with the physical implementation of computer simulations (Durán, 2018). While Parker (2009), for example, suggests that experimental practices and computer simulations are empirically on a par because they both share the same kind of materiality, namely their necessary physical implementation, Morrison, on the other hand, thinks that if they are in any sense on a par, it is because some
relevant, widespread, and well-established experimental practices take the form of modeling, model comparisons, and the manipulation of theoretical values. It is important to note that Morrison is seeking an alternative way to understand computer simulations as epistemically on a par with experimental practices (Durán, 2018, p. 64), but she is particularly interested in one that goes beyond simply positing their material instantiation. This is because one obvious objection to views such as Parker’s is that the fact that a computer simulation must be physically instantiated does not necessarily say much about the target system being simulated. As Herbert Simon (1969) suggests, it is possible that, as a necessarily physical artifact,8 a simulation may become the object of study in such a way as to provide empirical insights into its own workings. Yet this does not mean that the same system can therefore also provide empirically sound insights about anything else. However, note that once we understand that the kind of experimental practice Morrison is comparing computer simulations to is a deflationary one, her view corresponds more closely to the narrow views of computer simulations than to the broad views we saw earlier. In other words, it may not only be that Morrison’s view is that computer simulations are a special kind of experiment and that the experiments she is referring to are of a special kind; rather, it may well be that Morrison’s view ultimately collapses into the view that computer simulations are models, or special kinds of modeling techniques that deal exclusively with formal considerations. In order to better understand these issues, consider the following. For Morrison, simulation models are basically “the result of applying a particular kind of discretization to the theoretical/mathematical model” (2015, p. 219). The computer program allows us, according to her, to “investigate the evolution of the model physical system.” Insofar as the simulation system is computing the relationship between theoretical values and their evolution, it is like performing an experiment. The process involves the testing of parameters, the fine-tuning of input values, the assessment of changes and effects, and the drawing of inferences. This is the special kind of experiment she has in mind. Further, if some experiments are just interventions on measured variables, and the measured variables are the product of the theoretical model, then there is no clear distinction between them and computer simulations, since this is exactly what computer simulations do, according to her. She argues that computational methods simulating systems of particles are particularly suited to this kind of description. In other words, she argues that, at least in these particular kinds of models, the relevant features of a physical system can only be captured by the mathematical characterizations of their attributes, the values whose dynamics they manipulate: mass, charge, etc. (2015, p. 221), all of which are quantified parameters within the framework of a dynamic model. Notice, however, that the relationships mentioned here between what the computer simulation does and what these quantities provide are relationships that hold between abstract entities. That is, what she calls the simulation model (which is the
8 It is worth noting that in the computer simulation literature even this seemingly simple assertion is highly contested (see Primiero, 2019, Chs. 10–12, pp. 171–213).
machine-readable translation of the mathematical model) is nothing but the application of discrete mathematics to another kind of mathematical or theoretical model. Hence, an important feature of Morrison’s view for our current discussion is that it sidesteps a thorough treatment of the artifactual nature of computer simulations. Despite Morrison’s suggestion that computer simulations can function “like a piece of laboratory equipment, used to measure and manipulate physical phenomena” (2015, p. 219), the actual physical aspect of their functioning is not explored and is overshadowed by the characterization of their functional specifications as a mere product of the abstract side of their nature. When Morrison says that computer simulations are like measuring instruments, it is therefore important to consider the deflationary aspect of this view vis-à-vis the view of computer simulations as instruments argued for in this book. For Morrison, computer simulations are like measuring instruments only insofar as measuring instruments are, like rulers, simply a physical representation of a formal, internally cohesive metric. In a sense, the problem with this deflationary account is analogous to the very old distinction between what were called mathematical and philosophical instruments. The distinction between what was thought of as models and instruments—or philosophical apparatus, as they were called—can be found debated as early as 1649 in correspondence between Boyle and Bacon (Warner, 1990, p. 83). Even without the complexities introduced by computational methods, the history of the notion of instrument in scientific inquiry has its ambiguities. The term ‘scientific instrument’, for example, was not used before the 1800s (Warner, 1990). Before then, the devices surrounding what we now understand as practical methods of hypothesis demonstration were called philosophical instruments, whereas those associated with measuring were called mathematical instruments. The main dissimilarity here is between artifacts that provide information about the world and artifacts that only manipulate an arbitrary formal construct. The distinction between philosophical instruments and ‘mere’ mathematical instruments is similar to the distinction between instruments that have what we understand as empirical access to the world and those that are constituted of, and solely manipulate, formal conventions. If this distinction holds, then Morrison’s account of computer simulations is that of a mathematical instrument: an instrument that neither has nor can provide access to phenomena in the world but rather merely manipulates numerical values derived from the a priori considerations of a formal method. Lenhard’s strategy for explaining the in-betweenness of computer simulations is not significantly different from Morrison’s. In order to do, as he says, more than merely “reconcile theory with experiment” through the use of computer simulation, he posits that computer simulations consist in a methodological cooperation whose novel feature is their subservient role in enhancing (automating, accelerating) modeling techniques. That is, computer simulations bridge theory and experiment, but only because they are able to do modeling faster. This in turn allows models to be tested and modified faster.
This is important to him, because this approach helps him make the case that computer simulations are more than mere mathematical machines while keeping the intuitions about their experimental features intact: a
computer simulation, in his view, is like a modeling device with which to conduct theoretical experiments. In so doing, Lenhard, like Morrison before him, takes himself to have explained the in-between status of computer simulations. Computer simulations are, in short, experiments in modeling. The same conclusions of the accounts above—as well as their deflationary spirit—have since reverberated in many more recent approaches (Lenhard, 2019; Boge, 2021). Although many insights came from these perspectives, it is important to consider the implications of such an approach for the understanding of computer simulations. In particular, it is worth considering what follows if we accept this view of computer simulations and subscribe to the following things:
1. Morrison’s deflationary account of experimentation;
2. Lenhard’s view that simulations are experiments on modeling, i.e., modeling models;
3. And Morrison’s account of simulations/models as deflated experiments, i.e., simulations as an exercise in the manipulation of theoretical values embedded in models.
We may then find ourselves working with a framework in the epistemology of computer simulations in which what we are doing when we use these technologies is engaging in the simulation not just of a target phenomenon, but also of the processes by which we test the veracity, consistency, and coherence of theoretical elements. In other words, when we simulate, according to this view, we are simulating a target phenomenon by simulating the behavior of the theoretical values, and their relational dynamics, associated with a model of the target phenomenon, and by simulating the interventions that would elicit such dynamics. Hence, beyond simply simulating phenomena and simulating the experiments with which we come to understand such phenomena, what we are doing is ultimately simulating science itself.
3.4 Simulating Experiments
An interesting development within the dichotomous debate we have been exploring is the emergence of reactive positions vis-à-vis views that seek to straightforwardly equate computer simulations with one or the other side. While we have been exploring subtle comparisons and analogies so far, it is worth mentioning that some positions on either side do not hold back in saying that computer simulations just are either models or experiments. In a non-trivial manner, much of the progress in the debate has been made by philosophers who seek to resist these strong positions. It is through arguments against them that many important contributions have been made to the gradual disentangling of computer simulations vis-à-vis certain elements in the dichotomy. Interestingly, however, in this literature, the same people who sought to make sure that computer simulations were not strictly equated with experiments
also sought to expand on Morrison’s take and suggest a stronger connection between computer simulations and experiments by suggesting that computer simulations can, and often do, simulate experiments (Beisbart, 2017; Barberousse et al., 2009; Barberousse & Jebeile, 2019). That is, computer simulations are not strictly speaking experiments, according to these views, but they do play a part in experimentation and they do yield knowledge capable of surprising us, because they can help us simulate experiments. As we saw above, some reactions to the view that computer simulations were simple extensions of mathematical models introduced considerations about their materiality (Parker, 2003, 2009) or about the extra-theoretical elements that constitute the former but not the latter (Winsberg, 2010). On the other side of the dichotomy, there are those who have sought to separate computer simulations from those who have wanted to equate them with experiments. Views such as Beisbart’s, for example, target positions like those of Norton and Suppe, who claimed that computer simulations are not just like experiments or ‘lesser’ substitutes for ‘real’ experiments, but rather “just another form of experimentation” (2001). Beisbart suggests that while computer simulations do bear some pretheoretical and even empirical similarities with experimentation, they are nevertheless not exactly the same. He then goes on to map the different positions related to the relationship between computer simulations and experiments. He speculates that when people say that computer simulations are experiments, they may mean one of the following things: that each computer simulation includes a real experiment; that the epistemic power of one is included in the other; that the experimental element lies in the materiality of the computer simulation; or that computer simulations include an actual experiment on the target phenomenon of the simulation (p. 180). While he rejects the first two outright, he sets out to provide ample detail on subtler versions of the latter two. While he notes that, strictly speaking, computer simulations do not generally include experiments on their hardware and do not include experiments on the target phenomenon—i.e., material interventions, etc.—he nevertheless thinks that he can provide an account that makes sense of all these intuitions by positing that computer simulations “allow scientists to model possible experiments” and that in fact many computer simulations “do model possible experiments.” This view is echoed in Barberousse and Jebeile’s (2019) work when they suggest that computer simulations can embed experimental setups in their conceptual or computational architecture and are, in this sense, if not identical to, at least closely related to experimental practices. Without getting into too much detail for now, we can immediately see that an interesting implication of these views is that they actually represent a tacit endorsement of the fact that computer simulations are the things with which experiments may be simulated. In other words, it is implied by these views that computer simulations are the devices with which experiments can be carried out, instantiated, or emulated (Winsberg, 2003). Although the interpretation is lacking, this implication goes in the right ontological direction: computer simulations are the kinds of instruments with which experiments can be simulated. We will explore this implication in more detail in the next chapter.
3.5 The Pipeline
Finally, it is important to note that some views regarding the nature and role of computer simulations have approached the matter in a way that may prove orthogonal to the dichotomy we have been exploring. Recently, for example, other comparisons have surfaced: computer simulations are compared to production pipelines and to broad practices such as engineering (Gehring, 2017; Saam, 2017a, b). As we saw above, they have also been tentatively compared to measuring devices and other precision instruments (Boge, 2021). As discussed, it is precisely in the details of these comparisons that the distinct nature and role of computer simulations best comes to light, gradually elucidating their nature as something distinct from each and any of the things they are compared to. Consider the pipeline. The pipeline is a conventional depiction in the philosophical literature on computer simulations. It depicts the design and development stages followed by practitioners from a target phenomenon to the results of a computer simulation. This sort of illustration of a ‘simulation pipeline’ is deployed as a heuristic device—that is, an exploratory, often idealized, representation of a process—that elucidates the relevant components of a computer simulation and their place in scientific inquiry. Roughly speaking, they are all variations on Winsberg’s (2010) pipeline, which has the following elements:
Theory → Model → Treatment → Solver → Result
Practitioners of computer simulations (see Resch, 2017) sometimes include more details and acknowledge that the process often also runs in reverse order:
Reality ⇄ Physical model ⇄ Mathematical model ⇄ Numerical scheme ⇄ Program structure ⇄ Programming model ⇄ Hardware architecture
The idea of the pipeline is to show, in the style of industrial project-management charts, the elements and steps that constitute a computer simulation. In a ‘simulation pipeline’ (Winsberg, 2010; Resch, 2013, 2017) the process of developing a computer simulation is depicted as a series of modular items running from an inquiry into the world to the obtaining of results. In the case of the pipeline depicted in
Resch’s (2013, 2017) work, the practitioner, or scientist, begins by looking at the world. This corresponds to the stage called ‘reality’. In the case of physical phenomena—to which Resch purposely limits his example—a depiction of the physical interactions is conceived: an account of the entities, forces, and effects at play in the target system is delineated, and a physical model results. From this depiction of physical dynamics (forces, mechanisms, etc.) a set of equations is developed that describes in formal symbolic form the relevant interactions within the physical model. This is what Resch calls the mathematical model, the third stage of the pipeline. What follows the mathematical model, according to Resch’s depiction of the pipeline above, is a series of transformations of the equations in the mathematical model into discrete steps apt for a binary machine to provide approximate solutions: this is what Resch calls a ‘numerical scheme’ (the fourth stage above)—a series of mathematical procedures that transform continuous equations into many discrete operations. According to Resch’s pipeline, the discrete mathematical procedures of a numerical scheme then have to be specified as logical procedures. That is, the steps established in a numerical scheme have to be detailed as machine-readable steps. This happens through the use of preestablished theorems that justify some of the continuous-to-discrete transformations. In turn, these transformations become the program structure, a kind of idealized modular chart that represents a plan to allocate operations and specify their appropriate timing. Finally, as per Resch’s depiction, a programming model consists in mapping the program structure onto the hardware architecture—a specific machine (or set of machines, as in the case of simulations that require supercomputers). This is the final stage of the simulation pipeline. According to some philosophers and practitioners of computer simulation, if one were to visualize the coming together of a computer simulation as a production pipeline, there are two places where one could point to find the actual computer simulation. That is, if one were to ask the seemingly naïve question ‘where is the computer simulation?’, one could either point to the product of the sum of the complete set of stages represented in a simulation pipeline (e.g., a visualization), or one could point to one of the stages in the pipeline and say, ‘there it is, that is the computer simulation’. However, as can be noted, it is unclear where one can find the computer simulation in such a process. As we will see later, a computer simulation is not the same thing as a computer simulation’s results. The same applies to the visualizations derived from it. Furthermore, it is not clear that any one of the stages depicted in the pipeline is the computer simulation proper. This is not an accident, but rather a feature of this understanding of computer simulations. According to these depictions of the computer simulation pipeline, the simulation is to be found nowhere specifically and everywhere at the same time. The process is the computer simulation, or so we are led to think. This approach does little to elucidate what exactly we are referring to when we say things like ‘this is a computer simulation of X’ or when we task engineers to ‘build a simulation of X’.
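As a compressed illustration of the stages just described, consider the following toy sketch (in Python; the phenomenon, the equations, and all numbers are invented for illustration, not Resch’s example), which annotates each pipeline stage as it collapses into a few lines of executable code:

```python
# Reality: a body falling through air (the target phenomenon).
# Physical model: gravity plus a drag force proportional to velocity.
# Mathematical model (continuous): dv/dt = g - (c/m) * v
# Numerical scheme (discrete):     v[n+1] = v[n] + dt * (g - (c/m) * v[n])
# Program structure / programming model: here, a single loop on one machine;
# on a supercomputer, this is where the work would be distributed across
# a hardware architecture.

g, c, m = 9.81, 1.5, 70.0   # illustrative parameter values
dt, n_steps = 0.01, 1000

v = 0.0
for _ in range(n_steps):    # the hardware executes the discrete steps
    v += dt * (g - (c / m) * v)

print(f"velocity after {n_steps * dt:.0f} s: {v:.2f} m/s")
```

Even in a toy case like this, it is telling that one cannot straightforwardly point to ‘the simulation’: it is not obviously the commented model, the update rule, the loop, or the printed result.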
As we will see in the next chapter, once we start looking at computer simulations more carefully and analyzing their relationship either to numerical methods, which include equations and models, or to experimental methods, it will become clearer that computer simulations are simply distinct from most of what they have been compared to.
References
Baird, D. (2004). Thing knowledge: A philosophy of scientific instruments. University of California Press.
Barberousse, A., & Jebeile, J. (2019). How do the validations of simulations and experiments compare? In Computer simulation validation (pp. 925–942). Springer.
Barberousse, A., Franceschelli, S., & Imbert, C. (2009). Computer simulations as experiments. Synthese, 169(3), 557–574.
Beisbart, C. (2017). Advancing knowledge through computer simulations? A Socratic exercise. In M. Resch, A. Kaminski, & P. Gehring (Eds.), The science and art of simulation I: Exploring-understanding-knowing (pp. 153–174). Springer.
Boge, F. J. (2021). Why trust a simulation? Models, parameters, and robustness in simulation-infected experiments. British Journal for the Philosophy of Science, 75. https://doi.org/10.1086/716542
Bruce, A. (1999). The Ising model, computer simulation, and universal physics. In M. S. Morgan & M. Morrison (Eds.), Models as mediators: Perspectives on natural and social science (pp. 97–145). Cambridge University Press.
Durán, J. M. (2018). Computer simulations in science and engineering. Springer.
Frigg, R., & Reiss, J. (2009). The philosophy of simulation: Hot new issues or same old stew? Synthese, 169(3), 593–613.
Galison, P. (1996). Computer simulations and the trading zone. In P. Galison & D. J. Stump (Eds.), The disunity of science: Boundaries, contexts, and power (pp. 118–157). Stanford University Press.
Gehring, P. (2017). Doing research on simulation sciences? Questioning methodologies and disciplinarities. In The science and art of simulation I (pp. 9–21). Springer.
Grüne-Yanoff, T. (2017). Seven problems with massive simulation models for policy decision-making. In The science and art of simulation I (pp. 85–101). Springer.
Hartmann, S. (1996). The world as a process. In Modelling and simulation in the social sciences from the philosophy of science point of view (pp. 77–100). Springer.
Humphreys, P. (1994). Numerical experimentation. In P. Humphreys (Ed.), Patrick Suppes: Scientific philosopher (Vol. 2, pp. 103–121). Kluwer.
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.
Humphreys, P. (2009a). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
Humphreys, P. (2009b). Network epistemology. Episteme, 6(2), 221–229.
Kaufmann, W., & Smarr, L. L. (1993). Supercomputing and the transformation of science. Scientific American Library.
Keller, E. F. (2003). Models, simulation, and “computer experiments.” In H. Radder (Ed.), The philosophy of scientific experimentation (pp. 198–215). University of Pittsburgh Press.
Lenhard, J. (2007). Computer simulation: The cooperation between experimenting and modeling. Philosophy of Science, 74(2), 176–194.
Lenhard, J. (2019). Calculated surprises: A philosophy of computer simulation. Oxford University Press.
Morgan, M. S., & Morrison, M. (Eds.). (1999). Models as mediators: Perspectives on natural and social science. Cambridge University Press.
Morrison, M. (2015). Reconstructing reality. Oxford University Press.
Nieuwpoort, W. C. (1985). Science, simulation and supercomputers. In Supercomputers in theoretical and experimental science (pp. 3–9). Springer.
Norton, S., & Suppe, F. (2001). Why atmospheric modeling is good science. In Changing the atmosphere: Expert knowledge and environmental governance (pp. 67–105). MIT Press.
Parker, W. S. (2003). Computer modeling in climate science: Experiment, explanation, pluralism (Doctoral dissertation, University of Pittsburgh).
Parker, W. S. (2009). Does matter really matter? Computer simulations, experiments, and materiality. Synthese, 169(3), 483–496.
Primiero, G. (2019). On the foundations of computing. Oxford University Press.
Resch, M. M. (2013). What’s the result? Thoughts of a center director on simulation. In J. M. Durán & E. Arnold (Eds.), Computer simulation and the changing face of scientific experimentation (pp. 233–246). Cambridge Scholars Publishing.
Resch, M. M. (2017). On the missing coherent theory of simulation. In The science and art of simulation I: Exploring-understanding-knowing (pp. 23–32). Springer.
Rohrlich, F. (1990). Computer simulation in the physical sciences. In PSA: Proceedings of the biennial meeting of the Philosophy of Science Association (Vol. 2, pp. 507–518). Cambridge University Press.
Saam, N. J. (2017a). Understanding social science simulations: Distinguishing two categories of simulations. In M. Resch, A. Kaminski, & P. Gehring (Eds.), The science and art of simulation I (pp. 67–84). Springer.
Saam, N. J. (2017b). What is a computer simulation? A review of a passionate debate. Journal for General Philosophy of Science, 48(2), 293–309.
Simon, H. A. (1969). The sciences of the artificial. MIT Press.
Symons, J., & Alvarado, R. (2019). Epistemic entitlements and the practice of computer simulation. Minds and Machines, 29(1), 37–60.
Tal, E. (2012). The epistemology of measurement: A model-based account (Doctoral dissertation, University of Toronto).
Warner, D. J. (1990). What is a scientific instrument, when did it become one, and why? The British Journal for the History of Science, 23(1), 83–93.
Weisberg, M. (2012). Simulation and similarity: Using models to understand the world. Oxford University Press.
Winsberg, E. (2003). Simulated experiments: Methodology for a virtual world. Philosophy of Science, 70(1), 105–125.
Winsberg, E. (2010). Science in the age of computer simulation. University of Chicago Press.
Winsberg, E. (2019). Computer simulations in science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2015 edition). http://plato.stanford.edu/archives/sum2015/entries/simulations-science/. Accessed 20 Dec 2018.
Chapter 4
The Via Negativa: Computer Simulations as Distinct
The main claim of this book is the following: computer simulations are best understood as instruments. From this it follows that we should treat them as instruments: situate them in inquiry as instruments, relate to them as instruments, and sanction them as instruments. This is the case, I argue, not because they are like instruments, as others have claimed (Boge, 2021), but because they are instruments. Hence the main thesis of this book is a theory of what computer simulations are. From it, many things follow about what our epistemic relationship with them ought to be. This view is in sharp contrast to those I have expounded upon in the previous chapter. It is meant to be. And it is also meant to be a position that strongly signals a break from the dichotomy explored so far. However, in order to move away from the conventional discourse, some preliminary steps can help pave the road ahead of us. This chapter is meant to provide some conceptual moves that will permit us to shut the door behind us as we exit the dichotomous framework. There are good reasons, for example, even if you do not buy the main premise of this book, to think that the dichotomous views described in previous chapters are at best limited or at worst simply misguided. That is, even if the reader is not yet convinced (and I have not yet provided strong reasons for them to be) that computer simulations are instruments, they can at the very least start to see that computer simulations are not what others have claimed them to be, or to be like. In particular, this chapter provides a series of conceptual arguments. They are not empirical tests or case studies. Rather, they are conceptual moves meant to show that the things we are referring to when we talk about computer simulations are simply distinct from the things they have been compared to, equated with, or subsumed under. Much ink has been spilled documenting the exact ways in which computer simulations subtly deviate from conventional experimental practices. Computer simulations are distinct from experimental practices, it is said, because:
(a) they are not necessarily materially implemented;
(b) they do not really intervene in the phenomena of interest, etc.; and
(c) because of (a) and (b), they cannot relay new knowledge of the world the way empirical practices are said to do.1,2
As we saw above, a cottage industry emerged in response to such views during the first and second decades of the twenty-first century (Parker, 2003, 2009; Morgan, 2005; Morrison, 2009; Humphreys, 2009a, b; Barberousse & Vorms, 2014; Beisbart, 2017). Even now, the echoes of such debates can still be heard in the work of philosophers of science like Boge (2021), who points to the differences and similarities between computer simulations and experiments and tries to make the case for understanding them as experimental practices. Lenhard too, in his own way, is still trying to answer the third point above by positing that computer simulations can nevertheless yield calculated surprises, the title of his book (2019). Similarly, as we saw in the previous chapters, part of the reason the debate above emerged was that researchers had already figured out that computer simulations were not exactly like the mathematical models they had been compared to. The arguments leading to this conceptual schism between simulations and models had to do with the extra-mathematical and extra-theoretical (Winsberg, 2010) elements required for the construction of computer simulations. Computer simulations, it seemed, could not be treated merely as formal mathematical entities. They are not exactly like equations and they are not just ‘number crunchers.’ So, rather than repeat each and every one of these moves, this chapter offers a separate but related set of arguments to expand on these differences. In
1 Morrison (2015), Barberousse and Vorms (2014) and others who defend the capacity of computer simulations to furnish empirical, a posteriori, knowledge have had to grapple with the fact that computer simulations do not interact with a phenomenon the way most experimental practice is thought to do: direct material manipulation, causal interventions, etc. In order to make sense of the empirical content extracted by the use of computer simulations, a few notable and questionable strategies have emerged. As I showed in the previous chapter, Morrison relies on a deflationary view of some established experimental practices and deems them to be nothing more than the manipulation of theoretical values. If computer simulations can be said to do the same, then they are on an equal epistemic footing, i.e. we could not say that computer simulations do not provide empirical insight without saying that these other established experimental practices fail to do so as well. Similarly, Barberousse and Vorms corner their readers into an argumentative conundrum that relies on considering computer simulations to be reliable transmitters of knowledge, much like testimony and memory. They also argue that the labels 'a priori' and 'a posteriori' do not apply to knowledge per se but rather to the warrants on which knowledge is based. Denying that computer simulations can transmit knowledge (with its respective warrants) would then imply that many established forms of transmission (expert testimony, textbooks, etc.) fail to do so too. We will encounter and examine these latter claims in detail in Chap. 8.
2 Precisely because much has been written on the subject and because my argument differs widely from these other argumentative strategies, I will simply forgo a detailed recounting of them. The interested reader can get excellent overviews of such strategies by consulting the following books: Winsberg (2010), Morrison (2015), Durán (2018).
4.1 Simply Distinct

A theoretical or abstract model of the kind conventionally used in science is a conceptual construct that stipulates the dynamic transformations of a system and the relationships of the entities therein (Pincock, 2011). These models abstract and describe the scientifically salient features of a system. As such, they offer a formal representation of a target system (Durán, 2018). However, it is important to note that such models are very different from computer simulations. As we already learned in the sections above, and as we will see in even more detail in sections to follow, the conceptual construct of a computer simulation, which may include the abstract description of a target system as part of its referent, is something more than the description itself. It includes extra-theoretical and extra-mathematical content and considerations; it includes further content related to the computational distribution in its procedural architecture; etc. Hence, both concepts—'computer simulations' and 'abstract descriptions of a target system', i.e., models—are distinct. More importantly, they track distinct things. In this section I explain what this means.

Consider the following. It is obvious, for example, that a recipe for a cake is not the cake itself. More importantly, however, the recipe for the cake is also not identical to the carrying out of the steps required for the cake to be instantiated. Something or someone must implement the instructions in the recipe. Carrying out these steps is crucial for the instantiation of the cake. The distinction between a model's description and a model's implementation can be understood, by analogy, as the distinction between a representational device that includes instructions for a performance or for the creation of a product and the carrying out of those instructions. Similarly, carrying out the specifications of a simulation model is crucial for the instantiation of a computer simulation. The computer simulation has to be carried out, just like the steps in a recipe have to be carried out for the final intended product to emerge.

This analogy is not completely without problems, of course. A computer simulation is not like a cake. The computer simulation is not only an end result the way the cake is. It is indeed a process, albeit perhaps not as broad as the simulation pipeline depicted in Chap. 3 suggests.3 A computer simulation is an operational procedure in which computational steps have to be carried out. So, while the cake is the final aim of a cake recipe, it is not clear that there is a similar end product for a computer simulation.

3 While a computer simulation may be procedurally distributed amongst its constituent components, this distribution does not have to extend backwards to developmental stages or scientific observation.
Of course, an important aim in running a simulation is to get some results. But the results of a simulation are not the simulation. Rather, as we will see, the computer simulation is more like a performative instrument: it is what it is while it is doing what it does.4,5 Computer simulations (of dynamic target systems) are technical artifacts—physical implementations of abstract specifications—that implement/execute the computational processes required by the specifications and descriptions included in models. That is, computer simulations are the complex arrangement of devices with which the formal dynamic descriptors, the simulation models, are carried out. While computer simulations may be the product of, or contain within them the specifications of, a conceptual model, computer simulations are something other than the model itself: they are the implementation (through hardware architecture and software specifications) of said models.

Hence, at the most basic level, a distinction can be drawn between the model and a computer simulation of that model by noting that the model is not the simulation and vice versa. They are two distinct things. At the very least, even the fact that computer simulations require a model to implement and that the model requires a computer simulation to be carried out signals a conceptual distinction between the two. However simple these examples may be, through them one can envision the many different conceptual dimensions that separate the instantiation of a process from a description of that process. At the very least, we can conceive of a distinction between the kind of process modeling is and the kind of process simulating is. When it comes to computer simulations, we must understand that a simulation model is not identical to its implementation and therefore is not identical to the simulation itself either. In this sense, models and computer simulations are simply distinct concepts.

Furthermore, it is well documented that the mathematical operations that form the basis of computer simulations of dynamic systems are quite different from those found in the theoretical stages of inquiry. They are distinct processes. That is, the math carried out by computer simulations is markedly different from the math in scientific models in many non-trivial ways.
4 It is important to note that this performative aspect of the dynamic features of a computer simulation is not reducible to mere perceived movement. That is, a computer simulation is also not just the moving pictures often accompanying the transmission of its results. As we will see in the following section, this visual result of a captured computer simulation is yet another thing, separate from the simulation itself. The visual representation of the simulation is only one of the ways in which the simulation can be captured for future analysis. The simulation is the carrying out of the mimicry inherent in the concept; the simulation is what it is while it is being done. When the computer simulation simulates, it does so in the present tense. This function of simulating is carried out by the physical implementation of its functional specifications.
5 Of course, there is a question here about whether perdurantism or endurantism is the case, and why materialism about the computer simulation is assumed rather than argued for. There are important ontological debates with broader metaphysical implications here, for example concerning the nature of a piece of music. The response is that computer simulations are built. And they are built for a specific task. It is as if we had to build a musical instrument particular to its instantiation.
Here is a non-exhaustive list of the ways in which they differ:

• They do not involve the same kind of mathematics (continuous vs. discrete).
• They do not follow the same kinds of processes (different parameters and different assumptions).
• They do not yield exactly the same results (approximations and rounding up/down are the norm in translations from continuous mathematics to discrete mathematics).
• They do not take into account the same considerations (e.g., theoretical principles, practical and material constraints, etc.).
• The equations involved in the creation of a mathematical model representing, say, a natural phenomenon will be different from the computational solutions offered by the coded program that runs a simulation and yields results.

Let us explore some of these points in more detail. While some may argue that both the mathematical model and the computational model yield roughly equivalent numeric results, it is important to think about how different each process is from the other. Consider a computer simulation that is developed in a context in which well-established theoretical principles and thorough mathematical equations exist for a target phenomenon. It is well known to anybody dealing with coding mathematical models into computer languages that the equations in such well-established theoretical models are seldom, if ever, directly part of the computer simulation itself. The continuous equations used to formalize theoretical principles in a given science have to be translated into discrete and approximate solutions that computers can process. As we saw in earlier chapters, the way a computer simulation solves an equation is by providing approximate values to discretized parameters that roughly correspond to the results one gets from a continuous equation. While the results can be similar or approximate to an almost negligible degree, the fact remains that both the results and the methods by which they are arrived at are actually distinct: factually speaking, they are not exactly the same and they are not arrived at in the same ways. Sometimes this detail proves to be relevant, sometimes it does not.

Another important detail to consider here is that rounding also plays a role when it comes to the difference between the kind of mathematics we usually associate with scientific models and the kinds of processes at play in a computer simulation. In order to make clear what I mean by this, consider the following simplified example. Most computers, however powerful, will have a limited memory. This memory comes in the form of arrays. We can think of them as rows of boxes in which values can be stored:

| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |

These arrays come in different lengths and can accommodate very large numbers. However, they are nevertheless finite, and when a calculation yields a result whose digits exceed the number of boxes available, the process defaults to rounding up or down to the most immediate sequence that fits the array. This process is repeated as long as necessary, and the fitted numbers are the ones that are used for the next computation required. Thus, these rounded results get compounded in the multiple stages of a complex computational process such as the one required for a computer simulation.6

6 It is important to note that issues related to these two processes (discretization and rounding) are well understood in computer science practice and in the validation and verification of lower-level computational procedures. They are also very well understood in the context of computer-assisted mathematical proofs, where precision and correctness are incredibly important. Hence, much of the misalignment between actual results is fixed to an almost indiscernible degree. The point here is not to shed doubt on these processes per se, but rather to point to the factual and actual distinction between one kind of mathematical method and the other. Even if the results can be adjusted to be roughly equivalent, the fact of the matter is that the processes by which such results are arrived at are not identical and they are not the same.
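To see how quickly such per-step rounding can compound, consider the following minimal sketch in Python. It illustrates the general point only; the update rule (a logistic map) and the four-digit rounding are assumptions chosen purely for the example, not features of any particular scientific code.

    # A minimal sketch of how per-step rounding compounds across the
    # stages of an iterative computation. Rounding to four digits stands
    # in for a machine whose storage 'boxes' hold fewer digits than the
    # computation produces.
    def iterate(x0, steps, digits=None):
        x = x0
        for _ in range(steps):
            x = 3.9 * x * (1.0 - x)    # an illustrative update rule (logistic map)
            if digits is not None:
                x = round(x, digits)   # fit the result into the available 'boxes'
        return x

    print(iterate(0.2, 50))            # full double precision throughout
    print(iterate(0.2, 50, digits=4))  # rounded at every single step

Because each step operates on an already-rounded value, the small fixes accumulate rather than cancel, and after a few dozen iterations the two runs no longer agree: the rounded trajectory is a genuinely different computational process, not a slightly blurry copy of the exact one.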
Again, if the argument is that models and computer simulations are one and the same, or that one is simply an extension of the other, this shows a non-trivial distinction between them that should not be ignored.

Furthermore, besides the distinct mathematics at play, there is also a difference in formal methods between discrete mathematics and the code that implements it. While the translations from one kind of mathematical model to another are quite sophisticated, and substantial research about such methods exists to support our reliance on them, the translation of the latter into computer programs (software) is not as well established. In other words, translating continuous equations into discretized form is a tried and tested practice, while translating discretized mathematics into computer code is not. The point here is that, conceptually speaking, a model and a computer simulation involve distinct kinds of mathematical content. This is the case even when they both yield roughly equivalent results. This is not to say that there are no successful instances of these translations. For all intents and purposes, they work. In fact, most software is precisely an instance of mathematical operations being reinterpreted as code. However, there are simply too many ways to do it. As many as there are programmers. The translation of mathematical models into code often, if not always, includes many idiosyncratic engineering practices that are far removed from the sound theoretical principles in virtue of which the initial model was constructed (Winsberg, 2010). This is precisely what Winsberg was referring to when he talks about extra-mathematical considerations that are not found in mathematical models yet form part of computer simulations.

Consider, for example, that a scientist, when building a model, a dynamic equation that represents the features of a real-world phenomenon, must consider the ways in which the real phenomenon develops. Although idealized, the dynamic transformations of the equation are supposed to represent the dynamic development of the phenomenon. Often these dynamics are specified through theoretical principles or sound empirical practices. That is, in order to ground the mathematical operations in a model of a target phenomenon, a scientist often refers to the theoretical principles that determine and constrain the possible dynamics of a system. Other times, the scientist can rely on direct observation of a phenomenon to derive some mathematical insights about its gradual development and the different states that it can be said to have across time.
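Before turning to the programmer's side of this contrast, the gap between a continuous equation and its discretized, machine-executed counterpart can be made concrete with a toy sketch in Python. Everything in it (the decay model, the rate, the step sizes) is an assumption made for illustration, not an example drawn from the cases discussed in this book.

    import math

    # Toy comparison of a continuous model with its discretized, coded
    # counterpart. The model dx/dt = -k*x has the exact solution
    # x(t) = x0 * exp(-k*t); the explicit Euler scheme below replaces it
    # with a stepwise update. All values are illustrative assumptions.
    k, x0, t_end = 0.5, 1.0, 10.0

    def euler(dt):
        x = x0
        for _ in range(round(t_end / dt)):
            x += dt * (-k * x)   # the discrete stand-in for dx/dt = -k*x
        return x

    print(x0 * math.exp(-k * t_end))  # analytic solution: ~0.00674
    print(euler(0.01))                # fine steps: ~0.00665, close but distinct
    print(euler(1.0))                 # coarse steps: ~0.00098, far off

Which step size to use, like the choice of scheme itself, answers to computational constraints (time, resources, stability) rather than to anything in the phenomenon being modeled.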
On the other hand, the programmer tasked with writing the code that can satisfy the values specified by the discretized mathematics and the computer model need only consider the processes by which such values can be acquired in a computer. The constraints that matter most, and that have the most consequence for the programmer, come not from the way in which the phenomenon behaves in the real world, but rather from the specific coding language that is being used. Other times, the constraints come from extra-mathematical considerations such as computing resources, time allocated to the problem, ease of construction, etc. All this without any specific regard to the principled or empirical grounds of the model seeking to represent the original target phenomenon. Not only will different coding languages determine different paths and processes to yield approximately equivalent values to a mathematical model, but different engineers will figure out different ways of getting the software to yield a sought-after numerical value. A machine must be told what to do in order to carry out the discrete operations, and this is specified through a set of logical commands. The procedures by which machines execute these logical commands are often the result of engineering ingenuity that has little to no formal basis.

This is an important departure between scientific models and the computer simulations with which they are explored. Scientific models often require theoretically principled mathematics with specific properties that tie them back to the phenomena they are meant to capture. Numerical methods of the kind used for computer simulations, on the other hand, are more often than not guided by the need to reproduce approximate values; this ties them only indirectly to the original scientific models, via the discrete mathematics and the computer model, and not to the phenomena in question. That is, while conventional uses of continuous mathematical abstractions in scientific models require a theoretical justification that ties them back to their subject of inquiry, discrete models often answer only to the adequacy of their approximations. These discrete models just seek to give out similar values; they are not concerned with whether the phenomenon being modeled behaves in a similar way or arrives at similar results by undergoing similar numeric transformations. One can, for example, arrive at similar values through many motley processes that respond to engineering constraints, formal language choice, computational ingenuity, etc., without regard to whether or not the methods by which said values are arrived at have anything to do with reality. This point already signals, at the very least, a departure—a gap—between the originating formal aspects of inquiry and their machine-implemented counterpart in computer simulations. The departure consists in the different target, the subject of interest: a scientific model's target is a phenomenon in the world; a computer simulation's target is more often than not a computational model's output values.

At this point we can say without much controversy that computer simulations are not identical to the mathematical models they implement, the equations in such models, or any of the formal aspects that constitute them. They are distinct concepts, referring to distinct things, carrying out distinct procedures, and encompassing distinct content. Below, I detail how they are also distinct from the empirical practices in which they are used.
4.2 Distinct from the Context in Which They Are Deployed

Rather than pointing out the ways in which computer simulations do or do not fall short of empirical practices, in this section I want to point out that computer simulations and experiments are simply distinct things from one another. It is not that they are better or worse than experiments; it is not that they share similarities with experiments or fail to. In this section I am not interested in these questions. Rather, the point I want to make here is that experiments and simulations are simply distinct kinds of things.

Very similar points to the ones made above can be made about the relationship between computer simulations and experimental settings. That is, computer simulations are simply not the experiments they are often made to simulate, and they are also not like the experiments they are made to simulate. They are simply something else. They do different things, and when they do similar things, they do them differently. But more importantly, computer simulations are simply not identical to the contexts in which they are deployed. When philosophers ask things like "Are computer simulations experiments?" (Beisbart, 2017), they are simply forgetting that more often than not what they mean by 'experiments' is the context in which computer simulations are deployed. As suggested by Barberousse and Jebeile (2019), a computer simulation can be the software/hardware implementation of experimental specifications. Barberousse argues that full experimental settings that include procedures, computations, controls and data manipulations can be encoded in the programs we use in simulations, as well as in the architecture used to run them. That is, full specifications and descriptions of an experimental procedure can be encoded for a machine to execute—or, more accurately, as we will see below, to simulate. This point has been echoed by philosophers like Johannes Lenhard (2019) and, more recently, Florian Boge (2021).

It is important to note that a motivation behind these views is to make sense of the fact that computer simulations may indeed be capable of providing empirical knowledge, or knowledge that we did not have before about a phenomenon of interest. One of the major points of contention in the early philosophical debates about computer simulations was that their formal nature, like that of models, made them such that they could only examine the development of the formal components under strict principled assumptions. This meant that no empirical information could be yielded by such methods: they only ran the numbers. For philosophers on this side of the dichotomy, it has been important to make sense of the fact that computer simulations can provide information that is not already contained in the assumptions and formalities of their constituent parts. They can, in fact, as Lenhard (2019) suggests, surprise us even by just simulating.

However, that this fact warrants thinking of computer simulations as identical or similar in nature and function to experiments themselves is not immediately obvious. In fact, it is this functionality of computer simulations—the capacity and inherent design to simulate—that precisely separates computer simulations from what an experiment is, from what an experimental setting comprises, and from what an experiment's description and specifications are and do. That is, computer simulations are designed, developed and deployed to simulate, and this is what makes them distinct from either their constituent parts or the context within which they are deployed.
Hence, I suggest that there is a conflation here between what a computer simulation is, the context in which it is deployed, and what it is deployed for. A computer simulation may be deployed to simulate an experiment or an experimental setting, but this does not mean that the computer simulation is either of them. In fact, as already noted, it is this capacity to simulate an experiment or an experimental setting that separates it from either.

Let us begin by first pointing out some immediate distinctions. Consider the following: a description of an experiment does not simulate the experiment. I take this point to be fairly straightforward: if I write a detailed description and specifications for an experiment on a napkin, neither the napkin nor my markings on it constitute the experiment. Similarly, the drawings on the napkin do not constitute a simulation of the experiment. The drawings simulate nothing. If there was any simulacrum related to the dynamic processes statically depicted in the drawings, then such a simulacrum was carried out by me and not by the drawings on the napkin. Similarly, a description of an experiment to be simulated on a computer is not a computer simulation. Consider such a description. It can be in the form of a flow chart or explicit instructions. The description alone does not simulate. That is, it does not and cannot implement the necessary processes that carry out the operations required to simulate anything. Furthermore, although both are representations, they are not of the same kind. This is evident when we think of the difference between a written equation that one has to solve and an equation that is solved by a machine. The written equation does not solve itself. It is but a blueprint, a set of descriptive specifications of a process that still requires an implementation, namely something or someone to instantiate the necessary operations in the order specified to transform input into output and represent the changes accordingly. If we think of early computer simulations as being implemented in machines that ran on punch cards, we can also see this. The punch cards themselves are not the simulation. They still need a machine to run them.

I take our discussion above to have yielded a few insights about computer simulations, their nature, and their role, even if this was through a via negativa, through analyzing what they are not. Computer simulations are conceptually distinct from the processes and components that constitute them. They are also distinct from the purposes for which, and the settings in which, they are deployed. For example, a computer simulation can be used in an experiment without constituting the experiment itself. Computer simulations can also simulate an experiment, but the fact that they can "run" the simulation of an experiment is already evidence that they are something other than the experiment itself. A simple way to visualize this is to think of a laboratory. A laboratory is a place that has properties such that one can carry out an experiment. A conceptual distinction can almost always be drawn between the place that has the properties for the experiment to be carried out and the properties of the experiment itself.
Even more precisely, we can think of the difference between an experiment and the instruments that enable a scientist to conduct it. A microscope is not the experiment, nor does it represent the experimental setting in which it is deployed.
Rather, it plays a part in the experiment; it is deployed in the service of the experiment, or it can carry out experimental specifications. Computer simulations can be understood this way as well.
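The napkin and punch-card point can also be put in computational terms. The following schematic sketch in Python (the 'procedure' it encodes is invented for illustration) marks the distinction: a specification is inert data, and nothing is simulated until something executes it.

    # A schematic illustration of the gap between the description of a
    # procedure and its execution. The dictionary is inert data; it
    # simulates nothing on its own.
    spec = {
        "initial_value": 100.0,
        "steps": 3,
        "rule": "halve the value at each step",  # a description, not an operation
    }

    def run(spec):
        # Only here are the described steps actually carried out.
        x = spec["initial_value"]
        for _ in range(spec["steps"]):
            x *= 0.5                             # the executed counterpart of the rule
        return x

    print(spec)       # printing the description performs nothing
    print(run(spec))  # 12.5: the result of actually carrying the steps out

Whatever reads and executes the specification plays the role the machine plays for the punch cards: the description and the running of it remain two different things.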
4.3 Functionally Distinct: They Do Different Things or They Do Things Differently

In the paragraphs above I showed that, at a very basic level, computer simulations can be conceptually distinguished from their constitutive formal aspects and from the experimental practices for which they are deployed. I also showed that they are distinct in that they are a constitutive part of the experimental settings in which they are deployed and are not identical to such contexts. Whatever they are, computer simulations are not the formal methods and they are not the experiments with which they are often equated or under which they are subsumed in the philosophical literature.

While broadly accepted arguments in the literature have tried to show that computer simulations do not yield the same kind of epistemic content (e.g., empirical knowledge) as conventional empirical practices (Beisbart, 2017), the argument in this section takes a different approach. It is an argument about what computer simulations do, what their processes include and what their functional aims are. Examining these functions will elucidate that computer simulations are distinct from models and experiments not because they yield a different kind of epistemic content, but rather because they are artifacts that are designed, developed and deployed to do different things than either experiments or models.

Drawing from the points in the last paragraph above, in this section I will show that computer simulations are also distinct from scientific models and from experiments in that they carry out different tasks than those of models and those of experimental practices. Furthermore, when the functions of computer simulations do overlap with the functions that can be carried out by other methods, these functions are carried out in a different way (faster, discretely, approximately, etc.). As mentioned above, early successful computer simulations were deployed predominantly in order to "bypass the mathematical intractability of equations conventionally used to describe" highly non-linear phenomena (Fox-Keller, 2003). This is evidence that these technical artifacts were conceived of, designed, developed and deployed in the service of a task other than the tasks that could be carried out by any other tools at researchers' disposal at the time (hand-written calculations, human computers, abstract and physical models, etc.).

In short, a computer simulation does things and is deployed to do things other than what its constituent components, such as models or experimental specifications, do. When they are deployed to do similar things to the things that models do—such as modeling, solving, etc.—computer simulations do them differently. This is in large part why we use them in the first place: because they allow us to do things that the conventional elements of scientific practice do not allow us to do, or they allow us to do some of those things in a preferable way (considering trade-offs).
If this is so, then we also have a functional way to distinguish computer simulations from their components, the stages of their construction, the content they manipulate, and the settings in which they are deployed. I expand on these points below.

At a fundamental level, the function of the computer simulation to simulate is something that is not done by either the formal elements that constitute it—such as a theoretical model—or by any of the components that comprise it (simulation model, description, computational architecture, etc.). This function is also not carried out by the experimental specifications encapsulated or embedded in its procedures. I can, for example, design a whole experimental setting and then, rather than hire a team of researchers, conduct field studies, build a lab, etc., I can put together a computer simulation to simulate the many steps required by this experiment. The simulation's task is to simulate the experiment encapsulated in its specifications. But the computer simulation is not necessarily the experiment and the experiment is not necessarily the computer simulation: the computer simulation does something other than what these things do. More importantly, a computer simulation is designed to do something else. It is this intentional design that is the core of what makes artifacts distinct from one another and from other things, e.g. organisms and other natural objects with functional properties such as pseudo-artifacts (Kroes, 2003; Symons, 2010). The functions for which artifacts are designed are particularly important to technical artifacts, and more so when these technical artifacts are deployed in scientific contexts where epistemic requirements are stricter than those of ordinary epistemic experience.

A computer simulation is a technical artifact, in particular a physical construct with a specified function at the design level. Put simply, a computer simulation is a technical artifact designed to run the model(s) or the comparative processes specified by the experimental procedures encoded in it. It is designed to represent, in a performative manner, the dynamic progressions of a system specified in a model or an experimental setting. While a computer simulation can be used for many scientific purposes—explanation, experimentation, etc.—that may overlap with the purposes of other elements of inquiry like those of models or experiments, computer simulations are, at their core, designed to simulate. As I explained in the previous section, explanation and experimentation are the settings, the contexts within which they are used, not their function. Furthermore, simulating, as a function, is different from the function of a model, or the functions of the experimental procedures encoded for a simulation to follow, or the experimental settings in which the simulation is itself embedded as an instrument.

While it is true that both a simulation and a model share a few common functions and properties, namely those associated with representing, a non-trivial difference lies in their performative status. A simulation can only represent by performing stipulated operations. A model does nothing of the sort. When faced with a static model, for example, or a description of a model, it is the epistemic agent that performs the operations therein. Just like a drawing on a napkin does not simulate, neither does a model insofar as it is not implemented by something or someone.
Similarly, a computer simulation may share some functions and properties with those of some experimental practices. It is true that both the simulation and a controlled experiment allow a researcher to test parameters, manipulate values, etc. However, the computer simulation is to the experimental setup what the laboratory is to the experiment. An experiment can be differentiated from that which enables us to carry it out: the lab, the instruments, the practices, the different methods at different stages, etc. They each do different things. An experiment can be designed to test, manipulate and explore a given hypothesis; a computer simulation of an experiment is designed to simulate the experiment that tests, manipulates and explores that hypothesis. There is an added functionality to the simulation, namely to simulate, that makes it functionally, and even ontologically, distinct. Here again, we can see a fundamental departure that allows us to distinguish computer simulations from both the formal elements that constitute their functioning and from the experimental settings in which they are deployed.

A further thing to consider is that a computer simulation is not the end result of dynamic calculations. It is also not the final state of the processes of the target system being simulated. Rather, a computer simulation is the execution of the processes meant to represent the dynamic development of the target phenomenon. To have a clearer view of this point, consider the following: if I were travelling in my intergalactic ship in space and I asked my computer assistant to provide me with a still image of a specific point in time in future galactic formations, this image would not be a simulation. This is the case even if the image was produced by following formal specifications, models, equations, etc. It is still just an image and not a simulation. The image may even be the product of a simulation, but it is not the simulation itself. Furthermore, suppose I ask my computer assistant to produce merely a numeric representation of the position of a specific star within this galactic formation in the future: this result is also not the simulation, though it may have been acquired through it. The simulation happens somewhere after the specifications are implemented and before the results are produced. The simulation happens as it is performed. The computer simulation is that which carries out such a performance. That is its function.

This is so even when the simulation becomes the subject of an experiment. Insofar as there is a function or a property of the experiment or the simulation that falsifies an identity relation between the two, they are evidently distinct. While some may point out that a computer simulation can be the experiment itself, I take this to mean that the computer simulation can become the subject of inquiry of an experimental setup. This, again, is different from saying that the computer simulation is the experiment. More precisely, it is not the case that there is an identity relation between the experiment and the computer simulation: most of the time in these settings, the computer simulation is either the subject of inquiry of the experiment or the thing that carries out the experiment. If so, they are therefore not identical.
Even when these formal abstractions or experimental settings are an integral part of the simulation, such that the computer simulation manipulates the formal abstractions or simulates the experimental settings, it is still a distinct thing in virtue of what it is doing to them or with them.
The function of the simulation is to simulate, and this is a function found neither in the mathematical abstractions nor in the experimental settings that it simulates.

In short, whatever it is they do, what computer simulations do is not done, or is not done in the same way, by theoretical elements or models. Similarly, whatever computer simulations do is also not what experiments do. Rather, they are the thing with which the experimental values are entertained, the thing with which the entities and transformations of a target in the real world are mimicked, the thing with which the experimental procedures are automated, etc. Computer simulations are also distinct from conceptual models or experiments in that, oftentimes, they simply carry out different tasks than those of models and/or experimental practices. They do not only compute but also automate computations; they do not only represent but also transform information from one kind to another (numeric/visual), etc. When the functions of computer simulations do overlap with the functions that can be carried out by other methods, they are carried out in a different way. That is, they are automatic, faster, more accurate, approximate, and/or simply more convenient. Again, at a fundamental level, the task of the computer simulation to simulate, as briefly mentioned above, is something that is not done by either the formal elements, such as a theoretical model, or any experimental practice encoded onto the set of artifacts that comprise it. This is why a computer simulation is not strictly a formal model or even an experiment in the conventional sense: it does something else.
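As a toy illustration of the numeric-to-visual transformation just mentioned, consider the following sketch in Python; the values are invented placeholders standing in for a simulation's numeric output.

    # Transforming numeric output into a crude visual representation.
    # The list below is a made-up stand-in for simulation results.
    values = [0.1, 0.4, 0.9, 0.7, 0.3]
    for v in values:
        print("#" * int(v * 20))   # each row's length encodes one value

Trivial as it is, the transformation is a task in its own right, carried out by the artifact rather than by the model or the equations it implements.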
4.4 How Distinct, Really? An Objection

Let us consider the following objection to my view of the functional distinctiveness of computer simulations. A conceptual model, it may be said, is also an artifact that is designed with the specific task of representing the dynamic processes of a target system. It does so through a set of either theoretical or empirical directives meant to help the user of the model or the experimenter replicate, in one way or another, the dynamic features of a target process. In this sense, both the model and the simulation have the same function, namely to represent a target phenomenon. If this is so, their functionality is not a basis for their differentiation, or so one can imagine the argument of such an objection would go. This objection—which can easily be a product of those views that take computer simulations to be mere extensions of mathematical models, logical abstractions and/or other formal methodology in inquiry, as well as of those views that acknowledge that computer simulations can encapsulate experimental settings—is an objection to my view only if one neglects the fact that computer simulations are a sort of hybrid instrument which, much like a measuring instrument, must perform to represent.
That is, in order for the computer simulation to do the representing part, it must carry out the processes by which it represents. Without the execution of procedures, a computer simulation is not a computer simulation but merely a description or conceptual specification of said procedures, like an equation on a napkin or a punch card. In order for the computer simulation to be a computer simulation (particularly of a dynamic target), it must run. This is similar to the hybridity of instruments such as the thermometer, in which a process, a transformation of its components, must occur in order for the thermometer to display its representative function.

By contrast, a conceptual model does not have to perform to represent; it does not have to execute the specifications it contains. A model can be specified by a static representation in which the necessary connections and transformations are simply described. This can be done in many different ways: a drawing, a textual description, or even an equation, amongst others. Yet neither the drawing nor the textual description carries out—performs—the specified operations; they simply describe them. Similarly, an equation can be used to carry out the operations, but it does not perform them itself. They are static, descriptive artifacts removed—much like a conceptual model—from things like motion and change (Baird, 2004; Durán, 2018). In contrast, a computer simulation is a technological artifact whose main function is to simulate, to mimic, through the prescribed specifications, the dynamic transformations of the simulated target. This performative aspect is the physical implementation of the functional specifications, which often, if not always, requires iterations of the formal specifications. The functions of a computer simulation specify that something must be done: operations must be executed on its contents and not merely described. As we saw with the example of the spaceship display, a computer simulation is not the end result of dynamic calculations, nor is it the final state of the processes of the target system being simulated. Rather, a computer simulation is the execution of the processes meant to represent the dynamic development of the target phenomenon.

A model of a system can be, and often is, constructed with the dynamic provisos of a target system so that it can, when implemented on a separate artifact, provide the necessary specifications for this separate artifact—the simulation—to mimic the behavior (dynamic development) of said system. Yet the model does not itself constitute a simulation. The model does not run the model, the experiment does not run the experiment, nor do they simulate themselves—that is, they do not mimic/represent themselves: the computer simulation does. Once again, in this sense, a computer simulation has a different function from that of the model or that of the experiment: it is an artifact designed to simulate them.

Similarly, an experimental setting can contain the necessary specifications for the transformations of values required for a specific inquiry, and a computer simulation can be thought of as an artifact devised to automate these processes. And yet even here there is a further important distinction that must be drawn. The way computer simulations can automate some aspects of some experimental practices is by simulating the equivalent values (initial conditions, input data, expected parameters, etc.) of a specified system and their respective numerical transformations through roughly equivalent discrete methods in coded circuitry.
The computer simulation does not run the experiment in a conventional understanding of experimental practice. That is, while the same specifications could be given to a team of researchers for them to run an actual experiment, in which direct observations and manipulations are conducted in order to extract data from a target phenomenon in the world, the computer simulation can only simulate this process: hence the name. Saying that a computer simulation runs an experiment means that the computer simulation simulates—mimics, if you will—the experiment being carried out by executing the value transformations specified by the experimental stipulations. In this way, then, the automation that takes place in the computer simulation of the experiment is that of the simulated processes by which the values and data of the experimental setting are transformed and unfolded. In this sense, computer simulations encapsulate the required steps for the transformations of values specified by the inquiry, and by carrying out these steps they simulate the steps that would have been taken in a laboratory setting. The computational architecture at the heart of these technological artifacts processes these steps and further transforms the outputs (inferences, calculations) into intelligible results.

In short, whatever it is they do, what computer simulations do is not done, or is not done in the same way, by theoretical elements or models. Similarly, whatever they do is also not what the experiment does. Rather, they are the thing with which the experimental values are entertained, the thing with which the entities and transformations of a target in the real world are mimicked, the thing with which the experimental procedures are automated, etc. The key words here are 'the thing with which'. Conceptual elements of inquiry such as theories, their models, equations and other formal components are abstractions. As such, they are composed of propositions, claims, beliefs, etc. They are made of the kinds of things that are not subject to physical forces. Instruments, on the other hand, are specialized physical constructions. If, as Baird (2004) suggests, instruments, "whatever they may be, are not beliefs", neither are computer simulations.

The point is that even reductionist views—views that attempt to reduce computer simulations to either theoretical elements or empirical elements of inquiry—cannot deny the individuality of the artifact in question: the computer simulation as a distinct thing, if only in virtue of the fact that it is doing something other than any of the things on which it depends (its constituents). This is evidence of the individuality of artifacts in general, but also of the distinct nature of computer simulations vis-à-vis the things they have been compared to. That is, computer simulations can be individuated by the fact that they perform a different function than that of models; they carry out an extra, independent task beyond those of their constituents: computer simulations are the things with which we can process models, solve equations, transform/translate, display, model, etc. As we will see, they happen to be a particular kind of technical artifact, a more precise definition of which will be provided in the next chapter.
References

Baird, D. (2004). Thing knowledge: A philosophy of scientific instruments. University of California Press.
Barberousse, A., & Jebeile, J. (2019). How do the validations of simulations and experiments compare? In Computer simulation validation (pp. 925–942). Springer.
Barberousse, A., & Vorms, M. (2014). About the warrants of computer-based empirical knowledge. Synthese, 191(15), 3595–3620.
Beisbart, C. (2017). Advancing knowledge through computer simulations? A socratic exercise. In M. Resch, A. Kaminski, & P. Gehring (Eds.), The science and art of simulation I: Exploring-understanding-knowing (pp. 153–174). Springer.
Boge, F. J. (2021). Why trust a simulation? Models, parameters, and robustness in simulation-infected experiments. British Journal for the Philosophy of Science, 75. https://doi.org/10.1086/716542
Durán, J. M. (2018). Computer simulations in science and engineering. Springer.
Humphreys, P. (2009a). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
Humphreys, P. (2009b). Network epistemology. Episteme, 6(2), 221–229.
Keller, E. F. (2003). Models, simulation, and "computer experiments." In H. Radder (Ed.), The philosophy of scientific experimentation (pp. 198–215). University of Pittsburgh Press.
Kroes, P. (2003). Screwdriver philosophy; Searle's analysis of technical functions. Techné: Research in Philosophy and Technology, 6(3), 131–140.
Lenhard, J. (2019). Calculated surprises: A philosophy of computer simulation. Oxford University Press.
Morgan, M. S. (2005). Experiments versus models: New phenomena, inference and surprise. Journal of Economic Methodology, 12(2), 317–329.
Morrison, M. (2009). Models, measurement and computer simulation: The changing face of experimentation. Philosophical Studies, 143(1), 33–57.
Morrison, M. (2015). Reconstructing reality. Oxford University Press.
Parker, W. S. (2003). Computer modeling in climate science: Experiment, explanation, pluralism (Doctoral dissertation, University of Pittsburgh).
Parker, W. S. (2009). Does matter really matter? Computer simulations, experiments, and materiality. Synthese, 169(3), 483–496.
Pincock, C. (2011). Mathematics and scientific representation. Oxford University Press.
Symons, J. (2010). The individuality of artifacts and organisms. History and Philosophy of the Life Sciences, 32, 233–246.
Winsberg, E. (2010). Science in the age of computer simulation. University of Chicago Press.
Chapter 5
Technical Artifacts, Instruments and a Working Definition for Computer Simulations
If computer simulations are not the things they've been compared to, then what exactly are they? Computer simulations, as I argue throughout this book, are instruments. Unlike conventional approaches to computer simulations in the philosophy of science, in order to gradually establish the case for this thesis I will be exploring concepts and developments in the history of science and in the philosophy of technology. More precisely, I will be looking into the historical details surrounding the sanctioning of scientific instruments in the era of Galileo's telescopes. As far as the philosophy of technology goes, in this chapter and the next two after it, I will be looking specifically at developments in the philosophy of technical artifacts. The use of these two sources for the investigation of computer simulations presents a novel and, to a certain extent, innovative contribution to the literature on the subject. These choices are not merely contingent. Rather, they are choices that reflect the ontological commitment to taking the materiality—and the matériel—of scientific inquiry seriously.

As we will see in detail later, philosophers such as Davis Baird (2004) believe that instruments are a category of their own within scientific practice. Central elements of scientific practice are, of course, theory and experiment. However, instruments are a definitive third category if there ever was one concerning the systematic approach to understanding our world. While I agree with Baird, I further expand on this particular aspect of his view. Computer simulations are instruments, and instruments matter in a particularly relevant way to how we understand the world around us (Hacking, 1987). There are certain phenomena we could not even have access to were it not for the enhancement provided by instruments. There are certain features of the world that seem different to us when we understand them through instruments of all kinds, particularly things like distances and time. Understanding computer simulations as instruments matters because what an instrument is in science differs, in a specific and more rigorous sense, from what an instrument is more generally.
Hence, they ought to be validated in ways that reflect this. There is, for example, a difference in the specifications required of glassware in a kitchen and of glassware used in a laboratory. Although we can perhaps use the latter in the former context, we would be well advised not to use the former in the latter context, as kitchenware hardly meets the precision specifications required by scientific contexts. As I will argue in these later chapters, if we are to understand computer simulations as instruments and we are to trust them in science, we should ensure that they are validated in the way other scientific instruments are.

Before we get to this particular point in my argument, however, we have some work ahead of us. In this section that work consists in providing a few preliminary definitions and in positioning computer simulations within their context in scientific inquiry. Figuring out what it is that they do and how they do it, as we will see, will also clarify what computer simulations actually are: instruments. But instruments are a very specific kind of object. As I argue throughout this book, knowing what something is can help us elucidate the epistemic implications of its use. In the case of computer simulations, understanding what they really are and what they do can lead us to a better understanding of their role and their limitations in science.

Before we discuss what I take an instrument to be and how computer simulations can fit into this category, some aspects of both the history of instruments in science and philosophical thought about the role of instruments in science should be briefly discussed. This is particularly relevant because, surprisingly, as briefly mentioned throughout this book, contemporary philosophy of science has had little to say about the epistemic nature of instrumentation and its role in formal inquiry. With a few notable exceptions,1 the role of instruments in science has mainly been interpreted as that of a subservient element in inquiry. Instruments are mainly understood in terms of helping to corroborate theoretical principles or as mere aids in the service of empirical endeavors. Almost never are they considered central to the creation of scientific knowledge, and rarely, as Baird does, are they taken to be sources of knowledge that may in fact be independent from both empirical and theoretical elements of inquiry. This chapter addresses this issue directly by providing an epistemic framework that puts instruments back at the center of inquiry and hence sets up the appropriate space for us to ask whether computer simulations are up to the task.
1 See the collection of essays edited by Radder (2003), which includes important work by, amongst others, Hon, Harré, Fox-Keller and Kroes, as well as the rare book-length projects on the topic by Baird (2004) and by Humphreys (2004), for an overview of the debates. Noticeably, these are some of the only serious works on the subject within analytic philosophy of science in the last two decades. Importantly, developments in the philosophy of technology of the French tradition would often seek to reduce these technical objects to either the text-based assertions that they represented or to their symbolic significance in social representations and power dynamics. For these, see Latour (1983), Simondon (2011) and Ellul (1964). In doing so, they too often missed identifying their materiality as central.
5.1 Defining Instruments

In order to truly understand computer simulations as scientific instruments, we must explore in some detail the fact that both the term 'instrument' in general and the term 'scientific instrument' in particular have a rich, debated history. As we will see in this section, this history is not merely conceptual. Rather, the coining and deployment of both terms relates to many contingent factors in the historical development of science itself and the scientific communities around it. It also directly relates to the changing use, epistemic role, and perception of instruments in formal inquiry. Hence, defining 'scientific instrument' is tricky, and tracking the concept could be both historically laborious and perhaps futile (Warner, 1990, p. 93), since so many disparate objects seem to qualify for the category. Nevertheless, certain kinds of objects sharing certain epistemic properties have been grouped and understood together as belonging to a peculiar kind of epistemic practice very closely related to what we now understand as science. It is these kinds of instruments and these kinds of properties within such an epistemic context that I aim to identify.

My aim in this section is to provide some of the additional foundational elements for a framework to understand computer simulations as scientific instruments. The basis for this understanding, as already alluded to in sections above, brings along with it ontological commitments related to the materiality of such artifacts but also to their indispensable role in inquiry. Hence, it would be best if we begin by first providing an understanding of the broader term 'instrument' in terms of technical artifacts before we move on towards an understanding of the more particular subset of 'scientific' instruments. Given the broadness of the term, rather than providing a historiography of the general term 'instrument', let us begin with a working definition of our own that highlights the fact that when we talk about instruments we are talking, for the most part, about material objects of some or ample technical sophistication. So, let me start by providing a brief working definition of what an instrument is for the purposes of our discussion:

Instruments are technical artifacts—materially instantiated objects/processes whose design is intrinsically and intentionally teleological.
Here, I treat the definition of an instrument as being maximally tolerant and as admitting of many functional technical artifacts. Notice, for example, that a blueprint of a technical design may enhance our understanding of the thing it represents, but it is not the thing it represents nor does it do the same things as the instantiation of the thing it depicts. Hence, while a description of an instrument may meet our definition of an instrument, notice that the description qua description is a different kind of instrument than the instrument it describes. That is, the combination of ink, paper and a diagrammatic representation is in itself an instrument: it is, as a technical artifact, a materially instantiated object whose design is intrinsically and intentionally teleological. However, a blueprint of a machine and the machine are not one and the same. For now, while this is a maximally tolerant definition and may warrant future scrutiny, we can utilize it to identify the kinds of devices that are used in science.
For the purposes of our discussion—and closely following Davis Baird (2004)—an important implication of this definition is that whatever instruments are, they are simply not linguistic propositions, nor are they necessarily constituted by propositional knowledge. Instruments are something other than propositions alone. Therefore, they are not simply a compilation of specifications or of theoretical principles, though their creation may be informed by such. The definition I provided above states that instruments are technical artifacts. An important way to understand artifacts is by looking at what it is they do or what they were designed to do. That is, in order to know what an artifact is we must ask what the artifact was designed, developed and deployed for. Else, we risk not fully capturing what they are. There is a sense in which describing a corkscrew to a child as a pointy and twirly piece of metal without mentioning its intended use will leave the child clueless as to what the artifact really is. Hence, artifacts in general, but technical artifacts in particular, can only be properly individuated by appealing to their two ontologically distinct sets of properties: the functions for which they were intentionally designed (Symons, 2010) and the physical elements that can instantiate these functions (Simon, 1969). Why emphasize the word ‘intentionally’ here? This is because some objects are artifacts in virtue of having a property that is already identified by someone as useful for a given purpose. For example, one may find a rock in the form of a soup bowl. Other artifacts are intentionally synthesized from resources and materials to have a desired property for a specific purpose: solidity, flexibility, etc. Of these two kinds, the latter are artifacts in the more definite sense. This is because they are explicitly designed and constructed with a purpose in mind. The former, by contrast, are what Peter Kroes (2003) calls ‘pseudo-artifacts’. They are ready-found objects that prove useful given our interests. Furthermore, while all artifacts have an inherently designed function (i.e. they are made for something), some artifacts are more highly sophisticated and more meticulously constructed than others. Some instruments in science, for example, are highly sophisticated technical artifacts. When they are well made, they are constructed according to highly theoretical and functional specifications. They are also made with very specific materials, optimal for the execution of the functional specifications in light of theoretical requirements. This, as we can see, already constitutes a sort of conceptual distinction of technical artifacts such as scientific instruments from other material objects and other simpler tools. At the very least, technical artifacts of the kinds used by science are made with distinct norms in mind and with distinct materials and objectives.
This is also a useful way of distinguishing tools like a hammer, which can potentially be made of a single solid piece of material (i.e., one can easily imagine a hammer made from a single piece of iron), from other tools, such as machinery, whose functioning requires the precise connection of parts. But it can also help distinguish between more ordinary assemblages of artifacts and
those whose sophistication answers to even more rigorous norms. A bicycle, while perhaps more complex than a hammer in that it requires the careful assemblage of diverse interconnected parts, is still far removed from anything like the Large Hadron Collider, or even from simpler but standardized instruments in science laboratories. In any case, once they are made, these highly sophisticated technical artifacts are deployed in laboratories where they are used for the acquisition of knowledge. In such settings, technical artifacts are placed somewhere between theory and experiment: they may have been made with theoretical principles embedded in their construction and they may have been placed in the service of an experimental setting, but as they are made, they are not strictly theoretical nor are they experiments in and of themselves; as they are deployed, they are not so either. Rather, they are a third kind of thing: the things with which principles may be manifested and with which experiments may be carried out. More strongly put: instruments are a third—separate, distinct, and independent—source of knowledge in scientific inquiry. As such, they deserve to be investigated and accounted for as ontologically and epistemically independent from either theory or experiment.2 This is something that was not lost on Ian Hacking (1992), who was one of the few philosophers in the Anglo-American tradition to pay attention to the material elements of scientific inquiry in the late twentieth century. To him, instruments in the laboratory, including substances, were “flanked on the one side by ideas (theories, questions, hypotheses, intellectual models of apparatus) and on the other by marks and manipulations of marks” (1992, p. 32). Simply stated, instruments belong to the category of ‘things’ in his triad of “ideas, things and marks” as the necessary elements in the epistemology of scientific inquiry. The discussion above points to the ontological and epistemic distinctiveness of instruments. The distinction is ontological in that it seeks to discern a kind of thing that is not like the other things. Furthermore, as we will see, the view of computer simulations as instruments is not to be taken as an analogy. They are not just like instruments, as some in the literature suggest (see Boge, 2021). They are instruments. This is an ontological commitment related to the nature of computer simulations. As we will see, because of this preliminary ontological distinction, several epistemic implications follow: as instruments, they play a distinct epistemic role from that of theory or experiment in scientific inquiry; furthermore, as things distinct from theory or experiments, the ways in which we interact with them, the way we trust them, the way we rely on them must come with their own set of justifying reasons.
2. There are, however, influential definitions in the philosophy of technology, particularly from French technologists such as Bruno Latour, that reduce much about artifacts to text and symbol. Latour, for example, thought that an instrument was any technical/deliberately complex process (Latour, 1983) that produced an inscription in a scientific text (Radder, 2003, p. 3), such that places like the laboratory, where several instruments could be found, were deemed by him “an inscription factory” (ibid). The problem with such a view is that, as several philosophers of science and technology have pointed out (Radder, 2003), it simply neglects, once more, to take the materiality of such devices seriously.

These reasons will not just be the same reasons as the ones we deploy when
we trust theoretical aspects of our inquiry, for example. I may trust a theoretical principle or an equation because of a priori reasons. I cannot simply appeal to such reasons to justify why I trust a given artifact that implements such an equation. Just because we trust mathematics for one reason, it does not mean we can trust the physical things we build with the help of mathematics for the same reason (Symons and Alvarado, 2019). If this is so, when we understand computer simulations as instruments, we may find that their epistemic status must also be reassessed through separate means which differ from the theoretical underpinnings that inform their functionality and from the empirical warrants of the settings in which they are deployed.3
3. The nature of technical artifacts, as we saw above, is strongly contingent upon both their materiality and, importantly, their intended use or purpose. It is important to note, however, that use, in and of itself, contrary to conventional views, is not sufficient for an artifact to qualify as scientific (Alvarado, 2022a, b). A can opener that is used in a scientific laboratory, even when used for scientific purposes, does not automatically qualify as a scientific instrument simply in virtue of this use. Rather, we must understand that the intentions, mentioned by Symons (2010) above, with which such artifacts have been designed involve a set of considerations within a specific specialized context that can be characterized as simultaneously aspirational and normative. In other words, these artifacts are intended to be used in science, but science is what it is in part due to the fact that it is a specific and distinct kind of epistemic endeavor—one that was conceived so as to be differentiated from ordinary epistemic practices in that it, at the very least, sought to systematically and programmatically avoid the common shortcomings of such ordinary epistemic practices (Daston & Galison, 2021). Hence, in considering the intentions with which these artifacts are designed, we must take into consideration that such intentions include the distinctive epistemic features that are uniquely related to scientific inquiry. These features will be explored in the following section.

5.2 Defining Scientific Instruments

As briefly alluded to above, we cannot forget that an instrument is still different from a scientific instrument. So, what is a scientific instrument and what does it take to be one? In order to better understand the philosophical implications of this term, before we get to discuss what I take to be the desiderata surrounding it, it is important that we delve into some of the details of its history, since its use has been highly contested and has a rich heritage of debate in the history of science. It is true, as many historians of science have noted, that the term itself has undergone many transformations and that, because of these, it is hard to understand such a term without the risk of imposing modern attributes anachronistically. To start with, the term is a product of the modern world and did not exist in antiquity. For example, though similarly exclusionary and specific terms—such as mathematical, optical and philosophical instruments—were already in use to distinguish more technical kinds of tools from things like a hammer, by the late eighteenth century there was still no cohesive term to unify them all. Importantly, the distinction between philosophical and mathematical instruments is such that some have thought
that it “implied that the observations, measurements and experiments of natural philosophers were made in a search for truth, and thus differed from the observations, measurements and experiments which mathematicians and mechanics made for merely practical purposes.” (Warner, 1990, p. 84) And yet, at some points in its history, the term “scientific instrument” was expressly used in response to industrial and curatorial interests rather than in virtue of any distinctive epistemic criteria (Taub, 2009). That is, what counted as a scientific instrument was at times defined by whether or not there was a commercial interest associated with the denomination. In this way, tradesmen, guilds, government institutions and the businesses servicing them got to decide what counted as a scientific instrument and what did not. Similarly, some definitions were put in place simply in virtue of the fact that some museums needed to decide what to include and exclude in their acquisition efforts. Hence the seemingly clear distinction between mathematical and philosophical instruments was contested, as the use of the term was responsive both to practical commercial sensitivities and to commitments to understanding the world as it is. More often than not, these interests were at play simultaneously. For example, in order to organize a Special Loan Collection of Scientific Apparatus for the British Committee of Council on Education, James Clerk Maxwell provided what is thought to be the first proper definition of scientific instruments in the English language. While everything required for an experimental setup is part of an ‘apparatus’ in Maxwell’s taxonomy, “a piece of apparatus constructed specially for the performance of experiments, is called an Instrument” (Maxwell, 1876 as cited in Warner, 1990, p. 88). Maxwell further distinguished between experiments used for ‘illustration’ and those used for research, noting that it was the latter that mattered for his purposes (ibid). And yet, by then, such kinds of devices were already being distinguished in other languages, such as French and German, in virtue of other features. Even if these distinctions were not made through formal definitions, they were made in ways that grouped these artifacts together or separated them in virtue of their technical functions and capacities and the aims of those involved in their creation and use (Warner, 1990). Importantly, as we will see in more detail below, as a minimum condition, most of these aims included the need/desire to overcome natural human epistemic limitations (Werrett, 2014). Hence, even considering that the use of the term has not remained static throughout history (Taub, 2011), we can identify a unifying minimum requirement grouping these artifacts together and delineating a first—if not also a definitive—epistemic aim that they all shared. This is not the only unifying element we can draw from, however. The identification of technical artifacts as scientific instruments throughout modern history also responds to at least the following intuitive assumptions identified by Van Helden (2020):

1. There is a proper, even essential place for such devices in the study of nature since the human senses alone are too limited for most scientific investigations;
2. The results or readings obtained with them are usually beyond question;
3. Scientific instruments are based on undisputed scientific principles;
4. Newer instruments are more accurate, powerful, or convenient than older ones.
As a precautionary reminder, Van Helden notes that some of these modern assumptions or intuitions were not always present at the time of an instrument’s introduction and that they were only adopted “as the use of instruments became commonplace” (ibid). However, there is evidence that something like these assumptions was at play even in the early days of modern-day science. We can set aside the last point on the list regarding accuracy, power and convenience as simply a matter of fact, granting that it may be an observation of the way technological development has progressed in scientific inquiry without ascribing much of a philosophical or normative commitment to such a fact. Whether this fact is indeed true, or whether at times older, less accurate, less powerful, or less convenient artifacts have been considered more “scientific” than others, is not relevant for our current discussion. The important elements for us here will be the other three points on the list. Consider the first point in Van Helden’s list. According to Werrett (2014), Lorraine Daston and Peter Galison (2021) suggest that instruments in the nineteenth century came to be regarded as a means to safeguard experimental practice against human fallibility. This fallibility was mainly perceived as a collection of embodied “weaknesses, personal biases, and idiosyncrasies” (Werrett, 2014, p. 11). Hence, Daston and Galison interpret these efforts as attempts to exclude the human body from inquiry. This line of criticism, however, also extended beyond the shortcomings of the senses that rationalist philosophers were concerned with to include other cognitive and epistemic limitations as well. Bacon’s understanding of science, for example, was as a practice that involved the “expurgation of the intellect to qualify it for dealing with truth” and the establishing of a “display of a manner of demonstration […] superior to traditional logic” (as cited by Hon, 2003). The hope, then, was the systematic exclusion of the whole of human fallibility and not just that of the body or the senses. Hence, there is a sense in which, even at the time, the fallibility of our limited epistemic agency was used to justify, and even call for, the use of instruments that were meant to supplement or even replace such fallibility in careful inquiry. Furthermore, there is additional historical evidence that although such norms may have developed in tandem with the introduction and gradual acceptance of some instruments, there was already something in place guiding the introduction of such norms at the time some scientific instruments were introduced. And while the foundations of scientific instruments seemed to imply a defense against our epistemic shortcomings, they also had an extremely productive and operational aspect to them. Consider once more the advent and sanctioning process of the telescope. As is well known, Galileo himself was not the first to come up with the idea of a telescope. As Galileo himself notes, he first heard of such a device having been created and peddled by a Dutchman, Hans Lippershey. At the time, the study of optics and the creation and playful combination of several different kinds of polished glasses was well underway. However, no clear attempt to consolidate this knowledge and these glass artifacts into a single practical instrument with the specific function of the telescope is documented before Lippershey’s instruments. More particularly, combining both convex and concave glasses in a tube seems not to have
occurred to anyone up until then. It is certainly true that, even if this had been the case, it was Lippershey who first successfully pursued a practical manufacturing method and use for the instrument, as he explicitly sought out the endorsement of governments and financial backers (King, 1955). At the time, however, the instrument was dismissed even for practical uses such as military or navigational endeavors (Biagioli, 2010, p. 204). While at least two other contemporaries of Lippershey also filed documents for similar creations around the same time, the exact historical details of this part of the telescope’s history are not quite relevant to our purposes. What is interesting, however, is how Galileo went about acquiring a telescope once he heard the rumors of its existence and some non-trivial details about the use of two distinctly carved glasses for it. Despite his own claims to the contrary, Galileo may have had access to an early telescope shortly after having heard the rumors of its invention, and furthermore he may have had detailed descriptions of its mechanics at least once, albeit in a highly secretive and brief manner (Biagioli, 2010). However, rather than reaching out through his network of intellectual and political acquaintances to acquire one, he set out to build one himself. In order to do this, he did not just polish lenses and test combinations of them. Rather, he started by carefully studying the principles of optics known at the time, particularly the study of refraction, and then worked out the mathematics and the physics required for the construction of such an instrument. Hence, Galileo did not just acquire an opaque instrument. Instead, upon hearing of such a device, he worked from principles to create one. This is an importantly divergent method of technological development. While Galileo’s use of such a method was not necessarily novel, one could easily qualify it as significant in that it helped solidify it as a highly preferred practice in years to come. What I mean by this is the following. Even at the time of Galileo, approaching such a task by starting out with theoretical underpinnings first was worth noting. Galileo himself contrasts both his creation and his method with those of a “simple spectacle maker” by stating that his was a product not of empirical tests or of chance but of pure reasoning.4,5 This was an important distinction at the time and, as we will see, it still is in the context of modern conceptions of scientific inquiry. The same sentiment, for example, was expressed later by Christiaan Huygens, who thought that the original work of Lippershey was the product of a glass technician with little theoretical background—an observation that at the time was meant to be diminishing rather than flattering (King, 1955; Van Helden, 1974).
4. See King, 2003, p. 36.
5. The chronology of the construction of Galileo’s telescope, as well as the extent and nature of the access he may or may not have had to an instrument prior to constructing his own, is a highly debated subject in the history of science. For a thorough historical account of these debates see Biagioli’s (2019) account of Paolo Sarpi’s letter as evidence that problematizes Galileo’s own accounts and conventional chronological accounts of his telescopes. This problematization, however, is not of relevance to our point here regarding the importance, to Galileo and his peers such as Huygens, of theoretical understanding and reason in scientific inquiry and instrument development.
While working knowledge was certainly a praiseworthy mark of a craftsman, what Galileo was doing was not merely the work of a craftsman. This point in itself, as we will see, is already an important step towards distinguishing scientific inquiry from other conventionally experimental crafts, both practically and epistemologically. As Biagioli (2010) notes:

in The Assayer [Galileo] did not argue that he was the inventor of a telescope that was unique by virtue of having better resolution and enlarging power than all previous instruments. (That would have been an engineer’s argument, and Galileo, eyeing the court, did not want to cast himself as an engineer, not even a very good one). He claimed, instead, a kind of inventorship defined by a specific process of invention (a reason-based one) rather than by the quality of the product resulting from that process. (Biagioli, 2010, p. 229)
Even this early in the history of the telescope we can already begin elucidating something of a criterion for principled inclusion onto the canon of scientific instruments, namely the following:

(a) That the artifact be constructed according to principled knowledge and not merely through working knowledge.
(b) That the processes by which a novel technology is assessed be somewhat independent from any one of the stakeholders (i.e., instrument-makers and instrument users).
(c) That the norms by which these assessments take place be distinct (i.e., subjected to superior rigor) from ordinary epistemic practice or from the way in which we sanction ordinary epistemic practice.

Before delving into the items in this second list of criteria, note that these norms align with Van Helden’s list above while also presenting notable differences, particularly in terms of their nature and status. Van Helden considers the items on his list to be assumptions, acknowledging the possibility that they might not have always been in place during the emergence of new technologies. However, when we examine the self-professed methodological preferences of Galileo and Huygens, we can already discern that elements similar to these assumptions were rather treated as commitments and were already integral to the practical steps followed by dedicated practitioners. These commitments held, at the very least, some non-trivial aspirational value and were by then seemingly ingrained in the perceived epistemic obligations of scientists of that time. This challenges Van Helden’s suggestion that these criteria were either assumptions or that their aspirational value emerged at a later stage. Nevertheless, let us go through each one of the items in this second list above so that we can best understand what they mean in the context of instrument validation in scientific inquiry. The first point—that the artifact be constructed according to principled knowledge and not merely through working knowledge—reflects a restrained version of Van Helden’s point that “scientific instruments are based on undisputed scientific principles.” It is a restrained version because it does not stipulate that such principles be ‘undisputed.’ But it also reflects the idea that any instrument in science should seek to transcend the limitations of the human senses. Implied in this is the fact that, like others before him, Galileo did not think much of chance discoveries.
Simply playing around with things and gathering working knowledge does not a scientific instrument make. As we will see, simply knowing what something does, knowing that it fails or functions without knowing why it does so, is insufficient to assess an instrument’s reliability. This point, of course, has a long philosophical tradition in epistemology dating back as far as Plato’s time and his definition of proper knowledge as justified, true belief. The justification condition in the conventional tripartite definition of knowledge is meant to disqualify accidentally true statements from counting as genuine instances of knowledge. Similarly, the observation by a spectacle-maker of a magnifying phenomenon created by the accidental juxtaposition of two different types of glass was not as impressive to Galileo as the ability to infer such a phenomenon from well-founded principles in refraction theory, as we saw in the brief story told in the introduction to this book. Here, it is important to note that Galileo is not only dismissive of the epistemic import of chance discoveries by themselves. What is evidenced by Galileo’s attitude, as cited above, is also a minimizing of the epistemic status of empirical work that is not accompanied by principled theoretical understanding, whether accidental or not. One may purposely design the repetition of observations to infer that a given arrangement of things, say chemical agents, produces a specific reaction, without having a full understanding of the theoretical principles behind such a phenomenon. Similarly, a glass maker may have tried, even systematically, to put together several arrangements of glasses while observing the different results without knowing much about why these results occurred. It is this that Galileo considered of lesser epistemic import when compared to achieving the same results in virtue of an understanding of theoretical principles. It was thus that Galileo set out to build not one but several improved versions of a telescope within a few weeks. Through his connections in Florence, his theoretical prowess and the fruitfulness of his findings, Galileo managed to remain at the center of telescope-making technology until his death (Van Helden, 1974, 1977, 1994; Van Helden et al., 2010). This, by itself, is worthy of remark given how widely distributed the technology became, and speaks to the power and influence he enjoyed while alive (Biagioli, 2010). Nevertheless, this also signals that neither Galileo’s name by itself, nor the prowess of his instruments on its own, nor the theoretical foundations of his improvements were enough to establish the telescope as a fully sanctioned, reliable and trustworthy instrument. In fact, as mentioned in the introduction to this book, even having direct contact with the instrument and with Galileo was simply not enough to convince other practitioners of its usefulness. This shows us that personal authority alone is not enough to confer scientific credibility upon a novel artifact, even at the time of Galileo. At the very least, it shows that such authority does not simply rest with its creator. As we will see below, the legitimacy of the telescope as a scientific instrument was not even conferred by the final consensus of the community of practitioners surrounding its use and development.
Rather, its sanctioning was the product of a series of methods and procedures that sought to transcend both the epistemic limitations and concerns of the individuals involved and the limitations of the scientific community as a whole. This is precisely the claim
of the second point in our list of desiderata above—that the processes by which a novel technology is assessed be somewhat independent from any one of the stakeholders (i.e., instrument-makers and instrument users)—which we will analyze in detail below. When there is only one producer of a specific instrument and that instrument provides evident advantages not available prior to its advent, we may have no choice but to trust the instrument maker as an authoritative source. In other words, one may be entitled to trust the instrument and its maker for lack of an alternative. Galileo was also a practitioner of the science, namely astronomy, and had been making progress within it. Galileo, after all, had the most access to his very expensive instrument. He could take his time and obsessively track elements in the sky. However, after the initial advent of his perspicillum, and later with the early days of his telescope, Galileo began claiming things that were not so evident to others. In part this was due to his exclusive access to his powerful instrument, but it was also in part due to the fact that his instrument required a lot of background knowledge on the part of the user even when access was granted. The community of astronomers, mathematicians and intellectuals involved in the practice of astronomy required a certain amount of procedural and propositional knowledge about the instrument in order to use it accordingly. That is, they required a certain level of theoretical and mathematical understanding regarding the heavens and the objects therein but also some principled and procedural knowledge regarding the ways in which to track them with this invention. And even then, just having those two things was often not enough to corroborate Galileo’s findings. So, when he released his findings regarding Jupiter’s moons, he led a serious methodological and informational campaign, along with diagrams, to convince others of their validity. The practice of releasing astronomical findings accompanied by a series of diagrams was in itself a novel technique, preceded only by some earlier accounts of comets. So, this alone reflects a new element in the way astronomy was practiced thereafter. And while the instrument that Galileo was using enabled this new element of astronomical practice, it would not be correct to ascribe too much responsibility to the instrument itself for Galileo’s decision to draw what he saw. Nevertheless, even this proved to be insufficiently authoritative a process to sanction his invention (Van Helden, 1994). This is in part why other, independent, methodologies had to be put in place by others, like Kepler and his associates, in order to assert the authority of the instrument. As we will see in the following paragraphs, the communities surrounding these efforts were quite heterogeneous, including the expertise, authority, trustworthiness, and epistemic status of statesmen, mathematicians, philosophers, and merchants (Drake, 1984; Van Helden, 1994; Malet, 2003). This is where the second element of our criteria enters the picture: that the processes by which a novel instrument is sanctioned transcend the interests and biases of each one of the parties involved.
While we may be tempted to call this a mind-independent process (and I certainly will call it thus later on), at this point it is only necessary to point out that even if at some level the processes themselves were dependent on the interests and biases of the many groups involved in the development of the artifact, these processes nevertheless transcended such interests and
biases in such a way that they were subject to checks and balances external to them. Hence, while the method responded to the interests and biases of those deploying it, the verification process of such methods did not. That this is the case is evidenced by the following three developments. The first one has to do with Kepler’s involvement in getting both the instrument and its results to be trusted by others, which we already saw in the introduction to this book. As we saw there, it was not just his word that enabled this to happen. Rather, it had to do with how he went about it, which, once again, proved to be, even if not novel, a watershed moment in the mainstream adoption of certain practices that remain central to scientific practice even today. The second development was the decision by others to involve a reputable third party, Prince Leopold, to vouch for the reliability of the instrument. More importantly, it was the decision of that third party to deploy substantial resources to outsource the independent assessment of the instrument and hence to ensure appropriate warrants behind its acceptance. As we will see, when Prince Leopold was asked to weigh in on a controversy about the relative reliability and merits of different telescopes, rather than simply using his royal status and his reputation as a decisive factor, he commissioned a group of astronomers, mathematicians, etc., to come up with a suitable set of processes by which to do it. Thus, he established the Accademia del Cimento. Importantly, the offloading of the sanctioning to an independent panel of experts and the development of appropriate warrants was in itself an important epistemic development, since the task involved not just the assessment of the epistemic status of the telescope, but also the assessment of the epistemic status of the criteria and the tests developed and deployed for such a task. Notice, yet again, how rather than being assumptions, these were aspirational epistemic commitments which, if not completely established at the time, were at least already ‘in the air’ and already valued by those involved in the technological and scientific development of the times. The third development in this story comes as a result of the latter development. Given that Prince Leopold had essentially created and incentivized an institutional infrastructure to create tests and to test different instruments against each other, an industry emerged. This industry, as we will see, was composed of stakeholders that did not belong to the class of users (mathematicians, philosophers, astronomers or economic and political elites of the times) and at times did not even belong directly to the community of craftsmen immediately involved with the construction of telescopes, such as glass makers and others. Rather, they were paper makers, font designers, land surveyors, and others. Let us begin with Kepler’s involvement in the sanctioning of the telescope. As briefly mentioned above, Kepler’s reputation alone, and even his disposition, more amiable than Galileo’s, were not sufficient to get others to trust Galileo’s instrument or its results. Rather, as mentioned in the introduction to this book, Kepler had to devise a way to ensure that any doubts from critics, skeptics and cynics could be assuaged. In order not to force the reader to revisit the introduction of this book, let me simply remind us of what was said, risking some repetition for argument’s sake. Van Helden describes the recipe for certifying observations invented by Kepler thus:
“Make the observatory a public space by enrolling fellow observers of high social status or other excellent credentials, have them draw what they see independently, and then compare results, thus confirming the observations by means of witnesses.” (1994, p. 13)
Further, Van Helden notes, Kepler made sure to tell his readers that “Prague was his witness that these observations were not sent to Galileo although he owed him a reply.” (Van Helden, 1994, p. 13). Hence, here we can see that Kepler took written note of his methodology for justificatory purposes. In modern terminology, he proclaimed epistemic and testimonial independence as a third party free of conflict of interest and influence, and made sure to note that at the time of publication there had been no contact with biased sources. There was an explicit move to remove or reduce the possibility of human bias, unconscious intervention, and error by way of methods that could address as many counterfactuals as the skeptic might come up with. Let us look at Kepler’s efforts a bit more carefully. The first step was to have a public event of sorts. Why is this important? Mainly, because the observation of stars was always somewhat of a public event before the telescope arrived. While it may not seem that important, the fact that two, three or more very different people could, just by looking up towards the sky, see roughly the same things was epistemically significant. One was not alone in the experience of things. Hence, mental or perceptual distortions could to a certain extent be discarded in the reporting of observations if several people reported seeing similar things. This in turn ensured that a certain solipsism about observations was kept at bay, at least within the privileged group with access to the instrument and the knowledge required for meaningful engagement with it. This practice was not novel. Rather, it had been in place in the use of early observatories. Tycho Brahe’s star-gazing instruments, large architectural projects, for example, were not only public structures accessible to many people at the same time. They were also accompanied by a set of detailed instructions about their use and the nature of sightings. The theoretical principles behind Tycho’s methods and structures were well understood at the time (Drake, 1984; Van Helden, 1994). The theoretical underpinnings of optics that Galileo deployed for the construction of his instrument, on the other hand, were not well understood by many at the time. Hence, doubt and wariness about the reliability of the instrument were warranted amongst those who thought of themselves as seeking and ensuring the kind of knowledge that was less susceptible to doubt than conventional epistemic practices. As we will see in the following paragraphs, this will prove highly important for our discussion, for it is here, through these concerns and efforts, that scientific inquiry can start to be understood as a distinct kind of epistemic practice, particularly one that not only involved but rather required the following of superior epistemic norms. Besides the fact that the events Kepler organized were somewhat public, a second important part of the process had to do with whom Kepler invited to these evening sightings. He made sure, for example, that the people he invited to these soirees were of recognized repute. He included rich, influential but also learned people. The idea behind such a roster was, of course, that these were men thought of as credible. Although these details may sound like trivial anecdotal aspects of Kepler’s parties, it is important to note that they were all epistemically motivated and
designed. While the people involved in these sightings could in principle be doubted, the fact that they came from specific social backgrounds made this, at least by the social standards of the time, less likely. A key element of these public events was that Kepler himself guided the use of the instrument. Hence, not only did these events provide an opportunity for first-hand acquaintance with the instrument, but this was also complemented by an expert guide on how to use it. As is evident from several accounts (Van Helden, 1974, 1994; King, 1955; Malet, 2003, 2005), including the anecdote at the start of this chapter, although earlier versions of the telescope involved what we would now recognize as more of a direct perceptive experience—at least compared to the computer-mediated instruments of today—what was seen was not always immediately obvious. Some of the people observing through these earlier tubes were aware of the effects of aberrations due to poor lens craftsmanship or due to overreach—trying to see things beyond the capabilities of the instrument itself. Hence, even having direct access to the instrument and looking with one’s own eyes through it was not a guarantee that what one saw was indeed something that could withstand skepticism of one form or another. Kepler’s guiding of the process therefore added an extra layer of confidence that would not have been there otherwise. Importantly, this by itself was not deemed sufficient by Kepler. Although this was a step in the right direction towards the mitigation of doubt about Galileo’s findings and the prowess of his instruments, more epistemic hygiene was required. As we will see, although part of the argument in this section is that scientific epistemic standards are inherently more onerous than those imposed on ordinary epistemic practice, the main argument in the development of these anecdotes about the procedures and steps taken by Kepler is that epistemic hygiene of this kind is even more necessary in the initial stages of the introduction of a novel technological artifact into the practice of scientific inquiry.6
6. Here, it is important to note that there are certain views of technological development that reject the notion that technological artifacts are “introduced” into a given community. Rather, these views emphasize, technical artifacts are born already enmeshed in sociological interdependencies (Ropohl, 1999). To suggest that a technical artifact is designed and developed in isolation and only later deployed—or “plopped”, as Deborah Johnson (2004) characterizes it—into a societal context is to buy into what is deemed a naïve framework of technological determinism. Technological determinism is comprised of two main premises: (a) technology is developed in isolation, and (b) once it is deployed in its final form it has transformative effects on the core of societal structures such as its values, interests, and futures. In contrast, these views suggest that all technology begins and ends embedded in and imbued with societal contexts. While there is some truth to this, it is important to consider that at the time of Galileo, when the opportunities to engage in technological development, and the characters engaged in it, were few and far between, the isolated nature of technological development was more of a reality. It is also important to note not only that there were incentives for secrecy and isolation already in place at the time by competing governing institutions, but also that Galileo himself was highly protective of his intellectual endeavors. The Dutch, for example, would incentivize anybody who thought they had developed a technical innovation to first contact a government official to secure an exclusivity contract which guaranteed either funding for future development or a life-long pension bounded by secrecy or the stopping of further inquiry into the technology itself. This was in part to protect the technology from falling into the hands of the Spaniards, with whom they were warring at the time.
Another important element of Kepler’s efforts was that, as reported by him, he did not communicate with Galileo himself during these trials. He was not even able to get an instrument from Galileo himself. Why is this important? Beyond the historical detail, it evidences an effort to shield the sanctioning practice from doubt or skeptical criticism.7 Particularly, it shields it from accusations of possible bias that could discredit the enterprise. One thing that is very important to note is that there is an added dimension to our analysis here that is often neglected in the history and philosophy of science literature. These steps were meant to shield not only the scientific results of Galileo’s findings from skeptical criticism. Rather, Kepler’s efforts were also providing warrant for the reliability, and hence trustworthiness, of the instrument with which such results were found. This is a crucial distinction. Often, the notions of peer review, double-blind experimentation, or reproducibility are treated as relating exclusively to scientific results, or propositional claims. That is, it is often the case that such procedures are deemed to justify scientific findings. However, what is happening in the context of the introduction of novel technologies is that such methods also serve to solidify the epistemic standing of the material objects designed to yield such findings. This is the case here and it will prove to be the case henceforth in the history of technological development in scientific inquiry. Furthermore, as will become evident below, the interventions financed by Leopold meant that such a sanctioning process was applied not only to the results of an instrument or to the instrument itself, but also to the methods of sanctioning both the instrument and its results. As we saw in the introduction to this book, a fourth and extremely important element of Kepler’s efforts must not go unnoticed. And that is that he devised a way for the attendees of his evening sightings not to influence one another or the experiment in a way that could undermine its findings. Part of these procedures was described by Kepler himself as follows:

“We followed the procedure whereby what one observed he secretly drew on the wall with chalk, without its being seen by the other. Afterwards we passed together from one picture to the other to see if we agreed.” (Kepler, 1611 as cited by Van Helden, 1994, p. 12)
This method, though nowhere near what we would now consider good scientific procedure, nevertheless points to an obvious aspiration of epistemic hygiene such that at least some immediate doubts about the reliability of the results could be minimized. At the very least we can see that neither reliance on social authority nor epistemic entitlements concerning the transfer of reliability warrants from one individual to another was sufficient. Furthermore, we can see that consensus amongst those present was also not deemed sufficient to confer authority on the instrument, the findings, or the process. Rather, each one of those elements had to be isolated, protected if you will, even if it was to a minimal and flawed extent, from the
influence of individuals, from a group consensus, and even from any biased methodological interventions.

7. Although there is ample literature on the subject in the philosophy of science, what I want to point to is more of an observation of a common assumption in most influential debates, particularly debates that are critical of the Baconian or early positivist views of science. One common misconception of this process is to focus on the idea that one is building certitude while neglecting the fact that there is a secondary, at the very least coextensive, function to this process: defending against doubt.

Hence, we saw that Kepler decided to hold a public event; he did not think that merely getting a telescope himself and corroborating Galileo’s findings sufficed. He decided to invite certain kinds of people to such events. Not just anyone could attend, but rather only those he and others deemed qualified to provide testimonial credence to the process, whether this be by political influence or by epistemic standing. We also saw that he guided the handling of the instrument for careful and informed use. Additionally, he explicitly and deliberately had not had any contact with Galileo while these evening sightings were happening. Importantly, some of the steps in these procedures show that there was reason to think that simply reaching a consensus of competent agents was not sufficient on its own to warrant trust. At the very least, it shows a recognition that having something other than mere consensus was preferable to consensus on its own when it came to trusting an epistemic process. And finally, we can see that Kepler had both foresight and intention concerning the guarding of the results and the process from influence that might put their credibility in question. This included guarding the process from the participants themselves. While Van Helden (1994, p. 18) interprets these accounts to be evidence of personal authority as the driving force keeping Galileo’s telescope technology at the forefront, it is important to see that neither Galileo nor Kepler was appealing to such authority to make their case. While in hindsight it may look like Galileo’s name was the cause behind his technology’s success—and it is undeniable that it may have played a role in ensuring financial and social support that others did not have—it nevertheless cannot have been the whole story. In fact, it could not even be a significant part of the explanation. This is because, as I already mentioned, even when he was touring his telescopes and guiding sightings himself, doubts and skepticism regarding the reliability of this novel instrument did not go away. If his personal authority was all that mattered, then he would not have had to go through the many pains and efforts to convince others of the truth of his findings and of the reliability of his technology. At every turn, it seems that what Van Helden calls personal authority can be reduced to methodological sanctioning that signals the objectively independent procedures that sought to ensure that skepticism and doubts about the possibility of procedural bias could be kept at bay. Furthermore, despite Kepler’s efforts and Galileo’s own success in getting significant support from important people in Rome literally certifying “the legitimacy for the telescope and the discoveries made with it” (Van Helden, 1994, p. 13), the telescope was merely deemed something akin to a ‘mathematical’ instrument and not a ‘philosophical’ or empirical one. This is because while mathematicians in Rome had indeed been able to corroborate the observations as described by Galileo regarding Jupiter, they were cautious to note that the interpretation of these observations was an independent matter that had not yet been settled. In other words, while they were able to see something not seen before with his telescopes, this could still be an artifact of the instrument’s mechanics and not necessarily something out in space.
What this did was to sanction/certify Galileo’s instrument as a mathematical instrument but not as a philosophical, or what we now call empirical, instrument (Van Helden, 1994; Turner, 1969; King, 1955; Zik, 2001). Still, Galileo made sure he remained the authority on
telescope-making and astronomical findings arrived at with the instrument. Furthermore, through his association with powerful people both in Florence and in Rome, he ensured that the heart of telescope-making remained in Florence as long as he was alive (Van Helden, 1977, 1994). Besides Galileo’s central involvement in telescope technology, advances therein followed many trajectories with many different and important players. Nevertheless, telescopes in general remained a novel technology that was not easy to use and whose results were not very robust on their own. Consequently, astronomers remained justifiably skeptical of them as precision instruments. Despite Galileo’s success with his own instruments, for roughly the first quarter of a century after their invention, telescopes were mainly trusted only as military and naval tools to observe things here on earth rather than in the skies. This was already a big step in the sanctioning of their capabilities, since an early letter reporting on the instrument’s invention disqualified the first versions even for such uses (Biagioli, 2010). Nevertheless, after Galileo’s death, it was clear that other telescope makers were also producing high-quality instruments and significant findings with them. However, as the many different instruments kept being pushed to their limits and their findings continued to yield unprecedented information, several controversies arose regarding the nature and interpretation of astronomical phenomena. Details such as Saturn’s rings, for example, continued to prove elusive to astronomers for quite some time. The only way to settle controversies about unclear astronomical phenomena was to test the power of the different telescopes in indirect ways in order to determine which sighting was the most accurate. Huygens, for example, tried to settle such controversies by appealing to other, clearer discoveries arrived at with his instruments and then extrapolating from these successes to prove the superiority of his instruments in more ambiguous cases. The matter was ultimately brought to the attention of Prince Leopold when several telescope makers appealed to him, as a man of letters, arts and the sciences—and generous funder of such enterprises—to settle some such controversies. Leopold, rather than settling the issue himself by decree, commissioned a group of experts to do it for him. Importantly, a crucial aspect of this commission was that the group of experts, known as the members of the Accademia del Cimento, was not only tasked with testing the instrument. Rather, the order of operations was first to come up with the best possible method by which to test it. This led to the emergence of a series of competitive races initiated by the Accademia del Cimento’s efforts. These races involved not just one, but several interested communities. They also involved certain standardization practices for testing: for example, the instruments had to be of the same length, of comparable technology, tested with the same methods, at the same distances, etc. Different communities swore by different telescope makers and developed ways to mobilize the credibility of their preferred instrument-maker. Telescopes were tested with objects on earth and then tests of the tests were conducted. That is, as the instruments were tested so were the tests, in order to ensure that they were measuring what they were supposed to be measuring (Van Helden, 1994). This also led to a change in methodology regarding error and bias correction (King, 1955; Zik, 2001).
Letter tests, for example, the standard targets for early
telescopes—like the ones you find at the eye doctor but placed miles away—went through significant modifications. They changed many times over, not just in content but in style and material too. Problems emerged early on when the tests contained fragments of literature that could be guessed by well-read segments of the population—the very same members of the population participating in testing the telescope—after reading only the first lines. Fonts too underwent changes due to tests on telescopes: fonts that made letters easily recognizable through the instrument, even if not clearly seen, were discarded (Van Helden, 1994, p. 26; Zik, 2001). Special paper, special ink, and even special ways of drying the letter tests were developed. When the printing of the letters left marks on the paper, the paper had to be beaten flat to ensure that there were no shadows cast by the print that would interfere with the clarity of the observations. In short, the telescope shifted authority to quality control standards even in the printing industry and away from practitioners. Thus, the telescope triumphed and is now known to be one of the first-ever scientific and precision instruments (Koyré, 1957). As we saw above, Kepler’s involvement was simply crucial. But more importantly, it was the procedures devised by Kepler and later by the members of the Accademia del Cimento—the academy of experimentation—that made the greatest difference. It was a process of building trust, constructing trust, and not just eliciting trust or extracting it from agents. Even Prince Leopold, who was tasked with being a judge himself, decided to delegate this honor to the Accademia del Cimento, which was to create both a process and criteria by which to judge what the best instrument was. They did it by also appealing to independent standards of evaluation. These required not only the participation of a third person but also of a whole body of properly vetted witnesses as well as a whole industry of assurances. This happened inside and outside of the community of practitioners close to the scientists themselves. This was not just about individual practices anymore; it was about epistemology. In this sense, the practice behind using telescopes changed even what was considered credible evidence. It also changed the devices and norms we use to assess and ensure that credible—or at least appropriately defensible—evidence is indeed gathered. This change in authority is not merely, as some suggest (Van Helden and Hankins, 1994), the result of the social and epistemic standing of individual members of sanctioning committees. Sometimes the sanctioning itself is the product of a methodology that in and of itself reflects superiority over conventional epistemic practices in everyday life (Zik, 1999).8 This point speaks to the importance of superior epistemic norms embedded in scientific inquiry and the sanctioning of its components. These epistemic norms are often in fact in tension with pragmatic concerns that call for the instrument to be deployed and understood as a series of localized empirical successes that further its deployment.
make the case for the reliability of the instrument or the truthfulness of the results arrived at with it. These were not mere assumptions that took our epistemic aspirations and capacities for granted. Rather, each one of the efforts undertaken by Kepler, by the members of the Accademia del Cimento, and by all the other mathematicians and astronomers, including Galileo himself, was born from the opposite assumption: reasonable skepticism was called for and could only be overcome with similarly appropriate, reasonable means to mitigate it. This point aligns with similar anxieties and commitments, voiced by Bacon and others at the time of Galileo, that science, as we saw above, had to involve both the "expurgation of the [human] intellect" as if it was qualified "for dealing with truth" and the establishment of a "manner of demonstration […] superior to traditional logic" (Hon, 2003, p. 195). This brings us to the third point in our list, namely that the norms by which these assessments take place be distinct (i.e., subjected to superior rigor) from ordinary epistemic practice and distinct from the way in which we sanction ordinary epistemic practices. As we will see, these facts—that a process had to be devised to provide strong reasons of support for an instrument's use, and that the ultimate sanctioning of an instrument such as the telescope was the product of such an independent, open process and not of direct community consensus—are evidence of yet another important feature of scientific inquiry as an epistemic practice: it is simply not the same kind of epistemic practice as ordinary, everyday epistemic practice. In particular, science does not function under the same epistemic norms and requirements as everyday epistemic practices. A simple test suffices to demonstrate as much: swap these norms and requirements and see if either kind of epistemic practice can be conducted appropriately. That is, conducting ordinary epistemic practices with the norms and requirements of scientific practice would fail as an ordinary epistemic practice. Similarly, conducting scientific inquiry with the standards of ordinary epistemic practices would simply be bad science.9 While some would be quick to point out that this distinction in norms and requirements is simply a matter of degree rather than kind, the latter point proves otherwise.10 Without getting into much detail, again, consider
substituting whatever epistemic norms are commonly associated with scientific practice with whatever norms are commonly accepted in everyday epistemic practice. The result is that neither can be carried out appropriately. Carrying out everyday epistemic practices while following strict scientific epistemic norms would result in an unnecessary and often self-undermining burden. Similarly, conducting scientific inquiry whilst only requiring that everyday epistemic requirements be met risks undermining what makes science a scientific enterprise to begin with: its norms and rigor. We can apply a similar logic to the incorporation of technical artifacts into both our daily lives and science. Introducing a non-calibrated and/or non-sterile utensil from our kitchen into a laboratory setting can jeopardize the aims of a scientific experiment, while requiring laboratory-grade precision and hygiene norms in our kitchen can seriously jeopardize ever making dinner. Hence, if we accepted either assertions or instruments in science the way we accept assertions or devices in everyday life, we would not be doing science. Conversely, if what it took to accept an assertion or a device in everyday life were similar to what it takes to do the same in science, very little would get done and very little would be believed. There is a philosophical lesson to be learned from all this—in particular, a lesson about philosophy of science. In gradually elucidating the history behind the final sanctioning of one of the most established scientific instruments as a scientific instrument, we also elucidated a criterion by which it qualified as such. As we will see below, we are also elucidating the reasoning behind such a criterion. Consider the following, and consider it within the context of reasons to uphold a specific methodology. Nagel (1989) suggests that there are certain kinds of reasons to do something, say take an analgesic, that are agent-neutral reasons. What he means by agent-neutral is that such reasons for someone to do something are reasons for anyone in the same circumstances to do that same thing. The reasons, then, are neutral regarding the identity of the agent. Kepler's reasoning and his methods, understood as part of an epistemic defense strategy (rather than a quest for a conclusive and certain explanatory or evidential methodology), show us that the reasons why Kepler went about doing the things he did in the way he did them are agent-neutral in this same sense: if anyone wanted to preempt as many of the possible reasons to doubt the inferences of a method, these are some of the things that should be done. Moreover, these very reasons may serve as sufficient grounds for individuals who wish to effectively justify and authorize a new technology within the realm of scientific investigation. If this is indeed the case, and if we possess valid justifications to adhere to at least some variation of the aforementioned criteria while evaluating the implementation of a novel technical tool in formal scientific inquiry, and if we regard computer simulations as such tools, then we can delve deeper into scrutinizing their construction, evolution, and implementation.
8 By superiority I simply mean a level of epistemic rigor and/or epistemic hygiene that is meant to overcome some limitations inherent in conventional or everyday epistemic practice.
9 Of course, there is ample literature concerning what is and is not science—this is called the demarcation problem. As will become clear later in the chapter, my claim is indeed stronger than what has been proposed so far. Science practiced merely by following the norms and requirements of ordinary epistemic practice does not look at all like science. However, for the purposes of this simple test, it suffices to say that science conducted in such a manner would simply be bad science.
10 Some may look to credit such a view to idealist and antiquated epistemic-hygienist standards detached from actual scientific practice. As we learned from Kuhn (2012) and later social epistemologists, science, they would say, is not only a direct subset of ordinary epistemic practice, but cannot escape the social dynamics that inform and precede it. This may perhaps be the case; however, the point here is that once these standards are instituted and open to scrutiny, whether or not they are met is not a matter of arbitrary sociality. Others may go even further and suggest that science is not even an epistemic practice at all; it is a problem-solving enterprise, and so epistemic standards are only important as long as they do not stand in the way of practical achievements. To this latter point my only response is to suggest that they carefully consider whether, and to what extent, the success of such practical endeavors, like problem-solving, is related to or determined by anything to do with successful knowledge-acquisition practices. If there is at least a minimal yet relevant relation to be found, then epistemic factors such as the reliability of knowledge-acquiring practices ought to concern them.
By doing so, we can determine whether they have undergone this procedural examination or whether it is even necessary for them to undergo such an assessment. Many non-trivial considerations lie between the design, development and deployment of a technical artifact and its sanctioning as a scientific instrument. Even paradigmatic examples of scientific instrumentation, such as the telescope, had to undergo extensive trials before they were accepted by the scientific community. However, the first step in recognizing a scientific instrument as such—and having it undergo the comparative and epistemological assessment that would qualify it as a member of this category—is recognizing it as an instrument in the first place. In order to know whether computer simulations can indeed qualify to be used in scientific inquiry as scientific instruments, and what their use and contribution to such inquiry is, we must first establish their nature: what they are, what they do and how they do it. For now, our task is to understand what it means for computer simulations to be the kind of object categorized as an instrument, and what kind of instrument they are.
5.3 Taking Stock and Defining Computer Simulations
So now we know that an instrument is a technical artifact. We have, to a certain extent, distinguished artifacts from non-artifacts, and we have distinguished technical artifacts from other artifacts. We have also distinguished an instrument from a scientific instrument. Furthermore, we have a broad, albeit brief and incomplete, overview of the kinds of computer simulations there are, their use and their functioning. Now it is time to provide a definition of computer simulations that corresponds to our discussion in this and previous chapters. Although it is true, as many have noted, that a fully articulated and unified definition of computer simulation is difficult to extract from a survey of the literature across diverse disciplines, an initial common-sense definition can be generated in a fairly straightforward manner when one regards them as instruments. In fact, this is one of the initial benefits of understanding computer simulations as instruments rather than as practices or extensions of formal methods. As with other instruments, we can identify computer simulations and their role in inquiry by elucidating the way they function, the functions they perform and what they are deployed for. Although these last three things sound like they are one and the same—and they often overlap—they are not. Consider, for example, that a carburetor mixes air and fuel; it does so through the motions of calibrated valves; and it is deployed, mainly, to help combustion in engines. These three things are not one and the same. Providing a definition of computer simulations, however, does not require us to reinvent the wheel with a completely new definition. Rather, we can find evidence that supports our 'instrument thesis' if we just refocus our analysis on the overlooked elements of existing definitions—even in those that can be classified as
narrow definitions of computer simulations. Take the following as an example. A commonly used definition of computer simulations is Paul Humphreys', which goes as follows:
A system S serves as a computer simulation of an object or process B just in case S is a concrete computational device that produces, via temporal process, solutions to a model that correctly represents B, either dynamically or statically. If in addition the computational model used by S correctly represents the structure of the real system R, then S provides a core simulation of system R with respect to B. (2004, p. 110)
Philosophers of science in general—and philosophers of modeling and experimentation in particular—would quickly notice that several of the success terms in this definition are worth unpacking in detail. There is a great deal of work to be done in order to understand what it means for a model to 'correctly represent', for example. While it is true that some degree of similarity is the aim of simulation in general, there must be an explicit acknowledgment of the role of idealization in scientific representation. Sometimes these simulations are successful in accurately representing a real-world system; sometimes they are successful in representing the dynamic development of coherent theoretical principles; and sometimes computer simulations are about other artificial systems, including computational ones, and not natural phenomena. These are all important points to consider. Philosophers of modeling would very likely focus on the nature of computational models and their representational adequacy, for example. Hence, in the case of Humphreys' definition above, the focus would be on whether or not something—whatever it may be—in fact "produces [...] solutions to a model that correctly represents." However, while there is a vast literature within philosophy of science concerned with what it means to correctly represent a real system, this will not be the focus of my analysis here. Rather, the importance of Humphreys' definition for the purposes of this book's project is its characterization of system S. For it is system S that is the simulation. Furthermore, as I will argue, it is system S that we should understand as the instrument in use. I can build a watch on the wrong model of time, but if someone were to ask me to show them the watch, I would not point to the model of time but rather to the watch itself. Similarly, if we are providing a definition of a computer simulation, in my view, we do not point to the model, its virtues or its vices. Rather, I would point to the system that runs it. Hence, we can abbreviate Humphreys' definition to bring forth the elements relevant to our thesis: what system S is supposed to be, what it is doing and what it is deployed to do. While there are some success terms—such as 'correctly'—attached to these three things in Humphreys' definition, we can leave those aside for the moment and focus on the artifacts and functions named. If we do so, the definition reads as follows:
A system S serves as a computer simulation of an object or process just in case S is a concrete computational device that produces, via temporal process, solutions to a model either dynamically or statically. If in addition the computational model used by S represents the structure of [a] system, then S provides a core simulation of [a] system.
Of particular interest to our discussion is the requirement of a concrete computational device. This requirement is evidence that even in conventional definitions of computer simulations, which often draw from debates in mathematical modeling, we find an inevitable departure towards the world of technical artifacts. As we saw in our discussion concerning kinds of computer simulations, there is a way in which this definition still reflects a very rudimentary and early understanding of computer simulations as 'number crunchers'. While this may have worked for the early days of computer simulations, computer simulations are now understood to be much more than that. Therefore, in order to account for the complexity of modern computer simulations while still safeguarding the very important ontological commitment in Humphreys' definition to their required artifactual nature, we can be more specific going forward. Here is our new definition:
A computer simulation is a procedurally arranged assemblage of concrete computational devices that produces intelligible solutions, via temporal processes, to a representational abstraction (model) designed or hypothesized to mimic another system or process.
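The moving parts of this definition can be made concrete with a deliberately minimal sketch in Python. Everything in it—the cooling model, its parameters, the stepping scheme—is invented purely for illustration; the point is only to display the structure the definition names: a representational abstraction encoded as a model, and a temporal process, executed on a concrete device, that produces intelligible solutions to it.

```python
# A representational abstraction (model): Newtonian cooling, hypothesized
# to mimic how a hot liquid loses heat to a room. All values are invented.
def cooling_model(temp, ambient=20.0, k=0.1):
    """Rate of temperature change under Newton's law of cooling."""
    return -k * (temp - ambient)

def simulate(model, initial_state, dt=0.5, steps=60):
    """The 'temporal process': step the model forward in time on a
    concrete device, producing solutions the bare model does not
    hand us directly."""
    state, trajectory = initial_state, [initial_state]
    for _ in range(steps):
        state = state + dt * model(state)  # explicit Euler step
        trajectory.append(state)
    return trajectory

# Intelligible solutions: a trajectory of readable numbers over time.
temps = simulate(cooling_model, initial_state=90.0)
print(f"after {len(temps) - 1} steps: {temps[-1]:.1f} degrees")
```

On this picture, the watch analogy above is visible in the code itself: cooling_model plays the role of the representational abstraction, while the machinery that executes simulate is the concrete system we would point to if asked to show the simulation—the watch, not the model of time.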
For the purposes of this definition, what counts as a representational abstraction can be left broadly construed. At its most basic and broad, a representational abstraction is an epistemic abstraction like any other: a way in which an agent seeking to know a system retrieves and synthesizes the relevant information accessible to them in order to process and understand something about the world. However, because we are dealing with computer simulations, the spectrum of possible abstractions will be limited to what can be implemented on a computing machine. Hence, we can include the word 'model' in the definition. This seemingly simple constraint, however, is yet another way in which the computational device that carries out a computer simulation also narrows what a computer simulation is and can be. If I use an analogy about love to represent a given system, for example, and we consider that love is non-computable, then love cannot be used to simulate anything in the sense our definition intends. Contrary to Humphreys' definition, the definition above also captures two other important and desirable aspects of the concept of computer simulations. First, it captures the possibility that a computer simulation can be an unsuccessful one. This is because this definition captures the artifactual nature of computer simulations more explicitly. Technical artifacts such as scientific instruments can fail to achieve their intended design. They are identified in virtue of what they are meant to do even when they fail to do it. For all practical purposes an airplane is still an airplane when it is on the ground, when it fails to lift off, or when it fails to continue in the air. Similarly, for all practical purposes a failed experiment is still an experiment. These are things that abstract objects cannot do. A word processor that does not process words, for example, is not a word processor. An addition that fails to add is not an addition. Computer simulations, understood as technical artifacts, remain such even when they fail to correctly simulate their intended target: they are a failed computer simulation. Concrete devices can and do fail. Relatedly, since they are an assemblage of procedurally arranged concrete devices that can fail or whose aim may not
be achieved, my definition of computer simulations captures the fact that whether or not computer simulations are a sound addition to scientific inquiry remains, at least in many instances, something that has yet to be assessed, their ubiquity notwithstanding. Secondly, this definition also captures the fact that the processes by which scientists seek to simulate a phenomenon can be hypothetical. That is, the computer simulation, as a technical artifact, can still be understood as a computer simulation even when what it is simulating is only hypothesized as being the way the target system is. Consider a case in which our misunderstanding of a natural system is such that none of what we think we know about the system is true. Running a computer simulation with specifications that come from the content of a wildly misguided theory can still be understood as a simulation, in my view. What the simulation simulates in cases like this is the misguided hypothesized model of a target phenomenon. This is a highly desirable feature of a definition of computer simulations because, as it turns out, most simulations are of hypothetical scenarios, some of which are wildly misguided (Harvard et al., 2021). What I mean by this is the following. At many stages of scientific inquiry, particularly at the stages of documenting observed phenomena, what seem to be the relevant aspects of a system roughly correspond to the immediately salient elements captured by the resources of the scientist. At these stages, the system's characterization is merely hypothesized given the limited information immediately available, and it does not always correspond to the actually relevant elements of the system. However, we can still build simulations of misguided, limited, or false observations, and these simulations can be said to successfully satisfy the requirements of the term as defined above. Understanding, even in a broad sense, the way that computer simulations work and how the various computational processes are stitched together to provide intelligible information to scientists, and having a definition that captures their basic features, allows us to see more clearly how they are more like instruments than anything else they have been compared to. Yet this is only the start. Due to the major influence that the dichotomy between models and experiments has had on the philosophy of computer simulations in the past two decades, we must also work our way through that maze with our newfound perspective to find solid ground upon which to establish this new framework regarding the nature, place and limitations of this new instrument. Instruments, however, as artifacts, are defined by what they do. In other words, what kind of instrument a computer simulation is will be determined by what kinds of tasks it carries out as an artifact. Instruments are designed to help us in specific tasks and can be categorized accordingly. Some instruments are physical enhancers, others not so much. The kinds of tasks in which computer simulations are involved, however, are of a very specific kind. As we will see, these tasks happen to be of an epistemic kind. If this is the case, as we will see in the next chapter, then computer simulations are a specific kind of instrument: namely, an epistemic enhancer.
References

Alvarado, R. (2022a). Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Bioethics, 36(2), 121–133.
Alvarado, R. (2022b). What kind of trust does AI deserve, if any? AI and Ethics, 1–15. https://doi.org/10.1007/s43681-022-00224-x
Baird, D. (2004). Thing knowledge: A philosophy of scientific instruments. University of California Press.
Biagioli, M. (2010). How did Galileo develop his telescope? A "new" letter by Paolo Sarpi. In Origins of the telescope (pp. 203–230). Royal Netherlands Academy of Arts and Sciences.
Biagioli, M. (2019). Galileo's instruments of credit: Telescopes, images, secrecy. University of Chicago Press.
Boge, F. J. (2021). Why trust a simulation? Models, parameters, and robustness in simulation-infected experiments. British Journal for the Philosophy of Science, 75. https://doi.org/10.1086/716542
Daston, L., & Galison, P. (2021). Objectivity. Princeton University Press.
Drake, S. (1984). Galileo, Kepler, and phases of Venus. Journal for the History of Astronomy, 15(3), 198–208.
Ellul, J. (1964). The technological society. Translated from the French by John Wilkinson, with an introduction by Robert K. Merton.
Hacking, I. (1987). Review of Data, instruments and theory: A dialectical approach to understanding science, by R. J. Ackermann. The Philosophical Review, 96(3), 444–447. https://doi.org/10.2307/2185230
Hacking, I. (1992). The self-vindication of the laboratory sciences. In Science as practice and culture (Vol. 30). University of Chicago Press.
Harvard, S., Winsberg, E., Symons, J., & Adibi, A. (2021). Value judgments in a COVID-19 vaccination model: A case study in the need for public involvement in health-oriented modelling. Social Science & Medicine, 286, 114323.
Hon, G. (2003). Transcending the "ETC. LIST". In The philosophy of scientific experimentation (p. 174). University of Pittsburgh Press.
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.
Johnson, D. G. (2004). Computer ethics. In The Blackwell guide to the philosophy of computing and information (pp. 63–75). Blackwell.
King, H. C. (1955). The history of the telescope. Griffin.
King, H. C. (2003). The history of the telescope. Courier Corporation.
Koyré, A. (1957). From the closed world to the infinite universe (Vol. 1). Library of Alexandria.
Kroes, P. (2003). Screwdriver philosophy; Searle's analysis of technical functions. Techné: Research in Philosophy and Technology, 6(3), 131–140.
Kuhn, T. S. (2012). The structure of scientific revolutions. University of Chicago Press.
Latour, B. (1983). Give me a laboratory and I will raise the world. In Science observed: Perspectives on the social study of science (pp. 141–170).
Malet, A. (2003). Kepler and the telescope. Annals of Science, 60(2), 107–136.
Malet, A. (2005). Early conceptualizations of the telescope as an optical instrument. Early Science and Medicine, 10(2), 237–262.
Nagel, T. (1989). The view from nowhere. Oxford University Press.
Radder, H. (Ed.). (2003). The philosophy of scientific experimentation. University of Pittsburgh Press.
Ropohl, G. (1999). Philosophy of socio-technical systems. Society for Philosophy and Technology Quarterly Electronic Journal, 4(3), 186–194.
Simon, H. A. (1969). The sciences of the artificial. Cambridge University Press.
Simondon, G. (2011). On the mode of existence of technical objects. Deleuze Studies, 5(3), 407–424.
Symons, J. (2010). The individuality of artifacts and organisms. History and Philosophy of the Life Sciences, 32, 233–246.
Symons, J., & Alvarado, R. (2019). Epistemic entitlements and the practice of computer simulation. Minds and Machines, 29(1), 37–60.
Taub, L. (2009). On scientific instruments. Studies in History and Philosophy of Science, 40(4), 337–343.
Taub, L. (2011). Introduction: Reengaging with instruments. Isis: An International Review Devoted to the History of Science and Its Cultural Influences, 102(4), 689–696.
Turner, G. L. E. (1969). The history of optical instruments: A brief survey of sources and modern studies. History of Science, 8(1), 53–93.
Van Helden, A. (1974). The telescope in the seventeenth century. Isis, 65(1), 38–58.
Van Helden, A. (1977). The invention of the telescope. Transactions of the American Philosophical Society, 67(4), 1–67.
Van Helden, A. (1994). Telescopes and authority from Galileo to Cassini. Osiris, 9, 8–29.
Van Helden, A. (2020). III. The birth of the modern scientific instrument, 1550–1700. In The uses of science in the age of Newton (pp. 49–84). University of California Press.
Van Helden, A., & Hankins, T. L. (1994). Introduction: Instruments in the history of science. Osiris, 9, 1–6.
Van Helden, A., Dupré, S., & van Gent, R. (Eds.). (2010). The origins of the telescope (Vol. 12). Amsterdam University Press.
Warner, D. J. (1990). What is a scientific instrument, when did it become one, and why? The British Journal for the History of Science, 23(1), 83–93.
Werrett, S. (2014). Matter and facts: Material culture in the history of science. Routledge.
Zik, Y. (1999). Galileo and the telescope: The status of theoretical and practical knowledge and techniques of measurement and experimentation in the development of the instrument. Nuncius, 14, 31–69.
Zik, Y. (2001). Science and instruments: The telescope as a scientific instrument at the beginning of the seventeenth century. Perspectives on Science, 9(3), 259–284.
Chapter 6
Hybrid All the Way Down
The previous chapters offered a series of distinctions that allow us to differentiate computer simulations from both the formal and the empirical elements of scientific inquiry that they are conventionally compared to or subsumed under. Simply put, simulations are something else. These preceding chapters also offered a working definition of computer simulations that already contained the foundational elements for a conceptualization of them as technical artifacts of a specific sort. They are the kind of artifact that we deploy in epistemic contexts to deal with epistemic content in epistemic ways. Hence, they are the kind of instrument that Humphreys aptly categorized as epistemic enhancers. Yet epistemic enhancers do not enhance our epistemic capacities in just one way. As we will see in detail in this chapter, they do so in at least three ways, according to Humphreys (2004). On his view, computer simulations are clearly able to enhance our epistemic capacities in at least two of those three ways, while whether they do so in the third remains an open question. In this sense, computer simulations are indeed a hybrid kind of epistemic enhancer under this taxonomy. This in itself, I will argue, is yet a further explanation of their perceived 'in-betweenness.' Not only is the fact that they are instruments a better explanation of why they may be perceived as being in between formal and experimental methods, but the fact that they may also be hybrid epistemic enhancers further elucidates why this in-betweenness persists even once they are correctly categorized as instruments: i.e., they are hybrid instruments. While Humphreys' framework suffices to allow me to make this case, it is important to note that computer simulations as instruments continue to elicit this hybridity under different taxonomies of instrumentation. In this chapter I provide a detailed overview of how this is the case. As we saw earlier, some philosophers have indeed argued that computer simulations are like certain specific instruments, such as measuring devices (Morrison, 2015; Boge, 2021). However, as we also saw, these views do not genuinely seek to account for their material implementation or the particular epistemic ramifications
that their nature as materially implemented things implies.1 Rather, as we saw in Morrison's case, what these views were trying to account for was the fact that certain empirical practices that we acknowledge to be experimental are nevertheless closer to modeling techniques than to material experimentation. In this sense, I deemed such approaches deflationary.2 In contrast, in this chapter I want to offer the thesis of computer simulations as instruments in virtue of a serious ontological commitment. That is, I want to argue not only that they are like instruments, but that they are instruments. Furthermore, following Davis Baird's views on the epistemic status of instruments, I want to position such a claim within the understanding that instrumentation is in itself a distinct and separate category of the elements of scientific inquiry. I further argue that doing so has extensive explanatory power concerning several issues related to the epistemology of computer simulations. In particular, understanding computer simulations as instruments helps explain many of the recurring observations and intuitions about their nature that we have seen in this book, as well as some that we have yet to see. For example:
• Their necessary computational implementation
• Their extra-mathematical/extra-theoretical aspects
• Their 'in-betweenness'
• Their independent sanctioning requirements, to name a few.
Hence, following Davis Baird (2004) and others before him (see Van Helden, 1994; Van Helden & Hankins, 1994; Heilbron, 1993) who suggest that there is already a well-established—if often neglected by the philosophy and history of science—separate branch of scientific inquiry in instrumentation, in this chapter I argue the following two things:
1 As we have briefly seen in previous chapters, while computer simulations may not be singular material objects per se, like a hammer or a bicycle, their necessary implementation on machinery and computational architecture makes them necessarily material in this sense.
2 Others have also come close to the instruments view. Boge (2021), for example, as we saw before, does provide a nice argument for the consideration of computer simulations as instruments. He argues that computer simulations should be understood in a way similar to precision instruments because they share an important aspect of their successful implementation, namely in situ calibration. His argumentative strategy, however, is not particularly strong. In his view, computer simulations can be understood as instruments in virtue of the fact that they often require deployment behaviors that are similar to the deployment of precision instruments. Boge references Tal (2012) on gauges that require calibration and include flexible parameter-setting. Boge's strategy, while headed in the right direction, is ultimately flawed and borders on a false analogy. The argument, as it stands, amounts to saying that both computer simulations and precision instruments require and are apt for in situ calibration; hence, computer simulations must be understood as instruments. If this is the only similarity upon which the analogy is based, then it is a poor one. Furthermore, unlike the main claim in this book, Boge's stance on the instrumentality of computer simulations is an epistemic claim. That is, he claims that we can understand computer simulations as instruments, not that they are instruments. Moreover, as we saw, some relevantly similar views, such as that of Margaret Morrison, signal a rather deflationary view, which strongly contrasts with the one championed in this book.
(a) Computer simulations are best understood as belonging to this latter category: i.e., they are instruments.
(b) They are a hybrid kind of instrument.
Understanding (a) provides us with a broad framework to elucidate why computer simulations simply do not fit neatly into the conventional categories under which they have been erroneously subsumed. As we saw in previous chapters, views that attempt to reduce computer simulations to either theoretical or empirical elements of inquiry cannot deny the distinctiveness, and therefore individuality, of the artifact in question: the computer simulation. Philosophers have known for a while that computer simulations did not immediately fit within either category of the conventional dichotomy under which they were subsumed. To make sense of this and of their epistemic role in scientific inquiry, philosophers and practitioners suggested that computer simulations are, epistemologically speaking, somewhere in between experiment and theory (Rohrlich, 1990; Morgan et al., 1999; Humphreys, 2004). These views are somewhat correct. Computer simulations do show a dual nature of sorts, and they do not belong on either side of such a dichotomy. However, this is not, as I will argue below, because of their novelty (Frigg & Reiss, 2009; Humphreys, 2009a, b) vis-à-vis scientific methodology as a whole. For example, they do not represent—strictly speaking—a paradigm shift regarding foundational theoretical principles or values in scientific inquiry. Furthermore, computer simulations are not epistemically 'in-between' because of their novel methodological characteristics—as Lenhard (2007, 2019) and others suggest—though they may have some of these novel characteristics. Nor are they 'in-between' solely because they can function as other measurement practices do—a suggestion made by Margaret Morrison as a means to accommodate the seemingly ambiguous epistemic nature of computer simulations as moderators. In fact, the intuitions in both of these latter views—Lenhard's and Morrison's—can easily be accommodated when we situate computer simulations in the realm of novel technologies: a novel device with which to do old things better and with which to do some new things. This is because technical artifacts, such as instruments, in general exhibit a dual nature between the abstract specifications of their design—which in the case of scientific instruments often include theoretical formalities—and the materiality of their implementation. Yet, as a device, computer simulations do not represent a novel branch of scientific inquiry. Rather, they are a novel addition to an often-neglected branch of inquiry: instrumentation. Understanding (b) requires that we see what kinds of roles are played by instruments in scientific inquiry and which kinds of instruments fit those roles. As we will see below, computer simulations continue to elicit a certain in-betweenness even within the category of instruments.3 This is because, like many other complex
3 Perhaps in this sense the argument here is analogous to arguments we saw in previous sections where authors try to explain the in-betweenness of computer simulations by stating that they are special kinds of experiments or special kinds of models. What my argument here amounts to is that computer simulations are indeed a special kind of something. That is, a special kind of instrument.
instruments, computer simulations must do many and varied things in order to function. Hence, as I will show below, they are hybrid instruments. In the following sections I first present the instrument thesis of computer simulations as an argument to the best explanation of the undeniable in-betweenness of computer simulations. Once this is settled, I go through different taxonomies of instrumentation and show how computer simulations, as technical artifacts, are also epistemically diverse and therefore hybrid: they enhance our understanding in many different ways. As mentioned, this further solidifies the intuitions discussed earlier regarding their in-betweenness and their novelty, but this time within the category to which instruments belong. And lastly, I offer an overview of several taxonomies of instruments from the philosophy of technology and show that even in these other taxonomies computer simulations can be made sense of as hybrid instruments, thus preserving both their in-betweenness and their instrument qualities.
6.1 A False Dichotomy: The In-betweenness of Computer Simulations Explained
In a previous chapter I concluded that computer simulations are simply distinct from the models they run. They are distinct in what they do, in how they do it, and in what they were designed, developed, and deployed for. Furthermore, I showed that computer simulations do not do what experiments do; they have different functions. I also showed that when computer simulations are built to simulate what experiments do, they are built in a way that does it very differently from conventional empirical practice. Hence, computer simulations are also distinct from experiments. However, an important lesson of this discussion will prove to be that the dichotomy positing that computer simulations have to be one or the other may have been a false one to start with; it was the product of legacy debates. Philosophers of science began trying to understand computer simulations by appealing to analogies with the one thing they understood best: scientific representation and the role of models in science. Then, of course, there was a response to this. Then an alternative framework emerged: they could be understood as similar to experimentation. And then a response to this emerged as well. Then a halfway response emerged. So on and so forth. However, given the discussion in previous chapters, we can say with confidence that the following is true: (1) computer simulations are simply not identical to either of the things they have been compared to or subsumed under, and—following Lenhard (2007, 2019), Morrison (2015) and others (Morgan et al., 1999)—(2) computer simulations do not fit neatly into either category of the conventional dichotomy between formal abstractions and experiments. So much has been established by the complex set of debates canvassed so far. Nevertheless, there is something interestingly true that is reflected in the pendulum-like swings between the theoretical and experimental elements of computer simulations in our discussion so far. As we saw above, computer simulations do show a dual nature of sorts, and hence they do not seem to belong on
either side of the conventional dichotomy. Eric Winsberg (2010) characterizes this set of views as suggesting that "simulation represents an entirely new mode of scientific activity—one that lies between theory and experiment" (p. 39). As we surveyed in detail in previous chapters, this same sentiment concerning the dual and uncertain status of computer simulations has continued to be echoed by influential works in the debate. More recently, for example, Johannes Lenhard (2019) characterized the view that simulation is "neither (empirical) experiment nor theory" but rather something in between both as representing one of the "largest factions" in debates about the nature and epistemic status of computer simulations. As we saw in Chap. 3, Humphreys was skeptical and critical of some of these approaches. While he thought that there were features of computer simulations that indeed introduced novel epistemic challenges, he did not think that this fact made them a completely new 'branch' of science. However, let us focus on a critical limitation of Humphreys' objections to the 'sui generis' account of computer simulations. It is true that computer simulations are not, by themselves, a completely new branch of science. Yet this does not mean that they do not belong to a separate methodological category or branch of scientific practice: namely, instrumentation. In particular, let us focus on the fact that Humphreys fails to note that this "third methodological category" to which computer simulations supposedly belong had already emerged in physics centuries prior to the advent of computers. It had also already emerged in chemistry at least decades before—its importance already widely documented, at least in the history of science (Golinski, 1994)—where it was embodied by the advent of tailor-made precision instrumentation in inquiry. In fact, the distinctive role of instruments, as well as their dual nature as physical and epistemic enhancers, has been documented as far back as Bacon, when he stated the following: "Neither the naked hand nor the understanding left to itself can effect much. It is by instruments and help that the work is done, which are as much wanted for the understanding as for the hand." (Bacon, 1965, as cited by Van Helden & Hankins, 1994, p. 4)
Here, Bacon is not only touting the existence and usefulness of instruments in inquiry, but is also signaling that there are at least two kinds of them: those that are used for physical labor and those that are used for epistemic purposes. Hence, a better and more immediate explanation of the in-betweenness of computer simulations that we saw in Chap. 3 emerges once we understand computer simulations as belonging to the category of instruments, as we will see in detail in the section below. Just like a stethoscope, which is neither a purely theoretical construct in medicine nor an experimental practice in and of itself, computer simulations are neither. Stethoscopes can be used properly when they are theoretically informed. That is, when their design, development and deployment emerge from a deeper conceptual understanding of the phenomenon of interest and of the phenomena at play in their workings. However, stethoscopes can also be devices within an experimental setting in medicine. Yet they are neither the theory nor the experiment. Rather, they are instruments designed, developed and deployed in the clinical practices of medicine. Similarly, computer simulations belong to a third—and equally important—element of scientific inquiry: namely, instruments.
As we will see, instruments cannot be understood in terms similar to those with which we understand theories because they are not constituted of the same things (Baird, 2004, p. 4). That is, if theories are the systematic and coherent collection of propositional knowledge, then instruments, by virtue of not being propositions themselves but something more than propositions, cannot be understood solely through the same articulations with which we understand theories. Hence, the life of an instrument, as such, requires its own epistemological account, independent from the one for theory (Baird, 2004). This explains why they do not fit on either side even though they can carry out functions associated with both. This explains, too, why computer simulations, as instruments deployed in scientific inquiry, elicit all these features that make them be seen as in between. They are ontologically and epistemologically separate and distinct from both theory and experiment. In fact, as we saw in previous chapters, computer simulations as instruments are categorically distinct from both the theoretical elements that inform them and the experimental settings in which they are deployed. In short, what Humphreys, Winsberg and others failed to point out as they analyzed this in-betweenness, and as they resisted the interpretation of simulations as a sui generis category of inquiry, is that all this time we have been dealing with a false dichotomy to start with. What the views we examined concerning the in-betweenness of computer simulations are ultimately missing is a broader picture in which computer simulations are in between theory and experiment in virtue of belonging to yet another class of objects with epistemic properties independent from the theoretical and experimental elements of inquiry: namely, they belong to the class of objects—more precisely, artifacts—that we call instruments. So, yes, they are 'in between' theory and experiments. But no, this is not because they are a special third branch of scientific inquiry unto themselves. Rather, it is because, as Ian Hacking noted, the things with which we conduct scientific experimentation are flanked by theory on one side and by method and practice on the other. The philosophical views above, including Humphreys', do indeed shed some light on aspects of how this is the case. Both Lenhard's and Morrison's attempts to account for the seemingly ambiguous epistemic role and position of computer simulations in fact—inadvertently—capture an important aspect of computer simulations as technical artifacts: artifacts that necessarily have a dual nature (Kroes, 2003), as physical implementations of functional specifications with conceptual and intentional origins (Symons, 2010). This in-betweenness—ambiguity, if you will—of computer simulations, which we just analyzed in detail through Lenhard's and Morrison's attempts at reconciliation, can be easily explained—or even explained away—by understanding that computer simulations are first and foremost technological artifacts and that technological artifacts are of such a dual nature.4 So, let us now look into this in more detail.
4 Whether by gears or electrons, but physically executed nonetheless.
6.2 Hybridity and Functional Novelty
As we saw at the start of this chapter and in previous chapters, attempts at explaining the in-betweenness and the apparent novelty of computer simulations and their role in scientific inquiry have usually taken the form of an appeal to an unprecedented third way of doing science altogether. However, there is a simpler and more general explanation of why computer simulations are often found to be at the epistemic intersection of theory and experiment, or why they can be understood as measurement devices, or why they seem to have an ambiguous epistemic status as almost-models and/or almost-experiments: it is because they are technical artifacts, which elicit all these properties. They have a dual nature—as the physical implementation of a conceptual design with a function; as measuring devices that can manipulate theoretical values and contrast them with data (real or simulated); and finally as 'almost theoretical' and 'almost experimental'—because they are neither. They are instruments and, as I will argue following the views of Davis Baird (2004), instruments are the third element of scientific inquiry. First and foremost, as technical artifacts go, computer simulations belong to the class of technical artifacts whose main function is to serve as epistemic enhancers (Humphreys, 2004). This rough characterization is sufficient to provide an intuitive framework in which we can differentiate them from the kinds of artifacts that enhance other capacities or help us overcome other limitations of human agency, such as physical strength or perceptual abilities. The kind of enhancement that a calculator provides, for example, is different from that of a bulldozer, which is in turn distinct from the kind of enhancement provided by a microscope or a hearing aid. While in a scientific setting any instrument can be said to contribute to the general aim of knowledge acquisition, we can still differentiate between the artifacts that augment our physical capacities and those that augment our epistemic ones. If computer simulations enhance anything, they enhance our ability to acquire knowledge and not our ability to push harder or dig deeper. According to Humphreys (2004), there are three ways an epistemic enhancer can extend the reach of our understanding. The first one is extrapolation, which is the capacity of an instrument to expand "the domain of our existing abilities" (p. 4). Then there is conversion, which happens when "phenomena that are accessible to one sensory modality […] are converted into a form accessible to another" (2004, p. 4). And finally, there is augmentation. This last kind of enhancement occurs when, mainly through one of the other sorts of enhancements—particularly conversion (p. 4)—we are "given access to features of the world that we are not naturally equipped to detect in their original form". At first sight, it is easy to take computer simulations to do all three, and often at the same time. A computer simulation can, for example, allow us to gain insights into the evolution of galaxies, which would take millions of years to examine in real time. At the same time, they can convert intractable numerical values into immediately intelligible visualizations of complex dynamics. Furthermore, as with the example of the evolution of galaxies, they can provide access to "features of the
world that we are not naturally equipped to detect in their original form." A careful reading of what Humphreys has in mind, however, reveals that there are some challenges in this characterization. In order to better understand the epistemic role and position of computer simulations in scientific inquiry, it is worth going through these three types of enhancement in detail, as they provide a picture of the kinds of epistemic endeavors that computer simulations undertake as well as a glimpse into why they are hybrid instruments across many domains and dimensions of inquiry. We can understand each one of the distinct kinds of enhancements proposed by Humphreys with the help of some examples. Humphreys (2004) begins by pointing to the perceptual enhancement characteristic of optical instruments to exemplify extrapolation. Telescopes and microscopes, for example, expand the domain of visible things for us. They also expand the level of detail of a perceptive ability that most of us are already acquainted with, namely vision. Similarly, other kinds of telescopes expand the range of the spectrum of electromagnetic radiation available to us without them. When it comes to computer simulations, we can see that, at the very least—particularly if we share Humphreys' understanding of them as mathematical machines—they enhance our existing ability of analysis. That is, if we consider that we as epistemic agents have an analytical ability, then we can see that computer simulations indeed expand on this existing modality. Therefore, computer simulations enhance the domain of our existing epistemic abilities. They extrapolate in Humphreys' sense. That computer simulations can enhance our epistemic capacities via conversion is a lot more straightforward. Consider the necessary conversion that musical notation undergoes when implemented on a musical instrument: the information on the sheet of music is of a different kind, namely visual or logical, and is converted into sound. In the scientific context of computer simulations, one can immediately see this type of enhancement occurring when computed numerical values are converted into pixelated gradients on a grid and the transformations of such values are displayed as spatial changes on a screen. In this example, mathematical, or merely numerical, information is transformed into visual information. Hence, computer simulations also convert. Whether or not computer simulations allow us to augment our epistemic capacities in the sense specified by Humphreys is an interesting question. Augmentation, as you may recall, occurs when we are "given access to features of the world that we are not naturally equipped to detect in their original form." Humphreys himself notes that this is not immediately obvious from looking at what computer simulations do. According to him, simulations are the kind of thing that we use solely for mathematical tasks. In his view, computational methods have not yet proven to have given us access to mathematical features that we are not naturally equipped to detect in their original form. This is a contentious issue that exceeds the scope of this book; for now, it suffices to say that the ability of computer simulations to both extrapolate and convert is evidence of their hybridity as epistemic enhancers. Importantly, others, particularly Symons and Boschetti (2013), believe the function of a computer simulation is simply to predict and that this alone should constitute the basis upon which we judge their merit.
However, they also admit that they can do other
things, particularly be used in exploratory tasks (2013, p. 813). What this shows, at least, is that they are not just one kind of epistemic enhancer but rather that they can have multiple functions and function as multiple kinds of instruments at once. Computer simulations also prove to be hybrids of another sort. As briefly stated above, they can be more than one kind of instrument at the same time. If we look at some taxonomies of types of instruments, this will become clearer. According to Baird (2004, p. 45), for example, there are three kinds of instruments: models, which represent; devices that create a phenomenon; and measuring instruments, which can either detect the instance of a property or compare theoretical values against a phenomenon. Conventional measurement instruments, such as thermometers, as we will see, are, according to Baird, hybrids between the kinds of instruments that represent and those that create or recreate a phenomenon. This is because they must create/recreate a set of procedural steps in order to obtain their reading. Models, according to Baird, are not merely representative in that they 'stand in' place of actual phenomena of interest. Rather, they are representative in that they integrate knowledge and are constituted by knowledge of the target itself in an epistemically independent way. That is, in their own way and not necessarily in the same way that theory or experiment do. He explains this epistemic independence of models as instruments via Watson's and Crick's double-helix DNA model. In this particular example, Baird says, they "did not use the model as a pedagogic device. They did not simply extract information from it. The model was not part of some intervention in nature and it was also not a part of an experiment" (p. 36). Hence the model was not theoretical and was not part of an interventionist empirical practice such as a conventional experiment. And yet the model had the standard theoretical virtues, since "it can be used to make explanations and predictions. It was confirmed by X-ray and other evidence, and it could have been refuted by evidence" (p. 36). Computer simulations can also function like this when they are used as a device, independent from both theory and empirical experimentation, to test or inform theory and experiment construction. Lenhard (2007), for example, suggests that computer simulations can be used to fine-tune the model specifications, parameters and assumptions of an experiment before having to carry it out. Furthermore, computer simulations are often designed with their representative functions in mind. Some simulations, like those of cellular automata, are paradigmatic of the dynamic Baird is alluding to. They were developed independently of any theoretical framework associated with any particular phenomenon, or even discipline. They were also developed independently of any particular experimental setting associated with an inquiry into a target phenomenon. While they were themselves experimental, they were not part of a premeditated, focused inquiry beyond that of investigating the features of the machines that produced them. It was only later that they came to be used as a tool that could provide both theoretical and experimental insight regarding the formation and development of natural systems deemed to be similar enough to them.
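This dynamic can be made concrete with a minimal sketch of the kind of object at issue: an elementary cellular automaton. The particular rule (rule 30), grid size, and text-based display below are arbitrary choices for illustration; the point is that the artifact is just an iterated procedure on a grid, specified without reference to any natural target, whose evolving patterns can later be recruited for theoretical or experimental insight.

```python
# A minimal one-dimensional cellular automaton (rule 30, chosen arbitrarily).
def step(cells, rule=30):
    """Advance one generation: each cell's next state is the rule's bit
    indexed by its (left, self, right) neighborhood, with wraparound."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # start from a single live cell
for _ in range(15):
    print("".join(".#"[c] for c in row))  # numerical states rendered visibly
    row = step(row)
```

Note, incidentally, that the final print line also illustrates conversion in Humphreys' sense: merely numerical states are rendered as a visual pattern.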
Measuring instruments, on the other hand, according to Baird, work by generating a signal from an interaction with a given target "which, suitably transformed, can then be understood as information about" that target (Baird, 2004). According to
Baird, measurement requires that we can "produce, in laboratory conditions, a stable numerical phenomenon over which one has remarkable control" (Hacking & Hacking, 1983, as cited in Baird, 2004). Measuring instruments are "encapsulated knowledge" (Baird, 2004, p. 68) because they are constituted by the integration of a material object and the kind of knowledge provided by a model, theoretical values, and principles. Hence, measuring instruments are hybrids in that they must reproduce and perform a set of specified procedures in order to represent their reading.

A key insight in this description comes from Baird's use of Hacking's definition of measurement, in which the main function of a measurement is to produce a "stable numerical phenomenon" in a setting of rigorous control. Computer simulations are in fact the kinds of technical artifact that can and do produce numerical phenomena. Indeed, even if we take only the narrow definition of computer simulations as equation solvers (Durán, 2018), this is what computer simulations strictly do. Furthermore, as far as controlled situations go, it simply does not get any better than the abstract realm in which some philosophers take computer simulations to operate. If computer simulations are, for example, anything like implemented models, as Herbert Simon (1969) suggests—machine automations of mathematical relations—then they are the kinds of instruments that Baird alludes to.

Much of the heavy lifting in this characterization of models and simulations as measuring instruments is being done by the first part of the description, regarding the generation of a signal by a measuring device; this can easily be interpreted as exactly what the display in some computer simulations is doing. We can think of an instrument which, upon detecting a certain signal, reacts accordingly. We can also think of an instrument which only produces such a reaction when other indirect values are computed, such as the ones that Morrison describes in particle physics. These two kinds of instruments are different in one sense: they do not both interact with the phenomenon in an equally direct way. However, they are also similar in that a computation must take place, whether analog or digital, in order for the detection to occur. If so, the difference is one of degree and not of kind, and computer simulations can indeed qualify as a version of the latter kind (Morrison, 2015).

However, computer simulations also have to carry out, that is reproduce, a set of procedural specifications every time they are meant to represent whatever they are simulating. In this more physical sense, computers are reproducing a certain state of affairs as they implement the specifications of their simulation model. As we saw above, one of the things that simulations do is to encapsulate, through their procedure, the testing of models (Lenhard, 2007). But computer simulations do not only encapsulate knowledge regarding the principled theoretical values and the direct experimental data; they also encapsulate the procedure by which to transform/manipulate the content. That is, they encapsulate experimental settings too (Barberousse et al., 2009). As such they are a hybrid instrument in Baird's terms. And as such we can characterize their in-betweenness within the realm of instrumentation without appealing to a sui generis branch of science altogether. Thus, understanding computer simulations as instruments best explains the in-betweenness that so many philosophers of science have pointed to.
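Before moving on, Hacking's phrase, a "stable numerical phenomenon over which one has remarkable control", can be made concrete with another deliberately simple sketch (again my own illustration, not an example found in Baird or Hacking): a toy random-walk simulation. Fixing the seed of the pseudo-random number generator gives the user complete control over the 'experiment', and the numerical phenomenon it produces can be re-created, identically, at will.

import random

def random_walk(steps, seed):
    """Simulate a one-dimensional random walk and return the final position."""
    rng = random.Random(seed)            # fixing the seed: 'remarkable control'
    position = 0
    for _ in range(steps):
        position += rng.choice((-1, 1))  # one step left or right
    return position

print(random_walk(1000, seed=42))  # identical output on every run:
print(random_walk(1000, seed=42))  # the numerical phenomenon is stable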
6.3 Other Instrument Taxonomies

There are, of course, other ways of cataloguing the kinds of artifacts found in laboratories. And computer simulations also fail to fall under one single category or another in these other taxonomies. Heidelberger (2003), for example, distinguishes between four distinct functions of instruments in scientific experimentation. First, according to him, instruments either fulfill a productive or a constructive function. A scientific instrument is productive when it produces a phenomenon that does not normally appear in everyday epistemic experience. A constructive instrument, on the other hand, is the kind that can intervene in the target of interest in order to modify its behavior (2003, p. 146). If we consider that computer simulations are capable of elucidating properties of a system that are not easily found in the world and of manipulating data in ways that are not usually available in the world, we can construe this as meeting the first of these conditions. Much of astrophysics and particle physics would be unavailable to us otherwise. So, we can agree that computer simulations have the capacity to manipulate data in such a way as to mimic behaviors of a system that are not easily found in the world. Granted, as we have seen in previous sections, whether this constitutes 'doing experiments' or not is a contentious matter. However, we can always fall back on Morrison's conception that at least sometimes, in some cases, the way we conduct experiments in physics is not so far removed from the way experiments are characterized by simulation processes. If so, then we can say that the manipulation of data in fact constitutes both the production of phenomena not easily found in the world and an intervention that modifies their behavior.

Heidelberger's view also includes two more categories that are important for our discussion. The performative aspect of computer simulation is indeed important, but this performance is often deployed with an ulterior epistemic purpose, namely to render intractable processes intelligible, often through visualization. Heidelberger calls the instruments that do this imitative instruments, which "produce effects in the same way as they appear in nature without human intervention" (2003, p. 147). He also posits instruments as acting in a representative role, where the "goal is to represent symbolically in an instrument the relations between natural phenomena and thus better understand how phenomena are ordered and relate to each other" (p. 147). Without going too much into detail, we can see that computer simulations straightforwardly carry out more than one of these tasks: they represent, they reproduce in an imitative manner, they are constructive in their control of variables, etc.

Yet another taxonomy of instrumentation is offered by Harré. For Harré (2003), not every piece of laboratory equipment is an instrument. Models, for example, are an apparatus. While instruments have elements that are causally related to the world, an apparatus merely serves as a "working model of some part of the world" (2003, p. 26). Hence, Harré directly distinguishes the apparatus from the instrument. He thinks that the working model guiding an inquiry—that is, the specifications of an experimental setting, for example—is more like an apparatus (p. 26).
Apparatus, for Harré, is "an arrangement of material stuff integrated into the material world in a number of different ways" (p. 19). He reserves the word instrument for "that species of equipment which registers an effect of some state of the material environment, such as the thermometer" (ibid). The word "apparatus", to him, refers to the kind of equipment that is a model of some "naturally occurring structure or process" (Harré, 2003, p. 20).

Harré is right in emphasizing a distinction between those kinds of instruments that detect a property or register an effect—say a sensor—and the work done by a model. However, as we saw above, models do fit into the taxonomy laid out by Baird (2004). In other words, models themselves are the kind of technical artifact that can be viewed as a scientific instrument. Baird, however, emphasizes the role of physical models as exemplars of his view. Computer simulations happen to be hybrids in both views. They require a physical implementation, and they are simultaneously performative and representative in nature. In the case of Harré's distinction, they are also hybrid in that computer simulations are a species of equipment that is capable of modeling. As we saw, again, in Sect. 3.3, computer simulations are not just equivalent to the model: they are not just the specifications, but the thing with which those specifications are implemented. In this sense, computer simulations are both the apparatus and the instrument in that they are at the same time "an arrangement of material stuff integrated into the material world in a number of different ways", a kind of equipment that is a model of some "naturally occurring structure or process", and also a "species of equipment which registers an effect of some state of the material environment".

What registering means here can be a source of tension. However, we can flesh out this 'registering' as merely enabling a researcher to capture dynamic changes that would have otherwise been unavailable. This suffices for the argument that computer simulations are hybrid, particularly since Harré uses the thermometer as an example. The thermometer, as we saw above in Baird's account, is in a way like a computer simulation in that it uses data inputs to represent the presence of a property in a system. Consider a digital thermometer in which the multiple components are considered independently of one another. There is a set of components that enables 'detection', namely those in contact with the thing whose temperature we are measuring. In fact, there is but a single component that interacts with the phenomenon in question, namely temperature. The rest of the thermometer functions in virtue of the data it receives to determine that something was in fact 'detected'. In this sense, computer simulations are similarly detached from the components that gather the actual data in the world, but their functioning in processing the data in such a way as to make a property accessible is similar to that of the digital thermometer. In this way, we can accept Harré's distinction and still claim that a computer simulation is a hybrid in relation to it.

These other taxonomies provide yet another explanation for the recurring intuition that computer simulations are something that is always neither here nor there, but rather in between our efforts to categorize them. The difference here, however, is that this in-betweenness is no longer characterized as happening at the level of meta-methodical aspects of scientific inquiry.
That is, the in-betweenness of computer simulations is not in between formal and experimental practices of science,
but rather in between conventional categories of instruments and artifacts found within scientific inquiry. While the details in each of these cases can be vastly expanded, what this point is meant to show is that, at the very least, computer simulations are the kind of instrument that does not fit easily into conventional categorizations of instruments in scientific inquiry. But it is also to say that computer simulations are the kind of instrument that can do these and other things, that incorporates the functions of many instruments, and that is therefore a hybrid instrument. Perhaps computer simulations may indeed be a novel kind of instrument that requires its own epistemic assessment with regard to its status in scientific inquiry. If this is so, it is not because they are a sui generis kind of method, or a third branch of inquiry all on their own. Rather, it is because as instruments they may indeed have genuinely novel properties that therefore pose genuinely novel epistemic challenges. This, by the way, is a common trajectory for all novel instruments introduced into scientific inquiry. Hence even their seemingly novel epistemic character can be explained by the view that understands them as instruments and not something else.

In short, computer simulations are hybrid epistemic enhancers (Humphreys, 2004) in that they help us transform one sort of information into another, they help us enhance existing capabilities, and they allow insight into areas that we would not have access to otherwise; they are hybrid instruments in that they are also capable of simulating the processes by which an experiment is conducted while also being capable of intervening in its development (Barberousse & Jebeile, 2019). And finally, computer simulations are also hybrid in that they are capable of being both productive and constructive instruments in Heidelberger's terms. In other words, they are able to produce (simulate) a phenomenon in an environment that does not exist in nature as well as to modify the (simulated) behavior of a system through intervention. All of this they can do because they are instruments. As such, they can exhibit an epistemic independence, they can serve as bridges between theory and practice, and they can be understood as having a dual nature.

As we saw in previous chapters, an intuitive way to individuate an artifact is by its relation to some agential intentionality. Artifacts come in many different shapes and from many different sources. Some objects can be said to be artifacts solely in virtue of already having a desired property when encountered. Others are made to have a desired property in virtue of their design and materiality. The former, as you may recall, are simply pseudo-artifacts. The latter are the more definite artifacts. They do not only have some artifactually advantageous property as found but are explicitly designed and constructed to have that property. Artifacts in general, but technical artifacts in particular, due to their close relationship with agential intentionality, can be said to be the marriage of two ontologically distinct sets of properties: specified/defined/expected functions and the physical elements that can instantiate them. Herbert Simon, for example, recognizes that artifacts are interfaces, or a "meeting point […] between an 'inner' environment, the substance and organization of the artifact itself, and an 'outer' environment, the surroundings in which it operates" (1969).
Technical/technological artifacts are those which are explicitly, by design, constructed with an inherent practical function. As such, in order to fully
account for them (i.e. individuate them, describe them, understand them, etc.) both their physical properties and their teleological properties must be included. Besides the fact that instruments are literally a third element of scientific inquiry, these added details about the nature of technical artifacts explain, to a certain extent, the recurring intuition that computer simulations have something of a dual ontological nature between theoretical and empirical practices: they are the product of conceptual specifications and of the physical implementation of the functional character of these specifications. In other words, their nature as technical artifacts further explains the ontological independence, discussed above, in a way that makes sense of their formal underpinnings and their necessary materiality as the things with which the teleological specifications are instantiated.

The epistemic in-betweenness of computer simulations also arises from the fact that they are technical artifacts, but the details surrounding this aspect of computer simulations are different from those concerning epistemic independence. Instruments, such as the ones used in science, are highly specialized technical artifacts. When they are well made, they are constituted by highly theoretical and functional specifications as well as by very specific material which is optimal for the execution of the functional specifications in light of theoretical requirements (Symons & Alvarado, 2019). This much constitutes their ontological independence: instruments are highly specialized technical artifacts that are the product of sophisticated functional specifications and sophisticated material properties to carry them out. Yet, once they are made, these highly specialized technical artifacts are taken into the laboratory setting, where they are deployed to perform a role in the acquisition of knowledge. It is there that technical artifacts are further placed somewhere between theory and experiment. This is yet another level of consideration from that which we have been discussing. And here too, instruments such as computer simulations show a kind of conceptual independence: as they are made, they are not strictly members of theory or experimental practice; as they are deployed, they are not so either.

The relationship between the specific instrument and the experimental setting or the theoretical underpinnings of the specific inquiry can be very diverse, as diverse as the number of scientific instruments out there. Therefore, it would be misguided to attempt a general description here of the nature of this relationship. Rather, recall the following. One of the reasons why there is a relationship in the first place between one and the other is because instruments are not theory or experiments; they are a third distinct thing. That is, there is a relationship between those other elements of inquiry and instruments precisely because instruments are an independent element of inquiry.5 The ontological independence, together with the functions that these instruments carry out beyond those of the theoretical and experimental functions, is evidence that they are also epistemically independent. Instruments can serve distinct epistemic tasks, but they can also consist of independent epistemic sources. More strongly put: instruments are a third (separate, distinct, and independent) source of knowledge in scientific inquiry (Baird, 2004). So, whether the view is that computer simulations function as aids to some sort of deflated empirical/experimental practice, such as measuring devices, or whether one thinks of computer simulations as something in between, these features, capacities, and roles of computer simulations can in fact be better understood if we understand computer simulations as instruments.

5 Of course, some things can have a relationship to themselves: identity. The basic conceptual claim here is that instruments are non-identical to the other elements of inquiry. Practically speaking, the reason why there are still questions about the role of instruments in relation to theory and/or questions regarding the role of instruments in relation to experiments is because instruments are not, strictly speaking, identical to the theories or to the experiments they are related to. For a full analysis of the independent ontological and epistemic status of instruments see Baird (2004). For a thorough analysis of some of the non-trivial contributions of instrumentation to scientific inquiry see Hacking and Hacking (1983).
References

Baird, D. (2004). Thing knowledge: A philosophy of scientific instruments. University of California Press.
Barberousse, A., Franceschelli, S., & Imbert, C. (2009). Computer simulations as experiments. Synthese, 169(3), 557–574.
Barberousse, A., & Jebeile, J. (2019). How do the validations of simulations and experiments compare? In Computer simulation validation: Fundamental concepts, methodological frameworks, and philosophical perspectives (pp. 925–942). Springer.
Boge, F. J. (2021). Why trust a simulation? Models, parameters, and robustness in simulation-infected experiments. British Journal for the Philosophy of Science, 75. https://doi.org/10.1086/716542
Durán, J. M. (2018). Computer simulations in science and engineering. Springer.
Frigg, R., & Reiss, J. (2009). The philosophy of simulation: Hot new issues or same old stew? Synthese, 169(3), 593–613.
Golinski, J. (1994). Precision instruments and the demonstrative order of proof in Lavoisier's chemistry. Osiris, 9, 30–47.
Hacking, I., & Hacking, J. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge University Press.
Harré, R. (2003). The materiality of instruments in a metaphysics for experiments. In H. Radder (Ed.), The philosophy of scientific experimentation (pp. 19–38).
Heidelberger, M. (2003). Theory-ladenness and scientific instruments in experimentation. The Philosophy of Scientific Experimentation, 8, 138–151.
Heilbron, J. L. (1993). Some uses for catalogues of old scientific instruments. In Essays on historical scientific instruments... (pp. 1–16). Aldershot: Variorum.
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.
Humphreys, P. (2009a). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
Humphreys, P. (2009b). Network epistemology. Episteme, 6(2), 221–229.
Kroes, P. (2003). Screwdriver philosophy; Searle's analysis of technical functions. Techné: Research in Philosophy and Technology, 6(3), 131–140.
Lenhard, J. (2007). Computer simulation: The cooperation between experimenting and modeling. Philosophy of Science, 74(2), 176–194.
Lenhard, J. (2019). Calculated surprises: A philosophy of computer simulation. Oxford University Press.
Morgan, M. S., Morrison, M., & Skinner, Q. (Eds.). (1999). Models as mediators: Perspectives on natural and social science (Vol. 52). Cambridge University Press.
Morrison, M. (2015). Reconstructing reality. Oxford University Press.
Rohrlich, F. (1990, January). Computer simulation in the physical sciences. In PSA: Proceedings of the biennial meeting of the Philosophy of Science Association (Vol. 1990, No. 2, pp. 507–518). Philosophy of Science Association.
Simon, H. A. (1969). The sciences of the artificial. Cambridge University Press.
Symons, J. (2010). The individuality of artifacts and organisms. History and Philosophy of the Life Sciences, 32, 233–246.
Symons, J., & Alvarado, R. (2019). Epistemic entitlements and the practice of computer simulation. Minds and Machines, 29(1), 37–60.
Symons, J., & Boschetti, F. (2013). How computational models predict the behavior of complex systems. Foundations of Science, 18(4), 809–821.
Tal, E. (2012). The epistemology of measurement: A model-based account. University of Toronto.
Van Helden, A. (1994). Telescopes and authority from Galileo to Cassini. Osiris, 9, 8–29.
Van Helden, A., & Hankins, T. L. (1994). Introduction: Instruments in the history of science. Osiris, 9, 1–6.
Winsberg, E. (2010). Science in the age of computer simulation. University of Chicago Press.
Chapter 7
Implications of the Instruments View of Computer Simulation
The arguments in the previous chapter sought to establish the distinct character of computer simulations as scientific instruments within the general landscape of scientific inquiry. They are distinct and novel, however, as part of an already established third branch of scientific inquiry: namely, they are an addition to the set of instruments with which we conduct scientific inquiry. As such, computer simulations do involve challenging problems for the epistemology of science. Computer simulations call our attention to the more general need for a genuine attempt at an epistemology of instruments (Baird, 2004). While this general project is beyond the scope of the present book, there is clearly a need for a more restricted analysis of the novel features of computer simulations relative to other technical artifacts. In this chapter I will argue that, in our efforts to understand computer simulations as instruments, some substantial changes follow to the way we can and/or must establish their epistemic status as devices capable of enhancing knowledge acquisition in scientific inquiry.
7.1 Epistemic Entitlements and Computer Simulations

There is a range of puzzling questions concerning the epistemological framework with which to assess the status of computer simulations in the context of scientific inquiry. This is particularly the case when views such as the ones discussed at the beginning of this book are endorsed. When computer simulations are understood to be a completely novel practice or methodology, almost a new way of doing science (Rohrlich, 1990; Kaufmann & Smarr, 1993; Nieuwpoort, 1985 [as cited by Paul Humphreys, 2004]), justifying our sudden and ubiquitous reliance on them across scientific disciplines becomes problematic. On what grounds are we justified in trusting the results of computer simulations? If they are a completely novel way of
doing science, do we need a new epistemology (Primiero, 2019)?1 If this is not the case, if they are somewhat similar to other elements of scientific inquiry, what are the existing epistemic resources with which to assess their status in the laboratory? All these questions point towards a need for a justificatory source for our reliance on such devices. In epistemology these are questions about epistemic warrants: the sources of justificatory force explaining our reasons to trust. Since my response to some of these questions, as argued for in previous chapters, is that computer simulations are similar in nature to—and should be sanctioned as—other instruments, the question becomes clearer: what epistemic warrants back our reliance on computer simulations as scientific instruments?

A growing tendency in the philosophical literature on computer simulation (Barberousse & Vorms, 2014; Beisbart, 2017) is to appeal to a particular kind of epistemic warrant, called an epistemic entitlement, as the kind of warrant that can provide such justification. An epistemic entitlement is a non-evidential epistemic right to hold a belief that something is the case just in case there are no reasons to doubt that it is indeed the case (Dretske, 2000). I may be entitled to believe, for example, given certain circumstances (of limited time, depth and intricacy of subject matter, etc.), on grounds of good authority and without having to look any further for evidential support, that most of the content in my physics textbook is somewhat correct. I am also entitled to trust, without strict evidential requirements, the meteorologists' prediction that it will rain during a typical fall day in the Pacific Northwest. Because of their non-evidential nature, epistemic entitlements are reasonably suspect as justificatory sources in important epistemological debates (McGlynn, 2014). In this section I will show how epistemic entitlements are even more problematic in the context of the warrants justifying the propositions and/or results stemming from a device, namely computer simulations, deployed in scientific inquiry. Briefly, in what follows I will show that epistemic entitlements are inadequate to warrant trust in the results of computer simulations in science.

One way that philosophers of science have suggested to explain why we trust computer simulations appeals to the ways that epistemic entitlements work in other, less controversial forms of inquiry (see, for example, Barberousse & Vorms, 2014). For example, it might make sense to trust simulations in the same way that we trust our perceptual faculties and our memories, or in the same way that we trust expert testimony. We are entitled to base our beliefs on the evidence of our senses or the testimony of experts in spite of not having full access to the underlying workings of the senses or a full understanding of the inner workings of those testifying. We do not need to fully understand the intricacies of the organs in charge of vision in order to take most of what is perceived by our eyes as true. Similarly, we do not have full access to the education and abilities of the expert. Yet, for all ordinary purposes, one is warranted in believing information stemming from either source. We have the right to believe—we are entitled to the belief—that the universe is over one billion years old without knowing more than a few paragraphs' worth of cosmology.
1 For more details see Winsberg's (2019) updated entry on "Computer Simulations in Science" at the Stanford Encyclopedia of Philosophy.
The argument from those advocating for entitlements in the context of computer simulations in science is that we are in a similar way entitled to the belief that the results of computer simulations are trustworthy. This, according to these views, is also in accordance with sociological and practical aspects of the way science and scientific communication happen. The assumption is that, sociologically speaking, for example, scientists do not have to fully replicate each other's extensive work in order to utilize each other's results in furthering their inquiry. They are entitled to trust each other.2 This is the case in an even more striking way, according to these views, when the results of scientists are communicated outside of academic contexts. Most of us, for example, have no choice but to trust what our science textbooks say. For Barberousse and Vorms (2014), denying this fact amounts to the dismantling of most of the knowledge-acquisition and production infrastructure that our species has relied on since written documents have been used to transmit information. Philosophers in this camp argue that something similar is the case for computer simulations: computer simulations are either like expert systems, and we ought to trust their testimony like we would that of an expert, or they are like some of the elements that make up our perceptual capacities. These views acknowledge that computer simulations are extremely complex in more than one way and that in general they operate in ways that are not surveyable by ordinary human minds. That is, computer simulations are often what is called epistemically opaque. However, this fact is taken as yet another reason why the notion of epistemic entitlements makes sense in this context. Under these circumstances, some argue, it might still be reasonable to trust computer simulations given the assumption that everything is working smoothly (Beisbart, 2017), just as it is ordinarily reasonable to trust our senses or the testimony of other people, even when we do not have full access to their inner processes. That is, you should trust the expert, the testimony, or your senses unless you have evidence of facing a source whose reasonableness you cannot assume (including your own).

When this line of reasoning appears in the philosophical literature, it frequently draws on insights from Tyler Burge's work. In particular it takes as a starting point Burge's view of how human beings ordinarily maintain a posture of acceptance in epistemic matters. Just as we ordinarily trust our senses, we also ordinarily tend to accept what other people say unless we have reason to disbelieve them. The justification, according to Burge, that enables us to acquire beliefs from others may be glossed, to a first approximation, by this principle:

A person is entitled to accept as true something that is presented as true and that is intelligible to him, unless there are stronger reasons not to do so. Call this the Acceptance Principle. As children and often as adults, we lack reasons not to accept what we are told. We are entitled to acquire information according to the principle—without using it as justification—accepting the information instinctively (1993, p. 467).

Burge points out that this ordinary disposition to accept the testimony of others is a necessary condition for acquiring language, as well as for a range of other social phenomena.3 As far as science is concerned, failure to act in accordance with the Acceptance Principle would make rational collaborative projects of inquiry impossible. On this view, testimonial sources such as published research, standard pieces of scientific equipment, and expert opinion should be trusted by default as a precondition for the possibility of ongoing inquiry. Adopting unreasonably high epistemic standards makes inquiry impossible.

It must be noted, though, that there is a difference between unreasonably high epistemic standards and non-evidential warrants when it comes to what is acceptable in peer-review processes such as academic conferences and publications. One could immediately point to the fact that at least internally, that is, within the same discipline, acceptance of expert testimony is not the default position, nor is it a precondition for the possibility of ongoing inquiry. It is true that on rare occasions a reasonable amount of trust is needed in order to speed up the process of inquiry. It is also true that in some cases there is little option but to trust, i.e. some experiments and/or processes are too difficult for others to replicate. But these instances are neither the norm nor exemplars of normative behaviors in scientific inquiry. This is particularly the case if one believes that scientific inquiry should be at least a little more rigorous than everyday sense-data observations: we methodically document, test, repeat, ensure that others can corroborate by repeating the same procedures, and then ask them to do so before we accept something as scientifically reliable. This is not something we do before crossing the street. This is part of what makes appeals to entitlements so puzzling in the context of scientific inquiry. There is a sense in which appealing to a non-evidential warrant to provide justificatory force to a reason to trust anything in science sounds highly dubious.

For Burge, acceptance is the epistemic default position for human beings in most epistemic practices, and it grounds what he calls an a priori entitlement whereby a person is entitled to accept a proposition that is "presented as true and that is intelligible to him unless there are stronger reasons not to do so, because it is prima facie preserved (received) from a rational source, or resource for reason; reliance on rational sources—or resources for reason—is, other things equal, necessary to the function of reason" (1993, p. 469). Like Barberousse and Vorms decades after him, Burge also talks of entitlement in terms of being a precondition for the possibility of knowledge, and he extrapolates what can be said about this in the context of sensorial experience into a context of more complex types of inquiry such as computer-assisted mathematical proofs.

2 While this may certainly be the case, sociologically and practically speaking, when multidisciplinary research is conducted, it isn't entirely clear that this is what happens or what should happen in intradisciplinary interactions. As a matter of fact, an easily available example of where this entitled kind of trust does not happen is specialized academic conferences. Rather than accepting each other's research via a non-evidential epistemic process, scholars gather to thoroughly inspect each other's work. A similar process is implied with reference to peer-reviewed publications in science, in which a significant portion of the publication is dedicated to a "justificatory" section. In such sections great efforts are undertaken to make the methodology and the processes involved in the acquisition of scientific results as transparent as possible. While some degree of trust is necessary, there is a big difference between the kind of trust necessary to trust that your peer is reading their paper correctly and accurately and whether or not their paper is correct and/or arrived at in a correct manner.

3 Burge's account of the Acceptance Principle (as he acknowledges) is very similar in spirit to the Principle of Charity, as it figures in Quine (1960) and Davidson (1973). The principal difference between these principles is the role that Burge's notion of preservation of content plays in his account.
Given that the Acceptance Principle serves as a necessary precondition for the functioning of collaborative projects of inquiry, it has been natural for philosophers influenced by Burge to imagine that a priori warrants can ground computer simulations in a similar way. This is particularly the case if one sees computational methods as mere extensions of existing mathematical methods, which is the case for many philosophers of computer simulations (Humphreys, 2004; Weisberg, 2012; Morrison, 2015). For such philosophers, the origin of the computer simulation as what can be characterized as a 'number cruncher' still plays an important role even in contemporary settings. This is because, while computer simulations become more and more complex, they are still tasked—at least in the case of simulations mimicking the behavior of complex dynamic systems—only, according to these views, with providing discrete mathematical solutions to computational models that are derived from mathematical models and the continuous equations therein (Morrison, 2015). As we saw in previous sections, the pipeline characterizing the steps required to create a simulation, from an empirical observation to a dynamic representation, is often taken, by some (Morrison, 2015), to be constituted by purely formal methods. This in a sense explains why some would think that the warrants justifying the mathematical content that computer simulations manipulate would also be relevant when assessing the reliability of the computer simulations themselves. Furthermore, even when there is an acknowledgement of the engineering and implementational challenges of computer simulations in these pipeline representations (as in Resch, 2017), these steps are assumed to simply preserve the formal content they transmit when the engineering is "done right" (Beisbart, 2017). As explained in sections above, why this is the case may be a simple case of folklore in which engineering is idealized as applied physics and physics is idealized as applied mathematics. Regardless, the advocates of this view of computer simulations see them as doing nothing more than computing complex equations by following formal rules. As such, it is easy to see that, if this is all they do and they have been properly put together to achieve it, they will qualify as transparent conveyers of content whose warrants are of the a priori kind.

This is where Burge's work figures significantly in the discussion of computer simulations. Burge's reflections on computer-assisted mathematical proofs, as applied to the context of computer simulations, emphasize the role of what he calls transparent conveyers of warrants. So far, I have discussed some details concerning the notion of transparent conveyers and content preservation. I will discuss the idea of transparent conveyers in more detail below. However, for now, it suffices to note that a transparent conveyer is one that does not modify the content it conveys in any epistemically relevant way. This means that the processes/methods/devices by which the content is conveyed do not change in any significant way the nature of the knowledge itself nor the warrants that serve for its justification. One immediate problem for the application of the idea of transparent transmission of warrants in simulation practices is that we cannot, in fact, be confident that computer simulations do not introduce epistemically relevant changes to the content being manipulated.
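Two commonplaces of digital computation illustrate the worry. The following sketch is my own illustration (it assumes nothing beyond standard IEEE-754 floating-point arithmetic and a textbook forward-Euler discretization); it is not drawn from any of the simulation systems discussed in the literature cited here.

# (1) Finite-precision arithmetic: the machine never manipulates the real
# numbers of the original equations, only binary approximations of them.
print(0.1 + 0.2 == 0.3)   # False on IEEE-754 hardware
print(0.1 + 0.2)          # 0.30000000000000004

# (2) Discretization: a continuous law du/dt = -u must be replaced by a
# discrete update rule before a computer can execute it, and the
# replacement visibly alters the values obtained.
import math

u, dt = 1.0, 0.1
for _ in range(10):
    u += dt * (-u)        # forward-Euler step standing in for the continuous law
print(u)                  # 0.34867844...
print(math.exp(-1.0))     # 0.36787944...; the continuous solution at t = 1

In both cases the content that comes out is not, strictly speaking, the content that went in; and in the second case it is precisely this modification, the discrete transformation of a continuous equation, that makes the simulation executable at all.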
For example, as Eric Winsberg notes, simulation involves what he calls motley sets of practices and technologies (Winsberg, 2010). As I explore
further below, and as we saw in previous chapters, this motley nature of computer simulation means that the idea of transparent conveyance of warrants is not as applicable as it is in the case of computer-assisted mathematical proof. This is also the case given the inhomogeneity of components in computer simulations. But more importantly, as it turns out, the various and diverse components and processes involved in computer simulation do in fact change the content as it passes through them. Sometimes, it is by this very modification that they serve their purpose (as will be exemplified below in the case of discrete transformations of continuous equations to make them executable by computers).

When computer simulations are compared to the process of computer-assisted proof, the latter clearly involves a more homogeneous epistemic context (Barberousse & Vorms, 2014). In computer-aided mathematical proof one has a much better chance of ensuring the transparent transmission of warrants and the preservation of a priori content (Arkoudas & Bringsjord, 2007; McEvoy, 2008, 2013). This is in part because the very concept of proof pushes the endeavor of computer-assisted proofs to be more rigorous with the transparency and provability of the software and hardware components of the machines involved. Furthermore, given the specialized task and the nature of the task—a mathematical proof—special care can be justifiably taken to exhaustively test the hardware and software components of computer-assisted proofs. This is not the case—as has been extensively shown in other sections of this book—with the conventional development and use of computer simulations in science, particularly those that are developed in academic settings, which often have little adherence to industry standards.

This is, of course, ignoring the fact that even in conventional mathematical proofs—as opposed to computer-aided proofs—the process, one may argue, is not as transparent as one may initially assume. It turns out that there are several reasons to doubt the conveyers of mathematical proofs even when such conveyers are peer-reviewed, human-made publications (Frans & Kosolosky, 2014). This is evidence that Burge's starting point may be more problematic than he leads us to believe. In particular, it may be strongly presupposing the reliability of many things that actually are subject to reasonable doubt, which would in turn violate his Acceptance Principle.

This leads us to a more fundamental problem with taking Burge's strategy as the basis for the epistemology of computer simulation. The problem lies in his construal of our everyday reliance on the testimony of other people. As we shall see in the following sections, computer simulations and computational methods in general—as instruments deployed in scientific inquiry—are neither reliably transparent conveyers in all contexts, nor can they be regarded as equivalent to expert sources of testimony. Clearly there are occasions in which one is entitled to a belief independently of one's subjective grasp of the epistemic rights or warrants supporting that belief. We can be entitled to believe, for example, what the weather forecaster has told us independently of the state of our knowledge of meteorology (Dretske, 2000; Williams, 2000; Adler, 2015).
Given the epistemic entitlements described by Burge, we are frequently exempted from justificatory practices such as citing evidential support or giving reasons more generally (Wright, 2004; Burge, 1993; Lackey, 1999; Dretske, 2000; Davies, 2004; McGlynn, 2014). However, there is one last note concerning epistemic entitlements that must be addressed before I can move on.
As mentioned above, the way epistemic entitlements are usually deployed in the context of computer simulations strongly relies on Burge's view that, barring a reason to doubt, one should take an acceptance attitude towards a reasonable source. In the literature regarding the epistemic status of entitlements, however, entitlements are not treated as a monolithic and homogeneous type of warrant that simply gives the receiver of information a right to hold a belief that what is transmitted can be trusted (Wright, 2004; Lackey, 1999; Davies, 2004; McGlynn, 2014). It is unclear, for example, whether the epistemic entitlements at play in the views described above constitute a reason to trust the results of computer simulations or reasons not to doubt the processes by which computer simulations arrive at their results. If the former were the case, then an argument could be made that, given non-epistemic reasons—such as resource constraints, lack of a better option, etc.—we have the right to trust the results of computer simulations. This is, however, not strictly speaking an epistemic warrant, even if it is a practical warrant (Dretske, 2000). If the latter were the case, however—that the entitlement represents a reason not to doubt the processes rather than a reason to trust the results—one would wonder how it is that such a warrant was acquired. How is it that we have a non-evidential right not to doubt the reliability of a process/method/device deployed in science? Particularly a novel, yet-to-be-sanctioned one. This is a sort of entitlement that even those who really want us to trust the results of computer simulations would want to stay away from, if only because it implies a sort of aprioricity. For them, a warrant of the latter kind—a warrant not to doubt the processes by which a computer simulation arrives at its results—is grounded in the vast empirical evidence on their behalf and not in some rational precondition. But this is exactly what we do not have in the case of novel methods or technologies such as computer simulations. Hence, either interpretation of what an entitlement is an entitlement to comes up short of what it is meant to deliver. Nevertheless, let us look more carefully into the details that problematize the Acceptance Principle above, as well as the assumptions behind some deployments of epistemic entitlements in the context of computer simulations.
7.2 Transparent Conveyers, Expert Testimony and Computer Simulations

In the following sections I offer a series of arguments that directly undermine the deployment of epistemic entitlements as an adequate justificatory source for our reliance on computer simulations. The first argument concerns the treatment of the computational methods, processes, and components underlying computer simulations as anything resembling transparent conveyers. For many reasons, detailed below, I will show that they are not. The second argument will show that computer simulations and their associated components and processes should not be accorded an epistemic position similar to that which we grant ordinary epistemic practices. In particular, I argue, computer simulations are not—like
perceptual or testimonial sources may be—the kinds of epistemic sources whose well-functioning should be taken for granted. That is, unlike certain epistemic elements of inquiry, computer simulations do not enjoy the status of a precondition for the acquisition of knowledge, nor do they represent a source of possible knowledge whose functioning grants an environment in which one can assume that there is no reason for doubt. A second and conclusive argumentative section will follow, which will show via an example that, when it comes to scientific inquiry and computer simulations, a pragmatist position is ill-equipped to capture the superior normative expectations of science vis-à-vis everyday epistemic practices. In this example, I show that what science is after is, at the very least,4 slightly more than what a group of oracle-believers requires of their predictive tool. If we are to make sense of our reliance on computer simulations for genuine scientific inquiry and for socially consequential decision-making strategies, such as policy and protocol design, then simply expecting them to work, assuming they do so, and/or appealing to their predictive power are not epistemically satisfactory approaches.
7.3 Computer Simulations Are Not Transparent Conveyers

Following Burge's application of the Acceptance Principle to the epistemology of computer-assisted mathematical proofs, Barberousse and Vorms (2014) argued that the epistemic warrants supporting our reliance on computer simulations are not intrinsically constituted or enhanced by appeal to empirical evidence. They argue that the warrants behind our trust in computational methods can and do take the form of Burge-style entitlements. Just like other devices whose role, like that of memory or perception, is to transmit and preserve information intact, computational methods that manipulate mathematical content appropriately should be considered transparent conveyers (Burge, 1993; Barberousse & Vorms, 2014). In a transparent conveyer, if the propositions being conveyed are justified a priori, for example, then this justification will not change from a priori to empirical when we assess its
4 Although my personal view on the requirements and the normative aspect of scientific inquiry is slightly stronger, in that it calls for a full view of the scientific enterprise as an enterprise of understanding and/or elucidating the way the world is and/or works, for my argumentative purposes here only a minimum threshold claim is required. That is, for the sake of the argument in this section the only thing required to show is that scientific inquiry is, at the very least, meant to be a slightly better external methodology of record, meant to overcome direct observational underperformance of everyday epistemic sources such as perception, memory, etc.
output. Following Burge, this is called content preservation.5 A method, process, or device ensures the preservation of the content it transmits by not introducing any epistemic warrants that do not belong to the kind of warrants by which the content it manipulates is already justified. If I were, for example, conveying to you the result of a geometrical theorem, but in the process of doing so I appealed to the authoritative social position of Pythagoras as a reason to endorse its results, I would in fact have introduced into the transmission of the theorem's propositions an epistemic warrant (appeal to authority) whose nature (a posteriori) is distinct from the warrants of the content being transmitted (which are presumably of the a priori kind, given that we are talking about a geometrical theorem). If, in contrast, I do not appeal to Pythagoras' authority and rely solely on my assumedly well-functioning memory, I ensure the content is preserved in Burge's sense, and I and my well-functioning memory serve merely as transparent conveyers of geometrical propositions.

From this perspective, a warrant for belief can be a priori even if the manner in which one attained the warrant depends on some contingent fact about the world, such as the fact of having a brain or some particular perceptual capacity, or even memory. The fact that a human being needs a brain to do arithmetic is irrelevant to the justification of an arithmetical proposition. One implication of this position is that a cognitive capacity such as memory "is no more intrinsically an empirical faculty than it is a rational faculty. Its function in deductive reasoning is preservative" (Burge, 1998).6 In other words, when memory does what it is supposed to do, it conveys information without altering it in any epistemically relevant manner. However, as Burge acknowledges, when it comes to the transmission of information outside one's own cognitive processes, things are more complicated. Although Burge believes that there are similarities between the memory example and the way we can gather knowledge from other sources (from testimony or otherwise), he acknowledges that it is "only in special cases that a priori knowledge can be preserved through interlocution" (Burge, 1998, p. 5).

5 Though they acknowledge that there is a substantial difference, in terms of content preservation, between computer-assisted mathematical proofs, such as the ones Burge focused on, and complex computer simulations, they justify the aprioricity of a scientist's entitlement to trust a simulation in virtue of a second strategy (which we will inspect in detail in the following section): trusting computer simulations, they argue, is like trusting expert testimony: you do not need full and transparent access to the expert's thought process to trust it. Though expert testimony may be fallible, for this view, casting a general doubt on the practice absent specific reasonable doubt can be seen as irrational. This is in part because it is seen as undermining much of what we take to be reasonably acquired knowledge. It is also deemed irrational because it is assumed to be made in the absence of a reason to doubt. As we shall see below, that this absence is the case is not immediately obvious in scientific inquiry, particularly when dealing with the sanctioning of an instrument for scientific purposes.

6 Similarly, when we rely on our senses, we grant that when they are working the way they are supposed to, they transmit information without altering it. That is, as explained above, they are transparent conveyers. Thus, though one can acknowledge their fallibility, in the absence of a plausible reason to doubt their well-functioning, it is rational to rely on the senses (Burge, 1993).
It is very important to note, for example, that Burge's account of content preservation and transparent conveying requires that the recipient already has reason not to doubt the source.7 Much of Burge's argument relies on the intelligibility of other minds and the inescapable need for trust in testimony as a condition for the possibility of the most basic epistemic and linguistic practices. However, on my view, it is crucial to recognize that scientific inquiry is not a basic epistemic practice but rather a very special cultural practice that is designed, developed, and/or deployed in large part to overcome the evident limitations of our ordinary epistemic conditions.8 This particular point will be explained below, but, for now, suffice it to say that, at the very least, the scientific enterprise aims to overcome evident and ordinary limitations concerning memory (by writing things down), observation (by repetition), and possible bias (by efforts in transparency and reproducibility). Furthermore, when the issue is trust in a technical artifact whose prowess and adequacy are still very much on trial, we are already far removed from basic epistemic and linguistic practices. At the very least, it should be acknowledged, the preconditions for the possibility of knowledge were well in place by the time computer simulations came about. That is, knowledge is evidently possible without them. Hence, it is not the case that we must trust them because they are an essential, inescapable element of knowledge acquisition.

Content preservation, furthermore, fails in an immediate sense in the context of instruments deployed in the aid of inquiry. That is, using the notion of content preservation as a strategy to justify our reliance on a method/process/device fails because the warrants that underwrite the content being manipulated by scientific instruments are inadequate to warrant trust in the instruments themselves. Even if the content being transmitted is justified via a priori warrants, whether the method of transmission is adequate and/or capable of transmitting such content, whether it can do so transparently, and whether the warrants that justify our reliance on such a method are of an a priori or an a posteriori kind are questions whose answers must come from epistemic evaluations independent of anything there is to say about the warrants that underlie the content being transmitted. In other words, it is not clear that the warrants that justify the content in question are, or can be, the same as the warrants that justify a conveyer. We should further note, for example, that often the warrants that justify our reliance on transmitted content have only an asymmetrical relation with the warrants that justify our reliance on the method of transmission. This is particularly the case when the content is already justified, or justified via a priori warrants. What this means will become clearer in the next section.
As we will see below, and briefly mentioned above, whether one has reason not to doubt, no reason to doubt, or reason to trust represent significantly distinct challenges for this prerequisite. 8 Philosophy is another social practice that sets abnormally high epistemic standards. In the philosopher’s case, we aim high with respect to what should count as a rationally persuasive argument. 7
7.4 Warrants for One Thing Are Not Necessarily Warrants for Another

Consider, for example, that in some instances the justification of our reliance on some content is negatively affected if there are reasons to doubt the transparency of the conveyor. If you are a neighbor known to be delusional, for example, I will have reason to doubt that what you just said about the Pythagorean theorem is in fact true. In an even simpler situation, if I do not consider you to be a full expert on Pythagorean matters, I may receive your information concerning the relationship between the heights and lengths of triangles with a reasonable suspicion of alteration. So, reasonable doubt about the method of transmission has an immediate negative effect on the warrants for trust in the content. By contrast, while reasonable doubt regarding the warrants of the content can also have some effect on the reasons for trust in the method of transmission, it is not to an equal degree. We can see this when scientists take a second look at their inputs well before they cast any doubt on their method of analysis. Whether or not a method of transmission is a reliable method is still an open question even when the justificatory force of the content being transmitted is a closed matter. Just because a scientist is dealing with algebra does not mean that they are any more justified in trusting the device they are using to manipulate it (an abacus, an analog machine, or a digital computer), or that the same warrants justifying the content transfer to, or have much to say regarding, the warrants related to the reliability or trustworthiness of the device.

Similarly, and this is an important detail concerning the asymmetry mentioned above, if the device is warranted, then the reasons for our reliance on the content being transmitted are in fact enhanced. If I know for a fact, and ahead of the fact, that you are a particularly brilliant mathematician, and you tell me some truths about the relationships between numbers, I have at least one more reason to warrant my belief in the propositions conveyed. In this sense, while the warrants justifying the content have little to nothing to say about the warrants justifying our reliance on the device that transmits it, the warrants justifying our reliance on the method of transmission have a larger impact on the warrants that justify our accepting the content being transmitted.

This proves important when we are considering computer simulations. Just because computer simulations solve mathematical equations (even if we were to accept that that is all they do), this does not do much for the reasons to trust the results of the computer simulations themselves, while having reasons to trust computer simulations does in fact provide further warrants for the justification of our reliance on their results. As we will see below, computer simulations are not transparent conveyors; if we are to trust the arithmetic results of computer simulations (that is, once they are transmitted by a mediator), and if we are able to warrant them, it is in fact because of the introduction of further warrants that do not belong to the kinds of warrants providing justificatory force to the content they manipulate. It will be because there are other evidential warrants providing justificatory force. That is, if there are any warrants at all.

Unlike perception and memory, or even the human tendency to trust others, the introduction of computational methods into scientific inquiry has been the product
of artifacts and practices whose reliability has involved a gradual process, including laborious efforts on the part of scientists and engineers (Winsberg, 2010). As Evelyn Fox Keller points out, contemporary uses of computational methods in science are the product of massive collaborative efforts since the Second World War involving trial-and-error approaches to practical problems (Keller, 2003). As Winsberg argues, the development of computer simulations includes an array of distinguishable evidential benchmarks and other features, such as considerations of fit and calibration, in addition to extra-theoretical and extra-mathematical engineering practices (Winsberg, 2010). These practices involve both hardware and software innovations that are familiar parts of scientific practice but are seldom discussed in the epistemology of science. From architectural considerations for optimal processing to mathematical discretization, justificatory practices involving computer simulations involve facts about the history of their successful deployment, but also the appropriate management and assessment of uncertainties, errors and calibration procedures (Ruphy, 2015). These sanctioning processes, and therefore the computer simulation itself, cannot be regarded as transparent in the sense required by Burge.9 As mentioned above, the kind of homogeneity that transparent conveyers of epistemic warrants require is simply not available. For example, given the role of engineering constraints, implementation, discretization, questions of fit, calibration, and countless other non-explicit features of computer simulation, it would be a mistake to think that the warrants underwriting our trust in these artifacts are simply derived from the formal character of the computer code that these systems run.

Consider the well-known features of discretization techniques that we have discussed in previous chapters. Many computer simulations are the result of a process that includes the transformation of the differential equations in a mathematical model into expressions that represent approximate values for specific finite spatiotemporal states of a system.10 Discretization techniques involve determining practical ways of implementing calculations in manageable chunks. This transformation/translation procedure introduces epistemically relevant decisions on the part of the modeler that are distinct from the original mathematical model. In fact, the discretization techniques are sometimes of such a complicated nature that mathematicians specialized in conversion methods are enlisted in order to achieve some of the mathematical transformations required.

Footnote 9: As we will see later, they cannot be ignored either, as is the case when the justificatory force behind their sanctioning is said to be drawn from a non-evidential warrant. What ought to be done is to take these benchmarkings seriously and ask whether or not they are representative of a theoretically grounded process that included very carefully curated data in addition to the empirical trials mentioned. Only then do these qualify as good reasons to support our trust in computer simulations. As it turns out, and as I will argue later, this may not be the case. If so, computer simulations may not be granted entrance into the canon of laboratory instrumentalia.

Footnote 10: One can think of discretization as trying to approximate a circle by drawing one regular polygon after another, with more sides each time, starting from a square. Of course, a square is a terrible circle, but a polygon with millions of sides may be visually indistinguishable from a circle for practical purposes. Nevertheless, at each point, one is not drawing a continuous curve but rather a series of straight lines at an angle from each other.
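To make the analogy in footnote 10 concrete, here is a minimal illustrative sketch in Python (mine, not drawn from the text): the perimeter of a regular polygon inscribed in a unit circle converges on the circle's circumference as the number of sides doubles, yet at every stage the figure remains a finite collection of straight segments.

```python
import math

def inscribed_polygon_perimeter(n_sides: int, radius: float = 1.0) -> float:
    """Perimeter of a regular n-gon inscribed in a circle of the given radius."""
    return 2 * radius * n_sides * math.sin(math.pi / n_sides)

# Circumference of the unit circle that the polygons approximate.
target = 2 * math.pi

# Starting from a square and doubling the number of sides: the error
# shrinks at every step, but the drawing is never a continuous curve.
n = 4
while n <= 4096:
    approx = inscribed_polygon_perimeter(n)
    print(f"{n:4d} sides: perimeter = {approx:.10f}, error = {target - approx:.2e}")
    n *= 2
```

The point of the analogy survives in the numbers: however small the error becomes, the discretized object remains a different kind of object from the continuous one it approximates.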
7.4 Warrants for One Thing Are Not Necessarily Warrants for Another
123
In such cases, where specialists must be enlisted, it is not only the idiosyncrasies and/or available tools of the modeler, or those of the coder, that come into play, but also those of a whole other individual or team of specialists. That is, often the reason a specific set of equations is chosen in the mathematical model has nothing to do with the challenge of approximating the model via a computer simulation. Computational constraints, such as whether a machine can handle the mathematical dynamics observed in the world, need not be a constraint on mathematical models. By contrast, the choice of specific discretization techniques in a computer simulation will be responsive to the practical necessity of implementing a model in a digital device. In fact, when it comes to discretization techniques, the decision to select one technique over another is often a matter of engineering trade-offs (Winsberg, 2010, pp. 12, 23). Discretization techniques, a fundamental aspect of computer simulations, undermine the possibility that computer models can serve as transparent conveyers insofar as they alter the nature of the epistemic justification of the content being manipulated. And this is so even when one grants that they are working as intended, as Beisbart (2017) suggests. Further, even if the reasons to trust the results of the equations in the original mathematical model are grounded in well-established theory, or are supported a priori on purely mathematical grounds, the introduction of discretization techniques involves an engineering element that alters the justificatory considerations involved in a computer simulation of the mathematical model.

A further aspect to consider is that the actual numbers related to the discretized mathematical models run by the components in computer simulations are constrained by a limited memory array with a restricted number of available digits in which to express a result. That is, several important computer components are often tasked with storing computational results within memory arrays that have a limited number of slots. Because of this, the actual results of a numerical computation are rounded up and/or down to the most relevant number of digits that can fit into the array. This happens on every component, on every machine, and on every computation of every simulation. At every turn, from one relevant component to another, the results are rounded up or down in order to fit the constraints of the machine, the architecture, or the implementation model of a computer simulation. Even at a fundamental level, while the mathematics involved in mathematical models may include continuous equations and assumptions about infinite elements in their variables, discrete mathematics does not: it is parametrized into segments, including where the calculations begin and end. This introduces a significant discrepancy, or error, between the actual results of a computation and the results that can be conveyed given the memory constraints of a component. While methods exist to assimilate the numeric discrepancies generated by such processes, the fact remains that the rounding, the error correcting, and/or the approximation methods represent an extra manipulation of the content in epistemically meaningful ways.11
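A minimal Python illustration of this kind of constraint (my example, not the author's): the decimal value 0.1 has no finite binary representation, so every arithmetic step stores a rounded stand-in for the intended value, and the discrepancy compounds over repeated operations.

```python
# Each addition stores the nearest representable 64-bit float, not the
# mathematically exact sum, so the stored content drifts from the model's.
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)      # False: the machine's sum is not the mathematical sum
print(f"{total:.17f}")   # shows the drift, e.g. 0.99999999999999989

# Over many operations the per-step rounding accumulates.
big_total = sum(0.1 for _ in range(1_000_000))
print(abs(big_total - 100_000.0))  # small, but measurably nonzero
```

Production simulations manage such discrepancies with error-assessment and correction techniques, which is precisely the extra, epistemically relevant manipulation at issue here.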
Footnote 11: There is a lot that can be said about what is and what is not epistemically relevant in a computational operation. Durán and Formanek (2018), for example, argue that complete surveillability of computational processes is unnecessary for validation and verification purposes, and therefore unnecessary for reliability assessment and our trust in computer simulation results. Others—most notably Symons and Horner (2014)—suggest that the lack of accessibility to error sources and rates in a computational process represents a significant epistemic challenge to reliability assessments of software technology. This complicates matters for formulations of epistemic opacity that hinge on the term "epistemically relevant elements", such as that of Paul Humphreys (on which much of the literature on epistemic opacity rests). However, for my purposes and for the scope of this section, I am using the phrase "epistemically significant" to refer to elements of the computational process involved in computer simulations that introduce a procedure by which the content being conveyed is transformed, limited, and/or enhanced by technical modifications that are not strictly formal and thus constitute a violation of Burge's sense of content preservation mentioned in this section. The physical constraints of memory arrays that modify the mathematical content of computational models, for example, as well as the correcting mechanisms introduced to mitigate the extent of such modifications, are related to and solved by engineering issues and material elements that do not belong to purely formal, mathematical strategies. How we judge these strategies—as epistemically sound or not, trustworthy or not—is independent from the way we judge the mathematical content being manipulated by the machines in question. As such, they introduce a relevant epistemic modification to the content they are conveying. If this is multiplied by the several components and/or machines involved in computer simulations, it is easy to see how the rounding of results represents an epistemic challenge to the notion of transparent conveyors in computational processes.

Footnote 12: McEvoy, in his response to Tymoczko and Kitcher on the aprioricity of computer-assisted mathematical proofs, concedes as much by saying: "What determines whether a proof is a priori is the type of inferential processes used to establish the conclusion of that proof. If the method of inference for any of the steps in the proof is a posteriori, it is a posteriori" (2008, p. 380).

Some such processes are in fact epistemically opaque and do not offer easily identifiable ways to correct for their artifactual results (Kaminski, 2017). Thus, in a very basic sense, computer simulations cannot count as transparent conveyers given Burge's characterization, because justificatory elements distinct from the ones warranting the original content do in fact enhance, decrease or constitute the epistemic warrants of the manipulated content (McEvoy, 2008).12

Furthermore, as we noted above, computer simulation involves independent epistemic warrants at different stages. Consider the following. Even if the mathematical model is fully warranted and works as intended, and even if the discretized version of the model also works as intended, there is no reason to think that we have reason to trust the latter because of the former, as explained above. When a discretized model is ultimately implemented in a device, for example, it requires yet another epistemically relevant transformation in the process of coding. In coding, considerations of fit, trust and/or reliability of a given algorithm will depend on factors independent from those involved in the discretization process. Unlike discretization techniques, which involve established techniques and theories, code is often the result of highly idiosyncratic problem-solving approaches.

Coding practices are error-prone. Consider, for example, the way many significant software bugs are 'patched'. That is, they are not 'fixed' per se, as in actively engaging with their malfunctioning components or erasing an erroneous block of code. Rather, code is added that 'patches' the original code by superseding previous functionality. These patches almost always introduce their own new bugs, making the process of assessing the reliability of the software even harder than it already was. In the process of patching, as is often observed, software, which is at the core of any and all
processes of computer simulations, tends to grow over time merely because it can (Holzmann, 2015).13 The apparent inevitability of errors in the practice of coding is an empirical reason to decrease trust in any content manipulated via computational methods.14 This is particularly the case if the warrant in question is an epistemic entitlement. Thus, even if one grants that a computer simulation seems to work as intended, it cannot be regarded as a transparent conveyer in Burge's sense.

Footnote 13: This is especially the case now that memory has become so inexpensive in modern computing. The 'true' command in Unix, for example, which originally consisted of an empty file with nothing to execute, grew to nearly 23,000 bytes from 1979 to 2012.

Footnote 14: See Horner and Symons (2019) for a review of the empirical literature on software error. They show that there has been a relatively consistent level of error reported in empirical studies from 1978 to 2018: for every 100 lines of code, between 1 and 2 lines contain errors. (See also Symons & Horner, 2017.)

If, as Burge suggests (1998, p. 4), one of the main issues to address is whether an entitlement or justification has any independent justification apart from empirical evidence, or, as Barberousse and Vorms (2014) put it, whether the justification of a warrant is in any way constituted or enhanced by empirical means, then the answer concerning the warrants underlying our trust in computer simulations is clear. Typically, the results of computer simulations have been through a process that definitely alters whatever formal content they have manipulated, by a series of processes that are dependent upon empirical considerations. In very practical terms, for instance, software design is often limited by empirical and engineering constraints. Furthermore, some of these constraints are such that alteration is the way they work, as in the case of the discretization methods and coding that underlie the transformation from mathematical models to computer implementations of a target simulation.

In an effort to provide a more refined version of the Burge-style entitlements account, Beisbart (2017) acknowledges some of the practical complications that we have noted, but argues instead that warrant transmission takes place from one stage in the simulation process to the next. On his view, the results of computational methods can be said to provide knowledge in virtue of a sequence of inferential transfers from one level of propositional content to another. That is, at each step in the simulation process there is a result, a proposition to be considered: from target phenomenon to mathematical model, from mathematical model to computerized model, from computerized model to computational implementation, and from single implementation to iterations (Beisbart, 2017, pp. 160–162). The important point here is that the warrants of the previous step are not merely transferred to the next step as providing justificatory force within the next step. Nor does much hinge on whether or not the warrants are unaltered. Rather, once each step is concluded with its own warrants (whatever they may be), we are warranted in inferring that we can move on to the next step without having to exhaustively investigate the warrants of the previous step. In this sense, there is still an assumption taking place, and there is still an entitlement kind of warrant at play as we move along the process, but it is not playing the same role as it does in the views explored above,
namely those of Barberousse and Vorms and other Burge-style arguments. At any given step, our trust is justified if we assume that our methods are the adequate ones and that they work as intended. In other words, we are warranted in inferring from one stage to the next if we grant that each level is somewhat/somehow warranted to begin with. For Beisbart, an agent is "inferentially justified in believing a propositional result constructed from a computer simulation if she is justified in believing the dynamic equations used to feature the system under scrutiny and if she is justified to think that the simulation works as intended" (2017, p. 171). However, for the agent to be justified in believing that the simulation works as intended, all that is required is that the "epistemic agent has sufficient reason to believe that it does so" (p. 169). This "sufficient reason" in turn implies the assumption that at every inference step, from mathematical model to final display, certain types of errors (round-off errors, modeling errors, hardware failures) are "excluded or are sufficiently small" (pp. 161–163). At every step, however, trust is warranted with the proviso that there are no significant epistemic challenges (2017, p. 162). As we will see, on my view, this is exactly the open question that has not been, and ought to be, settled.

In all these views there is the assumption that we could assume, or that we must assume, the well-functioning of computer simulations in order to allow for scientific inquiry to be conducted through them. However, I have shown that the latter is not the case: we do not have to assume anything regarding computer simulations' epistemic status (computer simulations are not like other preconditions for the possibility of knowledge, as some argue). Furthermore, the aim of this section is to show that the former, that we can assume that computer simulations work well, is also a questionable strategy, in particular because there is nothing that warrants this assumption. In other words, we have no epistemic reason to believe that the results of computer simulations are true, and in fact we have good reason to doubt that the processes are reliable to start with: they are novel, their aggregation is novel, or their integration is novel. The project of sanctioning computer simulations, whether in practice or in normative projects such as the present one, is precisely to establish criteria by which they can be assessed as well-functioning, and then to apply those criteria and figure out whether their place in scientific inquiry is indeed warranted or not.

On Beisbart's account, a scientist is ultimately justified in believing a proposition derived from a computer simulation because what the computer itself has done is draw inferences from the propositions of a discretized conceptual model (2017, p. 165). The scientist draws the final inference that ties the results back to the target phenomenon by assuming that at any given step the methods involved worked as intended. This is, in a sense, why Beisbart ultimately believes that computer simulations are more akin to arguments than to experiments. Note that Beisbart recognizes the distinction between warrants supporting the content of simulations and warrants supporting our reliance on their results. Strikingly, his analysis highlights precisely the relevant empirical and historical considerations that must ground our trust in computer simulations: namely, a strong foundation in theoretical principles, and processes that ensure that well-curated data are put into them.
As we have seen, the practices involved in simulation processes are such that they are typically not the product of transparent conveyers, even when they seem to work as intended. The reasons why each of the steps can be said to be reliable and/or trustworthy are independent from the step before or after it, offering no single kind of warrant transmission that spans the entire process of computer simulation. Warrant transmission in purely epistemological contexts, without the addition of complex technical artifacts as mediators, is already a complicated philosophical issue (Beebee, 2001; Davies, 2004; Pryor, 2012; Moretti & Piazza, 2013). Furthermore, the question of whether epistemic entitlements can at any point provide the justificatory force required from genuine epistemological processes poses important challenges that are not easily settled. Whether an epistemic right such as a non-evidential entitlement can ever become a sufficient justificatory source to generate knowledge is often cast as epistemic alchemy and regarded as highly suspicious in epistemology (McGlynn, 2014). Can knowledge proper, usually thought to be the product of evidential efforts, suddenly emerge from a series of steps that prominently figure non-evidential warrants?

If such epistemic alchemy is possible, then we all know a lot of things without having done proper evidential diligence; and that may be so. If epistemic alchemy is not possible, then only some of us know some things properly, and the rest of us trust but do not know. This is also strongly plausible. While this may be a conservative view of what knowledge really is out there and of who really does know something, it strongly resonates with the intuition that some of us really do know physics while others do not. This is a restrictive view I am willing to take. On the other hand, if epistemic alchemy is accepted in order to rationalize the acceptance of computer simulations as knowledge-generating instruments, then there is in fact no distinction between those who go through strenuous evidential efforts to ensure that something may be the case and those who do not. Applied to our discussion of computer simulations and their results, this seems to me highly unlikely.

As I will explain below, whatever reason we have for trusting the results of one stage in a simulation as we move to the next should generally not be ad hoc; it should not be based solely on the factors that are unique to particular instances of a simulation practice itself, but should be the product of established theoretical principles and engineering standards. The methods and technologies used in practice are not simply deployed in an unchanged form across different applications, processes and platforms. They are modified, often in non-trivial ways, to suit the task at hand. Throughout these modifications there are ways in which one can establish benchmarks that help to sanction the results of a simulation. These might involve, for example, repeated runs and internal comparison, comparison to the outputs of other simulations, or, most importantly, comparison to real-world data (Symons, 2008; Winsberg, 2010; Gramelsberger, 2011). Thus, when computer simulations can be trusted, it is because of their adherence to theoretical principles, empirical evidence, or engineering best practices, and not because of their output alone.
Like other instruments deployed in scientific inquiry, computer simulations do not simply inherit their warrants in virtue of their manipulating
formal syntax in a rule-governed manner. They also do not directly inherit them from the fact that the theoretical principles themselves, or the data analysis methods themselves, are warranted. Rather, just like other instruments, they must invoke their own set of warrants (which in turn must include considerations of coherence with independently justified theoretical principles and empirical observations). It must be noted, however, that unlike other instruments in science, whose implementation and reliability have often been determined through processes that depended on well-established traditions and principles, digital computers and computer simulations have had a relatively short period of use. And while some scientific instruments have been adopted quickly and successfully in restricted domains of inquiry (e.g. the electron microscope, MRI, etc.), computer simulations are applied so ubiquitously that the assessment of their success is not as straightforward as it is for many more targeted scientific instruments.

As mentioned several times above, in addition to the importance of recognizing different kinds of epistemic practices in different stages of the construction of the simulation, we also need to consider that even when the foundational components of the simulation are sound, the process of combining them into a working application can fail in many challenging ways. To put this point simply, we can say that the inference from warrants at the level of the "parts" to warrants at the level of the "whole" computer simulation is not sound. The reason it is a mistake to regard an epistemic warrant as being successfully conveyed from relatively simple, "component-level" inferences to the behavior of the simulation as a whole, even in cases where the components of a software system are well understood, is the role of software engineering itself. Software, which is an essential part of any general-purpose computing system, is a source of error insofar as it is created via human engineering and is built to serve human purposes. Human-engineered software contains errors at the level of coding and vagueness at the level of specification (Tymoczko, 1979, p. 74; Fresco & Primiero, 2013). These errors mean that normal notions of epistemic-warrant transmission from elegant and well-understood foundations to the behavior of aggregates of these components simply do not hold (Winsberg, 2010). In other words, as Stéphanie Ruphy notes, "the computer simulation does not simply inherit" its epistemic credentials (2015, p. 139). Rather, its reliability is the product of a range of diverse sanctioning processes.

As we have seen in this section, our trust in computer simulations takes place within the motley traditions of engineering and scientific practice. The actual process of building a simulation involves distinct stages, each of which has its own epistemic standards and warrants. This heterogeneous context makes the idea of transparent conveyers inapplicable to the epistemology of computer simulation. Furthermore, as we will see in the next section, computer simulations are not themselves what should be taken as good sources of testimony. This is because it is their credentials that we are trying to establish. To treat them as possible sources of expert testimony is already to grant them the 'expert' part of expert testimony, the part that gives any testimony the credentials that entitle anyone not to doubt it, if one were ever to appeal to such a position or epistemic warrant.
7.5 Computer Simulations Are Not Themselves Sources of Expert Testimony

This section distinguishes between the ordinary epistemic thresholds involved in trusting others and those that ought to be deployed in trusting computer simulations as expert sources in scientific inquiry. According to Burge-style views of epistemic entitlement, when we are dealing with a given piece of testimony that we have no plausible reason to doubt, the default rational position is to accept it as truthful (Burge, 1993). If, for example, I hear that weather models predict that a hurricane is likely to hit my city, it would be rational for me to heed the meteorologist's warning without rigorously investigating the methods and evidence supporting the prediction. In this scenario, it would seem that I am trusting the model in the same way that I would trust an expert. This, however, is not the right way to understand the non-scientist's attitude towards the meteorologist's prediction.

First, we would argue that in the weather forecasting case I do not trust the models as experts. Instead, I trust the judgment of the human meteorologists with respect to the models in question. But even then, the reasons why I trust the human meteorologist include a wide array of epistemic warrants of different kinds and with different degrees of justificatory force. I would, for example, at the very least, trust one meteorologist over another depending on the journalistic trustworthiness of the station broadcasting the predictions. If so inclined, I could appeal to past predictions and their accuracy, or reference the sort of education one meteorologist has received over another, if I had the option. Hence, the 'expertise' of the human meteorologist would have been established by me, or by others I trust (news sources, institutions, etc.), before I trust the meteorologist herself. Whether computer simulations have been similarly vetted is what we are trying to find out, not what should be assumed when we are wondering whether to trust their results or not.

Second, while it is reasonable for me to regard the meteorologist as a reliable source of testimony (Borge, 2003; Jenkins, 2007), it would not be reasonable for the meteorologist him- or herself to maintain this epistemic attitude towards the computer simulation. It is reasonable for us, in ordinary life, to trust the weather forecaster insofar as they have the endorsement of experts. Neither we, nor the weather forecaster, should treat the simulation itself as an expert. Again, as we saw above, the simulation in itself is not self-validating. When things go well, the community of experts and the tradition of using simulations in successful scientific practice ground the confidence of experts in the use of simulations. A scientist may certainly be justified in relying on the computer simulation's pronouncements once it is integrated into her practice or into the practice of other experts (Audi, 1997). As explained above, this is a long and gradual process; sometimes it is a process that can take decades. It is never the case that a scientist should give as a reason to trust a simulation the claim that the simulation is an expert. To treat our trust in simulations by analogy with our trust in human experts would be to miss the actual warrants that ground the trust granted by human experts to simulations. When non-experts take the testimony of experts seriously, they are not granting that same level of credence to the tools used by the experts they trust.
For example, they might not even know of the existence of those tools. In our case, most consumers of the weather forecast simply trust the experts to interpret computer simulations and other tools for them. In other words, the one thing we humans assume when we trust other humans in the way described above is that the human in question has done due diligence and passed all the credential-awarding procedures that allow them to interpret, and then convey somewhat transparently, what the science says about the weather, with an instrument as an enabler. We do not assume the instrument to be the conveyer by itself, or the one that merits the trust by itself, and certainly not by default.

Justification for trusting testimonial evidence comes from observing a "general conformity between facts and reports [by which] with the aid of memory and reason, we inductively infer that certain speakers are reliable sources of knowledge" (Lackey, 1999, p. 474). It is the conformity between facts and reports that does the heavy epistemic lifting when we trust experts. The facts and reports are analogous to the theoretical foundations and empirical processes in the context of more advanced epistemic practices such as science. What is described above may be the case for those humans that we consider experts, but it certainly is not the case for computer simulations. For simulations, the reason that experts would count them as reliable sources of knowledge is their relationship to theoretical principles and engineering best practices, in addition to their predictive successes. For example, a reason to trust a given computer simulation of a system is not merely that it is able to predict a given state of the system, but rather that it does so in the appropriate way: in conformity with the laws of physics, in conformity with engineering practices that are amenable to error assessment, etc. (Resnik, 1997; Symons & Alvarado, 2016).

All of this is to say that the task for experts in the evaluation of computer simulations is fundamentally different from the consumption of computer simulations by non-experts. Even if we had to accept that some of us, indeed most of us, have to trust computer simulations to some extent as their results are transmitted by scientists and others, this is not the case for those involved in the development and deployment of computer simulations. Furthermore, it is definitely not the case for those of us seeking a philosophically sound argumentative strategy to establish their epistemic status. Non-experts depend on experts to have certified the relevant computer simulations as worthy of attention. The testimony that non-experts are trusting is the testimony of expert communities, not directly the output of their simulations. A scientist relying on computer simulations should not trust their results simply as a matter of a default rational position. A philosopher trying to establish the normative guides by which a novel scientific instrument is to be deemed epistemically viable in the context of scientific inquiry has even less reason to appeal to such a strategy.
7.6 Epistemic Technologies, Epistemic Trust and Epistemic Opacity

At this point we can say that computer simulations are not only instruments, but that they belong to a special class of instruments designed, developed and deployed in very particular contexts to deal with very particular tasks. They are epistemic enhancers, as Humphreys (2004) rightly categorized them: they are designed to be deployed in epistemic contexts (inquiry) and they are designed to carry out epistemic tasks (enhancing our ability to gather knowledge). Yet, unlike the telescope, their relationship to knowledge-acquisition is more directly related to the cognitive processes that we often associate with epistemic tasks. The ordinary optical telescope, for example, does not add or multiply. The telescope does not manipulate models or their content. As an instrument deployed in inquiry, the telescope does not engage with epistemic content such as information, nor does it manipulate it. Telescopes manipulate light through mirrors and lenses.15 In this sense, the telescope, while broadly deployed in epistemic contexts, is a bit further removed from carrying out epistemic tasks than the computer simulation is.

There are, of course, as we saw earlier in Chap. 5, different kinds of epistemic enhancement, and the status of computer simulations as epistemic enhancers may very well be due to their combined enhancement capabilities. They are hybrid instruments in this sense. But it is important to separate them from other instruments that are deployed in epistemic contexts. Again, while an oscillating shaker in a chemistry lab qualifies as an epistemic enhancer, in that it is the kind of device deployed in inquiry, its nature and that of its work are vastly different from those of a computer simulation. And while a telescope or a microscope is more closely aligned with epistemic tasks than the oscillating shaker, since each is directly involved with the enhancement of a perceptual ability, we can nevertheless distinguish them from the kind of instrument that a computer simulation is and the kind of work that a simulation does. In this sense, computer simulations are part of what I call 'epistemic technologies': technologies that are designed, developed and deployed for epistemic purposes, in epistemic contexts, and whose main operations involve epistemic tasks. This is not to say that the computer simulation on its own is carrying out epistemic tasks such as knowing.16

Footnote 15: While the light input that a telescope receives can be interpreted as an informational source by an epistemic agent, light by itself is a physical phenomenon rather than an informational one. We can, of course, interpret light as information and even understand much about its behavior and nature through this framework, but this is akin to interpreting food intake as an informational process: useful but limited. The point here, however, is not to question the nature of light or the extent of the application of the theory of information to nature, but rather to signal that there is a distinction between the kinds of material that each instrument must manipulate in order to carry out its operations: computer simulations deal with values, models, computational processes, time and logical architectures; conventional telescopes deal with light, mirrors and lenses.

Footnote 16: Based on the concept of epistemic enhancers, there are certain technologies, in particular AI technologies, that are epistemic in important and particular ways. Not only are they deployed in epistemic contexts such as inquiry—something that computer simulations can also be said to be—but they are also designed and developed to manipulate epistemic content. More importantly, however, they are designed to manipulate such epistemic content through the carrying out of epistemic operations: e.g., pattern recognition, inference-making, etc. In this sense, although computer simulations are designed and developed to be deployed in epistemic contexts, and may deal with symbolic formal content, they do not do so epistemically. At least not to the extent that artificial intelligence technologies do. Artificial intelligence methods, such as machine learning, are not simple software algorithms but rather also deal with complex information processing that often includes semantic and relational structures beyond simple symbol manipulation. As such, they may be more of an epistemic technology than computer simulations are, just as computer simulations may be more of an epistemic technology than calculators, and calculators more than hammers.

However, in relation to the telescope, it is obvious that the computer simulation is closer to those latter tasks than the telescope is, just as the telescope is more closely aligned with epistemic endeavors and aims than a generic hammer or wall may be.

We have seen what it takes to sanction a scientific instrument in general. But are there any specific considerations we must entertain when it comes to trusting instruments that are explicitly made to enhance our epistemic abilities? As it turns out, there are (Alvarado, 2022a, b). Not only are these instruments a specific kind of instrument based on the human capacities they target to enhance (Humphreys, 2004), but there are also specific kinds of trust that can be allocated to distinct technologies on the basis of their relationship to such capacities. How do we assess this? The following is a set of considerations regarding the specificity of these kinds of instruments, as well as the appropriate kind of trust that can be allocated to them.

David Danks (2019) argues that we should focus on the function of both trust and the artifact that is being trusted in order to know what kind of trust is adequate for specific cases. Simply put, there are distinct kinds of trust, and they are allocated for different reasons in different contexts (Mcknight et al., 2011). Furthermore, not all constructs of trust are adequate for all technologies (Lankton et al., 2015). Danks argues that besides knowing what an artifact does in a given circumstance, we also need to know what trust is supposed to do for us in that context. Only then can we properly deliberate about and allocate the kind of trust required, and the extent to which we ought to allocate it. As we saw above, knowing the actual nature of a given artifact also helps us appropriately deploy the justificatory warrants that accompany such an allocation of trust. When we consider both the function of the artifact and the function that trust is playing in a given circumstance, we also get a more sophisticated account of a trusting instance. That is, we do not just get something like "A trusts B". According to Toreini et al. (2020, p. 274), we get something like the following: "A trusts B to do X (or not do Y), when Z pertains…"

This more sophisticated account of what it means to trust provides a formulation of trust that takes into consideration distinct agents (A and B), a very specific action (X) to be undertaken by someone or something (B in this case), a comparison class of undesirable contrast actions or outcomes (Y), and a context (Z).
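As a schematic illustration of the structure of this formulation (the rendering and field names are mine, not Toreini et al.'s), one might represent a trusting instance as a record with one slot per component:

```python
from dataclasses import dataclass

@dataclass
class TrustingInstance:
    trustor: str   # A: the agent allocating trust
    trustee: str   # B: the agent or artifact being trusted
    action: str    # X: what B is trusted to do
    contrast: str  # Y: the undesirable contrast action or outcome
    context: str   # Z: the circumstances under which the relation pertains

# A hypothetical instance in the spirit of this chapter's discussion.
example = TrustingInstance(
    trustor="research scientist",
    trustee="climate simulation",
    action="project regional temperature trends",
    contrast="pass off discretization or rounding artifacts as signal",
    context="the model has been calibrated against observational data",
)
print(example)
```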
The variable 'X' in this formulation is particularly important. It tells us what we are trusting agent B to do. Although when it comes to interpersonal trust one often ascribes or allocates a general sense of trust to those close or known to us—as when we say things like "I trust my spouse/employees/candidate", etc.—when we are talking about more formal expectations in the context of expertise, or of artifacts designed with a specific function in mind, we are, even unknowingly, more discerning about what we mean. As Oreskes (2021) puts it: "we would not trust a plumber to do our nursing, nor a nurse to do our plumbing" (p. 55). We trust each to do what we expect them to know how to do. As briefly mentioned above, artifacts are intentional, functionally identifiable objects (Symons, 2010). That is, according to Symons (2010), the functionally identifiable aspect of an artifact was intentionally designed into it.17 Hence, knowing what an artifact is entails knowing what it does.

Footnote 17: Some may call into question this assumption given that some artifacts can acquire functions beyond their intended design. While this may be true, unintended functions of an artifact are often only possible given that the artifact was designed in a given way with a given set of properties. In this sense, then, we can say that whatever the artifact does, even if it was not originally intended, is the product of an intention.

We saw already that the tasks that a computer simulation carries out are epistemic in nature. If we take into consideration, as we noted above, that there are distinct kinds of trust, that not all kinds of trust are adequate to allocate to all kinds of technology, and that in order to know what kind of trust we ought to allocate, and to what extent we ought to allocate it, we must consider the function of both the artifact and of trust, then we get the following. We trust computer simulations to do these epistemic tasks for us in these epistemic contexts, which entail epistemic aims and expectations. Hence, following Danks, the kind of trust we can allocate to them must also be of an epistemic kind. But what does this mean, exactly? And what are the implications of our discussion so far if this is the case?

According to Wilholt (2013), 'epistemic trust' is "a particular sort of trust." Expressing a sentiment similar to the one above concerning the differentiation between distinct kinds of instruments deployed in similar epistemic contexts, Wilholt adds that "we may trust the scientists who conduct research on venomous snakes to keep the objects of their study locked away safely, but that is not what we mean by epistemic trust." That is, although the scientist's snakes are being used for epistemic purposes (inquiry) and in an epistemic context (a lab experiment), that they are properly stored is not what we are trusting the scientists with when we say we trust their results. Rather, to invest epistemic trust in a person such as a scientist, Wilholt concludes, "is to trust her in her capacity as provider of information." Wilholt is here not only speaking of trust in people, however. Epistemic trust is a concept that can be expanded to a process such as science. When we 'trust science', therefore, what we mean is that we trust that science, expert scientists and specific scientific methods have the capacity to provide information about the natural or social world (Oreskes, 2021, p. 56).

Hence, if we are to trust computer simulations at all, when we trust them, we must trust them as epistemic sources, because their function is epistemic. Hence, we
also trust them epistemically, because this is what our trust is meant to do: warrant our epistemic reliance on them as epistemic enhancers. In short, if and when we are to trust them, we are to trust them as partaking in the processes of knowledge-acquisition, and so the adequate kind of trust to allocate to them is epistemic trust.

However, that we can or should in fact trust computer simulations as reliable providers of information is not something that simply follows from the arguments above. In other words, simply knowing what kind of trust is adequate to allocate in a given circumstance does not imply that such trust can or must be allocated in that particular circumstance. Knowing what kind of trust is adequate is simply a preliminary step that follows from understanding the function of the artifact we are dealing with and what we expect trust to be doing for us in such a circumstance. Rather, as we saw in this chapter, the challenges of knowing how and why to trust artifacts such as computer simulations are non-trivial. To see one more instance of why this is the case, consider the phenomenon of epistemic opacity.

Roughly, the term epistemic opacity refers to the lack of access that an inquiring agent may have to the relevant aspects by which a process fulfills its specific tasks. If we take a refrigerator's task, for example, to be that of cooling its insides in order to keep its contents fresh, then not knowing how the refrigerator does this counts as an instance of epistemic opacity. Paul Humphreys (2004) famously defined epistemic opacity as follows:

"A process is epistemically opaque relative to a cognitive agent X at time t just in case X does not know at t all of the epistemically relevant elements of the process."
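Schematically, and in my notation rather than Humphreys', the definition can be rendered as:

```latex
\[
\mathrm{Opaque}(P, X, t) \;\iff\; \exists\, e \in \mathrm{Rel}(P) : \neg K_{X,t}(e)
\]
```

where Rel(P) is the set of epistemically relevant elements of process P, and K_{X,t}(e) abbreviates "agent X knows element e at time t". The essential variant discussed below strengthens the right-hand side from a mere failure to know to an impossibility of knowing, given the nature of X.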
Defining epistemic opacity this way allows for a very broad interpretation of the processes that can be characterized as epistemically opaque. We can, for example, imagine many processes to be epistemically opaque to many people at many times. In fact, most processes are opaque to most of us most of the time. It suffices to consider how few processes there are for which any given human knows all of the epistemically relevant elements. This is particularly the case when such processes are of a certain, even moderate, complexity. An implication of this definition, as can be gathered, is that a process that is opaque in this way can be opaque to some people but not to others. For example, the functioning of my refrigerator is opaque to me but not to a technician. Furthermore, such processes can stop being opaque to the same person at a different time. I can, for example, take a course on refrigeration technology, and the opacity I face when investigating my refrigerator will vanish. In other words, this is a general and relative account of epistemic opacity (Alvarado, 2021). Hence, it is also not a very informative or philosophically interesting account of epistemic opacity.

However, according to Humphreys, computer simulations and their processes are not just opaque in the manner described above. Rather, they are essentially epistemically opaque. A process, according to him, is opaque in this essential manner if and only if "it is impossible, given the nature of X, for X to know all of the epistemically relevant elements of the process" (2009a, b). This stronger kind of opacity is important because, as Humphreys suggests, essential epistemic opacity is the kind of opacity that does not just obstruct an agent from knowing the relevant elements of a
system or process, but rather makes it impossible for such a kind of agent to know the epistemically relevant elements of that process. This obstruction is different from the one in the previous definition in that the task of investigating the relevant epistemic elements of a process is, well, impossible. This impossibility is, of course, as mentioned above, in virtue of the nature of the agent—its essence, so to speak.18 Hence the name.

Despite recent attempts to explain away the difference in kind between these two accounts of epistemic opacity (see San Pedro, 2020), it is important to note that essential opacity is not just a stronger version of the general opacity described above. Rather, it is a different kind of opacity. This is made clear by two main factors. The first is that it is the kind of opacity that arises in virtue of the agent's nature and not in virtue of something else, like circumstances or a lack of external resources.19 Essential opacity arises in virtue of something other, something more essential, than general opacity does. It is also conceptually and practically distinct: solving or minimizing cases of general opacity does not in any way, shape or form solve or minimize essential epistemic opacity. Secondly, as the definition of essential opacity suggests, given the agent's nature, it is impossible for that agent to access the relevant epistemic elements of that process, system and/or device. This impossibility is a markedly distinct feature of essential epistemic opacity. As I argue elsewhere (Alvarado, 2021), even if one can compare one challenge with another, the fact that one is a challenge that allows for a possible solution while the other is impossible to overcome makes them challenges of different kinds.20

Footnote 18: While it may prove strange to some practitioners and pragmatist philosophers for Humphreys to use the terms and concepts of impossibility and nature, and/or to elicit the ghosts of essentialist metaphysics, that Humphreys does this as a philosopher is not strange at all. This is particularly evident if one considers his body of work concerning non-anthropocentric epistemologies: in his book Extending Ourselves (2004), for example, he speculates about a science neither for us nor by us. Similarly, in his paper "Network Epistemology" (2009b), he posits the possibility of knowledge by complex objects such as networks. This is another instance in which it is worth noting that Humphreys' use of this term is neither trivial nor superfluous but rather deliberate.

Footnote 19: It is the appeal to the nature of an agent that makes this definition 'essential'. In other words, one can interpret the opacity in question as having arisen in virtue of the essence of the agent's epistemic nature.

Footnote 20: Deflationary accounts of essential epistemic opacity fail to capture the essential aspect of essential epistemic opacity. What approaches of this kind may have been able to show is that Humphreys' initial assertion that "perhaps all of the features that are special to simulations are a result of this inability of human cognitive abilities to know and understand the details of the computational process. The computation involved in most simulations are so fast and so complex that no human or group of agents can in practice reproduce or understand the process" (2009a) may have been too fast, too broad, and/or stated too early. It may very well be the case that at least some computer simulations, and/or some instances of opacity, that he thought were essentially opaque were not. But this only shows that some kinds and instances of epistemic opacity in computer simulations may not be of the essential kind. This does not take away, however, from the philosophical enterprise of identifying two distinct kinds of epistemic opacity, and/or of identifying a second, different and insurmountable kind that emerges in the context of computational methods such as software-intensive artifacts—and, by extension, computer simulations, machine learning methods, etc. (Symons & Horner, 2014; Symons & Alvarado, 2016; Alvarado, 2021).
It is important to note that while many natural processes may also happen to be essentially epistemically opaque, Humphreys’ main focus in the elaboration of this formulation is still computational processes, and more particularly those associated with computer simulations. Furthermore, as I have also argued elsewhere (2021), some instances of essential epistemic opacity are agent-neutral. Borrowing from Nagel’s (1989) understanding that there are some reasons for action that apply to any agent in the same circumstances, I argue that if it is impossible, in virtue of a feature of the nature of an agent which is also shared by all other epistemic agents, to know the epistemically relevant elements of a process, then that process is essentially epistemically opaque in an agent-neutral manner. Nagel argued that there are certain circumstances which give any agent a reason for a specific action. For example, having excruciating pain due to third-degree burns gives any agent in those circumstances reason to take a pain-reducing drug if available. Similarly, there are certain instances of essential epistemic opacity that would arise for (almost) any epistemic agent.21 It is in this sense that they are agent-neutral, or simply not agent-relative: epistemic agents of different kinds would change little to nothing in the outcome. Because of Humphreys’ views, and because of limiting proofs provided by Symons and Horner (2014) concerning error and reliability assessments in software systems,22 many of these severe instances of opacity arise in the context of computational methods such as those involved in computer simulations. Notice, however, that the three accounts of epistemic opacity considered so far are still agent-based. That is, they refer to the nature of the agent when accounting for the emergence of the opacity at play. When it comes to computational processes like those involved in computer simulations, however, there is something else that needs to be considered. What agent-based accounts of epistemic opacity, including agent-neutral accounts such as the one defined above, fail to capture is the fact that some instances of opacity, strictly speaking, do not have much to do with an agent’s epistemic limitations. Of course, these instances limit an agent epistemically, but they do not arise in virtue of the agent’s essential limitations. Rather, as Nicole Saam (2017a, b) points out, in the context of computational methods such as computer simulations, sometimes these processes “are not opaque because of sloppy modeling [practices]” or because these processes are “poorly understood.” Rather, they are so simply “because they [the computational processes themselves] are complex”23 (Saam, 2017a, b, p. 80).
21 Obviously, this would not apply to an omniscient God.
22 When it comes to software-intensive artifacts, here is an example of an instance of opacity from Symons and Horner (2014) that applies in the way defined above. We can begin by conceiving of a small software program consisting of only 1000 lines of code. In conventional software practices, on average 1 out of 10 of those 1000 lines of code will be a command of the form ‘if/then/else’. This is in contrast to commands that follow the form ‘if/then’. That is, the former, but not the latter, includes an extra bifurcation of the possible paths the code can follow. Checking a similar-sized program that only had if/then commands can be done by examining each one of the 1000 lines of code. Including an if/then/else command, however, bifurcates the possible paths that the code can take. This increases the lines of code that have to be checked. Consider that a program with 10 if/then command lines has 10 lines to check. The same program with one if/then/else command will have 20 lines. A program with 1000 lines of code that includes an average of 1/10 if/then/else commands will have so many possible paths to check that it would take several times the age of the universe to do so (Symons & Horner, 2014; Horner & Symons, 2014, 2019; Symons & Alvarado, 2016). Software systems involved in computational methods such as machine learning and computer simulations of scientific phenomena by far exceed the number of lines of code used in this example.
23 Emphasis and italics are mine.
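To get a feel for the scale at issue in footnote 22, here is a minimal back-of-the-envelope sketch. It is an illustration, not a reproduction of Symons and Horner’s own calculation: the branching rate, the checking speed, and the simplifying assumption that each if/then/else doubles the number of possible execution paths are all stipulated for the example.

```python
# Hypothetical illustration of the path explosion behind Symons and Horner's
# (2014) argument. All figures below are assumptions made for this sketch.

LINES_OF_CODE = 1_000
branches = LINES_OF_CODE // 10     # assumed: 1 in 10 lines is an if/then/else

# Simplifying model: each if/then/else doubles the number of execution paths.
paths = 2 ** branches              # 2**100, roughly 1.3e30 possible paths

PATHS_CHECKED_PER_SECOND = 1e9     # a generous, assumed checking speed
SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_YEARS = 1.38e10

years = paths / PATHS_CHECKED_PER_SECOND / SECONDS_PER_YEAR
print(f"paths to check: {paths:.2e}")
print(f"years required: {years:.2e}")
print(f"multiples of the age of the universe: {years / AGE_OF_UNIVERSE_YEARS:.2e}")
```

Even on these charitable assumptions, the checking time exceeds the age of the universe by roughly three orders of magnitude, which is the sense in which such opacity is not a matter of effort or tooling.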
What Saam is pointing out in the quote above is a cause and source of opacity that is not derived from an agential feature; it captures the fact that some instances of opacity arise in virtue of properties of the processes per se—things outside the agent’s epistemic properties. Consequently, we can see now that while the distinction between general opacity and essential opacity will capture many instances of opacity, it will fail to capture instances of opacity that (a) are not enhanced or reduced by changes in the limitations of an agent or (b) do not emerge from the limitations or abilities of agents. Interestingly, as we saw above, many of these instances are of relevance to us because they concern our ability, or lack thereof, to examine the inner workings of computational methods such as computer simulations. These kinds of opacity are, in this sense, immune to our consultations. It is these instances that I call agent-neutral and agent-independent, respectively. These issues become even more complex and more challenging when one considers, as we did in Sect. 2.4, the introduction of machine learning, deep neural networks and other more intrinsically opaque AI methodologies into computer simulation practices. This is because such technologies are also representationally opaque (Alvarado & Humphreys, 2017; Alvarado, 2021, 2022a, b). That is, they function through representational means that are both unrecognizable and inaccessible to us (Burrell, 2016; Duede, 2022). For example, the kind of pattern-recognition methodology used by machine learning methods is non-trivially dissimilar from the way in which we humans operate in at least two ways: deep neural networks and other similar processes require millions of samples to capture the relevant features of a pattern; at the same time, in order to capture such patterns, the many parameters and weights of the models at play are revised multiple times at speeds and scales that impede their surveillability. Furthermore, the specific processes and inference paths by which the kinds of models utilized in machine learning and other AI technologies arrive at their results cannot simply be reverse-engineered in order to understand them (Symons & Boschetti, 2013). Humans and human epistemic endeavors simply do not function like this, nor can we have full epistemic access to processes of this kind.
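To see why the scale of these revisions impedes surveillability, consider a minimal sketch. The figures are hypothetical (a modestly sized network and an ordinary training run, both assumed for the example), but the arithmetic makes the point: the number of individual weight revisions is far beyond what any human could inspect.

```python
# Hypothetical scale estimate: why the training of a deep neural network
# resists step-by-step human inspection. All figures are assumptions.

PARAMETERS = 10_000_000      # assumed: a modestly sized modern network
TRAINING_STEPS = 100_000     # assumed: optimizer updates over one training run

revisions = PARAMETERS * TRAINING_STEPS   # individual weight revisions: 1e12

SECONDS_PER_YEAR = 3.15e7
years_at_one_per_second = revisions / SECONDS_PER_YEAR
print(f"{revisions:.1e} weight revisions")
print(f"~{years_at_one_per_second:,.0f} years to inspect at one per second")
```

Nothing in the argument hangs on the exact figures; the model could be smaller or the inspector faster, and the inspection time would still dwarf any realistic epistemic practice.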
If, as Humphreys points out, computer simulation processes are essentially opaque; if they are opaque in an agent-neutral manner, given Symons and Horner’s proofs; and if some of these instances prove to be opaque in an agent-independent manner, as I suggest, then we have a non-trivial challenge concerning our ability to trust computer simulations as providers of information. This is particularly the case if one of the main elements of assessing their reliability as information providers includes the capacity to examine all or some of the epistemically relevant elements of their operations. An important thing to consider, however, is that this is a particularly serious challenge if it is not possible to assess the source and nature of error in these systems. While we can acknowledge that full transparency in computational methods, say of the source code of a system, would seldom provide meaningful information to an inquiring agent (Durán & Formanek, 2018; Durán & Jongsma, 2021), knowing the rate, but also the source and nature, of error is nevertheless a crucial aspect of trusting an artifact (Symons & Alvarado, 2019; Alvarado & Morar, 2021). As we will see in the concluding section of this chapter, this is particularly the case when the artifact in question is a novel technology (Alvarado, 2021) which we are looking to sanction for the most rigorous epistemic endeavor that we humans embark on, i.e., scientific inquiry. This leads me to the final point of this chapter. While living with epistemic opacity of any of the kinds mentioned above may be an inevitable aspect of our everyday life and everyday lay epistemic practices, introducing novel technologies that elicit such instances of opacity into epistemic contexts such as scientific inquiry may prove to be a serious issue.
7.7 Scientific Inquiry Versus Everyday Epistemic Practices
I have argued that Burge-style approaches to epistemic entitlement are inappropriate in the context of computer simulations when it comes to scientific or other high-stakes applications. However, I have also contended, albeit briefly, that science should be, and often also is, organized around more stringent epistemic norms for the evaluation of computer simulations. In fact, it might even be the case that since time constraints are less pressing in much of science than they would be in, for example, military or policy contexts, scientists have the luxury of being even more rigorous and demanding with respect to the epistemic standards governing their devices. Of course, in saying this I am not discounting the serious time constraints faced by individual scientists in their careers, such as the need to publish novel findings, the pressure of funding agencies, or the pursuit of tenure requirements. Rather, I am referring to scientific inquiry as a series of methods aimed at furnishing the best available understanding of our world. As such, scientists (from astrophysics to geology to biology, and some social sciences) can and ought to ensure a rigor in their methods that the nature of industry and war seldom affords. In this section I want to briefly expand on some fundamental differences between basic epistemic practices—such as the ones described by Burge and others appealing to perceptual and testimonial capacities as model epistemic practices in scientific inquiry—and epistemic practices that have emerged from more sophisticated normative standpoints, which in turn require them to transcend the common limitations of basic practices. In particular I want to point to the fact that scientific inquiry is not at all like perception or other everyday epistemic practices. There are two main points that make this case. The first one is that while there may be reasons for someone to trust either perceptual and/or testimonial sources in the way that entitlements suggest, these entitlements may not be of an epistemic kind. Clearly, there are instances in which time does not allow for someone
to fully survey an expert’s propositions and they must trust the contents being conveyed; other times one may not have full access to the instruments or processes by which a team of researchers arrives at a conclusion. In these cases, we may have a reason to default towards accepting the findings that come from others, particularly experts. But note that these reasons, insofar as they can be deemed reasons to trust, are practical reasons and not epistemic reasons. That is, they do not provide further epistemic justificatory force, just practical, or pragmatic, justification for me to fall back into a position in which I have no other option but to trust, say for the sake of moving on with a conversation or a project (Symons & Alvarado, 2019). This may be what is happening when philosophers appeal to entitlements in the context of scientific inquiry. If this is so, then maybe we can agree that sometimes there are reasons, other than epistemic reasons, that play a role in the development of scientific results and that this is what happens in the context of computer simulations. In my view, whether this can constitute the proper basis for a normative argument in the philosophy of computer simulations is highly dubious. That is, just because it happens to happen, it does not mean that it should happen. The second argument is rather simple and, as we saw in the section related to scientific instruments, it consists of replacing one set of norms with the other. When we do this, we can see that the epistemic norms at play in everyday epistemic practices are simply inadequate in scientific contexts, and vice versa. The conclusion is that there is a stark distinction between the two. A very plausible explanation of this distinction is that it has something to do with a difference in rigor, an aspirational aim that calls for adherence to theoretical principles in our explanations, and a normative call for higher epistemic hygiene in one context over the other. As we had previously seen, scientific standards would be regarded as unreasonably strict if we were to adopt them for everyday epistemic practices such as normal decision-making, trust in perception, and credence with respect to the testimony of others. Not much would get done if we were to deploy epistemic norms taken from scientific inquiry in everyday epistemic practices. This is because the requirements to accept something as part of scientific inquiry are substantially more demanding than those of our everyday practices. Of course, there are indeed aspects of scientific practice that are just like the epistemic practices of ordinary life. Scientists are people too, and in this sense, Burge’s Acceptance Principle will have an important role in the way scientists speak to each other, convey casual information about their research to one another and ultimately rely on one another on a regular basis. In order for scientists to engage in research, they need to trust one another to some extent. They cannot maintain an attitude of radical skepticism and they need to make reasonable trade-offs between time spent on various epistemically relevant tasks. This much is obvious and cannot be denied without a deeply undermining and undesirable epistemic cost. However, there are a few distinctions that ought to be drawn in this discussion. For example, there are epistemic and non-epistemic entitlements that can be invoked as justifications to trust something.
For instance, an epistemic entitlement might be playing a role in our decision to trust a medical device given our belief that it may
be beneficial to a patient. Yet, a non-epistemic reason to believe the same thing might involve other factors, such as social considerations or resource constraints. A person may accept or reject some scientific hypothesis for social reasons. Perhaps this can be regarded as reasonable insofar as that person’s individual commitments have little practical import. Assenting to the consensus position of their group on climate change, evolution, the biology of sex and gender, or other socially controversial matters, for example, may strengthen their place within their social network, often even when they assent to an exaggerated version of that consensus position. In those cases, as stated above, one may indeed have reasons to believe/trust, but they are not strictly speaking epistemic reasons. Dretske, for example, suggests that if what we are appealing to in accepting some proposition as true is to be regarded as a genuinely epistemic reason, then the only grounds are its truth or probable truth (2000, pp. 593–594). Epistemic reasons provide justification for the belief that a proposition is true. Having to accept somebody’s proposition because of a lack of resources to access evidential sources, or because they have a gun to our head, as Symons and Alvarado (2019) argue, does not meet this condition. Furthermore, assessing the truth or probable truth of the results of an instrument used in scientific inquiry must come from a reliability assessment. As we have seen, assessing the reliability of an instrument or method is dramatically distinct from the process of personal justification to rely on such an instrument (Williams, 2000). Again, because we may have non-epistemic reasons to rely on something (like the lack of a viable alternative), and because these reasons have little to no bearing on whether or not that something is in fact reliable, these two things ought to be considered separately. That is, whether or not an instrument is reliable is not the same as whether or not we believe it to be so. Appealing to an epistemic entitlement to trust the results of a computer simulation should not be confused with the effort to provide what Dretske calls “a pragmatic vindication of the practices we use to fix beliefs” in place of efforts to “validat[e] the beliefs themselves” (2000, p. 598). That is, merely showing that a “practical purpose or need is served by accepting certain practice does not address the problem of our epistemic right to accept the individual beliefs that occur in this practice” (ibid). As we can see, even if we must appeal to an entitlement to trust computer simulations, it does not necessarily mean that such an entitlement is in fact a warrant that provides justificatory force to believe that their results are true, or more likely to be so. Scientists are held to a higher epistemic standard than ordinary epistemic agents for a reason (Nola & Sankey, 2014). Consider Burge’s use of the example of Newtonian calculus (1998, p. 8). Burge’s claim is that non-demonstrative/non-evidentiary reasoning can underwrite sound mathematical beliefs. An example of this, he argues, is that Newton’s “knowledge” of elementary truths of calculus was available to him before formal explanations were available. However, consider too that without such formal explanations few would have considered Newton’s knowledge to have any epistemic value.
Similarly, Burge provides another example in which people accept the Pythagorean theorem solely on the basis of a diagram or the word of another without being able themselves to produce its proof. This
acceptance only takes place in circumstances where no consequential inquiry is being conducted on the theorem itself, or in which the theorem does not play a central role. In fact, to accept the Pythagorean theorem in this way is to treat it on a par with, for example, one’s unreliable neighbor explaining that recycling is collected with the trash every second week. The significance of knowing when to put out the recycling with the trash is (under normal circumstances) low enough that believing one’s unreliable neighbor is a reliable strategy. If it were very important, for some reason, to know the recycling and trash collection schedule with greater certainty—imagine, for example, that we were the person in charge of the agency collecting the trash—then one should make additional efforts to determine the truth. Burge’s example of how one might come to accept the Pythagorean theorem without really understanding the proof is convincing only insofar as we have relatively low standards for what we take to be true in mathematics. In ordinary life, it is true that our standards will often be low. There are cases where knowledge of the Pythagorean theorem will have the same priority and role in our everyday lives as knowledge of the trash/recycling schedule. However, in the practice of science, our standards are higher, and in the judgment of scientific evidence a critical, educated public should also hold important matters to a higher epistemic standard. Consequently, expert scientific testimony is not on a par with ordinary testimony, and ordinary epistemic norms are not adequate to formulate an epistemology concerning scientific practice or the public understanding of science. While, of course, no individual scientist is an epistemic saint, the scientific enterprise collectively seeks to ensure truth-aptness and reliability independently of the shortcomings of each individual practitioner. If there is a trait of science shared across scientific disciplines, this sounds like a very good candidate to begin with. In relation to modeling and simulation, this has practical import. Humphreys, for example, thinks that practitioners should always possess working knowledge of a given instrument, and he argues that the background theory of its principles should always be at hand to the practitioner. This is not only because some instruments, for example an MRI scanner, require this knowledge in order to be used effectively (Humphreys, 2004, p. 38), but also because the theory of how the instrument operates is “routinely needed to protect the user against skeptical objections resulting from ways in which the instrument can produce misleading outputs” (p. 38). Computer simulations are often used in contexts where highly demanding scientific standards are in place and where even ordinary well-grounded scientific practice falls below the threshold for acceptability (Ruphy, 2011; Morrison, 2015). These are not distinctions made by those deploying arguments of entitlement towards the acceptance of computer simulations. If they are referring to entitlement as providing a reason to trust, and they accept that this reason is not an epistemic one, then we can concede that there are indeed, often, reasons why scientists and non-scientists have no option but to trust one another’s propositions. This may apply to cases that involve the use of computer simulations.
However, just because I may have reasons to trust the results of computer simulations does not mean that I have reason to believe that those results are true. While this is far from a devastating blow to the use of computer simulations in science, what it does show is that there is a significant distinction in the kinds of epistemic entitlements and that
this distinction has to be taken into consideration. This is particularly the case if we believe science to be fundamentally an epistemic endeavor and not merely a practical one. Epistemic endeavors require epistemic justifications. In the case of science, these epistemic justifications are simply more rigorous and hence superior in the context of epistemic defenses against reasonable scrutiny. Like the telescopes we learned about in the previous chapters, computer simulations should be subjected to these superior epistemic requirements and not merely accepted as trustworthy devices on the basis of pragmatically driven adaptations of our criteria, as suggested by philosophers such as Hubig and Kaminski (2017), or on the basis of non-evidential warrants such as epistemic entitlements (Beisbart, 2017; Barberousse & Vorms, 2014). Accordance with, and normative calls for meeting, more rigorous criteria for the adoption of an instrument was the driving force behind the birth of the instruction-manual printing convention that emerged alongside the precision instrument industry (Heilbron, 1993, pp. 10–13). This has also been the case for newer laboratory equipment (Baird, 2004, p. 126). As we move to understand computer simulations as instruments, we must take this fact into account: whether or not something makes it as a scientific instrument has to do with whether or not it does what it is supposed to do in accordance with these rigorous norms, and not just with whether or not it does what it is supposed to do. In other words, merely doing something, and/or doing it well, is not enough for science. The same applies to instruments: a history of factual success, by itself, is not enough to deem an instrument a scientific instrument. More rigorous criteria are required. Efforts to understand the nature and sources of both function and error in an instrument must also be in place.
References
Adler, J. (2015). Epistemological problems of testimony. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2015 edition). https://plato.stanford.edu/archives/sum2015/entries/testimony-episprob/. Accessed 20 Dec 2018.
Alvarado, R. (2021). Computer simulations as scientific instruments. Foundations of Science, 27, 1–23.
Alvarado, R. (2022a). Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Bioethics, 36(2), 121–133.
Alvarado, R. (2022b). What kind of trust does AI deserve, if any? AI and Ethics, 1–15. https://doi.org/10.1007/s43681-022-00224-x
Alvarado, R., & Humphreys, P. (2017). Big data, thick mediation, and representational opacity. New Literary History, 48(4), 729–749.
Alvarado, R., & Morar, N. (2021). Error, reliability and health-related digital autonomy in AI diagnoses of social media analysis. The American Journal of Bioethics, 21(7), 26–28.
Arkoudas, K., & Bringsjord, S. (2007). Computers, justification, and mathematical knowledge. Minds and Machines, 17(2), 185–202.
Audi, R. (1997). The place of testimony in the fabric of knowledge and justification. American Philosophical Quarterly, 34(4), 405–422.
Baird, D. (2004). Thing knowledge: A philosophy of scientific instruments. University of California Press.
Barberousse, A., & Vorms, M. (2014). About the warrants of computer-based empirical knowledge. Synthese, 191(15), 3595–3620.
Beebee, H. (2001). Transfer of warrant, begging the question and semantic externalism. The Philosophical Quarterly, 51(204), 356–374.
Beisbart, C. (2017). Advancing knowledge through computer simulations? A socratic exercise. In M. Resch, A. Kaminski, & P. Gehring (Eds.), The science and art of simulation I: Exploring-understanding-knowing (pp. 153–174). Springer.
Borge, S. (2003). The word of others. Journal of Applied Logic, 1(1–2), 107–118.
Burge, T. (1993). Content preservation. The Philosophical Review, 102(4), 457–488.
Burge, T. (1998). Computer proof, apriori knowledge, and other minds: The sixth philosophical perspectives lecture. Noûs, 32(S12), 1–37.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.
Danks, D. (2019, January). The value of trustworthy AI. In Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society (pp. 521–522).
Davidson, D. (1973). Radical interpretation. Dialectica, 27(3–4), 313–328.
Davies, M. (2004). II – Martin Davies: Epistemic entitlement, warrant transmission and easy knowledge. In Aristotelian society supplementary (Vol. 78, No. 1). The Oxford University Press.
Dretske, F. (2000). Entitlement: Epistemic rights without epistemic duties? Philosophy and Phenomenological Research, 60(3), 591–606.
Duede, E. (2022). Deep learning opacity in scientific discovery. arXiv preprint arXiv:2206.00520.
Durán, J. M., & Formanek, N. (2018). Grounds for trust: Essential epistemic opacity and computational reliabilism. Minds and Machines, 28(4), 645–666.
Durán, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 47(5), 329–335.
Frans, J., & Kosolosky, L. (2014). Mathematical proofs in practice: Revisiting the reliability of published mathematical proofs. THEORIA. Revista de Teoría, Historia y Fundamentos de la Ciencia, 29(3), 345–360.
Fresco, N., & Primiero, G. (2013). Miscomputation. Philosophy & Technology, 26(3), 253–272.
Gramelsberger, G. (2011). Generation of evidence in simulation runs: Interlinking with models for predicting weather and climate change. Simulation & Gaming, 42(2), 212–224.
Heilbron, J. L. (1993). Some uses for catalogues of old scientific instruments. In Essays on historical scientific instruments..., Aldershot, Variorum (pp. 1–16).
Holzmann, G. J. (2015). Code inflation. IEEE Software, 2, 10–13.
Horner, J., & Symons, J. (2014). Reply to Angius and Primiero on software intensive science. Philosophy & Technology, 27(3), 491–494.
Horner, J. K., & Symons, J. (2019). Understanding error rates in software engineering: Conceptual, empirical, and experimental approaches. Philosophy & Technology, 32, 1–16.
Hubig, C., & Kaminski, A. (2017). Outlines of a pragmatic theory of truth and error in computer simulation. In M. Resch, A. Kaminski, & P. Gehring (Eds.), The science and art of simulation I (pp. 121–136). Springer.
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.
Humphreys, P. (2009a). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
Humphreys, P. (2009b). Network epistemology. Episteme, 6(2), 221–229.
Jenkins, C. S. (2007). Entitlement and rationality. Synthese, 157(1), 25–45.
Kaminski, A. (2017). Der Erfolg der Modellierung und das Ende der Modelle. Epistemische Opazität in der Computersimulation. Technik–Macht–Raum: Das Topologische Manifest im Kontext interdisziplinärer Studien, 317–333.
Kaufmann, W., & Smarr, L. L. (1993). Supercomputing and the transformation of science. Scientific American Library.
Keller, E. F. (2003). Models, simulation, and “computer experiments.” In H. Radder (Ed.), The philosophy of scientific experimentation (pp. 198–215). University of Pittsburgh Press.
Lackey, J. (1999). Testimonial knowledge and transmission. The Philosophical Quarterly, 49(197), 471–490.
Lankton, N. K., McKnight, D. H., & Tripp, J. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information Systems, 16(10), 1–918.
McEvoy, M. (2008). The epistemological status of computer-assisted proofs. Philosophia Mathematica, 16(3), 374–387.
McEvoy, M. (2013). Experimental mathematics, computers and the a priori. Synthese, 190(3), 397–412.
McGlynn, A. (2014). On Epistemic Alchemy. In D. Dodd & E. Zardini (Eds.), Scepticism and perceptual justification (pp. 173–189). Oxford University Press.
McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2), 1–25.
Moretti, L., & Piazza, T. (2013). When warrant transmits and when it doesn’t: Towards a general framework. Synthese, 190(13), 2481–2503.
Morrison, M. (2015). Reconstructing reality. Oxford University Press.
Nagel, T. (1989). The view from nowhere. Oxford University Press.
Nieuwpoort, W. C. (1985). Science, simulation and supercomputers. In Supercomputers in theoretical and experimental science (pp. 3–9). Springer.
Nola, R., & Sankey, H. (2014). Theories of scientific method: An introduction. Routledge.
Oreskes, N. (2021). Why trust science? Princeton University Press.
Primiero, G. (2019). A minimalist epistemology for agent-based simulations in the artificial sciences. Minds and Machines, 29(1), 127–148.
Pryor, J. (2012). When warrant transmits. In A. Coliva (Ed.), Mind, meaning, and knowledge: Themes from the philosophy of Crispin Wright (pp. 269–303). Oxford University Press.
Quine, W. (1960). Word and object. MIT Press.
Resch, M. M. (2017). On the missing coherent theory of simulation. In The science and art of simulation I: Exploring-understanding-knowing (pp. 23–32). Springer International Publishing.
Resnik, M. (1997). Mathematics as a science of patterns. Oxford University Press.
Rohrlich, F. (1990, January). Computer simulation in the physical sciences. In PSA: Proceedings of the biennial meeting of the philosophy of science association (Vol. 1990, No. 2, pp. 507–518). Philosophy of Science Association.
Ruphy, S. (2011). Limits to modeling: Balancing ambition and outcome in astrophysics and cosmology. Simulation & Gaming, 42(2), 177–194.
Ruphy, S. (2015). Computer simulations: A new mode of scientific inquiry? In S. O. Hansen (Ed.), The role of technology in science: Philosophical perspectives (pp. 131–148). Springer.
Saam, N. J. (2017a). Understanding social science simulations: Distinguishing two categories of simulations. In M. Resch, A. Kaminski, & P. Gehring (Eds.), The science and art of simulation I (pp. 67–84). Springer.
Saam, N. J. (2017b). What is a computer simulation? A review of a passionate debate. Journal for General Philosophy of Science, 48(2), 293–309.
San Pedro, I. (2020). Degrees of epistemic opacity. [Preprint]. http://philsci-archive.pitt.edu/18525/
Symons, J. (2008). Computational models of emergent properties. Minds and Machines, 18(4), 475–491.
Symons, J. (2010). The individuality of artifacts and organisms. History and Philosophy of the Life Sciences, 32, 233–246.
Symons, J., & Alvarado, R. (2016). Can we trust big data? Applying philosophy of science to software. Big Data & Society, 3(2), 2053951716664747.
Symons, J., & Alvarado, R. (2019). Epistemic entitlements and the practice of computer simulation. Minds and Machines, 29(1), 37–60.
Symons, J., & Boschetti, F. (2013). How computational models predict the behavior of complex systems. Foundations of Science, 18(4), 809–821.
Symons, J., & Horner, J. (2014). Software intensive science. Philosophy & Technology, 27(3), 461–477.
Symons, J., & Horner, J. (2017). Software error as a limit to inquiry for finite agents: Challenges for the post-human scientist. In T. Powers (Ed.), Philosophy and computing: Essays in epistemology, philosophy of mind, logic, and ethics. Philosophical studies series (Vol. 128, pp. 85–97). Springer.
Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C. G., & Van Moorsel, A. (2020, January). The relationship between trust in AI and trustworthy machine learning technologies. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 272–283).
Tymoczko, T. (1979). The four-color problem and its philosophical significance. Journal of Philosophy, 76, 57–82.
Weisberg, M. (2012). Simulation and similarity: Using models to understand the world. Oxford University Press.
Wilholt, T. (2013). Epistemic trust in science. The British Journal for the Philosophy of Science, 64(2), 233–253.
Williams, M. (2000). Dretske on epistemic entitlement. Philosophy and Phenomenological Research, 60(3), 607–612.
Winsberg, E. (2010). Science in the age of computer simulation. University of Chicago Press.
Winsberg, E. (2019). Computer simulations in science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2015 edition). http://plato.stanford.edu/archives/sum2015/entries/simulations-science/. Accessed 20 Dec 2018.
Wright, C. (2004, July). Warrant for nothing (and foundations for free)? In Aristotelian society supplementary (Vol. 78, No. 1, pp. 167–212). The Aristotelian Society.
Chapter 8
Conclusion
At the core of the arguments in this book, and interwoven throughout its chapters, there are two distinct methodologies that set it apart from more conventional—and even from some more recent—works in the philosophy of science on the subject of computer simulations. The first method involves the argumentative deployment of details from the history of science concerning the sanctioning of scientific instruments in order to elucidate the nature of, and the reasoning behind, the extraordinary epistemic norms and requirements surrounding the adoption of technical artifacts as legitimate sources of knowledge in formal inquiry. These historical details elucidated the fact that such norms are not, as conventionally suggested, merely the contingent product of social interests, dynamics or whim. They also showed that they are not idealistic assumptions of the modern era retroactively imposed on the practices of those at the center of the emergence of science. Nor were they merely the result of arrogant assessments of the status of human capacities over nature. The deployment of these details also showed that factors conventionally associated with the sanctioning of instruments, such as authoritative figures, expert status or community consensus, were not considered sufficient, even at the time, to accept novel technologies as reliable and trustworthy. Rather, this unconventional methodology showed that such norms and requirements emerged from a set of judicious epistemic aspirations that also centrally included a humble understanding of human limitations. The second distinctive method used throughout this book involves the deployment of concepts and distinctions from the philosophy of technology to better understand the peculiar dual nature of technical artifacts, their properties and their epistemic roles in scientific inquiry. Insights from this approach showed that instruments have always been at the center of scientific practice. Moreover, these insights further proved that rather than being subservient to the conventionally recognized elements of scientific inquiry, namely theory and experimentation, instruments have always been epistemologically indispensable to science. Both of these methods have a unificatory reason for having been chosen to guide this project since they
both naturally emerged as obvious avenues to highlight the ontological commitment at the center of this book regarding the materiality surrounding scientific practice and in particular that of computer simulations. This materiality was there when early telescopes were designed and continues now as computational methods take center stage in scientific inquiry. As you already know by now, this book stated its main premise from the start: that computer simulations are instruments. A series of arguments were then deployed to show why this is the best explanation of the nature of computer simulations. It also shows that understanding computer simulations as instruments is the best explanation of some of the features observed by other frameworks that have sought to situate them within other more well-known and understood categories of scientific inquiry: formal or experimental methods. However, the main claim is not just that we can or ought to understand computer simulations as instruments, but rather that we can and ought to understand them as such because they are instruments. Although many epistemic implications arise from this understanding, the main claim is rather an ontological claim, an ontological commitment to what computer simulations are. This is in part what genuinely distinguishes the contents and discussion of this book from conventional and more recent approaches to the understanding of computer simulations. Importantly, this one ontological commitment offers a somewhat unifying approach to computer simulations in that it is able to not just accommodate but also best explain central features of computer simulations identified by other conceptual frameworks. It can accommodate the observation that computer simulations are somewhat in between theory and experiment. It can also better explain why this is the case: they are in between theory and experiment because instruments are a third element of inquiry that sits between theory and experiment. This view of computer simulations as instruments can also better accommodate the fact that computer simulations represent a somewhat special and unprecedented methodology in inquiry, as Humphreys (2004, 2009) and others tried to articulate. They do this because they are hybrid, special kinds of instruments capable of enhancing our epistemic capacities in more than one way at the same time. This view can also best accommodate the fact that computer simulations involve extra-mathematical elements in their construction—as Winsberg (2010, 2019) suggested—and require either electronic or mechanical implementation, incorporating Parker’s (2009) intuitions about their materiality. They have to be run. Importantly, this view can also accommodate the fact that our reliance on computer simulations must be assessed independently from our reliance on the content they manipulate or the experts that construct them: they are distinct from them, and so they must be sanctioned through independent warrants. All of this, much of what had been said by other attempts at understanding computer simulations, can be captured by a single ontological commitment: that computer simulations are instruments. An important element of this framework is that because computer simulations are instruments, and ought to be understood as such, they also must be treated as instruments. As we saw above, not every instrument is a scientific instrument. So, what does it mean to treat computer simulations as scientific instruments? In the
opening chapter of this book we saw a tale of two approaches: one fictitious, one historical. It was meant to show a difference in treatment. The telescope was treated rigorously and skeptically from its inception, even by those who would most benefit from its use; the electronic oracle was not. We now know that computer simulations are instruments. But are they scientific instruments? They are used in science—this book began by pointing to that fact—but should they be? Part of the answer to this question can be found in the comparison and contrast of the two stories that began this book. Are computer simulations more like the telescope or more like the oracle? Importantly, have we treated computer simulations more like the telescope or more like the oracle? The characterization of computer simulations as hybrid instruments should serve as a starting point for a more accurate understanding of their nature, significance and role within scientific inquiry. What this understanding does for us is elucidate their true nature, and hence we can begin to better discern both their true promise and their limitations. As it is, this view already shows important ramifications for the way we should and should not accept their ubiquitous presence in scientific laboratories. In particular, this view offers a framework that allows us to question whether and when we are using them as an oracle believer would use their oracle, or when we are using them as we use and sanction other scientific instruments; whether we trust them because they have withstood a healthy and rigorous dose of empirical skepticism or simply in virtue of some pragmatic version of technological determinism. As I argued in the final chapter of this book, whenever the latter is the case, we can easily identify it as an instance that fails to meet important norms in scientific inquiry, particularly those that ensure it remains a slightly better way of overcoming our very fallible epistemic limitations. Warrants such as epistemic entitlements, which require no evidential effort on the part of the epistemic agent, are not adequate to sanction recently introduced instruments in scientific inquiry. To suggest so is to miss the point of science, which is, at the very least, to overcome some of our deeply rooted and ordinary epistemic limitations and vices. Computer simulations have the capacity to revolutionize the way we do science, but revolutions can have catastrophic results. By treating them as instruments we can more carefully discern the warrants that justify their use, and we can more carefully ensure that the changes they bring about are consistent with the superior epistemic requirements at the center of scientific inquiry. As we saw, even though we can now more easily identify which kind of trust can be legitimately allocated to computer simulations, this does not mean that such trust is de facto apportionable. Of course, we have no other option but to trust computer simulations in certain contexts, particularly when the science in question involves phenomena that are not otherwise empirically accessible. But when considering their limitations, and our limitations with respect to them, the use and epistemic import of the instrument must include an extreme sense of epistemic humility, just like the one surrounding the sanctioning of the telescope. These novel instruments are not simply like mathematics with centuries of rigorous application.
They are not like experts whose values and reasoning we can empathetically recognize or reasonably scrutinize. They are new and they have unprecedented epistemic implications. We must
approach them with reasonable optimism and caution. We must approach them as we have approached other instruments before we accepted them as part and parcel of scientific inquiry. If not, Humphreys’ speculative fantasy of opaque technologies taking over our epistemic duties will become a reality, and science—simulated or not—as he posited in the introduction to his book, will really end up being henceforth “neither by us nor for us.”
References
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.
Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
Parker, W. S. (2009). Does matter really matter? Computer simulations, experiments, and materiality. Synthese, 169(3), 483–496.
Winsberg, E. (2010). Science in the age of computer simulation. University of Chicago Press.
Winsberg, E. (2019). Computer simulations in science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2015 edition). http://plato.stanford.edu/archives/sum2015/entries/simulations-science/. Accessed 20 Dec 2018.
References
Adler, J. (2015). Epistemological problems of testimony. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2015 edition). https://plato.stanford.edu/archives/sum2015/entries/testimony-episprob/. Accessed 20 Dec 2018.
Alvarado, R. (2020). Opacity, big data, artificial intelligence and machine learning in democratic processes. In Big data and democracy (p. 167). Edinburgh University Press.
Alvarado, R. (2021). Computer simulations as scientific instruments. Foundations of Science, 27, 1–23.
Alvarado, R. (2022a). Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Bioethics, 36(2), 121–133.
Alvarado, R. (2022b). What kind of trust does AI deserve, if any? AI and Ethics, 1–15. https://doi.org/10.1007/s43681-022-00224-x
Alvarado, R., & Humphreys, P. (2017). Big data, thick mediation, and representational opacity. New Literary History, 48(4), 729–749.
Alvarado, R., & Morar, N. (2021). Error, reliability and health-related digital autonomy in AI diagnoses of social media analysis. The American Journal of Bioethics, 21(7), 26–28.
Anderson, K. (2013). Beyond the glass cabinet: The history of scientific instruments. Revista Electrónica de Fuentes y Archivos, 4(4), 34–46.
Arkoudas, K., & Bringsjord, S. (2007). Computers, justification, and mathematical knowledge. Minds and Machines, 17(2), 185–202.
Audi, R. (1997). The place of testimony in the fabric of knowledge and justification. American Philosophical Quarterly, 34(4), 405–422.
Baird, D. (2004). Thing knowledge: A philosophy of scientific instruments. University of California Press.
Barberousse, A., & Jebeile, J. (2019). How do the validations of simulations and experiments compare? In Computer simulation validation: Fundamental concepts, methodological frameworks, and philosophical perspectives (pp. 925–942). Springer.
Barberousse, A., & Vorms, M. (2014). About the warrants of computer-based empirical knowledge. Synthese, 191(15), 3595–3620.
Barberousse, A., Franceschelli, S., & Imbert, C. (2009). Computer simulations as experiments. Synthese, 169(3), 557–574.
Beebee, H. (2001). Transfer of warrant, begging the question and semantic externalism. The Philosophical Quarterly, 51(204), 356–374.
Beisbart, C. (2012). How can computer simulations produce new knowledge? European Journal for Philosophy of Science, 2(3), 395–434.
Beisbart, C. (2017). Advancing knowledge through computer simulations? A socratic exercise. In M. Resch, A. Kaminski, & P. Gehring (Eds.), The science and art of simulation I: Exploring-understanding-knowing (pp. 153–174). Springer.
Beisbart, C. (2018). Are computer simulations experiments? And if not, how are they related to each other? European Journal for Philosophy of Science, 8(2), 171–204.
Beisbart, C., & Norton, J. D. (2012). Why Monte Carlo simulations are inferences and not experiments. International Studies in the Philosophy of Science, 26(4), 403–422.
Beisbart, C., & Saam, N. J. (Eds.). (2018). Computer simulation validation: Fundamental concepts, methodological frameworks, and philosophical perspectives. Springer.
Belfer, I. (2012). The info-computation turn in physics. In Turing-100.
Biagioli, M. (2010). How did Galileo develop his telescope? A “New” letter by Paolo Sarpi. In Origins of the Telescope (pp. 203–230). Royal Netherlands Academy of Arts and Sciences.
Biagioli, M. (2019). Galileo’s instruments of credit: Telescopes, images, secrecy. University of Chicago Press.
Boge, F. J. (2021). Why trust a simulation? Models, parameters, and robustness in simulation-infected experiments. British Journal for the Philosophy of Science, 75. https://doi.org/10.1086/716542
Boge, F. J., & Grünke, P. (2019). Computer simulations, machine learning and the Laplacean demon: Opacity in the case of high energy physics. European Journal for Philosophy of Science, 9(1), 13.
Boon, M. (2004). Technological instruments in scientific experimentation. International Studies in the Philosophy of Science, 18(2–3), 221–230.
Borge, S. (2003). The word of others. Journal of Applied Logic, 1(1–2), 107–118.
Boschetti, F., & Symons, J. (2011). Why models’ outputs should be interpreted as predictions. In Proceedings of the International Congress on Modelling and Simulation (MODSIM 2011), Perth, Australia (pp. 12–16).
Boschetti, F., Fulton, E., Bradbury, R., & Symons, J. (2012). What is a model, why people don’t trust them and why they should. In M. R. Raupach (Ed.), Negotiating our future: Living scenarios for Australia to 2050 (pp. 107–118). Australian Academy of Science.
Bruce, A. (1999). The Ising model, computer simulation, and universal physics. Models as Mediators: Perspectives on Natural and Social Science, 52, 97.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.
Burge, T. (1993). Content preservation. The Philosophical Review, 102(4), 457–488.
Burge, T. (1998). Computer proof, apriori knowledge, and other minds: The sixth philosophical perspectives lecture. Noûs, 32(S12), 1–37.
Campbell, K. (2006). Statistical calibration of computer simulations. Reliability Engineering & System Safety, 91(10–11), 1358–1363.
Cozad, A., Sahinidis, N. V., & Miller, D. C. (2014). Learning surrogate models for simulation-based optimization. AIChE Journal, 60(6), 2211–2227.
Dalmedico, A. D. (2001). History and epistemology of models: Meteorology (1946–1963) as a case study. Archive for History of Exact Sciences, 55(5), 395–422.
Danks, D. (2019, January). The value of trustworthy AI. In Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society (pp. 521–522).
Daston, L., & Galison, P. (2021). Objectivity. Princeton University Press.
Davidson, D. (1973). Radical interpretation. Dialectica, 27(3–4), 313–328.
Davies, M. (2004). II – Martin Davies: Epistemic entitlement, warrant transmission and easy knowledge. In Aristotelian society supplementary (Vol. 78, No. 1). The Oxford University Press.
Drake, S. (1984). Galileo, Kepler, and phases of Venus. Journal for the History of Astronomy, 15(3), 198–208.
Dretske, F. (2000). Entitlement: Epistemic rights without epistemic duties? Philosophy and Phenomenological Research, 60(3), 591–606.
Dubucs, J. (2002). Simulations et modélisations. Pour La Science, 300, 156–158.
Duede, E. (2022). Deep learning opacity in scientific discovery. arXiv preprint arXiv:2206.00520.
Durán, J. M. (2017). Varieties of simulations: From the analogue to the digital. In The science and art of simulation I (pp. 175–192). Springer.
Durán, J. M. (2018). Computer simulations in science and engineering. Springer.
Durán, J. M., & Formanek, N. (2018). Grounds for trust: Essential epistemic opacity and computational reliabilism. Minds and Machines, 28(4), 645–666.
Durán, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 47(5), 329–335.
Ellul, J. (1964). The technological society. Translated from the French by John Wilkinson. With an introduction by Robert K. Merton.
Frans, J., & Kosolosky, L. (2014). Mathematical proofs in practice: Revisiting the reliability of published mathematical proofs. THEORIA. Revista de Teoría, Historia y Fundamentos de la Ciencia, 29(3), 345–360.
Fresco, N., & Primiero, G. (2013). Miscomputation. Philosophy & Technology, 26(3), 253–272.
Frigg, R., & Reiss, J. (2009). The philosophy of simulation: Hot new issues or same old stew? Synthese, 169(3), 593–613.
Galison, P. (1996). Computer simulations and the trading zone. In P. Galison & D. J. Stump (Eds.), The disunity of science: Boundaries, contexts, and power (pp. 118–157). Stanford University Press.
Gehring, P. (2017). Doing research on simulation sciences? Questioning methodologies and disciplinarities. In The science and art of simulation I (pp. 9–21). Springer.
Golinski, J. (1994). Precision instruments and the demonstrative order of proof in Lavoisier’s chemistry. Osiris, 9, 30–47.
Gramelsberger, G. (2011). Generation of evidence in simulation runs: Interlinking with models for predicting weather and climate change. Simulation & Gaming, 42(2), 212–224.
Gransche, B. (2017). The art of staging simulations: Mise-en-scène, social impact, and simulation literacy. In The science and art of simulation I (pp. 33–50). Springer.
Grüne-Yanoff, T. (2017). Seven problems with massive simulation models for policy decision-making. In The science and art of simulation I (pp. 85–101). Springer, Cham.
Guala, F. (2002). Models, simulations, and experiments. In Model-based reasoning: Science, technology, values (pp. 59–74). Springer US.
Hacking, I. (1987). Review of data, instruments and theory: A dialectical approach to understanding science, by R. J. Ackermann. The Philosophical Review, 96(3), 444–447. https://doi.org/10.2307/2185230
Hacking, I. (1992). The self-vindication of the laboratory sciences. In Science as practice and culture (Vol. 30). University of Chicago Press.
Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge University Press.
Harré, R. (2003). The materiality of instruments in a metaphysics for experiments. In H. Radder (Ed.), The philosophy of scientific experimentation (pp. 19–38). University of Pittsburgh Press.
Hartmann, S. (1996). The world as a process. In Modelling and simulation in the social sciences from the philosophy of science point of view (pp. 77–100). Springer.
Harvard, S., Winsberg, E., Symons, J., & Adibi, A. (2021). Value judgments in a COVID-19 vaccination model: A case study in the need for public involvement in health-oriented modelling. Social Science & Medicine, 286, 114323.
Heidelberger, M. (2003). Theory-ladenness and scientific instruments in experimentation. The Philosophy of Scientific Experimentation, 8, 138–151.
Heilbron, J. L. (1993). Some uses for catalogues of old scientific instruments. In Essays on historical scientific instruments..., Aldershot, Variorum (pp. 1–16).
Heppner, F., & Grenander, U. (1990). A stochastic nonlinear model for coordinated bird flocks. The Ubiquity of Chaos, 233, 238.
Holzmann, G. J. (2015). Code inflation. IEEE Software, 2, 10–13.
Hon, G. (2003). Transcending the “ETC. LIST”. In The philosophy of scientific experimentation (p. 174). University of Pittsburgh Press.
Horner, J., & Symons, J. (2014). Reply to Angius and Primiero on software intensive science. Philosophy & Technology, 27(3), 491–494.
Horner, J. K., & Symons, J. (2019). Understanding error rates in software engineering: Conceptual, empirical, and experimental approaches. Philosophy & Technology, 32, 1–16.
Hu, R., & Ru, X. (2003). Differential equation and cellular automata models. In IEEE proceedings of the international conference on robotics, intelligent systems and signal processing (pp. 1047–1051). IEEE.
Hubig, C., & Kaminski, A. (2017). Outlines of a pragmatic theory of truth and error in computer simulation. In M. Resch, A. Kaminski, & P. Gehring (Eds.), The science and art of simulation I (pp. 121–136). Springer.
Humphreys, P. (1994). Numerical experimentation. In P. Humphreys (Ed.), Patrick Suppes: Scientific philosopher (Vol. 2, pp. 103–121). Kluwer.
Humphreys, P. (2004). Extending ourselves: Computational science, empiricism, and scientific method. Oxford University Press.
Humphreys, P. (2009a). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615–626.
Humphreys, P. (2009b). Network epistemology. Episteme, 6(2), 221–229.
IPCC. (2001). Climate change 2001: Synthesis report. In Contribution of working group I, II, and III to the third assessment report of the intergovernmental panel on climate change. Cambridge University Press.
Jenkins, C. S. (2007). Entitlement and rationality. Synthese, 157(1), 25–45.
Johnson, D. G. (2004). Computer ethics. In The Blackwell guide to the philosophy of computing and information (pp. 63–75). Blackwell.
Kaminski, A. (2017). Der Erfolg der Modellierung und das Ende der Modelle. Epistemische Opazität in der Computersimulation. Technik–Macht–Raum: Das Topologische Manifest im Kontext interdisziplinärer Studien, 317–333.
Kaminski, A., Resch, M. M., & Küster, U. (2018). Mathematische Opazität. Über Rechtfertigung und Reproduzierbarkeit in der Computersimulation. In Arbeit und Spiel (pp. 253–278). Nomos Verlagsgesellschaft mbH & Co. KG.
Kaufmann, W., & Smarr, L. L. (1993). Supercomputing and the transformation of science. Scientific American Library.
Keller, E. F. (2003). Models, simulation, and “computer experiments.” In H. Radder (Ed.), The philosophy of scientific experimentation (pp. 198–215). University of Pittsburgh Press.
King, H. C. (1955). The history of the telescope. Griffin.
King, H. C. (2003). The history of the telescope. Courier Corporation.
Koyré, A. (1957). From the closed world to the infinite universe (Vol. 1). Library of Alexandria.
Kroes, P. (2002). Design methodology and the nature of technical artefacts. Design Studies, 23(3), 287–302.
Kroes, P. (2003). Screwdriver philosophy; Searle’s analysis of technical functions. Techné: Research in Philosophy and Technology, 6(3), 131–140.
Kroes, P. (2006). Coherence of structural and functional descriptions of technical artefacts. Studies in History and Philosophy of Science Part A, 37(1), 137–151.
Kroes, P. (2009). Engineering and the dual nature of technical artefacts. Cambridge Journal of Economics, 34(1), 51–62.
Kroes, P. (2012). Technical artefacts: Creations of mind and matter: A philosophy of engineering design (Vol. 6). Springer Science & Business Media.
Kroes, P. A., & Meijers, A. W. M. (2006). The dual nature of technical artefacts. Studies in History and Philosophy of Science, 37(1), 1–4.
Kuhl, F., Dahmann, J., & Weatherly, R. (2000). Creating computer simulation systems: An introduction to the high-level architecture. Prentice Hall.
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Kuhn, T. S. (2012). The structure of scientific revolutions. University of Chicago Press.
Lackey, J. (1999). Testimonial knowledge and transmission. The Philosophical Quarterly, 49(197), 471–490.
Lankton, N. K., McKnight, D. H., & Tripp, J. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information Systems, 16(10), 1–918.
Latour, B. (1983). Give me a laboratory and I will raise the world. In Science observed: Perspectives on the social study of science (pp. 141–170).
Latour, B. (1990). Technology is society made durable. The Sociological Review, 38(S1), 103–131.
Lazer, D., Kennedy, R., King, G., et al. (2014). The parable of Google Flu: Traps in big data analysis. Science, 343(6176), 1203–1205.
Lee, K. H., Ros, G., Li, J., & Gaidon, A. (2018). SPIGAN: Privileged adversarial learning from simulation. arXiv preprint arXiv:1810.03756.
Lenhard, J. (2007). Computer simulation: The cooperation between experimenting and modeling. Philosophy of Science, 74(2), 176–194.
Lenhard, J. (2019). Calculated surprises: A philosophy of computer simulation. Oxford University Press.
Leonelli, S. (2021). Data science in times of pan(dem)ic. Harvard Data Science Review.
London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15–21.
Malet, A. (2003). Kepler and the telescope. Annals of Science, 60(2), 107–136.
Malet, A. (2005). Early conceptualizations of the telescope as an optical instrument. Early Science and Medicine, 10(2), 237–262.
Maley, C. J. (2011). Analog and digital, continuous and discrete. Philosophical Studies, 155(1), 117–131.
Marcovich, A., & Shinn, T. How scientific research instruments change: A century of Nobel Prize physics instrumentation. Social Science Information, 56(3), 201–374.
McEvoy, M. (2008). The epistemological status of computer-assisted proofs. Philosophia Mathematica, 16(3), 374–387.
McEvoy, M. (2013). Experimental mathematics, computers and the a priori. Synthese, 190(3), 397–412.
McGlynn, A. (2014). On Epistemic Alchemy. In D. Dodd & E. Zardini (Eds.), Scepticism and perceptual justification (pp. 173–189). Oxford University Press.
McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2), 1–25.
Metropolis, N. (1987). The beginning of the Monte Carlo method. Los Alamos Science, 15(584), 125–130.
Moretti, L., & Piazza, T. (2013). When warrant transmits and when it doesn’t: Towards a general framework. Synthese, 190(13), 2481–2503.
Morgan, M. S. (2005). Experiments versus models: New phenomena, inference and surprise. Journal of Economic Methodology, 12(2), 317–329.
Morgan, M. S., Morrison, M., & Skinner, Q. (Eds.). (1999). Models as mediators: Perspectives on natural and social science (Vol. 52). Cambridge University Press.
Morrison, M. (2009). Models, measurement and computer simulation: The changing face of experimentation. Philosophical Studies, 143(1), 33–57.
Morrison, M. (2015). Reconstructing reality. Oxford University Press.
Nagel, T. (1989). The view from nowhere. Oxford University Press.
Nariya, M. K., et al. (2016). Mathematical model for length control by the timing of substrate switching in the type III secretion system. PLoS Computational Biology, 12(4), e1004851.
Naylor, T. H., Balintfy, J. L., Burdick, D. S., & Chu, K. (1966). Computer simulation techniques. Wiley.
Epistemic opacity, confirmation holism and technical debt: Computer simulation in the light of empirical software engineering. In International conference on history and philosophy of computing (pp. 256–272). Springer.
Nieuwpoort, W. C. (1985). Science, simulation and supercomputers. In Supercomputers in theoretical and experimental science (pp. 3–9). Springer.
Nola, R., & Sankey, H. (2014). Theories of scientific method: An introduction. Routledge.
Norton, S., & Suppe, F. (2001). Why atmospheric modeling is good science. In Changing the atmosphere: Expert knowledge and environmental governance (pp. 67–105). MIT Press.
Oreskes, N. (2004). The scientific consensus on climate change. Science, 306(5702), 1686.
Oreskes, N. (2021). Why trust science? Princeton University Press.
Oreskes, N., Shrader-Frechette, K., & Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263(5147), 641–646.
Pace, D. K. (2004). Modeling and simulation verification and validation challenges. Johns Hopkins APL Technical Digest, 25(2), 163–172.
Páez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29(3), 441–459.
Parker, W. S. (2003). Computer modeling in climate science: Experiment, explanation, pluralism (Doctoral dissertation, University of Pittsburgh).
Parker, W. S. (2009). Does matter really matter? Computer simulations, experiments, and materiality. Synthese, 169(3), 483–496.
Parnas, D. L., van Schouwen, A. J., & Kwan, S. P. (1990). Evaluation of safety-critical software. Communications of the ACM, 33(6), 636–648.
Petersen, A. C. (2012). Simulating nature: A philosophical study of computer-simulation uncertainties and their role in climate science and policy advice. Chapman & Hall/CRC.
Phillips, N. A. (1956). The general circulation of the atmosphere: A numerical experiment. Quarterly Journal of the Royal Meteorological Society, 82(352), 123–164.
Pias, C. (2011). On the epistemology of computer simulation. Zeitschrift für Medien- und Kulturforschung, 1, 29–54.
Pincock, C. (2011). Mathematics and scientific representation. Oxford University Press.
Primiero, G. (2014). On the ontology of the computing process and the epistemology of the computed. Philosophy & Technology, 27(3), 485–489.
Primiero, G. (2019). On the foundations of computing. Oxford University Press.
Pryor, J. (2012). When warrant transmits. In A. Coliva (Ed.), Mind, meaning, and knowledge: Themes from the philosophy of Crispin Wright (pp. 269–303). Oxford University Press.
Quine, W. V. (1960). Word and object. MIT Press.
Quine, W. V. (1973). The roots of reference. Open Court.
Radder, H. (Ed.). (2003). The philosophy of scientific experimentation. University of Pittsburgh Press.
Resch, M. M. (2013). What’s the result? Thoughts of a center director on simulation. In J. M. Durán & E. Arnold (Eds.), Computer simulation and the changing face of scientific experimentation (pp. 233–246). Cambridge Scholars Publishing.
Resch, M. M. (2017). On the missing coherent theory of simulation. In The science and art of simulation I: Exploring-understanding-knowing (pp. 23–32). Springer International Publishing.
Resch, M., & Kaminski, A. (2019). The epistemic importance of technology in computer simulation and machine learning. Minds and Machines, 29, 9–17.
Resch, M. M., Kaminski, A., & Gehring, P. (Eds.). (2017). The science and art of simulation I: Exploring-understanding-knowing. Springer.
Resnik, M. (1997). Mathematics as a science of patterns. Oxford University Press.
Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. Computer Graphics, 21(4), 25–34.
Rohrlich, F. (1990). Computer simulation in the physical sciences. In PSA: Proceedings of the biennial meeting of the Philosophy of Science Association (Vol. 2, pp. 507–518). Cambridge University Press.
Ropohl, G. (1999). Philosophy of socio-technical systems. Society for Philosophy and Technology Quarterly Electronic Journal, 4(3), 186–194.
Roscher, R., Bohn, B., Duarte, M. F., & Garcke, J. (2020). Explainable machine learning for scientific insights and discoveries. IEEE Access, 8, 42200–42216.
Roush, S. (2015). The epistemic superiority of experiment to simulation. Synthese, 169, 1–24.
Ruphy, S. (2011). Limits to modeling: Balancing ambition and outcome in astrophysics and cosmology. Simulation & Gaming, 42(2), 177–194.
Ruphy, S. (2015). Computer simulations: A new mode of scientific inquiry? In S. O. Hansson (Ed.), The role of technology in science: Philosophical perspectives (pp. 131–148). Springer.
Saam, N. J. (2017a). Understanding social science simulations: Distinguishing two categories of simulations. In M. Resch, A. Kaminski, & P. Gehring (Eds.), The science and art of simulation I (pp. 67–84). Springer.
Saam, N. J. (2017b). What is a computer simulation? A review of a passionate debate. Journal for General Philosophy of Science, 48(2), 293–309.
Saam, N. J., & Harrer, A. (1999). Simulating norms, social inequality, and functional change in artificial societies. Journal of Artificial Societies and Social Simulation, 2(1), 2.
San Pedro, I. (2020). Degrees of epistemic opacity [Preprint]. http://philsci-archive.pitt.edu/18525/
Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143–186.
Schiaffonati, V. (2016). Stretching the traditional notion of experiment in computing: Explorative experiments. Science and Engineering Ethics, 22(3), 647–665.
Schweber, S., & Wächter, M. (2000). Complex systems, modelling and simulation. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 31(4), 583–609.
Shannon, R. E. (1981). Tests for the verification and validation of computer simulation models. Institute of Electrical and Electronics Engineers (IEEE).
Simon, H. A. (1969). The sciences of the artificial. MIT Press.
Simondon, G. (1958). On the mode of existence of technical objects (N. Mellamphy, Trans.). University of Western Ontario.
Simondon, G. (2011). On the mode of existence of technical objects. Deleuze Studies, 5(3), 407–424.
Simoulin, V. (2007). Une communauté instrumentale divisée... et réunie par son instrument [An instrumental community divided... and reunited by its instrument]. Revue d’anthropologie des connaissances, 1(2), 221–241.
Simoulin, V. (2017). An instrument can hide many others: Or how multiple instruments grow into a polymorphic instrumentation. Social Science Information, 56(3), 416–433.
Skinner, B. F. (1948). ‘Superstition’ in the pigeon. Journal of Experimental Psychology, 38(2), 168–172.
Steadman, I. (2013, January 25). Big data and the death of the theorist. Wired.
Sugden, R. (2000). Credible worlds: The status of theoretical models in economics. Journal of Economic Methodology, 7(1), 1–31.
Symons, J. (2008). Computational models of emergent properties. Minds and Machines, 18(4), 475–491.
Symons, J. (2010). The individuality of artifacts and organisms. History and Philosophy of the Life Sciences, 32, 233–246.
Symons, J., & Alvarado, R. (2016). Can we trust big data? Applying philosophy of science to software. Big Data & Society, 3(2), 2053951716664747.
Symons, J., & Alvarado, R. (2019). Epistemic entitlements and the practice of computer simulation. Minds and Machines, 29(1), 37–60.
Symons, J., & Boschetti, F. (2013). How computational models predict the behavior of complex systems. Foundations of Science, 18(4), 809–821.
Symons, J., & Horner, J. (2014). Software intensive science. Philosophy & Technology, 27(3), 461–477.
Symons, J., & Horner, J. (2017). Software error as a limit to inquiry for finite agents: Challenges for the post-human scientist. In T. Powers (Ed.), Philosophy and computing: Essays in epistemology, philosophy of mind, logic, and ethics. Philosophical studies series (Vol. 128, pp. 85–97). Springer.
Symons, J., & Horner, J. (2019). Why there is no general solution to the problem of software verification. Foundations of Science, 25, 1–17.
Szabó, B., & Actis, R. (2012). Simulation governance: Technical requirements for mechanical design. Computer Methods in Applied Mechanics and Engineering, 249, 158–168.
Tal, E. (2012). The epistemology of measurement: A model-based account (Doctoral dissertation, University of Toronto).
Taub, L. (2009). On scientific instruments. Studies in History and Philosophy of Science, 40(4), 337–343.
Taub, L. (2011). Introduction: Reengaging with instruments. Isis, 102(4), 689–696.
Toreini, E., Aitken, M., Coopamootoo, K., Elliott, K., Zelaya, C. G., & Van Moorsel, A. (2020, January). The relationship between trust in AI and trustworthy machine learning technologies. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 272–283).
Turner, G. L. E. (1969). The history of optical instruments: A brief survey of sources and modern studies. History of Science, 8(1), 53–93.
Turner, R. (2018). Computational artifacts: Towards a philosophy of computer science. Springer.
Tymoczko, T. (1979). The four-color problem and its philosophical significance. Journal of Philosophy, 76, 57–82.
Vallor, S. (2017). AI and the automation of wisdom. In T. Powers (Ed.), Philosophy and computing: Essays in epistemology, philosophy of mind, logic, and ethics. Philosophical studies series (Vol. 128, pp. 161–178). Springer.
van de Velde, L. R. (1960). Computers for field artillery. In Papers presented at the May 3–5, 1960, western joint IRE-AIEE-ACM computer conference. ACM.
Van Helden, A. (1974). The telescope in the seventeenth century. Isis, 65(1), 38–58.
Van Helden, A. (1977). The invention of the telescope. Transactions of the American Philosophical Society, 67(4), 1–67.
Van Helden, A. (1994). Telescopes and authority from Galileo to Cassini. Osiris, 9, 8–29.
Van Helden, A. (2004). Galileo and the telescope. Rice University.
Van Helden, A. (2020). III. The birth of the modern scientific instrument, 1550–1700. In The uses of science in the age of Newton (pp. 49–84). University of California Press.
Van Helden, A., & Hankins, T. L. (1994). Introduction: Instruments in the history of science. Osiris, 9, 1–6.
Van Helden, A., Dupré, S., & van Gent, R. (Eds.). (2010). The origins of the telescope (Vol. 12). Amsterdam University Press.
Von Neumann, J., Burks, A. W., & Goldstine, H. H. (1946). Preliminary discussion of the logical design of an electronic computing instrument. Institute for Advanced Study.
Von Neumann, J., & Burks, A. W. (1966). Theory of self-reproducing automata. University of Illinois Press.
Von Rueden, L., Mayer, S., Garcke, J., Bauckhage, C., & Schuecker, J. (2019). Informed machine learning – Towards a taxonomy of explicit integration of knowledge into machine learning. arXiv preprint arXiv:1903.12394.
Von Rueden, L., Mayer, S., Sifa, R., Bauckhage, C., & Garcke, J. (2020). Combining machine learning and simulation to a hybrid modelling approach: Current and future directions. In Advances in intelligent data analysis XVIII: 18th international symposium on intelligent data analysis, IDA 2020, Konstanz, Germany, April 27–29, 2020, proceedings (pp. 548–560). Springer International Publishing.
Von Rueden, L., Mayer, S., Beckh, K., Georgiev, B., Giesselbach, S., Heese, R., et al. (2021). Informed machine learning – A taxonomy and survey of integrating prior knowledge into learning systems. IEEE Transactions on Knowledge and Data Engineering, 35(1), 614–633.
Warner, D. J. (1990). What is a scientific instrument, when did it become one, and why? The British Journal for the History of Science, 23(1), 83–93.
Weisberg, M. (2012). Simulation and similarity: Using models to understand the world. Oxford University Press.
Werrett, S. (2014). Matter and facts: Material culture in the history of science. Routledge.
Whitehead, A. N. (1911). An introduction to mathematics. Courier Dover Publications.
Wilholt, T. (2013). Epistemic trust in science. The British Journal for the Philosophy of Science, 64(2), 233–253.
Williams, M. (2000). Dretske on epistemic entitlement. Philosophy and Phenomenological Research, 60(3), 607–612.
Winsberg, E. (2003). Simulated experiments: Methodology for a virtual world. Philosophy of Science, 70(1), 105–125.
Winsberg, E. (2010). Science in the age of computer simulation. University of Chicago Press.
Winsberg, E. (2019). Computer simulations in science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2015 ed.). http://plato.stanford.edu/archives/sum2015/entries/simulations-science/. Accessed 20 Dec 2018.
Winsberg, E., & Harvard, S. (2022). Purposes and duties in scientific modelling. Journal of Epidemiology and Community Health, 76(5), 512–517.
Winsberg, E., & Mirza, A. (2017). Considerations from the philosophy of simulation. In The Routledge handbook of scientific realism (p. 250). Routledge.
Wright, C. (2004, July). Warrant for nothing (and foundations for free)? In Aristotelian Society supplementary volume (Vol. 78, No. 1, pp. 167–212). The Aristotelian Society.
Wright, C., & Davies, M. (2004). On epistemic entitlement. In Proceedings of the Aristotelian Society, supplementary volumes (Vol. 78, pp. 167–245). www.jstor.org/stable/4106950. Accessed 20 Dec 2018.
Zheng, X., & Julien, C. (2015, May). Verification and validation in cyber physical systems: Research challenges and a way forward. In 2015 IEEE/ACM 1st international workshop on software engineering for smart cyber-physical systems (pp. 15–18). IEEE.
Zik, Y. (1999). Galileo and the telescope: The status of theoretical and practical knowledge and techniques of measurement and experimentation in the development of the instrument. Nuncius, 14, 31–69.
Zik, Y. (2001). Science and instruments: The telescope as a scientific instrument at the beginning of the seventeenth century. Perspectives on Science, 9(3), 259–284.
Zik, Y., & Hon, G. (2017). History of science and science combined: Solving a historical problem in optics – The case of Galileo and his telescope. Archive for History of Exact Sciences, 71, 337–344.
Index
A
Abstract objects, 31, 90
Accademia del Cimento, 4, 85
Acceptance Principle, 114, 115, 118, 139
Agent-based
  opacity, 136
  simulations, 136
Algorithms, 124
Apparatus, 2, 71, 73
Approximation, 55, 57, 113, 123
Architecture, 23, 33, 45, 47, 53, 54, 58, 61, 65, 123
Artifacts
  technical, 1, 2, 4, 9, 15, 60, 61, 65, 69–71, 73, 87, 88, 90, 111, 120, 127, 147
Artificial intelligence (AI), 132, 137
Augmentation, 101
Authority, 77, 78, 82, 83, 85, 112, 119
Automata, cellular, 22, 103

B
Bacon, F., 43, 74, 86, 99
Baird, D., 15, 67, 68, 70, 96, 103, 104, 106
Barberousse, A., 11, 52, 58, 114, 126
Black boxes, 6, 15
Burge, T., 11, 113–120, 122, 124–126, 129, 138–141

C
Calculus, 140
Climate, 12, 140
Code, 56, 57, 122, 124, 136, 138
Commitment, ontological, 15, 31, 67, 69, 71, 90, 96, 148
Computational
  method, 4, 5, 10, 11, 55, 65, 116–119, 122, 123, 125, 136–138, 147
  model, 21, 33, 55, 57, 89, 115, 124
  processes, 7, 8, 10, 12, 53–55, 57, 64, 65, 89–91, 117, 119, 122–125, 136, 137
  system, 10, 89, 90, 115, 122, 138
Content
  preservation, 115, 116, 119, 120
  transmission, 116, 120
Conversion, 122

D
Danks, D., 133
Deflationary, 41–44, 96, 135
Dichotomy, 13, 15, 51, 58, 91
Discrete, 55–57, 65, 115, 116
Discretization, 42, 56, 122–125
Discretized, 55–57, 123, 124, 126
Durán, J., 31

E
Encapsulate(d), 15, 61, 63, 65, 104
Epistemic
  content, 12, 60, 69, 95, 131–133, 138
  context, 9, 10, 60, 61, 68, 69, 82, 87, 111–114, 116, 117, 120, 127, 128, 130, 131, 133, 136–139, 141, 142, 149
  enhancers, 91, 131, 134
  entitlements, 4–6, 9, 11, 82, 85, 112–114, 116–118, 125, 127, 129, 138–142, 149
  limitations, 15, 16, 68, 73, 74, 77, 91, 101, 120, 136, 138, 147, 149
  manipulations, 123
  opacity, 11, 131–138
  practices, 4, 5, 15, 60, 69, 74, 76, 78–82, 85, 86, 111, 114, 116–118, 120, 122, 124, 125, 127–130, 137–142, 147
  status, 61, 72, 76–79, 111, 117, 118, 126, 130, 131, 147
  technologies, 4–6, 9, 11, 13, 76, 78, 82, 87, 97, 131–138, 147, 150
  trust, 133, 134
  warrants, 112, 117–119, 122, 124, 128, 129
Epistemically
  independent, 10, 71, 103, 108, 124
  opaque, 113, 124, 134, 136, 137
  relevant, 10, 115, 119, 122–124, 130, 134, 136, 137, 139
Equation-based simulations, 20, 21
Error, 80, 84, 122, 123, 125, 126, 128, 130, 136–138, 142
Experimentation, 6, 12–15, 29, 32, 35, 36, 41, 44, 45, 61, 82, 89, 96, 98, 100, 103, 105, 147
Expert
  communities, 5, 130, 147
  sources, 112, 114, 116, 128–130, 147
  testimony, 52, 112, 114, 117–119, 128
Explanation, 7, 15, 26, 40, 61, 83, 95, 98, 99, 101, 103, 106, 139, 140, 148
Extrapolation, 101, 102

F
Formal
  elements, 6, 13, 14, 52, 61–63, 65, 68
  methods, 2, 6, 14, 56–58, 60, 87, 88, 115, 121, 125, 128, 147, 148
Fox-Keller, E., 68, 122
Functionally
  distinct, 53, 133
  identifiable, 133

G
Galileo, G., 1–5, 8, 67, 74–84, 86

H
Hardware, 15, 54, 58, 116, 122, 126
Harré, R., 68, 106
Heidelberger, M., 105, 107
Humphreys, P., 10, 12, 31, 35, 36, 89, 90, 95, 99, 100, 102, 124, 134–137, 141, 150
Hybrid, 63, 131, 148, 149

I
Implementation, 19, 22, 23, 30–34, 40, 41, 53, 54, 58, 59, 64, 87, 95–97, 100, 101, 106, 108, 122, 123, 125, 128, 148
In-between/in-betweenness, 13, 15, 35, 37–39, 41, 43, 88, 95–101, 104, 106, 108, 148
Inquiry, 51, 57, 62–65, 68, 71, 72, 74, 76, 87, 88, 111, 112, 114, 115, 118, 128, 131, 138–142, 147–149
Instruments
  mathematical, 2, 43, 83
  philosophical, 2, 4, 15, 43, 68, 72, 74, 77, 83, 87

J
Justification, 57, 77, 87, 112, 113, 115, 118, 119, 121, 123, 125, 130, 139, 140, 142
Justificatory, 8, 80, 112, 114, 116, 117, 121–125, 127, 129, 132, 139, 140

K
Kepler, J., 1–5, 78–83, 85, 87
Kroes, P., 68, 70

L
Lenhard, J., 13, 37–39, 43, 44, 52, 97, 100
Leonelli, S., 5, 6

M
Machine learning, 11, 25–27, 132, 135–137
Manipulation, 30, 39–42, 44, 58, 65, 71, 105, 123, 132
Materiality, 15, 41, 45, 67, 69, 97, 107, 108, 148
Mathematical
  considerations, 57
  instruments, 2, 6, 12, 39, 41, 43, 63, 78, 102, 103
  models, 12–14, 19, 21, 22, 25, 31–35, 37, 39, 41–43, 45, 47, 52, 55–57, 63, 115, 122–125
  operations, 8, 14, 30, 32, 47, 54, 56, 57
  proofs, 114, 116, 118, 140
Measurement, 32, 37, 39, 41, 73, 97, 101, 103, 104
Model
  abstract, 13, 53, 54, 60
  computational, 21, 33, 55, 57, 89, 115, 124
  mathematical, 12–14, 31–35, 37, 39, 41–43, 45, 47, 52, 55–57, 63, 115, 122–125
  simulation, 6, 10, 12–14, 29–34, 37–45, 47, 52–57, 60, 61, 63, 64, 89–91, 98, 102–104, 106, 115, 122–126, 129, 131, 137
  theoretical, 14, 35, 38, 39, 42–45, 53, 63, 103
Morrison, M., 6, 13, 39–45, 52, 96, 97, 100, 104, 105

N
Nagel, T., 87, 136
Norms
  epistemic, 4, 74, 76, 86, 138, 139, 141, 142, 147
  social, 85, 138, 147
Novelty, 12, 36, 97, 98, 101
Numerical, 57, 65, 123

O
Ontological commitment, 15, 31, 67, 69, 71, 90, 96, 148
Opacity
  agent-neutral, 136
  essential, 134–136
  general, 10, 134, 135, 137
Oracle, 8, 10–12, 149

P
Parker, W., 12, 42
Performative, 54, 61, 64, 105, 106
Philosophy of
  engineering, 5
  experimentation, 13
  modeling, 15
  science, 5, 13, 15, 67, 68, 82, 87, 89, 147
  technology, 5, 67, 147
Pipeline, 53, 115
Pragmatic, 2, 12–14, 85, 139, 140, 149
Proofs
  computer-assisted, 115, 116
  mathematical, 115, 116
Pseudo-artifacts, 61, 70, 107
Pythagoras, 119, 121

R
Reliability, 4, 5, 9–11, 13, 36, 77, 79, 80, 82, 83, 86, 87, 115–117, 121, 122, 124, 128, 136, 137, 140, 141
Representation, 7, 43, 46, 53, 59, 62, 64, 69, 89, 98, 115
Representational, 7, 31–33, 53, 89, 90, 137

S
Sanctioning, 2–6, 8, 9, 13, 67, 74, 77, 79, 82–88, 96, 122, 126, 128, 147, 149
Scientific
  inquiry, 2, 5, 6, 8, 9, 12–16, 54, 57, 61, 67, 68, 71, 74–76, 80–82, 85–88, 91, 111–114, 116, 118, 120, 121, 126, 127, 129, 130, 133, 138–140, 147–150
  instruments, 1–4, 11, 14, 15, 29, 41, 61, 67–88, 90, 95, 97, 99–102, 105, 107, 108, 111, 112, 116, 120, 127, 130, 132, 133, 139–142, 147–149
  norms, 70, 74, 80, 85, 86, 114, 139, 149
  practices, 4, 6, 13, 14, 16, 60, 67, 126, 127, 147
Simon, H., 70, 107
Skepticism, 2, 81, 83, 86, 139, 149
Software, 8, 10, 11, 15, 54, 56–58, 116, 122, 124, 125, 128, 135, 136
Supercomputing, 23–25
Symons, J., 6, 10, 22, 70, 72, 127, 136

T
Target
  phenomena, 14, 55–57, 62–65, 91, 125, 126
  systems, 53, 54, 56, 62–65, 91, 126
Telescope, 1–6, 8, 12, 13, 67, 74–81, 83–86, 88, 131, 132, 142, 148, 149
Testimony, 80, 83, 112–114, 116, 118–120, 128–130, 138, 139, 141
Theoretical principles, 13, 14, 21, 25, 55, 56, 68, 70–72, 77, 80, 85, 89, 97, 126–128, 130, 139
Theory, 4, 51, 65, 67, 71, 77, 91, 123, 124, 141, 147, 148
Thermometer, 64, 103, 106
Transparent conveyor, 117, 119, 121
Trust, 68, 71, 72, 78, 79, 83, 85, 111–114, 117, 118, 120–141, 149

V
Van Helden, A., 3, 4, 73–85
Via negativa, 51–65
W
Warrant, 9, 11, 58, 69, 72, 79, 82, 83, 112, 114–129, 132, 134, 140, 142, 148, 149
Wilholt, T., 133
Winsberg, E., 11, 12, 14, 32–36, 56, 100, 115, 122
Working knowledge, 75–77, 141