Nicholas Rescher
Being and Value And Other Philosophical Essays
ontos verlag Frankfurt · Paris · Ebikon · Lancaster · New Brunswick
Bibliographic information published by the Deutsche Nationalbibliothek. The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.
North and South America by Transaction Books Rutgers University Piscataway, NJ 08854-8042 [email protected]
United Kingdom, Eire, Iceland, Turkey, Malta, Portugal by Gazelle Books Services Limited White Cross Mills Hightown LANCASTER, LA1 4XS [email protected]
Distribution for France and Belgium: Librairie Philosophique J. Vrin, 6, place de la Sorbonne, F-75005 PARIS. Tel. +33 (0)1 43 54 03 47; Fax +33 (0)1 43 54 48 18. www.vrin.fr
2008 ontos verlag P.O. Box 15 41, D-63133 Heusenstamm www.ontosverlag.com ISBN 978-3-938793-88-6
2008 No part of this book may be reproduced, stored in retrieval systems or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use of the purchaser of the work
Printed on acid-free paper FSC-certified (Forest Stewardship Council) This hardcover binding meets the International Library standard Printed in Germany by buch bücher dd ag
For John Woods In cordial colleagueship
Being and Value: And Other Philosophical Essays

Preface
Chapter 1: BEING AND VALUE: ON THE PROSPECT OF OPTIMALISM … 1
Chapter 2: ON EVOLUTION AND INTELLIGENT DESIGN … 25
Chapter 3: MIND AND MATTER … 37
Chapter 4: FALLACIES REGARDING FREE WILL … 49
Chapter 5: SOPHISTICATING NAÏVE REALISM … 63
Chapter 6: TAXONOMIC COMPLEXITY AND THE LAWS OF NATURE … 71
Chapter 7: PRACTICAL VS. THEORETICAL REASON … 79
Chapter 8: PRAGMATISM AS A GROWTH INDUSTRY … 89
Chapter 9: COST BENEFIT EPISTEMOLOGY … 95
Chapter 10: QUANTIFYING QUALITY … 115
Chapter 11: EXPLANATORY SURDITY … 129
Chapter 12: CAN PHILOSOPHY BE OBJECTIVE? … 139
Chapter 13: ON ONTOLOGY IN COGNITIVE PERSPECTIVE … 149
Chapter 14: PLENUM THEORY [Essay Written Jointly with Patrick Grim] … 165
Chapter 15: ONOMETRICS (ON REFERENTIAL ANALYSIS IN PHILOSOPHY) … 185
Bibliography … 197
Name Index … 203
PREFACE
These essays were written during 2005-07, and some of them were published during this period in various journals and collections. (In particular, items 6 and 14 are at issue here, as per the endnotes of these chapters.) Moreover, chapter 14 was the result of a happy collaboration with Patrick Grim, to whom I am grateful for permitting its republication here. As ever, I am indebted to Estelle Burris for her excellent assistance in preparing this material for publication.

Nicholas Rescher
Pittsburgh, Pennsylvania
November 2007
Chapter 1
BEING AND VALUE
On the Prospects of Optimalism

1. INTRODUCTION

With the emergence of “enlightenment” philosophizing in the 18th century, it became a dogma of modern philosophy that an unbridgeable gap exists between normativity and fact, between being and value. It was seen as a virtual truism that no evaluative “ought” can possibly be inferred from a factual “is”—nor the other way round: no ontological bearing can be found in matters of value. The former conviction began to crumble in the 19th century, when utilitarians, in particular, sought to ground obligation and ethical normativity in the factualities of human well-being. But the reverse move—a return to the ancient Greek idea of grounding the explanation of actualities in the normativities of value—had to wait for revival until the late 20th century, when various cosmologists (though not philosophers!) were drawn to the anthropic hypothesis in cosmology. The idea that being could root in value—that existence can be explained through considerations of merit—was still seen as philosophically anathema. But is this negativism actually warranted?
2. INTELLIGENCE-GEARED OPTIMALISM AS A VIABLE VIEW OF THE WAY OF THINGS
Why are things as they are? How are we to account for the world’s being a lawful manifold that is cognitively accessible to beings of finite intelligence? If such “ultimate questions” are to be adequately addressed, then we must expect from the very outset that the explanations able to do the job will have to be of a decidedly unusual sort. For normally our questions about the world are delegated to science. But a science designed to explain what happens in the universe is not by nature suited to explain the existence of the universe. We have to expect that such a nonstandard question must be handled by nonstandard means. And this expectation is certainly met by an optimalism
that seeks to explain existence in terms of value and thereby pivots the explanation of what is on its being for the best. Such a value-geared optimalism has been on the agenda of philosophy since Plato’s Timaeus, elaborated by Proclus in his Commentary on that work, and carried forward as a leitmotiv through ancient and medieval neo-Platonism to Leibniz and beyond. Its historical prominence alone would enable it to qualify as a live option, even were it not for its revival on the stage of contemporary science—as will appear below. But what sort of “best” will be at issue with optimalistic explanation? With what particular values can an evaluative approach to explaining existence possibly concern itself? It is clear that one cannot just optimize, any more than one can just maximize or minimize. For one has to optimize something—some feature or aspect of things. And if this factor is to be something that is potentially self-validating and self-sustaining, then the most promising candidate would clearly appear to be intelligence itself—that is to say, the overall condition and standing of intelligent beings at large. That pivotal “for the best” will have to be construed in terms of what is best for the enhancement and diffusion of intelligence in the cosmos. An optimalistic explanation will accordingly proceed to provide a rational explanation on rational principles. A rational being is bound to see the loss of reason as a supreme tragedy. The mode of merit at issue here with “being for the best” is a matter of being so as intelligent creatures see it—that is, from the vantage point of intelligence itself. Assuredly, no intelligent being would prefer an alternative that is inferior in this regard. And so, for any intelligent being—any rational creature—intelligence itself must figure high on the scale of values. It would, accordingly, appear that intelligence and rationality best qualify as the self-sufficient standard of value that will have to be at issue.
The optimalism envisioned here is thus oriented at optimizing the conditions of existence for intelligent beings at large. And at the cosmological level such an optimalism militates towards a universe which:

• provides for the chance and randomness through which alone variation and selection permit intelligent beings to emerge in the world through evolutionary processes;

• provides for the chance-conditional novelty and innovation needed to provide an environment of sufficient complexity to be of interest for intelligent beings;
• provides for the order of regularity and lawfulness needed for a universe sufficiently orderly to allow complex creatures to develop and thrive;

• provides for a lawful order in the modus operandi of nature sufficiently simple to be understood by imperfectly intelligent beings as a basis for grounding their decisions and actions in a complex world.

And so what is called for here is—as Leibniz already saw—a suitable mixture of variety and order, of chance and lawfulness, of diversity and harmony. Thus regarded, optimalism is a theory that grounds the explanation of the world’s facts in a process of optimization subject to constraints—the constraints being the projection of a lawful diversity of things that possibilizes and probabilifies the success of intelligent beings in the world (which is not quite to say that such success is necessarily guaranteed). What is at issue here is thus what might be called a noophelic—intelligence-friendly—optimalism. Axiogenesis offers a direct answer to the complaint about “isolating man from nature” which John Dewey saw as the cardinal sin of modern philosophy. There is, after all, no more emphatic way to link man and nature than to see values that humans can appreciate as constituting nature’s driving developmental force. Such an optimalism has many theoretical advantages. Here is just one of them. It is conceivable, one might contend, that the existence of the world (i.e., the actuality of a world) is a necessary fact while nevertheless its nature (i.e., just what sort of world it is) is contingent. And this would mean that separate and potentially different answers would have to be provided for the questions “Why is there anything at all?” and “Why is the character of existence as it is—why is it that this particular world exists?” However, an intelligence-geared optimalism enjoys the advantage of rational economy in that it proceeds uniformly here.
It provides a single homogeneous rationale for both answers—namely that “this is for the best.” It accordingly also enjoys the significant merit of providing for the rational economy of explanatory principles.

3. A NOOPHELIC UNIVERSE

For nature to be intelligible there must be a coordinative alignment that
requires cooperation on both sides. The analogy of cryptanalysis is suggestive. If A is to break B’s code, there must be due reciprocal alignment. If A’s methods are too crude, too hit-and-miss, he can get nowhere. But even if A is quite intelligent and resourceful, his efforts cannot succeed if B’s procedures are simply beyond his powers. (The cryptanalysts of the 17th century, clever though they were, could get absolutely nowhere in applying their investigative instrumentalities to a high-level naval code of World War II vintage.) Analogously, if mind and nature were too far out of alignment—if mind were too “unintelligent” for the complexities of nature, or nature too complex for the capacities of mind—the two just couldn’t get into step. It would be like trying to rewrite Shakespeare in a pidgin English with a 500-word vocabulary, or like trying to monitor the workings of a system containing ten degrees of freedom by using a cognitive mechanism capable of keeping track of only four of them. If something like this were the case, mind could not accomplish its evolutionary mission. The interests of survival would then have been better served by an alignment process that attunes creatures to their environment in ways that do not take the cognitive route. Accordingly, nature must cooperate with intelligence in a certain very particular way—it must be stable enough and regular enough and structured enough for there to be appropriate responses to natural events that can be “learned” by creatures. If such “appropriate responses” are to develop, nature must provide suitable stimuli in a duly structured way. Nature must thus present us with an environment that affords sufficiently stable patterns to make coherent “experience” possible, enabling us to derive appropriate information from our structured interactions with the environment. Accordingly, a world in which any form of intelligence evolves will have to be a world that is congenial to the probes of intelligence.
To reemphasize: A world in which intelligent creatures emerge through the operation of evolutionary processes must be a substantially intelligible world. Such a universe with intelligent creatures must be intelligence-congenial or noophelic: it must be the sort of universe that an intelligent being would—if it could—endeavor to contrive, a universe that is intelligently designed with a view to the existence and flourishing of intelligent creatures. Optimalism will count as a version of idealism. To be sure, it does not hold that Reality is mental, or mind-made. But it does hold that it is noophelic (i.e., mind-friendly) not just in the trivial sense that it has permitted the emergence of mind, but in the more ambitious sense that it has invited this emergence by having the sort of complex order able to afford an evolutionary advantage to intelligent beings—an order, in sum, that provides an environment constituted by values congenial to the needs of mind. It is, in sum, only natural and to be expected that a universe that brings into being a kind of being capable of substantial progress in the cognitive domestication of its own modus operandi should be so constituted that noophelic optimalism will be a promising explanatory prospect. But what is it that speaks for such a view?

4. OPTIMALISM’S SELF-VALIDATION

The question “Why noophelic optimalism?” divides along two decidedly distinct lines, namely the existential and the evidential: (1) Why does optimalism obtain? and (2) Why is it that we should accept optimalism’s obtaining? These issues are, of course, every bit as distinct as “Why did Booth assassinate Lincoln?” and “Why should we accept that Booth assassinated Lincoln?” The former question seeks to explain a fact; the latter asks for the evidentiation of a judgment. The one is ontological, the other epistemic. The answer to the first question is straightforward. Optimalism obtains because it is self-potentiating. What is for the best obtains because this itself is for the best. Why should what is for the best exist? The answer lies in the very nature of the principle itself. It is self-substantiating, seeing that it is automatically for the best that the best alternative should exist rather than an inferior rival. Value is, or can be, an explanatory terminus: it can be regress-stopping and “final” by way of self-explanation in a way that causality or purposiveness can never manage to be. Optimalism, in sum, can be explained as obtaining on its own self-sufficient footing, with the Principle of Optimality itself as part and parcel of the optimal order whose obtaining it validates.
After all, what better candidate could there be than the Law of Optimality itself, with the result that the division between reality and mere possibility is as it is (i.e., value based) because that itself is for the best?1 We must expect that any ultimate principle should explain itself and cannot, in the very nature of things, admit of an external explanation in terms of something altogether different. And the impetus to realization inherent in authentic value lies in the very nature of value itself. A rational person would not favor the inferior alternative; and there is no reason to think that a rational reality would do so either. The self-explanatory nature of the principle reflects the fact that to ask
for a different sort of explanation would be inappropriate. There is not—cannot be—any good reason for reality to be other than rational. Even as truly rational people will do in action what they think is best (everything considered) in the circumstances at hand, so a rationally functioning reality will realize what is actually for the best, everything considered. And there can be no good reason for reality to function otherwise—that, after all, is inherent in the very idea of what a “good reason” is. Even as a rational person would not favor an inferior alternative, so there is no reason why a rational reality would do so either. To ask for a different sort of explanation would thus be not just unnecessary but inappropriate. One can, of course, ask “Why this linkage between optimality and actuality: why should it be that the manifold of real existence is intelligible along rational lines?” But this represents a decidedly problematic proceeding. For to ask this question is to ask for a reason, and is thereby already to presume or presuppose the rationality of things, taking the stance that what is so is and must be so for a reason. Once one poses the question “But why should it be that nature has the feature F?” it is already too late to raise the issue of nature’s rationality. In asking that question the matter at issue has already been tacitly conceded. Anyone who troubles to demand a reason why nature should have a certain feature is thereby proceeding within a framework of thought where nature’s rationality—the amenability of its features to rational explanation—is already presupposed.
Yet what is to be the status of a Law of Optimality to the effect that “whatever possibility is for the best is ipso facto the possibility that is going to be actualized in the world”? It is certainly not a logico-conceptually necessary truth; from the angle of theoretical logic it has to be seen as a contingent fact—albeit one not about nature as such, but rather one about the manifold of real possibility that underlies it. Insofar as it is necessary at all, it obtains as a matter of ontologico-factual rather than logico-conceptual necessity, while the realm of possibility as a whole is subject to principles of metaphysical necessity. After all, the division of the realm of the logically possible into real and genuine vs. merely conceivable and speculative possibilities can hinge on contingent considerations: there can be logically contingent laws of possibility even as there are logically contingent laws of nature (i.e., of reality). “But if it is contingent then surely it must itself rest
on some further explanation.” Granted. It itself presumably has an explanation, seeing that one can and should maintain the Leibnizian Principle of Sufficient Reason to the effect that for every contingent fact there is a reason why it is so rather than otherwise. But there is no decisive reason why that “further explanation” has to be “deeper and different”—that is, no decisive reason why the prospect of self-explanation has to be excluded at this fundamental ontological level.2 In the end, if there is to be anything worthy of the name of an ultimate explanation, there is just no alternative to its being self-validating at this stage—and of course it is an exceptional and altogether extra-ordinary one. Self-validation is not necessarily circular at all, but relative to the problem at issue is profoundly virtuous. For it is crucial and indispensable to an ultimate explanation that it not require anything else for its own validation.

5. EVIDENTIATING OPTIMALISM

So much, then, for why optimalism obtains: its ratio essendi. But there of course yet remains the matter of its ratio cognoscendi: its evidentiation—why it is that one would be well advised to accept optimalism. This, of course, is something else again. To obtain evidence for optimalism we will have to look at the world itself. The evidential rationale for optimalism will have to root in our knowledge of natural reality. In this sense optimalism, if tenable at all, will have to be tenable on at least roughly scientific (empirico-observational) grounds. And here the best evidence we can have for optimalism is the emergence in the universe of intelligent beings able to understand the modus operandi of that universe itself: intelligent beings who can create applicatively useful thought-models of nature. For optimalism’s evidentiation, then, we must look to a universe that is user-friendly to intelligent beings in affording an environment that is congenial to the best interests of their intelligence.
And so we confront the question “Is the world as we have it user-friendly for intelligence?” And the answer here would seem to be an emphatic affirmative, with a view to the following considerations:

• the fact that the world’s realities proceed and develop under the aegis of natural laws: that it is a manifold of lawful order whose doings exhibit a self-perpetuating stability of processual function;
• the fact that intelligent beings have in fact emerged—that nature’s modus operandi has possibilized and facilitated the emergence of intelligence;

• the fact of an ever-deepening comprehension/penetration of nature’s operations on the part of intelligent beings—their ongoing expansion and deepening of their understanding of the world’s events and processes, providing step by step the materials for the development of their laws of physics, their chemistry, their biology, their sociology, etc.

In sum, a substantial body of facts regarding the nature of the universe speaks on behalf of a noophelic, intelligence-geared optimalism. Evidence is certainly not lacking here.

* * *

The upshot of these deliberations is that once one is willing to have recourse to axiological explanation, there no longer remains any good reason to think that the existence and nature of reality are something so deeply problematic that they remain inexorably unintelligible—an issue which, on Kantian or other principles, it is somehow inappropriate to inquire into. So we can now take the bull by the horns and address such grandiose questions as “Why is there anything at all?” or “Why is it that the condition of things as a whole is as it actually is?” in the expectation of an answer—and indeed an answer which effectively responds: “Because that’s for the best.” Such an optimalism is, in effect, axiogenetic: it explains reality in terms of value. The axiological approach to explanation that is at issue with such an optimalism is, to be sure, a drastically unusual and extra-ordinary one. But then of course the question of why Reality should be explicable is a highly unusual and extra-ordinary question, and it is a cardinal principle of cognitive sagacity that if one is asking an extra-ordinary question one must expect an extra-ordinary answer. But many challenges yet remain. Perhaps the most prominent of these is: Is optimalism testable?
It is obvious that we cannot experiment with the creation of the universe. So if experimental testability is to be the standard of scientificity, then optimalism is not a scientific theory but a metaphysical hypothesis. (In this regard, optimalism looks a bit like string theory.) However, it is not irrelevant that a good deal of contemporary cosmogenesis, cosmology, and cosmography invites an optimalistic construction. And so the salient point is that what can be done by way of testing is to experiment by way of thought-experimentation via computer simulation. So if the doctrine is ever to achieve the status of a scientifically respectable theory rather than a purely metaphysical one, it will have to be developed along these lines. No doubt this is a hurdle. But there is no reason to think it to be insuperable.3

6. ENTER THOUGHT EXPERIMENTATION

To say that optimalism is not testable in the laboratory is not to say it is not testable at all. True, we cannot create alternative worlds. But we can indeed simulate them. We cannot experiment with alternative actualities. But we can certainly perform thought experiments, creating alternative realities with our brains and our computers.4 The sort of thought experimentation needed to show that the universe as we have it constitutes a setting favorable to the emergence and flourishing of intelligent beings is something that can in principle be assessed by simulating, by hypothesis and supposition, the conditions and circumstances at issue. How might this work out? When we are at the top of a mountain, whichever way we move will put us at a point lower down. And just this sort of approach can be put to work in the present case. What we will need—and might actually manage—to do is to ascertain that whatever hypothetical changes we make in nature’s arrangements would—when systematically worked out in their overall systemic ramifications—result in an overall state of things that is inferior to the actual.
So while actual experimentation with worlds is certainly impossible, thought experimentation is something we can in principle manage by addressing questions of the format: “If we made this or that modification of nature’s laws as best we understand them—if we redesigned the manifold of natural process in this or that way—would the result that would then actually or probably ensue be more favorable to the flourishing of intelligent beings in the world’s scheme of things?” And there is in fact some reason to think that, insofar as we can manage matters, the results of such thought experimentation will not in fact be unfavorable to optimalism. On the contrary, the entire elaborate discussion in contemporary cosmology of the fundamental laws of physics in their relation to cosmic evolution seems to suggest that the results of such a thought experiment are going to manifest the sort of intelligent world-design that optimalism envisions.
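The mountain-top analogy amounts to a local-optimality check: perturb each parameter of a configuration slightly and confirm that every such perturbation makes things worse. The following is a purely illustrative sketch of that procedure; the merit function and the parameter values are invented placeholders for exposition, not actual physical constants or any real cosmological model.

```python
# Illustrative sketch only: a toy "local optimum" check mirroring the
# mountain-top analogy. The merit function and its peak are invented
# placeholders, standing in for "favorability to intelligent beings."

def merit(params):
    # Toy stand-in for a merit score: peaks at params = (1.0, 2.0, 3.0).
    return -sum((p - t) ** 2 for p, t in zip(params, (1.0, 2.0, 3.0)))

def is_local_optimum(params, step=0.01):
    """Return True if every small perturbation lowers the merit score."""
    base = merit(params)
    for i in range(len(params)):
        for delta in (+step, -step):
            perturbed = list(params)
            perturbed[i] += delta
            if merit(perturbed) > base:
                return False  # found an uphill direction: not at a peak
    return True

actual = (1.0, 2.0, 3.0)
print(is_local_optimum(actual))   # the toy "actual world" sits at a peak
```

The thought-experimental claim sketched in the text corresponds to asserting that the actual configuration passes such a check against all the hypothetical modifications one can work out, whereas perturbed configurations do not.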
Advocates of the anthropic theory of cosmology propose to reason essentially as follows: Given basic laws, what values must the constants of nature have in order to engender and facilitate (to possibilize and probabilify) the emergence of organic (and specifically humanoid) life? If these constants had values significantly different from the actual, then would not certain eventuations in cosmic evolution become impossible or vastly improbable?
However, the reasoning at issue with the presently envisioned cosmological optimalism is something quite different from such anthropocentrism. It runs along the following track: Given certain very fundamental principles (of conservation, economy, simplicity, quasi-symmetry, etc.), what form must the laws of nature take to engender and facilitate (possibilize and probabilify) the emergence of a self-sustaining and enduring cosmos—and in particular, one that can provide a congenial home for intelligent beings?
The idea is to establish that nature’s basic laws as we have them are—relative to certain very fundamental principles—optimally conducive to the realization of a viable universe. Specifically, optimalism has it that nature’s basic laws represent the optimal (most efficient and effective) solution to certain key problems—viz., those of realizing such factors of fundamental principle as: complexity; functional regularity and stability; coherent spatiotemporal order; uniformity/symmetry; economy; progressiveness in change; and the like. On this basis, it would have to transpire that the manifold of natural law as we have it represents the optimal (most elegant, effectively efficient) solution to the realization of the constraints inherent in certain very basic and general principles of nature-design. Along exactly these lines, some recent physicists such as Freeman Dyson have proposed that the fundamental laws of physics appear to be arranged to “make the universe as interesting as possible.”5 Werner Heisenberg held that the universe must exhibit elegance in its overall design, with a “proper conformity of all the parts to one another and to the whole.”6 A. Zee has it that the design of the universe, like that of a good Persian carpet, will exhibit “both unity and diversity, absolute perfection and boisterous dynamism, symmetry and lack of symmetry.”7 Paul Davies muses that “there may be a strict mathematical sense in which . . . the familiar laws of
physics form an optimal set.”8 There are many indications that a view is emerging in contemporary cosmology that the laws of physics represent the optimally efficient and elegant solution to the design of a universe subject to some fundamental principles of operation. And those physicists who operate under the aegis of this program are pretty much on the same page as Leibniz himself.9

7. PRODUCT OPTIMALISM VS. PROCESS OPTIMALISM

To be sure, optimalism is less a theory than a program which, as such, can take various forms. One consideration that is going to be important here lies in the distinction between product optimalism and process optimalism. For one can, in principle, seek to optimize two sorts of things, namely process and product. And these are significantly distinct:

• product optimalism, which looks to providing for the realization of the best possible result;

• process optimalism, which looks to providing for a process that maximizes the chances of achieving the best possible result.

In tennis, there is no certainty that the player with the better form will win the game. In warfare, there is no certainty that the better tactics will win the battle (an incompetent tactician’s eccentric comportment may discombobulate his more able opponent by oddball maneuvers). In wooing, there is no assurance that the more handsome, well-spoken, and mannerly swain will win the maiden. In general, there is no categorical guarantee that the better process will yield the better product. Now process-assessment addresses the merit of processes as such. By contrast, product-assessment concerns itself solely with the merit of the product. And these need not necessarily harmonize. A process optimalism is a doctrine to the effect not that what is best exists, but rather that what is for the best exists. And the two are not the same, since in some circumstances it may be that it is not for the best that what is the best exists.
The best process need not issue in the best product—let alone one that is perfect. (Unless, of course, we take the circular route of assessing process in terms of product!)
In sum, the sort of optimalism with which it makes sense to operate is no doubt the process rather than the product version of the doctrine. And in view of this, the problem of how evil can enter into the best possible world becomes tractable, exactly because the best possible in point of process need not (and really cannot) involve absolute optimality in point of product. Just as the ends do not justify the means, so the best means do not necessarily produce the best result: when things go wrong despite “the best laid plans”—and the best-managed execution as well—we speak of bad luck! And here is the irony of it: When the free agents he creates misuse their freedom, God himself has (the functional equivalent of) bad luck.

8. VARIANT MODES OF EXPLANATION

Optimalism shifts matters of ontological explanation at the fundamental level from the productivity of any sort of causality—be it of nature or of agents—to the possibility-elimination at issue with axiologically finalistic explanation. But accustomed as we are to explanations in the mode of efficient causality, this idea of a finalistic explanation by eliminating possibilities on the basis of evaluative considerations has a distinctly strange and unfamiliar air about it. Let us consider more closely how it is supposed to work. Possibilities—of whatever kind—need not and should not be rooted in the machinations of things. We must not attribute them to the operation of substances of some sort, or see them as the fruit of the productive efficacy of some existent or other. We must avoid taking the stance that the structure of possibility must root in an actuality of some type—that there is something that exerts a determinative agency in consequence of which real possibility is as it is. We can reject the “existentialist” thesis that possibility must be grounded in an actuality of some sort—or else modify it by taking the stance that the realm of possibility itself constitutes a (self-subsistent) actuality of sorts.
A protolaw accordingly does not root in the operations of pre-existing things. It should be conceived of as an autonomous principle conditioning the sphere of (real) possibility without being emplaced in an actuality of some sort. These protolaws are not reality-reflecting at all, but possibility-determinative. They reflect the fact that a field of possibility is prior to and grounds any physical field—that there must be "laws of possibility" before there can be the powers and dispositions that encapsulate the "laws of things," the "laws of nature" as ordinarily understood. The "possibility-space" that encompasses the realm of the possible is seen as having a particular character in view of which certain conditions must be met by any real possibility that it can accommodate—a character which is encapsulated in the protolaws. To put it very figuratively, these protolaws brood over the realm of the possible like the primal logos over the waters.

Some terminological distinctions will clarify the conceptual basis of the discussion.

____________________________________________________________

Display 1

MODES OF EXISTENCE EXPLANATION

I. Productive agency: efficient causation (a positive impetus to the realization of a certain result: outcome production).

   A. Causality of nature (via forces and powers)

   B. Agent causality (via motives and purposes)

II. Eliminative passivity: resistance barriers (within the manifold of possibility): a negative veto ruling certain outcomes out for realization (to speak figuratively).

NOTE: All three of these modes of ontological functioning can be either categorical or merely probabilistic.

____________________________________________________________

Display 1 depicts the lay of the land here, with I.B representing what one might call the teleology of actual agency, while II represents a negative teleology of outcome delimitation. These various modes of existence explanation represent decidedly different approaches.

The axiological explanation of existence inherent in a possibility-reductive optimalism requires us to qualify the principle ex nihilo nihil fit. Since the matter is not one of causal explanation there is no need for pre-existing causes. However, one can and should distinguish between nonexistence and nothingness, seeing that a domain devoid of things need not be one without laws (which, after all, are conditional and hypothetical in form). Possibility-excluding laws, after all, can obtain without rooting in the disposition of existing things.
In explaining why physical objects or occurrences exist we must indeed invoke others to serve as causes and effects. But the laws of nature and the conditions of its affairs that delineate the world's modus operandi themselves do not "exist" as causal products in any substantial way—they just obtain. (The law of inertia is not a property of something.) When a law does so obtain, there is a reason for its obtaining (an axiological reason, as we ourselves see it). But this reason is now provided by an explanatory principle that need not carry us into the order of efficient causality at all. And to insist upon asking how values are able to function causally in law-realization is simply to adopt an inappropriate model for the processes involved. Such axiological explanation just is not causal: values do not function in the order of efficient causality at all, and so the Law of Optimality does not yield its results via the mysterious attractive power of optimal possibilities. It obtains, rather, because suboptimal possibilities are excluded through a displacement by superior rivals preempting their place in virtue of a better standing in possibility space. For axiogenetic theory has it that even as the presence of light displaces darkness, so does the availability of better alternatives preclude the very possibility of any inferior so-called alternatives—without requiring the intervention of a productive agent or agency.

And so the fact that axiology does not provide such a causal explanation of existence is not an occasion for appropriate complaint. It does not stop value-explanations from qualifying as explanations. They present perfectly good answers to questions of the format: "Why is something-or-other so?" It is just that in relation to laws, values play only an explanatory role through possibility elimination and not through any causally productive role. And this is no defect, because a productive process is simply not called for.
And so, to inquire into how values operate causally in law-realization is simply to adopt an inappropriate model for the processes involved. Value explanation just is not causal: values do not function in the order of efficient causality at all.

Optimalism views the world—natural reality as we have it—as the actualization of certain specifically optimal possibilities. But it does not—and does not need to—regard value as a somehow efficient cause, a productive agency. On the contrary—value is not seen as productive at all, be it by personal agents or natural agency. Instead the impetus at issue is merely eliminative, in so functioning as to block the availability of inferior productions. It does not drive causal processes but only canalizes or delimits them by ruling certain theoretical (or logical) possibilities out of the realm of real possibility.
A possibility-exclusion that calls for the unavailability of alternatives roots in "the general modus operandi" of things without any reference to causal agency. Consider an analogy. Suppose that a society exhibits a suicide rate of 1.2 per 10,000 per annum during a certain era of its existence. No positive force is at work in constraining it to meet its quota of suicides—no identifiable cause engenders this aggregate result. And while it is effectively impossible to have a suicideless year, this lies in "the nature of things" generally and not in the potency of some suicide-impelling power or force. Again, more than 5% of the letters on the first page of tomorrow's Times newspaper will be E's. But no force or power compels this effect. And while it is literally impossible for no E's to occur there—"the nature of the situation" precludes this prospect—there is no force of attraction to constrain the presence of E's. It is inevitable that there be more E's than Z's, but this result is not the product of any power or force. It is not produced by some ad hoc force or agency or power—it is simply a feature of how things work in this context.

Only what already exists can exercise efficient causality on the world stage. But in theory at least a mere value or ideal that exists only extranaturally can exert a final causality in the absence of a pre-existing physical presence. This at least is how neo-Platonism saw it,10 and Aristotle had already taken much the same line. For as W. D. Ross rightly stresses, Aristotle's God is an efficient cause of natural existence by being its final cause—effectively by being the object of "desire."11 The Aristotelian tradition maintained throughout a distinction between efficient causation or production on the one hand and final causation or inspiration on the other.
And that of course is exactly what values do: they inspire by way of an attractive positivity rather than an impetus of repulsive negativity.

The overall story that must be narrated here runs somewhat as follows: Nature—physical reality as we have it—represents the actualization of certain possibilities. But underlying this existential condition of affairs is the operation of a prior sub- or metaphysical principle, operative within the wider domain of logical possibility, and dividing this domain into disjoint sectors of "real" and "purely theoretical" possibility. To put it very figuratively, logical possibilities are involved in a virtual struggle for existence in which the axiologically best win out so as to become real possibilities. Specifically, when there are (mutually exclusive) alternatives that are possible "in theory," nevertheless none will be a "real" or "ontological" possibility for realization as actual or as true if some other alternative is superior to it. The availability of a better alternative disqualifies its inferiors from qualifying as ontologically available—as real, that is, metaphysical, possibilities. And so whenever there is a uniquely best alternative, then this alternative is ipso facto realized as actual or true.

After all, questions like "Why is there anything at all?", "Why are things-in-general as they actually are?", and "Why is the law structure of the world as it is?" cannot be answered within the standard causal framework. For causal explanations need inputs: they are essentially transformational (rather than formational pure and simple). They can address themselves to specific issues distributively and seriatim, but not collectively and holistically. The unorthodoxy at work here lies in the very nature of the question being addressed. If we persist in posing the sorts of global questions at issue, we cannot reasonably hope to resolve them in terms of orthodox causality.

9. OPTIMALISM IS A MATTER OF THE FINALITY OF VALUE

Optimalism relies crucially on the Aristotelian dialectic between efficient and final causality: between the explanatory mode of production and that of exclusionary limitation. Either way, optimality continues to operate in the nomic order of laws. However, the laws at issue are not causal with respect to the activities and processes of things, but exclusionary with regard to possibilities. The laws at issue have a characteristic nature. Possibilities can relate to one another either by way of affinity or repulsion—that is, the realization of one possibility can either require or preclude that of another. Laws of nature take the former, requiring route: if A were to happen, then B must happen. Laws of possibility reduction, by contrast, are preclusive: if A were to happen, then B is ruled out.
Laws of the former type are causal: one state of things demands another; the issue is one of productive inclusion. Laws of normativity, by contrast, are reductive: one possibility impedes another.

The approach to existence explanation that has been sketched here rests on adopting what might be called an axiogenetic optimality principle, to the effect that value represents a decisive advantage in ruling out certain possibilities for realization. The modus operandi at issue here has it that, in the end, it is the comparatively best that is to prevail.12 Inferior possibilities are possibilities all right, but are, in the circumstances, destined through their
comparative unworthiness to go unrealized in the way of actual outcome. In sum, a Law of Optimality prevails; value (of a suitable—and ultimately intelligence-coordinated—sort) enjoys an existential impetus, so that it lies in the nomic nature of things that inferior alternatives will drop out by virtue of this condition.

Such an optimalism is certainly a teleological theory: it holds that possibility's modus operandi effectively manifests an inherent tropism towards a certain end or telos, namely the forging of circumstances favorable to the flourishing of intelligence. We confront a doctrine of "final causes" in Aristotle's sense. However, an approach along these lines is emphatically not a causal theory in the nowadays standard sense of efficient causation: neither is a causality of determination by nature at issue, nor yet a causality of determination by agents. Rather, it here transpires that whenever there is a plurality of alternative possibilities competing for realization in point of truth or of existence, only an optimal possibility is realistically in a position to win out. A perfectly non-occult process is to be seen at work. As water gravitates downwards, so reality gravitates valuewards—as simply a "law of nature" as it were. But the ultimate result is that things exist, and exist as they do, specifically because this is optimific for the flourishing of intelligence.

10. OPTIMALISM IS NOT A MATTER OF THE CAUSALITY OF PURPOSE

From the angle of explanation, such a finalistic eliminativeness of value enjoys substantial advantages over purpose. To be sure, both represent modes of final rather than efficient causation, since in both cases we deal with tendencies towards the realization of some prespecifiable condition of things. But these two forms of teleology are altogether distinct. The former explains regularities in terms of their conduciveness to some purposive agent's aims and objectives ("he never mixes business with pleasure").
The latter explains them through a generic factor such as efficiency or economy that exerts a selectively eliminative impetus. Accordingly, the axiological explanation of laws is a matter of nomological constraint based on values, and not a matter of efficient causality at all. In this regard it is a "causality" in name only.

And the distinction between values and purposes is crucial here. After all, a purpose must be somebody's purpose: it must have some intelligent agent as its owner-operator. For purposes as such, to be is to be adopted:
they cannot exist in splendid isolation. Purposive explanations operate in terms of why conscious agents do things, and not ones of why impersonal conditions obtain. A value, however, is something quite different, something that can be altogether impersonal. When something has value, this does not require that somebody actually values it (any more than being a fact requires that somebody actually realizes it). A person can certainly hold a certain value dear, but if it indeed is a value, then its status as such is no more dependent on its actually being valued than the symmetry of a landscape depends on its actually being discerned. Values admit of being prized, but that does not mean that they actually are, any more than a task's being difficult means that anyone actually attempts it. To be of value is to deserve to be valued, but that of course need not actually happen: the value of things can be underestimated or overestimated or totally overlooked. Something can be of value—can have value—without being valued by anybody—not even God. (To be sure, it must be valuable for something or other, but it need not be valued by somebody; in principle clean air can be valuable for mammals without being valued by any of them.) And this holds in particular for "ontological" values like economy, simplicity, regularity, uniformity, etc., that figure in the axiological explanation of laws.

Accordingly, the objection inherent in the question "But how can value of any sort—intelligence-coordinated or other—possibly exert a causally productive influence?" can just be put aside. For optimalism's answer to this otherwise excellent question is simply that it does not. The causal question "How do values operate productively so as to bring particular laws to actualization?" is simply inappropriate in the axiological setting. Values don't "operate" in the causal order at all. They operate by possibility exclusion.
For they function only—and quite non-efficiently—as constraints within the manifold of possibility. The issue of a specifically causal efficacy just does not arise with axiological explanation. What value conditions do is not to create anything (i.e., productively engender its realization). Their modus operandi is not causal but modal: their role is to block or preclude certain theoretically conceivable possibilities from qualifying as ontological (potentially achievable) possibilities.

On this approach we must distinguish between the quasi-reality of being (i.e., somehow being there) and the full-blooded reality of actual existence (i.e., realization as a part of physical reality). For we here accept the idea of a manifold of potentiality, a "possibility field" as it were, that has being
anterior to and independently of any actual physical existence. And it is the essentially exclusionary operation of this possibility field which—through the ongoing elimination of conceivable possibilities that (being mutually inconsistent) exclude one another—ultimately gives rise to one single (nomically—i.e., physically or axiologically—necessary) possibility which, now no longer blocked by excluding rivals, accordingly achieves actual existence. It is thus the proto-laws which govern that possibility field which thereby provide for the "explanation" of existence through a process whose character is not one of efficient causality but approximates to the sort of thing classically at issue with final causality.13

Even as a physical universe could in theory consist of an electromagnetic field without any objects exerting electromagnetic force, or indeed even a gravitational field without any material objects, so there might, at least in theory, be a possibility field devoid of any actualities: a realm of diverse and rival possibilities whose reciprocal discord impedes actualization. And the proto-laws of such a manifold could provide for a quasi-Darwinian struggle for the survival of the fittest which ultimately sees only one alternative—the optimal—as surviving to realization.

But since what is possible is a matter of mere logic, and what is best is a matter simply of concept-specification, does optimalism not mean that what exists is so determined with logico-conceptual necessitation? What would prevent this view of world-actualization from resulting in a block universe of absolute necessitation? Only this—that the constitution of that real possibility field is itself contingent. Its arrangements too are for the best, perhaps. But on what does this depend—how does the possibility manifold get constituted?
Certainly not by chance, but by a second-order optimalism that launches us on an infinite regress along the lines already contemplated by Leibniz when he insisted that radix contingentiae est in infinitum. In effect, we here contemplate a tripartite hierarchy of (increasingly substantive) possibilities—logical, ontological, and physical—subject to the controlling norms of logic, of axiology, and of physics, respectively. It is thus at the middle level of ontological possibilities that axiology does its work. And here metaphysical possibility should be seen as reflecting the most fundamental laws of nature, the most basic of which would emerge as invariant with respect to those metaphysical possibilities. In this way, the complaint "How can values possibly operate causally?!" is sidelined, simply because it confuses axiological explanation with productively efficient explanation.
And so the fact that axiology does not provide an explanation in the order of efficient causality—be it causality of nature or of process—is not an occasion for appropriate complaint. It does not stop value explanations from being explanations. They present perfectly good answers to "Why is something-or-other so?" type questions. It is just that in relation to laws, values play only an explanatory role through possibility elimination and not a causally productive role through actual creation. For with axiogenetic explanation the world is, in a certain sense, "constrained by value." But this is nowise a matter of absolute necessitation but merely one of axiological delimitation. This sort of thing is a constraint without onus, freed from the negativities to which the block-universe doctrine of causal necessitarian determination is subject.

Granted—if there is an explanation of everything (Principle of Sufficient Reason) then there will have to be an explanation of why the boundaries of real possibility are set where they are within the manifold of theoretical possibility. But as optimalism sees it, the boundary is set by considerations of value—of what's for the best. After all, if an infinite regress is not to block explanatory understanding, we will have to come to a stop with something self-explanatory. And an optimalistic line of explanation that explains matters in terms of what's for the best has the merit that if you press the question of why this should be so, an answer is forthcoming in its own terms—namely, because that's for the best.

11. AXIOGENESIS AS AN IDEALISTIC NATURALISM

A recourse to value has no quarrel with naturalism. Since it is values rather than purposes that function in axiological explanation, these explanations can be entirely impersonal. The values at issue in the determination of optimality need not—and almost certainly will not—be a matter of human wishes, decisions, and aspirations.
They will be naturalistic in nature—bearing upon the inherent rationality of natural order rather than the contingent decisions of rational beings. We need not commit the pathetic fallacy of personalizing matters here by invoking the mediation of agents. The merits at issue with optimalism are impersonal and "natural" in relating to the physical and metaphysical status of potential existents. For there is no good reason why an axiology could not take the form of a value naturalism—and very good reason why it should do so. A viable optimalism may not be science-assured, but it should certainly be science-congenial.
And so, given that it is values rather than purposes that function in axiological explanation, these explanations can be entirely impersonal. Values here function directly through value-coordinated laws rather than through the mediation of agents and their pursuit of purpose. The idea is simply that the natural system in question is value-tropic (as it were) in that it inherently tends to realize certain value-endowed conditions (maintaining stability, achieving symmetry, prolonging longevity, operating efficiently, etc.). But, of course, the system that comports itself in this way need not overtly hold or espouse such a value. Though its modus operandi establishes commitment to a certain value, nature need not "seek" value any more than water need "seek" its own level. We need not anthropomorphize here, even as a claim of end-directed transactions in the world ("Nature abhors a vacuum") is without any implications about a purposively operating mind. A system can be goal-directed through its inherent natural "programming" (e.g., heliotropism or homeostasis) without any admixture of purpose, even as a conservation of energy principle need not be held on the basis of nature's "seeking" to conserve energy.

For there is no good reason why an axiology cannot or should not take the form of a value naturalism. On the contrary. To implement the principle of axiology by way of personification would be self-defeating, since we ideally want to explain existence in a way that is self-sustaining (self-contained, "ultimate"). An axiological principle need not and should not be thought of as super-natural—as a power or agency outside nature and somehow acting upon it. It should, instead, be thought of as internal to nature—as a force or agency acting within it to bring (or tend to bring) certain sorts of results into being. The axiology at issue should thus be seen as naturalistic.
The values involved encompass factors like stability, symmetry, continuity, complexity, order, and even a dynamic impetus to the development of "higher" forms possessed of more sophisticated capabilities—perhaps even a sort of Hegelian impetus toward the evolutionary emergence of a creature possessed of an intelligence able to comprehend and appreciate the universe itself, creating a conscious reduplication model of the universe in the realm of thought through the artifice of intelligence.

It might seem at first thought that a reality which emerges under the aegis of physico-metaphysical values is coldbloodedly indifferent to the welfare of its living population. But this is not the case. For the rational order that emerges is bound to be congenial to the creatures—and especially the intelligent creatures—that evolve within it.
(What we have here is a position that is a hybrid crossing of Leibniz and Darwin.) The salient point is that the regress of explanatory principles must have a stop, and that it is here—with axiology—that we reach a natural terminus by way of self-explanation. The long and short of it is that axio-ontology can be autonomous and nomically self-sufficient: it does not need to be seen as based in the operative power of some productive force or agency.

The present discussion does not undertake the daunting task of endeavoring to argue the truth of optimalism as an account of axiological cosmogenesis. Its task is the more modest one of maintaining that the theory can afford what William James called a live option. The prime aim is accordingly not so much to argue that the theory is true—which only the future of science could reveal—but rather to show that it can be given a form sufficiently attuned to present-day perspectives and reasonabilities as to deserve consideration as a serious possibility.14

NOTES

1. Optimalism is closely related to optimism. The optimist holds that "Whatever exists is for the best," the optimalist maintains the converse, that "Whatever is for the best exists." However, when we are dealing with exclusive and exhaustive alternatives the two theses come to the same thing. When one of the alternatives A1, A2, . . ., An must be the case, then if what is realized is for the best it follows automatically that the best is realized (and conversely).
2. After all, there is no reason of logico-theoretical principle why propositions cannot be self-certifying. Nothing vicious need be involved in self-substantiation. Think of "Some statements are true" or "There are statements that state a particular rather than universal claim."
3. The preceding discussion draws on "Optimalism and Axiological Metaphysics," The Review of Metaphysics, vol. 53 (2000), pp. 807-35. It served as my March 2005 presidential address to the Metaphysical Society of America. On relevant issues see also my books The Riddle of Existence (Lanham, MD: University Press of America, 1984) and Nature and Understanding (Oxford: Clarendon Press, 2000).
4. On the increasing replacement in contemporary science of laboratory experimentation by thought experimentation, see the chapter "On Overdoing Thought Experimentation" in the present author's What If? (New Brunswick, NJ: Transaction Publishers, 2005).
5. Freeman Dyson, quoted in John Horgan, The End of Science (Reading, Mass.: Addison-Wesley, 1996), p. 252.
6. Werner Heisenberg, quoted in Mario Livio, The Accelerating Universe (New York: John Wiley, 2000).
7. Anthony Zee, Fearful Symmetry (Princeton: Princeton University Press, 1999), p. 211.
8. Paul Davies, "Teleology Without Teleology," in P. Clayton and A. Peacocke (eds.), In Whom We Live and Move and Have Our Being (Grand Rapids: Wm. B. Eerdmans, 2004), p. 104.
9. See Julian Barbour and Lee Smolin, "Extremal Variety and the Foundation of a Cosmological Quantum Theory."
10. See Proclus, Commentary on the Timaeus, ed. Diehl (Frome, Somerset: Prometheus Trust, 1998), Vol. I, pp. 266-67.
11. See W. D. Ross on Metaphysics 1072a26-27.
12. The prime spokesman for this sort of theorizing within the Western philosophical tradition was G. W. Leibniz. A major present-day exponent is the Canadian philosopher John Leslie. See John Leslie, "The World's Necessary Existence," International Journal for Philosophy of Religion, vol. 11 (1980), pp. 297-329; "Efforts to Explain All Existence," Mind, vol. 87 (1978), pp. 181-197; "The Theory that the World Exists Because It Should," American Philosophical Quarterly, vol. 7 (1970), pp. 286-298; and "Anthropic Principle, World Ensemble, Design," American Philosophical Quarterly, vol. 19 (1982), pp. 141-151; as well as his book Value and Existence (Totowa, NJ: Rowman & Littlefield, 1979).
13. It must be stressed that the sort of nonlogical (or axiological) "necessitation" at issue with the proto-lawful, axiologically geared machinations that reduce the possibility field to a single existence-determinative outcome is perfectly compatible with contingency in the wider (logico-conceptual) construal of the term.
14. For further deliberations relevant to this paper's themes see the author's Studies in Philosophical Optimalism (Frankfurt: Ontos Verlag, 2006).
Chapter 2

ON EVOLUTION AND INTELLIGENT DESIGN

1. BEING INTELLIGENTLY DESIGNED
This discussion aims at two principal points: (1) that there is a decided difference between being designed intelligently and being designed by intelligence, and (2) that evolution, broadly understood, is in principle a developmental process through which the former feature—being designed intelligently—can actually be realized. The conjoining of these items means that, rather than there being a conflict or opposition between evolution and intelligent design, evolution itself can be conceived of as an instrumentality of intelligent design.

To be intelligently designed is to be constituted in the way an intelligent being would arrange it. To this end, it need certainly not be claimed that an intelligent being did do so. Being intelligently designed no more requires an intelligent designer than being designed awkwardly requires an awkward one. At bottom, intelligent design is a matter of efficiency and effectiveness in goal realization.

But what then when the entire universe is at issue? How are we then to conceive of this matter of aims and goals? The crux at issue here is not afforded by the question "Does the universe have a goal?" but rather by the subtler, purely conditional and strictly hypothetical question: "If we are to think of the universe as having a goal, then what could this reasonably be taken to be?" The issue here is one of a figuratively virtual rather than an actually literal goal.

So to begin with we must ask whether or not it is reasonable to expect an intelligent agent or agency to produce a certain particular result. Clearly and obviously, this issue will depend on the aims and purposes this agent or agency could reasonably be expected to have. And this leads to the question: What is it that one could reasonably expect regarding the productive aims and purposes of an intelligent agent or agency? Now what would obviously have pride of place in the evaluative pantheon of such an intelligence is intelligence itself. Surely nothing has
higher value for an intelligent being than intelligence itself, and there is little that would be worse for a being than "losing its reason." Intelligence and rationality constitute the paramount value for any rational creature: a rational being would rather lose its right arm than lose its reason. But of course a rational being will only value something it regards as having value; it would not value something that it did not deem valuable. It will thus only value rationality in itself if it sees rationality as such as a thing of value. And so in valuing their rationality, truly rational creatures are bound to value rationality in general—wherever it may be found. The result of this will be a reciprocal recognizance among rational beings—as such they are bound to see themselves as the justly proud bearers of a resource of special value.

Accordingly, the only response to the question of a goal for world-development that has a scintilla of plausibility would have to take the essentially Hegelian line of locating the crux of intelligent design in the very factor of intelligence itself. Implementing this idea calls for locating the "virtual" goal of the universe in its providing for the development of intelligent beings able to achieve some understanding of its own ways and operations. One would accordingly inquire whether the world's nature and modus operandi are so constituted as to lead with efficiency and effectiveness to the emergence of intelligent beings. Put in technical jargon, the question is: Is the universe noophelic in favoring the interests of intelligence in the course of its development?

A positive response here has deep roots in classical antiquity—originally in Plato and Aristotle, and subsequently in the Aristotelianizing neo-Platonism of Plotinus and Proclus.
And it emerges when two ancient ideas are put into juxtaposition—first, that it is love that makes the world go ‘round, and second, that such love is a matter of understanding, so that its crux lies in an amor intellectualis of sorts.1 On this perspective, self-understanding—the appreciation through intelligence of intelligence—would be seen as the definitive aim and telos of nature’s ongoing self-development. Such a position is, in effect, that of an updated neo-Platonism. And it represents a tendency of thought that still has potential relevancy.
2. INTELLIGENCE WITHIN NATURE: NATURE’S NOOPHELIA
From this perspective, intelligent design calls for the prospering of intelligence in the world’s scheme of things. But just what would this involve?
ON EVOLUTION AND INTELLIGENT DESIGN
Of course the emergence of living organisms is a crucial factor here. And an organically viable environment—to say nothing of a cognitively knowable one—must incorporate orderly experientiable structures. There must be regular patterns of occurrence in nature that even simple, single-celled creatures can embody in their make-up and reflect in their operations. Even the humblest organisms—snails, say, and even algae—must so operate that certain types of stimuli (patterns of recurrently discernible impacts) call forth appropriately corresponding types of response—that such organisms can “detect” a structured pattern in their natural environment and react to it in a way that proves to their advantage in evolutionary terms. Even nature’s simplest creatures can maintain themselves in existence only by swimming in a sea of regularities of exactly the sort that would be readily detectable by intelligence. And so nature must cooperate with intelligence in a certain very particular way—it must be stable enough and regular enough and structured enough for there to be appropriate responses to natural events that can be “learned” by creatures. If such “appropriate responses” are to develop, nature must provide suitable stimuli in a duly structured way. Nature must thus present us with an environment that affords sufficiently stable patterns to make coherent “experience” possible, enabling us to derive appropriate information from our structured interactions with the environment. Accordingly, a world in which any form of intelligence evolves will have to be a world whose processes bring grist to the mill of intelligence. To reemphasize: A world in which intelligent creatures emerge in a natural and efficient way through the operation of evolutionary processes must be a substantially intelligible world. But there is another side to it above and beyond intelligible order.
For the world must also be varied and diversified—it cannot be so bland and monotone that the stimulation of the sort of challenge-and-response process required for evolution is not forthcoming. All in all, then, a universe with intelligent creatures must be intelligence-congenial: it must be just the sort of universe that an intelligent creature would—if it could—endeavor to contrive, a universe that is intelligently designed with a view to the existence and flourishing of intelligent beings. In sum, then, a complex world with organisms that develop by natural selection is going to be such that intelligent beings are likely to emerge, even as a world which permits the emergence of intelligent beings through natural selective success is going to be an intelligently designed world. Accordingly, four facts speak most prominently on behalf of a noophelic cosmos:
• the fact that the world’s realities proceed and develop under the aegis of natural laws: that it is a manifold of lawful order whose doings exhibit a self-perpetuating stability of processual function.
• the fact of a course of cosmic development that has seen an ever-growing scope for manifolds of lawful order, providing step by step the materials for the development of increasingly complex laws of nature: of physics, then of chemistry, then of biology, then of sociology, etc.
• the fact that intelligent beings have in fact emerged—that nature’s modus operandi has possibilized and facilitated the emergence of intelligence.
• the fact of an ever-deepening comprehension/penetration of nature’s ways on the part of intelligent beings—their ongoing expansion and deepening of their understanding of the world’s events and processes.
And so, the key that unlocks all of these large explanatory issues regarding the nature of the world is the very presence of intelligent beings upon its stage. For if intelligence is to emerge in a world by evolutionary means, it becomes a requisite that that world must be substantially intelligible. It must comport itself in a way that intelligent beings can grasp, and thereby function in a way that is substantially regular, orderly, economical, rational. In sum, it must be the sort of world that intelligent beings would contrive if they themselves were world contrivers, so that the world must be “as though” it were the product of an intelligent agent or agency—although there is no way to take the iffiness of that “as though” out of it. In the event, then, evolutionary noophelia is a position for which there is a plausible basis of evidential substantiation. A world in which intelligence emerges by anything like standard evolutionary processes must be a realm pervaded by regularities and periodicities regarding organism-nature interaction that produce and perpetuate organic species.
And so, to possibilize the evolutionary emergence of intelligent beings the universe must afford a manifold of lawful order that makes it a cosmos rather than a chaos. Intelligence too needs its nourishment. In a world without significantly diversified phenomena intelligent creatures would lack opportunities for
development. If their lifespan is too short, they cannot learn. If too long, there is too slow a pace of generational turnover for effective development—a sort of cognitive arteriosclerosis. Accordingly, nature’s own contribution to the issue of the intelligibility of nature has to be the possession of a relatively simple, uniform, and systematic law structure with regard to its processes—one that deploys so uncomplicated a set of regularities that even a community of inquirers possessed of only rather modest capabilities can be expected to achieve a fairly good grasp of significant parts of it. On this line of deliberation, then, nature admits cognitive access not just because it has laws (is a cosmos), but because it has relatively simple laws, and those relatively simple laws must be there because if they were not, then nature just could not afford a viable environment for intelligent life. But how might an intelligence-friendly, noophelic world come about? At this point evolution comes upon the stage of deliberation. In order to emerge to prominence through evolution, intelligence must give an “evolutionary edge” to its possessors. The world must encapsulate straightforwardly “learnable” patterns and periodicities of occurrence in its operations—relatively simple laws. A world that is too anarchic or chaotic for reason to get a firm grasp on the modus operandi of things will be a world in which intelligent beings cannot emerge through the operations of evolutionary mechanisms. In a world that is not substantially lawful they cannot emerge. In a world whose law structure is not in many ways rather simple they cannot function effectively. There are many ways in which an organic species can endure through achieving survival across generations—the multiplicity of sea turtles, the speed of gazelles, the hardness of tortoise shells, and the simplicity of micro-organisms all afford examples.
But among these survival strategies intelligence—the resource of intelligent beings—is an adaptive instrumentality of potent and indeed potentially optimal efficacy and effectiveness. So in a universe that is sufficiently fertile and complex, the emergence of intelligent beings can be seen as something that is “only natural” under the pressure of evolutionary processes. After all, if intelligence is to emerge in a world by straightforward evolutionary means, it becomes a requisite that that world must be substantially intelligible. It must comport itself in a way that intelligent beings can not only grasp but deploy to their survivalistic benefit, and must thereby function in a way that is substantially regular, orderly, economical, rational. It must thus be the sort of world that intelligent beings would contrive if they themselves were
world contrivers, so that the world must be “as if” it were the product of an intelligent agent or agency. And so what evolution by natural selection does is to take some of the magic out of intelligence—to help de-mystify the presence of intelligence in the cosmos. It is no more surprising that nature provides grist for the mind than that it provides food for the body. But it manages to do this precisely to the extent that it itself qualifies as an intelligently construed instrumentality for the realization of intelligence.
3. THE REVERSE SIDE: NATURE’S NOOTROPISM
But beyond the issue of the evolution OF intelligence there is also that of intelligence IN evolution. The question from which we set out was: Is the world so constituted that its natural development leads with effectiveness and efficacy to the emergence of intelligent beings able to achieve some understanding of its modus operandi? And the answer to this question as we have envisioned it lies in the consideration that a world in which intelligent creatures emerge through evolutionary means—as ours actually seems to be—is pretty much bound to be so constituted. A universe designed by an intelligent being would accordingly be a universe designed for intelligent beings and thus be user-friendly for intelligent beings. Their very rationality requires rational beings to see themselves as members of a confraternity of a special and particularly worthy kind. But what about rationality in nature? One would certainly expect on general principles that nature’s processes should proceed in a maximally effective way—with nature, on the whole and with everything considered, comporting itself intelligently, subject to considerations of what might be characterized as a rational economy of effort. And so, with rationality understood as being a matter of the intelligent management of appropriate proceedings, we would view nature as a fundamentally rational system.
However, our expectation of such processual rationality is not based on personifying nature, but rather—to the contrary—on naturalizing intelligence. For to say that nature comports itself intelligently is not so much to model nature in our image as it is to position ourselves within the manifold of process that is natural to nature itself. Here there is no projection of our intelligence into nature, but rather an envisioning of a (minute) manifestation of nature’s intelligence in ourselves. Nature’s nootropism is thus to be seen as perfectly naturalistic—an aspect of its inherent
modus operandi. For in seeing its workings proceed as though intelligent agency were at work, we do not so much conceive of nature in our terms of reference as conceive of ourselves as natural products of the fundamentally rational comportment of nature. Our rationality, insofar as we possess it, is simply an inherent part of nature’s ratio-tropism, so that the result is not an anthropomorphism of nature but rather a naturo-morphism of man. When desirable outcomes of extremely small probability are produced with undue frequency, we can count on it that some sort of cheating is going on.2 And on this basis it would appear that nature “cheats” by exhibiting a favorable bias towards the interests of intelligence—by functioning so as to render an intelligence-favorable result more probable than would otherwise be the case. Indeed, noophelia figures among rationality’s most basic commitments and among the most striking features of nature’s modus operandi.
4. A NATURALISTIC TELEOLOGY
It would be a profound error to oppose evolution to intelligent design—to see them as somehow conflicting and incompatible. For natural selection—the survival of forms better able to realize self-replication in the face of challenges and overcome the difficulties posed by the world’s vicissitudes—affords an effective means of establishing intelligent resolutions. (It is no accident that whales and sophisticated computer-designed submarines share much the same physical configuration, or that the age of iron succeeded that of bronze.) The process of natural selection at work in the unfolding of biological evolution is replicated in the rational selection we encounter throughout the history of human artifice. On either side, evolution reflects the capacity to overcome obstacles and resolve problems in the direction of greater efficiency and effectiveness.
Selective evolutionary pressures—alike in natural (biological) and rational (cultural) selection—are thus instrumentalities that move the developmental course of things in ways selective of increasing rationality. Yet why should it be that the universe is so constituted as to permit the emergence of intelligence? Three possible answers to the problem of nature’s user-friendliness toward intelligence suggest themselves:
• The universe is itself the product of the creative agency of an intelligent being who, as such, will of course favor the interests of intelligence.
• Our universe is simply one item within a vast megaverse of alternatives—and it just so happens (fortuitously, as it were) that the universe that we ourselves inhabit is one that exhibits intelligent design and intelligence-friendliness.
• Any manifold able to constitute a universe that is self-propagating and self-perpetuating over time is bound to develop in due course in the direction of an intelligence-favoring dimension. The same sort of selective developmental pressures that make for the emergence of intelligent beings IN the universe make for the emergence of an intelligent design OF the universe.
Note that the first and the last of these prospects are perfectly compatible, though both explanations would be incompatible with the middle alternative, whose bizarre character marks its status as that of a decidedly desperate recourse. To be sure, if the world is intelligently designed there yet remains the pivotal question: how did it get that way? And at this point there comes a forking of the way into two available routes, namely: by natural means or by super- or supra-natural means. There is nothing about intelligent design as such that constrains the one route or the other. Intelligent design does not require or presuppose an intelligent designer—any more than an oddly designed reality would require an odd designer. A naturally emerging object need not—will not of necessity—be made into an artifact by its possession of a feature that artifice might also produce. Being intelligently designed no more demands an intelligent designer than being harmoniously arranged requires a harmonious arranger or being spatially extended requires a spatial extender. Against this background it would appear that there is thus nothing mystical about a revivified neo-Platonism. It is strictly geared to nature’s modus operandi. Insofar as teleology is at work, it is a naturalistic teleology.
Here many participants in the debates about intelligent design get things badly confused. Deeply immersed in a theism-antipathetic odium theologicum they think that divine creation is the only pathway to intelligent design and thereby feel impelled to reject the idea of an intelligently designed universe in order to keep God out of it. They think that intelligent design can only come to realization through the intermediation of an intelligently designing creator. But this view sees matters askew. A perfectly
natural impetus to harmonious coordination could perfectly well issue in an intelligently designed result. And so could the natural selection inherent in some macro-evolutionary process. The hypothetical and conditional character of the present line of reasoning must be acknowledged. It does no more than maintain the purely conditional thesis that if intelligent creatures are going to emerge in the world by evolutionary processes, then the world must be ratiophile, so to speak—that is, user-friendly for rational intelligences. It is not, of course, being argued that the world must contain intelligent beings by virtue of some sort of transcendental necessity. Rather, a conditional situation—if intelligence-containing, then intelligible—is quite sufficient for present purposes. For the question we face is why we intelligent creatures present on the world’s stage should be able to understand its operations in significant measure. And the conditional story described above fully suffices to accomplish the particular job in view of linking evolution and intelligent design.
5. DERAILING WASTAGE AS AN OBJECTION TO EVOLVED DESIGN
To be sure there can be objections. One of them runs as follows: “Is evolution by variation and survivalistic selection not an enormously wasteful mode of operation? And is it not cumbersome and much too slow?” Does this sort of proceeding not rule intelligence out of it? Not really. For where the objector complains of wastage here, a more generous spirit might see a Leibnizian Principle of Fertility at work that gives a wide variety of life forms their chance for a moment in the limelight. (Perhaps the objector wouldn’t think much of being a dinosaur, but then many is the small child who wouldn’t agree.) And anyway, perhaps it is better to be a microbe than a Wasn’t that just Isn’t—to invoke Dr. Seuss. Or again, one person’s wastage is another’s fertility—to invoke Leibniz.
But what of all that suffering that falls to the lot of organic existence? Perhaps it is just collateral damage in the cosmic struggle towards intelligent life. But this is neither the place nor the time to produce a theodicy and address the theological Problem of Evil. The salient point is simply that the Wastage Objection is not automatically telling and that various lines of reply are available to deflect its impact.
Now on to slowness. Surely the proper response to the lethargy objection is to ask: What’s the rush? In relation to a virtually infinite vastness of time, any finite initial timespan is but an instant. Of course there must be time enough for evolutionary processes to work out. There must be sufficiency. But nothing patent is achieved by minimality unless there is some mysterious reason why this particular benefit—an economy of time—should be prioritized over desiderata such as variety, fertility, or the like. Surely what matters is a complex of such desiderata rather than any single one of them in isolation.
6. INTELLIGENT DESIGN DOES NOT REQUIRE ABSOLUTE PERFECTION
Yet another line of objection arises along the following lines: “Does not reality’s all too evident imperfection constitute a decisive roadblock to intelligent design? For if optimal alternatives were always realized, would not the world be altogether perfect in every regard?” By no means! After all, the best achievable result for a whole will, under various realistic conditions, require a less-than-perfect outcome for the parts. A game with multiple participants cannot be won by every one of them. A society of many members cannot put each of them at the top of the heap. In an engaging and suspenseful plot things cannot go with unalloyed smoothness for each and every character. Moreover, there are generally multiple parameters of positivity that function competitively, so that some can only be enhanced at the cost of others—even as to make a car speedier we must sacrifice operating economy. With an automobile, the parameters of merit clearly include such factors as speed, reliability, repair infrequency, safety, operating economy, aesthetic appearance, road-handling ability. But in actual practice such features are interrelated. It is unavoidable that they trade off against one another: more of A means less of B.
It would be ridiculous to have a supersafe car with a maximum speed of two miles per hour. It would be ridiculous to have a car that is inexpensive to operate but spends three-fourths of the time in a repair shop. Invariably, perfection—an all-at-once maximization of every value dimension—is unrealizable because of the inherent interaction of evaluative parameters. In designing a car you cannot maximize both safety and economy of operation, and analogously, the world is not, and cannot possibly be, absolutely perfect—perfect in every
respect—because this sort of absolute perfection is in principle impossible of realization. In the context of multiple and potentially competing parameters of merit, the idea of an all-at-once maximization has to give way to an on-balance optimization. The interactive complexity of value is crucial here. For it is the fundamental fact of axiology that every object has a plurality of evaluative features, some of which will in some respects stand in conflict. Absolute perfection becomes in-principle infeasible. For what we have here is a relation of competition and tradeoff among modes of merit akin to the complementarity relation of quantum physics. The holistic and systemic optimality of a complex whole will require some of its constituent components to fall short of what would be optimal for them if considered in detached isolation. This suffices to sideline the objection: “If intelligent design obtains, why isn’t the world absolutely perfect?”
7. CONCLUSION
The present discussion has argued that evolution is not at odds with intelligent design, because the efficiency-tropism inherent in the modus operandi of evolutionary development actually renders it likely to issue in an intelligently designed product. Accordingly, evolution should not be seen as the antithesis of intelligent design. Nor yet is it inimical to a theology of an intelligent designer. In arranging for a developmental pathway to an intelligently designed world a benign creator could well opt for an evolutionary process. So in the end evolution and intelligent design need not be seen as antagonistic. In closing it must be stressed that while noophelia can be entirely naturalistic, it is nevertheless altogether congenial to theism.
To be sure, there is no reason of necessity why a universe that is intelligently designed as user-friendly for intelligent beings must be the result of the agency of an intelligent being—any more than a universe that is clumsily designed for accommodating clumsy beings would have to be the creative product of a clumsy being. But while this is so, nevertheless such a universe is altogether harmonious with theistic cosmogony. After all, an intelligently construed universe is altogether consonant with a cosmogony of divine creation. And so: noophelia is not only compatible with but actually congenial to theism. After all, one cannot but think that the well-being of its intelligent creatures will rank high in the value-scheme of a benign creator. As should
really be the case in general, approaches based on the study of nature and the reflections of theology can here be brought into alignment.3
APPENDIX
The most eloquent exponent of nootropism is Teilhard de Chardin. Whether the evolutionary emergence of what he calls the noosphere will go so far as to reach the ultimate “omega state” that he envisions could be seen as speculative and eschatological. Yet the fundamental process of ratiotropic evolution that he envisions is there for all to see presently, irrespective of how far they may be prepared to venture into its speculative projection into a yet uncertain future. While in their detail the present deliberations differ substantially from those of Teilhard, nevertheless their tendency and motivating spirit are unquestionably akin to his.
NOTES
1. The neo-Platonists Plotinus and Proclus differentiated natural (phusikôs) love (amor naturalis) and psychic (psychikôs) love (amor sensitivus) from intellectual love: erôs noerôs (amor intellectualis or rationalis). In the end their rendition of the Aristotelian idea that “love makes the world go ‘round” comes down to having a world developed conformably and sympathetically to the demands of the intellect in relation to intelligibility.
2. See William S. Dembski, The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge: Cambridge University Press, 1998). The quarrel between orthodox Darwinians and Intelligent Design theorists of the more conservative stamp is not over the question of evolution by chance selection but simply over the question of whether such selection is strictly random or bias-manifestingly skewed. What is at issue here is not a choice between science and religion but a choice between two rival scientific theories.
3. As regards the Catholic ramifications of the issue, it is certainly true that the Church emphasizes the distinction between body and soul, and views the latter, the soul, not as a product of the physical causality of nature but as the result of a special act of creation on the part of God. But this of course need not (and indeed should not) be construed as creating an unmendable breach between doctrine and evolution, since there simply is no need to claim that evolution creates souls rather than saying that it affords fitting occasions for the creation of souls.
Chapter 3
MIND AND MATTER
1. A TWO-SIDED COIN
The interpretation of most empirical studies of psycho-behavioral aspects of free will is almost hopelessly muddied through misconstruing the brain-psychology gearing of mind to body to mean that the former is governed and determined by the machinations of the latter. Let it be granted that there is here a linkage, with these two resources operating in unison, and thereby what one recent writer refers to as “The Correlation Thesis . . . to the effect that there exists for each discriminable conscious state or occurrence [in the mind of an agent] a theoretically discernable [characteristically coordinate] brain correlate.”1
Yet even granting such a rigid, lock-step coordination there still remains the question: who is in charge? Which is the dependent and which the independent variable? Who commands and who follows? Clearly, the tighter the coordination, the more pressing this question becomes. Unison of operation will not as such establish primacy of control. And this critical point is almost universally overlooked. For coordination is a two-way street. Changes in psychological states carry changes in cerebral physiology in their wake: when the mind frets the brain buzzes. And conversely, changes in brain states carry changes in mind-states in their wake. And the coordination of mind and matter—however tight—does not put matter into the driver’s seat as direction-determinative. It situates matters on a two-way street where things can go either way. One can think of mental activity as a matter of the mind’s awareness of what the brain is doing. And conversely one can think of brain activity as the brain’s response to or reflection of what the mind is doing. But there is no reason to think of either of these as an inevitable arrangement, excluding the prospect that sometimes the balance runs one way and sometimes the other. In the end, any adequate mind/body theory must accommodate two facts of common experience: (1) That mind responds to bodily changes (drugs, fatigue, anesthetics). And (2) That the body responds to many of
the mind’s demands (to stand up, walk about, hold one’s breath, etc.). Now in this light, consider the following oft-maintained contention: An act can be free only if its productive source is located in the thoughts and deliberations of the agent. But this is never the case because the tight linkage of mind-activity to brain-activity means that the thoughts and deliberations of the agent’s mind are always rooted in and explicable through the processes at work in the agent’s brain.
But the fact of it is that any sort of functional lock-step correlation of mind and brain leaves the issue of initiative wholly open. For the fact that two parameters are lock-step coordinated does not settle—or even address—the issue of processual initiative. Coordination as such does not settle the question of the direction of determination—of which of those coordinated variables is free and which is dependent.2
2. THE ISSUE OF INITIATIVE
All of those myriad illustrations of a correlating connection between thought and brain activity are simply immaterial to the issue of who is in charge. For what is involved cannot settle the question of whether mind responds passively to brain-state changes or whether it actively uses the brain to its own ends. For the scientistic determinist, to be sure, agents are productively inert—what they do is always the product of what happens to them: they simply provide the stage on which the causality of nature performs its drama. The voluntarist, by contrast, sees intelligent agents as productively active participants in the drama of the world’s physical processuality. And the reality of it is that mind-brain correlation cannot effectively be used against him. It is simply fallacious to think that the intimate linkage between brain activity and thought puts the brain in charge of the mind. Any mere coordinative correlation between brain-state physiology and mind-state conceptuality will still leave open and unresolved the issue of which variable functions independently and which dependently—which initiates changes and which responds. Be the coordination or amalgamation ever so tight-woven, the question of change-initiation remains open. And there is no reason at all why this cannot be a two-way street with some transactions going in the one direction and others in the other. On such an approach, the brain/mind complex is seen as an emergently evolved dual-aspect organization whose two interlinked domains permit
the impetus to change to lie sometimes on the one side and sometimes on the other. For the direction of determination so far remains open. Given these interlocked variables, the question of dependent-vs.-independent status is wholly open and the question of initiative unresolved. And the fact that mind and brain sail in the same boat is no reason why mind cannot occasionally seize the tiller. What is at issue is a partnership of coordination, not a state of inflexible master-servant subordination. In particular situations the initiative can lie on one side or the other—all depending. But all depending on what? How does it get decided where the initiative lies? Consider a chamber and piston set-up. Move the piston and the situation in the chamber changes: pressure and temperature will respond. Conversely, change the situation in the chamber (by modifying its temperature/pressure condition) and the piston will respond. The processual interlinkage is rigorously fixed: pressure and temperature move in lock-step. But the direction of influence remains a wide-open issue whose resolution depends on the overall modus operandi of the set-up. Or consider again a pulley set-up. When the cube rises, is this because someone is pushing up on it or because a bird has alighted on the sphere? The system itself, taken in isolation, will not answer this for you, but the wider context—the overall synoptic processual context—will provide the information needed to decide where the initiative lies. It is all a matter of where the activity starts and what stands at the end of the causal line. And the free will situation is much the same. When I read, the mind responds to the body; when I write, the body responds to the mind.3 Consider the following argument. “Our mental performances correspond to physico-chemical processes in the brain, which as such answer to nature’s laws of cause and effect.
Ergo those fundamental processes of inert nature encompass the realm of thought as well.” There is a deep flaw in this reasoning—a flaw that lies in a failure to realize that correspondence and correlation do not settle the issue of initiative. Irrespective of how tightly the operations of the mind are interlinked with those of the brain, this does not settle—or even address—the issue of initiation, the question of whether it is mind or brain that is what Moritz Schlick called the “original instigator.”4

There is good reason to see the mind-brain interlinkage in much the same terms. And here too the linkage as such does not set a fixed direction to the initiative and control of changes. Anger the individual and characteristic patterns of brain activity will ensue; create a characteristic pattern of activity in the brain (say by electrical stimulus), and the person will respond with anger. Yes, there indeed is a tight correlation, but productivity functions along a two-way street. The correlation of mind and brain is no more an obstacle to thought-initiated physical responses than it is an obstacle to the evocation of thought responses through physical stimuli. My annoyance at the pin-prick is a triumph of matter over mind; my extraction of the pin a triumph of mind over matter.5

When my finger wiggles because I decide to move it, for the sake of illustration, then the mind-side of the mind/brain complex sets the brain-side into motion. By contrast, when I hear the alarm clock ring, it is the brain-side of the mind/brain complex that alerts the mind-side to a wake-up call. The interlinkage at issue with the mind/brain amalgamation leaves the issue of the direction of initiation—be it brain-initiated mind receptivity or mind-inaugurated brain responsiveness—as an issue open to further resolution. With agent causation originating in the mind, the agent is active; with physical causation originating in the brain, the agent is passive. Both are perfectly possible. And each happens some of the time, with neither enjoying a monopoly.

Mind clearly cannot do the work of matter: it cannot on its own produce snow or ripen tomatoes. Nor, it would seem, can matter do the work of mind: it cannot read books or solve crosswords. And yet the two are clearly connected. When the mind decides to raise the hand, that hand moves. And on the other side there is Mark Twain’s question “When the body gets drunk, does the mind stay sober?” Clearly there is interaction here. What we do qualifies as a product of mind as long as the controlling guidance for our actions lies in the operation of our minds—irrespective of how tightly these may be tied to the operation of our brains. Mind-body coordination does not, as such, put the brain in charge. This is not the place to articulate a full-scale philosophy of mind.
The extensive detail of mind-brain coordination will not preoccupy us here.6 All that is requisite for present purposes is (1) that there is a tight linkage of mind-brain coordination, and (2) that when a state-change occurs in this context the initiative for it can lie on either side. We need not here enter into detail at a level that transcends these rudimentary basics. That said, it must be acknowledged that the conception of a mind-brain partnership of coordination in which a process of change in psychophysical states can be initiated on either side is critically important in the present context. For it opens the way to seeing those free decisions as a crucial productive contribution of mind to the world’s panoply of occurrence.
3. A SALIENT DUALITY

What we have with mind-body coordination is not an externally managed pre-established harmony, but an internally assured co-established alignment—a dual-aspect account, if you will. Even as what is for the paper a squiggle of ink is for the reader a meaningful word, so one selfsame psychophysical process is for the brain a signal (a causal stimulus) and for the mind a sign (a unit of meaning). Or again, one selfsame process, the ringing of the dinner gong, has one sort of significance for the ears of the guests and another for their mind-set. Such analogies, while imperfect, should help to convey the general idea of phenomena that have an inherent duality.

The mind is a hermeneutical engine. For only a mind can operate the symbolic process that transforms stimuli into meanings. Those physical stimulations are the occasion and perhaps even in some sense the productive cause of the interpretations at issue, but they are not the bearers of their substantive meaning-content. For that requires a very different level of understanding and a very different framework of conceptualization. All the same, the mind no more functions independently of the brain than the expressive mood of the visage can smile, Cheshire-cat-like, without the physical face. And yet that physical face can achieve no expression in the absence of there being a psychological mood to express.

Rigid materialism sees mental action as a systemically subordinate response to the functioning of matter. Rigid idealism sees matter as somehow engendered through the productive activity of mind. But more realistic than either is a theory of mind-matter coordination that sees the two as reciprocally conjoined functionings expressive of different facets of a complex, two-sided interaction where the ball of state inauguration and change production lies sometimes in one court and sometimes in the other.
With any system in which there are functionally coordinated factors (be they temperature/pressure or supply/demand or whatever) a change in the one can engender change in the other. The relationship of lock-step coordination at issue is open to two-way implementation, according as it is a change in parameter No. 1 that results in an accommodating change in parameter No. 2, or the very reverse. Lock-step coordination leaves the issue of control—of independent vs. dependent variable—entirely open. And there is no reason to think that the situation at issue with mind/brain coordination is any different from the general run in this particular respect.
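The point about coordinated parameters admits of a schematic rendering (the notation here is mine, offered merely as an illustrative gloss, not the author's own formalism):

```latex
% Two functionally coordinated parameters x_1, x_2 bound by a constraint:
\[ f(x_1, x_2) = 0 \]
% Differentiating the constraint yields the lock-step covariation
\[ \frac{dx_2}{dx_1} \;=\; -\,\frac{\partial f/\partial x_1}{\partial f/\partial x_2}, \]
% which reads equally well in either direction. The constraint f fixes
% how the parameters move together, but nothing in f itself marks x_1 or
% x_2 as the independent ("initiating") variable: that is settled only by
% the wider context of the system's operation.
```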
What we have here, then, is a situation of coordination and reciprocity rather than one of unidirectional dominance/subordination. Being anxious can make the pulse race; but then again, sensing one’s pulse racing can induce anxiety. The interconnection and interaction of mind and body can work both ways. Granted, where the brain is dead the mind no longer works. But then, as long as the mind is working, the healthy brain responds. Thought is not an epiphenomenon of physical processes but a co-phenomenon coordinate with certain ones among them. Thinking is not something the brain does: it is done by a mind that uses the brain as its instrument.

When thought leads to action it is not that two different sorts of causality are at work. The causality of agency (thought control) and the causality of nature (brain control) are two sides of the same coin, as it were—two inseparably conjoined aspects of one comprehensive causal process. The changes at issue flow from one unified causality. It is just that the actuating impetus to those changes in the one case lies at the pole of thought-processes and in the other case at the pole of brain-processes. And so when the mind has the initiative, the brain does not react but rather responds—and conversely. So strictly speaking coordination rather than causal influence is at work. (After all, a suggestion can induce or occasion an idea in someone’s mind without causing or producing it.)

On the issue of who is in charge—mind or brain, thought or matter—traditional philosophizing has almost always taken an all-or-nothing approach. Materialistic determinists from classical atomism to the time of Hobbes, La Mettrie, and Laplace put matter in charge; idealists from Socrates to Berkeley and Lotze put mind in charge.7 For some reason the common-sensical idea that in some transactions the one is in control and in others the other had little appeal for philosophy’s endless succession of absolutists.
But in the end, there is really no reason to opt for an all-or-nothing resolution.

4. MIND-BRAIN INTERACTION WORKS BY COORDINATION, NOT BY CAUSALITY

With mind-brain coordination, mind as well as matter can seize the initiative with respect to human action, so that we can act in the mode of agent causality while nevertheless all human actions can be explained on the side of natural causality. And so we confront Kant’s paradox of reconciling the two modes of causality.8
But how on such a view does mind come to exercise physical causality? When I mentally decide to wiggle my fingers a few seconds hence for the sake of an example, how is it that my body responds to this purely mental transaction? The answer is that it doesn’t, because no “purely mental” transaction is at issue. Thought always has its correlative in the domain of brain physiology.9 And so an individual’s so-called “purely mental intention” is not really purely mental at all, because it stands coordinate with a mind-brain-amalgamating physico-physiological intention-state, in much the same manner at issue with the mood/configuration duality of those smiley/frowny faces considered above. And the physical cause of that wiggling response is not something “purely mental” but the physical side of that bi-polar amalgam. What actually occurs in such transactions is a matter less of causality than of coordination. In his classic paper of 1934, Dickinson Miller saw the matter quite clearly:

[In choosing or deciding] the mental process is paralleled in the brain by a physical process. The whole [two-sided] psycho-physical occurrence would then be the cause of what followed, and the psychic side of it—the mental struggle proper—a con-cause or side of the [overall, two-sided] cause. Tomorrow’s configuration of matter [i.e., the physical result of an action] will [then] have been brought about by a material [i.e., physical] process with which the mental process was inseparably conjoined.10
When an agent acts there is no need to dream up a Cartesian category-transcending impetus of thought upon matter. The material eventuations are produced materially, by the physical side of the two-sided mind-matter amalgam at issue in psychophysical processes. And the same with thought processes. Each component functions in its own order, but the coordinated linkage of the two moves in lock-step, thus automatically answering Mark Twain’s question “When the body gets drunk, does the mind stay sober?” The one thing this account leaves out—and it is a crucial omission—is the key point that the actuating initiative for change can lie on either side. But what could account for the fact that on this occasion the initiative lies with the mind and on that occasion it lies with the brain? Here we need to look to the temporal context of occurrence in its more comprehensive Gestalt. If what I do comes in response to drink or drugs, then it is clearly the brain that is in charge. On the other hand, if it is a matter of careful deliberation and a painstaking weighing of alternatives, then it is clearly the mind that is in charge. It all depends on the structure of occurrence, subject
to pretty much the same sort of contextual analysis that is at issue with the discrimination between dependent and independent parameters in physical-process situations. For here as elsewhere the wider context of occurrence can settle the question of productive priority and initiative.

The causal deliberations of the ancient Greeks were predicated on the idea that only like can cause like. The idea that factors which are as different conceptually as night and day could nevertheless influence one another causally was anathema to them. But the reality of it stands otherwise. Motion creates heat via friction; sounds engender salivation by Pavlovian conditioning. Not only was cross-category causation rejected by the Greeks, but the like-causes-like idea continued to exert influence as late as Descartes, with his Chinese-wall separation of mind from matter. However, the revolution in causal thinking launched by David Hume changed all that. The idea of cross-category causation no longer seems all that odd to us. And we nowadays do not—or should not—see any inherent impossibility that in the order of causal production physical processes should engender mental responses—or the other way around.11

Philosophers often see it as an obstacle to theories of M ~ B coordination that the mind (M) functions on the basis of reasons in the psychic (ψ) order, while the brain (B) functions on the basis of physico-chemical causes that function in the physical order (φ). And as they see it, reasons are something so radically distinct from causes that “never the twain shall meet.” In their haste to get on with their analysis they overlook the fact that M’s concern actually is not with reasons themselves but with what is seen or accepted as such, and that “accepting something as a reason” is just exactly one of those double-aspect M ~ B (or ψ ~ φ) amalgamating states that a theory of M ~ B coordination requires.
That what the mind sees as a reason for something by and large is indeed actually so is nothing for which individual agents are in any way responsible; it is a consequence of the evolutionary process that equips agents with their cognitive resources.

Accordingly, the transition from an internal psycho-physical mind/brain state M1/B1 to M2/B2 can occur along very different pathways. It can be a matter of physical necessitation where the transition B1 → B2 is in control and the M-component is carried along as an inactive passenger. Or it can be a matter of agent control where the transition M1 → M2 is paramount and the B-component is carried along in its wake. The transition in question thus has two very different forms, one of nature-causality and physical necessitation, and the other of agent-causality and thought-control. When I avert my finger from a pin-prick we have the one sort of
case; when I wiggle it for the sake of a philosophical example we have the other. And in those M1/B1 to M2/B2 transitions where mind responds to physiological changes, it (the mind) can take physical stimuli as inputs and yield interpreted meanings (messages, information) as outputs. In this way the mind is a hermeneutical engine.

And so regardless of how tight the correlation of mind and matter may be, there is no ground for construing this circumstance as precluding the efficacy of mind in effecting change, and no reason to refrain from maintaining that it is sometimes mind rather than matter that affords the independent variable that takes the initiative in the inauguration of change. The tighter the interrelatedness of mind and brain, the ampler the prospects of transactions where mind has the initiative. It is not functional coordination as such that is the pivotal consideration but the difference in the direction of the dependency at issue.

If mind were “nothing but” the machinations of matter, if brain physiology were all there is to it, then mind would never be able to do its characteristic work of providing a bridge from the domain of physical processes to the domain of ideas. We would never get from here (physicality) to there (thought): all possibility of achieving meaning, significance, information, would be lost. Whoever insists on seeing mind as totally “reduced” to matter—dismissing mental operations as “nothing but” the machinations of matter—thereby excludes himself from the conceptual domain.

Two different modes of determinism are in play—doubtless among others—in the context of free will deliberations, namely the physico-causal determinism of natural processes popular among thinkers of a mechanistic inclination and the motivational determination of desires and reasons favored by thinkers of more idealistic inclination.
The contrast between the causality of physical processes and the psychic causality of the thought processes of intelligent beings is crucial here. The issue is once again that of where the initiative lies, whether with mind or matter, with processes of brain or of thought. And nothing about mind-matter coordination—no matter how tightly we weave it—precludes the prospect of a two-way street here.

NOTES

1
Ted Honderich (ed.), Essays on Freedom of Action (London: Routledge, 1973), p. 189. However the author emphatically declines to take a position on the issue of whether the brain-state causes the mind-state (p. 190), and also passes the reverse idea over in discreet silence. But it is just this prospect—not so much of causation as of state-change initiation—that lies at the heart of the present deliberations.
2
The next chapter will return to this issue in closer detail.
3
And note that even were physical (muscular) action sometimes initiated prior to conscious awareness of a decision, this decision’s subconscious vanguard may nevertheless still provide the mental precursors of action.
4
Moritz Schlick, Problems of Ethics (New York: Dover Publications, 1962), Chap. 8.
5
On mind-brain interaction see the contribution of Jürgen Boeck in Henrich-Wilhelmi 1982, pp. 9-22.
6
There is a vast number of fine books on the subject. A random sampling includes Eccles 1994, Honderich 1990, Watson 1982.
7
“Das Verhältnis der Seele zum Leib ist stets das einer Herrschaft” (“The relation of the soul to the body is always one of mastery”), H. Lotze, Medizinische Psychologie oder Physiologie der Seele (Leipzig, 1852), p. 289.
8
See Immanuel Kant, Critique of Pure Reason, A803 = B831.
9
The reverse will not of course be the case.
10
R. E. Hobart (= Dickinson Miller), “Free Will as Involving Determination and Inconceivable Without It,” Mind, vol. 43, no. 169 (1934), pp. 1-27.
11
John Stuart Mill, A System of Logic (London: John W. Parker, 1843), is one of the earliest works that is altogether sound on the issues of this paragraph.
Chapter 4

FALLACIES REGARDING FREE WILL

1. INTRODUCTION
It is not my object here to argue that we humans do actually have free will. I merely want to show the fallaciousness of various arguments to the effect that we do not. Nor do I propose here to plumb the analytic depths by spelling out in full detail just what it is that claims to free will involve. All that matters for my present purposes is that such freedom calls for an agent’s being in control of what he does in ways that are at odds with the prospect that his thoughts and intentions could be bypassed in an adequate explanation of his actions.

FALLACY NO. 1

Free volition is the same as free agency.

This fallacy becomes unstuck in the face of John Locke’s example of the agent who deliberately and willingly resolves to remain in a room all of whose exits have—unbeknownst to him—been sealed. His decision to remain is free, but he is not at liberty to implement it. Clearly freedom to act (Handlungsfreiheit) calls for more than freedom to decide, to choose, to try (Wahlfreiheit). Freedom to decide among putative alternatives could obtain even in the face of deterministic necessitation.

FALLACY NO. 2

The next fallacy to be considered relates to a point which Daniel Dennett has formulated as follows:

If determinism is true, then our every deed and decision is the inexorable outcome, it seems, of the sum of physical forces acting at the moment; which in turn is the inexorable outcome of the forces acting an instant before, and so on to the beginning of time . . . . [Thus]—If determinism is true, then our acts are the consequences of the laws of
nature and events in the remote past. But it is not up to us what went on before we were born, and neither is it up to us what the laws of nature are. Therefore the consequences of these things (including our present acts) are not up to us.1
It is exactly this transit from “and so on” to “the beginning of time” that constitutes what I shall call the Zenonic Fallacy. It totally overlooks the prospect of backwards convergence as illustrated in the following diagram:

X----t2----t1--------t0 (= the occurrence O)
Here ti+1 stands halfway between ti and X. Consider an occurrence O at t0, putatively the product of a free decision at X. To explain it in terms of what precedes we certainly need not go back to “the beginning of time.” The failing at issue here is substantially that of Zeno’s notorious paradox of Achilles and the Tortoise. Both alike involve the fallacy of overlooking the circumstance that, thanks to convergence, an infinity of steps can be taken in a finite distance, provided merely that the steps get ever shorter. Once it is granted that, even if a cause must precede its effect, nevertheless there is no specific timespan, however small, by which it need do so, the causal regression argument against free will loses all of its traction. With Zeno, Achilles never catches the tortoise because his progress must go on and on before the endpoint is reached. In the present reasoning, explanation will never reach an initiating choice-point because the regress goes on and on. But in both cases alike the idea of a convergence which terminates the infinite process at issue after a finite timespan is simply ignored. Such a perspective leaves the Principle of Causality wholly compatible with freedom because all those causal consequences of the act remain causally explicable.

FALLACY NO. 3

The Law of Causality leaves no room for agent causality and thus no room for free will. For if all events are explicable in the order of
natural causality, then so are all of those supposedly free decisions of agents.

To avoid this fallacy we must draw some rather subtle distinctions, and involve ourselves in a bit of process metaphysics to boot. The first and most crucial distinction here is that between two sorts of occurrences, namely events and eventuations. Events are occurrences that form part of nature’s processuality. They are happenings on the world’s spatio-temporal stage. So they transpire over time: they have a finite lifespan and their time of existence always occupies an open interval (the lifespan of the event). Eventuations, by contrast, are not parts of nature’s processuality but terminating points within it. They are temporally punctiform and lack duration. They mark the beginnings and endings of events.

Now all human acts (all actions and activities) are event-like. They occupy time. But the junctures of resolution that mark the completion of a process of choice or decision are not events. Such completions are not actually processual doings, but rather are mere junctures of passage—transitions that mark the beginnings and endings of events. Looking for something is an activity, but actually finding it is not. (There is no present continuous here. One can be engaged in looking but not in finding.) Listening to someone is an activity, but hearing what they say is not. Activities are events; terminations and completions are not. The running of a race is an event (as are its various parts, such as running the first half of the race). However, finishing the race is an eventuation. Such eventuations are endings or culminations. One can ask “How long did it take him to run the race?” but not “How long did it take him to start the race?” And even as the race ends when it is won (or lost), so the task ends exactly at the moment when it is completed (or abandoned).
Finishing is thus an eventuation, and accordingly the finishing-point of a race, instead of being the last instant of the race, is the first instant at which the race is no longer in progress. And just this is the case with the decisions and choices that terminate a course of deliberations.
Eventuations, so understood, are not parts of nature’s processual flow, since parts of processes are always processes themselves. Rather, eventuations—the beginnings and endings of events—belong to the machinery of conceptualization that minds impose on nature: instrumentalities of descriptive convenience that do not correspond to anything enjoying independent existence in the real world. Like the North Pole or the Equator they are not real items existing physically in nature, but rather thought-instrumentalities projected into reality by minds proceeding in the interests of description and examination.

Deliberations, so regarded, are seen as events—as processes that occur over open-ended intervals of time and culminate in decisions as eventuations. And this means that there will always be an interval of time between a decision and any subsequent action—an interval able to accommodate intervening events to serve as causal explainers of that decision-consequent action. Since there is no such thing as a next time subsequent to a point of decision, there will always be room for squeezing in further events before any particular decision-subsequent event. The prospect of determination by events is thus ever-present. And analogously, there is no first decision-succeeding event that excludes the prospect of an occurrence-explaining prior event. Just this is critical for the present position regarding the causal explainability of actions. Freedom of decision accordingly does not impede causal explicability. However, what one has in the wake of a free decision is a phenomenon that might be characterized as causal compression. Every event that ensues from that decision can be accounted for causally—but only with reference to occurrences during the immediately preceding but decision-subsequent timespan, whose duration converges to zero as the point of decision is approached.
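The phenomenon of causal compression admits of a simple schematic statement (the symbolism is mine, a sketch rather than anything in the text):

```latex
% Let the free decision occur at t_0. For any decision-subsequent event E
% at time t_0 + \varepsilon (with \varepsilon > 0), its causal explainers
% lie wholly within the decision-subsequent interval:
\[
  E(t_0 + \varepsilon)\ \text{is causally accounted for by occurrences in}\
  (t_0,\; t_0 + \varepsilon).
\]
% As \varepsilon \to 0 the explanatory interval shrinks toward zero
% duration, yet it is never empty—there being no first instant after
% t_0—so every decision-subsequent event retains a causal explanation,
% while the decision itself, an eventuation rather than an event, stands
% outside the explanatory series.
```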
In sum: when we duly distinguish events from eventuations we can regard all actions (as events) to be causally explicable in terms of what precedes them. Free will becomes reconciled to the causal explicability of actions. A free decision inaugurates a series of events each of which is fully explicable and determinate in the order of natural causality. But this is something that is true of all those decision-subsequent events, and does not hold for that free decision itself.
FALLACY NO. 4

Predictability is incompatible with freedom of the will.

If I know your taste in books or in moving pictures, then I can confidently predict your selections among various alternatives—and will in general probably be right—without any infringement on your freedom. And we can safely predict of a sensible person that she will freely choose to do those things which are—in the circumstances—the sensible things to do. The idea of “rational self-interest” actually provides a powerful predictive resource in the context of human affairs. Determining what it is that, from the angle of their aims and values, is the most advantageous thing for people to do provides a generally effective device for predicting their comportment. Thus it is easy to predict what a competent mathematician will arrive at when seeking the solution of a problem, though one would find it quite difficult to say just what a highly incompetent mathematician will come up with—a circumstance which clearly does not make him any the freer. Accordingly, the operation of a power of free choice certainly does not mean that there must be impredictability.2 Freedom and predictability do not conflict as long as no loss of agent control is involved. Externally intrusive constraints are one thing, but the internally rooted predispositions on whose basis we can safely predict are something else again.

FALLACY NO. 5

Since pre-determination is incompatible with free will, so is the determination of a decision’s outcome by the agent’s own decision-engendering deliberations.

This objection overlooks an important distinction, namely that between pre-determination and what might be called precedence determination. The former calls for predictability as of some antecedent time; the latter involves no such thing. This crucial difference is illustrated by the following diagram:
[Diagram: a time axis running from some time t < t0 up to t0, the point of decision.]
With predetermination, what happens at t0 is determined by (i.e., law-deducible from) that which happens at some earlier time t. Already at this earlier time the decision becomes settled: a foregone conclusion that is reached in advance of the fact. Some earlier state of affairs renders what occurs at the time causally inevitable. With precedence determination, by contrast, what happens at t0 is also determined by what goes before—but only by everything that happens from some earlier time t up to but not including t0.3 Both alike are modes of determination by earlier history. But unlike the former, the latter requires an infinite amount of input-information, which is of course never available. What we thus have in this latter case is a mode of antecedence determination that does not give rise to predictability but is in fact incompatible with it. Just exactly such precedence determination can and should be contemplated in relation to free decisions and choices: a determination by the concluding phase of the course of the agent’s deliberation that issues in the decision or choice at issue.

Predetermination means that the outcome becomes a foregone conclusion at some antecedent time. The entire matter becomes settled in advance of the fact. This is indeed incompatible with free will because it deprives the agent of the power to change his mind. There is some time in advance of the point of decision when the whole matter becomes settled. While the events that constitute a course of deliberation antecedent to a decision or choice so function as to determine the outcome, it is only the end-game, the final, concluding phase, that is decisive. Precedence determination, by contrast, means that the final phase of the deliberation is decisive. Only the entire course of the agent’s thinking from some earlier point up to but not including the point of decision suffices to settle the issue.
The outcome is never settled in advance—it isn’t over “until the fat lady sings.” And it should be clear that this sort of antecedent determination geared to the unfolding course of deliberation in its final phase is nowise at odds with freedom of the will.
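The two modes of determination can be contrasted schematically (again, the symbolism is mine, not the chapter's):

```latex
% O(t_0): the outcome at the point of decision; S(t): the deliberative
% state at time t; "\Rightarrow": determination via the operative laws.
%
% Pre-determination: some single earlier state settles the outcome ---
\[ \exists\, t < t_0 :\; S(t) \Rightarrow O(t_0). \]
% Precedence determination: for any earlier t, only the entire ensuing
% history (excluding t_0 itself) settles the outcome ---
\[ \forall\, t < t_0 :\; \{\, S(t') : t < t' < t_0 \,\} \Rightarrow O(t_0),
   \quad\text{while}\quad
   \neg\,\exists\, t < t_0 :\; S(t) \Rightarrow O(t_0). \]
% The determining history has no last pre-t_0 member, so a forecaster
% positioned at any t < t_0 always lacks some of the requisite input.
```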
_______________________________________________________
Display 1
DELIBERATING AND PROBABILITY: AN EXAMPLE

[Diagram: probability curves for the three alternatives A, B, and C wobble within a probability band of width 1 as time runs toward the point of decision t0.]
_______________________________________________________

The situation of a free choice among alternatives is thus associated with the following sort of picture regarding the situation at issue. Consider, by way of example, a course of deliberation for deciding among three alternatives A, B, C with the decision ultimately arrived at in favor of C at time t0, the “point of decision.” At every time t before t0 there are three possible outcomes A, B, and C, whose probabilities at any given time prior to t0 sum to 1—a band of width 1 overall, as per Display 1. Throughout the course of deliberation these probabilities may wobble across the probability band, but in the end they must converge in a way which at t0 gives the whole probability to one outcome alone. But at any time prior to t0 there is a nonzero probability that any of the three outcomes will result—at no anterior time is the outcome a foregone conclusion. The endgame is never definitively settled before the end is reached: only at the very end (at t0) is there a “probability collapse” into 1 and 0’s. Until the issue is “fully decided” there is a non-zero probability of the agent’s making a choice different from the one that ultimately eventuated. As the “point of decision” is reached it becomes more and more likely how the issue will resolve itself. But there are no guarantees. At no time before
that point of decision is there a “point of no return” where the resolution becomes a foregone conclusion. So once again a distinction comes upon the scene to save the day. The objection in view is fallacious because it overlooks the crucial distinction between the two very different modes of “determination by what precedes” represented respectively by pre-determination and precedence-determination of the sort just described. And one other point is important here. The first question to ask of any mode of determinism is: determination by what? Determination by matters outside the agent’s range of motivation is one thing. But determination by the agent’s own deliberations—by the manifold of inclination that encompasses his wants, wishes, aims and choices—is something else again. After all, determination of decision outcomes by the agent’s thoughts is surely a requisite of free will rather than an obstacle to it. And this brings us to—

FALLACY NO. 6

An act can be free only if its productive source is located in the thoughts and deliberations of the agent. But this is never the case because the tight linkage of mind-activity to brain-activity means that the thoughts and deliberations of the agent’s mind are always rooted in and explicable through the processes at work in the agent’s brain.

To see what is amiss here consider the classic freshman-physics set-up of a gas-containing cylindrical chamber closed off by a piston at one end. The temperature inside the chamber is lock-step coordinated with the distance of the piston-wall from the fixed wall: when the piston moves the temperature changes correspondingly, and conversely when temperature-changes are induced the piston moves correspondingly. But this condition of functional lock-step correlation leaves the issue of initiative wholly open: one may either be changing the temperature by moving the piston, or moving the piston by changing the temperature. 
Thus lock-step coordination as such does not settle the question of the direction of determination: which of those coordinated variables is free and which is dependent. The fact that two parameters are lock-step coordinated does not settle—or even address—the issue of processual initiative.
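The point can be made concrete with a small sketch (my own illustration, with an assumed coupling rule, not anything given in the text): two lock-step coordinated variables yield exactly the same observed trajectory whether the initiative lies with the one or with the other.

```python
# Sketch of the piston example: temperature T and piston position x are
# lock-step coordinated, here by the hypothetical linear rule T = K * x.
# Whether we intervene on x (moving the piston) or on T (heating the gas),
# the recorded (x, T) pairs are identical -- the correlation alone cannot
# say where the initiative lay.

K = 2.0  # assumed coupling constant

def move_piston(positions):
    """Initiative on the mechanical side: x drives T."""
    return [(x, K * x) for x in positions]

def heat_gas(temperatures):
    """Initiative on the thermal side: T drives x."""
    return [(t / K, t) for t in temperatures]

xs = [1.0, 1.5, 2.0, 2.5]
trace_a = move_piston(xs)                 # someone pushes the piston
trace_b = heat_gas([K * x for x in xs])   # someone heats the gas instead
assert trace_a == trace_b  # observationally indistinguishable trajectories
```

Only the wider causal context, not the functional coordination itself, tells us which variable seized the initiative.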
For the sake of illustration consider a teeter-totter or, alternatively, a pulley arrangement as per the accompanying display. [Figure: a pulley arrangement linking two weights, a cube and a sphere.]
Here the up-or-down motion of the one weight is inseparably tied to the corresponding motion of the other. And this illustrates the larger point: however tight and rigid the functional coordination between two operative agencies may be, the issue of initiative and change-inauguration is something that yet remains entirely open and unaddressed. Mark Twain’s tendentious question “When the body is drunk, does the mind stay sober?” is perfectly appropriate. But then the inverse question “When the mind panics does the body remain calm?” is no less telling.4 All of those myriad illustrations of coordination between thought and brain activity are simply immaterial to the issue of who is in charge. For what is involved cannot settle the question of whether mind responds passively to brain-state changes or whether it actively uses the brain to its own ends. For the determinist, to be sure, agents are productively inert—what they do is always the product of what happens to them: they simply provide the stage on which the causality of nature performs its drama. The voluntarist, by contrast, sees intelligent agents as productively active participants in the drama of the world’s physical processuality. And the reality of it is that mind-brain correlation cannot effectively be used against him. It is simply fallacious to think that the intimate linkage between brain activity and thought puts the brain in charge of the mind. But if mind as well as matter can seize the initiative with respect to human action so that we can act in the mode of agent causality, while nevertheless all human actions can be explained on the side of natural causality, then we confront Kant’s paradox of reconciling the two modes of causality.5 On such an approach, the brain/mind is seen as an emergently evolved dual-aspect organization whose two interlinked domains permit the impetus to change to lie sometimes on the one side and
sometimes on the other. For the direction of determination so far remains open. Given these interlocked variables, the question of their dependent-vs.-independent status is wholly open and the question of initiative unresolved. And the fact that mind and brain sail in the same boat is no reason why mind cannot occasionally seize the tiller. What is at issue is a partnership of coordination, not a state of inflexible master-servant subordination. In particular situations the initiative can lie on one side or the other—all depending. But all depending on what? How does it get decided where the initiative lies? Well—think again of the pulley situation. When the cube rises, is this because someone is pushing up on it or because a bird has alighted on the sphere? The system itself taken in isolation will not answer this for you, but the wider context—the overall synoptic and dynamic causal context—will decide where the initiative lies. It is all a matter of where the activity starts and what stands at the end of the causal line. And the free will situation is much the same. When I read, the mind responds to the body; when I write the body responds to the mind.

FALLACY NO. 7

If the acts of an agent are anywise determined—if they are somehow, that is, anywise necessitated—then they cannot possibly qualify as free.

Both Aristotle and the Stoics sought to reconcile the volitional freedom they deemed requisite for morality with the determinism they saw operative in the circumstance that character dictates decisions. To accomplish this without adopting the Platonic myth of character selection, they maintained that what would impede freedom is not determination as such but only exogenous determination rooted in factors outside the agent’s self-produced motivations. The crux of freedom, so viewed, is not indetermination but autodetermination—determination effected by the agent’s agency itself—sua sponte, as the medievals put it. 
On such a compatibilist view, the crux of the matter is not whether or not there is determinism—it is conceded that there indeed is, albeit of the agent-internal variety. The crux is whether there is an agent-external determinism—a determinism where all reference to the agent and his motivations can be left out of consideration in matters of explanation. The crux of freedom does not lie in the that of determination, but in its how, its procedural mechanisms. And as long as those deliberative factors are deemed paramount the basis for freedom is secured. Thus what we have here is a distinction between endogenous (agent-internal) and exogenous (agent-external) determination. Clearly if that determination is effected without reference to the agent by forces and factors above and beyond his control by thought, then we can hardly characterize that agent as free. But if those determinative factors are agent-internal, if they are a matter of the agent’s own plans and projects, his own wishes, desires, and purposes, then the determination of the outcomes of decisions and choices nowise stands in the way of the agent’s freedom. Quite on the contrary. A choice or decision that was not the natural and inevitable outcome of the agent’s motivations could hardly qualify as a free decision of his. And so freedom of the will is nowise at odds with the Principle of Causality as long as the locus of causal determination is located in the thought-process of the agent—that is, as long as causal determination is canalized through the mediation of the choices and decisions emergent from his deliberations. And there is consequently no opposition between freedom and causal determination as long as that determination is effected by what transpires in the thinking of agents and the matter is one of agent-causality.6 In sum, to set free will at odds with determinacy is fundamentally fallacious because it too rides roughshod over the crucial distinction—that between the agent-external causality of impersonal events and the agent-internal causality of deliberative thought.

FALLACY NO. 8

Free will is mysterious and supra-natural. For it requires a suspension of disbelief regarding the standard view of natural occurrence subject to the Principle of Causality.
Along these lines one recent writer complains:7

Agent causation is a frankly mysterious doctrine, positing something unparalleled by anything we discover in the causal processes of chemical reactions, nuclear fission and fusion, magnetic attraction, hurricanes, volcanoes, or such biological processes as metabolism,
growth, immune reactions, and photosynthesis. Is there such a thing? When libertarians insist that there must be, they [build upon sand].8
But this sort of complaint is deeply problematic. Free will, properly regarded, hinges on the capacity of the mind to seize the initiative in effecting changes in the developmental course of mind-brain coordinated occurrence. Need this be, or should it be seen as, something mysterious and supra-natural? With the development of minds upon the world stage in the course of evolution, various capacities and capabilities come upon the scene emergently, adding new sorts of operations to the repertoire of mammalian capacities—remembering past occurrences, for example, or imagining future ones. And one of these developmental innovations is the capacity of the mind to take the initiative in effecting change in the setting of mind-brain coordinated developments. Now the explanatory rationale for this innovation is substantially the same as that for any other sort of evolution-emergent capability, namely that it contributes profitably to the business of natural selection. There is nothing mysterious or supra-natural about it. And so this present fallacy rests on a failure of imagination. It is predicated on an inability to realize that with the evolution of intelligent agents there arises the prospect of intelligence-guided agency determined through the deliberations of these intelligent agents.

FALLACY NO. 9

Only rationally grounded decisions are ever free.

Down the corridors of time have echoed the words of Spinoza:

Men believe that they are free, precisely because they are conscious of their volitions and decisions, and think not in the slightest about the causes that dispose them to those appetites and volitions, since they are unknown to them.9
And here Spinoza was echoed by Charles Darwin when soon after the voyage of the Beagle, he wrote:

The general delusion about free will is obvious because man has the power of action but he can seldom analyze his motives (originally mostly instinctive, and therefore now [requiring] great effort of reason to discover them . . . )10

Apparently Darwin thought (with Spinoza and perhaps Freud) that action is only genuinely free when it is activated entirely by recognized and rationally evaluated and approved motives. But this simply confounds free with rational agency. As long as the agent acts on his own motives—without external duress or manipulation—his action is free in the standard (rather than rationalistically reconfigured) sense of the term. Motivation as such does not impede freedom—be it rationally grounded or not. Our motives, however inappropriate and ill-advised they may be and however little understood in terms of their psychogenesis, nevertheless do not constrain our will externally, from without the self, but are the very core of its expression. A will that is responsive to an agent’s motivation is thereby free, and it matters not how compelling that motive may be in relation to the resolution at issue.11 After all, a person’s nature is manifested in his decisions and finds its overt expression realized in them. His decisions are nothing but the overt manifestation of his inner motivational nature. It is through his decisions and consequent actions that a person displays himself as what he actually is. Consider this situation. I ask someone to pick a number from 1 to 6. He selects 6. I suspected as much: his past behaviour indicates that he has a preference for larger numbers over smaller and for evens over odds. So his choice was not entirely random. Does that make it unfree? Not at all! It was nowise forced or constrained. Those number-preferences of his were not external pressures that restricted his freedom: on the contrary they paved the way to self-expression. It would be folly to see freedom as antithetical to motivation. Quite to the contrary! Volitional freedom just exactly is a freedom to indulge one’s motivations. 
To “free” the will from obeisance to the agent’s aims and motives, needs and wants, desires and goals, likes and values, personality and disposition is not to liberate it but to make it into something that is not just useless but even counterproductive. What rational agent would want to be harnessed to a decision-effecting instrumentality that left his motivations by the wayside? A will detached
from the agent’s motives would surely not qualify as his! It is a rogue will, not a personal one.

FALLACY NO. 10

The very idea of free will is antithetical to science. Free will is something occult that cannot possibly be naturalized.

It is—or should be—hard to work up much sympathy for this objection. For if free will exists—if homo sapiens can indeed make free choices and decisions—then this of course has to be part of the natural order of things. So if we indeed are free then this has to be so for roughly the same reason that we are intelligent—that is, because evolution works things out that way. What lies at the heart and core of free will is up-to-the-last-moment thought-control by a rational agent of his deliberation-produced choices and decisions in the light of his ongoingly updated information and evaluation. To see that such a capacity is of advantage in matters of survival is not a matter of rocket science. The objection at issue is thus fallacious in that it rests on the inappropriate presupposition that free will has to be something super- or preter-natural. If free will there is, it is an aspect of how materially evolved beings operate on nature’s stage.

* * *

But enough! We have now looked at some ten fallacious arguments against freedom of the will, and the list could easily be continued. But the overall lesson should already be clear. In each case the misconception at issue can be overcome by drawing appropriate distinctions whose heed makes for a more viable construal of how freedom of the will—if such there is—should be taken to work. So at each stage there is some further clarification of what free will involves. There gradually emerges from the fog an increasingly clear view that what is at issue here is the capacity of intelligent beings to resolve matters of choice and decision through a process of deliberation on the basis of beliefs and desires that allows for ongoing updates and up-to-the-bitter-end revisability.
Properly understood, freedom of the will should not be at odds with our knowledge about how things work in the world. A viable theory of free will should—nay, must—proceed on a naturalistic basis. And the idea that this is infeasible is, by all the available indications, based on an incorrect and fallacious view of what freedom of the will is all about.

NOTES

1. Daniel Dennett, “I Could not Have Done Otherwise—So What,” The Journal of Philosophy, vol. 81 (1984), pp. 553-65 (my italics). Compare Dennett, Elbow Room: The Varieties of Free Will Worth Wanting (Cambridge: MIT Press, 1984), p. 16.
2. See John Earman, A Primer on Determinism (Dordrecht/Boston: D. Reidel, 1986).
3. Note that while predetermination entails precedence-determination, the converse is not the case: precedence determination does not entail predetermination.
4. The fact that sometimes the initiative for action lies on the bodily side is shown by various (somewhat controverted) cases where muscular action is inaugurated before the agent is aware of having made a decision. (See the report on the experiments of Benjamin Libet in Kane 1996, p. 232, note 12, and in Walter 1998, pp. 299-308.) But it is all too obvious that the determination can also go the other way—that I can decide now what I am going to do with my hands a few seconds hence.
5. See Immanuel Kant, Critique of Pure Reason, A803 = B831.
6. “Das wahre Causalprincip steht . . . der Freiheit nicht im Wege.” (“The true principle of causality does not stand in the way of freedom.”) (Lotze, Microcosmus: An Essay Concerning Man and His Relation to the World (New York: Scribner & Welford, 1890), p. 16.)
7. Daniel Dennett, Freedom Evolves (New York: Viking, 2003), p. 120.
8. Ibid.
9. Spinoza, Ethics, Book I, Appendix.
10. Charles Darwin, Charles Darwin's Notebooks, 1836-1844: Geology, Transmutation of Species, Metaphysical Enquiries, ed. Paul H. Barrett, Peter Jack Gautrey, Sandra Herbert, David Kohn, and Sydney Smith (Ithaca, N.Y.: Cornell University Press, 1987).
11. Lotze, Microcosmus: An Essay Concerning Man and His Relation to the World (New York: Scribner & Welford, 1890), vol. I, p. 287.
Chapter 5

SOPHISTICATING NAÏVE REALISM

1. NAÏVE REALISM

Naïve Realism maintains that the objects of ordinary experience actually and objectively possess the features that our perceptions indicate them to have—that grass really is (and does not just appear) green and that sugar really is (and does not just seem) sweet. It is the doctrine of the objective reality of phenomenal properties. As one writer puts it, “Naïve realism claims . . . that the shapes, colors, sounds, and . . . [felt textures]—the sensible qualities—are always the intrinsic properties of material objects.”1 And in thus maintaining that things do really possess the discernible properties that they manifest in experiential encounters, such realism is a doctrine that coordinates phenomenology (appearance) with ontology (makeup) and takes experience to ratify the nature of things. Three considerations are generally adduced in rebuttal of this position:

1. That phenomenal properties are geared to observation, so that (for example) the redness of that ripe strawberry is “merely phenomenal” in only being present when someone looks at it.

2. That only some observers (and not—for example—people who are color blind or otherwise perceptually nonconformist) will be able to perceive the actual features of things.

3. That even then, the property is present only in certain very special conditions (e.g., when there is natural light in the case of color).

These considerations, so it is held, render untenable a naïve realism that identifies observed properties with actual ones, with its claims “easily shown to be erroneous by the argument from illusion.”2 To see that this sort of objection is by no means decisive, it suffices to take a careful look at just what is actually being asserted by the attribution of perceptual qualities such as color, shape, and taste. For something decidedly conditionalized is going on here, seeing that—quite obviously—
that ripe strawberry will manifest that perceptual property of phenomenal redness only if one looks at it (and provided one is a human observer with standard visual equipment).

2. THE NATURE OF SENSORY PROPERTIES

By its very nature as such, any sensory property is both perceptual and relational in being a “how it strikes the normal observer” property. It is, moreover, a latent and dispositional property—in this regard much like the property of “being attracted by a magnet” that characterizes an iron filing. But when seen in this light as relational and dispositional, it is a perfectly real and objective property of its possessor—a property that that strawberry really has as much as any other (its shape or weight included), although the strawberry brings it into operation only when conditions are right. What is at issue is a property of the object alright, but a certain particular sort of property—a disposition to produce a certain effect on suitable occasions. The paradigm example of a property that is dispositional and relational in nature is the solubility of sugar. For to say that sugar is soluble is just exactly to claim that: when you immerse sugar for a period of time in water (relation), then it will in due course tend to evoke a certain sort of reaction through entering into solution (disposition). Exactly the same sort of thing is at issue with these perceptual properties. To say the grass is green is to say exactly that if you confront a normal human observer with it under standard conditions of lighting etc., then it will evoke a certain sort of characteristic reaction in this observer—viz., taking the grass to be phenomenally green. And the same holds for all those other perceptual properties: an object has them if in standardized observation conditions it evokes the item-appropriate reaction in perceptually normal observers. 
The item in question (the lump of sugar or the grass) does actually have that property alright; but that property itself is relational and dispositional in nature.3 To have that feature just exactly is to have the capacity to produce a certain sort of result in a certain sort of inter-agent in certain sorts of circumstances. And in ascribing that feature to objects we allocate to them neither more nor less than just exactly that. And so, one has to recognize what is at issue with a sensory property, namely:

1. That it is latent and dispositional
2. That it concerns a phenomenal disposition—a disposition to appear X-wise to observers.

3. That such appearing relates specifically to normally constituted observers functioning in standard conditions.

In essence, then, those observational properties relate to dispositions of a certain particular kind. For it has to be recognized from the outset that it lies in the very nature of perceptual qualities that they are both dispositional and inherently relational. And this consideration nowise prevents such properties from actually and objectively characterizing objects. For what it is for an object actually and objectively to have the property is for it actually and objectively to possess the disposition at issue. Thus acids actually and objectively have the property of turning blue litmus paper red because they do actually and objectively possess the disposition to engender the effect at issue. And correspondingly ripe strawberries do actually and objectively have the redness we perceive of them because they do, actually and objectively, have the disposition at issue (of dispensing phenomenological redness to normal observers in standard conditions).

3. VALIDATING NAÏVE REALISM

Once one is clear about what sort of property it is that is at issue, then the definitive thesis of Naïve Realism—to the effect that objects do actually possess the phenomenal features we standardly ascribe to them—is readily sustainable. Even as it is a property (albeit a dispositional property) of sugar to be soluble—i.e., to dissolve when exposed to water—and it is a property (albeit a dispositional property) of magnetized iron to attract iron filings, so it is a property (albeit a dispositional property) of grass to appear green to normally constituted human observers. A dispositional property, after all, is the capacity to evoke a certain sort of response in suitable interaction with duly constituted inter-agents, and the greenness of the grass is just exactly that. 
Indeed none of its features can be claimed for grass with greater cogency.4 Seen in this light, it is perfectly cogent to claim that those observed phenomenal features of things are indeed objective features of their objects—albeit properties of a very particular sort. The key point is that once one realizes exactly what is at issue in regarding perceptual properties as real characteristics of their correlative objects—once one draws the needed
distinctions (relationality, dispositionality, etc.) that are relevant here—there just is no longer any sound reason for refraining from ascribing these properties, so understood, to the objects that we standardly take them to characterize. Given a sufficiently sophisticated understanding of the issue, naïve realism is a perfectly tenable position. And on this basis, a suitably construed naïve realism overcomes those three standard objections to naïve realism, and constitutes what might be called a sophisticated (rather than naïve) version of the doctrine. For once one understands the real nature of the properties at issue, those three canonical objections to objective property attribution no longer hold. They are effectively removed by noting (1) dispositionality, (2) condition-standardness, and (3) observer-normality. In sum, that “really real” perceptual property is a disposition to evoke a certain particular response in normal observers under standard conditions. To speak of a sophisticated Naïve Realism may sound like a contradiction in terms—but it actually is not. Like pretty well anything else in philosophy, this doctrine must be properly understood for it to be justly appraised.

4. EXPLANATION AS ANOTHER ISSUE

It is, clearly, one thing to ascribe a property and quite another to explain its presence—one thing to say that grass is green—in that that’s how it standardly looks—and something very different to explain why it exhibits this feature in observational transaction. The query “What is there about an object that endows it with the relational disposition of evoking an R-type reaction in X-type inter-agents?” poses a perfectly good question—one that looks to the explanation of dispositional properties. But this explanatory issue is something quite different from the merely descriptive issue. 
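The descriptive content of a dispositional attribution can be sketched as a bare conditional. The toy model below is my own hypothetical illustration of the chapter's analysis, not the author's: ascribing the property records only that the right sort of inter-agent under the right conditions evokes the characteristic response, and it explains nothing about why.

```python
# Toy model: a dispositional property as a relational conditional.  The
# agent and condition strings below are illustrative placeholders.

def has_disposition(obj, response, agent, conditions):
    """An object has the dispositional property iff, confronted with the
    right sort of inter-agent under the right conditions, it evokes the
    characteristic response."""
    return response(obj, agent, conditions)

def appears_green(obj, agent, conditions):
    # the manifestation: how the object strikes the observer
    return (agent == "normal human observer"
            and conditions == "standard daylight"
            and obj == "grass")

# Grass really has the (relational, dispositional) property of greenness:
assert has_disposition("grass", appears_green, "normal human observer",
                       "standard daylight")
# The disposition lies latent when conditions are not standard:
assert not has_disposition("grass", appears_green, "normal human observer",
                           "darkness")
```

Note that the model merely registers the conditional; like the notorious "dormitive virtue," it leaves the explanatory question of what grounds the disposition entirely open.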
In asking for the explanatory considerations in virtue of which certain descriptions apply we no longer pose a descriptive question as such—though it will doubtless be the case that certain further descriptions will have to figure in whatever account becomes required at this stage. It is clear against this background that naïve realism is a descriptive rather than explanatory theory. The attribution of a perceptual property—exactly like the attribution of a disposition such as the notorious “dormitive virtue”—merely claims that dispositional potency: it does nothing to explain it. And so there are, clearly, various questions about the nature of a disposition that are not addressed by merely noting that it exists. For example,
what is a disposition doing when not actuated? (Answer: nothing apart from lying latent.) Must a disposition be encoded in a physical medium (material configuration, physical “field”, brain)? (Answer: often yes but not necessarily—a person’s fear of flying, unlike his disposition to vertigo, has no stable physical substratum.) Must dispositions come to expression via identifiable processes? (Answer: apparently not always; my disposition to avoid eggplant certainly sets various processes into motion but does not in itself seem to be the expression of a process.) The relation between physical processes and dispositions is multiform and complex. My finger’s disposition to respond to my decision to move it certainly sets various nerves and muscular mechanisms into motion but does not seem to function mechanically itself. Clearly, while phenomenal properties are real enough, the business of their explanatory “reduction” to physical process is going to be a long and complex story.

5. NAÏVE REALISM’S JUSTIFICATORY BASIS

The phenomenologically dispositional properties of things are crucially important for our capacity to communicate about them. For we generally operate on the defeasible presumption—the working hypothesis—that, absent evidence to the contrary, people in general (ourselves included) are normal with respect to matters of perception and of judgment, and that the conditions at hand are standard, so that the items with which we deal are functioning in their usual and customary manner. This is the Standing Presumption of Normalcy. We realize full well that its stipulations are not always met. And we thus standardly proceed on the working assumption that they indeed are met unless and until case-specific considerations to the contrary indicate that matters stand otherwise. And it is exactly this presumption that provides the rationale of justificatory validation for a sophisticated naïve realism. 
It is precisely because we presume our own experience to meet the conditions of normalcy and standardness—supposing that in the absence of counterindications things are what they seem—that we standardly and appropriately take the line that things in fact have the features that our experience indicates them as having. And so at bottom naïve realism is not so much a substantive metaphysico-ontological doctrine to the effect that experience reveals the experience-independently real features of experienced objects as it is an experientially validated procedural policy to adopt and implement a certain principle of presumption as a basis for our cognitive transactions. What we have here is, in short, a metaphysico-ontological view whose justificatory basis looks to the (ultimately pragmatic) validation of a policy of presumption. But what is it that validates the adoption and implementation of such a procedural policy? It is, in the end, simply its pragmatic success in working out efficiently and effectively in operational and communicative contexts. Validated as such by its pragmatic success, naïve realism finds its justificatory rationale in a policy of practical procedure that is rendered cogent through its efficacy in managing our cognitive and communicative affairs. It must accordingly be stressed that there is no clash or conflict between the description of things at the level of ordinary experience and their (very different) description in the language of natural science. The English astronomer Arthur Eddington drew a strong contrast between the table of ordinary life (solid, hard, made of wood) and the table of the physicist (made of electronic particles cavorting in mainly empty space), and regarded the latter alone as real. But actually there is only one table, which admits of being described in different terms of reference in different contexts of relationship, even as one selfsame person can be X’s old uncle and Y’s young nephew. Both descriptions have a good claim to stating what is really the case, and neither has a monopoly on the characterization of reality. A sensible realism has it both ways, and there is no good reason to think that “scientific realism” needs to suppress or supplant the “naïve realism” of ordinary experience.

NOTES

1. R. J. Hirst in The Encyclopedia of Philosophy, Vol. 7 (New York: Macmillan Co. and The Free Press, 1967), p. 78.
2. Loc. cit.
3. It is in fact questionable whether with natural objects there actually are any such things as nondispositional properties. For the nature of any natural item is inevitably embodied in its actions. Given the epistemic realities, there is nothing we can say about natural objects altogether above and beyond their interactive orientations to (some) others! (Abstract objects like numbers are something else again.)
4
In the stylistically outward way characteristic of his writing, John Dewey has made much the same point: With language they [the sensed qualities of things] are discriminated and identified. They are “objectified”; they are immediate traits of things. This “objectification” is not a miraculous ejection from the organism or soul into external things, nor an
illusory attribution of psychical entities to physical things. The qualities never were “in the organism,” they always were qualities of interactions in which both extra-organic things and organisms partake. When named, they enable identification and discrimination of things to take place as means in a further course of inclusive interaction. Hence they are as much qualities of the things engaged as of the organism . . . to name another [sensed] quality “red,” is to indicate an interaction between an organism and a thing to some object which fulfills the demand or need of the situation. It requires but slight observation of the mental growth of a child to note that organically conditioned qualities, including those of the special sense-organs, are discriminated only as they are employed to designate objects; red, for instance, as the property of a dress or toy. (John Dewey, Experience and Nature [Peru, Ill.: Open Court, 1925].)
Chapter 6

TAXONOMIC COMPLEXITY AND THE LAWS OF NATURE

1. NOMIC NECESSITY
The world’s complexity has many dimensions. One of these relates to its nomic structure—its lawful modus operandi. And another relates to the taxonomic structure of its components and constituents. However, the theoreticians of science have not in general sufficiently stressed the intimate relationship that actually obtains between these two disparate-seeming dimensions of complexity. For as will be argued here, in matters of natural science lawfulness is in fact coordinate with nomic subordination, because in natural taxonomies higher-level generalities will, even when themselves merely factual rather than nomically necessary, nevertheless prove to be nomically lawful at taxonomically subordinate levels. Viewed in this light, lawfulness can be rooted in the taxonomic order of nature itself. For the nomic lawfulness at issue with so-called “laws of nature” can inhere in and issue from taxonomic relationships. To get a clear grip on this somewhat obscure but critically important point, one must consider how it is that nomic lawfulness emerges within the setting of an empirical science.

First comes the question of what a law of nature is. What does it take for an empirical generalization to qualify as a law of nature? Laws of nature have two definitive aspects: nomic necessity and counterfactual force. If a generalization on the order of “Acids turn blue litmus paper red” does indeed state a natural law, then

(1) When this piece of blue litmus paper is immersed in this beaker of nitric acid, then it will and must turn red, and

(2) If that piece of litmus paper (which has just been burned up over there) had been dipped into the beaker, then it would have turned red.

Now there are obvious problems here: All we can ever observe in nature is how things do behave and not how they must behave, nor yet again how they would behave in unrealized conditions.
And the two definitive factors of natural lawfulness—viz., nomic necessity and counterfactual applicability—endow them with involvements that lie above and beyond the reach of observational experience.
But how can one possibly come to grips on empiricist principles with lawfulness—so construed? How can empiricists who are determined to base their contentions regarding the world on observations ever get there from here? What sort of conceptually cogent basis can they provide for such a conjectural projection beyond the evidence at hand?

2. NECESSITY BY SUBORDINATION

It is at just this point that the hierarchic order of a natural taxonomy comes into play. For to all visible appearances a fundamental procedural/methodological principle of operation is at work here which runs as follows:

A universal generalization to which the (actually existing) natural objects of a given level of a taxonomic order actually (even if contingently) conform, will thereby be taken to obtain lawfully at all subordinate taxonomic levels.
In line with this principle as a regulative authorization of imputing lawfulness, it transpires that relationships which obtain universally at higher taxonomic levels will be (nomically) necessary at the lower, subordinate ones. Thus we effectively have the following inferential rule:

If T1 is a subordinate unit of T2 within a natural taxonomy, then whenever

(∀x)([x ∈ T2 & Fx] ⊃ Gx)

obtains as a universal generalization, then

□(∀x)([x ∈ T1 & Fx] ⊃ Gx)

obtains lawfully, with □ representing (natural or nomic) necessity.
This is, in effect, a consequence or aspect of the defining specification of nomic lawfulness. And on its basis, lawfulness can be the product of a suitable concurrence between generality and hierarchy. In adopting a natural taxonomic hierarchy as such we thereby subscribe to a certain correlative view of lawfulness. It may be a merely accidental regularity that all pigs have tails that spiral out clockwise, but if (or, rather, since) it is indeed so, it may be deemed lawful that all Vietnamese pot-bellied pigs are so constituted. And if (or rather since) it is indeed the case that all mammals are featherless, we shall have it lawfully the case that all dogs (or, for that matter, horses) are featherless. Or again, even though it is merely contingently true that no canines
have horns, nevertheless the generalization that no spaniels have horns qualifies as a law.

The Schoolmen of the Middle Ages distinguished many versions of necessity, among them the necessitas naturalis of that which is always and everywhere the case. They were accordingly inclined to coordinate necessity with universality—an idea that is effectively defeated by the existence of generalizations that hold good merely by accident. (E.g., that no moon craters are deeper than X meters.) But while the present analysis also grounds lawful necessity in universality, it actually does this with a critical difference, namely that in place of mere universality it takes the crux of the matter to inhere in higher-level universality within an established natural taxonomy. So regarded, necessity becomes established through taxonomic subsumption. Necessity comes into operation coordinately with and superveniently upon the entry of natural taxonomies into the conceptual setting of the situation. Nomic lawfulness can accordingly emerge from mere generality in taxonomic contexts, subject to the principle that with natural taxonomies, higher-level regularities provide for lower-level laws.

Accordingly, the considerations at issue have historical roots even earlier than the Schoolmen, namely in Aristotle. For the salient consideration here is that nomic necessity works rather differently from logico-conceptual necessity. The latter is subject to the Rule of Theophrastus, which has it that the modality of a conclusion is that of the modally weakest premiss (conclusio sequitur peiorem partem). But with nomic or causal necessity we have it that for certain minor premisses, the modality of a major premiss will in suitable circumstances be determinative.
The reasoning now at issue can take the following generic form:

(1) (Factually) All lions have manes [A de facto generalization]
(2) African lions are a species of lions [By taxonomic fiat]
(3) ∴ By nomic necessity: African lions have manes [Necessitation by Subordination]
(4) Leo is a lion [Taxonomic fact]
(5) Leo necessarily has a mane [From (3) and (4) through Necessitation by Subordination]
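The downward-imputation pattern of this inference can be mimicked in a few lines of code. The following Python sketch is purely illustrative and is in no way Rescher's own apparatus: the miniature taxonomy, the recorded generalization, and the function names are all hypothetical placeholders, intended only to show how the N x S rule confers lawfulness at subordinate levels while leaving the generalization merely de facto at its own level.

```python
# An illustrative sketch (NOT Rescher's own formalism) of Necessitation by
# Subordination: a generalization holding universally at some taxonomic level
# is imputed as nomically lawful at every strictly subordinate level, while
# remaining merely de facto at its own level. Taxonomy and data are made up.

taxonomy = {                 # maps each taxonomic unit to its superordinate unit
    "African lion": "lion",
    "Asiatic lion": "lion",
    "lion": "mammal",
}

# De facto universal generalizations, recorded at the level where they hold.
de_facto = {"lion": {"has a mane"}}

def superordinates(kind: str) -> list[str]:
    """All units strictly above `kind` in the taxonomy."""
    chain = []
    while kind in taxonomy:
        kind = taxonomy[kind]
        chain.append(kind)
    return chain

def lawful_by_subordination(kind: str, predicate: str) -> bool:
    """N x S rule: lawful for `kind` iff the predicate holds universally at
    some strictly higher taxonomic level (however contingently it holds there)."""
    return any(predicate in de_facto.get(sup, set()) for sup in superordinates(kind))

print(lawful_by_subordination("African lion", "has a mane"))  # True: inherited downward
print(lawful_by_subordination("lion", "has a mane"))          # False: merely de facto here
```

Note that the lion-level generalization itself comes out non-lawful on this test, which matches the point of the text: at its own level the regularity remains a contingent fact, and only taxonomic subordination turns it into a law.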
As the aforementioned considerations suggest, the modal logic of natural (rather than logico-conceptual) necessity resembles that of Aristotle rather than Theophrastus: the status of the major premiss predominates. What we have here is, in effect, a—metaphysical, if you will—Principle of Necessitation by Subordination (N x S Principle for short), stipulating that in natural hierarchies (natural) necessitation can be an emergent nomic property whose presence can in principle root in suitably taxonomized universal property linkages. The underlying idea is that universal property linkages at a given taxonomic level are never “purely coincidental” (or “merely accidental”) when they also obtain universally at taxonomically higher levels.

But does not the following line of thought present a crucial objection here, namely that a generalization that is nomically lawful for every actual subordinate sub-kind of a given super-ordinate kind will thereby be as lawful for that entire super-kind itself? Would we not thus arrive at the following stumbling block?

Let it be that the taxonomic unit T is composed entirely of the sub-units T1, T2, . . . Tn. And let us suppose that the generalization G obtains with respect to T, but does so non-nomically and only contingently. Then G will hold with nomic necessity in every Ti. But surely—so goes the objection—if G holds with nomic necessity in every Ti, and the Ti are collectively exhaustive of T, then G will hold of necessity throughout T as well.
There is, however, a fatal flaw in this plausible-seeming course of reasoning—a flaw which lies in the following consideration: With natural hierarchies, the fact that a given taxonomic unit in actual fact has just exactly these and no other sub-units is always accidental and contingent. Natural hierarchies are always in-principle enlargeable. And our Necessitation by Subordination (or, for short, N x S) principle holds only for actual but not for additionally possible taxonomic units.
This circumstance of the in-principle non-exhaustiveness of the sub-units of a natural taxonomy—the omnipresent possibility of further (non-actual) sub-types within a natural hierarchy—thus plays a critical role in these deliberations. On this basis, then, the underlying rationale of our N x S Principle roots in the idea of the potential fecundity of nature. For this effectively blocks
the line of objection outlined above by negating its supposition of exhaustivity. To be sure, with abstract rather than natural taxonomies the exhaustivity of natural sub-kinds can sometimes be maintained as a matter of general principle. (In pure mathematics, for example, the integers can be divided into the odd and the even with categorical exhaustivity.) But this sort of thing will not happen with natural (rather than theoretical) taxonomies. For a natural species can in principle always have more sub-species than it actually does. And just this is what blocks the preceding line of objection to our N x S Principle.

3. NATURALIZING NECESSITY

The salient lesson of these deliberations is thus that the hierarchic complexity of natural kinds renders it feasible to “naturalize” the lawfulness of nature’s laws by providing them with a grounding that is inherent/implicit in merely de facto universal relationships. The circumstance that lawfulness emerges from mere generality via hierarchical subordination in natural taxonomies opens up the prospect of embedding laws—with their full freight of nomic and counterfactual weight—in merely de facto generalizations, even though these generalizations may themselves obtain only on grounds that we deem to be contingent and accidental. The problem for strict empiricists when confronted with natural science is how to extract laws with their ambitious claims to nomic necessity and counterfactual force from the mere de facto generalizations evidentiated by observations via the “scientific method.” And the ontological complexity of nature’s hierarchical taxonomization can provide a cogent theoretical rationale for this transition, enabling lawfulness to emerge from mere generalizations. What is at issue here is a fundamental aspect of our conception of the natural world.
For, as we standardly envision it, our universe is viewed as a hierarchically structured taxonomic manifold of natural order, whose hierarchical units are themselves defined by nomic regularities. In such a setting, necessities can indeed reveal themselves through the actualities of property-linkages, exactly because those linkages are seen as mediated through relationships of hierarchical order. In shaping our idea of the natural world we operate a conceptual scheme in which lawfulness and taxonomy are inseparably intertwined.
Yet what of the objection: “So laws can emerge from taxonomies, well and good, but where do taxonomies themselves come from? Does it not require laws—i.e., lawful coordination among identifiable types of objects—to make for taxonomies?” The answer is that while this is indeed often the case, it will not necessarily be so. For taxonomies themselves can in practice be constructed initially on the basis of mere generalizations. Only after this process is already well underway need laws come into it—and they can then do so exactly by means of the N x S mechanism that we have been considering.

But what is it that ultimately justifies the observation-transcending imputations at work in a conceptual scheme committed to nomically geared taxonomies? One fundamental factor is paramount here, namely utility. For it is just exactly this conceptual scheme upon which we base the vast theory-edifice that is our natural science, with its impressive track-record of successful prediction and effective control.

To be sure, the preceding considerations do not claim or imply that taxonomic subordination is the only pathway to lawfulness. After all, the scientific method spreads the wings of its inductive power more broadly. But let us nevertheless suppose for discussion’s sake that this were indeed the case, so that lawful property linkages were to obtain ONLY when a taxonomically superordinate generalization—of a potentially accidental and contingent standing—provided for them. This supposition well deserves fuller consideration than can be given here. But what can be said concisely is that we will then face a situation where:

• At the topmost level of a natural taxonomy all universal property linkages are merely contingent (and thus not lawful).

• Nevertheless, if there are any generalizations at the topmost level at all, then they will thereby be lawful at every subordinate taxonomic level.
• No considerations of general principle can block the prospect of there being some “emergent” accidental (non-nomic) generalization at any given subordinate level, and therefore nothing can block the prospect of new “emergent” laws at any taxonomic level except for the highest.
Thus even if hierarchical subordination were seen as the only pathway to nomic lawfulness, nothing would impede an effectively endless proliferation of complexity within the manifold of natural laws.

One final concluding observation is in order. The preceding discussion is intended to offer a theory that elucidates how the concept of lawfulness actually works in those contexts of deliberation where it standardly finds application. Now to be sure, there exists the option of simply abandoning the idea of laws of nature, with their problematic freighting of universality, nomicity, and counterfactuality. We can contemplate a revisionary view of science as dealing not with the necessities of the natural order but merely with empirical generalities—with mere coincidences rather than natural laws. Such a science abandons law-geared explanation for statistical coordination. This would, however, involve a truly revolutionary revision both in our view of nature and in our view of our cognitive commerce in matters of inquiry. To take this line is to pay an awesome price to gain a rather modest ideological advantage.1

NOTES

1
This chapter was initially published under the same title in Theodor Leiber (ed.), Dynamisches Denken und Handeln (Stuttgart: S. Hirzel Verlag, 2007), pp. 187-91.
Chapter 7

PRACTICAL VS. THEORETICAL REASON

1. A DIFFERENCE IN OBJECTIVES
Man is a rational agent, a creature that acts on the basis of its beliefs, and thereby an amphibious being who lives both in the realm of thought and in that of action. Accordingly, knowledge and information serve a dual purpose for us. On the one hand, they provide for cognitive satisfactions in answering our questions and removing the distress of unknowing; and on the other they provide for physical satisfactions in facilitating goal-oriented action. Our reason is a dual-purpose instrument, geared on its one hand to the acquisition of information (theoretical reason) and on the other to the resolution of decisions (practical reason). With theoretical reasoning we are resolving whether some contention or other is to be accepted as true. With practical reasoning we are resolving whether some action or other is to be performed. Theoretical reasoning is wholly cognitive in orientation; practical reasoning is also oriented towards action.

The difference in aim and objective between theoretical (information-providing) and practical (action-determinative) reasoning makes for a profound difference in the rules and regulations by which these two modes of reasoning proceed. And this difference roots principally in one single factor, namely that of urgency. For with purely theoretical matters decision can always be put off to another day, while practical matters generally require resolution here and now. To postpone a practical decision until later is in effect to resolve to do nothing for the time being, and such postponement constitutes a particular course of action, namely letting things go along as is for the present. When the issue is one of acting or not acting, indecision and suspension of judgment are themselves modes of acting that amount to inaction and letting matters take their own course. A bomb-alert is sounded. Is it credible or not? We have to decide one way or another. To defer a decision is to all intents and purposes to decide against credibility.
Where the choice is between staying and going, to remain undecided is in effect to opt for staying. Cognition and action are inexorably intertwined for us.
2. THE DEMAND FOR EVIDENTIAL CONCLUSIVENESS

It has been insisted long and often that we are to follow the rule:

NEVER DECIDE MATTERS ON INCONCLUSIVE EVIDENCE!
And here philosophers are invariably put in mind of William Kingdon Clifford’s ardent declaration that “It is wrong, always and everywhere, to believe anything on insufficient evidence.”1 And perhaps this precept may seem plausible enough in matters of purely theoretical reason, where the option of suspending judgment and deferring acceptance is ever open. For in contexts where the sole relevant aim is to settle the truth-status of a claim and where the sole operative sanction is proving to be in the wrong, we can always afford indecision and defer resolution until such time—if ever!—as sufficient information comes to hand to settle the issue conclusively.

But of course this situation seldom obtains—and never in practical matters. In contexts of action, the circumstances will always raise the stakes above and beyond the mere misfortune of being wrong about something. With theoretical issues it makes sense to ask “Why risk error if this is the only negativity that can happen and indecision, suspension of judgment, is a sure-fire way to avert it?” But in practical matters the situation just is not like this at all. Here worse things than merely being wrong about something will always be at stake. And here, where indecision means inaction and inaction is itself an action of sorts, the stakes become greater and the need for a decision becomes imperative. Indecision—suspension of judgment—is now no longer a cost-free option. That aforementioned injunction must be abandoned in practical contexts, and in its place we must substitute something along the lines of:

Whenever something really at stake requires a decision to be made, decide matters one way or the other as best the evidence at hand permits.

Closely related to the insistence on evidential conclusiveness is the requirement of evidential completeness. This pivots on yet another regulative injunction characteristic of theoretical reason, namely one that runs as follows:
BASE THE ANSWER YOU GIVE TO YOUR QUESTIONS ON ALL THE INFORMATION THAT IS RELEVANT!
However, given the realities of our situation in the world, our knowledge of the future is invariably imperfect. And in general we cannot say now what the future will bring—especially in regard to new information. To be sure, we can always “wait and see.” But in practical matters, where we need a resolution here-and-now, the luxury of “awaiting developments” is something we cannot afford. Accordingly, in practical matters the preceding precept has to be replaced by:

Whenever answers are needed, be sure to base the answers you give to your questions on all the relevant information that you can manage to gather within the existing limitations of time, resource, and opportunity.

So here too we must compromise the idealizations of the purely theoretical case in the interests of the urgent realities of actual practice. The ground-rules of purely theoretical reason will no longer apply here. Regrettably but unavoidably, the resolution that is the best available in the circumstances prevailing at the time need not be optimal or correct in any more ambitious sense. The then-unavailable data may make all the difference here. Circumstantial optimality affords no guarantees. So in theoretical contexts we may well say “Why settle for something that might be wrong? Let’s hold off until the situation is clear.” But in practical contexts where a resolution is mandatory we have no alternative but to settle for the most promising-looking option available to us at the time.

3. THE DEMAND FOR PRECISION AND DETAIL

A further common injunction that characterizes the operations of theoretical reasoning runs as follows:

IN ADDRESSING YOUR QUESTIONS SEEK THE GREATEST PRECISION AND DETAIL
In this regard, however, it becomes critically important to come to terms with the basic principle of epistemology that increased confidence in the correctness of our estimates can always be secured at the price of decreased accuracy. For in general an inverse relationship obtains between the definiteness or precision of our information and its substantiation: detail and security stand in a competing relationship. We estimate the height of the tree at around 25 feet. We are quite sure that the tree is 25±5 feet
____________________________________________________________

Display 1

DUHEM’S LAW: THE COMPLEMENTARITY TRADE-OFF BETWEEN SECURITY AND DEFINITENESS IN ESTIMATION

[Figure: a hyperbolic curve s x d = c (constant), plotted with increasing detail (d) on one axis and increasing security (s) on the other.]

NOTE: The shaded region inside the curve represents the parametric range of achievable information, with the curve indicating the limit of what is realizable. The concurrent achievement of great detail and security is impracticable.
____________________________________________________________

high. We are virtually certain that its height is 25±10 feet. But we can be completely and absolutely sure that its height is between 1 inch and 100 yards. Of this we are “completely sure” in the sense that we are “absolutely certain,” “certain beyond the shadow of a doubt,” “as certain as we can be of anything in the world,” “so sure that we would be willing to stake our life on it,” and the like. For any sort of estimate whatsoever there is always a characteristic trade-off relationship between the evidential security of the estimate, on the one hand (as determinable on the basis of its probability or degree of acceptability), and on the other hand its contentual detail (definiteness, exactness, precision, etc.). And so a complementarity relationship of the sort depicted in Display 1 obtains. This was adumbrated in the ideas of the French physicist/theoretician Pierre Maurice Duhem (1861-1916) and may accordingly be called “Duhem’s Law.”2 In his classic work on the aim and structure of physical theory, Duhem wrote as follows:
A law of physics possesses a certainty much less immediate and much more difficult to estimate than a law of common sense, but it surpasses the latter by the minute and detailed precision of its predictions. . . . The laws of physics can acquire this minuteness of detail only by sacrificing something of the fixed and absolute certainty of common-sense laws. There is a sort of teeter-totter of balance between precision and certainty: one cannot be increased except to the detriment of the other.3
In effect, these two factors—security and detail—stand in a relation of inverse proportionality, just as that display suggests. To be sure, with an improvement in the general cognitive state-of-the-art the parameter c increases: as the conditions of inquiry improve, concurrently greater detail and security become possible—up to a point.

Duhem’s principle accordingly has significant implications in our present context. For it means that we can only increase the conjoint precision and detail of our problem-resolutions by awaiting the progress of inquiry methodology. And of course just this is something we cannot afford to do in practical contexts where resolutions have to be arrived at here-and-now.

Fortunately, the information we require for practical guidance need not be totally accurate and precise. To decide on taking an umbrella, we need not know exactly how much rain will fall (whether one-half or three-quarters of an inch), nor need we know exactly when the rainstorm commences (whether at 18:30 or 18:31). Our decision-guiding information can be subject to a considerable “margin of error”; all that matters is that it be good enough for “all practical purposes” within the situation we confront. And so, where theoretical reason says “Require certainty/security and accuracy/detail in your cognitive commitments,” practical reason is prepared to compromise with the insuperable realities. Its requirement is: Ask for as much of those cognitive desiderata as is available to you within the confines of the prevailing situation. Accordingly, we must replace the preceding injunction by:

In addressing your questions seek only as much precision and detail as is required for resolving the issue at hand.
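The trade-off that Display 1 depicts can be put in toy numerical form. In the following Python sketch the constant C, the choice of intervals, and the identification of "detail" with the reciprocal of an estimate's interval width are all illustrative assumptions of mine, not anything Duhem or the text commits to; the point is only to exhibit the hyperbolic s x d = c relationship.

```python
# A toy numerical illustration of the trade-off s x d = c ("Duhem's Law").
# ASSUMPTIONS (not from the text): "detail" is modeled as the reciprocal of
# an estimate's interval width, and the constant C is set to 1.0 arbitrarily.

C = 1.0  # the state-of-the-art parameter c; it grows as inquiry improves

def security(interval_width_feet: float) -> float:
    """Security purchasable for a given interval width, under s * d = C."""
    detail = 1.0 / interval_width_feet  # narrower interval = more detail
    return C / detail                   # hence s = C / d

# The tree-height example: widening the estimate buys increasing security.
for width in (10, 20, 200):             # e.g. 25±5 ft, 25±10 ft, 25±100 ft
    d = 1.0 / width
    s = security(width)
    assert abs(s * d - C) < 1e-9        # the product s x d stays fixed at C
    print(f"interval of {width} ft: detail={d:.4f}, security={s:.1f}")
```

The loop merely confirms that as the interval widens (detail falls), the security the fixed budget C affords rises in exact inverse proportion, which is the teeter-totter relation of the Duhem quotation.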
In practical contexts we do not—and need not—achieve absolute maximization with respect to precision and detail, but merely “situational optimization.” Where theory says “Do the best!” practice takes the more “realistic” line of the injunction: “Do the best you can.” And so the key difference between practical and theoretical inquiry makes itself felt not just in relation to evidential certainty but in relation to contentual accuracy as well.
4. PRACTICAL REASONING IN INQUIRY

The fact remains, however, that theoretical reason does not have a monopoly on matters of inquiry. For even in inquiry—in seeking information to answer our questions—practical reason can play a role. Consider the purely hypothetical, if-then question of the format: “If p is the case (i.e., if it were true), what possible indications of it could one expect to have in the conditions that exist here and now?” In general, once p is specified, this becomes a perfectly meaningful question, and its answer would call for specifying a series of potentially available evidential indications of such a sort that, if p were actually true, we would expect all of these indicative considerations in fact to obtain. But now suppose that inquiry reveals that they all in fact do so. The result is thus a situation of the format: “By all possibly-available indications, p is actually true.” Where does that put us? Clearly it proves nothing. But nevertheless it puts us into a position where it is only sensible and rational to incline to accepting p. After all, what more could we possibly (reasonably) ask for? What more could we possibly have on p’s behalf than its holding good by all of the possibly available indications? Of course what is at issue is not a deductively valid argument. We certainly do not have it that:

Whenever all of the possibly-available evidence speaks on a proposition’s behalf, that proposition is true.
In accepting theses on inconclusive evidence we always have a chance of error (however small). We always take a risk (however shrewdly calculated). But why should we do so? Only because in practical contexts we need, or at least want, an answer. We take a calculated risk. We balance the penalties of ignorance (of a lack of question-resolution) against the chance of error. What is at issue is thus an inductive (rather than deductive) inference, which by this very fact belongs to the realm of practical (rather than purely theoretical) reasoning. Evidentiation on the basis of all available indicators—unlike acceptance on the basis of all possible indicators—is never conclusive. And acceptance on the basis of inconclusive evidentiation is always a matter of practical (rather than theoretical) reasoning. Here we are simply following the cardinal maxim of practical reasoning:
“Accept the best you can possibly do in the existing circumstances as good enough.”

Of course in purely theoretical contexts we can always wait. We suspend judgment. But this step is not cost-free. Indecision exacts the penalty of ignorance, of controversy, of not having an answer. And rationality here as elsewhere is a matter of balancing costs and benefits. We must weigh the risk of error against the negativity of ignorance and indecision. And here the salient point is that man is a creature that acts on the basis of information. Our inquiries accordingly function not just where we seek information for its own sake—for its purely cognitive satisfaction—but also in contexts of decision and action. And this can endow our information-seeking endeavors with an urgency that is absent in purely theoretical matters.

5. PRACTICAL REASONING AS INFORMATION ENGINEERING

All of reason is a matter of issue-resolution, of problem solving. But the practical and theoretical domains deal with this common issue in characteristically different ways. Precisely because theoretical reason is under no practical pressure to decide issues, it can afford to be uncompromising and perfectionistic. It can—and generally does—insist on certainty, accuracy, consistency and all of the other epistemic virtues. It can afford to say “That’s not absolutely certain—defer judgment!” and “That’s possibly not absolutely accurate—get more information!” As long as an answer can possibly be improved upon, theoretical reason can continue irresolute.

But practical reason cannot afford this sort of line. It functions in regions where more than mere cognitive error is at risk and the stakes are higher than the mere embarrassment of having “got it wrong.” And as these stakes rise, the luxury of indecision in the light of merely possible improvements becomes unaffordable. Where theoretical reason demands perfection, practical reason asks for no more than adequation.
Its imposition is: “Resolve the issue as best you can here-and-now, within the limits and limitations of the prevailing situation, but GET IT RESOLVED!” And this confronts practical reason with a very different sort of requirement, not maximization or uncompromising optimization, but rather mere adequation, mere doing the best one can realistically manage in the circumstances. Practical reason, in sum, requires coming to terms with the limits and limitations of the human situation—with the constraints under which we finite and imperfect inquirers must actually labor.
Insofar as the aim is optimization, it is a matter of optimization under constraint—of doing what is best not necessarily in toto, but only within the limits of the situation. We must resolve matters on the basis of the best indications that are realistically available to us. And this means that in the realm of practice there is an ever-present prospect of error. Fallibilism dogs our footsteps throughout this domain. The cognitive structures we erect in relation to matters of practice are like dams or bridges. We can—and unquestionably should!—endeavor to make them as safe and secure as possible. But we must always and inevitably compromise with the realities. Perfect and absolute safety is no more realizable in cognitive than in physical engineering.

Theoretical reason can neither replace nor substitute for practical reason. But what such reasoning can do is to show that following practical reason’s guidance is the best we can do. For judicious theorizing can manage to show that going by the best indications of the moment affords a better prospect of success than any other alternative policy—any general rule of procedure—that is available to us.

But still, there is a fly in this ointment. For there are no categorical guarantees in matters of rational decision with respect to practice. Whenever we act on incomplete information we can secure no categorical assurance that doing what reason urges in pivotal matters will in actual fact be the best thing to do—that her recommendations will not actually prove counter-productive. And this means that we must live the life of reason in the full recognition that, while always and everywhere insisting on obedience to her requirements, reason nevertheless can issue no certified assurance that in following her counsels as best we can, we may in various circumstances actually damage rather than enhance the prospects of attaining our chosen ends.
Reason readily acknowledges that confident expectation of her own efficacy is something that she simply cannot warrant. To be sure, she sees such optimism as an eminently desirable attitude which deserves every possible encouragement and support. But she nevertheless acknowledges her impotence to guarantee success. No doubt this is a source of unavoidable frustration for reason. And it is a fact of profound irony that assured confidence in the case-specific efficacy of reason requires something of an act of faith.4

NOTES

1
See W. K. Clifford, The Ethics of Belief and Other Essays, ed. by T. Madigan (Amherst, NY: Prometheus Books, 1999).
2
It is alike common and convenient in matters of learning and science to treat ideas and principles eponymously. An eponym, however, is a person for whom something is named, and not necessarily after whom this is done, seeing that eponyms can certainly be honorific as well as genetic. Here at any rate eponyms are sometimes used to make the point that the work of the person at issue has suggested rather than originated the idea or principle at issue.
3
La théorie physique: son objet, et sa structure (Paris: Chevalier and Rivière, 1906); tr. by Philip P. Wiener, The Aim and Structure of Physical Theory (Princeton: Princeton University Press, 1954), see pp. 178-79 (italics supplied). This principle did not elude Niels Bohr himself, the father of complementarity theory in physics: “In later years Bohr emphasized the importance of complementarity for matters far removed from physics. There is a story that Bohr was once asked in German what is the quality that is complementary to truth (Wahrheit). After some thought he answered clarity (Klarheit).” Steven Weinberg, Dreams of a Final Theory (New York: Pantheon Books, 1992), p. 74 footnote 10.
4
For further deliberations on issues relevant to this chapter see the author’s Rationality (Oxford: The Clarendon Press, 199x).
Chapter 8

PRAGMATISM AS A GROWTH INDUSTRY

1. EXPANDING HORIZONS
Throughout the course of its history, pragmatism has seen a constant enlargement in the scope of its operations. With Bentham we have a protopragmatism geared to the legal system of the state, an idea which J. S. Mill expanded with respect to social justice at large. C. S. Peirce saw the focus of the doctrine’s main concern as scientific practice, with its focus on observation and experimentation, and the mode of success on which pragmatic efficacy hinged was satisfactory prediction and control. With William James the range expanded yet further by adding the practice of everyday life experience, with success construed as a matter of living a satisfying and enjoyable life. With John Dewey yet another dimension was added by broadening practice to encompass and indeed prioritize the conduct of public and societal affairs, with success understood in terms of societal welfare and democratic processes. The sequence of developments saw a steady expansion in the envisioned scope of pragmatic praxis, advancing from scientific inquiry (Peirce) to personal life-management (James), to public affairs (Dewey). And later C. I. Lewis and R. Carnap added logic and language to the mix.

2. THE CORE IDEA OF PRAGMATISM

As the history of pragmatism shows, the crux of the theory lies in its core conception that validation lies in successful application. Our ways of thinking and acting are seen as instruments, and the test of their validity lies in the extent to which their use in practice is attended by success. Applicative efficacy is seen as the touchstone of validation. This shared idea works itself out differently in the thought of different pragmatists, coordinate with the range of issues they prioritize. With Peirce it is success in realizing the cognitive objectives of science—explanation, prediction, and control—that is paramount. With James success takes on a personalistic and existentialistic cast, with the achievement
of a satisfying personal life as the hallmark of pragmatic success. And with Dewey it is the ability of policies and procedures to advance democracy and social justice, broadly understood, that is the crux on which pragmatic considerations will pivot. With C. I. Lewis and Carnap it is efficacy and effectiveness in managing information—especially in mathematics.

An idea which finds resonance throughout the history of pragmatism from Peirce to Dewey and beyond is that of what might be characterized as confirmation via conformation. Instrumental artifacts in matters of procedure—our beliefs, practices, and methods—shape our view of reality, so that our world-picture is itself an artifact, the product of inquiry and intellectual construction. But this creative process is carried out in interaction with nature, and the adequacy of its results in cognitive and physical implementations is not up to us but depends critically on how things work out. Thus it is nature which, as the ultimate arbiter of the adequacy of our devisings, is the prime arbiter of appropriateness, because it alone determines successful application and implementation—effective use. Man proposes but nature disposes, and in so doing nature is the arbiter of artifice.

3. EXPANDING PRAGMATISM’S RANGE: METHODOLOGICAL FUNCTIONALISTIC PRAGMATISM

Against this background it is—or should be—clear that pragmatism can be regarded as a very broad and much-encompassing program. For subject to the indicated conception of the crucial ideas at work, the door is thrown open to a far larger range of pragmatic deliberations. After all, virtually any domain of human enterprise, thought, or activity has an aim or object—a reason for being. And on this basis the instrumentalities and methods of its pursuit can be appraised in point of validity and appropriateness by looking to their effectiveness in facilitating realization of the aims and goals at issue.
When matters are viewed in this light the isomorphism of structure prevailing among the different versions of pragmatism becomes apparent. What is at issue throughout is a line of thought answering to the same boilerplate format:

• Within the domain D, the particular items of category C should be accepted as valid and appropriate because their employment in this domain effectively and efficiently facilitates the realization of the aims and goals that are this domain’s definitive characteristics.
____________________________________________________________
Display 1
MODES OF PRAGMATISM

Sphere of Activity | Unit of Concern | Mode of Success | What Success Betokens | Pragmatic Theorist (Theory)

• The state | Law | The general good | Justice | Bentham (Utilitarianism)

• The laboratory (observation and experimentation) | Scientific theories | Satisfactory prediction and control | Truth (or at least verisimilitude) | Peirce (Scientific Pragmatism)

• Individual (personal) life | Beliefs | Living a rewarding, satisfying life | Validation of acceptability | James (Personalistic Pragmatism)

• Public affairs in society and politics | Customs and policies | Societal well-being, social welfare in a democratic society | Practical warrant and appropriateness | Dewey (Social Pragmatism)

• Logic | Logical theses and rules | Effective systematization of a range of discourse (such as mathematics) | Systemic viability | C. I. Lewis and R. Carnap (Logical Pragmatism)

• Any goal-oriented line of endeavor whatsoever | Methods, procedures | Efficacy in goal realization | Adequacy of proceedings (methodological efficacy) | Rescher (Methodological Pragmatism)
____________________________________________________________

Given this generic format, we arrive at the situation depicted in Display No. 1, which clearly illustrates the generic uniformity of pragmatic approaches. And on this basis, the expansion of pragmatism’s range is illustrated in Display No. 2, which shows the diffusion of pragmatism as it has come to expand across the whole range of human endeavor.

4. PRAGMATISM AND PRACTICAL REASON

But just exactly what is the justificatory rationale of such an approach to validation via efficacy? How is it that functional efficacy exerts a justificatory impetus to cogent validation? The answer lies in the very nature of practical reason.
____________________________________________________________
Display No. 2
VARIOUS DISCIPLINES IN PRAGMATIC PERSPECTIVE

Domain of Deliberation | Instrumentality of Proceeding | Aim of the Enterprise

Law | legal system | justice
Natural Science | theories | prediction and control
Politics | policies | quality of life
Ethics | rules of conduct | harmonious personal interaction
Education | teaching methods | instruction
Communication | modes of formulation | transmission of information
Logic | processes of reasoning | truth conservation
Pharmacology | drugs | symptom prevention or alleviation
____________________________________________________________

When understood in the enlarged sense contemplated here, pragmatism is inherent in the very nature of practical rationality. Even as the cardinal value of logical reasoning is cognitive coherence and consistency, so the cardinal value of practical reason is applicative effectiveness and efficacy. “Pursue the aims and objectives you have in view with the greatest realizable efficacy and effectiveness” is not just a good suggestion but the definitive injunction of practical rationality, one that carries a whole host of individual imperatives (regarding efficacy, economy, compatibility, and the like) in its wake. The idea that the adequacy-claims of anything instrumental (methods, policies, processes, procedures) have to be assessed in terms of their functional efficacy and effectiveness lies at the heart of practical reason. And exactly this line of thought affords the defining contention of a functionalistic pragmatism.
5. THE ONTOLOGICAL ASPECT

This practicalistic approach may seem to indicate that functionalistic pragmatism is entirely a matter of procedural rationality that has no bearing upon ontological matters. But this inference would be quite incorrect. Whether it is beliefs, mechanisms, methods, practices, or policies that are at issue, man proposes but nature disposes. Success in implementation, application, and praxis is the standard of adequacy for our endeavors. The constructive initiative is ours, but nature remains in charge. Whether the buildings we construct stand or fall, the machines we build fly or crash, the beliefs we adopt engender success or disaster—all this is something that is not up to us but up to nature. The adequacy of what we do is always a matter of its securing nature’s Good Housekeeping Seal of Approval by means of success in application and implementation. Pragmatism’s coordination of procedure and reality is to be seen as having two directions, as per the following design:

                       ONTOLOGICAL
                     ------------->
    normative adequacy             successful practice
                     <-------------
                        EPISTEMIC

As this picture indicates, normative adequacy is seen as the explanatory basis of successful praxis (its ratio essendi). But in the reverse direction successful praxis is seen as the evidential basis of this normative adequacy (its ratio cognoscendi). The structure of reasoning is circular, but the circle is not vicious but virtuous. It is a matter of the systemic coherence and coordination that one would not just accept but require in matters of this kind. And just this ensures a harmonious coordination of cognitive process with objective reality.

6. PRAGMATIC VALIDATION

What this panoramic perspective upon pragmatism makes clear is the fundamentally constructive nature of the program. Over the years, many of pragmatism’s false friends have viewed the venture as having a fundamentally negative import, subject to the idea of dismissing or downgrading the role of theory. They see pragmatism as conveying the following injunction: “Do not worry yourself about theoretical matters; let theory alone and focus on practical matters instead.” But this idea of divorcing theory from the arena of concern is a betrayal of the historical integrity of the program. For pragmatism has come on the scene not to dismiss theory but to validate it. A functionalistic pragmatism of the traditional sort sees the efficacy of application as a means of testing and validating the adequacy of theories. Such a pragmatism brings the issue of pragmatic efficacy upon the stage not as a replacement for theorizing but as an arbiter of its adequacy, and thereby as an instrumentality for its validation. For as Peirce already saw it, the mission of pragmatism is not to dismiss objectivity and realism but to provide them with a rational basis.1

NOTES

1

Some of the themes of this chapter are also treated in the author’s Realistic Pragmatism (Albany, NY: State University of New York Press, 2000).
Chapter 9

COST-BENEFIT EPISTEMOLOGY

1. THE ECONOMIC DIMENSION OF KNOWLEDGE: COSTS AND BENEFITS

From the very start of the subject, students of knowledge have approached the issues from a purely theoretical angle, viewing epistemology not as the study of knowledge but quite literally as the theory of knowledge. But this is not altogether a good situation. For in taking this approach one loses sight of something critically important, namely that there is a seriously practical and pragmatic aspect to knowledge in whose absence important features of knowledge are destined to be neglected or, even worse, misunderstood. And in particular, what we are going to lose sight of is the profoundly economic dimension of knowledge.

Knowledge has a significant economic dimension because of its substantial involvement with costs and benefits. Many aspects of the way we acquire, maintain, and use our knowledge can be properly understood and explained only from an economic point of view. Attention to economic considerations regarding the costs and benefits of the acquisition and management of information can help us both to account for how people proceed in cognitive matters and to provide normative guidance toward better serving the aims of the enterprise. Any theory of knowledge that ignores this economic aspect does so at the risk of its own adequacy.

Homo sapiens has evolved within nature to fill the ecological niche of an intelligent being. The demand for understanding, for a cognitive accommodation to one’s environment, for “knowing one’s way about,” is one of the most fundamental requirements of the human condition. Humans are Homo quaerens. We have questions for which we want and indeed need answers. The need for information, for cognitive orientation in our environment, is as pressing a human need as that for food itself. We are rational animals and must feed our minds even as we must feed our bodies.

In pursuing information, as in pursuing food, we have to settle for the best we can get at the time, regardless of its imperfections.
With us humans, the imperative to understanding is something altogether basic: things being as they are, we cannot function, let alone thrive, without knowledge of what goes on about us. The need for information, for knowledge to nourish the mind, is every bit as critical as the need for food to nourish the body. Cognitive vacuity or dissonance is as distressing to us as physical pain. Bafflement and ignorance—to give suspensions of judgment the somewhat harsher name that is their due—exact a substantial price from us. The quest for cognitive orientation in a difficult world represents a deeply practical requisite for us. That basic demand for information and understanding presses in upon us and we must do (and are pragmatically justified in doing) what is needed for its satisfaction. For us, cognition is the most practical of matters: knowledge itself fulfils an acute practical need.

The introduction of such an economic perspective does not of course detract from the value of the quest for knowledge as an intrinsically worthy venture with a perfectly valid l’art pour l’art aspect. But as Charles S. Peirce emphasized, one must recognize the inevitably economic aspect of any rational human enterprise—inquiry included. It has come to be increasingly apparent in recent years that knowledge is cognitive capital, and that its development involves the creation of intellectual assets, in which both producers and users have a very real interest. Knowledge, in short, is a good of sorts—a commodity on which one can put a price tag and which can be bought and sold much like any other—save that the price of its acquisition often involves not just money alone but other resources, such as time, effort, and ingenuity. Man is a finite being who has only limited time and energy at his disposal. And even the development of knowledge, important though it is, is nevertheless of limited value: it is not worth the expenditure of every minute of every day at our disposal.
As these deliberations indicate, the economic perspective of costs and benefits has a direct and significant bearing on matters of information acquisition and management. The benefits of knowledge are twofold: theoretical (or purely cognitive) and practical (or applied). A brief survey of the situation might look somewhat as per Display 1.
_______________________________________________________
Display 1
COGNITIVE BENEFITS

I. Theoretical
—Answers our questions about how things stand in the world and how they function.
—Guides our expectations by way of anticipation.

II. Practical
—Guides our actions in ways that enable us to control (parts of) the course of events.
_______________________________________________________

And of course knowledge does not come cost-free. Its acquisition comes at the cost of effort, and this effort demands the expenditure of time, energy, ingenuity, and a vast spectrum of material resources. Information, in sum, is a product of resource expenditure. Some of the key costs involved relate to the following processes:

• Research/discovery (manpower, time, equipment/technology)
• Systematization (explanation, weaving information into the fabric of knowledge)
• Dissemination—making it available
• Communication—making it accessible
• Education—putting people into a position to understand

Three things are fundamentally at issue: (1) Enhancing our understanding of the world we live in by way of insight into matters of description and explanation. (2) Averting unpleasant surprises by providing us with predictive insight. And (3) Guiding effective action by enabling us to achieve at least partial control over the course of events.

_______________________________________________________
Display 2
THE COMPLEMENTARITY OF EPISTEMIC VIRTUES

A gain in point of . . . | is counterbalanced by a loss in . . .

Security | Definiteness
Generality | Particularity
Plausibility | Interest
Novelty | Entrenchment
Probability | Range
Precision | Likelihood
_______________________________________________________

The theoretical/cognitive benefits of knowledge relate to its satisfactions in and for itself, for understanding is an end unto itself and, as such, is the bearer of important and substantial benefits—benefits which are purely cognitive, relating to the informativeness of knowledge as such. The practical benefits of knowledge, on the other hand, relate to its role in guiding the processes by which we satisfy our (non-cognitive) needs and wants. The satisfaction of our needs for food, shelter, protection against the elements, and security against natural and human hazards all require information. And the satisfaction of mere desiderata comes into it as well. We can, do, and must put knowledge to work to facilitate the attainment of our goals, guiding our actions and activities in this world into productive and rewarding lines. And so the impetus to inquiry—to investigation, research, and the acquisition of information—can thus be validated in strictly economic terms with a view to potential benefits of both theoretical and practical sorts.
_______________________________________________________
Display 3
EPISTEMIC VIRTUES

I. Internal (Contentual/Substantive/Assertoric) Virtues
• Informativeness (generality, reach and range of application)
• Definiteness (precision, avoidance of vagueness and equivocation)
• Specificity/Detail

II. External (Reliability-oriented) Virtues
• Verisimilitude/truthfulness
• Evidentiation
• Probability
• Source reliability

III. Contextual Virtues
• Newsworthiness (novelty, information augmentation)
• Coherence (contextual fit; entrenchment)
_______________________________________________________

2. VIRTUE COMPLEMENTARITY

Rational inquiry can be viewed as a quasi-economic process of negotiation between cognitive benefits and costs. Let us begin with benefits. For reasons that are not easy to understand, theorists of knowledge have not come to terms with the full range of positivities that can be exhibited by statements but have focused on a few favorites. But as Display 3 shows, there is a considerable spectrum of prime virtues appertaining to factual claims. Interestingly, these virtues are often not consilient but conflicting: the advantages gained in one respect are generally counterbalanced by negativities in another. A sort of complementarity relationship operates, so that a gain on one side is counterbalanced by a loss on the other. Display 2 illustrates the situation at issue here.
Consider just one example. It is a basic principle of epistemology that increased confidence in the correctness of our estimates can always be secured at the price of decreased accuracy. For in general an inverse relationship obtains between the definiteness or precision of our information and its substantiation: detail and security stand in a competing relationship. We estimate the height of the tree at around 25 feet. We are quite sure that the tree is 25 ± 5 feet high. We are virtually certain that its height is 25 ± 10 feet. But we can be completely and absolutely sure that its height is between 1 inch and 100 yards. Of this we are “completely sure” in the sense that we are “absolutely certain,” “certain beyond the shadow of a doubt,” “as certain as we can be of anything in the world,” “so sure that we would be willing to stake our life on it,” and the like. For any sort of estimate whatsoever there is always a characteristic trade-off relationship between the evidential security of the estimate, on the one hand (as determinable on the basis of its probability or degree of acceptability), and on the other hand its contentual detail (definiteness, exactness, precision, etc.). And so these two factors—security and detail—stand in a relation of inverse proportionality, as per the picture of Display 4.

Overall, then, we face the reality that there is an effectively inevitable trade-off among the cognitive virtues. The benefit gained by an increase in one gets counterbalanced by the cost of a decrease in another. This state of things comes to the fore when we consider the problem of skepticism.

3. SCEPTICISM AND RISK

The scientific researcher, the inquiring philosopher, and the plain man all desire and strive for information about the “real” world. The sceptic rejects their ventures as vain and their hopes as foredoomed to disappointment from the very outset.
As he sees it, any and all sufficiently trustworthy information about factual matters is simply unavailable as a matter of general principle. To put such a radical scepticism into a sensible perspective, it is useful to consider the issue of cognitive rationality in the light of the situation of risk taking in general. For cognitive efficacy calls for a judicious balance between vacuity and potential error.
_______________________________________________________
Display 4
THE COMPLEMENTARITY TRADE-OFF BETWEEN SECURITY
AND DEFINITENESS IN ESTIMATION

[Graph: the curve s × d = c (constant), with increasing detail (d) on the horizontal axis and increasing security (s) on the vertical axis.]

NOTE: The shaded region inside the curve represents the parametric range of achievable information, with the curve indicating the limit of what is realizable. The concurrent achievement of great detail and security is impracticable.
_______________________________________________________
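The tree-height example can be put in rough quantitative terms. The following is a minimal sketch, not anything from the text itself: it assumes, purely for illustration, that the detail d of an interval estimate is the reciprocal of the interval's width and that the trade-off constant c is 1.

```python
# Illustrative sketch of the hyperbolic trade-off s * d = c between the
# security (s) of an estimate and its detail (d), as in Display 4.
# Measuring detail as 1/width is a hypothetical modeling assumption.

def security(detail: float, c: float = 1.0) -> float:
    """Security attainable at a given level of detail, given s * d = c."""
    return c / detail

# The tree-height example: wider intervals are less detailed,
# but can be asserted with correspondingly greater security.
for width_ft in (5, 10, 300):   # +/-5 ft, +/-10 ft, roughly 1 in to 100 yd
    d = 1.0 / width_ft          # detail shrinks as the interval widens
    print(f"+/-{width_ft} ft: detail = {d:.4f}, security = {security(d):.1f}")
```

On this toy rendering, tripling the tolerated interval width triples the attainable security, mirroring the inverse proportionality the text describes.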
There are three very different sorts of personal approaches to risk, and three very different sorts of personalities corresponding to these approaches, as follows:

Type 1: Risk avoiders
Type 2: Risk calculators
  2.1: cautious
  2.2: daring
Type 3: Risk seekers

The type 1, risk-avoidance, approach calls for risk aversion and evasion. Its adherents have little or no tolerance for risk and gambling. Their approach to risk is altogether negative. Their mottos are: Take no chances; Always expect the worst; Play it safe. The type 2, risk-calculating, approach to risk is more realistic. It is a guarded middle-of-the-road position, based on due care and calculation. It comes in two varieties. The type 2.1, cautiously calculating, approach sees risk taking as subject to a negative presumption,
which can, however, be defeated by suitably large benefits. Its line is: Avoid risks unless it is relatively clear that a suitably large gain beckons at sufficiently auspicious odds. It reflects the path of prudence and guarded caution. The type 2.2, daringly calculating, approach sees risk taking as subject to a positive presumption, which can, however, be defeated by suitably large negativities. Its line is: Be prepared to take risks unless it is relatively clear that an unacceptably large loss threatens at sufficiently inauspicious odds. It reflects the path of optimistic hopefulness. The type 3, risk-seeking, approach sees risk as something to be welcomed and courted. Its adherents close their eyes to danger and take a rosy view of risk situations. The mind of the risk seeker is intent on the delightful situation of a favorable issue of events: the sweet savor of success is already in his nostrils. Risk seekers are chance takers and go-for-broke gamblers. They react to risk the way an old warhorse responds to the sound of musketry: with eager anticipation and positive relish. Their motto is: Things will work out. In the conduct of practical affairs, risk avoiders are hypercautious; they have no stomach for uncertainty and insist on playing it absolutely safe. In any potentially unfavorable situation, the mind of the risk avoider is given to imagining the myriad things that could go wrong. Risk seekers, on the other hand, leap first and look later, apparently counting on a benign fate to ensure that all will be well; they dwell in the heady atmosphere of “anything may happen.” Risk calculators take a middle-of-the-road approach. Proceeding with care, they take due safeguards but still run risks when the situation looks sufficiently favorable. It is thus clear that people can have very different attitudes toward risk. So much for risk taking in general. Let us now look more closely at the cognitive case in particular.
The situation with regard to specifically cognitive risks can be approached as simply a special case of the general strategies sketched above. In particular, it is clear that risk avoidance stands coordinate with scepticism. The sceptic’s line is: Run no risk of error; take no chances; accept nothing that does not come with ironclad guarantees. And the proviso here is largely academic, seeing that little if anything in this world comes with ironclad guarantees—certainly nothing by way of interesting knowledge. By contrast, the adventuresome syncretist is inclined, along with radical Popperians
such as P. K. Feyerabend, to think that anything goes. His cognitive stance is tolerant and open to input from all quarters. He is gullible, as it were, and stands ready to endorse everything and to see good on all sides. The evidentialist, on the other hand, conducts his cognitive business with comparative care and calculation, regarding various sorts of claims as perfectly acceptable, provided that the evidential circumstances are duly favorable. In matters of cognition, the sceptic accepts nothing, the evidentialist only the chosen few, the syncretist virtually anything. In effect, the positions at issue in scepticism, syncretism, and evidentialism simply replicate, in the specifically cognitive domain, the various approaches to risks at large.

It must, however, be recognized that in general two fundamentally different kinds of misfortunes are possible in situations where risks are run and chances taken:

1. We reject something that, as it turns out, we should have accepted. We decline to take the chance, we avoid running the risk at issue, but things turn out favorably after all, so that we lose out on the gamble.

2. We accept something that, as it turns out, we should have rejected. We do take the chance and run the risk at issue, but things go wrong, so that we lose the gamble.

If we are risk seekers, we will incur few misfortunes of the first kind, but, things being what they are, many of the second kind will befall us. On the other hand, if we are risk avoiders, we shall suffer few misfortunes of the second kind, but shall inevitably incur many of the first. The overall situation has the general structure depicted in Display 5. Clearly, the reasonable thing to do is to adopt a policy that minimizes misfortunes overall. It is thus evident that both type 1 and type 3 approaches will, in general, fail to be rationally optimal. Both approaches engender too many misfortunes for comfort.
The sensible and prudent thing is to adopt the middle-of-the-road policy of risk calculation, striving as best we can to balance the positive risks of outright loss against the negative ones of lost opportunity. Rationality thus counterindicates approaches of type 1 and type 3, taking the line of the counsel: neither avoid nor court risks, but manage them prudently in the search for an overall minimization of misfortunes. The rule of reason calls for sensible management and a prudent calculation of risks; it standardly enjoins upon us the Aristotelian golden mean between the extremes of risk avoidance and risk seeking.

_______________________________________________________
Display 5
RISK ACCEPTANCE AND MISFORTUNES

[Graph: the number of (significant) misfortunes plotted against increasing risk acceptance (in % of situations, from 0 to 100), running from Type 1 (risk avoiders) through Type 2.1 (cautious calculators) and Type 2.2 (daring calculators) to Type 3 (risk seekers). Misfortunes of kind 1 decline, and misfortunes of kind 2 rise, as risk acceptance increases.]
_______________________________________________________

Turning now to the specifically cognitive case, it may be observed that the sceptic succeeds splendidly in averting misfortunes of the second kind. He makes no errors of commission; by accepting nothing, he accepts nothing false. But, of course, he loses out on the opportunity to obtain any sort of information. The sceptic thus errs on the side of safety, even as the syncretist errs on that of gullibility. The sensible course is clearly that of a prudent calculation of risks.

Being mistaken is unquestionably a negativity. When we accept something false, we have failed in our endeavors to get a clear view
of things—to answer our questions correctly. And moreover, mistakes tend to ramify, to infect environing issues. If I (correctly) realize that P logically entails Q but incorrectly believe not-Q, then I am constrained to accept not-P, which may well be quite wrong. Error is fertile of further error. So quite apart from practical matters (suffering painful practical consequences when things go wrong), there are also the purely cognitive penalties of mistakes—entrapment in an incorrect view of things. All this must be granted and taken into account. But the fact remains that errors of commission are not the only sort of misfortune there is.1 Ignorance, lack of information, cognitive disconnection from the world’s course of things—in short, errors of omission—are also negativities of substantial proportions. This too is something we must work into our reckoning. In claiming that his position wins out because it makes the fewest mistakes, the sceptic uses a fallacious system of scoring, for while he indeed makes the fewest errors of one kind, he does this at the cost of proliferating those of another. Once we look on this matter of error realistically, the sceptic’s vaunted advantage vanishes. The sceptic is simply a risk avoider, who is prepared to take no risks and who stubbornly insists on minimizing errors of the second kind alone, heedless of the errors of the first kind into which he falls at every opportunity.

Ultimately, then, we face a question of value trade-offs. Are we prepared to run a greater risk of mistakes to secure the potential benefit of an enlarged understanding? In the end, the matter is one of priorities—of safety as against information, of ontological economy as against cognitive advantage, of an epistemological risk aversion as against the impetus to understanding. The ultimate issue is one of values and priorities, weighing the negativity of ignorance and incomprehension against the risk of mistakes and misinformation.
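The scoring point against the sceptic can be made vivid with a toy calculation. What follows is a hypothetical sketch, not the author's model: the uniform claim probabilities, the acceptance threshold, and the expected-error bookkeeping are all illustrative assumptions chosen only to show why minimizing one kind of error alone is a fallacious system of scoring.

```python
# Toy model of the trade-off between errors of omission (kind 1:
# wrongly rejected truths) and errors of commission (kind 2: wrongly
# accepted falsehoods) as one's risk acceptance rises from the
# sceptic's zero toward the syncretist's "accept everything".

import random

random.seed(0)
claims = [random.random() for _ in range(1000)]  # truth-probabilities

def misfortunes(acceptance: float) -> tuple:
    """Expected errors of kind 1 and kind 2 when we accept exactly
    those claims whose chance of falsity is within our acceptance."""
    kind1 = sum(p for p in claims if (1 - p) > acceptance)       # rejected but true
    kind2 = sum(1 - p for p in claims if (1 - p) <= acceptance)  # accepted but false
    return kind1, kind2

for level in (0.0, 0.5, 1.0):  # sceptic, calculator, syncretist
    k1, k2 = misfortunes(level)
    print(f"acceptance {level:.1f}: kind1 = {k1:.0f}, kind2 = {k2:.0f}, total = {k1 + k2:.0f}")
```

On these assumptions the sceptic (acceptance 0) indeed commits no errors of the second kind, but his total expected misfortune count is worse than the middle-of-the-road calculator's, since the omissions he racks up swamp the commissions he avoids.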
And here the sceptic’s insistence on safety at any price is simply unrealistic, and it is so on the essentially economic basis of a sensible balance of costs and benefits. Risk of error is worth running because it is unavoidable in the context of the cognitive project of rational inquiry. Here as elsewhere, the situation is simply one of nothing ventured, nothing gained. Since Greek antiquity, various philosophers have answered our present question, Why accept anything at all?, by taking the line that man is a rational animal. Qua animal, he must act, since his very survival depends upon action. But
Nicholas Rescher • Being and Value
qua rational being, he cannot act availingly, save insofar as his actions are guided by his beliefs, by what he accepts. This argument has been revived in modern times by a succession of pragmatically minded thinkers, from David Hume to William James. The contrasting line of reasoning of our present position runs: If you want to enter into the cognitive enterprise, that is, if you wish to be in a position to secure information about the world and to achieve a cognitive orientation within it, then you must be prepared to accept something. Both approaches take a stance that is not categorical and unconditional, but rather hypothetical and conditional. But in the classically pragmatic case, the focus is upon the requisites for effective action, while our present, cognitively oriented approach focuses upon the requisites for rational inquiry. The one approach is purely practical, the other also theoretical. On the present perspective, then, it is the negativism of automatically frustrating our basic cognitive aims (no matter how much the sceptic himself may be willing to turn his back upon them) that constitutes the salient theoretical impediment to scepticism in the eyes of most sensible people. The crucial defect of scepticism is that it is simply uneconomic.

4. DIMINISHING RETURNS

An important principle that is familiar from a wide variety of contexts can be summarized in two words: Quality costs. And this also holds for knowledge—in spades. For when other things are anything like equal, the higher the quality of information—the greater its significance in the overall scheme of things—the more its ascertainment requires by way of resource investment. Knowledge in effect is high-grade information—information that is cognitively significant. And the significance of incrementally new information can be measured in terms of how much it adds, and thus by the ratio of the increment of new information to the volume of information already in hand: ∆I/I.
Thus knowledge-constituting significant information is determined through the proportional extent of the change effected by a new item in the preexisting situation (independently of what that preexisting situation is). In milking additional information for cognitively significant insights it is generally the proportion of the increase that matters: its percentage rather than its brute amount. And so, with high-quality information or knowledge it is centrally a matter of how much a piece of information ∆I adds to the total of what was available heretofore, I. Looking from this perspective at the development of knowledge as a sum-total of such augmentations we have it that the total of high-grade information comes to

K = ∫ dI/I ≅ log I

On this basis, viewing knowledge as significant information we have it that the body of knowledge stands not as the amount of information-to-date but rather merely as its logarithm. We have here an epistemic principle that might be called The Law of Logarithmic Returns. It is not too difficult to come by a plausible explanation for the sort of information/knowledge relationship that is represented by K = log I. The principal reason for such a K/I imbalance may lie in the efficiency of intelligence in securing a view of the modus operandi of a world whose law-structure is comparatively simple. For here one can learn a disproportionate amount of general fact from a modest amount of information. (Note that whenever an infinite series of 0’s and 1’s, as per 01010101 . . ., is generated—as this series indeed is—by a relatively simple law, then this circumstance can be gleaned from a comparatively short initial segment of this series.) In rational inquiry we try the simple solutions first, and only if and when they cease to work—when they are ruled out by further findings (by some further influx of coordinating information)—do we move on to the more complex. Things go along smoothly until an oversimple solution becomes destabilized by enlarged experience. We get by with the comparatively simpler options until the expanding information about the world’s modus operandi made possible by enhanced new means of observation and experimentation demands otherwise. But with the expansion of knowledge new accessions set ever increasing demands. The implications for cognitive progress of this disparity between mere information and authentic knowledge are not difficult to discern.
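The relationship K = log I can be checked numerically. The sketch below is my own illustration, not part of the text: it accumulates the proportional knowledge increments ∆I/I step by step and compares the running total against the natural logarithm.

```python
# Numerical check of the Law of Logarithmic Returns: if each knowledge
# increment is the proportional information increment dI/I, the accumulated
# knowledge K tracks log I.  (Illustrative sketch; units are arbitrary.)

import math

def knowledge(I_final, I_start=1.0, steps=100_000):
    """Accumulate K = sum of dI/I over small steps (a crude integral)."""
    K = 0.0
    dI = (I_final - I_start) / steps
    I = I_start
    for _ in range(steps):
        K += dI / I   # each increment is weighted by the stock already in hand
        I += dI
    return K

for I in (10, 100, 1000):
    print(I, round(knowledge(I), 4), round(math.log(I), 4))
# Each tenfold growth of information adds the same fixed increment
# (~2.30 = ln 10) of knowledge.
```

The point of the sketch is simply that multiplying the stock of information by a fixed factor adds only a fixed amount of knowledge, which is the diminishing-returns picture the chapter goes on to develop.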
Nature imposes increasing resistance barriers to intellectual as to physical penetration. Consider the analogy of extracting air for creating a vacuum. The first 90% comes out rather easily. The next 9% is effectively as difficult to extract as all that went before. The next .9% is proportionally just as difficult. And so on. Each successive order-of-magnitude step involves a massive cost for lesser progress; each successive fixed-size investment of effort yields a substantially diminished return. The circumstance that the increase of information carries with it a merely logarithmic return in point of increased knowledge suggests that nature imposes a resistance barrier to intellectual as much as to physical penetration. Intellectual progress is exactly the same: when we extract actual knowledge (i.e. high-grade, nature-descriptively significant information) from mere information of the routine, common “garden variety,” the same sort of quantity/quality relationship obtains. Initially a sizable proportion of the available information is high grade—but as we press further this proportion of what is cognitively significant gets ever smaller. To double knowledge we must square the volume of information in hand. As science progresses, the important discoveries that represent real increases in knowledge are surrounded by an ever vaster penumbra of mere items of information. (The mathematical literature of the day yields an annual crop of over 200,000 new theorems.2)

5. PLANCK’S PRINCIPLE

In the ongoing course of scientific progress, the earlier investigations in the various departments of inquiry are able to skim the cream, so to speak: they take the “easy pickings,” and later achievements of comparable significance require ever deeper forays into complexity and call for ever-increasing bodies of information. (And it is important to realize that this cost-increase is not because latter-day workers are doing better science, but simply because it is harder to achieve the same level of science: one must dig deeper or search wider to achieve results of the same significance as before.) This situation is reflected in Max Planck’s appraisal of the problems of scientific progress.
He wrote that “with every advance [in science] the difficulty of the task is increased; ever larger demands are made on the achievements of researchers, and the need for a suitable division of labor becomes more pressing.”3 The Law of Logarithmic Returns would at once characterize and explain this circumstance of what can be termed Planck’s Principle of Increasing Effort, to the effect that substantial findings are easier to come by in the earlier phase of a new discipline and become ever more difficult in the natural course of progress. A great deal of impressionistic and anecdotal evidence certainly points towards the increasing costs of high-level science. Scientists frequently complain that “all the easy researches have been done.”4 The need for increasing specialization and division of labor is but one indication of this. A devotee of scientific biography cannot help noting the disparity between the immense output and diversified fertility in the productive careers of the scientific colossi of earlier days and the more modest scope of the achievements of their latter-day successors. As science progresses within any of its established branches, there is a marked increase in the over-all resource-cost of realizing scientific findings of a given level of intrinsic significance (by essentially absolutistic standards of importance).5 At first one can skim the cream, so to speak, taking the “easy pickings”; later achievements of comparable significance require ever deeper forays into complexity and call for an ever-increasing investment of effort and material resources. And it is important to realize that this cost-increase is not because latter-day workers are doing better science, but simply because it is harder to achieve the same level of science: one must dig deeper or search wider to find more of the same kind of thing as before. And this at once explains a change in the structure of scientific work that has frequently been noted: first-rate results in science nowadays come less and less from the efforts of isolated workers and increasingly from cooperative efforts in the great laboratories and research institutes.6 The idea that science is not only subject to a principle of escalating costs but to a law of diminishing returns as well is due to the nineteenth-century American philosopher of science Charles Sanders Peirce (1839-1914).
In his pioneering 1878 essay on “Economy of Research” Peirce put the issue in the following terms: We thus see that when an investigation is commenced, after the initial expenses are once paid, at little cost we improve our knowledge, and improvement then is especially valuable; but as the investigation goes on, additions to our knowledge cost more and more, and, at the same time, are of less and less worth. All the sciences exhibit the same phenomenon, and so does the course of life. At first we learn very easily, and the interest of experience is very great; but it becomes harder and harder, and less and less worthwhile. . . . (Collected Papers, Vol. VII [Cambridge, Mass., 1958], sect. 7.144.)
The growth of knowledge over time involves ever-escalating demands. Progress is always possible—there are no absolute limits. More information will always yield proportionally greater knowledge. For the increase of knowledge over time stands to the increase of information in a proportion fixed by the inverse of the volume of already available information:

dK/dt ≅ d(log I)/dt ≅ (1/I) × dI/dt
The more knowledge we already have in hand, the slower (by very rapid decline) will be the rate at which knowledge grows with newly acquired information. And with the progress of inquiry, the larger the body of available information, the smaller will be the proportion of this information that represents real knowledge. Consider an example. In regard to the literature of science, it is readily documented that the number of books, of journals, of journal-papers has been increasing exponentially over the recent period.7 Indeed, it is a familiar fact that scientific information has been growing at an average of some 5 percent annually throughout the last two centuries, manifesting exponential growth with a doubling time of ca. 15 years—an order-of-magnitude increase roughly every half century. By 1960, some 300,000 different book titles were being published in the world, and the two decades from 1955 to 1975 saw the doubling of titles published in Europe from around 130,000 to over 270,000,8 and science has had its full share of this literature explosion. The result is a veritable flood of scientific literature. As Display 5 indicates, it can be documented that the number of scientific books, of journals, and of journal-papers has been increasing at an exponential rate over the recent period. It is reliably estimated that, from the start, about 10 million scientific papers have been published and that currently some 30,000 journals publish some 600,000 new papers each year. However, let us now turn attention from scientific production to scientific progress. The picture that confronts us here is not quite so expansive. For there is in fact good reason for the view that the substantive level of scientific innovation has remained roughly constant over the last few generations. This contention—that while scientific efforts have grown exponentially, nevertheless the production of really high-level scientific findings has remained constant—admits of various substantiating considerations. One indicator of this constancy in high-quality science is the relative stability of honors (medals, prizes, honorary degrees, membership in scientific academies, etc.). To be sure, in some instances these reflect a fixed-number situation (e.g., Nobel prizes in natural science). But if the volume of clearly first-rate scientific work were expanding drastically, there would be mounting pressure for the enlargement of such honorific awards and mounting discontent with the inequity of the present reward-system. There are no signs of this. A host of relevant considerations thus conspire to indicate that while science grows exponentially as a productive enterprise, its growth as an intellectual discipline proceeds at a merely constant and linear pace.9 The Law of Logarithmic Returns thus has substantial implications for the rate of scientific progress.10 For while one cannot hope to predict the content of future science, the knowledge/information relationship does actually put us into a position to make plausible estimates about its volume. To be sure, there is, on this basis, no inherent limit to the possibility of future progress in scientific knowledge. But the exploitation of this theoretical prospect gets ever more difficult, expensive, and demanding in terms of effort and ingenuity. New findings of equal significance require ever greater aggregate efforts. Accordingly, the historical situation has been one of a constant progress of science as a cognitive discipline notwithstanding its exponential growth as a productive enterprise (as measured in terms of resources, money, manpower, publications, etc).11 If we look at the cognitive situation of science in its quantitative aspect, the Law of Logarithmic Returns pretty much says it all.
On its perspective, the struggle to achieve cognitive mastery over nature presents a succession of ever-escalating demands, with the exponential growth in the enterprise associated with a merely linear growth in the discipline.
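This quantitative picture admits of a simple numerical sketch. The following is my own illustration, not the author's: the 5 percent growth rate is the figure cited above; the initial stock of information and the time horizon are arbitrary assumptions.

```python
# Illustrative arithmetic: information growing at ~5% per year is
# exponential, while knowledge, as its logarithm, grows only linearly.

import math

rate = 0.05   # assumed ~5% annual growth of the scientific literature
doubling_time = math.log(2) / math.log(1 + rate)
print(round(doubling_time, 1))   # ~14.2 years, i.e. roughly the "ca. 15 years" cited

I0 = 1.0   # arbitrary initial stock of information
for year in (0, 50, 100, 150, 200):
    I = I0 * (1 + rate) ** year   # exponential growth of information
    K = math.log(I)               # knowledge as the log of information
    print(year, round(I), round(K, 1))
# Information explodes by orders of magnitude, while K rises by the same
# fixed amount (~2.4 units) every 50 years: an exponentially growing
# enterprise, a merely linearly growing discipline.
```

The constant 50-year increment of K is just the "order-of-magnitude increase roughly every half century" on the information side, read off on the knowledge side.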
Display 5
THE NUMBER OF SCIENTIFIC JOURNALS AND ABSTRACT JOURNALS FOUNDED, AS A FUNCTION OF DATE
[Log-scale plot of the number of journals (10 to 1,000,000) against date, from 1665 to 2000: scientific journals and, later, abstract journals each trace a rising exponential curve.]
Source: Derek J. de Solla Price, Science Since Babylon (New Haven: Yale University Press, 1961).
NOTES

1. As William James said: “[Someone] who says ‘Better to go without belief forever than believe a lie!’ merely shows his own preponderant private horror of becoming a dupe. . . . But I can believe that worse things than being duped may happen to a man in this world” (The Will to Believe (New York, London and Bombay: Longmans, Green, and Co., 1897), pp. 18-19).
2. See Stanislaw M. Ulam, Adventures of a Mathematician (New York: Scribner, 1976).
3. Max Planck, Vorträge und Erinnerungen, 5th ed. (Stuttgart, 1949), p. 376; italics added. Shrewd insights seldom go unanticipated, so it is not surprising that other theorists should be able to contest claims to Planck’s priority here. C. S. Peirce is particularly noteworthy in this connection.
4. See William George, The Scientist in Action (London, 1936), p. 307. The sentiment is not new. George Gore vainly lambasted it 100 years ago: “Nothing can be more puerile than the complaints sometimes made by certain cultivators of a science, that it is very difficult to make discoveries now that the soil has been exhausted, whereas they were so easily made when the ground was first broken. . . .” The Art of Scientific Discovery (London, 1878), p. 21.
5. The following passage offers a clear token of the operation of this principle specifically with respect to chemistry: Over the past ten years the expenditures for basic chemical research in universities have increased at a rate of about 15 per cent per annum; much of the increase has gone for superior instrumentation, [and] for the staff needed to service such instruments. . . . Because of the expansion in research opportunities, the increased cost of the instrumentation required to capitalize on these opportunities, and the more highly skilled supporting personnel needed for the solution of more difficult problems, the cost of each individual research problem in chemistry is rising rapidly. (F. H. Westheimer et al., Chemistry: Opportunities and Needs [Washington, D.C.: National Academy of Sciences/National Research Council, 1965], p. 17.)
6. The talented amateur has virtually been driven out of science. In 1881 the Royal Society included many fellows in this category (with Darwin, Joule, and Spottiswoode among the more distinguished of them). Today there are no amateurs. See D. S. L. Cardwell, “The Professional Society,” in Norman Kaplan (ed.), Science and Society (Chicago, 1965), pp. 86-91 (see p. 87).
7. Cf. Derek J. de Solla Price, Science Since Babylon, 2nd ed. (New Haven, CT: Yale University Press, 1975), and also Characteristics of Doctoral Scientists and Engineers in the University System, 1991 (Arlington, VA: National Science Foundation, 1994), Document No. 94-307.
8. Data from An International Survey of Book Production During the Last Decades (Paris: UNESCO, 1985).
9. See also Nicholas Rescher, Scientific Progress (Oxford: Blackwell, 1976).
10. It might be asked: “Why should a mere accretion in scientific ‘information’—in mere belief—be taken to constitute progress, seeing that those later beliefs are not necessarily true (even as the earlier ones were not)?” The answer is that they are in any case better substantiated—that they are “improvements” on the earlier ones by way of the elimination of shortcomings. For a more detailed consideration of the relevant issues, see the author’s Scientific Realism (Dordrecht: D. Reidel, 1987).
11. To be sure, we are caught up here in the usual cyclic pattern of all hypothetico-deductive reasoning. In addition to explaining the various phenomena we have been canvassing, that projected K/I relationship is in turn substantiated by them. This is not a vicious circularity but simply a matter of the systemic coherence that lies at the basis of inductive reasonings. Of course the crux is that there should also be some predictive power, which is exactly what our discussion of deceleration is designed to exhibit.
Chapter 10
QUANTIFYING QUALITY (ON THE THEORY OF ELITES)

1. QUALITY COMPARISONS ENGENDER ELITES
Quality and quantity are usually seen as terms of contrast. So regarded, the idea of quantifying quality would be seen as a contradiction in terms. But this does not do justice to the actual situation. Sometimes quality is measured directly via quantity, as when one assesses the quality of a scientific paper by the number of citations it attracts or the quality of a javelin throw by the distance it reaches. And in general quality is a matter of better or worse, superior or inferior, nicer or nastier. And at this comparative point counting can enter upon the scene and one can ask questions like “Better than most?” Comparisons pave the way for counting, and this in turn opens the door to quantification. Whenever there is quality, some items are going to be better than others; whenever there is performance, some will perform better than others. Quality is pervasive. But here there is also instruction in numbers. Evaluation is inevitable for a rational agent: we cannot act where we cannot decide and we cannot decide rationally without judgments of preferability—without evaluation and prioritization. And with these evaluative assessments elites invariably come upon the scene. We cannot escape them; the task, instead, is to make reasonable and proper use of them. Whether we are buying an automobile or hiring a plumber, quality is bound to be a matter of prime concern. The key factor for quality is rare achievement, and its pivotal conception is that of an elite. Elite membership is often defined in terms of realizing some not altogether common achievement, accomplishing some feat that the general run fails to achieve—overcoming some difficulty, meeting some challenge, achieving some result. The pivotal idea here is that an elite is a group that qualifies as outstanding in relation to the rest of the items that are at issue. The concept of an elite affords a key tool for the quantification of quality.
An elite is defined through the circumstance that its members realize some (generally positive) quality to a greater-than-ordinary extent. And one good way to assess and measure quality is by looking to the elites to which the items involved happen to belong. To be sure, performance can be described categorically. It is simply a matter of someone’s accomplishing a certain result. But evaluation is inherently comparative: it is not just a matter of measurement by assigning some quantity but requires that having more or less of it means something in point of a merit or demerit with regard to which the items evaluated become marked as superior or inferior. Thus an evaluation—determining just how significant an achievement this is—will generally be a matter of comparison, of saying how rare or common the accomplishment of this feat actually is. Achievement is a matter of description—of answering the question “What was done?” And its measurement answers questions like “How much?” “How often?” “How fast?” “How energy-consumptive?” Measurement looks merely to quantity. But evaluation looks to quality, and this can be measured comparatively on the basis of such questions as “How unusual?” “How common?” The guiding principle here is embodied in the idea that superiority in quality is coordinate with diminution in quantity: that the qualitatively superior items are comparatively few in quantity in relation to the rest. We have here what might be called Spinoza’s Principle, as framed in his dictum that omnia praeclara tam difficilia quam rara sunt: “All excellence is as difficult as it is rare.”1 (See Display 1.)

____________________________________________________________
Display 1
A QUALITY DECLINE CURVE
[Curve falling from 100% toward 0%: the percentage of items at or above a given quality level declines as that level rises from low through middling and high to very high.]
____________________________________________________________

Consider by way of illustration any qualitative factor that has a normal distribution across a population. (See Display 2.) The shaded “tail” end of
such a distribution encompasses those individuals who possess the quality at issue to an extent more than some threshold amount e. It is these individuals that constitute a (positive) elite. (There is, of course, also an initial segment that constitutes a negative anti-elite.)

____________________________________________________________
Display 2
A NORMAL DISTRIBUTION
[Bell curve of the number of individuals against the extent of the quality; the shaded right-hand tail beyond the threshold e marks out the elite.]
____________________________________________________________

There are, however, various distinct ways of effecting the quantitative comparison that is at issue with the quantitative determinations that serve to define elites. Let us consider some of the main possibilities.

2. COUNT ELITES

A count elite consists in the N best-ever—the top N, so to speak. (It could be called an N-elite.) An example would be “the five greatest U. S. Presidents.” Popular culture loves count elites. It is replete with lists of the 10 tallest skyscrapers, the 100 greatest movies ever, the dozen best-selling books of all time, and the like. The size of a count elite—though not, of course, its membership—is bound to remain constant even in a growing domain D. (So for an elite E of this type we have dE/dD = 0.) Irrespective of how a domain grows, the size of a count elite will remain constant, so that, as the domain at issue grows, it becomes increasingly difficult to qualify for membership in its N-elite because the competition gets ever more acute. (In a domain of 100 one member in ten will qualify for its top-10 elite, while when the domain grows to 1,000 only one in 100 will do so.)
3. PERCENT ELITES

Percent elites are defined in terms of top-tier percentages—the top 10 percent or 1 percent or such. There is now an arithmetically linear connection between the size of the field and that of its percent elites—when one doubles (or triples) so does the other. For with a growth of the domain D there is a proportional growth in the size of its percent elites (E). (We have dE/dD = constant.) Those percent elites are one-in-X elites: one in ten or one in a thousand. Thus as the domain grows it remains equally difficult to secure membership in its percentage elites: here the competition remains uniform in extent. Elites accordingly often arise in a manner suggestive of “lion’s share” possession:

E(x) = y iff x% of the overall population at issue accounts for y% of the whole of some parameter

Thus the richest 5% of Americans own 80% of the nation’s privately held wealth. Or, again, the top 20% of university libraries account for 60% of all university library holdings in the US. So let us define the “haves” subgroup of a population as that which accounts for 50% of the whole in point of “ownership” of the parametric quantity at issue. With a “fair” distribution, the haves will be just as numerous as the have-nots. However, with what we shall call an “elitist” distribution, this relationship functions as per the curve of Display 3. As this representation indicates, with an elitist distribution a comparatively small proportion of the population will engross a lion’s share of the whole. In scientific publications, for example, some ten percent of the articles manage to engross fifty percent of citation-references, so that this ten-percent elite effectively constitutes the “haves” in point of professional recognition.2 This situation is bound up with “Lotka’s Law” to the effect that if k is the number of scientists who publish just one paper, then the number publishing n papers is k/n². In many branches of science this works out to some 5-6 percent of scientists producing half of all papers in their discipline, putting the subgroup of the haves at some one-twentieth of the whole.3
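Lotka's Law can be checked numerically. The sketch below is my own illustration, not the author's: the cutoff of 100 papers per author is an assumption the text does not supply, and the resulting percentage is sensitive to it.

```python
# Numerical exploration of Lotka's Law: k/n**2 authors publish exactly
# n papers.  We ask what fraction of authors supplies half of all papers.
# The cutoff N_MAX is an assumed parameter, not given in the text.

N_MAX = 100   # assumed maximum number of papers per author

# With k = 1 throughout, authors[i] and papers[i] correspond to n = i + 1.
authors = [1 / n**2 for n in range(1, N_MAX + 1)]
papers = [n * a for n, a in zip(range(1, N_MAX + 1), authors)]

total_papers = sum(papers)

# Walk down from the most productive authors until half the papers are covered.
acc = 0.0
for m in range(N_MAX, 0, -1):
    acc += papers[m - 1]
    if acc >= total_papers / 2:
        break

top_authors = sum(authors[m - 1:])   # authors publishing >= m papers
frac = top_authors / sum(authors)
print(m, round(frac, 3))
# With this cutoff, authors of 8 or more papers supply half the output
# while making up under a tenth of all authors, the same order of
# magnitude as the "haves" figure in the text.
```

The exact percentage depends on the assumed cutoff (larger cutoffs push it lower), but the qualitative "lion's share" pattern is robust: a small productive minority accounts for half of the whole.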
____________________________________________________________
Display 3
PERCENT ELITIST DISTRIBUTION
[Plot of % of the whole (0 to 100) against % of the population P (0 to 100): an egalitarian distribution traces the straight diagonal, while an elitist distribution traces a convex curve rising steeply at first.]
NOTE: The straight line represents a perfect egalitarianism, while the convex curve represents an elitist distribution of sorts.
____________________________________________________________

4. EXPONENT ELITES AND ROUSSEAU’S LAW

A pattern is already starting to build up, seeing that the size of an elite E is governed by a relationship of coordination with D (the domain size). Thus, as already noted, we have:

E = c (count elites)
E = c × D (percent elites)

Along these lines we can now contemplate the exponent elites where:

E = D^c, with 0 < c < 1
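The three types of elite can be compared side by side. The sketch below is my own illustration; taking c = 1/2 (a square-root elite) as the representative exponent case is an assumption on my part, suggested by the section's reference to Rousseau's Law.

```python
# Illustrative comparison of the three elite types as the domain D grows.
# The parameters N = 10, p = 1% and c = 1/2 are assumed for the example.

def count_elite(D, N=10):
    """E = c: the 'top N', whose size is fixed regardless of D."""
    return N

def percent_elite(D, p=0.01):
    """E = c * D: the top p-fraction, growing linearly with D."""
    return p * D

def exponent_elite(D, c=0.5):
    """E = D**c (0 < c < 1): grows with D, but ever more slowly."""
    return D ** c

for D in (100, 10_000, 1_000_000):
    print(D, count_elite(D), percent_elite(D), round(exponent_elite(D)))
# As D grows 10,000-fold, the count elite stays at 10, the percent elite
# grows 10,000-fold, and the square-root elite grows only 100-fold:
# qualifying for it gets harder, but less brutally so than for the top N.
```

The exponent elite thus occupies a middle ground between the other two: unlike a count elite it expands with the domain, but unlike a percent elite its relative share D^c / D = D^(c-1) shrinks as the domain grows.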