English Pages 174 [184] Year 2008
Nicholas Rescher
Epistemic Pragmatism And Other Studies in the Theory of Knowledge
Bibliographic information published by Deutsche Nationalbibliothek. The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliographie; detailed bibliographic data are available on the Internet at http://dnb.ddb.de
North and South America by Transaction Books Rutgers University Piscataway, NJ 08854-8042 [email protected]
United Kingdom, Eire, Iceland, Turkey, Malta, Portugal by Gazelle Books Services Limited White Cross Mills Hightown LANCASTER, LA1 4XS [email protected]
Distribution for France and Belgium: Librairie Philosophique J. Vrin, 6, place de la Sorbonne, F-75005 PARIS. Tel. +33 (0)1 43 54 03 47; Fax +33 (0)1 43 54 48 18. www.vrin.fr
2008 ontos verlag P.O. Box 15 41, D-63133 Heusenstamm www.ontosverlag.com ISBN 978-3-86838-003-3
2008 No part of this book may be reproduced, stored in retrieval systems or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use of the purchaser of the work
Printed on acid-free paper FSC-certified (Forest Stewardship Council) This hardcover binding meets the International Library standard Printed in Germany by buch bücher dd ag
Contents

PREFACE
Chapter 1: Epistemic Pragmatism
Chapter 2: Linguistic Pragmatism
Chapter 3: On Cognitive Finitude and Limitations
Chapter 4: On Cognitive Economics
Chapter 5: The Uneasy Union of Ideality and Pragmatism in Inquiry
Chapter 6: On Inconsistency and Provisional Acceptance
Chapter 7: On Realism and the Problem of Transparent Facts
Chapter 8: On Fallacies of Aggregation
Chapter 9: Leibniz and the Conditions of Identity
Chapter 10: Worldly Woes: The Trouble with Possible Worlds
Chapter 11: Trigraphs: A Resource for Illustration in Philosophy
Chapter 12: Fragmentation and Disintegration in Philosophy (Its Grounds and Its Implications)
NAME INDEX
For Joseph Pitt In cordial friendship
PREFACE

The core of pragmatism lies in the concept of functional efficacy—of utility, in short. And epistemic pragmatism accordingly focuses on the utility of our devices and practices in relation to the aims and purposes of the cognitive enterprise: answering questions, resolving puzzlement, guiding action. The present book revolves around this theme.

The studies collected here were mostly written during 2007–2008. All of them bear on epistemological topics that have preoccupied me for many years, an interest first manifested in print over fifty years ago in my paper "On Prediction and Explanation" (British Journal for the Philosophy of Science, vol. 8 [1957]). Much as with the thematic structure of this book, this interest expanded from an initial concern with the exact sciences to encompass the epistemology of the human sciences, and ultimately the epistemology of philosophy itself.

I am grateful to Estelle Burris for her patient and competent work in preparing this material for the press.

Nicholas Rescher
Pittsburgh, PA
June 2008
Chapter 1
EPISTEMIC PRAGMATISM

1. CONSEQUENTIALISM AND FUNCTIONALISTIC PRAGMATISM

Consequentialism, broadly understood, is the strategy of arguing for or against a measure—a practice, program, or policy—on the basis of the good (fortunate) or bad (unfortunate) consequences that would ensue from its implementation. So understood, consequentialism can take any of the four forms indicated in Display 1.
Display 1: MODES OF CONSEQUENTIALISM

Merit of Consequences | Recipient of Consequences | Description of the Mode
Good | The individual agent | Positive personal consequentialism
Good | The environing society | Positive social consequentialism
Bad | The individual agent | Negative personal consequentialism
Bad | The environing society | Negative social consequentialism
Several modes of consequentialism have historically been particularly prominent in philosophy. One relates to the justification of morality, where positive personal consequentialism has been advocated both in a spiritual version (Plato) and in a crasser, more narrowly self-advantaging version (moral egoism). A further historically influential version of consequentialism is the utilitarianism of the school of Bentham and Mill, which adopts positive social consequentialism as the proper standard for adjudging matters of law and public policy. Yet another historically influential mode of consequentialism is pragmatism, prominent in American philosophy since the days of Peirce and the early Dewey. Oriented particularly toward issues of cognitive practice, it
sees the proper standard of practical adequacy as a matter of working out successfully in realizing the aims and goals for which the community's theoretical and practical procedures have been instituted. Such a pragmatism is accordingly an approach to philosophical issues whose standard of appraisal proceeds in terms of purposive efficacy. On this basis a functionalistic pragmatism that looks to human endeavor in a purposive light can encompass the entire range of human concern. Such a pragmatism is not (and should not be) regarded as a materialistic doctrine concerned only with crass payoffs. After all, man is a purposive animal. Almost everything that we do has an aim or end. Even play, idleness, and tomfoolery have a purpose: to divert, to provide rest and recreation, to kill time. And certainly our larger projects in the realm of human endeavor are purposive:

—Inquiry: to resolve doubt and to guide action.
—Ethics: to encourage modes of conduct in human interactions that canalize these into a generally satisfactory and beneficial form.
—Law: to establish and enforce rules of conduct.
—Education: to acculturate the younger generation so as to enhance the prospect that young people will find their way to personally satisfying and communally beneficial lifestyles.
—Art: to create objects or object types that elicit personally rewarding and enlightening experiences.

We are committed to such projects as the pursuit of nourishment, of physical security, of comfort, of education, of sociability, of rest and recreation, and so on, designed to meet our requirements for food, shelter, clothing, knowledge, companionship, realization, and the like, each with its own complex of needs and desiderata. And throughout this manifold we have at our disposal one selfsame rationale of end-realization, with its inherent involvement with issues of effectiveness and efficiency.
Pragmatism's concern for functional efficiency, for success in the realization of ends and purposes, is an inescapably determinative standard for the way an intelligent being makes its way in the world. In such a purposive setting, the
pragmatic approach with its concern for functional efficacy is a critical aspect of rationality itself. Pragmatism is thus a multi-purpose resource. Its approach to validation can of course be implemented in pretty much any purposive setting. Given any aim or objective whatever, we can proceed in matters of validation with reference to effectiveness and efficiency in the realization of purposes. However, a really thorough pragmatism must dig more deeply. It cannot simply take purposes as given—as gift horses into whose mouths we must not look. For purpose-adoption too has to be viewed in a pragmatic perspective, as an act or activity of sorts that itself stands in need of legitimation. Accordingly, a sensible pragmatism also requires an axiology of purposes, a normative methodology for assessing the legitimacy and appropriateness of the purposes we espouse. Even our purposes themselves have their purposive aspect, with a view to ulterior benefits. To be sure, functionalistic pragmatism does not tell us what human purposes are mandated by the situation of homo sapiens in the world's scheme of things. That has to come from other sorts of investigations: inquiries that are effectively factual. But what it does do on this basis is to deploy a cogent standard of normative adequacy via the customary demands of practical rationality: effectiveness and efficiency in the realization of appropriate goals. Pragmatic efficacy is a salient arbiter of rational adequacy. The justificatory impetus of functionalistic pragmatism bears directly and immediately upon anything that is of an instrumental nature. And on this basis it applies to: our cognitive processes of truth-validation and question-resolution, our practice-guiding and act-recommending norms for practical decision, and our methods and procedures by which the endorsement of scientific hypotheses is validated.
The deliberations of functionalistic pragmatism accordingly have a methodological bearing: one that makes its impact upon methods rather than results, upon process rather than product. But of course, since the processes at issue are product-productive processes, these deliberations will have an important indirect bearing on issues of product as well.
The rational validation of functionalistic pragmatism is thus comparatively straightforward. For the approach at issue is validated through the consideration that its modus operandi is based on the principle: "In all matters of purposive action, select that among alternative processes and procedures which, as best you can tell, will enable you to reach the objectives at issue in the most effective and efficient way."

2. THE PRAGMATIC MODE OF TRANSCENDENTAL DEDUCTION

Functionalistic pragmatism looks to effectiveness and efficiency in realizing the aims and purposes inherent in various human enterprises and endeavors. But one particularly salient factor here relates to those purposes that are not optional for us, but rather are mandatory, inherent in our needs and natural desires as the sorts of beings we humans in fact are. The fundamental thesis at issue here bears upon what might be called a Pragmatic Mode of Transcendental Deduction, whose line of thought runs as follows:

• In virtue of our natural condition, we humans have such-and-such needs and natural desires. (This is simply a "fact of life"—a contingent fact about the world's realities.)

• These needs are of such a sort that, as a matter of principle, for them to be satisfiable requires something (Z) to be the case as forming part of "the conditions under which alone these needs and natural desires can be met."

• Therefore: Taking Z to be the case is rationally appropriate.

On this basis one could articulate, for example, transcendental arguments against such extreme doctrines as solipsism, radical scepticism, or cognitive anarchism. Note that the preceding argumentation proceeds in the practical rather than the theoretical order of reason. For it argues pragmatically to what it is rationally sensible to accept rather than evidentially to what is actually the case. That is, it validates accepting something (viz. Z) as a presupposition (or sine-qua-non requirement) of the only condition under which a need or natural desire of ours can be satisfied. Dismissing the counsel of despair, this line of reasoning effectively has it that we are rationally entitled—in
the practical order of reason—to accept any presupposition of a sine qua non requisite of the meetability of our human needs or desires. What we have here is, interestingly, a sort of marriage of convenience between Kantian transcendentalism and pragmatism. Traditionally philosophy has been divided into a practical and a theoretical sphere, distinguishing issues of cognition from issues of action, as reflected in the belief-desire approach to explaining human action. But a very different perspective is also available, one that sees cognition—the quest for and consideration of information—as itself a mode of practice. Rational inquiry is now viewed as a practical endeavor, a purposive enterprise, and even theorizing can be seen as a purposive endeavor whose aim lies in the answering of our questions with a view to informational gap-filling and applicative guidance. For the fact is that our beliefs are what they are because we have certain desires, viz. (1) to have answers to our questions (to remove the discomfort of unknowing), and (2) to have answers we can see as credible—answers that satisfy various requirements we deem essential to adequacy (groundedness, reliability, contextual fit, etc.). On such a perspective, the belief/desire contrast does not provide for a belief/desire separation but rather leaves room for a coordination of these factors into one seamless whole. Consider as an illustration of this process of argumentation the special case of knowledge—of information management. As beings of the sort we are, we humans need to acquire and communicate information and, life being short, communally conducted inquiry into the ways of a shared world is a sine qua non for us. The cognitive explanation of shared, objective experience and interpersonal communication about it is thus a situational requisite for us.
The postulation of an observationally accessible and interpersonally shared environment—naturalistic realism, in short—is mandatory for us, and its validity is a requisite for rather than a fruit of observational experience. Just this consideration affords a transcendental deduction of its validity in the pragmatic order of reason.

3. THE ASPECT OF REASON

The here-envisioned functionalistic version of pragmatism regards effective praxis as the proper arbiter of appropriate theorizing. It takes considerations of purposive effectiveness to provide a test-standard of adequacy alike in theoretical and in practical matters. Effective implementation is its pervasive standard of adequacy. And here its logical
starting point is the uncontroversial idea that the natural and sensible standard of approval for anything that is in any way procedural—anything that has an aspect that is methodological, procedural, instrumental—lies in the question of its successful application. Anything that has a teleology—that is an instrumentality for the realization of certain purposes—will automatically stand subject to an evaluation standard that looks to its efficacy. For whenever something is in any way purposively oriented to the realization of certain ends, the natural question for its evaluation in this regard is that of its serviceability in end-realization. Pragmatic efficacy becomes the touchstone of adequacy. The close connection between functional efficacy and rationality must be stressed in this context. In any context where the satisfaction of needs and/or the realization of goals is at issue, a rational creature will prefer whatever method, process, or procedure will—other things equal—facilitate goal-realization in the most effective, efficient, and economical way. In this way economic rationality is a definitive dimension of rationality-in-general and thereby endows functional efficacy with a normative aspect. Cognitive and practical rationality constitute a unified whole. Cognition itself has its practical dimension. For cognition is an investment. It has costs and benefits, risks and rewards. Nothing ventured, nothing gained. To have information we must accept propositions and claims—buy in on them, so to speak. And the benefits we receive are just that—information, knowledge, answers to our questions, the diminution of ignorance and unknowing. But there are also significant costs, which in the main come down to one thing—the risk of falling into error, getting things wrong, looking foolish in the eyes of our fellows.
Immanuel Kant spoke of "the crooked timber of humanity." But the timber of reality is every bit as warped, and the projects we pursue and the processes we use to implement them cannot be carried through to perfection. No sort of engineering that we can carry out in the real world can deliver a perfect, flawless product into our hands. The contrast between the ideal world and the real is inescapable. And this is just as true in cognitive as in physical engineering. No realizable program of knowledge development can deliver perfection into our hands, can provide us with truths absolute, definitive, detailed, irrefragable. The risk of error and imperfection is inescapable.
4. PRAGMATIC APPROPRIATENESS AND COGNITION

The core and crux of pragmatic validation lies in its taking a functionalistic perspective. Its validating modus operandi proceeds with reference to the aims and ends of whatever happens to be the enterprise at issue. The aim of the enterprise of inquiry is to get answers to our questions—and not just answers, but answers that can warrantedly be seen as appropriate through success in matters of explanation and application. And so on pragmatic grounds the rational thing to do in matters of inquiry is to adopt the policy encapsulated in the idea that the answer to a question for which one needs or wants an answer—the answer for which the available evidence speaks most strongly—is to be accepted until such time as something better comes along. In line with this perspective, a realistic pragmatism insists upon pressing the question: "If A were indeed the correct answer to a question Q of ours, what sort of evidence could we possibly obtain for this?" And when we actually obtain such evidence—or at least as much of it as we can reasonably be expected to achieve—then pragmatism enjoins us to see this as sufficient. ("Be prepared to regard the best that can be done as good enough" is one of pragmatism's fundamental axioms.) If it looks like a duck, waddles like a duck, quacks like a duck, and so on, then, so pragmatism insists, we are perfectly entitled to stake the claim that it is a duck—at any rate until such time as clear indications to the contrary come to light.
Once the question "Well, what more could you reasonably ask for?" meets with no more than hesitant mumbling, then sensible pragmatists say: "Feel free to go ahead and make the claim." While the available information is all too incomplete and imperfect (as fallibilism cogently maintains), nevertheless, in matters of inquiry (of seeking answers to our questions) we can never do better than to accept that answer for which the available evidence speaks most strongly—or at least to do so until such time as something better comes along. It is not that truth means warranted assertability, or that warranted assertability guarantees truth. What is the case, rather, is that evidence here means "evidence for truth" and (methodologically) warranted assertability means "warrantedly assertable as true." After all, estimation here is a matter of truth-estimation, and where the conditions for rational estimation are satisfied we are—ipso facto—entitled to let that estimate stand surrogate for the truth. And in these contexts there is no point in asking for the impossible. The very idea that the best we can do is not good enough for all relevantly reasonable purposes is—so pragmatism and common sense alike insist—simply absurd, a thing of unreasonable hyperbole. Whatever theoretical gap there may be between warrant and truth is something which the very nature of concepts like "evidence" and "rational warrant" and "estimation" authorizes us in crossing. And so at this point we have in hand the means for resolving the question of the connection between thought and reality that is at issue with "the truth." The mediating linkage is supplied by heeding the modus operandi of inquiry. For cognition is a matter of truth-estimation, and a properly effected estimate is, by its nature as such, something that is entitled to serve, at least for the time being and until further notice, as a rationally authorized surrogate for whatever it is that it is an estimate of.

Consider the following dialogic exchange:

Q: Why should we adopt the policy at issue?

A: Because it is the best one can do in the circumstances.

Q: But why should I regard the best I can do as good enough?

A: Well, it certainly is not necessarily correct. But the fact remains that it is the best one can do, and that is all that you can (rationally) call for.

Q: But is this line of reasoning not circular? Are you not in effect invoking for its validation that very policy whose validation is in question?

A: That's true enough. But that's exactly how matters should be.

Q: How can you claim this? Is the argumentation not improper on grounds of self-invocation and self-reliance—that is, on grounds of vicious circularity?

A: No. The circularity is there all right. But there is nothing vicious about it. It is self-supportive and thus is exactly what a thoroughly rational mode of validation should be. For where rationality is involved, self-supportingness is a good thing, and circularity is not only unproblematic but desirable. Who would want a defense of reason that is not itself reasonable? Reason and rationality not only can but must be called upon to speak upon their own behalf. Thus insofar as inquiry into the nature of the real is a matter of truth-estimation, the process at issue is and must be one that enjoys reason's "Good Housekeeping" seal of approval. For of course rational acceptance cannot be random, fortuitous, haphazard; it must be done in line with rules and regulations, with programs and policies attuned to the prospects of realizing the objectives inherent in the situation at hand.

5. ON THE VALIDITY OF PURPOSES

To be sure, a pragmatic position will meet with the objection: "Surely efficacy in goal-attainment cannot count for all that much. Surely we have to worry about the rationality of ends as well as the rationality of means! Surely there is no sense in pursuing—however effectively—an end that is absurd, counterproductive, harmful." Quite right! There is good common sense—and indeed even sound rationality—to such a view of the matter. But of course it is far from being the case that all ends are created equal—that giving people needless pain, say, is every bit as appropriate as helping them avoid injury. However, this is an issue that a well-developed pragmatism, one which is altogether true to itself, needs to and can address through its own resources. And the terms of reference at issue here will in the natural course of things have to be those of philosophical anthropology. We are humans, members of Homo sapiens—that is an inescapable given for us. And given along with it are the conditions needed by us humans to lead not just survivable lives (requiring air, food, and shelter) but also satisfying lives (requiring self-respect, companionship and a feeling of communal belonging, a sense of control over major elements of our life, and the like).
And the validation of aims and purposes can be established pragmatically in point of their efficiency and effectiveness in the realization of such life-maintaining and life-enhancing requirements as are mandated for us by our position in the world's scheme of things. Some aims and purposes are optional—we choose them freely. But others are mandatory—built into the very fabric of our existence within nature as members of Homo sapiens. These non-optional goals and purposes will
obviously have to play a pivotal role in a functionalistic pragmatism built on that paramount demand of reason: efficacy in goal-attainment. The correlative requisites are manifold for us—not just food, shelter, and clothing alone, but also information and comprehension. For the fact of it is that human beings not only have wants, wishes, and desires, but have needs as well. And as beings of the sort we in fact are, we have many of them. Individually we need nourishment, physical security, and congenial interaction if our physical and psychological well-being is to be achieved and maintained. Collectively we require social arrangements that maximize the opportunities for mutual aid and minimize those for mutual harm. This aspect of the practical scheme of things is built into our very condition as the sorts of creatures we are and the place we have in nature's scheme of things. This state of affairs endows functionalistic pragmatism with a second dimension of objectivity. On the one hand, it is perfectly objective and nowise a matter of preference what sorts of means are effective in the realization of specified objectives. And on the other hand, it is analogously perfectly objective and nowise a matter of preference that humans have certain needs—certain requirements that must be satisfied if they are to exist, persist, and function effectively as the sorts of creatures they have evolved as being on the world's stage. By virtue of their very nature as purposive instrumentalities, value claims can and generally do fall within the domain of reason. For values are functional objects that have a natural teleology of their own, namely that of helping us to lead lives that are personally satisfying (meeting our individual needs) and communally productive (facilitating the realization of constructive goals for the community at large).
This circumstance has far-reaching implications, because it indicates that our assessment of values themselves can and should be ultimately pragmatic, with wants duly coordinated with needs. Our evaluations are appropriate only insofar as their adoption and cultivation are efficiently and effectively conducive to the realization of human interests—the rationally appropriate ends, personal and communal, that root in our place in nature's scheme of things. Accordingly, a pragmatism that is consistent, coherent, and self-sustaining will not just proceed pragmatically with respect to achieving unevaluated ends and purposes, but must also apply its pragmatic perspective to the issue of validating ends and purposes themselves, in terms of their capacity to facilitate the realization of those conditions whose beneficial realization is, for us humans, simply a "fact of life." A pragmatically based epistemology is thus altogether "realistic."1
NOTES

1. Further material relevant to the deliberations of this chapter can be found in the author's Studies in Pragmatism, Vol. II of Nicholas Rescher: Collected Papers (Frankfurt: Ontos, 2005).
Chapter 2
LINGUISTIC PRAGMATISM

1. A SEMANTICAL DEPARTURE

The point of view of traditional semantics is founded on a sharp distinction between meaning and truth, and it enjoins the seemingly natural sequential policy: settle issues of meaning first, and only then address the issue of truth. It is thus predicated on the two coordinate ideas of (1) meaning/warrant separation, and (2) meaning-over-warrant prioritization. The guiding thought is that meaning is based exclusively on considerations of communication, while warrant is based exclusively on considerations of fact: meaning issues deal with language; truth issues deal with the world's reality of things. And these are distinct and separate matters. However, a very different and far less tidy view of things is also possible, a view which insists upon uniting what the traditional position separates, and which sees the waters of theoretical distinction as unavoidably muddied by the realities of operational interrelatedness. The idea that underlies this alternative approach is a less simple and tidy view of the nature of language, whose guiding thought is that there just is no neat meaning/fact separation. Language is now seen as so complex and convoluted that what we mean linguistically and what we assert factually are inextricably intertwined. Only a part—say, for discussion, half—of what we are saying via the statements that we make about something is fixed by the words of the claims we utter about it; the remaining half of what we mean to be saying emerges only subsequently, from the entire context of discussion. Accordingly, a contention's substantiating evidentiation not only serves to support a claim but also to elucidate, specify, and define exactly what it is that one is claiming.
Such a position—linguistic pragmatism, as it deserves to be called—effectively erases the neat traditional division between meaning and substantiation, between language and its declarative application.1 And in consequence it also obliterates the neat traditional division between ontology and epistemology, because now "the facts" evince their evidential basis in the mode and manner of their formulation, and—conversely—the statements we make hinge on and are substantially controlled by the evidentially relevant facts.
2. FROM MEANING SEMANTICS TO COMMUNICATIVE THEORY

To see the rationale for a linguistic pragmatism of this sort, let us begin by noting that informative communication normally takes place in the setting of an informal compact between sender and receiver.2 There is a reciprocity of mutual assurances: each party tacitly undertakes a commitment along roughly the following lines:

SENDER:
I will endeavor to convey to you information that is correct as best I can determine. And I will endeavor to convey this information in a way that imposes no unnecessary strain on your (recipient’s) resources of time and effort.
RECEIVER: I will credit you (sender) with these good intentions, accepting your reliability and construing your statements in the light of their presumed communicative efficiency.

The tacit agreement at issue here has significant implications for the sender in particular:

• To protect one's claim to reliability/credibility—that is, not to mislead people. What is said must accordingly have a sound rationale of evidentiation/substantiation. One must avoid being frivolous or misleading about what is said.

• To protect one's claim to efficiency/economy—that is, not to waste people's time. One must avoid unnecessary redundancy, dispensable detail, unhelpful complexity of formulation, etc.

And barring evidence to the contrary, the recipient credits the sender with a like intent. Accordingly, in the standard situations of informative communication it is (tacitly) claimed by the sender and (tacitly) recognized by the recipient that the sender has taken reasonable and appropriate precautions to ensure the truth of what he says, and that when there are problems on this score (when the sender is unsure, uncertain, doubtful) he will take corrective measures to indicate this—generally by the use of appropriate qualifiers: "it seems," "it is said," "presumably," "very likely," or the like.
In the setting of communicative discourse a variety of operative presumptions are thus at work to set the stage on which information is conveyed, in a way largely indifferent to the delimited particulars of the case. And the commitments at issue are inherent in the presupposition framework of the general context of communicative discourse. They are forthcoming not from the substantive content of the message at issue but from the contextually indicated presuppositions we make on our own responsibility. And so much of what we mean is contained not in the words and utterances themselves but in the context, and above all the evidential context, of the communicative situation. The statements we are making—what we actually mean—go well beyond what we explicitly say.

3. THE ECONOMIC RATIONALE

The ground rules of communicative practice generally serve the role and function of making communication possible and/or efficient. They are features of a communicative modus operandi in whose absence the entire process would become at best more difficult and at worst impossible. The sender is engaged in an effort to be clear—to avoid being misunderstood or misinterpreted. He/she can pursue this objective by the simple devices of elaboration and amplification, for in doing this he/she automatically provides more context, and this obviously makes the work of interpretation easier and renders its results more secure. But the price of this benefit is not negligible and its advantages are not cost-free. Specifically, the costs include, on the sender's part, time, effort, and ingenuity (thinking up possibilities to block off); and on the receiver's part, time, effort, and patience (having to put up with explanations and elaborations much of which he/she will deem unnecessary). After all, when one endeavors to convey information to someone, various sorts of unpleasant reactions can occur, as Display 1 indicates.
After all, sender and receiver need to strike a happy compromise—enough to make it likely that one will be understood, but not so much that one will simply get tuned out because of a loss of interest and attention on the receiver’s part.
Display 1
COMMUNICATIVE NEGATIVITIES AND THEIR CAUSES

Negativity: One is disbelieved.
Sample causative etiology: One speaks falsely too often (out of heedlessness or out of deceptiveness).

Negativity: One is misunderstood.
Sample causative etiology: One uses careless or inadequate formulations.

Negativity: One is tuned out.
Sample causative etiology: One speaks off the point (digresses), or speaks at undue length (even if it is to the point).
Accordingly, effective communication is throughout a matter of maintaining proper cost-benefit coordination. It is governed by such maxims as:

• Be sufficiently cautious in your claims to protect your credibility, but do not say so little that people dismiss you as a useless source.

• Formulate your statements fully and carefully enough to avoid misunderstanding, but not with so much detail and precision as to weary your auditors and be tuned out.

• Make your message long (explicit, detailed, etc.) enough to convey your points but short enough to avoid wasting everyone's time, effort, and patience.

• Be sufficiently redundant that an auditor who is not intensely attentive can still get the point, but not so redundant as to bore or annoy or insult your auditors.

• Keep to the point, but not so narrowly that your message is impoverished by lack of context.

All of these rules are fundamentally economic principles of balance; they all turn on finding the point beyond which the benefit of further gain in information falls below the cost demanded for its acquisition.
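The balance these maxims describe can be put in miniature as a marginal cost-benefit calculation. The sketch below is a purely illustrative toy model, not drawn from the text: the `benefit` and `cost` functions are invented stand-ins for communicative gain and expenditure, chosen only so that gains diminish while costs accumulate steadily.

```python
import math

# Toy model (invented for illustration): the informative benefit of a
# message grows with its level of detail, but with diminishing returns,
# while the joint cost to sender and receiver grows roughly linearly.

def benefit(detail):
    # Hypothetical diminishing-returns benefit curve.
    return 10 * math.log(1 + detail)

def cost(detail):
    # Hypothetical linear cost in time, effort, and patience.
    return 1.5 * detail

def optimal_detail(max_detail=50):
    """Return the level of detail that maximizes net benefit."""
    return max(range(max_detail + 1), key=lambda d: benefit(d) - cost(d))

best = optimal_detail()
# Beyond the optimum, each added unit of detail costs more than it yields.
assert benefit(best + 1) - benefit(best) < 1.5
```

The economically rational sender stops elaborating exactly where the marginal gain in being understood drops below the marginal cost of saying more—the "happy compromise" the text goes on to describe.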
In extracting information from the declarations of others we accordingly do—and must—rely on a whole host of working assumptions and presumptions. The pivotal factor here lies in the circumstance that verbal communication in informative contexts is governed by such conventions as:

• Speakers and writers purport their contentions to present the truth as they believe it to be. (This is why saying "The cat is on the mat and I don't believe it" is paradoxical.)

• Speakers and writers purport their contentions to be formulated in a way that is accurate and not misleading. The communicative substance of a statement is attuned to the evidence the speaker has for making it.

Such conventions have a well-based rationale in considerations of communicative efficiency and effectiveness. For in their absence communication would become at worst impossible and at best vastly more cumbersome, Byzantine, and problematic. The guiding principle here is that of cost-benefit considerations. The standard presumptions that underlie our communicative practices are emphatically not validatable as established facts. (For example, it is certainly not true—save at the level of statistical generality—that people say what they mean.) But their justification becomes straightforward on economic grounds, as practices that represent the most efficient and economical way to accomplish our communicative work.

4. EMISSION AND RECEPTION: THE ROLE OF COMMUNICATIVE CONTEXT

Communication is clearly a two-way street. It depends every bit as much on reception as on sending. Ideally, of course, we would like to be perfectly clear about what we are saying. We would like to make it all perfectly explicit so that every possibility of any misunderstanding—any interpretation other than just exactly the one we have in mind—is simply ruled out. But the fact—perhaps unfortunate but unavoidable nevertheless—is that ordinary language is an imperfect resource.
For one thing, it is a general-purpose instrument and not a specialized one like the language of law or of science, which makes for an imperfect fit to particular cases. (And note that even there the processes of change and development are at work. The courts must rule whether the laws applicable to ships apply to airships, and science no longer understands an atom in the same way that it
did before the advent of nuclear physics.) We would prefer to make it all as clear and definite as legal contracts do (or try to do), but, alas, things just don't work like that. In relation to what we mean, the status of what we actually say or write is indefinite and amorphous. The process of seeking to reduce or even eliminate the pluralism of alternative constructions that verbal formulations so often admit is called interpretation. We accomplish this primarily and preeminently with reference to context. One must distinguish between the meaning of a statement and the message it conveys. For example, "The cat is on the mat" has the same meaning everywhere, but it will convey a different message in different contexts. As a supposition, it is subject to an injunction to the recipient "Suppose it to be so for the moment for the sake of discussion," while as an informative contention the operative injunction is "(Re)adjust your world-picture so as to place the cat on the mat." Such indirectly conveyed information—this aspect of its subtext—is also part of the text's "meaning"—in the wider sense. All in all it is fair to say that in interpretation context is not just important, it is everything. A claim taken in isolation can generally bear various constructions—various different interpretations. But we standardly can, do, and should let contextual considerations do the work of disambiguation. And the bearing of context does not, of course, issue from the explicit content of the statements at issue. Their bearing does not reside in the substantive contentions that those statements express. It can, for example, root in the way they are taken to be used by the sender (informatively, ironically) or in the way we view the sender by whom they are used (as truthful, reliable, rational—or otherwise). Different contexts of communication and different communicative settings will clearly make for different sorts of communicative ground rules.
Suppositions that are taken for granted among practitioners of a certain sort or specialists in a certain field may fail to be operative elsewhere. We can assume that in dealing with people, hairstylists and haberdashers will give priority to different sorts of issues. Even the same facts may be seen in a very different light. In dealing with an old man, the physician will care for the person's diet; the lawyer will ask whether he has a will. In communication there are but few universal priorities. A cry of "Fire!" by the actor on the stage carries messages different from those of the usher in the aisle. Accordingly, many communicative presumptions will hinge on contextual matters: what the appropriate presumptions are will be a function of the wider setting of the discourse at issue. And contexts are matters of
scale—there are small-scale contexts and large-scale contexts. The latter include such massive discourse settings as: everyday small talk, informative communication, fiction, journalism (serious or tabloid), and others. The communicative role of context is simply a principle of efficiency. To do all of one's communicating with textual explicitness would simply be too cumbersome. It is obviously economical to have context carry much of the burden.

5. COMMUNICATION AS A GENERAL BENEFIT ENTERPRISE

Information exchange based on principles of cooperation is a mutual benefit process, because everyone is advantaged by adopting a system of operation that maintains the best available balance of costs and benefits in this matter of creating a communally usable pool of information. Contrast two hypothetical societies of communicators, the Liars and the Deceivers. The Liars generally say the opposite of what they think: "Rotten day today," they say when the weather is beautiful, and vice versa. The Deceivers, however, do not behave so reliably: they mix putative truth and falsity more or less randomly. The Liars can communicate with us and with each other perfectly well. Once one has got the trick, one knows exactly how things stand in discussions with them. But the Deceivers are something else again. One never knows where one stands with them. And worse yet, they too have no idea where they stand with one another. Indeed, they could never even begin to communicate. Even if an initial generation of Deceivers came equipped with a ready-made language (say, because they began as normal communicators and then turned en masse into Deceivers at some point), the fact remains that they could never teach language to their offspring. "That's a lion," the parents observe to Junior one day, pointing to a dog, and "That's a cat," the next time, and "That's an elephant," the time after that. Poor Junior would never manage to catch on.
Contrast now two other communities: The Trusters and the Distrusters. The Trusters operate on the principle: Be candid yourself, and also accept what other people say as truthful—at any rate in the absence of counterindications. The Distrusters operate on the principle: Be deceitful yourself, and look on the assertions of others in the same light—as ventures in deceitfulness: even when people are ostensibly being truthful, they are only trying to lure you into a false sense of security. It is clear once again that the policy of the Distrusters is totally destructive of communication. If exchange of information for the enhancement of our knowledge is the aim of
the enterprise, the diffusion of distrust is utterly counterproductive. To be truthful, to support the proper use of language and refrain from undermining its general operation, is a policy of the greatest general utility— however beneficial occasional lapses may seem to be. Not only is the maintenance of credibility an asset in communication, but some degree of it is in fact a necessary condition for the viability of the whole project. The precept, “Protect your credibility; do not play fast and loose with the ground rules, but safeguard your place in the community of communicators,” is basic to the enterprise of informative communication. From the sender’s point of view, putting forth a message costs time, effort, energy, and the like. The rational agent will incur such costs only with a view to benefits—some sort of reward (if only in the respect or gratitude of others) or reciprocity, with a view to a quid pro quo. This point is simple but of far-reaching import. Given our need for information to orient us in the world (on both pure and practical grounds), the value of creating a community of communicators is enormous. We are rationally well advised to extend ourselves to keep the channels of communication to our fellows open, and it is well worth expending much for the realization of this end. The same sort of story holds for the receivers’ point of view as well. They, too, must expend resources on taking in, processing, and storing messages. Clearly, a rational receiver would be prepared to undertake this expenditure only if there were a reasonable expectation of drawing profit from it, be it by way of information added or resources conserved—an expectation that, in general, is amply warranted. Communicative discourse is a purposive transaction subject to economic principles. The sender and the receiver share a certain particular aim—that of accomplishing the transaction at issue with maximal effectiveness and minimal effort. 
Both want the message issued by the sender to be available to the receiver in the sense intended. And both want this to be achieved with a minimum of time and effort. And so, a substantial cluster of regulative principles of procedure regarding information management is at work here, principles that have a straightforward economic purport. After all, the interests of overall communicative economy are best served by making sure that one is properly understood in the first place, so that later efforts need not be dedicated to unraveling confusions. 3 Linguistic pragmatism looks on language not just as a means for encoding information but as a means for communicating it. And from the receiver's point of view the fabric of reason that the sender putatively has
for stating his message becomes an essential feature of just what it is that that message is designed to convey. The aspect of communicative purposiveness is a critically determinative feature of the substance of that message. And it is this purposive aspect that linguistic pragmatism sees as a critical and constitutive facet of the communicative enterprise. 4

NOTES

1. See Zoltán G. Szabó (ed.), Semantics versus Pragmatics (Oxford: Clarendon Press, 2005), as well as the present author's Communicative Pragmatism (Lanham, MD: Rowman & Littlefield, 1998).
2. Compare C. S. Peirce's terminology for interlocutors: utterer (= sender) and interpreter (= recipient). He used these terms in relation to signs as well as to statements.
3. In this context, it is useful to consider H. P. Grice's influential interpretation of communicative practice with its emphasis on communicative maxims. See H. P. Grice, "Meaning," The Philosophical Review, 66 (1957), pp. 377–88, as well as his book Studies in the Way of Words (Cambridge, MA: Harvard University Press, 1989). Compare also Jonathan Bennett, Linguistic Behaviour (Cambridge: Cambridge University Press, 1976), chaps. 1 and 7.
4. Further material relevant to this chapter's deliberations can be found in the author's Communicative Pragmatism (Lanham, MD: Rowman & Littlefield, 1998).
Chapter 3
ON COGNITIVE FINITUDE AND LIMITATIONS

1. LIMITS VS. LIMITATIONS

There is a significant difference between limits and limitations. Limits inhere in outright impossibilities—conditions that simply cannot be realized in the very nature of things. Limitations, by contrast, have a sociological aspect: they relate to things that intelligent agents would like to do—if only they could, which is not the case because limits are at work. Every law of nature sets a limit. Take: "Acids turn blue litmus paper red." This generalization is correlative with the impossibility of finding some acid-immersed blue litmus paper that takes on some color other than red. But of course no limitation is involved. Nobody hankers after an acid that turns blue litmus paper black. Limits belong primarily to the natural sciences; limitations, by contrast, have a whiff of the social sciences about them. Even as "it takes two to tango," so it takes two parties to create a limitation—a reality that sets limits and an agent who aspires to transcend them. Limitation is a matter of generic infeasibility in realizing something that people in general might ideally want to do. Interestingly enough, this means that a being whose aspiration-horizon is narrow—confined entirely within the range of what is well within its powers—will encounter no limitations in the presently operative sense of the term (notwithstanding the fact that all those limits that nonetheless confront it will reflect its status as a finite being). We humans, however, are prey to both limits and limitations. We are limited in what we can do with our bodies—we cannot, for example, turn them into bronze. But this hardly qualifies as a limitation—nobody in their senses wants to transform themselves into a statue. Our actual limitations represent limits we would ideally like to transcend if we could have things our own way.
And there are, of course, a great many of them: our wishes and aspirations outrun the reach of our capabilities and capacities. For it is a characteristic feature of our condition in this regard that we humans are all too clearly limited in matters of knowledge, power, beauty, and many other desiderata. And this salient aspect of our condition deserves scrutiny.
2. SOME SALIENT LIMITS PAST AND PRESENT

Certain infeasibilities have been on the agenda for a long time. In mathematics, for example, the project of "squaring the circle"—of using ruler and compass to construct a square equal in area to a given circle—was eventually demonstrated to be impossible, the decisive step being Ferdinand von Lindemann's 1882 proof of the transcendence of π (J. H. Lambert having already established its irrationality in the eighteenth century).1 Again, in physics, the idea of a perpetual motion machine, which has intrigued theorists ever since the middle ages, came to grief with the demonstration of its infeasibility during the rise of thermodynamics in the middle years of the 19th century. 2 And yet again in physics, we have the long-recognized impossibility of achieving a perfect vacuum. 3 All of these impossibilities—these insuperable limits to goal achievement—betoken limitations exactly because those infeasible achievements have been a focus of aspiration. But with advances in mathematical and physical science, these longstanding aspirations ended up on the scrap-heap of demonstrated impossibility. And this is only the beginning. The twentieth century has witnessed an amazing volume of technological progress that saw mankind attain, to a heretofore undreamed-of extent, a wide spectrum of longstanding aspirations: harnessing the power of the atom, aerial flight, instant transglobal communication, painless dentistry/surgery, drudgery-free computation, to cite only a few items. But notwithstanding such virtually incredible progress in matters of technology, when we turn to theoretical science we see a rather different scene. For here the century presents a picture that focuses on limits rather than wide-open horizons. The demonstration of impossibilities is among the most strikingly characteristic features of 20th century science. 4 A handful of salient instances that illustrate this fact are given in Display 1.
All of these milestone achievements of the era share the common feature of demonstrating the inherent infeasibility of achieving some desideratum to which practitioners of the discipline at issue had long and often aspired. Such findings had the effect of derailing unreasonable aspirations by bringing significant limitations to light. In this regard, the 20th century has proven itself to be an era of disillusion where time and again the discovery of limits has thrown a bright, and often unwelcome, light on our insuperable limitations.
Display 1
IMPOSSIBILITY DEMONSTRATIONS IN TWENTIETH-CENTURY SCIENCE

• Physics/Relativity: Albert Einstein's demonstration of the impossibility of physical transmissions faster than the speed of light.

• Physics: The discovery of irreducibly stochastic processes and the consequent fundamental randomness of various sectors of nature. Conceivably we can get a firm grip on nature's laws, but we cannot possibly get one on its facts, and the consequent limits of predictability mean that surprises are inevitably in store for us.

• Physics/Quantum Theory: Niels Bohr's demonstration of the Principle of Complementarity inherent in the infeasibility of a conjointly precise specification of certain physically descriptive parameters (i.e., position and momentum) of physical micro-entities.

• Psychology: Sigmund Freud's insistence on the impossibility of self-mastery on grounds of there being no way for our rational conscious deliberation to gain complete control of our psychological processes.

• Thermodynamics/Cryogenics: Max Planck's demonstration of the effective impossibility of reaching absolute zero in experimental situations.

• Cybernetics: Claude Shannon's demonstration of the impossibility of flawless (loss-free) transmission of information over a noisy channel at rates exceeding its capacity, every physical channel having a level beneath which noise cannot be reduced.

• Mathematics: Kurt Gödel's demonstration of the impossibility of a complete and consistent axiomatization of arithmetic.

• Social Theory/Economics: Kenneth Arrow's theorem establishing the impossibility of reconciling social preferability with individual preferences.
It is one of the ironies of 20th century science that, even as its achievements have pushed ever further the frontiers of science and technology, they have at the same time brought various insuperable limits more sharply to view. Accordingly, the 20th century has witnessed an ever more emphatic awareness of limits. For despite the vast new vistas of possibility and opportunity that modern science and technology have opened up, there has emerged an ever clearer and decidedly sobering recognition that the region beyond those new horizons is finite—that progress in every direction—be it material, cognitive, or social—has its limits, and that we can go only so far in realizing our desires. And it seems fair to say that the conception of deep limits that lie at the foundation of nature's modus operandi is a salient Leitmotiv that runs across the entire landscape of 20th century science.

3. SOURCES OF FINITUDE: NECESSITY

There is nothing eccentric or anomalous about all of those manifold impossibilities. They root in certain fundamental features of reality. And this prominence of limitations in reality's larger scheme of things calls for a closer look at the underlying grounds of such a state of affairs. And here it emerges that the etiology of limits—the systematic study of this topic—brings to light the operation of certain very general and fundamental processes that account for a wide variety of particular cases. In particular, the following five factors figure among the prime sources of finitude:

• Necessity
• Incapacity
• Scarcity (of resources or time)
• Uncontrollability
  — Fate
  — Chance
• Imperfectability
  — via desiderata conflicts
  — via resistance barriers

Let us look at the situation more closely—starting at the top. Limits of necessity can root in the fundamental principles of logic (logical impossibility) but also in the laws of nature (physical impossibility). For every scientific law is in effect a specification of impossibility. If it indeed is a law that "Iron conducts electricity" then a piece of nonconducting iron thereby becomes unrealizable.
Accordingly, limitations of necessity are instantiated by such aspirations as squaring the circle or accelerating spaceships into hyperdrive at transluminal speed. Many things that we might like to do—to avoid ageing, to erase the errors of the past, to transmute lead into gold—are just not practicable. Nature's modus operandi precludes the realization of such aspirations. We had best abandon them because the iron necessity of natural law stands in the way of their realization.

4. INCAPACITY (CATEGORICAL AND TEMPORAL)

A second key limitation of finite creatures relates to limits of capacity. In this regard there are various desiderata that individual finite beings can realize alright but only at a certain rate—so much per hour, or year, etc. Reading, communicating, calculating—there is a virtually endless list of desirable tasks that people can manage within limits. For throughout such matters we encounter a limit to performance—a point beyond which more efficient realization becomes effectively impossible. Here we have a sort of second-order limit of impossibility. The issue is no longer "Can X be done at all?" but rather "Yes, X can be done alright, but how much of it can one manage in a given timespan?" The prospect of X-performance accordingly becomes subject to the limitedness of time. With virtually all performatory processes people generally have a capacity limit—there is only so much that can be accomplished in a given timespan. And this leads to the phenomenon of what might be characterized as the time compression of man-managed activities such as reading or writing or speaking—or proofreading for that matter. One can perform them at increasing speed but only at the price of increasing malfunctions. "Haste makes waste," as the proverb sagely has it.

5. SCARCITY

Scarcity is another prime source of limitations.
It is not the laws of nature so much as the condition of our planet that precludes diamonds from being as plentiful as blackberries, and truffles as common as mushrooms. Many of the things that people would like to have are matters of scarcity—there is just not enough of them to go around. Not everyone can have their castle in Spain, their personal field of oil wells, their daily commute along peaceful country lanes. Even fresh, clean air is not all that easy to come by. Many or most resources are in short and limited supply—there is just not enough of them—time (that is to say, lifetime) included. In hankering after them we encounter insuperable limits of scarcity that impel us into some of the unavoidable limitations that manifest our finitude.

6. UNCONTROLLABILITY: FATE AND LUCK

The inexorable rulings of fate are yet another prime source of limitations. We come into a world not of our making and occupy it under conditions not of our own choosing. We would all like to have a healthy genetic heritage, but have no choice about it. We would like to live in peaceful, prosperous, easy times but cannot reclaim the conditions of a past golden age. We would like to be graceful, talented, charming, accomplished—and would welcome having children of the same sort—but yet have relatively little say in the matter. All of us have to play the game of life with the cards we have been dealt. Our control over the conditions that fate assigns to us is somewhere between minute and nonexistent. Then too there is the impetus of chance. Often as not in life matters develop in ways governed by pure luck rather than by arrangements within the scope of our control. Often as not it is chance alone that gets us involved in accidents or in fateful encounters. Much that is important for us in life issues from fortuitous luck rather than deliberate choice and control. 5 And all of this betokens limits to the extent to which we can achieve the control we would fain have with regard to our circumstances in this world.

7. IMPERFECTABILITY (I): DESIDERATUM CONFLICTS AND COMPLEMENTARITY
Prominent among the root causes of human finitude is the phenomenon of what might be called desideratum conflicts, where in advancing one positivity we automatically diminish another. What we have here is vividly manifested in the phenomenon of positivity complementarity that obtains when two parameters of merit are so interconnected that more of one automatically means less of the other, much as illustrated by the following diagram:
[Diagram: a see-saw tradeoff in which Positivity 1 rises as Positivity 2 falls, and vice versa.]

The systemic modus operandi of the phenomenology at issue here is such that one aspect of merit can be augmented only at the price of diminishing another. Consider a simple example, the case of a domestic garden. On the one hand we want the garden of a house to be extensive—to provide privacy, attractive vistas, scope for diverse planting, and so on. But on the other hand we also want the garden to be small—affordable to install, convenient to manage, affordable to maintain. But of course we cannot have it both ways: the garden cannot be both large and small. The desiderata at issue are locked into a see-saw of conflict: those positivities here stand in a relation of complementarity (to borrow a term from the physicists). Again, in many ventures—and especially in the provision of services—the two desiderata of quantity and quality come into conflict: processing too many items in pursuit of efficiency compromises quality, providing too high quality compromises quantity. Some further examples are as follows:

• In quest of the perfect automobile we encounter a complementarity between speed and safety.

• In quest of the perfect kitchen we encounter a complementarity between spaciousness and convenience.

• In pursuit of the perfect vacation spot we encounter a complementarity between attractiveness and privacy, and again between affordability and amenities.

A philosophically more germane example arises in epistemology. With error-avoidance in matters of cognition, the tradeoff between errors of type 1 and errors of type 2—between inappropriate negatives and false positives—is critical in this connection. For instance, an inquiry process of any realistically operable sort is going to deem some falsehoods acceptable and
some truths not. And the more we fiddle with the arrangement to decrease the prospect of one sort of error, the more we manage to increase the prospect of the other. Analogously, any criminal justice system realizable in this imperfect world is going to have inappropriate negatives through letting some of the guilty off while also admitting false positives by sometimes condemning innocents. And the more we rearrange things to diminish one flaw the greater scope we give to the other. And so it goes in other situations without number. The two types of errors are locked together in a see-saw balance of complementarity that keeps perfection at bay. In any event, it transpires that throughout such cases we have the situation where realizing more of one desideratum entails a correlative decrease in the other. We cannot have it both ways, so the ideal of absolute perfection—of maximizing every parameter of merit at one and the same time—is out of reach. In the interest of viability some sort of compromise must be negotiated, seeing that the concurrent maximization of desiderata is unavoidably unrealizable. And the unattainability of perfection also has other interesting ramifications.

8. IMPERFECTABILITY (II): RESISTANCE BARRIERS AND DIMINISHING RETURNS

The utopian idea of human perfection—be it at the level of individuals or of the social order at large—has been with us throughout recorded history. 6 After all, it lies in our nature to aspire after ever greater things. But experience and theorizing alike indicate that nothing is clearer than that neither our lives, nor our knowledge, nor yet our morals can ever be brought even remotely near to the pinnacle of perfection. And for good reason. In medicine, life prolongation affords a vivid example, since with the elimination of one form of life-curtailment, others emerge that are yet more difficult to overcome.
Again, performance in such sports as speed-racing or high-jumping also illustrates this phenomenon of greater demand for lesser advance. Throughout there is a point where a further proportionate step towards an ideal limit becomes increasingly difficult, with the result that an exponentially diminishing group of performers will be able to attain those increasingly greater levels of achievement. There is something about perfection that generally resists realization. In physics and engineering this sort of thing is called a resistance barrier, a
phenomenon encountered throughout physics with such ventures as the endeavor to create a perfect vacuum, or that of achieving absolute zero in low-temperature research, or again that of propelling subatomic objects to the speed of light with particle accelerators. What we have here is a principle to the effect that "Nature always buffers its brick walls." (It is, in effect, a corollary to Leibniz's Principle of Continuity.) What is at issue is a fundamental principle of natural philosophy: Limits impose resistance barriers. The closer we get to that idealized extreme condition the harder it pushes back in reaction against further progress. And just this same sort of phenomenon is encountered in many areas of ordinary life—as is illustrated by the quest for a perfectly safe transport system or a perfectly efficient employment economy. One of the prime instances of a resistance barrier is encapsulated in the phenomenon of entropy—of disorder. For not only does nature "abhor" a vacuum, it does so with order as well. It insists on disorder through something of an entropic principle of dissonance preservation: the more one intervenes in nature to control disorder by compressing it into a more limited area, the more strongly it pushes back and resists further compression. Nature insists upon an ultimately ineliminable presence of chaos and disorder. Resistance barriers involve two aspects: first an outright impossibility of reaching an ideal goal, and second an effective infeasibility of drawing ever nearer to it, because this requires a level of capability (and thus resource investment) that is ever-larger and thereby ultimately bound to outreach the extent of resources at our disposal.
Two different albeit interrelated sorts of limits are accordingly at issue here, namely limits of possibility (of unrealizability in principle due to an inherent impossibility) and limits of feasibility (of unrealizability in practice due to a shortfall of resources, time, or capability). In either case, however, the pervasive reality of resistance barriers constitutes a decisive obstacle to the approximation of perfection in practice.

A crucial aspect of resistance barriers lies in the fact that the more progress one makes along the lines at issue, the more difficult—and expensive—still further progress becomes. Resistance barriers inevitably engender cost escalation and diminishing returns.

An instructive illustration of this phenomenon is afforded by the situation of what might be called the nonstandard response effect in the realm of medicaments. When one nowadays purchases a drug it generally comes accompanied by a long slip of paper listing the possible unwelcome "side effects"—nonstandard reactions that occur in some relatively few cases. The
response-list at issue consists of a series of entries inventorying nonstandard reactions in a format something like:

(Ei) In pi percent of cases patients have responded in manner Mi.

But how are we—or our physicians—to determine in advance whether we belong to the particular group Gi of individuals to whom Ei applies? Is there a test Ti that provides predictive guidance through generalizations of the format:

(Xi) It is exactly those who pass test Ti who constitute the percentage group Pi of cases where patients respond in manner Mi.

Or in other words, is there a test providing an advance determination of those instances where Ei applies—a test that renders what is explainable ex post facto also predictable in advance?

The reality of it is that a striking phenomenon occurs in this connection. Namely: only in the first few instances—at best, that is, only for the first few Ei entries, say only for E1 and E2 and E3—will such a test ever actually be available in advance of the fact. For the most part—and effectively always for the lower end of our response-list—there just is no way of telling in advance why people react as they do, and no way of determining in advance of the fact which individuals will respond in that way. And in general, the rarer a nonstandard response to medication is, the more difficult it is to explain and the more difficult to predict.

A fundamental principle regarding the modus operandi of biomedicine in relation to its cognitive domestication is encountered here. That is, the rarer the phenomenon (in our case, the negative reaction to a medicament), and the less frequently it is encountered in the course of experience, the more difficult—the more costly in terms of time, effort, and resources—its cognitive domestication by way of explanation, prediction, and rational systematization will generally prove to be.
And the reason why we cannot predict the extremely rare nonstandard responses is not that they are inherently inexplicable, but rather that their explanation or prediction becomes unaffordable through the extent to which its realization would require advancing the cognitive state of the art. So what we have here is ultimately once again a limitation rooted in the finitude of resources, with the inability to achieve a desired result whose
ultimate ground lies—as is all too commonly the case—in an inability to afford it, thanks to that ever-increasing difficulty in overcoming resistance.

9. REACTIONS TO FINITUDE

When aspiration outreaches attainability there are really only two basic responses:

1. to remove the disparity between the two by an Epicurean curtailment of aspirations that restores an equilibrium with attainability, or else

2. to accept the disparity
   (a) with grudging annoyance and regret, or
   (b) with Stoic resignation in a realistic acceptance of the inevitable.

But in matters of human finitude where authentic limits are at issue, the second course seems to offer the most sensible option. There is, after all, little point in baying after an unattainable moon. It is perhaps not entirely correct to hold that understanding an infeasibility will automatically issue in its acceptance (that tout comprendre c'est tout accepter, to readjust a well-known French dictum). But all the same, Spinoza was clearly right in insisting that this is the rational thing to do. Ironically, however, the realization of complete rationality in the management of our affairs is itself one of those unrealizable aspirations that characterize our limitations as a species.

Still, when all is said and done, we do well to "push the envelope" in matters of realizing our desiderata. For it is a sound policy never to be readily overconfident that what is thought to be unattainable is actually so. In the absence of demonstrations to the contrary, the burden of proof is best assigned to the naysayers in matters of this sort.

10. CONCLUDING OBSERVATIONS REGARDING LIMITS AND FINITUDE

Finally, a few observations on theoretical issues. Rational agents are in principle capable of three distinct sorts of things—those lying in the realms of: knowledge, action, and evaluation
(of Kant’s theoretical, practical, and judgmental reason, respectively). And of course limits are possible in every direction—be it cognition or power or judgment. Now in theory these limits can diverge, and supposition can disconnect what fact has joined together. For example, one can conceive of a being whose knowledge is limited but whose power is not—who could accomplish great things if only he could conceive of them. Or again, one can envision a creature beset by value blindness, who is incapacitated from implementing the belief-desire mechanics of agency by an apathy that blinds him from seeing anything as desirable.

And even with regard to a single factor such as knowledge, incapacities of very different sorts can be envisioned. One can conceive of a highly intelligent being whose knowledge is confined to the necessary truths of logic and mathematics, but for whom all knowledge of contingent fact is out of range. Or again, one can imagine a being whose knowledge is geared to matters of fact, but who is wholly lacking in imagination, so that all matters of hypothesis and supposition remain out of view. The possibilities of capacity divergence are manifold. On this basis, the prospect of very different sorts of limits and limitations looms large before us. But the sobering fact remains that the definitive feature of man as a finite being is that we are, to some extent, subject to all of them.
NOTES

1. See Eugen Beutel, Die Quadratur des Kreises (Leipzig/Berlin: B. G. Teubner, 1913; 5th ed. 1951); C. H. Edwards, Jr., The Historical Development of the Calculus (New York/Heidelberg/Berlin: Springer-Verlag, 1979).

2. See A. W. J. G. Ord-Hume, Perpetual Motion: The History of an Obsession (New York: St. Martin’s Press, 1977).

3. See Mary Hesse, “Vacuum and Void,” The Encyclopedia of Philosophy (New York: Macmillan and Free Press), Vol. VII (1967), pp. 217–18.

4. For an instructive discussion of relevant issues see John D. Barrow, Impossibility (Oxford: Oxford University Press, 1998).

5. On these matters see the author’s Luck (New York: Farrar Straus Giroux, 1990).

6. For a lucid and instructive discussion of these issues see John Passmore, The Perfectibility of Man (London: Duckworth, 1970).
Chapter 4
ON COGNITIVE ECONOMICS

1. THE ECONOMIC DIMENSION OF KNOWLEDGE: COSTS AND BENEFITS

From the very start, students of cognition have approached the issues of epistemology from a purely theoretical angle, viewing this field not as the study of knowledge-acquisition but quite literally as the theory of knowledge-possession. In taking this approach, however, one loses sight of something critically important, namely that there is a seriously practical and pragmatic aspect to knowledge, in whose absence important features of the idea are destined to be neglected or, even worse, misunderstood. And in particular, what we are going to lose sight of is the profoundly economic dimension of knowledge development.

Knowledge indeed has a significant economic dimension because of its substantial involvement with costs and benefits. Many aspects of the way we acquire, maintain, and use our knowledge can be understood and explained properly only from an economic point of view. Attention to economic considerations regarding the costs and benefits of the acquisition and management of information can help us both to account for how people proceed in cognitive matters and to provide normative guidance toward better serving the aims of the enterprise. Any theory of knowledge that ignores this economic aspect does so at the risk of its own adequacy.

With us humans, the imperative to understanding is something altogether basic: things being as they are, we cannot function, let alone thrive, without knowledge of what goes on about us. The need for information, for knowledge to nourish the mind, is every bit as critical as the need for food to nourish the body. Cognitive vacuity or dissonance is as distressing to us as physical pain. Bafflement and ignorance—to give suspensions of judgment the somewhat harsher name that is their due—exact a substantial price from us. The quest for cognitive orientation in a difficult world represents a deeply practical requisite for us.
That basic demand for information and understanding presses in upon us and we must do (and are pragmatically justified in doing) what is needed for its satisfaction. For us, cognition is the most practical of matters because knowledge fulfils an acute practical need.
Homo sapiens has evolved within nature to fill the ecological niche of an intelligent being. The demand for understanding, for a cognitive accommodation to one’s environment, for “knowing one’s way about,” is among the most fundamental requirements of the human condition. Humans are Homo quaerens. We have questions for which we want and indeed need answers. The demand for information, for cognitive orientation in our environment, is as pressing a human need as that for food itself. We are rational animals and must feed our minds even as we must feed our bodies. In pursuing information, as in pursuing food, we have to settle for the best we can get at the time, regardless of its imperfections.

Throughout modern times philosophers have divided their field into two main domains, the theoretical and the practical, the one oriented to the articulation of knowledge, the other to matters of decision and action.1 The school of thought that most decidedly rejected this approach is that of Pragmatism. From Peirce to Dewey and beyond, pragmatists have argued that no Chinese Wall can be erected between theoretical and practical philosophy. They have sensibly maintained that both the development of knowledge itself and theorizing about it must be seen as a praxis, at the same time insisting that we must cultivate this praxis under the guidance of theory. Put in Kantian terms, their position is that praxis is blind without theory and theory empty without praxis.

Just this holistic view of the matter mandates an economic perspective. For any human activity—rational inquiry and theorizing included—demands the deployment of resources (time, energy, effort, ingenuity). And no view of reality is acceptable to us—no model of the world’s operation accessible—without the expenditure of effort and the risk of error.
Any adequate philosophically grounded understanding of the nature and ramifications of knowledge cannot but see information as a product that must be produced, systematized, disseminated, and utilized in ways that are inherent to economic processes in general. What interests philosophers and economists in their shared concern for cognition will doubtless differ. But one fundamental fact links their deliberations in symbiotic unity. A philosophy of knowledge that does not acknowledge and exploit the subject’s inherent economic realities is in significant measure destined to be an exercise in futility. As these deliberations indicate, the economic perspective of costs and benefits has a direct and significant bearing on matters of information acquisition and management.
The introduction of such an economic perspective does not of course detract from the value of the quest for knowledge as an intrinsically worthy venture with a perfectly valid l’art pour l’art aspect. But as Charles S. Peirce emphasized, one must recognize the inevitably economic aspect of any rational human enterprise—inquiry included. It has come to be increasingly apparent in recent years that knowledge is cognitive capital, and that its development involves the creation of intellectual assets, in which both producers and users have a very real interest. Knowledge, in short, is a good of sorts—a commodity on which one can put a price tag and which can be bought and sold. Like many another commodity, its acquisition involves not just money alone but other resources as well, such as time, effort, and ingenuity. And man is a finite being who has only limited time and energy at his disposal. So much for costs.

The benefits of knowledge are twofold: theoretical (or purely cognitive) and practical (or applied). A brief survey of the situation might look somewhat as in Display 1. Three things are thus fundamentally at issue here: (1) developing and enhancing our understanding of the world we live in by way of insight into matters of description and explanation; (2) averting unpleasant surprises by providing us with predictive insight; and (3) guiding effective action by enabling us to achieve at least partial control over the course of events.

Why pursue cognitive economy? Why strive for the most economical resolution of the issues before us? After all, we have no categorical assurance that the most economical resolution—the simplest, most uniform, most symmetrical, etc.—is actually correct. Rather, we do so because economy is a decider, a means to definiteness. Resources can be expended in this way or that, and economy eliminates alternatives. In cognition as in life, there will be many possible paths leading towards a destination, and alternatives proliferate here. But there is generally only one that is optimal in point of the economy of effort. Economy thus serves as a decision principle. It guides investigation. It need not lead immediately to the correct resolution. But in tracking economy along the pathway of the best-option alternatives, we are bound to arrive at the best available resolution in the most efficient and cost-minimizing way.
Display 1

COGNITIVE BENEFITS

I. Theoretical
—Answering our questions about how things stand in the world and how its processes function.
—Guiding our expectations by way of anticipation.

II. Practical
—Guiding our actions in ways that enable us to control (parts of) the course of events.
2. VIRTUE COMPLEMENTARITY IN EPISTEMOLOGY

So much for economy. Turning now to other philosophical concerns, let us begin with an epistemic illustration of Niels Bohr’s complementarity. It is a basic principle of this field that increased confidence in the correctness of our estimates can always be secured at the price of decreased accuracy. For in general an inverse relationship obtains between the definiteness or precision of our information and its substantiation: detail and security stand in a competing relationship. We estimate the height of the tree at around 25 feet. We are quite sure that the tree is 25 ± 5 feet high. We are virtually certain that its height is 25 ± 10 feet. But we can be completely and absolutely sure that its height is between 1 inch and 100 yards. Of this we are “completely sure” in the sense that we are “absolutely certain,” “certain beyond the shadow of a doubt,” “as certain as we can be of anything in the world,” “so sure that we would be willing to stake our life on it,” and the like.

For any sort of estimate whatsoever there is always a characteristic trade-off relationship between the evidential security of the estimate, on the one hand (as determinable on the basis of its probability or degree of acceptability), and on the other hand its contentual detail (definiteness, exactness, precision, etc.). And so these two factors—security and detail—stand in a relation of inverse proportionality, as per the picture of Display 2.
Display 2

THE COMPLEMENTARITY TRADE-OFF BETWEEN SECURITY AND DEFINITENESS IN ESTIMATION

[Graph: a hyperbolic curve s × d = c (constant), with increasing security (s) on the vertical axis and increasing detail (d) on the horizontal axis.]

NOTE: The shaded region inside the curve represents the parametric range of achievable information, with the curve indicating the limit of what is realizable. The concurrent achievement of great detail and security is impracticable.
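The trade-off depicted here can be put in explicit quantitative form (a schematic sketch of my own; the constant c merely fixes the level of the curve):

```latex
s \cdot d = c
\quad\Longrightarrow\quad
s = \frac{c}{d},
\qquad
\frac{ds}{dd} = -\frac{c}{d^{2}} < 0 .
```

So, for instance, doubling the demanded detail d halves the achievable security s: each gain in definiteness is paid for proportionately in substantiation.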
This situation was adumbrated by the French physicist Pierre Maurice Duhem (1861–1916) and may accordingly be called “Duhem’s Law.”2 In his classic work on the aim and structure of physical theory, Duhem wrote as follows:

A law of physics possesses a certainty much less immediate and much more difficult to estimate than a law of common sense, but it surpasses the latter by the minute and detailed precision of its predictions ... The laws of physics can acquire this minuteness of detail only by sacrificing something of the fixed and absolute certainty of common-sense laws. There is a sort of teeter-totter of balance between precision and certainty: one cannot be increased except to the detriment of the other.3
In effect, these two factors—security and detail—stand in a teeter-totter relation of inverse proportionality, much as with physical complementarity. In many cases the virtue complementarity at issue rests on a contingent basis. In the present case of security vs. detail, however, it is—or should be—clear that the complementarity at issue rests on conceptual rather than contingent considerations.
Or consider another epistemological example of such complementarity. There are only so many hours in a day, and thus only so many in a workday and a work-life. And so, if the extent/range of someone’s knowledge is measured by the number of items of information this individual ever processes, and the depth/profundity of this knowledge is measured by the amount of attention these items receive, we will have it that extent and depth trade off against one another in much the same see-saw manner as with the security/detail balance of Display 2. Accordingly, a further epistemic situation of virtue complementarity clearly obtains as between the extent and the depth of an individual’s knowledge. And in this instance the relationship at issue is not something contingent but lies in the inevitable nature of things. In the era of the knowledge explosion, exponential growth in the extent of information overall means exponential decay in the information of individuals—an era of exploding superficiality.

Continuing along such lines, let it be noted that there are two significantly different sorts of errors, namely errors of commission and errors of omission. For it is only too clear that errors of commission are not the only sort of misfortune there is. Ignorance, lack of information, cognitive disconnection from the world’s course of things—in short, errors of omission—are also negativities of substantial proportions, and this too is something we must work into our reckoning. Both are negativities and obviously need to be avoided insofar as possible in any sensible inquiry process. With error-avoidance in matters of cognition, the trade-off between errors of type 1 and errors of type 2—between false negatives and false positives—is critical in this connection. For instance, an inquiry process of any realistically operable sort is going to deem some falsehoods acceptable and some truths not.
And the more we fiddle with the arrangement to decrease the one sort of error, the more we manage to increase the other. The by now familiar teeter-totter relationship obtains here once more. For unfortunately the reality of it is that any given epistemic program—any sort of process or policy of belief formation—will answer to the situation of Display 3. In separating the sheep from the goats, any general decision process will either allow too many goats into the sheepfold or exclude too many sheep from its purview. The cognitive realities being what they are, perfection is simply unattainable here.
Display 3

THE PREDICAMENT OF COGNITIVE PROCEEDINGS

[See-saw diagram: errors of commission trade off against errors of omission.]
To be sure, agnosticism is a sure-fire safeguard against errors of commission in cognitive matters. If you accept nothing, then you accept no falsehoods. But error avoidance as such does not bring one much closer to knowing how pancakes are actually made. The aims of inquiry are not necessarily enhanced by the elimination of cognitive errors of commission. For if in eliminating such an error we simply leave behind a blank, and for a wrong answer substitute no answer at all, we have simply managed to exchange an error of commission for one of omission. Accordingly, a situation of desideratum complementarity obtains here, requiring a trade-off between the values at issue: how much gain in one is needed to compensate for how much loss in the other? And such situations crop up in many epistemic contexts, as Display 3 illustrates.

Overall, then, we face the reality that there is an effectively inevitable trade-off among the cognitive virtues. The benefits gained by an increase in one get counterbalanced by the cost of decrease in another. This state of things comes to the fore when we consider the problem of skepticism.

3. SKEPTICISM AND RISK

The scientific researcher, the inquiring philosopher, and the plain man all desire and seek information about the “real” world. The skeptic rejects their efforts as vain and their hopes as foredoomed to disappointment from the very outset. As he sees it, any and all sufficiently trustworthy information about factual matters is simply unavailable as a matter of general principle.
To put such a radical skepticism into a sensible perspective, it is useful to consider the issue of cognitive rationality in the light of the situation of risk taking in general. For cognitive efficacy calls for a judicious balance between vacuity and potential error. There are three very different sorts of personal approaches to risk, and three very different sorts of personalities corresponding to these approaches, as follows:

Type 1: Risk avoiders
Type 2: Risk calculators
   2.1: cautious
   2.2: daring
Type 3: Risk seekers

The type 1, risk-avoidance, approach calls for risk aversion and evasion. Its adherents have little or no tolerance for risk and gambling. Their approach to risk is altogether negative. Their mottos are: “Take no chances,” “Always expect the worst,” and “Play it safe.”

The type 2, risk-calculating, approach to risk is more realistic. It is a guarded middle-of-the-road position, based on due care and calculation. It comes in two varieties. The type 2.1, cautiously calculating, approach sees risk taking as subject to a negative presumption, which can, however, be defeated by suitably large benefits. Its line is: “Avoid risks unless it is relatively clear that a suitably large gain beckons at sufficiently auspicious odds.” It reflects the path of prudence and guarded caution. The type 2.2, daringly calculating, approach sees risk taking as subject to a positive presumption, which can, however, be defeated by suitably large negativities. Its line is: “Be prepared to take risks unless it is relatively clear that an unacceptably large loss threatens at sufficiently inauspicious odds.” It reflects the path of optimistic hopefulness.

The type 3, risk-seeking, approach sees risk as something to be welcomed and courted. Its adherents close their eyes to danger and take a rosy view of risk situations. The mind of the risk seeker is intent on the delightful situation of a favorable issue of events: the sweet savor of success is already in his nostrils.
Risk seekers are chance takers and go-for-broke gamblers. They react to risk the way an old warhorse responds to the sound of the musketry: with eager anticipation and positive relish. Their motto is: “Things will work out.”
In matters of cognition, the skeptic accepts nothing, the evidentialist only the chosen few, the syncretist virtually anything. In effect, the positions at issue in skepticism, syncretism, and evidentialism simply replicate, in the specifically cognitive domain, the various approaches to risks at large. It must, however, be recognized that in general two fundamentally different kinds of misfortunes are possible in situations where risks are run and chances taken:

1. We reject something that, as it turns out, we should have accepted. We decline to take the chance, we avoid running the risk at issue, but things turn out favorably after all, so that we lose out on the gamble.

2. We accept something that, as it turns out, we should have rejected. We do take the chance and run the risk at issue, but things go wrong, so that we lose the gamble.

If we are risk seekers, we will incur few misfortunes of the first kind, but, things being what they are, many of the second kind will befall us. On the other hand, if we are risk avoiders, we shall suffer few misfortunes of the second kind, but shall inevitably incur many of the first. The overall situation has the general structure depicted in Display 4.

Clearly, the reasonable thing to do is to adopt a policy that minimizes misfortunes overall. It is thus evident that both type 1 and type 3 approaches will, in general, fail to be rationally optimal. Both approaches engender too many misfortunes for comfort. The sensible and prudent thing is to adopt the middle-of-the-road policy of risk calculation, striving as best we can to balance the positive risks of outright loss against the negative ones of lost opportunity. Rationality thus counterindicates approaches of type 1 and type 3.
Instead, it takes the line of the counsel: “Neither avoid nor court risks, but manage them prudently in the search for an overall minimization of misfortunes.” The rule of reason calls for sensible management and a prudent calculation of risks; it standardly enjoins upon us the Aristotelian golden mean between the extremes of risk avoidance and risk seeking.
Display 4

RISK ACCEPTANCE AND MISFORTUNES

[Graph: number of (significant) misfortunes plotted against increasing risk acceptance (in % of situations, from 0 to 100). Misfortunes of kind 1 decrease, and misfortunes of kind 2 increase, as one moves from Type 1 (risk avoiders) through Type 2.1 (cautious calculators) and Type 2.2 (daring calculators) to Type 3 (risk seekers).]
Turning now to the specifically cognitive case, it will be clear that the skeptic succeeds splendidly in averting misfortunes of the second kind. He makes no errors of commission; by accepting nothing, he accepts nothing false. But, of course, he loses out on the opportunity to obtain any sort of information. The skeptic thus errs on the side of safety, even as the syncretist errs on that of gullibility. The sensible course is clearly that of a prudent calculation of risks. Ultimately, then, we face a question of value trade-offs. Are we prepared to run a greater risk of mistakes to secure the potential benefit of an enlarged understanding? In the end, the matter is one of priorities—of safety as against information, of ontological economy as against cognitive advantage, of an epistemological risk aversion as against the impetus to understanding. The ultimate issue is one of values and priorities, weighing the negativity of ignorance and incomprehension against the risk of mistakes and misinformation.
And here the skeptics’ insistence on safety at any price is simply unrealistic, and it is so on the essentially economic basis of a sensible balance of costs and benefits. Risk of error is worth running because it is unavoidable in the context of the cognitive project of rational inquiry. Here as elsewhere, the situation is simply one of nothing ventured, nothing gained.

Since Greek antiquity, various philosophers have answered our present question—Why accept anything at all?—by taking the line that man is a rational animal. Qua animal, he must act, since his very survival depends upon action. But qua rational being, he cannot act availingly, save insofar as his actions are guided by his beliefs, by what he accepts. This argument has been revived in modern times by a succession of pragmatically minded thinkers, from David Hume to William James. On the present perspective, then, it is the negativism of automatically frustrating our basic cognitive aims (no matter how much the skeptic himself may be willing to turn his back upon them) that constitutes the salient theoretical impediment to skepticism in the eyes of most sensible people. The crucial defect of skepticism is that it is simply uneconomic.

5. WHEN IS ENOUGH EVIDENCE ENOUGH?

The cognitive enterprise of information acquisition through rational inquiry is a vast venture in decision making—of deciding which questions to pursue, which possible question-resolutions to take seriously, which answers to accept as correct and (at least putatively) true. The last of these issues is particularly crucial. Almost always, further information on a given issue can be secured. So why not always simply suspend judgment until all possibly relevant information is at hand—which may well, of course, never happen? When is the evidence enough to give us rational warrant for endorsing/accepting an answer? Consider the following situation.
I face a decision, which I can either resolve now or later on, after a further inquiry which will cost me a certain amount C. Let us suppose that making the decision correctly brings a return of gain G, while making it incorrectly exacts a penalty of loss L in relation to G, and so will yield G − L. Moreover, I cannot at present determine the correct answer; as I see it, my decision can go either way, 50:50. Then at present my expectation is:

   ½ G + ½ (G − L) = G − ½ L
But after making the further inquiry needed to settle the matter—thus expending C—my expectation will be:

   G − C

Accordingly, undertaking that further inquiry will be worthwhile as long as

   G − C > G − ½ L

that is, as long as

   ½ L > C

The crux of the matter is—obviously enough—the comparison of the negativity of error with the cost of removing its prospect by further inquiry. To be sure, with respect to many issues the gathering of further evidence can prove to be not impracticable but pointless. To arrive at a diagnosis, the physician can observe his patient one more day, and he can insist that some further tests be done. But if the chances of error are very small and the penalty of error very small as well, there is little point in pushing for greater evidential assurance. The crux here is the fundamentally economic question of whether the advantages of a secure decision are cost-effectively worthwhile in relation to the disadvantages of delay and the added costs of further inquiry.

To be sure, some questions just cannot be resolved by any finite amount of effort. What is known as the “halting problem” in pure mathematics is relevant here. It envisions the prospect of calculations where there is never any way of deciding that we have done enough, where the prospect of a change of situation can never be ruled out at any particular stage whatsoever. Consider, for example, the decimal expansions of π and of √2, respectively:

   3.14159 ...
   1.41421 ...
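These two expansions can be compared digit by digit by machine—a computational sketch of my own, not part of the text. The code below computes both to arbitrary precision and searches for the first position at which they agree for a given run of places; the point of the exercise is that a finite search that comes up empty settles nothing, which is exactly the predicament at issue.

```python
from decimal import Decimal, getcontext

def arctan_recip(x: int, digits: int) -> Decimal:
    """arctan(1/x) by its alternating Taylor series, in Decimal arithmetic."""
    getcontext().prec = digits + 10
    eps = Decimal(10) ** -(digits + 10)
    power = Decimal(1) / x          # holds 1/x^(2k+1)
    total = power
    k = 0
    while power > eps:
        k += 1
        power /= x * x
        term = power / (2 * k + 1)
        total += term if k % 2 == 0 else -term
    return total

def pi_fraction_digits(n: int) -> str:
    """First n digits of pi after the decimal point, via Machin's formula."""
    pi = 4 * (4 * arctan_recip(5, n) - arctan_recip(239, n))
    return str(pi)[2:2 + n]          # drop the leading "3."

def sqrt2_fraction_digits(n: int) -> str:
    """First n digits of sqrt(2) after the decimal point."""
    getcontext().prec = n + 10
    return str(Decimal(2).sqrt())[2:2 + n]   # drop the leading "1."

def first_run_match(a: str, b: str, run: int):
    """Smallest index i with a[i:i+run] == b[i:i+run], else None."""
    for i in range(min(len(a), len(b)) - run + 1):
        if a[i:i + run] == b[i:i + run]:
            return i
    return None

if __name__ == "__main__":
    n = 1000
    p, s = pi_fraction_digits(n), sqrt2_fraction_digits(n)
    for run in (1, 2, 5):
        # None here means only "not within the first n places"--never "never"
        print(run, first_run_match(p, s, run))
```

However large n is chosen, a result of None for run = 100 leaves the existence question exactly as open as before: the search can only confirm, never refute.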
And now pose the question of the existence of a value of n at which the nth place of the π expansion duplicates that of the √2 expansion for 100 successive times. No matter how far we carry out our calculations with negative results, we can never rule out that some value of n will eventually do the job. Such issues are matters of speculative conjecture that intrigue and challenge theorists. Even where they cannot be settled, the attempt at their resolution often brings interesting and far-reaching facts to light. But were it not for those collateral benefits of potential value in other contexts, a stage could well be reached where the expenditure of further effort on the matter would simply not be worthwhile.

6. DIMINISHING RETURNS

Quality and quantity are factors that play every bit as significant a role in matters of knowledge as elsewhere. They are usually seen as terms of contrast. So regarded, the idea of quantifying quality would be seen as a contradiction in terms. But this does not do justice to the actual situation—and in particular not as regards knowledge. Knowledge in effect is high-grade information—information that is cognitively significant. And the significance of incrementally new information can be measured in terms of how much it adds, and thus by the ratio of the increment of new information to the volume of information already in hand: ΔI/I. Thus knowledge-constituting significant information is determined through the proportional extent of the change effected by a new item in the preexisting situation (independently of what that preexisting situation is). In milking additional information for cognitively significant insights it is generally the proportion of the increase that matters: its percentage rather than its brute amount. And so, with high-quality information or knowledge it is a crucial matter of how much a piece of information ΔI adds to the total of what was available heretofore, I.
Looking from this perspective at the development of knowledge as a sum-total of such augmentations we have it that the total of high-grade information comes to:

K = ∫ dI/I ≈ log I
On this basis, viewing knowledge as significant information we have it that the body of knowledge stands not as the mere amount of information to date but rather as its logarithm. We have here an epistemic principle that might be called The Law of Logarithmic Returns.

The Law of Logarithmic Returns has substantial implications for the rate of scientific progress.4 For while one cannot hope to predict the content of future science, the knowledge/information relationship does actually put us into a position to make plausible estimates about its volume. To be sure, there is, on this basis, no inherent limit to the possibility of future progress in scientific knowledge. But the exploitation of this theoretical prospect gets ever more difficult, expensive, and demanding in terms of effort and ingenuity. New findings of equal significance require ever greater aggregate efforts. Accordingly, the historical situation has been one of a constant progress of science as a cognitive discipline notwithstanding its exponential growth as a productive enterprise (as measured in terms of resources, money, manpower, publications, etc.).5 If we look at the cognitive situation of science in its quantitative aspect, the Law of Logarithmic Returns pretty much says it all. On its perspective, the struggle to achieve cognitive mastery over nature presents a succession of ever-escalating demands, with the exponential growth in the enterprise associated with a merely linear growth in the discipline.

7. PLANCK’S PRINCIPLE

It is not too difficult to come by a plausible explanation for the sort of information/knowledge relationship that is represented by our K ≈ log I measure. The principal reason for such a K/I imbalance may lie in the efficiency of intelligence in securing a view of the modus operandi of a world whose law-structure is comparatively simple. For here one can learn a disproportionate amount of general fact from a modest amount of information. 
(Note that whenever an infinite series of 0’s and 1’s, as per 01010101 ..., is generated—as this series indeed is—by a relatively simple law, then this circumstance can be gleaned from a comparatively short initial segment of this series.) In rational inquiry we try the simple solutions first, and only if and when they cease to work—when they are ruled out by further findings (by some further influx of coordinating information)—do we move on to the more complex. Things go along smoothly until an oversimple solution becomes destabilized by enlarged experience. We get by with the comparatively simpler options until the expanding information about the world’s
modus operandi made possible by enhanced new means of observation and experimentation demands otherwise. But with the expansion of knowledge new accessions set ever increasing demands. The implications for cognitive progress of this disparity between mere information and authentic knowledge are not difficult to discern. Nature imposes increasing resistance barriers to intellectual as to physical penetration. Consider the analogy of extracting air to create a vacuum. The first 90 % comes out rather easily. The next 9 % is effectively as difficult to extract as all that went before. The next .9 % is proportionally just as difficult. And so on. Each successive order-of-magnitude step involves a massive cost for lesser progress; each successive fixed-size investment of effort yields a substantially diminished return. The circumstance that the increase of information carries with it a merely logarithmic return in point of increased knowledge suggests that nature imposes a resistance barrier to intellectual as much as to physical penetration. Intellectual progress is exactly the same: when we extract actual knowledge (i.e., high-grade, nature-descriptively significant information) from mere information of the routine, common “garden variety,” the same sort of quantity/quality relationship obtains. Initially a sizable proportion of the available information is high-grade, but as we press further this proportion of what is cognitively significant gets ever smaller. To double knowledge we must quadruple information. As science progresses, the important discoveries that represent real increases in knowledge are surrounded by an ever vaster penumbra of mere items of information. (The mathematical literature of the day yields an annual crop of over 200,000 new theorems. 
6) In the ongoing course of scientific progress, the earlier investigations in the various departments of inquiry are able to skim the cream, so to speak: they take the “easy pickings,” and later achievements of comparable significance require ever deeper forays into complexity and call for ever-increasing bodies of information. (And it is important to realize that this cost-increase is not because latter-day workers are doing better science, but simply because it is harder to achieve the same level of science: one must dig deeper or search wider to achieve results of the same significance as before.) This situation is reflected in Max Planck’s appraisal of the problems of scientific progress. He wrote that “with every advance [in science] the difficulty of the task is increased; ever larger demands are made on the achievements of researchers, and the need for a suitable division of labor becomes more pressing.”7 The Law of Logarithmic Returns would at once
both characterize and explain this circumstance of what can be termed Planck’s Principle of Increasing Effort, to the effect that substantial findings are easier to come by in the earlier phase of a new discipline and become ever more difficult in the natural course of progress. A great deal of impressionistic and anecdotal evidence certainly points towards the increasing costs of high-level science. Scientists frequently complain that “all the easy researches have been done.”8 The need for increasing specialization and division of labor is but one indication of this. A devotee of scientific biography cannot help noting the disparity between the immense output and diversified fertility in the productive careers of the scientific colossi of earlier days and the more modest scope of the achievements of their latter-day successors. As science progresses within any of its established branches, there is a marked increase in the over-all resource-cost of realizing scientific findings of a given level of intrinsic significance (by essentially absolutistic standards of importance).9 And this at once explains a change in the structure of scientific work that has frequently been noted: first-rate results in science nowadays come less and less from the efforts of isolated workers and more and more from cooperative efforts in the great laboratories and research institutes.10 The idea that science is subject not only to a principle of escalating costs but also to a law of diminishing returns is due to the 19th century American philosopher of science Charles Sanders Peirce (1839–1914). 
In his pioneering 1878 essay on “Economy of Research” Peirce put the issue in the following terms: We thus see that when an investigation is commenced, after the initial expenses are once paid, at little cost we improve our knowledge, and improvement then is especially valuable; but as the investigation goes on, additions to our knowledge cost more and more, and, at the same time, are of less and less worth. All the sciences exhibit the same phenomenon, and so does the course of life. At first we learn very easily, and the interest of experience is very great; but it becomes harder and harder, and less and less worthwhile ... (Collected Papers, Vol. VII [Cambridge, Mass., 1958], sect. 7.144.)
The growth of knowledge over time involves ever-escalating demands. Progress is always possible—there are no absolute limits. More information will always yield additional knowledge. For the increase of knowledge over time stands to the increase of information in a proportion fixed by the inverse of the volume of already available information:
dK/dt ≈ d(log I)/dt = (1/I)(dI/dt)

The more knowledge we already have in hand, the slower (by very rapid decline) will be the rate at which knowledge grows with newly acquired information. As noted above, with the progress of inquiry, the larger the body of available information, the smaller will be the proportion of this information that represents real knowledge.

Consider an example. In regard to the literature of science, it is readily documented that the number of books, of journals, and of journal-papers has been increasing exponentially over the recent period.11 Indeed, it is a familiar fact that scientific information has been growing at an average of some 5 percent annually throughout the last two centuries, manifesting exponential growth with a doubling time of ca. 15 years—an order-of-magnitude increase roughly every half century. By 1960, some 300,000 different book titles were being published in the world, and the two decades from 1955 to 1975 saw the doubling of titles published in Europe from around 130,000 to over 270,000,12 and science has had its full share of this literature explosion. The result is a veritable flood of scientific literature. As Display 5 indicates, it can be documented that the number of scientific books, of journals, and of journal-papers has been increasing at an exponential rate over the recent period. It is reliably estimated that, from the start, about 10 million scientific papers have been published and that currently some 30,000 journals publish some 600,000 new papers each year.

However, let us now turn attention from scientific production to scientific progress. The picture that confronts us here is not quite so expansive. For there is in fact good reason for the view that the substantive level of scientific innovation has remained roughly constant over the last few generations. 
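The growth figures just cited, and the merely linear growth of knowledge that the K ≈ log I relationship assigns to exponentially growing information, can be checked with a few lines of arithmetic (a Python sketch, purely illustrative):

```python
import math

# (a) Information growing at some 5 percent annually: the cited doubling
# time (ca. 15 years) and order-of-magnitude time (ca. half a century).
rate = 0.05
doubling_time = math.log(2) / math.log(1 + rate)
tenfold_time = math.log(10) / math.log(1 + rate)
print(f"doubling time: {doubling_time:.1f} years")   # about 14
print(f"tenfold time:  {tenfold_time:.1f} years")    # about 47

# (b) With K = log I, each doubling of the information I adds the same
# constant increment (log 2) to the knowledge K: exponential growth of
# the enterprise, merely linear growth of the discipline.
K = [math.log(2 ** t) for t in range(5)]             # I doubles each period
increments = [later - earlier for earlier, later in zip(K, K[1:])]
print(all(abs(d - math.log(2)) < 1e-9 for d in increments))   # True
```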
This contention—that while scientific efforts have grown exponentially, nevertheless the production of really high-level scientific findings has remained constant—admits of various substantiating considerations. One indicator of this constancy in high-quality science is the relative stability of honors (medals, prizes, honorary degrees, memberships in scientific academies, etc.). To be sure, in some instances these reflect a fixed-number situation (e.g., Nobel prizes in natural science). But if the volume of clearly first-rate scientific work were expanding drastically, there would be mounting pressure for the enlargement of such honorific awards and mounting discontent with the inequity of the present reward-system. There
____________________________________________________________

Display 5
THE NUMBER OF SCIENTIFIC JOURNALS AND ABSTRACT JOURNALS FOUNDED, AS A FUNCTION OF DATE

[Semi-logarithmic chart, 1665 to 2000: the number of scientific journals founded, and later of abstract journals, climbs exponentially with date on a vertical scale running from 10 to 1,000,000.]

Source: Derek J. de Solla Price, Science Since Babylon (New Haven: Yale University Press, 1961).
____________________________________________________________

are no signs of this. A host of relevant considerations thus conspire to indicate that while science grows exponentially as a productive enterprise, its growth as an intellectual discipline proceeds at a merely constant and linear pace.13
NOTES 1
The ancients favored a tripartite division of discourse: Logic and Language, Theoretical Philosophy, and Natural Philosophy. With the last setting up shop on its own as Natural Science, the former two morphed into the dichotomy of the theoretical and the practical, engendering a duality in the arrangement of university chairs that continues in Scandinavia to the present day.
2
Here at any rate eponyms are sometimes used to make the point that the work of the person at issue has suggested rather than originated the idea or principle at issue.
3
La théorie physique: son objet, et sa structure (Paris: Chevalier and Rivière, 1906); tr. by Philip P. Wiener, The Aim and Structure of Physical Theory (Princeton: Princeton University Press, 1954), pp. 178–79. Italics supplied.
4
It might be asked: “Why should a mere accretion in scientific ‘information’—in mere belief—be taken to constitute progress, seeing that those later beliefs are not necessarily true (even as the earlier ones were not)?” The answer is that they are in any case better substantiated—that they are “improvements” on the earlier ones by way of the elimination of shortcomings. For a more detailed consideration of the relevant issues, see the author’s Scientific Realism (Dordrecht: D. Reidel, 1987).
5
To be sure, we are caught up here in the usual cyclic pattern of all hypothetico-deductive reasoning. In addition to explaining the various phenomena we have been canvassing, that projected K/I relationship is in turn substantiated by them. This is not a vicious circularity but simply a matter of the systemic coherence that lies at the basis of inductive reasonings. Of course the crux is that there should also be some predictive power, which is exactly what our discussion of deceleration is designed to exhibit.
6
See Stanislaw M. Ulam, Adventures of a Mathematician (New York: Scribner, 1976).
7
Max Planck, Vorträge und Erinnerungen, 5th ed. (Stuttgart, 1949), p. 376; italics added. Shrewd insights seldom go unanticipated, so it is not surprising that other theorists should be able to contest claims to Planck’s priority here. C. S. Peirce is particularly noteworthy in this connection.
8
See William George, The Scientist in Action (New York: Arno Press, 1936), p. 307. The sentiment is not new. George Gore vainly lambasted it 100 years ago: “Nothing can be more puerile than the complaints sometimes made by certain cultivators of a science, that it is very difficult to make discoveries now that the soil has been exhausted, whereas they were so easily made when the ground was first broken ...” The Art of Scientific Discovery (London: Longmans, Green, and Co., 1878), p. 21.
9
The following passage offers a clear token of the operation of this principle specifically with respect to chemistry: Over the past ten years the expenditures for basic chemical research in universities have increased at a rate of about 15 per cent per annum; much of the increase has gone for superior instrumentation, [and] for the staff needed to service such instruments ... Because of the expansion in research opportunities, the increased cost of the instrumentation required to capitalize on these opportunities, and the more highly skilled supporting personnel needed for the solution of more difficult problems, the cost of each individual research problem in chemistry is rising rapidly. (F. H. Wertheimer et al., Chemistry: Opportunities and Needs [Washington, D.C., 1965; National Academy of Sciences/National Research Council], p. 17.)
10
The talented amateur has virtually been driven out of science. In 1881 the Royal Society included many fellows in this category (with Darwin, Joule, and Spottiswoode among the more distinguished of them). Today there are no amateurs. See D. S. C. Cardwell, “The Professional Society” in Norman Kaplan (ed.), Science and Society (Chicago: Rand McNally, 1965), pp. 86–91 (see p. 87).
11
Cf. Derek J. de Solla Price, Science Since Babylon, 2nd ed. (New Haven, CT: Yale University Press, 1975), and also Characteristics of Doctoral Scientists and Engineers in the University System, 1991 (Arlington, VA: National Science Foundation, 1994); Document No. 94–307.
12
Data from An International Survey of Book Production During the Last Decades (Paris: UNESCO, 1985).
13
Further material relevant to the themes of this chapter can be found in the author’s Scientific Progress (Oxford: Blackwell, 1976), and Epistemetrics (Cambridge: Cambridge University Press, 2006).
Chapter 5
THE UNEASY UNION OF IDEALITY AND PRAGMATISM IN INQUIRY

1. IDEALIZATION AS A UNIFYING PRINCIPLE

Idealization provides the key to understanding various fundamental philosophical relationships. Granted, an ideal is something “unrealistic,” something that is not actually realizable and attainable. It looks to a completion and perfection that is not to be achieved under the obstreperous conditions of a difficult reality. Nevertheless, it is an eminently useful resource because it serves as a constant reminder that what we actually have is imperfect and improvable, and thereby offers us a constant challenge to endeavor to improve on what we actually have. Moreover—and this is the presently crucial point—idealization provides for us a conceptual instrument by whose means some key philosophical ideas can be explained and relationships understood.

To see this idea at work, consider some of the traditional philosophical contrasts and dichotomies: appearance/reality; phenomena/actuality; what seems/what is; what we think/what actually is; belief/fact. Here we appear to be confronting opposites that glower at one another across a gulf of seemingly insurmountable differentiation, challenging philosophers with a seemingly insuperable barrier. To all appearances there is just no way of getting there from here. But just here idealization comes to the rescue by affording a convenient means of building a bridge across this seemingly impassable barrier and effecting a viable connection between its seemingly opposed contrasts. The key idea is conveyed by the following instances:

• Reality is not disconnected from appearance: it just exactly is what would appear in ideal conditions.

• Fact is not disconnected from belief: it just exactly is what belief would maintain in ideal conditions.

• What is is not disconnected from what seems: it is what would seem to be so in ideal conditions.
• What is just is not disconnected from what is lawful: it is what would be lawful in ideal conditions.

As these examples illustrate, idealization provides an effective means for coordinating and connecting certain seemingly opposed philosophical contrasts.

2. WHY IDEALS ARE UNREALISTIC: DESIDERATUM COMPLEMENTARITY

The problem with idealizations is of course that they are not effectively realizable as such. And this is so for a deep-rooted and compelling reason. For we here encounter the phenomenon of what might be called desideratum complementarity. It lies in the nature of things that their desirable features are in general competitively interactive. A conflict or competition among desiderata is an unavoidable fact of life, seeing that positivities cannot all be enhanced at once: more of the one can only be realized at the expense of less of the other. All too often parameters of merit are linked (be it through a nature-imposed or a conceptually mandated interrelationship) in a seesaw or teeter-totter interconnection where more of the one automatically ensures less of the other.

Situations of trade-off along these general lines occur in a wide variety of contexts, and many parameters of merit afford instances of this phenomenon. Thus as the medieval knight-in-armor soon learnt to his chagrin, safety and mobility are locked into a conjunction-resistant conflict when it comes to dealing with his armor. And automobile manufacturers of the present confront pretty much the same problem. Or consider the homely situation of a domestic garden. On the one hand we want the garden of a house to be extensive—to provide privacy, attractive vistas, scope for diverse planting, and so on. But on the other hand we also want the garden to be small—affordable to install, convenient to manage, affordable to maintain. But of course we can’t have it both ways: the garden cannot be both large and small. The desiderata at issue are locked into a see-saw of conflict. 
Overall, desideratum complementarity is pretty well inevitable with any complex, multidimensional good whose overall merit hinges on the cooperation of several distinct value-components. In all such cases we have a teeter-totter, see-saw relationship of the general sort here characterized as desideratum complementarity. Beyond a certain point, augmentations of
the one are simply incompossible with augmentations of the other (to use Leibniz’s terminology). There is always a trade-off curve that characterizes the decrease in one parameter of value that is the unavoidably exacted price for an increase in the other.

3. THE INEVITABILITY OF COMPROMISES IN INQUIRY

Such situations of complementarity are also encountered in the context of inquiry. With higher standards of acceptability we plunge into errors of omission. With lower standards we plunge into errors of commission—and even inconsistency. And yet we cannot have it both ways but must settle for an imperfect compromise. Such clashes occur also in matters of inquiry and cognition. The classic illustrations are:

• security/definiteness
• reliability/detail
• informativeness/vulnerability
• plurality/newsworthiness

And these conflicts are present both at the local level of individual theses and contributions and at the global level of theories and systems. Here, with respect to cognitive engineering, the situation is analogous with that of physical engineering. In physical engineering we overdesign. We prepare for worst-case scenarios; we indulge an excess of caution; we protect against disasters. But such insurance is neither cost-free nor risk-free. The more complex and ambitious the overall mechanism (physical or cognitive system), the more vulnerable it becomes to the prospect of a system failure. With cognitive as with physical systems, the less we ask of them by way of sophistication and ambitiousness of operation, the further we reduce the prospect of malfunction. And yet we pay a substantial price.
Display 1

HYPERBOLICALLY IMAGINABLE FAILINGS
   BARELY CONCEIVABLE FAILINGS
      REASONABLE FAILINGS
         LIKELY FAILINGS
The situation in matters of knowledge is similar. What do we do when the things we accept on rationally cogent grounds prove to be collectively inconsistent? We launch into damage control. We seek out the weakest link within the context of discussion. We do what investors do when market conditions turn difficult—we opt for safety. Yet we do not—cannot—provide for absolute security against everything, however fanciful, unrealistic, and hyperbolic. Nor can we do this in cognitive engineering. We cannot achieve absolute security against Descartes’ all-powerful evil deceiver nor against the skeptic for whom all life is but a dream. There are no defenses against unrealism—save by stressing its very nature. We have to worry about the possibilities of failure, but we must do so in the face of the realization that they come at very different levels of expectability. (See Display 1.) In realistic cognitive management we enlarge the domain of worriment only as far as we need to in order to meet the discernible realities of the situation. We calculate the trade-off costs and benefits: ignorance and unknowing as against incorrectness and error. The whole process is an exercise not in theoretical reasoning on the basis of abstract general principles but in practical sagacity in the management of resources in the face of costs and gains, of risks and benefits.
4. FROM IDEALIZATION TO PRAGMATIC AND CONTEXTUAL OPTIMIZATION

In the management of information we deploy certain rules and regulations. For—to reemphasize—information management is cognitive engineering. It is, in the final analysis, a process that is structurally and fundamentally not altogether different from bricklaying or plastering. And in actual practice this calls for a negotiation between the (obtainable) realities of the situation and the (unachievable) idealities that prevail in the domain. In matters of knowledge as in matters of politics, “pragmatism” is a position based on compromise and accommodation—of adjustment (perhaps with reluctance and regret) to the oft-unwelcome reality of things. It is a position of sub-idealization, of rational resignation. Its rationale lies in the idea that since it is in principle impossible for us to have things be as we would ideally like, we are thereby constrained to do the best that is practicable in the circumstances. The realities we have must serve as placeholders for the idealities we seek. And these circumstances have to be understood as being defined not by the unrealizable idealities of the matter but rather by the prevailing conditions of the existing situation at hand.

It is—as pragmatism sees it—the purposive fabric of the situation that is the arbiter of the adequacy of our problem-resolutions—even within the domain of inquiry and cognition. The questions we confront always arise in circumstances where there is something that we expect the answer to do for us—some purpose it is expected to serve. Perhaps this is only the purely cognitive benefit of allaying our uncertainty—removing our ignorance and unknowing. Here nothing narrowly practical is at stake. (In such cases we can afford to set very high standards.) Even here purpose is still upon the scene, and delaying a decision forever is not practicable for a being whose lifespan is finite. 
Desideratum complementarity is an ineliminable feature of the real. The world’s furnishings are inevitably such that any merit of a thing is a complex that disassembles into a plurality of subordinate merits each one of which conflicts with some of the rest. And this means that absolute perfection—now understood as maximal merit in every evaluation-relevant respect—is something that is in principle impossible of realization. (Structures can have the merit of being livable and enduring, but each will defeat the other. Pyramids endure, but are fit only for the dead; cabins are livable, but subject to decay.)
Desideratum complementarity has the consequence that to envision an ideal that optimizes matters in every desirable direction is to suspend realism and take flight in pure fancy. In matters of enhancing merit, an advance in one direction involves retreat in another. To optimize we must compromise—strive for the best achievable balance of merits. But here we at once come face to face with the question: Best for what? For there just is no absolute best—no “best for everything all-at-once.” Optimization is unavoidably contextual, ineliminably purpose-conditioned. It is, in sum, a pragmatic, purpose-coordinated process. And in view of the complexities that arise in this connection, there just is no purposively context-free, all-in best—no absolute or categorical optimization. And just here lies the unavoidability of pragmatism even in “purely theoretical” matters of inquiry and value alike. And in the end the ironic fact remains that even in matters of theoria—of rational inquiry and cognitive development—considerations of praxis, of purpose-realization, must be determinative for our theorizing.

Somewhat ironically, the key to reality is afforded by idealization. Matters in reality are just what inquiry would indicate them to be in ideal conditions. Ontology is idealized epistemology. Reality just is what it reveals itself to be in ideal circumstances, where what is revealed is of course not a matter of observation alone but encompasses conceptualization as well. Such an approach averts the Kantian gulf between a realm of appearances and actualities as such. Those notorious “things in themselves” are now simply things as they would appear in idealized conditions of observation and conceptualization. The things that are reality’s actual furnishings are in principle self-revealing, albeit only under ideal conditions.
Chapter 6
ON INCONSISTENCY AND PROVISIONAL ACCEPTANCE

1. THE PLUSSES AND MINUSES OF PROVISIONAL ACCEPTANCE

As traditional epistemology sees it, the endorsement of claims is a dichotomous matter: they are either accepted or not; there is no really comfortable halfway house in between. And acceptance is subject to one absolutely crucial and non-negotiable demand, viz. that the totality of what is accepted be logically consistent—that contradiction and inconsistency be altogether absent from the realm of the accepted. Acceptance, so it is held, is acceptance-as-true, and the truth must be consistent.

However, in contrast to this traditional approach to endorsement, a different and variant approach can also be contemplated, one that sees acceptance in a decidedly more murky light. For two very different epistemic stances towards a contention are possible, namely (1) outright acceptance as actually true, and (2) provisional or conditional acceptance as presumably (or putatively) true. Both of these modes of acceptance are just that, modes of acceptance, and whatever we accept is in a position to furnish us with information. But we ask less of provisional acceptance, and in particular the requirement for consistency is dropped. Granted, the idea of “accepting” inconsistent claims looks to be a contradiction in terms. And yet the weakened or guarded mode of acceptance can readily issue in inconsistent acceptances. With provisional acceptance we obtain a manifold of statements whose epistemic status is viewed not as that of firmly accepted truths but rather as that of more cautiously endorsed plausibilities. And yet here too the acceptance of a contention is to be seen as involving its availability for inference and reasoning, and informational gap-filling in general. But how can this sort of thing be made to work?

2. THE PATHWAY TO INCONSISTENCY

First off, however, why should one adopt this weakened provisional—or, pejoratively, “watered down”—mode of acceptability? 
For the good and sufficient reason that information does not always come our way in the form of certified and uncontestable truths. We have questions and need answers. And since we have to exploit all of the promising possibilities at our disposal for securing them, we sometimes clutch at straws. After all, inconsistency is not vacuity. On the contrary, it arises from cognitive overcommitment. In theory, three sorts of cases can arise, according as the data at our disposal provide enough, too little, or too much information to underwrite a conclusion. The general situation at issue here can be seen in the following example of three pairs of equations in two unknowns, which poses the problem of determining the values of the parameters x and y:

Case I (Too little):   x + y = 2;   3x + 3y = 6
Case II (Enough):      x + y = 3;   x – y = 3
Case III (Too much):   x + y = 4;   2x + 2y = 5
In the first case there is too little information to make a determination possible, in the second case just enough, and in the third too much. For the information overload of case III creates an inconsistent situation. This inherent conflict of the claims at issue renders them collectively aporetic and thereby paradoxical. And yet even inconsistent data can provide information and provide answers to our questions. They enable us to answer questions that would leave us perplexed and undecided in their absence. If all we need to know is whether x + y is less than 5, even case III has an answer for us.

One of the prime directives of rationality is to restore consistency in such situations of inconsistency. And yet once consistency is lost, how is it to be regained? How is useful information to be extracted from inconsistent premisses?

3. INCONSISTENCY REMOVAL AND CONDITIONAL POSSIBILITIES

Any group of collectively inconsistent propositions spells out a set of possibilities via the different ways of making deletions to form maximal consistent subsets. For example, consider the following, obviously inconsistent claims:

(1) A < B
(2) B < C
(3) C < A

We have three possibilities for extruding inconsistency from this inconsistent triad:

I. Dismiss premiss (1), thus arriving at B < C < A
II. Dismiss premiss (2), thus arriving at C < A < B
III. Dismiss premiss (3), thus arriving at A < B < C

Note, however, that no matter which way we twist and turn to reestablish consistency here, certain possibilities will be excluded, as for example:

IV. C < B < A
V. A < C < B
VI. B < A < C

Our inconsistent trio thus engenders a correlative mode of possibility/necessity: I–III all become (conditionally) possible, while IV–VI become (conditionally) impossible—and their negations consequently become (conditionally) necessary. And only in extreme circumstances would the set of claims so secured as conditionally possible relative to a set of collectively inconsistent propositions cover the entire spectrum of (categorically unconditional) possibility. Relative possibility is thus generally circumscribed. And we may define the possibility-range (◊-range) of a set of collectively inconsistent propositions as the set of all categorically unconditional possibilities that are conditionally possible relative to the inconsistent propositions at issue.
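The dismissals enumerated above can be checked mechanically by testing each premiss against every ordering of A, B, and C (a Python sketch, purely illustrative; the function names are my own):

```python
from itertools import permutations

# The three collectively inconsistent premisses of the text, each read as
# a condition on the rank-order of A, B, C.
premisses = {
    "(1)": lambda rank: rank["A"] < rank["B"],
    "(2)": lambda rank: rank["B"] < rank["C"],
    "(3)": lambda rank: rank["C"] < rank["A"],
}

def orderings_satisfying(kept):
    """All orderings of A, B, C (least to greatest) consistent with the
    retained premisses."""
    result = []
    for perm in permutations("ABC"):
        rank = {item: place for place, item in enumerate(perm)}
        if all(premisses[name](rank) for name in kept):
            result.append(" < ".join(perm))
    return result

# All three premisses jointly: no ordering survives (the inconsistency).
print(orderings_satisfying(["(1)", "(2)", "(3)"]))   # []

# Dismissing one premiss at a time restores consistency:
print(orderings_satisfying(["(2)", "(3)"]))   # ['B < C < A']
print(orderings_satisfying(["(1)", "(3)"]))   # ['C < A < B']
print(orderings_satisfying(["(1)", "(2)"]))   # ['A < B < C']
```

The three surviving orderings are exactly I–III; the excluded orderings IV–VI appear under no dismissal, which is why their negations rank as conditionally necessary.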
Thus consider a trigraphic minispace: a grid of nine compartments arrayed in three rows and three columns, as per:

[3 × 3 grid of empty compartments]
Let it be postulated that an X (just exactly one) is located somewhere within this setting, and then consider the now incompatible pair of premisses:

(1) X is not in rows 1 or 2
(2) X is not in rows 2 or 3

There are now just two minimally dismissive ways in which this set of premisses can be reduced to consistency, viz., the two singletons consisting of (1) or (2) alone. The following two steps are accordingly consistency restorative:

I. Dismiss premiss (1) and retain only (2): X is in Row 1
II. Dismiss premiss (2) and retain only (1): X is in Row 3

Either of these now states a (conditionally) possible state of affairs. And accordingly

III. X is not in Row 2

here becomes a claim of (conditional) necessity. The inconsistency at issue has not left us altogether empty-handed.

4. WHY NOT CONSTRAIN CONSISTENCY FROM THE OUTSET?

But why deal with alternative possibilities rather than simply constrain consistency from the very outset? Consider once more the previous situation of a trigraphic mini-region with one single X located within. And now suppose the following series of claims regarding the placement of the X at issue.
(1) Not in rows 2, 3
(2) Not in columns 1, 2
(3) Not in columns 2, 3

In the circumstances we cannot endorse all of these claims. And only two maximal subsets of the trio (1)–(3) are self-consistently available, namely (1) & (2) and (1) & (3), which place X respectively in the top right compartment and in the top left compartment.

Of course we cannot have it both ways—that is the burden of inconsistency. But we do have a great deal of information which is bound to obtain in either case—e.g., that X is not in the middle column, and that X is in the top row.

On the other hand, suppose we insisted on enforcing consistency and so stipulated from the outset that one or the other of (2) and (3) be abandoned. We would then get a perfectly definite answer to our question, say (for discussion's sake) that of the left trigraph. But this would not actually be a satisfactory result. For this resolution is too definite and fails to be adequate to the informative situation represented by (1)–(3). Specifically, it gives an incorrect and inappropriate answer to the question "In view of the given information, might X not actually be located in the upper left-hand partition?" Enforcing consistency in these cases of inconsistent data is a discursive measure that violates the informative realities of the situation.

As this example shows, a body of inconsistent claims spells out a limited range of alternative possibilities. And if we enforce consistency—one way or another—so as to fix upon a particular definite situation, we will in fact lose information as to what might be so for aught that we really know to the contrary.
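The information that survives either resolution can be checked mechanically. In the minimal Python sketch below, the (row, column) encoding of the trigraph is an assumption of the sketch, not the text's own notation.

```python
from itertools import product

# Hypothetical encoding of the trigraph as (row, column) pairs, 1-based;
# this encoding is my own illustrative device, not the text's notation.
cells = set(product((1, 2, 3), repeat=2))

claims = {
    1: lambda r, c: r not in (2, 3),   # (1) Not in rows 2, 3
    2: lambda r, c: c not in (1, 2),   # (2) Not in columns 1, 2
    3: lambda r, c: c not in (2, 3),   # (3) Not in columns 2, 3
}

def admits(subset):
    """Cells compatible with every claim in the subset."""
    return {(r, c) for (r, c) in cells if all(claims[i](r, c) for i in subset)}

assert admits({1, 2, 3}) == set()      # the full trio is inconsistent
left = admits({1, 2})                  # {(1, 3)}: top row, rightmost column
right = admits({1, 3})                 # {(1, 1)}: top row, leftmost column

# Information common to both resolutions survives the inconsistency:
surviving = left | right
assert all(r == 1 for (r, c) in surviving)    # X is in the top row
assert all(c != 2 for (r, c) in surviving)    # X is not in the middle column
```

The final two assertions capture exactly the information that is "bound to obtain in either case".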
6. ENHANCING DEFINITENESS THROUGH CONTEXTUALIZATION

The natural reaction whenever one inclines to accept a group of collectively inconsistent claims is to seek to overcome the inconsistency by bringing additional machinery to bear. And this sort of thing is reminiscent of a story we heard long ago—from Aristotle, no less, in his De interpretatione. For he held that while "Socrates is sitting" and "Socrates is standing" are incompatible, they cease to be so if we vary the context of time; and while "It is raining" and "It is not raining" are incompatible, these claims can be rendered perfectly consistent by varying the context of space. So while those initial claims were inconsistent, we now contextualize their endorsement in such a manner as to render them co-tenable.

Analogously, we now propose to tolerate inconsistency, mitigating its damage by a relativization to context—but now not only positional context as per space and time, but also thematic context in relation to topics or problem-settings.

Granted, proceeding in this way does not come cost-free. Acceptance is now no longer a matter of categorical endorsement: it becomes contextualized, geared to a particular cognitive environment. We are, in sum, propelled into a realm where acceptance is not absolute and unconditional but becomes attuned to a variety of contexts of deliberation. Yet still there is much to be gained. For contextualization makes it possible to prioritize one sort of thesis over another—making some give way to others in conflict. And anything that does this puts us into a position to provide for more definite information in cases of propositional inconsistency.
It is clear that the contextual prioritization that is our guide in matters of consistency restoration can be of many different sorts:

(1) Prioritization by compatibility with a certain portrayed presupposition
(2) Prioritization by source
(3) Prioritization by theme or subject
(4) Prioritization by format (e.g., specificity or generality)

By way of illustration, consider the following set of claims:
(1) Not on a diagonal
(2) Not in the middle column
(3) Not in the rightmost column
(4) Not in the middle row
(5) Not in the bottom row

Pictographically these theses may be represented as follows:

[Five trigraphs, one for each of claims (1)–(5), each marking off the excluded compartments.]
Taken together these are all clearly inconsistent. But now suppose we are in a context where we place greater reliance on column-geared information than on row-geared information, so that we effectively write off those last two claims, (4) and (5). Then we shall arrive at an X in the leftmost column of the middle row. On the other hand, if we were to place greater reliance on row-geared information than on column-geared information, then we would arrive at an X in the middle column of the top row.

By contrast, if all the available claims were deemed as equal, we would arrive at the following statistical view of the situation, tallying for each compartment the number of claims it satisfies:

4 4 3
4 2 3
3 3 2
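These tallies can be reproduced mechanically. The scoring procedure below is my own illustrative rendering of the equal-weighting idea: each compartment is scored by the number of claims (1)–(5) that it satisfies.

```python
# Score each compartment of the 3 x 3 trigraph by the number of claims
# (1)-(5) it satisfies, all five being weighted equally (an illustrative
# tallying of the "statistical view", not the text's own procedure).
def on_diagonal(r, c):
    return r == c or r + c == 4        # 1-based main and anti-diagonal

claims = [
    lambda r, c: not on_diagonal(r, c),   # (1) Not on a diagonal
    lambda r, c: c != 2,                  # (2) Not in the middle column
    lambda r, c: c != 3,                  # (3) Not in the rightmost column
    lambda r, c: r != 2,                  # (4) Not in the middle row
    lambda r, c: r != 3,                  # (5) Not in the bottom row
]

tally = [[sum(f(r, c) for f in claims) for c in (1, 2, 3)] for r in (1, 2, 3)]
# tally == [[4, 4, 3],
#           [4, 2, 3],
#           [3, 3, 2]]
```

The top left compartment scores highest (4) and the lower right lowest (2), matching the displayed figures.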
While no definite conclusion is now available, nevertheless the top left compartment would be seen as far more likely than the lower right. In a context in which we had to place a bet on a corner, we would take that of the upper left. But we shall not here pursue this statistical approach any further.

So what has emerged can be summarized in three points: (1) that inconsistent data nevertheless convey information, (2) that a given, fixed body of such inconsistent data conveys different information in different epistemic contexts, and (3) that this end is achieved through providing for different conclusions in these different contexts of deliberative analysis.

7. ZONES OF CONTEXTUALITY

When inconsistency confronts us, contextual variation can come to the rescue. And there are many possibilities here—not space and time alone, but also such orienting factors as may arise from the particular thematic setting. We may define as a contextual restraint—or now simply "context" for short—of an inconsistent set of propositions any consideration that sidelines certain of its correlative conditional possibilities. One example of a context is given by a topic-inherent assumption, supposition, or stipulation. For example, with respect to our trigraphs we may have it that the location of that postulated X is conditioned by circumstances that counterindicate its being located (1) in an outside row, (2) in an outside column, (3) on a diagonal.
The process of consistency reduction leads to three alternatives:

(1) & (2): X in the central compartment
(1) & (3): X in the middle row, in the leftmost or rightmost column
(2) & (3): X in the middle column, in the top or bottom row

To begin with, it transpires that for aught that we are told by our (inconsistent) premisses, the X can be anywhere save in the four corner positions. But if additional contextual factors indicate that our X will not lie in the central row, then only the two alternatives at issue with (2) & (3) are left open, and the ultimate result—though still indefinite—has been substantially clarified.
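Both the everywhere-but-the-corners result and the effect of the added contextual factor can be verified with a short sketch. As before, the (row, column) encoding is an assumption of mine, not the text's notation.

```python
from itertools import combinations, product

cells = set(product((1, 2, 3), repeat=2))   # (row, column), 1-based; my encoding

constraints = {
    1: lambda r, c: r == 2,                       # (1) not in an outside row
    2: lambda r, c: c == 2,                       # (2) not in an outside column
    3: lambda r, c: not (r == c or r + c == 4),   # (3) not on a diagonal
}

def admits(subset):
    return {(r, c) for (r, c) in cells if all(constraints[i](r, c) for i in subset)}

# The full trio is inconsistent; each pair is a maximal consistent subset.
assert admits({1, 2, 3}) == set()
alternatives = {frozenset(p): admits(p) for p in combinations(constraints, 2)}

# For aught the premisses tell us, X may be anywhere except the four corners:
union = set().union(*alternatives.values())
assert union == cells - {(1, 1), (1, 3), (3, 1), (3, 3)}

# The contextual factor "X will not lie in the central row" sidelines the
# alternatives involving (1), leaving only (2) & (3):
contextual = {p: s for p, s in alternatives.items() if all(r != 2 for (r, _) in s)}
assert set().union(*contextual.values()) == {(1, 2), (3, 2)}
```

With the contextual factor in force, only the top-middle and bottom-middle compartments remain open.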
Display 1
CONTEXTUAL DIFFERENTIATION: REGIONS OF ACCEPTANCE

[Three mutually overlapping regions of acceptance, I, II, and III, whose pairwise overlaps constitute the sub-regions 1 (shared by I and II), 2, and 3 (the overlaps involving III).]

NOTE: Incompatibility means that the central region of ubiquitous commonality is empty.
With inconsistent claims we can achieve contextually differentiated regions of compatibility, so that viable alternative ranges of possibility span distinct areas as per Display 1. Thus in different contexts of deliberation
different items will be prioritized. For example, if III is subordinated to I and II, then regions 2 and 3 will be eliminated and matters will be reduced to 1 as the appropriate possibility-range. (And mutatis mutandis in the other cases.) Different zones of contextuality are created by different pathways to issue-prioritization. And one selfsame assertion may be endorsed or dismissed in different contexts of deliberation in the interest of getting the best realizable information relative to the issue that is specifically at hand.

8. A PRAGMATIC TURN

And this line of thought points in a decidedly pragmatic direction. The fact that we can employ different modes of thesis prioritization (e.g., generality preference and specificity preference) in different deliberative settings indicates a pragmatic and purposively oriented line of approach. But here the pragmatism and purpose lie in the cognitive order: what we have here is a version of pragmatism all right, but now with respect to our specifically cognitive praxis. Let us now turn to a specific illustration of this situation.

9. SPECIFICITY PRIORITIZATION

The story is told that Herbert Spencer said of Thomas Buckle (or was it the other way round?—it could just as well have been) that his idea of a tragedy was a beautiful theory destroyed by a recalcitrant fact. A fundamental epistemic principle is at issue here, namely that when the limited particularity of fact and the broad generality of theory come into conflict in the case of otherwise plausible propositions, it is the former that will prevail. Facts, as the proverb has it, are stubborn things: in case of a clash, facts must prevail over theories, observations over speculations, concrete instances over abstract generalities, limited laws over broader theories. With factual issues specificity takes precedence over generality when other things are anything like equal. A far-reaching Principle of Specificity Precedence accordingly comes into view.
In cases of conflict or contradiction in our information, the cognitive dissonance that needs to be removed is to be resolved in favor of the more particular, concrete, definite party to the conflict. The more general a claim, the more cases it includes, and so the more open it is to error: generality is a source of vulnerability, and when clashes arise, particularity enjoys priority. We presume that specifics are in better probative condition than generalities because they are by nature easier to evidentiate, seeing that generalities encompass a multitude of specifics. Contrariwise, seemingly established generalities are easier to disestablish than specifics, because a single counter-instance among many possibilities will disestablish a generality, whereas it takes something definite to disestablish a particularity. Accordingly, it transpires that ordinarily and in "normal" circumstances specificities are on safer ground and thereby enjoy probative precedence in situations of discord and inconsistency. When mere information is being distilled into coherent knowledge, specificity prioritization is the rule.

10. ILLUSTRATIONS OF SPECIFICITY PRECEDENCE

Such a Principle of Specificity Precedence can be illustrated from many different points of view. As already noted, it is a standard feature of scientific practice that when theory and observation clash, it is, in general, observation that prevails.1 The practice of monitoring hypothetical theorizing by means of experimentation is characteristic of the scientific process, and the Principle of Specificity Precedence is fundamental here. Throughout, whenever speculation clashes with the phenomena, a conjectured hypothesis with the data at our disposal, or a theory with observation, it is generally—and almost automatically—the former that is made to give way. Presumption, that is to say, stands on the side of specificity throughout the realm of factual inquiry. This circumstance obtains not only in clashes of observation with theory, but also in clashes of a lower-level (less general or abstract) theory with one of a higher (more general and abstract) level. Here too the comparatively specific rival will prevail in situations of conflict. And the general principle prevails in the historical sciences every bit as much as in the sciences of nature.
A single piece of new textual evidence or a single new archeological discovery can suffice to call a conflicting theory into question. Here too a penchant for specificity preference is very much in operation.

Philosophy affords yet another illustration of specificity preference. The work of Thomas Reid (1710–96) and the philosophers of the Scottish school illustrates this in an especially vivid way. These thinkers reasoned as follows: Suppose that a conflict arises between some speculative contention of philosophical theorizing and certain more particular, down-to-earth bits of everyday common sense. Then of course it is those philosophical contentions that will and must give way.
In this spirit Reid insisted that common sense must hold priority over the more speculative teachings of philosophy. Maintaining that most philosophers themselves have some sense of this, he observes wryly that "it is pleasant to observe the fruitless pains which Bishop Berkeley took to show that his system … did not contradict the sentiments of the vulgar, but only those of the philosophers."2 Reid firmly held that any clash between philosophy and common sense must be resolved in the latter's favor. Should such a clash occur:

The philosopher himself must yield … [because] such [common-sense] principles are older, and of more authority, than philosophy; she rests upon them as her basis, not they upon her. If she could overturn them she would become buried in their ruins, but all the engines of philosophical subtlety are too weak for this purpose.3
In any conflict between philosophy and everyday common-sense beliefs it is the latter that must prevail. The down-to-earth lessons of ordinary experience must always prevail over any conflicting speculations of philosophical theorizing. On this point the Scottish common-sensists were emphatic. When conflicts arise, commonplace experience trumps philosophical speculation. And even outside the orbit of common-sense philosophizing, most metaphilosophical approaches have agreed with this specificity-favoring point of view.

Yet another illustration of specificity preference comes (perhaps surprisingly) from pure mathematics. In deliberating about the relationship between mathematics proper and metamathematical theorizing about mathematical issues, the great German mathematician David Hilbert (1862–1943) also argued for specificity preference. If any conflict should arise between substantive mathematical findings and large-scale metamathematical theory, so he maintained, then it is automatically the latter that must give way by abandonment or modification. Here too we are to favor concrete specificity over abstract generality, seeing that, across a wide range of mathematics, abstract metamathematical theories are comparatively more risky. Accordingly, we have what Arthur Fine calls "Hilbert's Maxim", namely the thesis that:

Metatheoretic arguments [regarding a theory] must satisfy more stringent requirements [of acceptability] than those placed on the arguments used by the theory in question.4
And so the mathematical realm provides yet another illustration of specificity preference. Throughout our inquiry into the reality of things, it appears that our pursuit of knowledge prioritizes specificity. Presumption, that is to say, stands on the side of comparative specificity and definiteness.

11. THE QUESTION OF RATIONALE
Is there a cogent rationale for this? Are there sound reasons of general principle why specificity should be advantaged? An affirmative answer is clearly in order here. The reasoning at issue runs somewhat as follows. Consider a conflict case of the sort that now concerns us. Here, in the presence of various other uncontested "innocent bystanders" (x), we are forced to a choice between a generality (g) and a specificity (s) because a situation of the following generic structure obtains:

(g & x) → ¬s, or equivalently, (s & x) → ¬g

It is clear here that, with the unproblematic context x fixed in place, either s or g must be sacrificed owing to the conflict at issue. But since g, being general, encompasses a whole variety of other special cases—some of which might well also go wrong—we have, in effect, a forced choice occasioned by a clash between a many-case manifold and a fewer-case competitor. And since the extensiveness of the former affords a greater scope for error, the latter is bound to be the safer bet. As a rule, generalities are more vulnerable than specificities, since when other things are anything like equal it is clearly easier for error to gain entry into a larger than into a smaller manifold of claims.

To be sure, it deserves to be noted that what is basically at issue with specificity preference is not a propositional truth-claim but a procedural principle of presumption. What we have is not a factual generalization to the effect that specificities inevitably prevail over generalities, but a precept of epistemic practice on the order of "Believe the testimony of your own eyes" or "Accept the claim for which the available evidence is stronger". It is a matter of procedural principle. And of course one can go wrong here: it is not true that what your eyes tell you is always so, or that the truth always lies on the side of the stronger evidential case in hand. All that we have—and all that is at issue—is that such methodological precepts of
rational procedure indicate a process that will generally lead us aright. Though not infallible, they are good guides to practice. Such a principle of practice reflects a matter of general adequacy rather than fail-proof correctness. And the justification at issue is thus one of functional efficacy—of serving the purposes of the practice at issue effectively. Here, as elsewhere, presumption is less a matter of demonstrating a universal truth than of validating a modus operandi on the basis of its general efficacy.

12. AN INVERSION TO GENERALITY PRECEDENCE: THE CASE OF COUNTERFACTUALS

It is, however, necessary to come to terms with the striking circumstance that there is an important family of cases where the more usual presumption of specificity prioritization is in fact inverted and the reverse process, a generality prioritization, obtains. This occurs when we are dealing not with matters of fact, but with fact-contradicting assumptions and hypotheses.5 By way of illustration consider the counterfactual conditional:

If he had been born in 1999, then Julius Caesar would not have died in 44 BC but would be a mere infant in 2001.

This arises in the context of the following issue-salient beliefs:

(1) Julius Caesar was born in 100 BC.
(2) Julius Caesar is long dead, having died at the age of 56 in 44 BC.
(3) Julius Caesar was not born in 1999 AD.
(4) Anyone born in 1999 AD will only be an infant by 2001.
(5) People cannot die before they are born.

And let us now introduce the supposition of not-(3) via the following:

Assumption: Suppose that not-(3), that is, Julius Caesar was born in 1999 AD.
In the face of this assumption we must, of course, follow its explicit instruction to dismiss (3), and with it (1), which the assumption directly contradicts. Thesis (4) is safe, inherent in the very definition of infancy. But even with these adjustments, inconsistency remains and confronts us with two distinct acceptance/rejection alternatives:

(2), (4) / (1), (3), (5)
(4), (5) / (1), (2), (3)

In effect we are now constrained to a choice between the specific (2) on the one hand and the general (5) on the other. At this point, however, the "natural" resolution afforded by the Principle of Generality Precedence that holds in these purely hypothetical cases will prioritize the more general and instance-encompassing (5) over the case-specific (2), thereby eliminating that first alternative. With not-(3) fixed by hypothesis, the conclusion of the initial counterfactual then at once follows from (4) and (5). In effect, that counterfactual is the product of generality prioritization. The perplexity of an unnatural counterfactual along the lines of "If Julius Caesar had been born in 1999 AD then he would have been born again from the dead" would be averted.

As this example illustrates, in deliberating with respect to fact-contradicting assumptions generality precedence comes into operation. And this betokens a larger lesson. In determining which beliefs should give way in the face of counterfactual assumptions we do and should let informativeness be our guide, so that authentic generality is now in the driver's seat.6 Rational procedure in speculative contexts becomes a matter of keeping our systemic grip on the manifold of relevant information as best we can.

Again, consider another example:

—If this rubber band were made of copper, what then?

This question arises in an epistemic context where the following beliefs are salient:

(1) This band is made of rubber.
(2) This band is not made of copper.
(3) This band does not conduct electricity.
(4) Things made of rubber do not conduct electricity. (5) Things made of copper do conduct electricity. Let it be that we are now instructed to accept the hypothesis: Not-(2): This band is made of copper. And now the following two propositional sets are the hypothesis-compatible maximal consistent subsets of our specified belief set: {(3), (4)}
corresponding to the acceptance/rejection alternative (3), (4)/ (1), (2), (5)
{(4), (5)}
corresponding to the acceptance/rejection alternative (4), (5)/ (1), (2), (3)
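These two hypothesis-compatible maximal consistent subsets can be computed. In the Python sketch below, the clash between the beliefs is hand-coded for this one example (an illustrative stand-in for a genuine consequence relation, not the text's own formalism).

```python
from itertools import combinations

# The five salient beliefs, with (4) and (5) the lawful generalities.
beliefs = {
    1: "This band is made of rubber.",
    2: "This band is not made of copper.",
    3: "This band does not conduct electricity.",
    4: "Things made of rubber do not conduct electricity.",
    5: "Things made of copper do conduct electricity.",
}
generalities = {4, 5}

def compatible(subset):
    """Consistency with the hypothesis not-(2): the band is made of copper.
    The clash list is hand-coded for this example only."""
    subset = set(subset)
    if 1 in subset or 2 in subset:   # both are overturned by the hypothesis
        return False
    if {3, 5} <= subset:             # a copper band that conducts yet does not conduct
        return False
    return True

mcs = [set(s)
       for n in range(len(beliefs), 0, -1)
       for s in combinations(beliefs, n)
       if compatible(s)
       and not any(compatible(set(s) | {j}) for j in beliefs if j not in s)]
# -> [{3, 4}, {4, 5}], exactly the two alternatives displayed above

# Generality precedence: prefer the alternative retaining more generalities.
best = max(mcs, key=lambda s: len(s & generalities))
assert best == {4, 5}   # so the band would conduct electricity
```

Selecting the alternative that retains the generalities (4) and (5) yields the second counterfactual, as the text goes on to argue.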
The first alternative corresponds to the counterfactual

—If this band were made of copper, then copper would not conduct electricity [since this band does not conduct electricity].

And the second alternative corresponds to the counterfactual

—If this band were made of copper, then it would conduct electricity [since copper conducts electricity].

In effect we are driven to a choice between (3) and (5), that is, between a particular feature of this band and a general fact about copper things. And its greater generality qualifies (5) as being systemically more informative, so its prioritization is therefore appropriate. Accordingly, we will retain (4) and (5) along with not-(2), and therefore accept that second counterfactual as appropriate. And this exemplifies the general situation of generality preference in matters of counterfactual reasoning.

13. NATURAL VS. UNNATURAL COUNTERFACTUALS

The distinction between "natural" and "unnatural" counterfactuals is, of course, crucial in the present context. To illustrate this, let us suppose that
we know that all the coins in the till are made of copper. Then we can say without hesitation:

If the coin I have in mind is in the till, then it is made of copper.

But we certainly cannot say counterfactually:

If the coin I have in mind were in the till, then it would be made of copper.

After all, I could perfectly well have a certain silver coin in mind, which would certainly not change its composition by being placed in the till. But just how is the difference between the two cases to be explained?

Let C = {c1, c2, … , cn} be the set of coins in the till, where by hypothesis all of these ci are made of copper. And now consider the assumption:

• Let x be one of the ci (that is, let it be some otherwise unspecified one of those coins presently in the till).

Clearly this assumption, together with our given "All of the ci are made of copper," will entail "x is made of copper," so that the first conditional is validated. But in the second case we merely have the assumption:

• Let x be a coin in the till (though not necessarily one of those presently there).

Now, of course, this hypothesis joined to "All of the coins presently in the till are made of copper" will obviously not yield that conclusion. Accordingly, the second counterfactual is in trouble, since the information available to serve as its enthymematic basis is insufficient to validate the requisite deduction. The two conditionals are different because they involve different assumptions of differing epistemic status, a difference subtly marked by the use of the indicative in the first case and the subjunctive in the second. For in the former we are dealing merely with de facto arrangements, while in the latter with a lawful generalization. And so generality prioritization speaks for the latter alternative. Lawfulness makes all the difference here.
Again, consider the question "What if Booth had not murdered Lincoln?" And let us suppose that the salient beliefs here stand as follows:

(1) Lincoln was murdered in April 1865.
(2) Murder is deliberate killing, so that if Lincoln was murdered, it was by someone deliberately trying to kill him.
(3) Booth murdered Lincoln.
(4) Only Booth was deliberately trying to kill Lincoln in April 1865.

Observe that (1), (2), (4) ⊢ (3). Now suppose that not-(3). Then we must abandon one of the trio (1), (2), (4). Here (2) is a definitional truth. And (4) is a general fact, while (1) is but a matter of specific fact. So now the rule of precedence for matters of generality/informativeness marks (1) as the weakest link, and we arrive at:

—If Booth had not murdered Lincoln, Lincoln would not have been murdered in April 1865.

One further complication must be noted. We have the problem of explaining how it is that the subjunctively articulated counterfactual

—If Oswald had not shot Kennedy, then nobody would have.

seems perfectly acceptable, while the corresponding indicative conditional

—If Oswald did not shoot Kennedy, then no one did.

seems deeply problematic.7 And within the presently contemplated frame of reference the answer is straightforward. The background of accepted belief here is as follows:

(1) Kennedy was shot.
(2) Oswald shot Kennedy.
(3) Oswald acted alone: no one apart from Oswald was trying to shoot Kennedy.

Now suppose that (2) is replaced by its negation not-(2), i.e., that Oswald had not shot Kennedy. For the sake of consistency we are then required to abandon either (1) or (3). The informativeness-geared policy of presumption via generality precedence in matters of mere hypothesis now rules in favor of retaining (3), thus dropping (1) and arriving at the former of that pair of conditionals. The alternative but inappropriate step of dismissing (1) would, by contrast, issue in that second, decidedly implausible conditional. To be sure, this conditional could in theory be recast in a more complex form that would rescue it, as it were:

—If Oswald did not shoot Kennedy then no one did; so since Kennedy was shot, Oswald did it.

In this revised version the conditional in effect constitutes a reductio ad absurdum of the idea that Oswald did not shoot Kennedy. But it is now clear that these conditionals address very different questions, namely the (1)-rejecting

• What if Oswald had not shot Kennedy?

and the (1)-retaining

• Who shot Kennedy?

respectively. The retention guidance of the different question-contexts here serves to settle the issue.

14. THE PRAGMATIC IMPETUS

What is thus crucial with counterfactuals is the determination of precedence and priority in a consistency-restoring right-of-way allocation in cases of conflict. And here we proceed on the basis of the rule that:

In counterfactual reasoning, the right-of-way priority among the issue-salient beliefs is ordinarily determined in terms of their generality of import by way of informativeness in the systemic context at hand.
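The rule just stated can be run mechanically on the Kennedy example. As with the earlier sketch, the clash between the beliefs is hand-coded for this one case, an illustration rather than the text's own formalism.

```python
from itertools import combinations

# The Oswald case under the hypothesis not-(2), with the clash hand-coded.
beliefs = {
    1: "Kennedy was shot.",
    2: "Oswald shot Kennedy.",
    3: "No one apart from Oswald was trying to shoot Kennedy.",
}
generalities = {3}   # (3) is the general, informative thesis; (1) is case-specific

def compatible(subset):
    subset = set(subset)
    if 2 in subset:            # (2) is overturned by the hypothesis
        return False
    if {1, 3} <= subset:       # shot, yet by no one at all: the residual clash
        return False
    return True

mcs = [set(s)
       for n in range(len(beliefs), 0, -1)
       for s in combinations(beliefs, n)
       if compatible(s)
       and not any(compatible(set(s) | {j}) for j in beliefs if j not in s)]
# -> [{1}, {3}]: retain (1) or retain (3)

retained = max(mcs, key=lambda s: len(s & generalities))
assert retained == {3}   # drop (1): "If Oswald had not shot Kennedy, nobody would have."
```

Prioritizing the generality (3) reproduces the acceptable subjunctive conditional; retaining the specific (1) instead would yield the implausible one.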
The situation can be summarized in the unifying slogan that in hypothetical situations the standard modus operandi of presumption prioritizes beliefs on the basis of systematicity preference. But this matter of right of way is now determined with reference to informativeness within the wider context of our knowledge. When we play fast and loose with the world's facts we need the security of keeping its fundamentals in place. In particular, it is standard policy that in counterfactual contexts, propositions viewed as comparatively more informative in the systemic context at hand will take priority over those which are less so. While revisions by way of curtailment and abandonment in our family of relevant beliefs are unavoidable in the face of belief-countervailing hypotheses, we want to give up as little as possible. And here the ruling principle is: "Break the chain of inconsistency at its weakest link in point of systemic informativeness."

In counterfactual contexts, generalities accordingly take precedence over specificities. Once we enter the realm of fact-contravening hypotheses, those general theses and themes that we subordinate to specifics in factual matters now become our life preservers. We cling to them for dear life, as it were, and do all that is necessary to keep them in place. "Salvage as much information about the actual condition of things as you possibly can" is now our watchword. Accordingly, specifics and particularities will here yield way to generalizations and abstractions. And so in determining which beliefs are to give way in the face of counterfactual assumptions we do and should let informativeness be our guide. Keeping our systemic grip on the manifold of relevant information is the crux, and it speaks clearly for generality precedence here. At bottom, then, considerations of context and purpose become determinative. The lesson of these deliberations is clear.
In matters of conflict within the factual domain, presumption lies on the side of specificity, while in the speculatively counterfactual domain it favors lawful generality. In the larger scheme of things, two diametrically opposed principles—specificity prioritization and generality prioritization—are in operation in our overall deliberations. But they obtain in very different sectors of the cognitive terrain—namely factual inquiry and counterfactual speculation. With factual inquiry we aim at the security of our cognitive commitments and accordingly opt for specificity as the more reliable guide. By contrast, with counterfactual reasonings we look for the results of disbelieved hypotheses and strive to retain the maximum of information that survives the turmoil produced in our cognitive commitments by the impact of discordant assumptions. And here, as elsewhere, it is the difference in the aims and purposes
of the enterprise at hand that accounts for the difference of the procedural process that is appropriate. And so what we have here is a vivid illustration of how differences in functional context will impact differently on our reasoning in situations of inconsistency reflecting differences in the pragmatic purposiveness of the enterprise at hand. HISTORICAL POSTSCRIPT The thesis that in specifically counterfactual situations a Principle of Generality Preference obtains was urged by the present author originally in a 1961 paper 8 and developed more fully in his book, Hypothetical Reasoning (Amsterdam: North Holland, 1964). Subsequently, a series of empirical investigations by R. Revlis and his colleagues confirmed experimentally that in actual practice people do indeed proceed in this way.9 However, among philosophers too the idea of prioritizing lawful generality has become seen accepted as a matter of empirical practice grounded in psychological inclinations. 10 Such psychologism is decidedly different from my own position, which sees the prioritization of lawful generality as a matter of functional efficacy in the light of the inherent nature of the context of deliberation. At bottom the matter is not a personally psychological matter of preference or reluctance to change, but one of information-processing prioritization rooted in the function-oriented ground rules of communicative practice. That people do in fact think in this functionally suitable manner I regard as the result of rather than the ground for the validity of the principle. It does no more than to reflect the happy circumstance that in this particular area people generally proceed in a rationally appropriate way.11 NOTES 1
To be sure, as Pierre Duhem insisted, a theory-observation clash will in general involve a plurality of participating theories, so that it will not be clear which particular theory will have to be jettisoned. See his The Aim and Structure of Physical Theory, tr. by P. Wiener (Princeton, NJ: Princeton University Press, 1982).
2
Essays on the Intellectual Powers of Man, (Edinburgh: John Bell, 1785), VI iv, p. 570.
3
An Inquiry into the Human Mind (1764), I, v (ed. Wm. Hamilton, p. 102b; ed. Derek R. Brookes, p. 21.)
4
See Arthur Fine, “The Natural Ontological Attitude,” in Jarrett Leplin (ed.), Scientific Realism (Berkeley and Los Angeles: University of California Press, 1984), pp. 83–107 (see esp. p. 85). The maxim is articulated in line with David Hilbert’s endeavor to demonstrate the consistency of set theory on a more concrete, non-set-theoretical basis.
5
On counterfactual conditionals and their problems see N. Rescher, Hypothetical Reasoning (Amsterdam: North Holland, 1964); David Lewis, Counterfactuals (Oxford: Blackwell, 1973); Ernest Sosa (ed.), Causation and Conditionals (London: Oxford University Press, 1975); Anthony Appiah, Assertion and Conditionals (Cambridge: Cambridge University Press, 1985); Frank Jackson (ed.), Conditionals (Oxford: Clarendon Press, 1991); N. Rescher, Conditionals (Cambridge, MA: MIT Press, 2006).
6
In this context it is, however, important that the generalization at issue should be seen as somehow lawful and as not a merely fortuitous and accidental aggregation of special cases, so that the factor of generality is present in name only.
7
This issue is addressed in E. W. Adams, “Subjunctive and Indicative Conditionals,” Foundations of Language, vol. 6 (1970), pp. 89–94.
8
See the author’s “Belief Contravening Suppositions,” The Philosophical Review, vol. 70 (1961), pp. 176–95.
9
R. Revlis and J. R. Hayes, “The Primacy of Generalities in Hypothetical Reasoning,” Cognitive Psychology, vol. 3 (1972), pp. 268–90; R. Revlis, S. G. Lipton, and J. R. Hayes, “The Importance of Universal Quantifiers in a Hypothetical Reasoning Task,” Journal of Verbal Learning and Verbal Behavior, vol. 10 (1971), pp. 86–91. See also M. D. Braine and D. P. O’Brien, “A Theory of If: A Lexical Entry, Reasoning Program and Pragmatic Principles,” Psychological Review, vol. 98 (1991), pp. 182–203. And see moreover N. J. Roese and J. M. Olson, What Might Have Been: The Social Psychology of Counterfactual Thinking (Mahwah, NJ: Lawrence Erlbaum Associates, 1995), p. 4.
10
See David Lewis, Counterfactuals (Oxford: Blackwell, 1973); David Lewis, (1977); Roy A. Sorensen, Thought Experiments (Oxford: Oxford University Press, 1992); and Peter Unger, “Toward a Psychology of Common Sense,” American Philosophical Quarterly, vol. 19 (1982), pp. 117–129. Unger sees the matter as one of “the psychology of thought experimentation and insight into a psychological resistance to abandonment.”
11
This chapter is a revised version of a paper originally published under the same title in Gereon Wolters and Martin Carrier (eds.), Homo Sapiens und Homo Faber: Festschrift Mittelstrass (Berlin: De Gruyter, 2004), pp. 201–12.
Chapter 7
ON REALISM AND THE PROBLEM OF TRANSPARENT FACTS

1. COGNITIVE TRANSPARENCY

A cognitive commitment is epistemically impregnable when its holder’s grounds for it are such that no possible course of rational argumentation would dislodge a rational exponent from holding it. The doctrinal stance of an epistemic fallibilism is just such a position. For let it be that one accepts:

(F) Some of my beliefs are false.

Any course of reasoning designed to refute (F) will—through this very circumstance—afford an argument for its negation, viz., that all of my beliefs are true. And since (F) itself falls within the scope at issue, this purported counterargument to (F) would actually serve to sustain rather than undermine it. By contrast, consider the radical skeptic who accepts a hyperbolic version of (F) to the effect that:

(F*) All of my beliefs are false.

This position is epistemically unsustainable. For in taking this hyperskeptical line, one boxes oneself into an impossible corner. Since (F*) itself falls within its own scope as one of the beliefs at issue, the concession of its falsity becomes self-refuting. By contrast to (F) and (F*) consider:

(G)
Some of my beliefs are true.
(G*) None of my beliefs are mistaken. A rational believer who accepts anything at all cannot but accept (G). On the other hand no sensible person would confidently endorse (G*)—even though no specific counter-examples could possibly be offered by the indi-
vidual who is himself at issue.1 (G) is epistemically impregnable, but (G*) represents a Quixotic hope. These considerations point to the idea of a transparent fact. This would be a fact that is self-certifying: once we actually believe in it, no further validation, beyond that acceptance itself, is needed to assure its truth. In the traditional philosophical terminology such a fact is evident: its acknowledgement suffices to certify its truth and it is in this sense self-validating. Thus we here have:

Ap → p, where A represents acceptance

When actually accepted, such a contention cannot be wrong. Once in place, it is immune to counterconsiderations. We come here to the philosophical tradition of “the evident,” the self-certifyingly obvious experiential facts envisioned by Thomas Reid and revivified in 20th century philosophy by Roderick Chisholm.2 This, however, is not the place to pursue the historical side of the subject. Instead, the question on which we will focus here is this: Are there actually any such self-certifyingly transparent facts? “I accept something”—believing something to be true—affords a strikingly cogent illustration, since in accepting it one assures its truth. But this is certainly not the end of the story. For many other similarly transparent mental and cognitive conditions seem to afford further examples.

• I believe, doubt, question whether, am convinced that, am under the impression that ... [there is a cat on the mat]

• I take myself to be looking at ... [a cat on the mat]

When this sort of fact obtains, there may well, of course, be no cat or mat at all. I could readily be wrong about that. But it is hard to conceive that I could be wrong with regard to thinking (or doubting) it to be so. The situation as regards actual knowledge is decidedly different from that of mere belief (or acceptance). For the very concept at issue is such that whenever one actually knows something it must be so. The principle:

Kp → p
is clearly appropriate.3 But when one merely takes oneself to know something, this need not be so. We certainly do not have:

AKp → p

For we do not have it that p must be true when someone merely believes (accepts) it to be so. The thesis certainly does not obtain in general. Instead, what we do have is merely:

AKp → Ap

And p does not follow here save in the special case that Ap → p, that is, in the presently contemplated situation when p is itself something self-certifyingly transparent. It is also of interest to consider the stronger version of transparency that obtains when we have not merely Ap → p but even:

Ap → Kp, where K represents not merely accepting but actually knowing something to be true,

with the result that there is a warranted inferential transit from mere belief not just to facticity but to actual knowledge thereof. Statements about one’s own strong emotions provide good candidates here: facts about one’s own powerful emotions (loving, hating, fearing, etc.) are difficult to overlook. Do we ever have propositions that are epistemically self-imposing, as per:

p → Ap

Are any facts in and of themselves so strikingly compelling and obvious, so cognitively self-energizing, that their merely being so carries the acceptance of an intelligent being in their wake? Perhaps strong passions and emotions once more provide a plausible example here. One can hardly be
furious at someone or frightened by something without being aware of it. In such cases the fact and the awareness of it appear to be coextensive. Note, however, that throughout these examples, we are clearly dealing with one’s mental state and thus proceeding at the level of subjectivity. All of those aforementioned self-certifying facts relate to the agent’s psychic state or condition. Accordingly one can—and should—ask: is this condition of self-reflexive subjectivity necessary for cognitive transparency? Perhaps the prime instance of the evident as a rational transit from subjectivity to objectivity is afforded by René Descartes’ cogito ergo sum. Here even appearance assures reality: when I merely think that I am—that I exist as part of the actual furnishings of the world—I cannot really be mistaken about it. (Let alone about the even vaguer contention that something exists which, after all, is bound to hold even when there is a misimpression, seeing that even a misimpression is still something.) But this sort of thing does not really help much. For as Descartes himself stressed, with the Cartesian cogito the thing that exists when I think is of course the mind (and not a material object of some sort, such as an embodied creature). And the item that exists when I am under some impression or other is just that, an impression, and not something mind-independent. Apparently, those evident and “incorrigible” facts are (1) egocentrically self-oriented (i.e., subjective) and (2) thought/mind related as per: “I take myself to be ... [looking at a tree, suffering a headache, etc.].” For what we have here is mind-oriented subjectivity. “I’m under the impression that I am looking at a cat” is something very different from “I am looking at a cat” (i.e., there is a cat over there and I am looking at it). 2. 
THE OPAQUENESS OF THE OBJECTIVE

On all available indications, statements of objective, mentality-transcending fact simply cannot be cognitively evident, because in principle something can always go awry here between thought (appearance) and fact (reality), between a person’s impression and the world’s actualities. It thus emerges that the quest for self-certifying evidentness of something mind-independent is a wild goose chase. For those mind-independent objective facts are always cognitively opaque and never transparent—never self-certifying. For with objectivity there is always cognitive mediation with its prospect of a slip between the cup of thought and the lip of fact. The idea that our experience renders evident the mind-independent reality of things collides with the fact that experience itself is inevitably mind-
involved—there just is no experience available (to us) that fails to belong to some mind or other—i.e., fails to be somebody’s mind-operated experience. And there is no way here to avert the gap between the mind’s thoughts and the world’s facts. There is no great difficulty about transparent fact as long as we stay in the subjectively mind-correlative domain of thought and its objects. But when we move over into the realm of objective fact about mind-independent issues, the prospect of self-certifying evidentness leaves the scene. The role of experience is crucial here. But the term has two senses: the one phenomenally occurrent and the other objectively systemic. The former is episodic and personal (Erlebnis: “I had a strange experience yesterday evening”), and the latter perduring and impersonal (Erfahrung: “Experience teaches that _ _ _.”). And with episodic experience those self-certifying subjective facts can indeed be self-sustaining, but they are also self-relatingly subjective. By contrast, objectivity can indeed root in systemic experience. But here we always have to do with lessons whose establishment requires subjectivity-transcending discursive evidentiation. And no belief about objective matters of impersonal fact can possibly be self-certifyingly evident. All phenomenal experience is inherently subjective, and we cannot extract from such experience an objective factuality that we do not tacitly smuggle into it. Mind-external facticity cannot be experientially given; if we are to secure it, we must cast it in the role of a taken. To be sure, philosophical realists do sometimes maintain that the existence of a domain of mind-independent reality is itself an immediate given of our perceptual experience. But if the previously articulated lines of thought hold good, this sort of position is untenable. What indeed is or could be self-certifyingly evident is our belief in or conviction of the existence of a mind-independent reality.
But that this belief is warranted—let alone correct—is something that immediate experience could not possibly certify. In the end, the problem with those so-called immediately self-evident facts is that there is bound to be a gap between what is seen as (regarded as, taken to be) self-evident and what is actually mind-independently and objectively self-revelating and self-substantiating. The all too pervasive gap between what SEEMS and what IS enters also upon the scene with respect to self-evidentiation. On the other hand, with postulation—that is, taking—we ourselves are in charge. And here we can resort to the ex post facto validation of seeing with the wisdom of eventual hindsight that this
postulation was in fact well advised. So we can have our substantiation both ways: up front and after the fact. Where objective reality is concerned there is always cognitive opacity: self-evidentiation is unrealizable and substantiation must be secured by additional information. But initially, before any actual evidence is secured, there can be supposition, assumption, presumption, and many other forms of taking. However, at this initial, pre-evidential stage the given will do no work for us in the realm of objective fact, where—ardent desire to the contrary—the transparency of self-evidence is not to be had. Being evident is something confined to the realm of subjectivity.

NOTES

1
“I believe that p and this belief is false” is in effect self-contradictory, since in claiming the belief as false I effectively disavow it.
2
On Reid and his work see the references given in the author’s Common Sense (Milwaukee: Marquette University Press, 2005). On Chisholm see Keith Lehrer, “The Quest for the Evident” in Lewis E. Hahn (ed.), The Philosophy of Roderick M. Chisholm (Chicago and La Salle, Open Court, 1997), pp. 388–401.
3
It makes no sense to say “X knows that p but not-p may well be the case.” On this issue see the author’s Epistemic Logic (Pittsburgh: University of Pittsburgh Press, 2004).
Chapter 8
ON FALLACIES OF AGGREGATION

1. INTRODUCTION

A typical instance of compositive arguing is to conclude that an entire mechanism is made of copper because all its parts are, or that since all of its component bricks are red, so is the entire wall. The reasoning at issue has the format of arguing that when every part of a whole has a certain feature, then the whole in question will also have that feature, as per:

(∀x)(x PT X → Fx) → FX, where PT stands for the relation “is a part of”

This looks plausible but has its problems. The traditional Fallacy of Composition involves just such reasoning from parts to wholes by maintaining that what is true of the parts will be true of the whole—a line of argument that actually fails in many cases. For it clearly does not follow that because all of its constituent cells are small, the entire organism will also be small, or that the entire wall is inexpensive since the individual bricks are. Compositive reasoning along the indicated lines will only work when the salient feature of those components is additively stable in that the property at issue remains in place with further additions, leaving the status quo intact. Adding more copper parts does not change the stuff-composition of the complex, nor does adding more red items change the color make-up of something all of whose parts are red. By contrast, however, adding more small items does change the size of a thing, and adding further components to something will change the cost. The fallacy at issue thus results when one subscribes to a mistaken presupposition—namely that the feature at issue has additive stability, something that is not in general the case. However, this traditional Fallacy of Composition is emblematic of a more general phenomenon. For there are many other kindred pitfalls in arguing from parts to wholes and from individuals to complexes. And it is interesting and instructive to examine this larger phenomenon in its diversified variety.
2. EXPLANATORY AGGREGATION AND THE HUME-EDWARDS THESIS

Some philosophical theorists advocate the doctrine of what has come to be called the Hume-Edwards Thesis, to the effect that: An explanation (e) that accounts for the existence of every member of a set (or every component of a whole) ipso facto thereby explains the existence of that complex:

(∀x)(x ∈ X → e EXP E!x) → e EXP E!X

Here EXP abbreviates “explains” and E! symbolizes “exists”. The line of reasoning at work here has exactly the format of the Fallacy of Composition with respect to sets:

(∀x)(x ∈ X → Fx) → FX

with the salient property F consisting in the e-explicability of existence:

Fx = e EXP E!x

Notwithstanding its seeming plausibility, this otherwise tempting tactic comes to grief with holistic issues. For in explaining the existence of the parts we do not really as yet explain the existence of the whole. After all, to explain the existence of the individual bricks is not automatically to achieve an explanation of the wall’s existence, seeing that this would call for accounting not just for those bricks individually but also for their collectively coordinated co-presence in the structure at issue. Only by addressing the overall amalgamation of those bricks can we put onto the agenda the wall that they collectively constitute. And in the same way, the existence of the camels does not account for the existence of the caravan into which they are formed. Consider the following two claims:

• If the existence of every member of a team is explained, the existence of that team is thereby explained.

• If the existence of each member of a criminal gang is explained, the existence of that criminal gang is thereby explained.
Both of these claims seem clearly false as they stand. But now contrast these two theses with the following cognate revisions:

• If the existence of every member of a team as a member of that particular team is explained, then the existence of that team is thereby explained.

• If the existence of every member of a criminal gang as a member of that particular criminal gang is explained, then the existence of that criminal gang is thereby explained.

Both of these theses are indeed true—but of course they only achieve this status subject to that added qualification. For the requisite explanation must account not only for the parts but also for their coordinated co-presence as parts of the particular whole at issue. Only by explicitly inserting the issue of functional integration into that distributive proliferation at hand would the Hume-Edwards thesis be made tenable. But it was exactly that collectivization which the theory was designed to resist. In ignoring this need for the explanation of coordinative co-presence, the Hume-Edwards doctrine of distributive explanation is unable to bear the reductive burden that its advocates wish to place upon it. In its present explanatory context, the Hume-Edwards approach is based on the decidedly erroneous idea that distributively individualized explanation accomplishes the job of collectively integrated explanation: that the former entails the latter. In the end, a logical fallacy is at issue here, an illicit conflation of:

Distributive e-explanation: (∀x)(x ∈ S → e EXP Fx)

with

Collective e-explanation: e EXP (∀x)(x ∈ S → Fx)

Clearly the first of these (distributivity) by no means entails the second (collectivity). The lesson of these considerations is that the Hume-Edwards thesis collapses because a synoptically holistic explanation cannot be aggregated from bits and pieces but must proceed at a duly collectivized level.
What is needed here is a unified, integral theory able to achieve the explanatory
task on a holistic rather than distributive basis. Distributive reductionism just does not meet the needs of the situation because there will be various large-scale issues that are irreducible and holistic through resisting dissolution into components.

3. ANY AND EVERY: THE MUSICAL CHAIRS FALLACY

One of the most striking versions of the Fallacy of Composition arises in connection with what might be called any/every conflation. For the move from ANY to EVERY is inherently tricky. The common phrase “each and every” is emphatically not redundant: one can have the former without the latter. Anyone will be better off if they take the shortcut road in going from point A to point B. But it is not the case that everyone will be better off if they all do so. Quite on the contrary, they would almost certainly gridlock in a great traffic jam. Again, anyone in the village can take the ferry over to town, but if everyone tries to do so there will not be room for all. And while anyone can walk across the lawn without causing it harm, yet if everyone did so they would kill the grass. The improper move from ANY to EVERY might be termed the Fallacy of Illicit Totalization. In some cases, to be sure, the transit from any to every is quite unproblematic. If any of the children in the class might catch cold, then all—every one of them—might. If any of the Type Z aircraft exhibit a design flaw, then all of them will presumably do so. We cannot generally reason from some to all, but can often reason unhesitatingly from some-actually to all-possibly, as per: if any one of those apples is contaminated then all of them might be. But this does not hold across the board. Let us consider why.
As noted above, the Fallacy of Composition involves the format of reasoning from the F-ness of the constituent parts to that of the whole:

(∀x)(x ∈ X → Fx) → FX

And this actualistic mode of reasoning has the possibilistic cognate:

(∀x)(x ∈ X → ◇Fx) → ◇(∀x)(x ∈ X → Fx)

This fallacious mode of reasoning from possibly-any to possibly-all we shall call the Musical Chairs Fallacy, because its typical instance is that of mis-reasoning from the fact that any player can find a seat at Musical Chairs to the erroneous conclusion that every player can do so. This sort of move
from ANY to EVERY as per “Any ticket holder can win” to “Every ticket holder can win” fails because additive stability is just not there: the winning ticket holder—whoever he may be, and he may be any one of them—precludes the rest. Once somebody wins, the others can no longer do so. And the situation is the same in lotteries and elections. In a lottery any player can win, but all of them cannot; in an election any candidate might win, but every one of them cannot. Even the move from SOME to possibly-all does not work in general, but only in cases where the success of one does not impede the prospects of another. The move from ANY to ALL at issue in the Musical Chairs Fallacy is predicated on a potentially inappropriate step. The problem here is that the premiss purports the possibility of seating distributively, while the conclusion purports it collectively. And the former is practicable, while the latter is not, being enmeshed in a version of the Fallacy of Composition. A general principle is at issue, namely that with respect to any propositional factor F two sorts of claims arise:

(3) ◇(∀x)Fx (collective reading)

and

(4) (∀x)◇Fx (distributive reading)

And these claims make decidedly different assertions. While (3) certainly entails (4), the reverse is not the case. In principle, therefore, we can have (4) without (3). As the game of Musical Chairs so clearly illustrates, the thesis

(∀x)◇Fx & ~◇(∀x)Fx

is logically fully consistent. With respect to the game of Musical Chairs it would clearly be a fallacy to think that because any player can be seated every player can be seated. And a striking version of this Musical Chairs Fallacy occurs in philosophical discussions of scepticism, for skeptics often reason: “It is possible that in every given case we might be mistaken, therefore it is possible that we might always be mistaken.” This reasoning conforms to the following quantifier inversion:
(∀p)◇Mp → ◇(∀p)Mp, where Mp comes to “being mistaken about p”

But as we have seen, such reasoning from a distributive to a collective reading is often simply inappropriate. And so the skeptics move on to more powerful arguments yet, introducing fantastical hypotheses such as that of the Evil Demon of Descartes or his modern descendant, the wicked brain-manipulating mad scientist. They reason: It is (theoretically) possible that we might always be mistaken, therefore there is a real possibility that we are mistaken in this case.
Now to be sure, the inference ◇(∀x)Mx → ◇Mc is indeed perfectly valid. However, the conclusion the skeptic claims to achieve is not a matter of merely logical possibility (◇) but rather one of real possibility. And this is certainly not something the skeptic is entitled to infer. Conflating logical with real possibility is yet another invitation to compositive fallacy.

4. PREMATURE SPECIFICITY

When something is known at the level of generality it is tempting to make a premature leap into specificity. Franklin Delano Roosevelt’s administration had to take a good deal of flack because while the USA knew quite well that the Japanese were on the verge of a war-initiating attack, nevertheless it had no knowledge of exactly where the blow would fall. The Agatha Christie detective hero knows full well that one of the suspects did the evil deed, but the problem remains as to just which one. In cognitive contexts we often have

Kx(∃y)Fy

but yet are quite at sea in regard to

(∃y)KxFy
The placement of that quantifier makes all the difference here. For even when the former condition obtains there need be no specific y of which x knows that Fy. Here the step from generality to specificity is the problem, and to make free and easy with it commits a fallacy.1 For this proceeds on the mistaken idea that there is an easy move from totality to particularity: that when we know that all A’s are B’s we thereby know of every A that it is a B—a circumstance that will fail to obtain when we do not know specifically which items indeed are A’s.

5. ILLICIT AMALGAMATION

Philosophers are particularly susceptible to the Fallacy of Illicit Amalgamation, which consists in treating as one aggregative unit something that has various distinctive types, sorts, and forms that require a differential treatment on a case-by-case basis. They ask about the nature of some generality such as knowledge or justice or truth—all of which in fact disaggregate into distinctively different kinds that need to be understood on very different principles. Here J. S. Mill’s Logic gave the example that it does not follow from the fact that every person has a mother that there is some mother that every person has. Various devotees of the Cosmological Argument have also long succumbed to this fallacy, holding that because anything and everything in the physical universe has a cause there must be a (single) cause from which everything in the universe results. Again, take property and ownership as an example. John Locke and a long list of his successors asked: What is the rationale of property ownership and the basis of rights to private property? The problem here is that there is virtually nothing to be said at this level of generality. For every valid form of private property ownership there is a validating rationale alright, but there is not one single validity relation for all modes of property ownership in toto. (Here too, quantifier placement is crucial.)
The issue exfoliates not only in line with the different kinds of property at issue (real, intellectual, personal, etc.) but also with the different modes by which it may be acquired, as per: earning with the sweat of one’s brow; speculation; gift; inheritance; winning a bet; and so on. But now even if every particular type of property ownership has some appropriate mode of justification, this would not mean that there is a single mode of justification that is appropriate for every mode of ownership. The presupposition of typicality and uniformity that is crucial for such reasoning may readily fail.
6. NONCONJUNCTIVITY AND IMPROPER CONJUNCTION

A propositional operator or qualifier Z is conjunctive whenever we have

(Zp & Zq) → Z(p & q)

The Fallacy of Improper Conjunction occurs whenever such conjunctivity is unwarrantedly and inappropriately presumed. To be sure, this move is often warranted. Conjoinable features include:

• is true
• is false
• is provable
• is refutable (i.e., where Zp = ⊢ ~p)
• is P-implicative (i.e., where Zp = p → P)
• is improbable (doubtful, questionable)
• is interesting

However, conjunctivity often fails. In particular, non-conjunctive propositional features include:

• is possible
• is probable
• is known

And so the Fallacy of Improper Conjunction consists in treating nonconjunctive operators as conjunctive. It is a form of the Fallacy of Composition in claiming for conjunctive propositions that what is true of the parts must be true of the conjunctive whole that they collectively constitute. It is temptingly convenient to treat highly probable propositions as true and highly improbable ones as false. But this seemingly sensible and certainly useful proceeding involves a fallacy by running afoul of the failure of

[Pr(p) & Pr(q)] → Pr(p & q)

to hold for such non-conjunctive propositional features as probability. The difficulty here is that a collection of individually probable situations may well fail to issue in something collectively probable. In this instance, as elsewhere, the step from disaggregated distributivity to an aggregated collectivity may lead us astray. This potential failure is instructively illustrated by the so-called Lottery Paradox of inductive logic. This paradox is the immediate result of a decision-policy for acceptance that is based upon a probabilistic threshold value. Thus let us suppose the threshold level to be fixed at probability 0.80, and consider the following series of statements:

This (fair and normal) die will not come up n when tossed (n = 1, 2, ..., 6).

According to the specified standard, each of these six statements must be accepted as true. Yet their conjunction results in a patent absurdity.2 (The fact that the threshold was set as low as 0.80 instead of 0.90 or 0.9999 is wholly immaterial. To recreate the same problem with respect to a higher threshold one need simply assume a lottery wheel having enough (equal) divisions to exhaust the spectrum of possibilities with individual alternatives of sufficiently small probability.)3 The Recording Angel does not hand us the Truth on a silver platter. We need to seek out evidence in substantiation. But in general even powerfully favorable evidential pro-indications do not guarantee factuality. “Accept p in the presence of powerfully favorable evidence” is not an unrestrictedly usable principle. For as long as the evidence is not conclusive but “merely” powerful, this can lead us into error, and even statements which are by themselves powerfully evidentiated can prove to be collectively inconsistent. Often it proves impossible to accommodate all contestants at once—just as in Musical Chairs.
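The arithmetic behind the die example can be checked directly. The following sketch (an illustration of my own, not part of the text) uses exact fractions to verify that each of the six statements clears the stipulated 0.80 threshold while their conjunction has probability zero.

```python
from fractions import Fraction

THRESHOLD = Fraction(8, 10)   # acceptance threshold stipulated above
p_not_n = Fraction(5, 6)      # Pr("die will not come up n"), same for each n

# Each statement individually clears the threshold...
assert all(p_not_n >= THRESHOLD for n in range(1, 7))

# ...but their conjunction asserts that the die comes up with no face
# at all, an event of probability zero.
p_conjunction = 1 - sum(Fraction(1, 6) for n in range(1, 7))

print(p_not_n, p_conjunction)  # prints: 5/6 0
```

Each statement is individually acceptable by the threshold policy, yet the policy licenses a conjunction that is certainly false.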
7. THE PREFACE PARADOX AND MORE ON COGNITIVE NONCONJUNCTIVITY

A closely related source of nonconjunctivity arises with evidential warrant in general. The so-called Preface Paradox formulated by D. C. Makinson affords a vivid view of this phenomenon: Consider the writer who, in the Preface to his book, concedes the occurrence of errors among his statements.

Suppose that in the course of his book a writer makes a great many assertions, which we shall call S1, ..., Sn. Given each one of these, he believes that it is true. ... However, to say that not everything I assert in this book is true, is to say that at least one statement in this book is false. That is to say that at least one of S1, ..., Sn is false, where S1, ..., Sn are the statements in the book; that (S1 & ... & Sn) is false; that ~(S1 & ... & Sn) is true. The author who writes and believes each of S1, ..., Sn, and yet in a preface asserts and believes ~(S1 & ... & Sn) is, it appears, behaving very rationally. Yet clearly he is holding logically incompatible beliefs: he believes each of S1, ..., Sn and also ~(S1 & ... & Sn), which form an inconsistent set. The man is being rational though inconsistent.4
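The author's predicament admits of a simple probabilistic gloss. The figures below are a hypothetical illustration of my own (Makinson's argument involves no particular numbers): even 99% confidence in each of a few hundred independent assertions makes "at least one of them is false" the likelier bet.

```python
# Hypothetical figures: 200 independent assertions, each believed
# with probability 0.99 of being true.
n_statements = 200
p_each = 0.99

p_all_true = p_each ** n_statements   # chance that every assertion is true
p_some_false = 1 - p_all_true         # chance the preface concession is right

print(round(p_all_true, 2), round(p_some_false, 2))  # prints: 0.13 0.87
```

On these (stipulated) numbers the preface concession is not merely permissible but better supported than its denial, even though each individual assertion remains highly credible.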
The paradox roots in the mistaken idea that a collection of propositions each of which is evidentially warranted by supportive considerations needs to be collectively warranted. The underlying situation, however, is more serious yet. Note that with Kxp for “p is actually known by x” we shall certainly not have:

((∃x)Kxp & (∃y)Kyq) → (∃z)Kz(p & q)

For people may very well fail to put those pieces of knowledge together. (Intelligence failures such as those of Pearl Harbor or 9/11 clearly illustrate this fact.) Moreover, the same sort of thing may well happen even in the case of a single individual knower, so that we do not in general even have:

(Kxp & Kxq) → Kx(p & q)

Even a single individual may well fail to recognize collectively the implications of the bits and pieces of knowledge that are distributively at his disposal. So here once again we have a cousin to the Fallacy of Composition which involves the inappropriate leap from distributivity to collectivity.

In this context, consider once more the idea of propositional conjunctivity:

(Fp & Fq) → F(p & q)

But let us now generalize this to

(Fp1 & Fp2 & ...) → F(p1 & p2 & ...)

or equivalently

(∀i)Fpi → F(∧i pi)

The structure of the fallacious reasoning from distributivity to collectivity replicates the ∀/◇ inversion at issue with the Musical Chairs fallacy.

* * *

The overall lesson is clear. Fallacies of aggregation have many versions, all of them exhibiting one common flaw: an illicit and inappropriate supposition that certain critical features of the components are going to be conserved when they are assembled into larger wholes.

APPENDIX: SUMMARY OF FALLACIES OF AGGREGATION

• The Classical Fallacy of Composition

(∀x)(x ∈ X → Fx) → FX

• The Hume-Edwards Fallacy

The Classical Fallacy in the special case: Fx = (∃e)(e explains E!x)

• Musical Chairs Fallacy (Any to All/Every Fallacy)
(∀x)◇Fx → ◇(∀x)Fx

• The Skeptics’ Fallacy

This is the Musical Chairs Fallacy in the particular format (∀p)◇Mp → ◇(∀p)Mp, where Mp claims the mistakability of p.

• The Fallacy of Illicit Totalization

(∀x)ZFx → Z(∀x)Fx

•
The Fallacy of Illicit Amalgamation

(∀x)(∃y)Rxy → (∃y)(∀x)Rxy
• The Fallacy of Illicit Conjunction

(Zp & Zq) → Z(p & q), or in general: (∀i)Zpi → Z(∧i pi)

• The Lottery Paradox

Illicit Conjunction with respect to probability: Zp = Pr(p)

• The Preface Paradox

Illicit Conjunction with respect to warranted assertability: Zp = Wp

NOTES

1
On the related issue of “vagrant predicates” where specificity is totally unavailable—as per “a perpetrator who has never been identified”—see chapter 15 of the author’s Epistemic Logic (Pittsburgh: University of Pittsburgh Press, 2005).
2
The derivation of the paradox presupposes that ‘acceptance’ is acceptance as true and that truths obey the standard conditions of mutual consistency, of conjunctivity (i.e., that a conjunction of truths be a truth), and of closure (i.e., that logical consequences of truths be true). The lottery paradox was originally formulated by H. E. Kyburg, Jr., Probability and the Logic of Rational Belief (Middletown, Conn.: Wesleyan University Press, 1961). For an analysis of its wider implications for inductive logic see R. Hilpinen, Rules of Acceptance and Inductive Logic (Helsinki: Acta Philosophica Fennica, 1968, fasc. 22), pp. 39–49.

3
Not, however, by Henry Kyburg who, to his great credit, has mooted the prospect of blocking acceptance of the conjunction of an inconsistent set of accepted statements. See his “Probability, Rationality, and a Rule of Detachment,” in Y. Bar-Hillel (ed.), Proceedings of the 1964 Congress for Logic, Methodology and Philosophy of Science (Amsterdam: North Holland, 1965), pp. 203–310.
4
D. C. Makinson, “The Paradox of the Preface,” Analysis, vol. 25 (1964), pp. 205–7. Compare H. E. Kyburg, Jr., “Conjunctivitis,” in M. Swain (ed.), Induction, Acceptance, and Rational Belief (Dordrecht: D. Reidel, 1970), pp. 55–82, see esp. p. 77; and also R. M. Chisholm, The Theory of Knowledge, 2nd ed. (Englewood Cliffs, NJ: Prentice-Hall, 1976), pp. 96–97. The fundamental idea of the Preface Paradox goes back to C. S. Peirce, who wrote: “that while holding certain propositions to be each individually perfectly certain, we may and ought to think it likely that some of them, if not more, are false.” (Collected Papers, 5.498.)
Chapter 9
LEIBNIZ AND THE CONDITIONS OF IDENTITY

1. INTRODUCTION

In theory one can distinguish four different ways of approaching the identification and individuation of items: the descriptive, the functional, the modal, and the relational. However, Leibniz undertook a series of metaphysical commitments that had the significant consequence of fusing all four of these approaches into one single unified whole. On this basis, his metaphysical system achieved a striking integration of logical and philosophical considerations in a way typical of the thought of this great systematizer. It also affords some instructive insights into the complexities of individuation.

From the earliest days of his philosophizing, the ideas of identity and individuation played a prominent part in Leibniz’ thought. His baccalaureate dissertation of 1663, De principio individui, was already devoted to this subject, to which he was to give lifelong attention. However, before embarking on historical matters, let us consider the logical lay of the land. What is it that identifies something as the thing that it is? In theory, there are four prime possibilities here:

• its properties (descriptive individuation)
• its processes (processual or functional individuation)
• its necessities and/or its possibilities (modal individuation)
• its relations (relational individuation)

In themselves, viewed in theoretical abstraction, these four approaches represent different conceptions. For, in theory, decidedly variegated sorts of things are at issue here, and items that are identical in one mode could conceivably differ in another, with, for example, descriptively identical items exhibiting processually different behaviors in somewhat the same manner (say) in which chemically identical isotopes could behave differently.
The present discussion, however, will try to show that thanks to some of the definitive characteristics of Leibniz’ metaphysics, it transpires that for him all of these approaches come into alignment as so many roads leading
to the same destination. And it is in fact a salient and definitive feature of his metaphysics that it unifies a variety of theoretically distinct and dissonant modes of identification. In this way, the Principle of Individuation comes to play a formative and far-reaching role in Leibniz’s philosophy.

2. DESCRIPTIVE INDIVIDUATION

Descriptive individuation is a matter of identification on the basis of the possession of properties. Modern logicians seek to capture this idea in the well-known formula

x = y iff (∀F)(Fx ≡ Fy)

On this construal, items are identical when they have exactly the same properties in common. Accordingly, what is at issue here is a matter of descriptive identity.1 Leibniz himself adopts and implements this conception of descriptive individuation via his idea of the complete individual concept of a substance—its all-embracing identifying description (as present in the mind of God). As he sees it, “c’est la nature d’une substance individuelle d’avoir une telle notion complète, d’où se peut déduire tout ce qu’on luy peut attribuer.”2 (“It is the nature of an individual substance to have so complete a notion that everything one can attribute to it may be deduced from it.”) Objects are thus individuated by the properties they actually have: these properties could not be altered without a change in the identity of the object, transmuting it into something different from what it was. Accordingly, for Leibniz individual substances are by necessity what they are; their descriptive nature is definitionally fixed and permits no hypothetical changes:

Le principe d’individuation revient dans les individus au principe de distinction ... Si deux individus estoient parfaitement semblables et égaux et (en un mot) indistinguables par eux memes, il n’y auroit point de principe d’individuation.3

(“The principle of individuation comes down, in individuals, to the principle of distinction ... If two individuals were perfectly similar and equal and, in a word, indistinguishable in themselves, there would be no principle of individuation.”)
Property differentiation is essential to descriptive individuation.

3. PROPOSITIONAL OR FUNCTIONAL INDIVIDUATION

With functional individuation, by contrast, interchangeability in propositional contexts is the key. Just this is the burden of Leibniz’ classic formula:
Eadem sunt quarum unus in alterius locum substitui potest salva veritate: things are the same when they can be substituted for each other (in propositional contexts) without any alteration of truth.4 The basic idea here is that a thing is what it does and that when the same contentions hold for the doings and operations of items then the same thing must be at issue:

x = y iff (• • • x • • •) ≡ (• • • y • • •), for all relevant contexts (• • • [ ] • • •).
Accordingly, the identity of a substance becomes fixed by the manifold of things it can be said to do. A thing’s assertoric modus operandi thus affords another route to its individuation. And this, of course, is just exactly the position of the Leibnizian metaphysics, with its key idea that its activity serves to characterize a substance as the thing it is.5 For Leibniz, after all, action is coordinate with the transformation of properties: “toute action de la creature est un changement de ses modifications.”6 (“Every action of the creature is a change of its modifications.”) Identity is thus maintained through an operational change of context.

Observe, however, that subject to a suitably wide notion of a property, and with contextual satisfaction as per (• • • x • • •) taken to represent an eligible property, such a mode of processual/functional identity once more amounts to being a version of property identity. With things identified as what they can contextually be said to do, functional identity is led back to being just another version of property identity.

4. MODAL INDIVIDUATION

While descriptive identity is sacrosanct for mathematics, where all the relevant items are abstractions that have only necessary and never contingent properties, metaphysicians face a very different situation and generally see the matter of identity in a very different light. For in dealing with the concreta of actual existence, they will in general view item identity as a matter of sharing not necessarily all of an item’s properties, but only all the essential properties—those properties an item must have to be what it is. On this basis we will have:

x = y iff (∀F)(□Fx ≡ □Fy)
Moreover, as long as the properties at issue here are taken in a sufficiently general way, so that the non-possession of a property itself also constitutes a property, the preceding specification will be logically equivalent to:

x = y iff (∀F)(◇Fx ≡ ◇Fy)

What is at issue here is thus a conception of modal individuation on whose basis items are identical whenever the set of properties they can possibly share is exactly the same. While descriptive identity is sufficient and well-suited for operating with abstract objects (all of whose properties are effectively essential to them), modal identity is designed for operating with concrete objects within a metaphysical setting that discriminates between essential and inessential properties.

For Leibniz, however, the properties that a substance actually has are all and without exception to be seen as essential, so that all of a substance’s properties are necessary and essential to its identity. As he sees it, if any actual state of affairs were anywise different, then, as outcome of a natural course of development,7 the entire universe would have to have a different history of development. We should then be confronted with a world different from ours, involving another possible development of things, so that our hypothetical foray would lead us to another, altogether different manifold of being. As Leibniz makes emphatically clear in his correspondence with Arnauld, an item could not in any respect be different from what it actually is and continue to be the same. Only entire worlds can be contingent, not the individuals within them.

But now the necessity operator of the preceding identity-specification can be simply suppressed, so that this mode of identity reverts to the earlier specification of property identity. Descriptive individuation carries modal individuation in its wake. For Leibniz, the separation between descriptive and modal individuation comes to be a distinction without a difference.

5. RELATIONAL IDENTITY

There is, moreover, yet another mode of identity, namely the relational identity at issue with a sameness of relationships:

x = y iff (∀R)(∀z)([Rxz ≡ Ryz] & [Rzx ≡ Rzy])
On this basis, items are identical whenever they enter into exactly the same relationships to other items. On this approach, sameness of placement within that relational framework relative to other things will provide for the identification of items. The very idea of transporting individuals into other possible worlds—so near to the heart of contemporary semanticists—is thus flatly ruled out. Leibniz’ metaphysical approach to individuation and his property-geared approach to identification are thus inseparably linked.

Observe, however, that if we coordinate property possession with relatedness as per

Fx iff (∃y)(RFxy & RFyx), where RF is a suitably F-corresponding relationship,

then descriptive individuation and relational individuation come in effect to one and the same thing. This approach to substance-identity is pivotal in Leibniz’s monadological metaphysics. For him, the properties of a substance are, in effect, formed or determined by its relations and thereby by its characteristic point of view within the overall universe of thought.8 The relational/dispositional nature of properties in the Leibnizian metaphysics means that relatability is yet another avenue to individuation. Every existing substance (monad) is related to all of the others, but each in its own characteristic way that differentiates it from all the rest. Leibniz maintains the coordination of substance-relationships with the attributes (properties, identifications) of the substances involved. As he puts it:

A plurality of modifications must necessarily be formed together in the same simple substance, and these modifications must consist of the variety of relations of correspondence which the substance has to things outside.9
Leibniz’s metaphysical property-reducibility theory of descriptive identity accordingly manages to carry relational identity in its wake.

6. THE UPSHOT

As these deliberations indicate, in the philosophy of Leibniz, all four of the key modes of identity come to coalesce and coincide on the basis of his
metaphysical commitments. And so in creating a unity in fact where there is a diversity in thought with respect to the pivotal concept of individuation, the principles of Leibnizian metaphysics play a crucial unifying function. Leibnizian metaphysics enables the many currents of item-identity to flow into a common channel, for with the fundamental commitments of Leibniz’s metaphysics, all of those theoretically distinct modes of individuation coalesce into one.

NOTES

1
See sects. 8–9 of the Monadology.
2
Gerhardt, Phil. II, 41. [This form of reference stands for: C. I. Gerhardt (ed.), Die philosophischen Schriften von G. W. Leibniz, 7 vols. (Berlin: Weidmann, 1875–90).]
3
Gerhardt, Phil. V, p. 214.
4
Gerhardt, Phil. VII, p. 393.
5
Theodicy, Preface (Gerhardt, Phil. VI 45).
6
Gerhardt, Phil. VI, p. 340.
7
In Leibniz, one must remember, we are confronted with a strict mechanist.
8
See Phil. VI, p. 616.
9
“Principles of Nature and of Grace,” sect. 2.
Chapter 10
WORLDLY WOES: THE TROUBLE WITH POSSIBLE WORLDS

1. OBSTACLES

Merely possible, nonexistent individuals and worlds are in deep difficulty. For as such—as concrete individuals—they must also be definite in the detail of their makeup. An individual must be descriptively so detailed and complete that any descriptively specifiable feature either must hold of the item or fail to hold of it; there is no prospect of being indecisive with regard to its make-up.1 In consequence the Law of Excluded Middle must apply: a world and its constituent individuals must exhibit a determinateness of composition through which any particular sort of situation either definitely does or definitely does not obtain. The grass blades of a world cannot just be greenish but have to be a particular shade; its rooms cannot contain around a dozen people but have to commit to a definite number. Such a possible world is not just any state of affairs,2 but will have to be a “saturated” or “maximal” one. It will have to resolve if not everything then at least everything that is in theory resolvable. (Unlike the statement that “A pen is writing this sentence,” a world cannot leave unresolved whether that pen is writing with black ink or blue.) If an authentic world is to be at issue (be it existent or not) it will have to be descriptively complete: it must “make up its mind,” so to speak, about what features it does or does not have.3 The Law of Excluded Middle will have to apply: any assertion that purports to be about it must thus be either definitively true or definitively false—however difficult (or even impossible) a determination one way or the other may prove to be for particular inquirers, epistemologically speaking. Authentic individuals and worlds do and must accordingly have an altogether definite character.4 To identify a nonexistent world descriptively—as we must, since ostension is unavailing here—we have to specify everything that is the case regarding it, and this simply cannot be done.
To be sure, the identification of our (actual) world is simplicity itself. Since we ourselves belong to this world, all we need do is to indicate that what is at issue is this world of ours (thumping on the table).5 The very fact of its being the world in which we are all co-present together renders such an essentially ostensive identification of this world unproblematic in point of identification and communication. However, the matter is very different
with other “possible worlds” that do not exist at all. One clearly cannot identify them ab extra by some physically ostension-involving indication that is, by its very nature, limited to the domain of the actual. Identification would have to be effected by other and different means. And here comes the difficulty. For the only practicable way to identify an unreal possible world is by some sort of descriptive process. And, as the preceding chapter has argued, this procedure is simply not practicable, since its unavoidably schematic character cannot provide for the uniqueness indispensable for the identification of a particular world. For what it would have us do is to project a hypothesis specifying the descriptive constitution of a world. And, as noted above, such hypotheses can never be elaborated to the point of descriptive definiteness. As regards those merely possible worlds, we simply have no way to get there from here.

Authentic world-descriptions are simply not available to finite beings. Their limitless comprehensiveness makes it impracticable to get a descriptive grip on the identificatory particularity necessary for anything worthy of being characterized as a nonexistent world. And so from this angle too we reinforce the thesis that the alternative reality of merely hypothetical individuals and worlds is bound to deal in abstracta and is thereby unable to present concrete and authentic objects. And here the situation as regards possible worlds is, if anything, even more problematic than that of possible individuals. Seeing that we can only get at unreal possibilities by way of assumptions and hypotheses, and that such assumptions and hypotheses can never succeed in identifying a concrete world, it follows that we can only ever consider such worlds schematically, as generalized abstractions. Once we depart from the convenient availability of the actual we are inevitably stymied as regards the identification of nonexistent particular worlds.
Whatever we can appropriately say about such “worlds” will remain generic, characterizing them only insofar as they are of a certain general type or kind. Possible-world theorists have the hubris of employing a machinery of clarification that utilizes entities of a sort of which they are unable to provide even a single identifiable example.

2. PROBLEMS OF STRONG POSSIBLE-WORLD REALISM

Possible-world realism—often also called modal realism—is the doctrine that, apart from this actual world of ours, the realm of being, of what there is, also includes alternative worlds that are not actual but merely possible.
Being—reality at large—is thus seen as something broader than mere actuality. There are two versions of the theory. Strong modal realism holds that those alternative worlds really exist, albeit in a different domain of their own, outside the range of our universe’s space-time framework. And weak modal realism holds that while those alternative worlds do not really exist, they nevertheless somehow quasi-exist or subsist on their own in total disconnection from anything going on in this actual world of ours—apart, perhaps, from being thought about by real people.

Let us begin with the former. The most emphatic sort of strong modal realism proposed in recent times is that of David Lewis.6 As he sees it, nonactual possible worlds are comparable to “remote planets, except most of them are much bigger than mere planets and they are not remote [since they are not located in our spatial-temporal realm].”7 All of these worlds—and their contents—exist in the generic sense of the term, and all of them stand on exactly the same footing in this regard, although none exists in or interacts with another. (Existence in a narrower sense is always world-correlative, a matter of placement within some possible world.) This world of ours is nowise privileged in relation to the rest; it does not differ in ontological status but only in the parochial respect that we ourselves happen to belong to it. As Lewis puts it:

Our actual world is only one world among others. We call it alone actual not because it differs in kind from all the rest but because it is the world we inhabit. The inhabitants of other worlds may truly call their own worlds actual, if they mean by “actual” what we do; for the meaning we give to “actual” is such that it refers at any world i to that world i itself. “Actual” is indexical, like “I” or “here,” or “now”: it depends for its reference on the circumstances of utterance, to wit, the world where the utterance is located.8
And so, as Lewis’ approach has it, the manifold of possible worlds as a whole is the fundamental ontological given. Thus, strictly speaking, there are no “unrealized possible worlds” at all—all possible worlds are realized, all of them exist as parts of one all-comprehensive reality, one vast existential manifold. It is just that they are spatiotemporally and causally disconnected, there being no way save that of thought alone to effect a transit from one to another. What Lewis, in effect, does is to abolish the idea of “nonexistent possibility” in favor of one vast existential realm in which our spatiotemporal real world is only one sector among many. (His theory is much like that of the Greek atomists, except that their worlds were emplaced within a single overarching space and could collide with one another.) With Lewis, as with Spinoza before him, reality is an all-inclusive existential manifold that encompasses the whole of possibility.

He holds that it is a fallacy “to think of the totality of all possible worlds as if it were one grand world” because this invites the misstep of “thinking that there are other ways that grand worlds might have been.” Of course the manifold of possibility could not possibly be different, since whatever is possible at all is part of it. But this clearly does not block the path to thinking of the totality of all possible worlds as one all-embracing superworld.9

Lewis thus projects an (extremely generous) conception of existence according to which (1) anything whatsoever that is logically possible is realized in some possible world; (2) the entire manifold of “nonexistent possible worlds” is actually real; (3) the existential status of all of these possible worlds is exactly alike, and indeed is exactly the same as that of our own “real” world; and (4) there is also a narrower, more parochial sense of existence/reality which comes to co-existence with ourselves in this particular world—that is, being co-located with ourselves in this particular world’s spatiotemporal framework.

But this position runs up against the decisive fact that one must “begin from where one is,” and we are placed within this actual world of ours. There is no physical access to other possible worlds from this one. For us other possible worlds cannot but remain mere intellectual projections—mere “figments of the imagination.” The problem with Lewis’s strong actualism is that from our own starting point in the realm of the real—the only one that we ourselves can possibly occupy—this whole business of otherwise existence is entirely speculative, because our own access to the wider realm beyond our parochial reality is limited to the route of supposition and hypothesis.
10

Our standpoint—the one we actually have—is the only one in whose terms our own considerations can proceed. The priority of actuality in any discussion of ours is inevitable: it is not a matter of overcoming some capriciously adopted and optionally alterable point of departure.

But what of a weaker possible-world realism, one which, while holding that such worlds do not exist, nevertheless concedes them an attenuated form of being or reality—of quasi-existence in an actuality-detached domain of their own? Many philosophers deem even this sort of thing deeply problematic. As they were coming into increasing popularity, J. L. Mackie wrote that “talk of possible worlds ... cries out for further analysis. There are no possible worlds except the actual one; so what are we up to when
we talk about them?”11 And Larry Powers quipped that “The whole idea of possible worlds (perhaps laid out in space like raisins in a pudding) seems ludicrous.”12

However, while such disinclination to fanciful speculation seems plausible enough, the principal reason for rejecting the subsistence or quasi-reality of possible worlds lies in their cognitive inaccessibility. For being and identity are correlative, and as the previous discussion has stressed, there is simply no viable way of identifying such merely possible worlds and their merely possible constituents. The problem lies in thinking that the locution “a world just like ours except that ...” can be implemented meaningfully. It cannot. For once one specifies any change in the world’s actual state of things, that “except that” listing can never be brought to an end. Once we start to play fast and loose with the features of the world we cannot tell with any assurance how to proceed. Consider its law-structure, for example. If electromagnetic radiation propagated at the speed of sound how would we have to readjust cosmology? Heaven only knows! To some extent we can conjecture about what consequences would possibly or probably follow from a reality-abrogating supposition. (If the law of gravitation were an inverse cube law, the significantly lesser weight of animals would permit the evolution of larger dinosaurs.) But we cannot go very far here. We could not redesign the entire world—too many issues would always be left unresolved. In a well-articulated system of geometry, the axioms are independent—each can be changed without affecting the rest. But we have little if any knowledge about the interdependency of natural laws, and if we adopt an hypothesis to change one of them we cannot concretely determine what impact this will have on the rest. The specification of alternative possible worlds is an utterly impracticable task for us finite mortals.
Even when viewed epistemically as mere methodological thought-tools, merely possible worlds and individuals remain deeply intractable.

3. AVERTING THE PROBLEM OF TRANS-WORLD REIDENTIFICATION

But do we not achieve merely possible individuals—and thus worlds—via such scenarios as:

• He thought he saw a man in the garden and that THAT MAN wore a top hat and carried a cane.

• She thought there was a ghost on the landing and that THIS GHOST wore a long white robe and made strange wailings.
In such cases we do not, properly speaking, have an anaphorical back-reference (THAT MAN or THIS GHOST) to a “nonexistent object”—to something that does not in fact exist. To see this it is necessary to note two important considerations.

(1) Letting TZp represent that someone Z (he, she, they, whoever) thinks p to be the case, observe that

• TZ(∃x)Fx (Z thinks that there is an x that has F)

is something very different from the claim:

• (∃x)TZFx (There is an x which Z thinks to have F)

As that out-front (de re) quantifier indicates, the latter thesis involves an existential claim on our part. But the former de dicto quantifier leaves us entirely out of it in point of any existential claim or commitment. It is certainly (and unproblematically) possible for people to think about nonexistent individuals, but that neither entails nor requires that there actually be nonexistent individuals for them to think about. We must be very careful not to conflate or confuse theses of those very different formats. My thoughts about the Easter Rabbit neither presuppose nor entail his existence.

(2) The second important consideration is that the two preceding statements actually come down to

• He thought he saw a man in the garden who wore a top hat and carried a cane.

• She thought there was a ghost on the landing which wore a white robe and made strange wailings.

The appropriate grammar of these statements is not

T(∃!x)Fx & G(ιx TFx)
but rather

T(∃!x)(Fx & Gx)

With the former formulation we stake a claim that is made on our own account, viz. G(ιx TFx), which commits us to attaching G to an object of a certain sort to whose existence we stand committed. But this is nowise the case. The entire contention at issue is (and should be seen as being) within the scope of T. It is the subject at issue (Z) who bears the total responsibility for any purportings of existence; no existential commitment of any sort rubs off on us who are merely the reporters of Z’s odd belief in objects that do not exist.

After all, there is a significant difference between “I think that there is a possible world with the feature F” (symbolically T(∃w)Fw) and “There is a possible world that I think to have the feature F” (symbolically (∃w)TFw). For the former does not entail the existence of possible worlds: T(∃w)Fw can be perfectly true not only when both (∃w)TFw and (∃w)Fw are false, but even when every statement of the form (∃w)(w ≠ w* & [... w ...]) is false—w* here being the actual world—because there just are no “merely possible worlds” at all. Neither here nor elsewhere does one bring something (except a thought) into existence by merely thinking it to exist.13

But what of the following objection:

Suppose that someone—a purely fictional Mr. Smith, say—had done this-or-that (which in fact nobody did). Then this person would have been regarded as a hero.
Does this sort of hypothesis not bring a nonexistent individual upon the agenda of consideration? By no means! The so-called individual supposedly at issue here is no individual at all, nor strictly speaking is “Smith” a proper name. That individual is one “in name only” and that name is a pseudo-name. All that we have here is an expository device, a stylistically different way of stating the generalized conditional: “If someone had done this-or-that he would have been regarded as a hero,” which in turn encapsulates the de-individualized generalization: “Anyone who would have done this-or-that would have been regarded as a hero.”

One positive result of rejecting possible individuals and worlds at the ontological rather than merely discursive level is that the vexed question of the “trans-world identity” across the range of merely possible individuals
will now never arise.14 Lewis proposes to settle this issue of “counterparts,” as he calls them, on the basis of similarity. But this clearly will not do, for with things as with worlds there is, after all, no such thing as synoptic similarity but only similarity in regard to this or that particular respect or aspect. The most sensible view is that trans-world identity is simply a matter of supposition, assumption, or postulation. There is no supposition-independent fact of the matter about it. Thus consider what might be called Chisholm-type questions:15 Messrs. A and B gradually interchange their properties (first their height, then their weight, then their hair color, etc.). At what point does A become B? Answer: it’s all up to you. It’s your hypothesis that’s at issue here. You have to tell us how you want it to play out. The issue is one of Questioner’s Prerogative: it is his right (and duty) to spell out just exactly what the question is. There are no supposition-external facts of the matter to constrain this. Once we acknowledge that assumption and hypothesis alone provide our pathway to the domain of the merely possible, this vexed question of the “trans-world identity” of individuals becomes simplicity itself. Given that nonexistent worlds are constituted by hypotheses (by imaginative stipulations) the identity of their individuals is dependent entirely on the nature of the hypotheses at issue. Is that imaginary general identical with Caesar or not? Ask the individual whose imagination is at issue! It will all depend on what he says. In a classic paper, W. V. Quine (1948) embarrassed the possible-individual theorists of his day with the question: “How many possible fat men are there in the doorway?” His opponents proceeded to seek refuge in possible worlds, attempting the response that there was one such individual per world, at best.
In taking this line, however, they failed to note that essentially the same difficulty with identifying possible fat men arose with identifying the worlds that supposedly contain them. Even when world w puts its single fat man into doorway No. 1, how many other doorways does it have for occupancy by thinner men? It might seem possible to arrive at possible worlds via fictionalizing assumptions, as per: “Assume a possible world in which dogs have horns.” But this, of course, takes us no further than certain radically incomplete “states of affairs” and does not put any concrete particulars on the agenda. In effect it deals not with authentic worlds but with a schematic supposition of propositional possibilities. After all, how are we to manage this sort of thing? How did those dogs come by horns? By sorcery? By crossbreeding with goats? By a different course of evolution? And in any case,
what are the details of the developmental modus operandi? The reality of things will always go beyond what we are explicitly instructed to assume. Assumptions can never suffice to characterize an authentic world. The long and short of it is that refusing to grant any sort of existence or reality to nonexistent possible individuals and worlds averts a plurality of intractable issues. Let us examine some of them. 4. WORLDLY WOES Charles Chihara has proposed viewing the problem of nonactual possible worlds in an epistemological light, arguing that “it is not clear that we have any conceivable way to gain knowledge of other possible worlds” so that “knowledge of such worlds is completely beyond our powers.”16 And this is both sensible and correct as far as it goes. But the real problem is not just the limit of our cognitive powers with respect to other-worldly things but the inherent infeasibility of characterizing and identifying such a “world”—of specifying what is and what is not at issue with any particular instance of this sort of thing. The problem, in other words, is not with ourselves but with those “worlds”—it is not just knowledge of such objects that is infeasible but the very “objects” themselves. As William Lycan has observed, “a view according to which worlds are the way we say they are because they are simply stipulations by us has a considerable advantage over [a supposedly self-sufficient realm of nonexistents].”17 But, of course, what we can actually ever say about worlds (in abstracto) never suffices to identify a world. When possible world theorists propose to identify worlds via sets of propositions they fail to recognize that actually available (and thereby finite) sets of propositions simply cannot do the requisite job. Anything worthy of being designated as a world will have to involve a plethora of descriptive detail that can never actually be articulated. R. C.
Stalnaker has written: There is no mystery to the fact that I can partially define a possible world in such a way as to be ignorant of some of the determinate truths in that world. One way I can do this is to attribute to it features of the actual world which are unknown to me. Thus I can say, “I am thinking of a possible world in which the population of China is just the same, on each day, as it is in the actual world.” I am making up this world—it is a pure product of my intentions—but there are already things true in it which I shall never know.18
One must, however, be careful here. Those “incomplete worlds” are not worlds at all; they are world-schemata. For no one single definitely identifiable world is in view, but entire spectra or families thereof. It is not the case that some one individuated world is before us, only one that is in some respects inherently indeterminate, so that there are facets of it that cannot be known. There simply is no definite “it” about which there are certain facts of the matter that we cannot determine. Consider the Stalnaker-analogous supposition of a world where the population of Shanghai is otherwise just exactly as is, except that oneself is among them. Can such a world count as other than schematic? Well, perhaps so as far as its people go. But what about the rest of its make-up? How did I get there? Where are those air particles that my body displaces? And how did they get there? What that assumption has done is to confront us once again with something that is not a world but a world-schema. And the fact remains that the reason for ignorance about matters of content is simply that the world to be at issue has been characterized only partially, so that in effect no one single definitely identifiable world is in view. The descriptive specification of a whole world—any world—is an impossible task. The assumptions that supposedly take us into nonexistent possible worlds are always incomplete and thereby merely schematic.19 A merely possible individual or world that is only “partially defined” by way of an assumption or supposition is in effect a schema to which a plurality of definite possible individuals (or worlds) can in principle answer, exactly as a partially described actual individual can (for all we know) turn out to be any one of a plurality of alternatives.
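The point that a partial specification answers to a plurality of complete alternatives can be made quantitative in miniature. The following sketch is merely illustrative and no part of the text's own apparatus: it treats a "world" as a complete True/False assignment over a small, invented stock of descriptive questions, and a schema as an assignment that settles only some of them; the schema then admits 2^k distinct completions, k being the number of open questions.

```python
from itertools import product

# Toy model: a "world" settles every descriptive question; a schema settles only some.
# The questions themselves are invented purely for illustration.
QUESTIONS = ["red-headed", "male", "six feet tall", "left-handed"]

def completions(schema):
    """All complete assignments (candidate 'worlds') compatible with a partial schema."""
    open_questions = [q for q in QUESTIONS if q not in schema]
    worlds = []
    for values in product([True, False], repeat=len(open_questions)):
        world = dict(schema)
        world.update(zip(open_questions, values))
        worlds.append(world)
    return worlds

# "Assume a red-headed person in that chair": one question settled, three left open.
schema = {"red-headed": True}
worlds = completions(schema)
print(len(worlds))  # 8, i.e. 2**3 distinct candidates answer to the single schema
```

With even four questions the schema picks out eight candidates rather than one; with the endless descriptive detail of an authentic world, the underdetermination is correspondingly boundless.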
To speak of “partially defined individuals”—an expression favored by some theorists—makes about as much sense as speaking of a “partially pregnant woman.” Such incomplete specifications of individuals confront us with possible individual or world schemata rather than individuated possible individuals or worlds, and indicate an indeterminacy that is epistemic rather than ontological. A striking fact about schematically identified individuals is that the Law of Excluded Middle—in the form of the principle that of thesis p and its contradictory not-p, one must be true—fails to obtain. Thus in the context of the hypothesis “Assume there were a red-headed person sitting in that chair” we could neither say that this hypothetical person is male nor that it is not, nor again, neither that it is six feet tall nor that it is not. Indeed this very failure, even in principle, of a complete characterization precludes the prospect that we are dealing with an authentic world.
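This failure of Excluded Middle for a schematic individual can likewise be mimicked computationally. In the hedged sketch below (the predicates are invented for illustration), a query against a partial hypothesis returns True, False, or None (undecided), so that for an open question neither p nor not-p comes out true:

```python
# A schematic individual as a partial specification: queries come back True, False,
# or None (left open by the hypothesis). Predicates are invented for illustration.
hypothetical_person = {"red-headed": True}  # "a red-headed person sitting in that chair"

def holds(schema, predicate):
    """Three-valued query: True/False if the hypothesis settles it, None if open."""
    return schema.get(predicate)

p = holds(hypothetical_person, "male")
not_p = None if p is None else not p

# Excluded Middle fails for the schema: neither "male" nor "not male" is true of it.
print(p is True or not_p is True)  # False
```

A genuine individual, by contrast, would settle every such query one way or the other; it is precisely the ineliminable `None` that marks the schema as a schema.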
Some semanticists cover the inescapable indefiniteness of possible individuals and worlds by treating them as abstract objects of a particular sort.20 But this is deeply problematic because abstractions cannot achieve concreteness, and individuals and worlds must be concrete. For while abstractions may indeed characterize a type of world—and lots of them—they do not identify a single one of them, any more than an abstractum like “bluish” determines any particular blue color patch to which (among others) it appertains. To be sure, some theorists, while rejecting the existence/subsistence of nonactual possible worlds as such, are prepared to endorse the existence of certain abstract entities that qualify as “world descriptions” or “maximal state-of-affairs characterizations.”21 But this does no more than to shift the problem to another mode of impracticability, owing to the infeasibility of ever arriving at such an abstract entity (a viable world-characterizing description). The task of providing something that would qualify as such is simply unachievable. Those theorists who so glibly conjure with nonexistent possible worlds cannot produce a single adequate example of the kind of thing they purport to deal with. 5. REDISTRIBUTION WORLDS NO EXCEPTION “But surely it’s not so difficult to specify worlds. A variant world could, for example, be just like this one except for Caesar’s deciding not to cross the Rubicon on that fateful occasion.” Very well. But now just exactly what is this world to be like? What actually happens in such a world? An endless proliferation of questions arises here. Does Caesar change his mind again a moment later and proceed as before with just a minor delay? Is he impelled across by force majeure and does he then decide to carry on as was? And if he doesn’t cross, then exactly what does he do? And what will all those who interacted with him then and later be doing instead? The resulting list of questions is literally endless.
22 Innumerable alternatives confront us at every point and these themselves lead to further alternatives. As we specify ever more detail we do not reach definite worlds but continually open doorways to yet further possibilities. The idea of identifying a possible world in some descriptive way or other is simply unworkable. (To be sure, even this actual world is not adequately describable by us, but that does not matter for its identification, seeing that, most fortunately, we can—thanks to our emplacement within “this actual world of ours”—individuate its contents ostensively.)
Again consider the hypothesis: “Suppose a world otherwise just like our actual one except that there is an elephant in yonder corner of the room.” Contrary to first appearance, this supposition also does not introduce a particular (individuated) world, but a world-schema that can be filled out in many alternative ways. Thus consider the questions: Are we to redistribute the actually existing elephants and put one of them into the corner (if so, which one?)? Are we to take some actual thing—say the chair in the corner—and transmute it into an elephant (given that such a supposition could qualify as feasible)? Are we to keep all the actual things of our world’s inventory in existence and somehow “make room” for an additional, supernumerary one—viz., the (hypothetical) elephant at issue? Until issues of this sort have been resolved, the supposition does not introduce a definitely specified world into the framework of discussion—any more than the supposition “Assume there were a red-headed person sitting in that chair” succeeds in introducing a definitely identified person. But even though it is not practicable to realize individual nonexistent possible worlds through populating them with nonexistent objects, could we not at least create them by redistributing existing objects?23 Wittgenstein’s Tractatus proposed what William Lycan called a “combinationalist” construal of other possible worlds via the idea that such worlds are simply rearrangements of what in actual fact are the ultimate constituents of this world.24 Now as David Lewis rightly emphasized, any recombinant theory of possibility will have to make up its mind as to the nature of the basic building blocks.25 It cannot avoid the question: Recombinations of what? And here the two prime prospects are: (i) recombinations of physical objects (perhaps at the atomic or even subatomic levels) in the framework of space and time, or (ii) recombinations of some of the descriptive properties of otherwise invariant objects.
Either way, we take a fundamentally conservative view of possible worlds: all of them contain the same basic objects as the actual one (no additions or deletions allowed!) but with changes only in either (i) location in space-time, or (ii) possession of certain (inessential) descriptive features. Such recombinant possible worlds can either shuffle actual objects about in space-time or shuffle their properties about in a descriptive “space.”26 Let us consider these two prospects. Consider first the idea of obtaining a world via a hypothetical spatial rearrangement of actual objects. So, for example, we might populate that shelf (pointing) with these particular books (pointing) that are currently placed on other shelves in the bookcase. Or again we might proceed by hypothetically stipulating that yon two cats, Tom and Jerry, be interchanged on their
respective mats. Then would the hypothesis “Let us suppose that Tom were on Mat 2 (instead of Mat 1 as is) and Jerry were on Mat 1 (instead of Mat 2 as is)” not succeed in projecting another possible world for us? Alas, no. For consider once more how we could ever manage to get there from here. Assume those two cats to be interchanged right now. Where were they a nanosecond ago? In their actual places—and if not, then where? And how did they effect their transit? How are the laws of physics to be changed to accommodate these changes of place? Or if we hypothesis-shuffled the books in that bookcase about, then what are we to do with the course of world history that has emplaced them in their present positions and will subsequently position them in their future locations? And, problems of laws of nature apart, another difficulty with those positional redistribution worlds is that they rest on the naïve doctrine of ancient Greek atomism that space is something entirely independent of the objects that “occupy” it, so that a uniform space is available for objects to be redistributed in it. Given a realistic physics that involves space and its objects conjointly with time and its processes, these redistribution worlds are no longer available as such. The trouble with spatial rearrangement worlds is that they open up a vast host of questions as to how to get there from here to which there just is no available answer. Those interchange hypotheses can only result in something schematic because the necessary concreteness is not achieved, since so much remains unresolved: the exact positioning of the constituents, the exact processes by which they came to be where they are, etc.
In hypothetically exchanging or interchanging those various real-world objects we emphatically do not effect a transit into another particular possible world, but simply put on the agenda a vast schema of alternative possibilities that would, for individuation, require a filling in of a volume of detail which we simply cannot provide in the required completeness. Such hypotheses lead down a primrose path where every answer to a question that arises leads to yet further questions, ad infinitum.27 A different sort of rearrangement world has also been contemplated—one in which we are to rearrange not positions but properties. “Suppose a possible world just like this one except that its elephants are pink.” However, such a supposition is literally absurd. We cannot possibly implement its indication of “just like.” For we cannot get those elephants to be a different color without changing the laws of biological or optical or other phenomena in totally imponderable ways. And such change is
clearly bound to have wider ramifications, so that those descriptively recombinant “worlds” are a mere illusion. But what of the prospect of rearranging not objects or properties but rather the truth-values of statements about them? Can we not simply coordinate possible worlds with specifications that assign the truth values T or F to every sentence of a language? Not really! For no verbal confabulation can characterize a complete world. The range of fact far outruns the limits of language,28 so that this tactic is bound to prove inadequate to the needs of the situation. And, given the logical and metaphysical integrity of the real, it could not be otherwise. For once we start conjuring with the real world’s facts we are on the slippery slope to nowhere. Rearrangement proposals are simply unable to overcome the indefiniteness and underdetermination that affects fact-violating suppositions in general. 6. CONCLUSION Possible-world theorists are caught up in a position-unravelling dilemma. They can either hold that the merely possible individuals that populate a merely possible world are just like real-world ones in point of descriptive definiteness—in which case they have no practicable way of identifying or individuating such particulars, and thereby no practicable way to conceive of their possible worlds. Or else they can be content with descriptive individuation of a practicable kind at the level of schematic generality—thus indeed achieving a meaningful basis for discussing possible worlds—but thereby only dealing at a level of generality with “worlds” whose nature is schematic and whose individuation is unachieved and unachievable. They cannot have it both ways. Possible world semanticists talk and reason as though possible worlds were somehow given—available and in hand to work with. Where these worlds are to come from—how we can actually get there from here—is a question they simply ignore.
They never manage to unfold a successful story about how we are to arrive at possible worlds given our de facto starting point in this one. They proceed as though one could obtain by mere fiat that which would have to be the work of honest toil—albeit a labor which, however extensive, is in the present case bound to be unavailing.29
NOTES 1
On this feature of concrete worlds see the author’s “Leibniz and Possible Worlds,” Studia Leibnitiana, vol. 28 (1995), pp. 129–62.
2
“A possible world, then, is a possible state of affairs—one that is possible in the broadly logical sense.” (Plantinga 1974, p. 44).
3
Some logicians approach possible worlds by way of possible world characterizations construed as collections of statements rather than of objects. And there is much to be said on behalf of such an approach. But it faces two big obstacles: (1) not every collection of (compatible) statements can plausibly be said to constitute a world, but rather (2) only those can do so which satisfy an appropriate manifold of special conditions, the intent being that any “world characterizing” set of propositions must be both inferentially closed and descriptively complete, by way of assuring that any possible contention about an object is either true or false.
4
Authentic worlds thus differ from the schematic “worlds” contemplated in such works as Rescher & Brandom 1979. These, of course, are not possible worlds as such but conceptual constructs.
5
Van Inwagen 1980, pp. 419–22, questions that we can use ostension to indicate uniquely the world we live in, since he holds that actual individuals can also exist in other possible worlds. But this turns matters upside down. For unless one has a very strange sort of finger its here-and-now pointing gesture does not get at things in those other worlds. There is no way of getting lost en route to a destination where we cannot go at all.
6
Lewis 1986, p. 2. The many worlds theory of quantum-mechanics projected by Everett and Wheeler can also be considered in this connection. Other “modal realists” (as they are nowadays called) include not only Leibniz but Robert Adams (see his 1979), and Robert Stalnaker (see his 1984).
7
Despite abjuring a spatial metaphor, Lewis’ theory in one of its versions required a metric to measure how near or far one possible world is from another. This leads to hopeless problems. Is a world with two-headed cats closer to or more remote from ours than one with two-headed dogs?
8
Lewis 1973, pp. 85–86.
9
On this Lewis–Lycan controversy—see Lycan 2000, pp. 85–86—the present deliberations come down emphatically on Lycan’s side.
10
Lewis 1986 devotes to this problem a long section (pp. 108–115) entitled “How Can We Know?” It is the most unsatisfactory part of his book, seeing that what it offers is deeply problematic, owing to its systematic slide from matters of knowledge regarding possibility de dicto to existential commitments de re.
11
John L. Mackie, Truth, Probability and Paradox (Oxford: Clarendon Press, 1973), p. 84.
12
Powers 1976, p. 95.
13
Statements of the format (∃w)Fw are systematically false unless Fw*. But this of course does not hold for T(∃w)Fw.
14
On this issue see Chisholm 1967, Lewis 1968, Plantinga 1974 (Chap. VI), Forbes 1985, Chihara 1998 (Chap. 2). The problem, of course, vanishes once we turn from possible worlds to the schematic scenarios of assumption and supposition. Here objects in different contexts are the same just exactly when this is stipulated in the formative hypotheses of the case.
15
See Chisholm 1967.
16
Chihara 1998, p. 90.
17
Lycan in Loux 1979, pp. 295–96.
18
Stalnaker 1968, pp. 111–12.
19
In the semantical literature such “worlds” are also called “partial” or “incomplete,” but this of course concedes that they are not really worlds at all.
20
See, for example, Zalta 1988.
21
See Plantinga 1974.
22
Compare Quine 1948.
23
Recombinatory suppositions of this sort are the guiding idea behind David Armstrong’s approach to nonexistent possible worlds as nonactual recombinations of actual objects in Armstrong 1989.
24
See Lycan 1979.
25
See Lewis 1973.
26
Latter-day combinationalist theories along these lines are offered by Cresswell 1972, Skyrms 1981, and Armstrong 1989. Regarding such possibilia arising from redistribution or recombination see Lewis 1986 and also the various papers by Rosenkrantz listed in the References.
27
And what about counting possible worlds? Counting anything, be it worlds or beans, presupposes identifying the items to be counted. What we cannot individuate we cannot count either. Given that one cannot tell just where one cloud leaves off and where another begins, one cannot count the clouds in the sky. Given that one cannot tell one idea from the rest, one cannot count how many ideas a person has in an hour. And the same story goes for possible worlds. If we cannot identify possible worlds, we cannot possibly count them. How many of them are there? God only knows. As far as we are concerned, possible worlds are literally uncountable. And this is so not because they are too numerous but because they lack the critical factor of individuation/identification. Accordingly, we have little alternative but to see the question of quantification as inappropriate—effectively meaningless, seeing that it rests on presupposing our doing something that is in principle impossible for us.
28
See the author’s Epistemetrics (Cambridge: Cambridge University Press, 2006).
29
For material relevant to this chapter’s deliberations, see also Rescher 2003.
REFERENCES Adams, Robert M., “Theories of Actuality,” Noûs, vol. 8 (1974), pp. 211–231; reprinted in Loux 1979, pp. 190–209. ———, “Primitive Thisness and Primitive Identity,” The Journal of Philosophy, vol. 76 (1979), pp. 5–26. Armstrong, David M., A Combinatorial Theory of Possibility (Cambridge: Cambridge University Press, 1989). Chihara, Charles S., The Worlds of Possibility: Modal Realism and the Semantics of Modal Logic (Oxford: Clarendon Press, 1998). Chisholm, R. M., The Encyclopedia of Philosophy, ed. by P. Edwards (New York, 1967), vol. 5, pp. 261–263. Cresswell, M. J., “The World is Everything that is the Case,” Australasian Journal of Philosophy, vol. 50 (1972), pp. 1–13; reprinted in Loux 1979, pp. 129–45. Felt, James W., “Impossible Worlds,” The International Philosophical Quarterly, vol. 23 (1983), pp. 251–265. Forbes, Graeme, The Metaphysics of Modality (Oxford: Oxford University Press, 1985). Lewis, David K., “Counterpart Theory and Quantified Modal Logic,” The Journal of Philosophy, vol. 65 (1968), pp. 113–26; reprinted in Loux 1979, pp. 210–28. ———, “Counterfactuals and Comparative Possibility,” Journal of Philosophical Logic, vol. 2 (1973), pp. 418–46; reprinted in Philosophical Papers, vol. 2 (Oxford: Oxford University Press, 1986). Loux, Michael J. (ed.), The Possible and the Actual: Readings in the Metaphysics of Modality (Ithaca, NY: Cornell University Press, 1979). Lycan, William G., “The Trouble with Possible Worlds,” in Loux 1979, pp. 274–316.
———, Philosophy of Language: A Contemporary Introduction (London: Routledge, 2000). Mackie, J. L., Truth, Probability and Paradox (Oxford: Clarendon Press, 1973). Mates, Benson, The Philosophy of Leibniz: Metaphysics and Language (New York: Oxford University Press, 1986). Plantinga, Alvin, The Nature of Necessity (Oxford: Oxford University Press, 1974). Powers, Larry, “Comments on Stalnaker’s ‘Propositions’,” in A. F. MacKay and D. D. Merrill (eds.), Issues in the Philosophy of Language (New Haven: Yale University Press, 1976). Quine, W. V. O., “On What There Is,” The Review of Metaphysics, vol. 2 (1948), pp. 21–38; reprinted in From a Logical Point of View, 2nd ed. (New York: Harper Torchbooks, 19xy), pp. 1–19, and also in L. Linsky (ed.), Semantics and the Philosophy of Language (Urbana, 1952), pp. 189–206. Rescher, Nicholas, Imagining Irreality (Chicago: Open Court, 2003). Rescher, Nicholas and Robert Brandom, The Logic of Inconsistency (Totowa, NJ: Rowman and Littlefield, 1979). Rosenkrantz, Gary, “Reference, Intensionality, and Nonexistent Entities,” Philosophical Studien, vol. 50 (1980). ———, “Nonexistent Possibles and their Individuation,” Grazer Philosophische Studien, vol. 22 (1984), pp. 127–147. ———, “On Objects Totally Out of this World,” Grazer Philosophische Studien, vol. 25/26 (1985–86). ———, Haecceity: An Ontological Essay (Dordrecht: Kluwer, 1993).
Skyrms, Bryan, “Tractarian Nominalism,” Philosophical Studies, vol. 40 (1981), pp. 199–206. Stalnaker, Robert, “A Theory of Conditionals,” Studies in Logical Theory (Oxford, 1968: American Philosophical Quarterly Monograph Series, No. 2), pp. 98–112; see pp. 111–12. ———, Inquiry (Cambridge, MA: Bradford Books/MIT Press, 1984). van Inwagen, Peter, “Indexicality and Actuality,” The Philosophical Review, vol. 89 (1980), pp. 403–26. Zalta, Edward N., Intensional Logic and the Metaphysics of Intentionality (Cambridge, MA: Bradford Books/MIT Press, 1988).
Chapter 11
TRIGRAPHS: A RESOURCE FOR ILLUSTRATION IN PHILOSOPHY 1. INTRODUCTION Plato’s counsel to the contrary notwithstanding, philosophers do not generally have much truck with geometry. Their expositions are almost entirely verbal rather than diagrammatic and usually proceed sequentially, one word and one sentence at a time. However, it turns out that for many philosophical purposes a simple 3 x 3 tic-tac-toe-style diagram can render good service to illustrate and establish a wide variety of philosophical contentions. The present metaphilosophical essay will offer a series of exemplary case-studies to exhibit the utility of such trigraphs in philosophical deliberation. For a considerable number and variety of interesting philosophical points can be made vivid and persuasive by means of this very simple illustrative device. The proverb has it that a picture is worth a thousand words, and similarly a diagram can sometimes help to diminish the need for extensive verbiage. 2. CREDIT FOR DISCOVERY Let us begin with an issue of distributive justice. How is credit to be divided among investigators whose several separate contributions jointly yield the solution to a research problem? To gain clearer insight into this question let us begin with a helpful oversimplification. For the sake of an instructive albeit highly schematic example, consider the idea of searching for a problem-resolution within an overall solution space that has the structure of a tic-tac-toe grid canvassing nine alternative possibilities:

1 | 2 | 3
4 | 5 | 6
7 | 8 | 9
We shall suppose that all of the possible alternatives are on a par, both as regards challenges and as regards utility. And let it be supposed further that the problem at issue is pursued by two different investigators, X and Y, separately at work on relevant researches. But now let it be that investigator X determines that the solution must lie on the main (1-5-9) diagonal, while investigator Y, in pursuing a very different line of investigation, determines that it cannot lie in a corner position. Between the two they have solved the problem and have succeeded in placing its solution in the middle as solution 5. Taken together, the two get credit for the whole. But each gets credit only for the particular piece—the particular sub-problem—he has resolved. And here the basic facts are as follows:

Possibilities eliminated by X: 2, 3, 4, 6, 7, 8
Possibilities eliminated by Y: 1, 3, 7, 9
Possibilities eliminated only by X: 2, 4, 6, 8
Possibilities eliminated only by Y: 1, 9
Possibilities eliminated by both: 3, 7

Interestingly, one can now look at the matter in two rather different perspectives reflecting two quite different yet plausible principles for credit allocation: viz. the comparative effort of the contribution, and the comparative essentiality of the contribution to the final result. And these standards are potentially discordant—as the present example actually shows. For in terms of effort of contribution, X contributes half again as much as Y by eliminating six possibilities as compared to Y’s four. But as regards essentiality, X contributes twice as much as Y, since four possibilities are eliminated by X only, as compared with the mere two eliminated only by Y. Our simple example brings to light a significant disparity of perspective for credit-allocation in matters of discovery. Let us now turn to a rather different issue, namely that of blame and culpability or credit and praise in matters of inquiry and research.
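The set-theoretic bookkeeping behind these two standards can be sketched directly, taking the grid cells to be numbered 1–9 as above and reading X's finding as the 1-5-9 diagonal (a sketch for illustration, not part of the chapter's own apparatus):

```python
# Grid cells 1-9; X and Y each eliminate a subset of candidate solutions.
CELLS = set(range(1, 10))
main_diagonal = {1, 5, 9}   # X: the solution lies on the 1-5-9 diagonal
corners = {1, 3, 7, 9}      # Y: the solution cannot lie in a corner

eliminated_by_X = CELLS - main_diagonal       # {2, 3, 4, 6, 7, 8}
eliminated_by_Y = corners                     # {1, 3, 7, 9}

solution = CELLS - eliminated_by_X - eliminated_by_Y
only_X = eliminated_by_X - eliminated_by_Y    # X's essential (unduplicated) contribution
only_Y = eliminated_by_Y - eliminated_by_X    # Y's essential (unduplicated) contribution
both = eliminated_by_X & eliminated_by_Y      # contributions made redundantly by each

print(sorted(solution))                             # [5]
print(len(eliminated_by_X), len(eliminated_by_Y))   # 6 4 -> effort: half again as much
print(len(only_X), len(only_Y))                     # 4 2 -> essentiality: twice as much
```

The two credit ratios (6:4 for effort, 4:2 for essentiality) fall straight out of the set differences, making the discordance of the two standards plain.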
Suppose once more a research situation of the same structure as that of the preceding example. And let us again assume two investigators, X and Y, working separately in a non-collaborative effort. Suppose that X shows that the solution must lie at one of the corners (1, 3, 7, 9), and that Y claims
to show the solution must be either 6 or 9. Together they have solved the problem. However, Y cheats. All that his work actually shows is that the solution must be in the lower right-hand square consisting of 5-6-8-9. Note that:
1. Between the two of them they have solved the problem: as a group they get full credit.
2. By hypothesis, each has succeeded in eliminating five possibilities in toto (and three of them uniquely). And so as far as individual epistemic credit goes, their shares are equal.
3. However, as regards ethical or moral credit, the inquiry as a whole is contaminated by Y’s cheating.
4. Nevertheless, from the ethical point of view X is altogether blameless: he is as innocent as the driven snow. And so
5. Y must bear the entire burden of ethical discredit.
6. However, Y’s moral culpability and cheating nowise unravel the problem resolution collaboratively arrived at, and so do nothing to diminish Y’s epistemic credit for his contribution.
Again, even a greatly oversimplified illustration can help to establish some significant lessons.1 3. CONJECTURAL GAPFILLING Inductive reasoning can be viewed as an exercise in conjectural gapfilling—in reasoning ampliatively across the evidential gap from lesser data to larger conclusions. What is at work here is seemingly a matter of cognitive magic—of extracting information from thin air in violation of the principle that ex nihilo nihil. But this impression is misleading. The operative principle of inductive reasoning is in fact a matter of cognitive gapfilling subject to the largest practicable extension of lawfulness. And this process is vividly illustrated and conveniently explained by means of our trigraphic microworlds. Thus consider to begin with the following situation:
X | O | X
O | ? | O
O | O | X
What are we to suppose with respect to that unknown center? Among the conceivably eligible general laws we here have:

• All rows are O-containing
• All columns are O-containing
• All diagonals are X-containing

And these will continue to hold irrespective of how we proceed. But if we set ? at O we will also add one further law:

• All diagonals are O-containing

On the other hand, if we set ? at X we will enhance the manifold of lawfulness by adding two further laws:

• All rows are X-containing
• All columns are X-containing

Since two exceeds one, the lawfulness-maximization standard of inductive reasoning calls for the second of these steps. The sort of conjectural gapfilling that is standardly at work in inductive reasoning's lawfulness maximization is vividly illustrated by this over-simple example.

4. FROM ACTUALITY TO MODALITY (ON EXTRACTING LAWS FROM PHENOMENA)

In developing empirical science from observation we manage to take two cognitive steps of virtually alchemical mysteriousness. The one is the move from discrete, limited observation to universal, open-ended generalization. The second is the move from actuality to necessity and possibility,
from fact to modality, from observations to laws. A vast amount of ink has been spilled on the first issue—in essence the aforementioned "problem of induction." But what of the second—how is this modal transit to be accomplished? What authorizes us to venture such counterfactual claims as "If this lump of sugar had been immersed in water it would have melted"? The answer is that the second is made to ride piggy-back on the first. Two procedural moves are involved.

• Inductively to take suitably configured observational uniformity to provide for lawful universality.
• Transductively to take lawful universality to provide for (merely) natural (in contrast with rigidly logical) necessity.

This second step then puts us into a position to move on to (nomic) possibility via the familiar principle that anything whose realization is consistent with the necessary is possible.2 This proceeding is conveniently illustrated by a recourse to trigraph-depicted microworlds. Thus suppose that observation puts before us a microworld answering to the following description.

X | O | O
O | X | X
X | X | O
We begin by considering natural kinds—in this case rows, columns, diagonals, and corners. We then look to those features which are universal to any such natural kind and take them to obtain by lawful necessity. On this basis the potentially available general laws will have the following format:

All {rows | columns | diagonals | corner-quartets} must be {O | X}-{filled | containing | devoid}

However, of these 4 x 2 x 3 = 24 conceivable laws, the presently contemplated microworld admits just exactly eight, specifically:
All {rows | columns | diagonals | corner-quartets} must be {O-containing | X-containing}
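Tallying laws in these trigraph settings is mechanical enough to automate. The following sketch (the function names, grid encoding, and law strings are my own illustrative choices, not the text's) enumerates the 4 x 2 x 3 = 24 conceivable laws and reports those that a given microworld admits:

```python
# Illustrative sketch: check which of the 24 conceivable trigraph laws hold.
GRID = [["X", "O", "O"],
        ["O", "X", "X"],
        ["X", "X", "O"]]

def natural_kinds(g):
    """The four natural kinds of line: rows, columns, diagonals, corner-quartet."""
    rows = [g[i] for i in range(3)]
    cols = [[g[i][j] for i in range(3)] for j in range(3)]
    diagonals = [[g[i][i] for i in range(3)], [g[i][2 - i] for i in range(3)]]
    corners = [[g[0][0], g[0][2], g[2][0], g[2][2]]]
    return {"rows": rows, "columns": cols,
            "diagonals": diagonals, "corner-quartets": corners}

def holding_laws(g):
    """Every law of the format 'All KIND must be SYMBOL-MODE' that the grid admits."""
    modes = {
        "filled": lambda line, s: all(c == s for c in line),
        "containing": lambda line, s: any(c == s for c in line),
        "devoid": lambda line, s: all(c != s for c in line),
    }
    return [f"All {kind} must be {symbol}-{mode}"
            for kind, lines in natural_kinds(g).items()
            for symbol in ("O", "X")
            for mode, test in modes.items()
            if all(test(line, symbol) for line in lines)]

print(len(holding_laws(GRID)))  # 8: exactly the eight admitted laws
```

Running the checker on the observed microworld confirms that precisely eight of the twenty-four candidate laws obtain, all of them of the "-containing" variety.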
The ideas of nomic necessity (lawfulness) and possibility (feasible alternativeness) become readily comprehensible on the basis of this simple illustration. Observe, however, that even if we accept all of those operative laws as representing the necessities of the case, nevertheless other variants to the actual situation could well obtain. In the present case this would include:

X | O | O
O | X | X
O | X | O
For, as is readily seen, this alternative would conform to all the same general laws holding for the reality with which we began. This example both suggests and illustrates the significant point that the laws of nature need not issue in a unique determination of the world's phenomena.

5. ON COGNITIVE FAILINGS

Two importantly different sorts of "worlds" figure on the agenda whenever matters of knowledge are at issue, namely (1) the world as it actually is, the real world, and (2) the world as we think it to be, the phenomenal world. And as regards the latter, three sorts of prospective deficiencies loom: (1) the error of getting the facts wrong, (2) the uncertainty of indecision about how the facts stand among identifiable alternatives, and (3) the sheer ignorance or unknowing of not having any idea as to what the possibilities (let alone the facts) actually are. This is readily illustrated by contrasting:
The Real World:

0 | 1 | 0
0 | 1 | 1
1 | 1 | 1

The Phenomenal World (our world picture):

? | 1 | 0
0 | 0 | 0
1 | 1 |
Here our world-picture involves all three cognitive failings: uncertainty (indicated by ?), ignorance (indicated by blanks), and outright error (betokened by the incorrect entries in boldface). Yet note that, all of our cognitive failings to the contrary notwithstanding, we have a correct grasp of various of the world's laws, specifically those having it that:

• Every row is 1-containing
• Every column is 1-containing
• Every diagonal is 1-containing
• Every diagonal is 0-containing
• Every corner-set is 0-containing
• Every corner-set is 1-containing

These being all the general laws that do in fact obtain, it happens in this example that our cognitive grip on the world's law structure is complete and correct, notwithstanding the imperfection of our knowledge of its phenomenal detail. Ignorance and error certainly can, but need not, stand in the way of having correct information about the world's lawful structure. One salient respect in which our 3 x 3 microworlds are decidedly oversimple is in negating the prospect of anarchy—the total absence of lawful order. For consider the two prospective laws:

• Every diagonal is O-containing
• Every diagonal is X-containing

It is clear that it is impossible for both of these to fail, given that the central position must be occupied by something (either X or O). There is no way for our trigraphic worlds to be truly anarchic: they are just too small for that.

6. APPEARANCE AND REALITY

Our trigrammatic microworlds also afford a pathway to various significant points about the dialectic of appearance and reality. Thus suppose Reality differentiates the detail within columns while Appearance fails to do so by losing sight of this particular aspect of order. We will thus contrast:

Reality          Appearance

X | O | X        X | X | X
X | X | O        X | X | X
O | X | X        O | O | O
Here appearance gets one fundamental fact right: every column contains two X's and one O. However, the actual situation affords further detail which appearance loses in the confusion of a mistaken view that the columns are just alike, each with two X's and one O. The result is a clearly mistaken view of the real. A key epistemological lesson emerges here: confusion and conflation issue in a loss of discriminative detail. They both lead to an error that diminishes detail and discrimination by treating unlikes as likes. The errors of confusion and conflation are thus going to be errors of oversimplification.

As such examples show, the element of confusion that is pretty well inevitable in our perceptual knowledge of the real can readily spill over into the range of our conceptual knowledge as well. And this situation has significant consequences, preeminently the following two:
• To an observer who is oblivious to various details of reality, things may well appear simpler and subject to a cruder lawful order than is the case. And—
• Thanks to such nomic (over)simplification certain phenomena can become inexplicable. The appearance of chance can thus be an artifact of ignorance of detail.

7. AXIOLOGY AND LAW-DETERMINATENESS

As has emerged in the preceding deliberations, the world's laws need not determine the reality of things because one selfsame law system can admit of different and distinct realizations. Thus consider the microworld:

X | O | O
O | X | X
  | X | O
Either way of filling in that blank—whether with X or with O—will conform to the overall law-situation of this world by honoring all of the relevant laws, specifically:

All {rows | columns | diagonals | corner-quartets} are {O | X}-containing

This sort of underdetermination of the world's detail by its phenomenologically descriptive laws could however be overcome with the admittedly unorthodox step of introducing evaluative and economic complexities. Thus if O were qualified as inherently more meritorious on grounds of economy, or if X-preponderance had some intrinsic merit, then such a normative principle would at once settle how the above-included blank space will be filled in. It is clear, however, that such a shift from descriptivity to axiology would bring an entirely different conceptual principle upon the scene, a step whose appropriateness might well have to be taken in stride
if—as is not implausible—determinativeness were accepted as a requisite demand for a law-based comprehension of reality.

8. COUNTERFACTUAL REASONING

Consider the situation that obtains when our beliefs regarding the position of X in a tic-tac-toe trigraph are aligned to the following schema:

  |   |
X |   |
  |   |

We accordingly hold the following beliefs:

(1) X is in the center position of the first column
(2) X is not in the first row
(3) X is not in the third row
(4) X is not in the second column
(5) X is not in the third column
(6) X is not on a diagonal

And now let us enter into the hypothesis that X is not located as is, but elsewhere. Then even after we remove (1) from the roster—as per that hypothesis—nevertheless, the rest of those beliefs suffice to yield it back. So at the very least two of those other beliefs will have to be sacrificed. But this can, of course, be done in various different ways, and here all of the available alternatives seem equally qualified. The problem is that there simply is no such thing as a uniquely acceptable minimal revision of prevailing beliefs in the face of a counterfactual hypothesis.3 For this minimal-revision project faces insuperable obstacles:
1. Minimality requires a quantitative comparison in point of size. But what is to make one revision greater or lesser than another, given the unending potential for internal complexity in what is involved?

2. Minimality becomes impracticable in slippery-slope situations where additional steps towards enhanced differentiation are always possible.

3. Can the idea of a "minimal assumption-accommodating revision" of a belief set be implemented at all? Is a well-defined concept at issue here? Is minimality something we can actually realize in situations of counterfactual reasoning?

As regards point 1, for example, let it be that there are three beagles in the yard and now stipulate: "Assume there are two beagles." How are our change-minimizing deliberations to proceed here? Are we to annihilate one beagle? (And which one?) Or should we retain "There are three dogs in the yard" and replace that missing beagle by a dog of another breed? (And if so, which?) Or should one contract the yard a bit and exclude one beagle from its boundaries? The mind boggles.

David Lewis has proposed viewing all available possibilities as "alternative possible worlds" and then looking for the closest possible hypothesis-consonant descriptive rearrangement. With the foregoing X-relocation we would be led to endorse the counterfactual:

• If that X were not in the first column, then it would be in the center position.

This is as close as we can possibly keep to the original positional situation subject to implementing that hypothesis. A different approach is, however, possible, one which seems to be on a more solid procedural footing. On this alternate approach what we seek to retain in the face of hypotheses is not descriptive similarity but rather the greatest possible nomic uniformity, the object now being to retain the fabric of natural laws insofar as possible.

Returning to this section's initial trigraph world, this means that we could in theory have the following law possibilities:
All {rows | columns | diagonals | corner-quartets} must be {X | blank}-{filled | containing | devoid}

But of these, the given situation accommodates only two:

• All diagonals must be blank-filled
• All corners must be blank-filled

With our hypothesis in place and these laws retained we would get no further than:

• If that X were not located where it is, then it would be in one of the shaded positions (here marked *):

  | * |
  |   | *
  | * |
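The divergence between the two approaches can be made concrete in a short sketch (the 0-indexed coordinates and names here are mine, not the author's; cell (1, 0) stands for the center position of the first column):

```python
# Hypothesis: the X is not in the first column. Where might it be instead?
CELLS = [(r, c) for r in range(3) for c in range(3)]
ORIGINAL = (1, 0)                                          # center of first column
DIAGONAL_CELLS = {(0, 0), (1, 1), (2, 2), (0, 2), (2, 0)}  # corners lie on these

def distance(a, b):
    """City-block distance between two cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

candidates = [cell for cell in CELLS if cell[1] != 0]  # hypothesis-consonant cells

# Lewis-style proximity: pick the candidate closest to the original position.
lewis_choice = min(candidates, key=lambda cell: distance(cell, ORIGINAL))

# Nomic retention: keep the two surviving laws ("all diagonals blank-filled",
# "all corners blank-filled"), so the X may occupy only off-diagonal cells.
nomic_choices = [cell for cell in candidates if cell not in DIAGONAL_CELLS]

print(lewis_choice)   # (1, 1) -- the center position
print(nomic_choices)  # [(0, 1), (1, 2), (2, 1)] -- the three shaded positions
```

The proximity-based answer (the center cell) lies on a diagonal and so is excluded by the law-retaining answer, which is just the incompatibility the text goes on to remark.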
Observe that, indefinite though it is, this result is flatly incompatible with that reached via Lewis's proximity approach. (None of the admissible possibilities is adjacent to the original location.) All the same, it is decidedly more compatible with standard principles of nomic prioritization.4

9. DESIGN AESTHETICS

Philosophers of science often equate, or at least coordinate, simplicity with beauty (in the form of aesthetic elegance, symmetry, or the like). But this is rather problematic. Consider simplicity. In the context of our tic-tac-toe trigraphs, simplicity becomes a matter of descriptive economy. Here clearly the simplest case is that of the single rule:

• Put Os (or Xs) everywhere.

The next simplest case is that of rules taking the format:
• Put Os (or Xs) at, but only at, the {rows | columns | diagonals | corners}.

Least simple would be those configurations which are effectively random, without rationale, rhyme, or reason. Granted, no case is simpler (and more symmetric) than that of that initial rule, yielding

X | X | X        O | O | O
X | X | X        O | O | O
X | X | X        O | O | O
However, from an aesthetic point of view, there seem good reasons to prefer such more complex alternatives as

O | X | O        X | O | X
X | O | X        O | X | O
O | X | O        X | O | X
which clearly add interest through variation. As G. W. Leibniz maintained long ago, lawfulness (symmetry, economy, uniformity, etc.) does not of itself afford a sufficient criterion of ontological merit. A further aesthetic factor must also be brought into it—one that he called variety but which would also encompass novelty, complexity, and internal differentiation. And even our simple tic-tac-toe trigraphs suffice to convey this point in a graphic and telling way. Simplicity and aesthetic merit can certainly not be coordinated. Variation and complexity must certainly be brought into it. Let us explore this Leibnizian theme a bit further.

10. ISSUES OF OPTIMALISM
Without a lawful order of a decidedly complex and sophisticated sort, the processes of cosmic and biological evolution could not bring intelligent beings onto the scene. But when a lawful order also has to accommodate the
vagaries of chance and choice required for the developmental emergence of intelligence, anomalies are going to be unavoidable. For situations are now bound to arise where the well-being of intelligence-endowed organisms and the axiological demands of overall systemic advantage come into conflict. Rational optimalism now becomes complicated. A much oversimplified analogy to the sort of deliberations at issue may nevertheless help to render the idea of world optimization more graphic and show how this sort of issue can be addressed. Let us adopt the general line of approach suggested by Leibniz and assess the merit—the comparative optimality—of possible worlds in terms of two factors: the orderliness of their law structure (which is clearly needed to possibilize the developmental emergence of intelligent beings) and the variety of discernible phenomena (which is clearly needed to afford such beings the material of stimulus and interest requisite for cognitive developments).5 So let it now be that world-possibilities are based on a parametric "space" of descriptive possibilities that has the configuration of our trigraphic 3 x 3 tic-tac-toe grid-work that is to be filled in with Os or Xs, with the former indicating that the descriptive possibility at issue is realized in the world, and the latter that it is not, so that O indicates presence and X absence. Moreover, let it also be supposed that

• Lawfulness is determined by the position of Xs in the space of possibilities. For instance, a law of the format "All As are Bs," which has it that no As are non-Bs, is betokened by an X in every descriptive compartment that combines As with non-Bs.

• Variety is determined by the extent of Os in the possibility spectrum. The more Os there are—the larger #O—the more feature-combinations and thus the more variety the world exhibits. (And correspondingly, the more Xs there are—the larger #X—the more lawful order.)
These two parameters, #O and #X, accordingly set the stage for assessing the merit of worlds in the context of optimization. For world-optimization is here to be achieved—as per Leibniz—through arriving at the most favorable combination of lawfulness and variety as assessed via the product Lawfulness x Variety, so that our measure of merit is the Leibnizian measure M = L x V. (After all, an unlawful world cannot give rise to intelligent beings and an unvaried one cannot provide them with grist for their
cognitive mill.) So what we have here is the complex negotiation of competing desiderata that is characteristic of optimization subject to constraint—the constraint in the present case being the realization of those arrangements which best conduce to the well-being of intelligent beings. To concretize this line of thought, suppose a world with two predicates F and G, each of which can vary in intensity over a spectrum of Small, Medium, and Large. (For example, F might be the footprint area of a building and G its height—each of which can be Small, Medium, or Large.) The result is a spectrum of possibilities of the following 3 x 3 format:

             G
          S    M    L
     S
F    M
     L
Each of the descriptive compartments of this manifold can either be instantiated or uninstantiated amongst the membership of a particular architectural "world." And this can be indicated by filling in that compartment with O or X, yielding the result that there are 2^9 = 512 possibilities overall.6 For every X within the diagram, there is a corresponding exclusion law of the generic format:

An item that is ___ in point of F must not be ___ in point of G.

Lawfulness will now be a matter of how many such laws there are (with a maximum of 9 in the present sort of case). And the extent of variety is measured by the number of Os within that spectrum of possibilities (again with a maximum of 9). For clearly #X + #O = 9. Returning now to the Leibnizian formula:

M = lawfulness x variety = #X x #O
It will be noted that since #X + #O = 9, this quantity is going to be #X x (9 - #X). And in the circumstances, with #X ranging from 0 to 9, this will be greatest when #X is 4 or 5,7 so that one or the other of these is requisite for M-maximization. This, of course, still leaves open various different possibilities. Accordingly, ontological merit—as we have construed it to this point in terms of order and variety—proves to be underdeterminative, with some further, yet unacknowledged factor required to establish uniqueness. To address this problem, let it be that one takes the standpoint that there is yet another variety-related factor that turns on minimizing imbalance and thus avoiding the corner positions. Then this would narrow matters down to a single result, namely

O | X | O
X | X | X
O | X | O
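The arithmetic of the Leibnizian measure is easily tabulated, and the features of the selected grid can be checked as well; the following sketch uses my own notation, not the author's:

```python
# Merit M = #X * #O with #X + #O = 9, so M = #X * (9 - #X).
merit = {x: x * (9 - x) for x in range(10)}
best = max(merit.values())
maximizers = [x for x, m in merit.items() if m == best]
print(best, maximizers)   # 20 [4, 5]

# The uniquely optimal grid singled out above has five X's, none at a corner.
OPTIMAL = [["O", "X", "O"],
           ["X", "X", "X"],
           ["O", "X", "O"]]
x_cells = [(r, c) for r in range(3) for c in range(3) if OPTIMAL[r][c] == "X"]
CORNERS = {(0, 0), (0, 2), (2, 0), (2, 2)}
print(len(x_cells), CORNERS.isdisjoint(x_cells))   # 5 True
```

So the measure peaks at M = 20 when #X is 4 or 5, and the corner-avoiding refinement then picks out the displayed arrangement with #X = 5.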
On this basis, then, there will be but one single, uniquely optimal outcome for the descriptive constituting of a world within the property-spectrum at our disposal under the indicated conditions. Such an oversimple analogy illustrates how, at least in principle, a suitable survey across the spectrum of available alternatives can provide a basis for assessing the ontological aesthetics of world order. And so, while the analogy is imperfect—as analogies are bound to be8—it does go some way toward illustrating the basic idea at issue with the sort of world optimization under constraint that is at issue in these deliberations. Oversimple though they are, the considerations at issue do at least effectively illustrate how a Leibnizian approach to assessing the merit of alternative possibilities renders thought experimentation practicable in this domain.

11. ON OPTIMALITY VS. MAXIMALITY
It is also instructive to deploy those simplified world models in the context of the problem of evil in its metaphysical rather than moral construction. To see this, let us carry our previous analogy one step further by stipulating that a negativity or misfortune arises whenever both of those salient properties F and G are present only in a low degree. It then transpires that
even the best of possible arrangements cannot avert a misfortune, since even the optimum realizable result unavoidably carries a misfortune in its wake. There is now nothing for it but to view this as the price one has to pay in achieving metaphysical optimality—an upshot which suggests that imperfection will be an unavoidable feature of even the best of possible worlds. In this respect too the present approach to ontological evaluation effectively revisits the Leibnizian theodicy. For the sake of a straightforward illustration consider the possibilities for houses and their furnishings. Here we have it that either of these can instantiate one of three generic types: Traditional, or Modern ("contemporary"), or Futuristic. Clearly a "house-world" can in theory encompass houses that realize any feasible combination of these, leading to a possibility space of the format:

             Furniture
          T    M    F
     T
House     M
     F
We shall use an entry of O in our "house-world" possibility space to indicate that this particular combination is comparatively rare in the realm under consideration, and an entry of X to indicate its being comparatively common. As regards lawfulness we may suppose the following situation to obtain:

Lawfulness: There is to be only one basic law, namely: Extremes do not mix well: avoid combining extremes.

This of itself serves to fix four O-positions.
O |   | O
  |   |
O |   | O
As regards variety we will have the following principle:

Variety: Minimize bland uniformity.

This clearly means that we will need O's along the main diagonal. Elsewhere, however, X's are in order. So our resultant housing microworld would then be

O | X | O
X | O | X
O | X | O
This microworld answers to the law system:

Every {row | column} is {O | X}-containing
Every {diagonal | corner-quartet} is O-filled
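That this law system pins the microworld down can be verified by brute force over all 2^9 = 512 ways of filling the grid; the sketch below (my own encoding, not the author's apparatus) finds exactly one admissible world:

```python
from itertools import product

def satisfies(grid):
    """Rows and columns must contain both O and X; diagonals and corners all O."""
    rows = [grid[i] for i in range(3)]
    cols = [[grid[i][j] for i in range(3)] for j in range(3)]
    diag_cells = [grid[0][0], grid[1][1], grid[2][2], grid[0][2], grid[2][0]]
    corner_cells = [grid[0][0], grid[0][2], grid[2][0], grid[2][2]]
    return (all("O" in line and "X" in line for line in rows + cols)
            and all(c == "O" for c in diag_cells + corner_cells))

# Enumerate all 512 candidate microworlds as 3 x 3 grids of "O"/"X".
worlds = [[list(cells[0:3]), list(cells[3:6]), list(cells[6:9])]
          for cells in product("OX", repeat=9)]
admissible = [w for w in worlds if satisfies(w)]
print(len(admissible))   # 1 -- only the arrangement displayed above survives
```

Exhaustive search thus confirms that the law system is fully determinative: only one of the 512 candidate worlds conforms to it.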
This particular law system is fully determinative of the microworld at issue.

12. INTELLIGENT DESIGN: PROBLEMS AND PROSPECTS
The idea of intelligent design can be understood in three very different ways, the descriptive, the productive, and the purposive. These three versions of intelligent design are very far from amounting to the same thing. They relate, respectively, to what has come to be

• in the manner of intelligent agency,
• by means of intelligent agency, or
• for the sake of intelligent agency.

The descriptive approach sees intelligent design as a matter of being arranged intelligently, that is, of being arranged in the way in which an intelligent being would do the job were an intelligent being indeed to do it. As the name suggests, what is thus at issue is a purely descriptive rather than originative matter. After all, to say that some arrangement is structured intelligently is no more to say that it was produced by an intelligent agent than saying that some arrangement is constructed symmetrically is to say that it is the product of a symmetric producer.

The productive approach to intelligent design is thereby something quite different. Here intelligence is attributed to the agent or agency that brings about the result at issue. It is a matter not of the descriptive nature of the end product but of the means of its realization, the idea being that the product was created by an intelligent being. Perhaps only God can make a tree, but for sure only an intelligent being can make an automobile.

The purposive aspect of intelligent design has to do with the matter of being designed for intelligence—that is, of being so arranged as to possibilize or even probabilize the development of intelligent beings. What is at issue here is a matter of being so arranged as to conduce to the development and thriving of intelligent beings in the universe. It is clear that such arrangements can in principle come about in ways that are neither intelligently designed (i.e., are not particularly efficient and effective) nor yet designed by intelligence, but rather developed by accident or choice. And so it is—or should be—clear that descriptivity, productivity, and purposiveness are substantially independent issues, each of which can in principle obtain in the absence of the others.
On this basis, it is important to acknowledge two significant points: (1) that there is a decided difference between being designed intelligently and being designed by intelligence, and (2) that evolution, broadly understood, is in principle a developmental process through which the former feature— being designed intelligently—can be realized independently of the issue of whether or not an intelligent being is productively or purposively involved. Specifically with regard to being designed intelligently there are three fundamental factors at issue here, the systemic, economic, and aesthetic:
Systemic aspects: LAWFULNESS—law and order, coherence amidst complexity, mutual support among complicated elements.

Economic aspects: ECONOMY—efficiency and simplicity, uniformity avoiding needless complexity.

Aesthetic aspects: HARMONY AMIDST VARIETY—"interesting variation"—symmetry amidst diversity.

Moreover, all three of these can be present in the absence of yet a fourth factor:

Telic aspects: TELEOLOGY—effectiveness, successfully getting done the things that must be done if something is to function as the kind of thing at issue, purposive efficacy.

Being designed intelligently is a fundamentally descriptive factor relating to how well things function as the sorts of things they are. And the modus operandi of an item need not be related to anybody's aims and purposes, even as waves seem to form and propagate at sea without answering to anybody's designs. And so, once intelligent design is understood as a matter of being designed intelligently, there is no reason why this cannot be understood in a perfectly naturalistic way. For it is now readily conceivable that nature's laws should interrelate and coordinate in their operations in such a way that intelligent design is a straightforward product of their (altogether natural) modus operandi.

With these ideas in place, let us now consider an illustration of how the structure of a world can reflect intelligent design. Here our simplified, miniaturized "worlds" are once more going to be trigraphic configurations filled with O's and X's. As with intelligent design in general, our concern will focus here on three aspects of regularity and order, to wit: lawfulness, aesthetics (e.g., symmetry), and economy. Let us consider them in turn.

Lawfulness: For the sake of having an example with a high degree of lawfulness, consider the following group of laws:
Every {row | column | diagonal | corner-quartet} must be {O | X}-{filled | containing | devoid}

and in particular:

Every {row | column | diagonal | corner-quartet} must be X-containing

Economy: Here we shall suppose that X's are a good deal harder (i.e., less economical) to achieve than O's.

Aesthetics: Here we shall suppose that the configuration is to exhibit several symmetries along axes defined by either a row, column, or diagonal.

On this basis, it emerges that there are only two possibilities for meeting the specified conditions, namely

O | X | X
X | O | X
X | X | O
and
X | X | O
X | O | X
O | X | X
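A quick check (a sketch of my own) shows that the second arrangement is simply the first rotated a quarter-turn:

```python
FIRST = [["O", "X", "X"],
         ["X", "O", "X"],
         ["X", "X", "O"]]
SECOND = [["X", "X", "O"],
          ["X", "O", "X"],
          ["O", "X", "X"]]

def rotate90(grid):
    """Rotate a square grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

print(rotate90(FIRST) == SECOND)   # True
```

Since a quarter-turn preserves all internal relationships among the cells, this is what lies behind treating the two arrangements as one and the same world from a relational standpoint.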
Note, however, that if we take a "relativistically internalized" view of the "space" at issue in our worlds and have no "absolute" way of determining top vs. bottom (or left vs. right), then these two arrangements will actually come to be equivalent. For they are in fact identical in point of their internal relationships and thus collapse into one single prospect: if that minispace is viewed relationistically, our miniworld's arrangements are invariant under 90 degree rotations. And so if in the specified circumstances we were to find the actual world to have just this structure we would have little alternative but to see this world as being intelligently designed. For it—and it alone—satisfies the various conditions and requirements which are requisite for realizing an obviously intelligent design. (Of course how and why things got to be that way is something else again.) A common objection to intelligent design arises along the following lines: "Does not reality's all too evident imperfection constitute a decisive
roadblock to intelligent design? For if optimal alternatives were always realized, would not the world be altogether perfect in every regard?" Not at all! After all, the best achievable result for a whole will, in various realistic conditions, require a less-than-perfect outcome for the parts. In distributing its burdens, a society of many members cannot put each of them at the top of the heap. In an engaging and suspenseful plot things cannot go with unalloyed smoothness for every character. It is a pivotal consideration in this context that there will in general be multiple parameters of positivity that function competitively, so that some can only be enhanced at the cost of others—even as to make a car speedier we must sacrifice its economy of operation. With an automobile, the parameters of merit clearly include such factors as speed, reliability, repair infrequency, safety, operating economy, aesthetic appearance, and road-handling ability. But in actual practice such features are interrelated. It is unavoidable that they trade off against one another: more of A means less of B. It would be ridiculous to have a supersafe car with a maximum speed of two miles per hour. It would be ridiculous to have a car that is inexpensive to operate but spends three-fourths of the time in a repair shop. Invariably, perfection—an all-at-once maximization of every value dimension—is inherently unrealizable because of the inherent interaction of evaluative parameters: this sort of absolute perfection is in principle impossible of realization. In the context of multiple and potentially competing parameters of merit, the idea of an all-at-once maximization which envisions perfection in point of every mode of merit is unworkable and will have to give way to an on-balance optimization. In designing a car you cannot maximize both safety and economy of operation, and analogously, a world will not, and cannot possibly, be absolutely perfect.
The illustration of the preceding section helps to drive this point home vividly. As already noted, lawfulness, uniformity, and simplicity would all be provided for in spades by the bland uniformity of:
O | O | O
O | O | O
O | O | O

or

X | X | X
X | X | X
X | X | X
But any element of interesting variation and instructive diversity is now decidedly lacking. The interactive complexity of value is crucial here. For it is a fundamental fact of axiology that every object has a plurality of evaluative features, some of which will in some respects stand in conflict. Absolute perfection becomes in-principle infeasible, and creative tension amidst conflicting positivities is the crux. For what we have here is a relation of competition and trade-off among modes of merit akin to the complementarity relation of quantum physics. The holistic and systemic optimality of a complex whole will require some of its constituent components to fall short of what would be best for them if considered in detached isolation. This suffices to sideline any objection along the lines of: "If intelligent design obtains, why isn't the world absolutely perfect?" To be sure, that reality maximizes L x V would not be something inherent in the abstract nature of things, mandated by general principles of inevitable necessity. Its truth, if truth there be, would be a matter of the world's contingent arrangements. The issue would be one not of abstract necessity but rather of the nature of nature's lawfulness—a second-order superlaw, if you will, that serves to determine and delineate the laws of nature themselves.

13. CONCLUSION
Tic-tac-toe is an extremely simple game; it is literally child's play. And yet it teaches a vital lesson about human conflict. For it affords a vivid illustration that two sufficiently intelligent and determined opponents can always frustrate one another's efforts and create a mutually defeating deadlock—a lesson better learned in the nursery than in the marital household! Moreover, it conveys the further useful lesson that carelessness does not pay—that in this life one does well to put one's God-given brains to work. Indeed, as the present discussion has tried to show, it even has something to contribute to our ventures in philosophical inquiry. Christian Wolff maintained that while science is preoccupied with what actually does happen in the world, philosophy has among its prime tasks that of deliberating about what might happen. A recourse to illustrative microworlds can certainly be useful in this regard. The present deliberations are thus of a somewhat nonstandard sort, constituting an exercise less in philosophical doctrine than in philosophical methodology. Its aim is to exhibit the utility of a certain conceptual
155
device in philosophical argumentation. Of course it is only insofar as the specific points made by its use are cogent that the utility of such a method can be maintained. But here, fortunately, we have one of those comparatively rare situations where the strength of the chain accords with its strongest rather than with its weakest link.9 NOTES 1
On the ideas of this section see also the author’s “Credit for Making a Discovery,” Episteme, vol. 1 (2005), pp. 189–200.
2
Here the operative idea is that we have ◊p = ~□~p, where □p = (L ⊃ p). In consequence ◊p = ~(L ⊃ ~p), which means that possibility is tantamount to consistency with—i.e., conformability to—the laws in question.
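The duality can be spelled out step by step (taking □ for necessity relative to the laws L, and ◊ for the correlative possibility):

```latex
\Diamond p \;=\; \lnot\,\Box\,\lnot p \;=\; \lnot\,(L \supset \lnot p)
```

So p is possible just in case ~p does not follow from L—which is exactly what it means for p to be consistent with the laws.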
3
F. P. Ramsey, the father of this change-minimalization approach to counterfactuals, did not himself offer much guidance as to how that minimally revised belief-set is to be formed.
4
For further detail regarding the deliberations of this section see the author’s Imagining Irreality (Chicago: Open Court, 2003), and Conditionals (Cambridge, Mass.: MIT Press, 2007).
5
Any comprehensive exposition of the philosophy of Leibniz can be consulted on these matters, including the present author’s The Philosophy of Leibniz (Englewood Cliffs, N.J.: Prentice-Hall, 1967).
6
The Leibnizian spirit of the sort of deliberations at issue here is manifest in his comparison of God’s creative choice with certain games “in which all the spaces on a board are to be filled according to definite rules, but unless we do careful planning, we find ourselves at the end blocked from the difficult spaces and compelled to leave more spaces vacant than we needed or wished to. Yet there is a definite rule by which a maximum number of spaces can be filled in the surest way.” (C. I. Gerhardt, Die Philosophischen Schriften von G. W. Leibniz, vol. VII (Berlin: Weidmann, 1890), p. 303; L. E. Loemker, G. W. Leibniz: Philosophical Papers and Letters (Dordrecht: D. Reidel, 1969), p. 487.)
7
If (contrary to fact) the quantity at issue were continuous with a range from 0 to 1, we would have it that merit (M) is given by

M = variety × lawfulness = z × (1 − z) = z − z²

By elementary calculus this quantity is maximized when z = ½. In our example, having #(O) be 4 or 5 is thus as close as we can get to the maximum.
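The “elementary calculus” step is just the first-derivative test (on the natural reading that z is the fraction of O-cells, so that in the nine-cell grid z = #(O)/9):

```latex
M(z) = z(1 - z) = z - z^{2},
\qquad
\frac{dM}{dz} = 1 - 2z = 0
\quad\Longrightarrow\quad
z = \tfrac{1}{2},
\qquad
M_{\max} = \tfrac{1}{4}.
```

Since #(O) must be a whole number, the admissible values of z nearest to ½ are 4/9 and 5/9—whence #(O) = 4 or 5 comes as close as possible to the maximum.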
8
For one thing, our optimal resolution is not unique, since its variant with X and O interchanged will yield the same result as would a systematic row/column interchange that would result from looking at the box sideways. (Of course uniqueness could be assured by additional stipulations—e.g., that there must be a minimum of Xs.)
9
This essay was originally published in the American Philosophical Quarterly, vol. 45 (2008), pp. 165–178.
Chapter 12
FRAGMENTATION AND DISINTEGRATION IN PHILOSOPHY (Its Grounds and Its Implications)

1. STAGE-SETTING THE TASK OF PHILOSOPHY

Philosophy and science were once of a piece: originally science was simply natural philosophy, with the scientist and the philosopher as one selfsame person. But in the wake of the scientific revolution of the 17th century—itself the work of philosopher-scientists—increasing specialization divided the natural sciences into distinct compartments, and contradistinguished the lot from theoretical philosophy, itself seen as being just another specialty. Theoretical philosophers were now eager to establish the autonomy of the discipline as a legitimate venture in its own right. But how was the work of this specialty to be distinguished from the others? The history of modern philosophy has seen many different approaches to the question of the distinctive mission of philosophy in contradistinction to the sciences. In each case a fundamental thesis regarding delimiting contrasts underlay the position at issue. (See Display 1.) The guiding idea has been that the natural sciences do not concern themselves with the items listed in the column dedicated to philosophy’s objects (matters of necessity, human nature, and the rest). A common theme throughout, however, has been the contrast between the facts themselves on the one hand, and our normatively laden cognitive and evaluative dealings with them on the other. Nevertheless, recent times have seen the gradual emergence of a school of thought holding that one cannot separate these two dimensions. It is contended that while they can indeed be distinguished, nevertheless they cannot be kept apart. For one should—and in propriety must—see the facts as not only produced but actually constituted by our ways of dealing with them. Process and product are inseparable components of one unified whole.
Hegel long ago anticipated this view of the matter, sociologists of science have more recently enlarged upon it, and recent neo-idealism has taken it as the centerpiece of its metaphilosophy. The crux of such a view is that science without philosophical metascience is blind and philosophical metascience without science empty (to echo a Kantian theme). The guiding idea behind such a holistic perspective is that of the systemic unity of knowledge as an integral and comprehensively interconnected whole. What is at work here is in effect a return to Leibniz’s vision of the realm of knowledge as one vast and variegated but nevertheless unified whole. (The garden planned for Versailles by the great landscape architect André le Nôtre affords a vivid analogy here.) The irony regarding this unfolding tendency towards integration is its occurrence at a time when the realm of knowledge itself is in process of becoming unmanageable as a whole. The aim of the present discussion is to illustrate this phenomenon and to consider some of its implications.

Display 1
APPROACHES TO THE SCIENCE/PHILOSOPHY DISTINCTION

   The sciences study            Philosophy studies                                 Theorists
1. Matters of contingent fact    Matters of necessity                               Wolff
2. Impersonal nature             Human nature                                       Locke, Hume
3. The facts themselves          How we humans come to know necessary facts         Kant
4. Nature at large               How the sciences themselves work                   Comte, Positivists
5. Substantive facts             Methodological processes for fact determination    J. S. Mill
6. Matters of fact               Matters of value                                   N. Hartmann, Axiologists
7. The observable facts          How we humans now think and talk about the facts   Analytic philosophers
8. Facts                         Norms                                              Idealists
9. The world                     The history of thought about the world’s facts     Heidegger
2. THE PRINT EXPLOSION AND AGENDA ENLARGEMENT

North American philosophers are an avidly productive lot. They publish some 400 books per annum nowadays. And issue by successive issue they fill up the pages of over 400 professional journals.1 To be sure, the aggregate published output of philosophers—some 300,000 pages per annum—does not match that of other, larger branches of the academic profession. (In recent years, American scholars in English literature published over 500 articles on William Shakespeare annually, over 200 on John Milton, and well over 100 on Henry James.2) But even without such scholarly overkill, the productivity of American philosophy is an impressive phenomenon. In the wake of this proliferation of print, agenda-enlargement has become a striking feature of contemporary American philosophy. The pages of its journals and the programs of its meetings bristle with discussions of issues that would seem bizarre to their predecessors of earlier days and to present-day philosophers of other places. It is a daunting—and perhaps even dispiriting—experience to read through the overall program of the annual meeting of the Eastern Division of the American Philosophical Association. Most of the topics one encounters were absent from the agenda of the field a generation ago, and even those currently upon the scene cannot manage to keep up. Their own education in the past has generally not equipped the philosophical professionals for the changes of the present, and we are all too frequently led to ask ourselves “But what, in heaven’s name, has this got to do with philosophy?” Entire societies are dedicated to the pursuit of issues nowadays deemed philosophical that no one would have dreamt of considering so two generations ago.
(Some examples are the societies for Machines and Mentality, for Informal Logic and Critical Thinking, for the Study of Ethics and Animals, for Philosophy and Literature, for Philosophy of Spirit, for Analytical Feminism, for Philosophy of Sex and Love—and the list goes on and on.) The fact that some twelve thousand North American professional philosophers are looking for something to do that is not simply a matter of re-exploring familiar ground has created a substantial population pressure for enlarged philosophical Lebensraum.
161
3. TAXONOMIC PROLIFERATION

The result of this agenda enlargement has been a revolutionizing of the structure of philosophy itself by way of taxonomic complexification beyond anything applicable to earlier times. The taxonomy of the subject has burst for good and all the bounds of the ancient tripartite scheme of logic, metaphysics and ethics. Specialization and division of labor runs rampant, and cottage industries are the order of the day. The situation has grown so complex and diversified that the most comprehensive recent English-language encyclopedia of philosophy3 cautiously abstains from providing any taxonomy of philosophy whatsoever. (This phenomenon also goes a long way towards explaining why no one has written a comprehensive history of philosophy that carries through to the present-day scene.4) Philosophy—which ought by mission and tradition to be an integration of knowledge—has itself become increasingly disintegrated. The growth of the discipline has forced it beyond the limits of feasible surveillance by a single mind. After World War II it became literally impossible for American philosophers to keep up with what their colleagues were writing and thinking. The rapid growth of “applied philosophy”—that is, philosophical reflection about detailed issues in science, law, business, information management, social affairs, computer use, and the like—is a striking structural feature of contemporary North American philosophy. In particular, the past three decades have seen a great proliferation of narrowly focused philosophical investigations of particular issues in areas such as economic justice, social welfare, ecology, abortion, population policy, military defense, and so on. In any area of science or scholarship there is a hierarchy of taxonomic categories:

—field
—subfield
—specialty
—problem area
—individual problem
—subproblem
With the expansion of information there is growth at every taxonomic level. And as we descend the taxonomic hierarchy this growth assumes exponential proportions. Subfields expand twice as rapidly as fields, specialties twice as rapidly as subfields, and so on. The exponential explosion of base-level items at the bottom of the ladder is tracked by a slower but nevertheless still exponential rate of growth as one moves up the taxonomic ladder. Given the unfolding of this phenomenon, no single thinker commands the whole range of knowledge and interests that characterizes present-day American philosophy, and indeed no single university department is so large as to have on its faculty specialists in every branch of the subject. The field has outgrown the capacity not only of its practitioners but even of its institutions.

4. COGNITIVE LIMITS AND LIMITATIONS

As Immanuel Kant emphatically and cogently insisted, ideas can only be connected and interrelated in a single mind. It may take two to tango, but only one can think. Information may be scattered but thought requires the “unity of apperception” in a single mind. The content of thought can be shared in common by many different individuals, but the actual thinking has to be done by one or another of them. There are, however, only so many waking and thinking hours available to each one of us. The overall extent of our thought is thus limited. To be sure, our thought life has two dimensions: breadth and depth. In Sir Isaiah Berlin’s splendid analogy one can be a fox who knows a large terrain superficially or a hedgehog who knows a smaller one profoundly; but life being what it is, one cannot have it both ways. Given the limitations of time, one must make choices. One can devote attention superficially to many topics or more deeply to fewer. But the maximum for breadth and depth of thinking is limited, and the structure of the resultant teeter-totter situation is that of an equilateral hyperbola:
[Figure: an equilateral hyperbola with Breadth (B) on the horizontal axis and Depth (D) on the vertical axis, representing the trade-off D × B = const.]

5. SPECIALIZATION AND DIVISION OF LABOR

This state of affairs illustrates the most characteristic feature of contemporary English-language philosophizing: the emphasis on detailed investigation of particular concrete issues and themes. For better or for worse, Anglophone philosophers have in recent years tended to stay away from large-scale abstract matters of wide and comprehensive scope, characteristic of the earlier era of Whitehead or Dewey, and nowadays incline to focus their investigations on issues of specialized small-scale detail that relate to and grow out of those larger issues of traditional concern. Big-picture thinking is out of fashion. Philosophers have become cognitive micro-managers. The turning of philosophy from globally general, large-scale issues to more narrowly focused investigations of matters of microscopically fine-grained detail is a characteristic feature of American philosophy after World War II. Its flourishing use of the case-study method in philosophy is a striking phenomenon for which no one philosopher can claim credit—to a contemporary observer it seems like the pervasively spontaneous expression of “the spirit of the times.” In line with the increasing specialization and division of labor, American philosophy has become increasingly technical in character, preoccupied with matters of small-scale philosophical and conceptual miniature. And philosophical investigations make increasingly extensive use of the formal machinery of semantics, modal logic, computation theory, learning theory, etc. Ever heavier theoretical armaments are brought to bear on ever smaller problem-targets, in ways that lead journal readers occasionally to wonder whether the important principle that technicalities should never be multiplied beyond necessity has been lost sight of. There is certainly no doubt that the increasing technicalization of philosophy has been achieved at the expense of its wider accessibility—and
indeed even of its accessibility to members of the profession who work in other specialties.

6. EVOLVING COMPLEXIFICATION

In philosophy as elsewhere, ongoing refinement in the division of cognitive labor in the wake of the information explosion has issued in a literal disintegration of knowledge. The “progress of knowledge” has been marked by an ever-continuing proliferation of ever more narrowly defined specialties, attended by the unavoidable circumstance that those in any given specialty cell cannot know exactly what is going on even next door—let alone at any significant remove. One’s understanding of matters outside one’s immediate bailiwick is bound to become superficial. Few scholars nowadays have a detailed understanding of their own field beyond the boundaries of a narrow subspecialty. At the home base of one’s specialty or subspecialty one knows the details, nearby one has an understanding of generalities, but at a greater remove one can be no more than an informed amateur. The increasing complexity of our scientific world-picture is a striking phenomenon throughout the development of modern learning. Whatever information one achieves is bought dearly through the proliferation of complexity. It is, of course, possible that the development of physics may eventually carry us to theoretical unification where everything that we class among the “laws of nature” belongs to one grand unified theory—one all-encompassing deductive systematization integrated even more tightly than that of Newton’s Principia Mathematica.5 But, on all discernible indications, the covers of this elegantly contrived “book of nature” will have to encompass a mass of ever more elaborate diversity and variety. And the integration at issue at the pinnacle of the pyramid will, further down, cover an endlessly expansive range encompassing the most variegated components. The lesson of such considerations is clear.
In the course of cognitive progress our knowledge grows not just in extent but also in complexity, so that science presents us with a scene of ever-increasing sophistication in point of complex detail. The history of science tells an ongoing story of taxonomic complexification. And it is thus fair to say that modern science confronts us with a cognitive manifold that involves an ever more extensive specialization and division of labor. The harsh reality of it is that no physicist today understands the whole of physics. And the same holds for every
branch of learning, scholarship, and science—philosophy included. In consequence, the timespan of apprenticeship that separates master from novice grows ever longer. A science that moves continually from an over-simple picture of the world to one that is more complex calls for ever more elaborate processes for its effective cultivation. And as the enterprise of science and scholarship grows more extensive, the greater elaborateness of its productions requires an ever more intricate intellectual structure for its accommodation. The regulative ideal of a scientifically informed philosophy is to integrate our knowledge of the world’s modus operandi into a coherent and cohesive unifying system. But nevertheless, the world’s complexity means that this will be an aspiration rather than an accomplished fact: it represents a goal towards which our philosophical forays may be able to make progress but which we will never be able to attain.6

7. UNMANAGEABILITY OF THE LITERATURE

The classical problem of freedom of the will affords a graphic illustration of the unmanageability—the unsurveyability, or what the Germans would call Unübersichtlichkeit—of philosophical issues in the current condition of affairs. Not only are the issues complex and difficult in themselves, but they ramify by way of substantive interconnection and interlinkage into many other issues and topics—philosophy of mind, the metaphysics of causality, ethics, criminology, and others. My recent endeavor to create a bibliography of the free will problem, one that is comprehensive albeit by no means complete, yields some 2,500 entries. Few writers on the topic have read as many as fifty of these—and to judge by their own bibliographies most of them have read a good deal less. To be sure, it is not the case here that when you have seen one essay or one chapter you have seen them all.
All the same, here as elsewhere the range of understanding increases not with the literature of the topic but merely with its logarithm.7 To gain the benefit of significant enhancement one has to grapple with a great mass of material. And here electronic aids are not really of much help. Search engines, automated indexes, computerized concordances and the rest of it do indeed bring useful information to view—but do so in such a way as to complicate rather than simplify the processes of analysis and interpretation.
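The logarithmic point can be made concrete with a toy calculation (the model and the numbers plugged into it are merely illustrative of the claim in the text):

```python
import math

def understanding(items_read: int) -> float:
    """Toy model: grasp of a topic grows with the logarithm of the
    literature consulted, not with its sheer bulk."""
    return math.log(items_read)

# Reading all ~2,500 entries of the free-will bibliography, rather than
# the fifty that few writers exceed, multiplies the effort fifty-fold...
effort_ratio = 2500 / 50
# ...but on the logarithmic model it merely doubles the understanding
# (2500 = 50**2, so log(2500) = 2 * log(50) exactly).
gain_ratio = understanding(2500) / understanding(50)
print(effort_ratio, round(gain_ratio, 2))  # prints: 50.0 2.0
```

On this model, each successive doubling of the literature mastered yields only a constant increment of understanding—hence the diminishing returns the text describes.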
These fundamental realities of the cognitive situation apply to the philosopher as much as to any other sort of inquirer or investigator.

8. DISASSEMBLY, SCATTERING, FRAGMENTATION AND INCOMPREHENSION

The ongoing population explosion in both investigators and information means that today virtually every field of research and investigation is highly compartmentalized, divided into a large proliferation of separate units. This may or may not be all that serious for other branches of thought and inquiry, but for philosophy it is a decided misfortune. For it is the characteristic task of philosophy to provide for cognitive comprehensiveness, for systemic unity of understanding across large conceptual vistas. But specialization and division of labor runs rampant, as does the demand for interdisciplinarity. While reality is one and its facts connected, mastering those connections becomes increasingly difficult. There is little if anything that the members of different specialties are able—or willing—to say to one another. The tower of Babel rises ever higher. And so what we have in the current cognitive condition of things in philosophy is not just specialization and division of intellectual labor but fragmentation, scattering and disintegration. As our knowledge of the world and its ways expands, so does the task of philosophy to interpret and coordinate this information—to provide us with an orientation of thought. And in the wake of this mission, the obligation to connect, relate, integrate, and systematize grows ever more urgent—and ever more difficult. What we have here is a deep and unfortunate irony. The field’s own development, it seems, provides the seeds of its own undoing. We are led back to Greek mythology with its talk of children who devour their own parents. The natural growth of the subject itself creates conditions in which the very life of the subject becomes endangered. Our efforts to comprehend issue in incomprehension.

9. PROSPECTS

Is there an answer here? Can anything constructive be done to hold things together or is “all adherence” lost? One thing is clear: there is precious little that an individual as individual can do. The only realistic hope lies not in the individual but in the community. What seems to be
required is a general effort to build bridges of systemic interrelatedness so that even if no single individual can master the whole, the larger community can strive for integration. One can then aspire to the confident assurance that, like the chain-mail vest, the whole gets to be a single unified garment. The salient idea here is that of a return to the Leibnizian vision of coordination: one that renders philosophizing a venture in multilateral collaboration where the concatenated contributions of many individuals come together in overall cohesion on the model of the sciences themselves. What is needed is the development of teamwork and cooperation, even if only by way of coordination rather than actual collaboration—which may in fact prove more difficult in philosophy than elsewhere. The 20th century has put a small handful of models before us, the most vivid instances being the logical empiricism of the middle years of the century and the process-philosophy and feminist-philosophy movements of its second half. By pursuing this sort of disaggregated teamwork and collaboration, such movements are able to achieve an otherwise seemingly unattainable combination of depth and breadth in ways that no individual, however hardworking, can possibly manage. A certain irony emerges here. For the basic units of philosophical work are the ideas of individual intelligences. But if in the present condition of the subject the contributions they make are to be substantial and telling, then these contributions will increasingly have to be made within the setting of a spontaneously coordinated school or movement. We will never get all philosophers onto the same page; establishment of field-wide consensus is infeasible—and is not the issue here. But the elective affinities of schools of thought are something else again—and something perfectly practicable.
Philosophers doubtless like to ride their hobby-horses, but even hobby aficionados can and do find kindred spirits. And, unnoticed by the subject’s historians, this trend is already hard at work about us. Medical ethics, feminist theory, and environmental ethics are all flourishing disciplines on the present scene. Not one of them is the product of some outstandingly innovative founding thinker. To all visible appearances, the future of the subject lies less with its occasional geniuses than with movements.
NOTES 1
Ulrich’s Periodicals Directory for 2008 lists 447 philosophical journals for the U.S.A.
2
Edward B. Fiske, “Lessons,” The New York Times (August 2, 1989), p. B8. At this rate, the annual output of Shakespearian scholarship is over six times as large as the collected works of the Bard himself.
3
The Encyclopedia of Philosophy, ed. by Paul Edwards (London and New York: Macmillan, 1967).
4
John Passmore’s Recent Philosophers (La Salle, 1985) is as close as anything we have, but—as the very title indicates—this excellent survey makes no pretensions to comprehensiveness. In this direction an earlier multi-person survey went somewhat further, exemplifying the best and most that one can hope to obtain: Roderick M. Chisholm et al., Philosophy: Princeton Studies of Humanistic Scholarship in America (Englewood Cliffs, NJ: Prentice Hall, 1964). Yet not only does this book attest to the fragmentation of the field, but it conveys (from its Foreword onwards) the defeatist suggestion that whatever larger lessons can be extracted from an historically minded scrutiny of the substantive diversity of the contemporary situation are destined to lie substantially in the eyes of the beholder.
5
See Steven Weinberg, Dreams of a Final Theory (New York: Pantheon, 1992). See also Edoardo Amaldi, “The Unity of Physics,” Physics Today, vol. 26 (September, 1973), pp. 23–29. Compare also C. F. von Weizsäcker, “The Unity of Physics” in Ted Bastin (ed.), Quantum Theory and Beyond (Cambridge: Cambridge University Press, 1971).
6
For variations on this theme see the author’s The Limits of Science (Berkeley and Los Angeles: University of California Press, 1984).
7
For further detail on this issue see the author’s Epistemetrics (Cambridge: Cambridge University Press, 2006).
NAME INDEX

Adams, E. W., 84n5
Adams, Robert M., 127n6, 130
Amaldi, Edoardo, 169n5
Appiah, Anthony, 84n5
Aristotle, 68
Armstrong, David M., 128n23, 129n26, 130
Arnauld, Antoine, 110
Arrow, Kenneth, 25
Barrow, John P., 34n5
Bennett, Jonathan, 21n3
Bentham, Jeremy, 1
Berkeley, George, 74
Berlin, Isaiah, 163
Beutel, Eugen, 34n1
Bohr, Niels, 25, 40
Braine, M. D., 84n9
Brandom, Robert, 127n4, 131
Cardwell, D. S. C., 56n10
Chihara, Charles S., 121, 128n14, 128n16, 130
Chisholm, Roderick M., 88, 92n2, 105n4, 120, 128n14, 128n15, 130, 169n4
Cresswell, M. J., 129n26, 130
Darwin, Charles, 56n10
Descartes, René, 60, 90, 98
Dewey, John, 1, 38, 39, 164
Duhem, Pierre Maurice, 41, 83n1
Edwards, Paul, 94-96
Edwards, C. H., Jr., 34n1
Einstein, Albert, 25
Everett, Hugh, 127n6
Felt, James W., 130
Fine, Arthur, 74, 84n4
Fiske, Edward B., 169n2
Forbes, Graeme, 128n14, 130
Freud, Sigmund, 25
George, William, 55n8
Gerhardt, C. I., 112n2, 112n3, 112n4, 112n6, 156n6
Gödel, Kurt, 25
Gore, George, 55n8
Grice, H. P., 21n3
Hayes, J. R., 84n9
Hegel, G. W. F., 159
Hess, Mary, 34n3
Hilbert, David, 74, 84n4
Hilpinen, Risto, 104-105n2
Hume, David, 47, 94-96
James, Henry, 161
James, William, 47
Joule, James P., 56n10
Kant, Immanuel, 6, 34
Kyburg, Henry K., 104n2, 105n3, 105n4
Lambert, J. H., 24
Lehrer, Keith, 92n2
Leibniz, G. W., 31, 59, 107-112, 112n7, 127n6, 146, 156n5, 160
Lewis, David K., 84n5, 84n10, 115-16, 120, 124, 127n6, 127n7, 127n8, 127n9, 128n10, 128n14, 129n25, 130, 143
Lipton, S. G., 84n9
Locke, John, 99
Loemker, L. E., 156n6
Loux, Michael J., 128n17
Lycan, William G., 121, 124, 127n9, 128n17, 128n24, 130
Mackie, John L., 116, 128n11, 131
Makinson, D. C., 102, 105n4
Mates, Benson, 131
Mill, John Stuart, 1, 99, 161
Newton, Isaac, 165
Nôtre, André le, 160
O’Brien, D. P., 84n9
Olsan, J. M., 84n9
Ord-Hume, A. W. J. G., 34n2
Passmore, John, 35n6, 169n4
Peirce, Charles Sanders, 1, 21n2, 38, 52, 55n7, 105n4
Planck, Max, 25, 50-54, 55n7
Plantinga, Alvin, 128n14, 128n21, 131
Plato, 1, 133
Powers, Larry, 117, 128n12, 131
Price, Derek J., 56n11
Quine, W. V. O., 128n22, 131
Ramsey, F. P., 156n3
Reid, Thomas, 73-74, 88, 92n2
Rescher, Nicholas, 84n5, 127n4, 129n29, 131
Revlis, R., 83, 84n9
Roese, N. O., 84n9
Roosevelt, Franklin Delano, 98
Rosenkranz, Gary, 129n26, 131
Shakespeare, William, 161
Skyrms, Bryan, 129n26, 132
Sorensen, Roy A., 84n10
Spinoza, Baruch, 33, 116
Spottiswoode, William, 56n10
Stalnaker, Robert C., 121-122, 127n6, 128n18, 132
Ulam, Stanislaw M., 55n6
Unger, Peter, 84n10
van Inwagen, Peter, 127n5, 132
Weinberg, Steven, 169n5
Weizsäcker, C. F. von, 169n5
Wertheimer, F. H., 56n9
Wheeler, John, 127n6
Whitehead, A. N., 164
Wittgenstein, Ludwig, 124
Wolff, Christian, 155
Zalta, Edward N., 128n20, 132
Ontos
Nicholas Rescher
Collected Papers. 14 Volumes

Nicholas Rescher is University Professor of Philosophy at the University of Pittsburgh where he also served for many years as Director of the Center for Philosophy of Science. He is a former president of the Eastern Division of the American Philosophical Association, and has also served as President of the American Catholic Philosophical Association, the American Metaphysical Society, the American G. W. Leibniz Society, and the C. S. Peirce Society. An honorary member of Corpus Christi College, Oxford, he has been elected to membership in the European Academy of Arts and Sciences (Academia Europaea), the Institut International de Philosophie, and several other learned academies. Having held visiting lectureships at Oxford, Constance, Salamanca, Munich, and Marburg, Professor Rescher has received seven honorary degrees from universities on three continents (2006 at the University of Helsinki). Author of some hundred books ranging over many areas of philosophy, over a dozen of them translated into other languages, he was awarded the Alexander von Humboldt Prize for Humanistic Scholarship in 1984. ontos verlag has published a series of collected papers of Nicholas Rescher in three parts with altogether fourteen volumes, each of which will contain roughly ten chapters/essays (some new and some previously published in scholarly journals). The fourteen volumes would cover the following range of topics: Volumes I - XIV STUDIES IN 20TH CENTURY PHILOSOPHY ISBN 3-937202-78-1 · 215 pp. Hardcover, EUR 75,00
STUDIES IN VALUE THEORY ISBN 3-938793-03-1 . 176 pp. Hardcover, EUR 79,00
STUDIES IN PRAGMATISM ISBN 3-937202-79-X · 178 pp. Hardcover, EUR 69,00
STUDIES IN METAPHILOSOPHY ISBN 3-938793-04-X . 221 pp. Hardcover, EUR 79,00
STUDIES IN IDEALISM ISBN 3-937202-80-3 · 191 pp. Hardcover, EUR 69,00
STUDIES IN THE HISTORY OF LOGIC ISBN 3-938793-19-8 . 178 pp. Hardcover, EUR 69,00
STUDIES IN PHILOSOPHICAL INQUIRY ISBN 3-937202-81-1 · 206 pp. Hardcover, EUR 79,00
STUDIES IN THE PHILOSOPHY OF SCIENCE ISBN 3-938793-20-1 . 273 pp. Hardcover, EUR 79,00
STUDIES IN COGNITIVE FINITUDE ISBN 3-938793-00-7 . 118 pp. Hardcover, EUR 69,00
STUDIES IN METAPHYSICAL OPTIMALISM ISBN 3-938793-21-X . 96 pp. Hardcover, EUR 49,00
STUDIES IN SOCIAL PHILOSOPHY ISBN 3-938793-01-5 . 195 pp. Hardcover, EUR 79,00
STUDIES IN LEIBNIZ'S COSMOLOGY ISBN 3-938793-22-8 . 229 pp. Hardcover, EUR 69,00
STUDIES IN PHILOSOPHICAL ANTHROPOLOGY ISBN 3-938793-02-3 . 165 pp. Hardcover, EUR 79,00
STUDIES IN EPISTEMOLOGY ISBN 3-938793-23-6 . 180 pp. Hardcover, EUR 69,00
ontos verlag
Frankfurt • Paris • Lancaster • New Brunswick 2006. 14 Volumes, Approx. 2630 pages. Format 14,8 x 21 cm Hardcover EUR 798,00 ISBN 10: 3-938793-25-2 Due October 2006 Please order free review copy from the publisher Order form on the next page
P.O. Box 1541 • D-63133 Heusenstamm bei Frankfurt www.ontosverlag.com • [email protected] Tel. ++49-6104-66 57 33 • Fax ++49-6104-66 57 34
Nicholas Rescher
Being and Value And Other Philosophical Essays

Being and Value collects together fifteen essays by Nicholas Rescher on salient issues in metaphysics, axiology and metaphilosophy. In the way in which they shed new light on significant philosophical issues, these deliberations are emblematic of Rescher’s characteristic way of illuminating timeless issues and historical perspectives in a reciprocal interrelationship. The chapters of the book are as follows: Being and Value: On the Prospect of Optimalism; On Evolution and Intelligent Design; Mind and Matter; Fallacies Regarding Free Will; Sophisticating Naïve Realism; Taxonomic Complexity and the Laws of Nature; Practical Vs. Theoretical Reason; Pragmatism as a Growth Industry; Cost Benefit Epistemology; Quantifying Quality; Explanatory Surdity; Can Philosophy be Objective?; On Ontology in Cognitive Perspective; Plenum Theory [Essay Written Jointly with Patrick Grim]; and Onometrics (On Referential Analysis in Philosophy)
About the Author Nicholas Rescher is University Professor of Philosophy at the University of Pittsburgh, where he also served for many years as Director of the Center for Philosophy of Science. He is a former president of the Eastern Division of the American Philosophical Association, and has also served as President of the American Catholic Philosophical Association, the American Metaphysical Society, the American G. W. Leibniz Society, and the C. S. Peirce Society. An honorary member of Corpus Christi College, Oxford, he has been elected to membership in the European Academy of Arts and Sciences (Academia Europaea), the Institut International de Philosophie, and several other learned academies. Having held visiting lectureships at Oxford, Constance, Salamanca, Munich, and Marburg, Professor Rescher has received six honorary degrees from universities on three continents. Author of some hundred books ranging over many areas of philosophy, over a dozen of them translated into other languages, he was awarded the Alexander von Humboldt Prize for Humanistic Scholarship in 1984. In November 2007 Nicholas Rescher was awarded the Aquinas Medal by the American Catholic Philosophical Association.
ontos verlag
Frankfurt • Paris • Lancaster • New Brunswick 2008. 204 pages. Format 14,8 x 21 cm. Hardcover EUR 79,00. ISBN 13: 978-3-938793-88-6. Due February 2008
NRCP Supplementary Volume
Nicholas Rescher
Autobiography
Nicholas Rescher was born in Germany in 1928 and emigrated to the United States shortly before the outbreak of World War II. After training in philosophy at Princeton University he embarked on a long and active career as professor, lecturer, and writer. His many books on a wide variety of philosophical topics have established him as one of the most productive and versatile contributors to 20th century philosophical thought, combining historical and analytical investigations to articulate an amalgam of German idealism with American pragmatism. The book accordingly has two dimensions, both as a contribution to German-American cultural interaction and as a contribution to the history of philosophical ideas.
ontos verlag
Frankfurt • Paris • Lancaster • New Brunswick 2007. IV, 342 pages. Format 14,8 x 21 cm. Paperback EUR 79,00. ISBN 13: 978-3-938793-59-6. Due July 2007