NICHOLAS RESCHER COLLECTED PAPERS
Volume V
Nicholas Rescher
Studies in Cognitive Finitude
ontos verlag Frankfurt · Paris · Ebikon · Lancaster · New Brunswick
Bibliographic information published by Die Deutsche Bibliothek: Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliographie; detailed bibliographic data is available on the Internet at http://dnb.ddb.de
North and South America by Transaction Books Rutgers University Piscataway, NJ 08854-8042 [email protected]
United Kingdom, Ireland, Iceland, Turkey, Malta, Portugal by Gazelle Books Services Limited White Cross Mills Hightown LANCASTER, LA1 4XS [email protected]
© 2006 ontos verlag P.O. Box 15 41, D-63133 Heusenstamm www.ontosverlag.com ISBN 3-938793-00-7
No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use of the purchaser of the work.
Printed on acid-free paper This hardcover binding meets the International Library standard Printed in Germany by buch bücher dd ag
Contents

PREFACE

Chapter 1: FINITUDE AND LIMITATIONS (ON UNREALIZABLE ASPIRATIONS)
  1. Finitude and Unrealizable Aspirations
  2. Some Salient Impossibilities, Past and Present
  3. Sources of Finitude: Necessity
  4. Incapacity (Categorial and Temporal)
  5. Scarcity
  6. Uncontrollability: Fate and Luck
  7. Imperfectability (I)—Desideratum Conflicts and Complementarity
  8. Imperfectability (II)—Resistance Barriers and Diminishing Returns
  9. Reactions to Finitude
  10. Theoretical Issues Regarding Limits and Finitude

Chapter 2: ON COGNITIVE FINITUDE: IGNORANCE AND ERROR
  1. The Corrigibility of Conceptions
  2. Communicative Parallax
  3. The Communicative Irrelevance of Inadequate Conceptions

Chapter 3: SCEPTICISM AND FINITUDE
  1. Scepticism
  2. Scepticism and Error Avoidance
  3. Rationality and the Risk of Error
  4. The Poverty of Scepticism

Chapter 4: LIMITS OF COGNITION (A LEIBNIZIAN PERSPECTIVE ON THE QUANTITATIVE DISCREPANCY BETWEEN LINGUISTIC TRUTH AND OBJECTIVE FACT)
  1. How Much Can a Person Know? Leibniz on Language Combinatorics
  2. The Leibnizian Perspective
  3. Statements are Enumerable, as are Truths
  4. Truths vs. Facts
  5. The Inexhaustibility of Fact
  6. Facts are Transdenumerable
  7. More Facts Than Truths
  8. Musical Chairs Once More
  Appendix: Further Implications

Chapter 5: COGNITIVE PROGRESS AND ITS COMPLICATIONS
  1. In Natural Science, the Present Cannot Speak for the Future
  2. The Import of Innovation
  3. Against Domain Limitations

Chapter 6: AGAINST SCIENTIFIC INSOLUBILIA
  1. The Idea of Insolubilia
  2. The Reymond-Haeckel Controversy
  3. The Infeasibility of Identifying Insolubilia

Chapter 7: THE PROBLEM OF UNKNOWABLE FACTS
  1. Limits of Knowledge
  2. Cognitive Finitude
  3. On Surdity and Limits of Knowledge
  4. Larger Lessons

Chapter 8: EPISTEMIC INSOLUBILIA AND COGNITIVE FINITUDE
  1. Finite and Infinite Knowers: Distributive vs. Collective Knowledge
  2. Noninstantiable Properties and Vagrant Predicates
  3. Vagrant Predicates as Epistemic Problems
  4. Relating to Questions and Answers
  5. Insolubilia That Reflect Limits to our Knowledge of the Future

Chapter 9: CAN COMPUTERS OVERCOME OUR COGNITIVE FINITUDE?
  1. Could Computers Overcome Our Limitations?
  2. General-Principle Limits are not Meaningful Limitations
  3. Practical Limits: Inadequate Information
  4. Performative Limits of Prediction—Self-Insight Obstacles
  5. Performative Limits: A Deeper Look
  6. A Computer Insolubilium
  7. Conclusion

CONCLUSION
  1. Being Realistic About Knowledge and Ignorance
  2. Ramifications of Cognitive Finitude
  3. Instructions of Realism
PREFACE

Ever since working on my book Scientific Progress in the mid-1970s, I have been preoccupied with exploring the scope and limits of human knowledge from various points of view. Overall this project has also resulted in such later books as Limits of Science, Epistemic Logic, and Epistemetrics.1 Gradually this preoccupation with various aspects of the problem has led me to contemplate a systemic integration of my ideas on this important theme. The aim of the present book is to weave these diverse threads into a unified treatment of this overall terrain. Accordingly, the present discussion unites in systemic coordination various perspectives and aspects of our cognitive finitude. The result is, I hope, a cohesive and perspicuous account of significant aspects of this critical feature of our cognitive condition.
Nicholas Rescher Pittsburgh, PA November 2005
1 The works at issue are Scientific Progress (Oxford: Blackwell, 1978); Limits of Science (Berkeley and Los Angeles: University of California Press, 1984); Epistemic Logic (Pittsburgh: University of Pittsburgh Press, 2005), and Epistemetrics (Cambridge: Cambridge University Press, 2006).
Chapter 1

FINITUDE AND LIMITATIONS (ON UNREALIZABLE ASPIRATIONS)

___________________________________________________

SYNOPSIS

(1) The idea of limits and limitations pivots on the concept of impossibility. (2) Some historic impossibilities in mathematics and physics illustrate this situation—as do the various impossibility demonstrations that have constituted a major theme in 20th century thought. The sources of finitude and bases of limitations that are at work here preeminently include the following five: (3) necessity, (4) incapacity, (5) scarcity, (6) uncontrollability, and (7) imperfectability. (8) Limits are often manifested by diminishing returns and resistance barriers. (9) The rational reaction to finitude is one either of curtailing aspirations or of resignation to the inevitable. (10) Theoretical issues regarding limits, finitude, and incapacity can take very different forms and can address very different issues. And in principle one sort of limitedness need not spread over to another.

___________________________________________________

1. FINITUDE AND UNREALIZABLE ASPIRATIONS
There is a significant difference between limits and limitations. Limits inhere in outright impossibilities—conditions that simply cannot be realized in the very nature of things. Limitations, by contrast, have a sociological aspect: they relate to things that intelligent agents would like to do—if only they could, which is not the case. Every law of nature sets a limit. Take “Acids turn blue litmus paper red.” This is correlative with the impossibility of finding some acid-immersed blue litmus paper that takes on a color other than red. But of course no limitation is involved here: nobody hankers after an acid that turns blue litmus paper black. Limits belong primarily to the natural sciences; limitations, by contrast, have a whiff of the social sciences about them. Even as “it takes two to tango”, so it takes two parties to create a limitation—a reality that sets limits and an agent who aspires to transcend them.
A being whose aspiration-horizon is narrow—confined entirely within the range of what is well within its powers—encounters no limitation in the presently operative sense of the term (notwithstanding the fact that all those limits that nonetheless confront it will reflect its status as a finite being). To be sure, there are some things that are impossible for this or that particular individual (as, for example, my succeeding as a Sumo wrestler), while others are impossible for all members of the species (as, for example, outrunning gazelles is for humans). Limitation, however, is a matter of generic infeasibility in realizing something that people in general might ideally want to do. We humans, accordingly, are prey to both finitude and limitations. We are limited in what we can do with our bodies—we cannot, for example, turn them into bronze. But this hardly qualifies as a limitation—nobody in their senses wants to transform themselves into a statue. Actual limitations represent limits we would ideally like to transcend if we could have things in our own way. And there are, of course, a great many of them: our wishes and aspirations outrun the reach of our capabilities and capacities. It is a characteristic feature of our condition in this regard that we humans are all too clearly limited in matters of knowledge, power, beauty, and many other desiderata. And this salient aspect of our condition deserves scrutiny and clarification.

2. SOME SALIENT IMPOSSIBILITIES, PAST AND PRESENT

Certain infeasibilities have been on the agenda for a long time. In mathematics, for example, the project of “squaring the circle”—of using ruler and compass to construct a square equal in area to a given circle—was demonstrated to be impossible by J. H. Lambert in the middle of the eighteenth century.1 Again, in physics, the idea of a perpetual motion machine, which has intrigued theorists ever since the middle ages, came to grief with the demonstration of its infeasibility during the rise of thermodynamics in the middle years of the nineteenth century.2 And yet again in physics, we have the long recognized idea of the impossibility of achieving a perfect vacuum.3
1 See Eugen Beutel, Die Quadratur des Kreises (Leipzig/Berlin: B. G. Teubner, 1913; 5th ed. 1951); C. H. Edwards, Jr., The Historical Development of the Calculus (New York-Heidelberg-Berlin: Springer-Verlag, 1979).
2 See A. W. J. G. Ord-Hume, Perpetual Motion: The History of an Obsession (New York: St. Martin’s Press, 1977).
All of these impossibilities—these insuperable limits to goal achievement—betoken limitations exactly because those infeasible achievements have been a focus of aspiration. But with advances in mathematical and physical science, these longstanding aspirations ended up on the scrap-heap of demonstrated impossibility. And this is only the beginning. The demonstration of impossibilities is among the most strikingly characteristic features of twentieth century science.4 A handful of salient instances that illustrate this fact is given in Display 1. All of these milestone achievements of the era share the common feature of demonstrating the inherent infeasibility of achieving some desideratum to which practitioners of the discipline at issue had long and often aspired. Such findings had the effect of derailing unreasonable aspirations by bringing significant limitations to light. In this regard, the twentieth century has proven itself to be an era of dis-illusion where time and again the discovery of limits has thrown a bright, and often unwelcome, light on our insuperable limitations.
3 See Mary Hesse, “Vacuum and Void,” The Encyclopedia of Philosophy (New York: Macmillan and Free Press), Vol. VII (1967), pp. 217-18.
4 For an instructive discussion of relevant issues see John D. Barrow, Impossibility (Oxford: Oxford University Press, 1998).
___________________________________________________

Display 1

IMPOSSIBILITY DEMONSTRATIONS IN TWENTIETH CENTURY SCIENCE

• Physics/Relativity: Albert Einstein’s demonstration of the impossibility of physical transmissions faster than the speed of light.

• Physics/Quantum Theory: Niels Bohr’s demonstration of the Principle of Complementarity inherent in the infeasibility of a conjointly precise specification of certain physically descriptive parameters (i.e., position and velocity) of physical micro-entities.

• Psychology: Sigmund Freud’s insistence on the impossibility of self-mastery on grounds of there being no way for our rational conscious deliberation to gain complete control of our psychological processes.

• Thermodynamics/Cryogenics: Max Planck’s demonstration of the effective impossibility of reaching absolute zero in experimental situations.

• Cybernetics: Claude Shannon’s demonstration of the impossibility of a flawless (loss-free) transmission of information, any channel having a level beneath which noise cannot be reduced.

• Mathematics: Kurt Gödel’s demonstration of the impossibility of axiomatizing arithmetic.

• Social Theory/Economics: Kenneth Arrow’s theorem establishing the impossibility of reconciling social preferability with individual preferences.

___________________________________________________

It is one of the ironies of twentieth century science that, as its achievements have pushed ever further the frontiers of science and technology, this has at the same time brought various insuperable limits more sharply to view. Accordingly, the twentieth century has witnessed an ever more emphatic awareness of limits.
For despite the vast new vistas of possibility and opportunity that modern science and technology have opened up, there has emerged an ever clearer and decidedly sobering recognition that the region beyond those new horizons is finite—that progress in many directions—be it material or cognitive—has its limits and that we can go only so far in realizing our desires. And this has fostered a realistically modest sensibility—a growing awareness of human finitude thanks to the limits and limitations that confront us.

3. SOURCES OF FINITUDE: NECESSITY

There is nothing eccentric or anomalous about all of those manifold impossibilities. They root in certain fundamental features of reality. And this prominence of limitations in reality’s larger scheme of things calls for a closer look at the underlying grounds of such a state of affairs. And here it emerges that the etiology of limits—the systematic study of this topic—brings to light the operation of certain very general and fundamental processes that account for a wide variety of particular cases. In particular, the following five figure among the prime sources of finitude:

• Necessity

• Incapacity

• Scarcity (of resources or time)

• Uncontrollability
  — Fate
  — Chance

• Imperfectability
  — via desiderata conflicts
  — via resistance barriers
We can thus assemble these prime sources of finitude under the acronym NISUI. Let us take these five factors up in turn, beginning with necessity.

Limits of necessity root in the fundamental principles of logic (logical impossibility) and in the laws of nature (physical impossibility). The crux here is in the final analysis a matter of the laws of thought and/or nature, which alike render the realization of certain conditions in principle impossible. For every scientific law is in effect a specification of impossibility. If it indeed is a law that “Iron conducts electricity”, then a piece of nonconducting iron thereby becomes unrealizable. Limitations of necessity are instantiated by such aspirations as squaring the circle or accelerating spaceships into hyperdrive at transluminal speed. Many things that we might like to do—to avoid ageing, to erase the errors of the past, to transmute lead into gold—are just not practicable. Nature’s modus operandi precludes the realization of such aspirations. We had best abandon them because the iron necessity of natural law stands in the way of their realization.

4. INCAPACITY (CATEGORICAL AND TEMPORAL)

A second key limitation of finite creatures relates to limits of capacity. In this regard there are various desiderata that individual finite beings can realize alright, but only at a certain rate—so much per hour, or year, etc. Reading, communicating, calculating—there is a virtually endless list of desirable tasks that people can manage within limits. For throughout such matters we encounter a limit to performance—a point beyond which more efficient realization becomes effectively impossible. Here we have a sort of second-order limit of impossibility. The issue is no longer “Can X be done at all?”, but rather “Yes, X can be done alright, but how much of it can one manage in a given timespan?” The prospects of X-performance accordingly become subject to the limitedness of time. With virtually all performatory processes people generally have a capacity limit—there is only so much that can be accomplished in a given timespan. And this leads to the phenomenon of what might be characterized as the time compression of man-managed activities such as reading or writing or speaking—or proofreading, for that matter. One can perform them at increasing speed, but only at the price of increasing malfunctions. “Haste makes waste”, as the proverb sagely has it.
5. SCARCITY

Scarcity is another prime source of limitations. It is not the laws of nature as much as the condition of our planet that precludes diamonds from being as plentiful as blackberries, and truffles as common as mushrooms. Many of the things that people would like to have are matters of scarcity—there is just not enough of them to go around. Not everyone can have their castle in Spain, their personal field of oil wells, their daily commute along peaceful country lanes. Even fresh, clean air is not all that easy to come by. Many or most resources are in short and limited supply—there is just not enough of them—time, that is to say, lifetime included. In hankering after them we encounter insuperable limits of scarcity that impel us into some of the unavoidable limitations that manifest our finitude.

6. UNCONTROLLABILITY: FATE AND LUCK

The inexorable rulings of fate are yet another prime source of limitations. We come into a world not of our making and occupy conditions not of our own choosing. We would all like to have a healthy genetic heritage, but have no choice about it. We would like to live in peaceful, prosperous, easy times but cannot reclaim the conditions of a past golden age. We would like to be graceful, talented, charming, accomplished—and would welcome having children of the same sort—but yet have relatively little say in the matter. All of us have to play the game of life with the cards we have been dealt. Our control over the conditions that fate assigns to us is somewhere between minute and nonexistent.

Then too there is the impetus of chance. Often as not in life matters develop in ways governed by pure luck rather than by arrangements within the scope of our control. Often as not it is chance alone that gets us involved in accidents or in fateful encounters. Much that is important for us in life issues from fortuitous luck rather than deliberate choice and control.5 And all of this betokens limits to the extent to which we can achieve the control we would fain have with regard to our circumstances in this world.
5 On these matters see the author’s Luck (New York: Farrar Straus Giroux, 1990).
7. IMPERFECTABILITY (I)—DESIDERATUM CONFLICTS AND COMPLEMENTARITY
Prominent among the root causes of finitude is the phenomenon of what might be called desideratum conflicts, where in advancing one positivity we automatically diminish another. What we have here is vividly manifested in the positivity complementarity that obtains when two parameters of merit are so interconnected that more of one automatically means less of the other, much as illustrated by the following diagram:
(Diagram: a trade-off curve relating Positivity 1 and Positivity 2—as one increases, the other decreases.)
The systemic modus operandi of the phenomenology at issue here is such that one aspect of merit can be increased only at the price of decreasing another. Consider a simple example, the case of a domestic garden. On the one hand we want the garden of a house to be extensive—to provide privacy, attractive vistas, scope for diverse planting, and so on. But on the other hand we also want the garden to be small—affordable to install, convenient to manage, economical to maintain. But of course we can’t have it both ways: the garden cannot be both large and small. The desiderata at issue are locked into a see-saw of conflict.

Again, in many ventures—and especially in the provision of services—the two desiderata of quantity and quality come into conflict: processing too many items in pursuit of efficiency compromises quality, providing too high a quality compromises quantity. Some further examples are as follows:

• In quest of the perfect automobile we encounter a complementarity between speed and safety.

• In quest of the perfect kitchen we encounter a complementarity between spaciousness and convenience.
• In pursuit of the perfect vacation spot we encounter a complementarity relationship between attractiveness and privacy, and again between affordability and amenities.

A philosophically more germane example arises in epistemology. With error-avoidance in matters of cognition, the trade-off between errors of type 1 and errors of type 2—between inappropriate negatives and false positives—is critical in this connection. For instance, an inquiry process of any realistically operable sort is going to deem some falsehoods acceptable and some truths not. And the more we fiddle with the arrangement to decrease the prospect of one sort of error, the more we manage to increase the prospect of the other. Analogously, any criminal justice system realizable in this imperfect world is going to have inappropriate negatives that let some of the guilty off while also admitting false positives by condemning some innocents. And the more we rearrange things to diminish one flaw, the more scope we give to the other. And so it goes in other situations without number. The two types of errors are linked in a see-saw balance of complementarity that keeps perfection at bay.

Throughout such cases we have the situation that, to all intents and purposes, realizing more of one desideratum entails a correlative decrease in the other. We cannot have it both ways, so that ideal perfection lies beyond our grasp. So in all such cases there will be an impossibility of achieving the absolute perfection at issue, with every parameter of merit maximized at one and the same time. In the interest of viability some sort of compromise must be negotiated, seeing that the concurrent maximization of desiderata is automatically unrealizable. And the unattainability of perfection also has other interesting ramifications.
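The see-saw between the two error types can be given a schematic rendering. The following is merely an illustrative sketch, not anything argued in the text: the acceptance threshold t and the error-rate functions α and β are hypothetical devices. Suppose an inquiry procedure accepts a claim just in case its evidential support exceeds t. Then

$$\alpha(t) = \Pr[\text{accept} \mid \text{claim false}], \qquad \beta(t) = \Pr[\text{reject} \mid \text{claim true}],$$

and under the usual circumstances α falls as t is raised while β rises with it. Tightening the standard to curb false positives thus automatically enlarges the inappropriate negatives, and no setting of t drives both error rates to zero at once, which is just the complementarity at issue.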
8. IMPERFECTABILITY (II)—RESISTANCE BARRIERS AND DIMINISHING RETURNS
The utopian idea of human perfection—be it at the level of individuals or of the social order at large—has been with us throughout recorded history.6
After all, it lies in our nature to aspire after ever greater things. (“To be man,” Sartre wrote, “means to reach towards God.”7) But experience and theorizing alike indicate that nothing is clearer than that neither our lives, nor our knowledge, nor yet our morals can ever be brought even remotely near to the pinnacle of perfection. And for good reason. In medicine, life prolongation affords a vivid example, since with the elimination of one form of life-curtailment others emerge that are yet more difficult to overcome. Again, performance in such sports as speed-racing or high-jumping also illustrates this phenomenon of greater demand for lesser advance. Throughout there is a point where a further proportionate step towards an ideal limit becomes increasingly difficult, with the result that an exponentially diminishing group of performers will be able to realize a corresponding level of achievement.

There is something about perfection that generally resists realization. In physics and engineering this sort of thing is called a resistance barrier, a phenomenon encountered in the endeavor to create a perfect vacuum, in that of achieving absolute zero in low-temperature research, or again in that of propelling subatomic particles to the speed of light with accelerators. The closer we get to that ideal condition, the harder it pushes back in reaction against further progress. And just the same sort of phenomenon is encountered in many areas of ordinary life—as is illustrated by the quest for a perfectly safe transport system or a perfectly efficient employment economy.

One of the prime instances of a resistance barrier is encapsulated in the phenomenon of entropy—of disorder. For not only does nature “abhor” a vacuum, it does so with order as well. It insists on disorder through something of an entropic principle of dissonance preservation: the more one intervenes in nature to control disorder by compressing it into a more limited area, the more strongly it pushes back and resists further compression. Nature insists upon an ultimately ineliminable presence of chaos and disorder.

Resistance barriers involve two aspects: first an outright impossibility of reaching an ideal goal, and second an effective infeasibility of drawing ever nearer to it, because this requires a level of capability (and thus resource investment) that is ever-larger and thereby ultimately bound to outreach the extent of resources at our disposal.
6 For a lucid and instructive discussion of these issues see John Passmore, The Perfectibility of Man (London: Duckworth, 1970).
7 Jean-Paul Sartre, Being and Nothingness, tr. H. E. Barnes (New York: Philosophical Library, 1956), p. 566.
Two different albeit interrelated sorts of limits are accordingly at issue here, namely limits of possibility (of unrealizability in principle due to an inherent impossibility) and limits of feasibility (of unrealizability in practice due to a shortfall of resources, time, or capability). In either case, however, the pervasive reality of resistance barriers constitutes a decisive obstacle to the realization of perfection.

A crucial aspect of resistance barriers lies in the fact that the more progress one makes along the lines at issue, the more difficult—and expensive—still further progress always becomes. Resistance barriers inevitably combine cost escalation with diminishing returns. An instructive illustration of this phenomenon is afforded by the situation of what might be called the nonstandard response effect in the realm of medicaments. When one nowadays purchases a drug it generally comes accompanied by a long slip of paper listing the possible unwelcome “side effects”—nonstandard reactions that occur in some relatively few cases. The response-list at issue consists of a series of entries inventorying nonstandard reactions in a format something like:

(Ei) In pi percent of cases patients have responded in manner Mi.

But how are we—or our physicians—to determine in advance if we belong to the particular group Gi of individuals to whom Ei applies? Is there a test Ti that provides predictive guidance through generalizations of the format:

(Xi) It is those who pass Ti who constitute that percentage group Pi of cases where patients respond in manner Mi.

Or in other words, is there a test providing an advance determination of those instances where Ei applies—a test that renders what is ex post facto explainable also predictable in advance? The reality of it is that a striking phenomenon occurs in this connection. Only in the first few instances—at best, that is, only for the first few Ei entries—say only for E1 and E2 and E3—will a test ever actually be available in advance of the fact. For the most part—and always for the lower end of our response list—there is just no way of telling in advance why people react as they do, and no way of determining in advance of the fact which individuals will respond in that way. And in general, the rarer a nonstandard response is, the more difficult it is to explain and the more difficult to predict.
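To give this point a rough quantitative gloss (an illustration only; the figure N and the proportionality are not drawn from the text): if a nonstandard response occurs in p percent of cases, then on average something on the order of

$$N \approx \frac{100}{p}$$

patients must be observed before even one instance of it turns up, and a predictive test requires not one instance but many, adequately characterized. The evidential burden of cognitively domesticating a reaction thus grows roughly in proportion to 1/p and swells without bound as the response becomes rarer, which is why the lower reaches of the response list resist both explanation and prediction.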
A fundamental principle regarding the modus operandi of biomedicine in relation to its cognitive domestication is encountered here: the rarer the phenomenon (in our case, the negative reaction to a medicament), and the less frequently it is encountered in the course of experience, the more difficult—the more costly in terms of time, effort, and resources—its cognitive domestication by way of explanation, prediction, and rational systematization will generally prove to be. And the reason why we cannot predict the extremely rare nonstandard responses is not that they are inherently inexplicable, but rather that their explanation or prediction becomes unaffordable through the extent to which its realization would require advancing the cognitive state of the art. So what we have here is ultimately once again a limitation rooted in the finitude of resources, with the inability to achieve a desired result whose ultimate ground lies—as is all too commonly the case—in an inability to afford it.

9. REACTIONS TO FINITUDE

When aspiration outreaches attainability there are really only two basic responses:

1. to remove the disparity between the two by curtailing aspirations so as to restore equilibrium with attainability, or else

2. to accept the disparity, either
   a) with grudging annoyance and regret, or
   b) with resignation to the inevitable.

But in matters of human finitude where authentic limits are at issue, we do well to avert frustration by confining our aspirations to the limits of the feasible, accepting our limitations with realism and Stoic resignation. There is, after all, little point in baying after an unattainable moon. Once we realize that an authentic limitation is at issue with an aspiration of ours, the best course is that of a realistic determination to “get over it”.
It is perhaps not entirely correct to hold that understanding an infeasibility will automatically issue in its acceptance (that tout comprendre c’est tout accepter, to paraphrase a well-known French dictum). But all the same, Spinoza was clearly right in insisting that this is the rational thing to do. Unfortunately, however, the realization of complete rationality in the management of our affairs is itself one of those unrealizable aspirations that characterize our limitations as a species.

Still, when all is said and done, we do well to “push the envelope” in matters of realizing our desiderata. For it is a sound policy never to be readily overconfident that what is thought to be unattainable is actually so. In the absence of demonstrations to the contrary, the burden of proof is best assigned to naysayers in matters of this sort.
10. THEORETICAL ISSUES REGARDING LIMITS AND FINITUDE
Finally, a few observations on theoretical issues. Rational agents are in general capable of three generic sorts of things, lying in the areas of knowledge, action, and evaluation. And of course limits are possible in every direction—be it cognition or power or judgment. Now in theory these limits can diverge, and supposition can disconnect what fact has joined together. Thus one can conceive of a being whose knowledge is limited but whose power is not—who could accomplish great things if only he could conceive of them. Or again one can envision a creature beset by value blindness who is incapacitated for implementing the belief-desire mechanics of agency by an apathy that keeps him from seeing anything as desirable. And even with regard to a single factor such as knowledge, incapacities of very different sorts can be envisioned. One can conceive of an intelligent being whose knowledge is confined to the necessary truths of logic and mathematics, but for whom all knowledge of contingent fact is out of range. Or again one can imagine a being whose knowledge is geared to matters of fact, but who is wholly lacking in imagination, so that all matters of hypothesis and supposition remain out of view. The possibilities of capacity divergence are manifold. And so the prospect of very different sorts of limits and limitations looms large before us. But the sobering fact remains that the definitive feature of man as a finite being is that we are, to some extent, subject to all of them.
Chapter 2

ON COGNITIVE FINITUDE: IGNORANCE AND ERROR

___________________________________________________

SYNOPSIS

(1) Cognitive finitude in specific is a pivotal feature of the human condition. Our knowledge is both incomplete and in various regards incorrect. (2) However, our efforts at communication address what there really is and not merely what we think there to be. (3) Accordingly, error, though virtually inevitable, presents no decisive impediment to communication.

___________________________________________________

1. THE CORRIGIBILITY OF CONCEPTIONS
Cognitive finitude is a crucial aspect of the human condition. By definition, as it were, finite knowers cannot avoid errors of omission. But they are bound to be involved in errors of commission as well. For in deliberating about error it is necessary to distinguish between the correctness of our particular claims about things and that of our very ideas of them—between a true or correct contention on the one hand, and a like conception on the other. To make a true contention about a thing we need merely get some one particular fact about it right. To have a true conception of the thing, on the other hand, we must get all of the important facts about it right. With a correct contention (statement) about a thing, all is well if we get the single relevant aspect of it right, but with a correct conception of it we must get the essentials right—we must have the correct overall picture.1
1 This provides a basis for multiplying conceptions—e.g., by distinguishing between the scientifically important facts and those important in the context of everyday life. Think of Arthur Eddington’s distinction between the scientists’ table and the table of our ordinary experience. See A. S. Eddington, The Nature of the Physical World (Cambridge: Cambridge University Press, 1929), pp. ix-xi.
This duality of error as between false belief and erroneous conception (“applying to one thing the definition proper to another”) goes back at least to St. Thomas Aquinas.2 To ensure the correctness of our conception of a thing we would have to be sure—as we very seldom are—that nothing further can possibly come along to upset our view of just what its important features are and just what their character is. The qualifying conditions for true conceptions are thus far more demanding than those for true claims. No doubt, in the sixth century B.C., Anaximander of Miletus may have made many correct contentions about the sun—for example, that it is not a mass of burning stuff pulled about on its circuit by a deity with a chariot drawn by a winged horse. But Anaximander’s conception of the sun (as the flaming spoke of a great wheel of fire encircling the earth) was seriously wrong.

With conceptions—unlike propositions or contentions—incompleteness means incorrectness, or at any rate presumptive incorrectness. A conception that is based on incomplete data must be assumed to be at least partially incorrect. If we can decipher only half the inscription, our conception of its overall content must be largely conjectural—and thus must be presumed to contain an admixture of error. When our information about something is incomplete, obtaining an overall picture of the thing at issue becomes a matter of theorizing, or guesswork, however sophisticatedly executed. And then we have no alternative but to suppose that this overall picture falls short of being wholly correct in various (unspecifiable) ways. With conceptions, falsity can thus emerge from errors of omission as well as those of commission, resulting from the circumstance that the information at our disposal is merely incomplete, rather than actually false (as will have to be the case with contentions).

The incompleteness of our knowledge does not, of course, ensure its incorrectness—after all, even a single isolated belief can represent a truth. But it does strongly invite it. For if our information about some object is incomplete, then it is bound to be unrepresentative of the object’s make-up as a whole, so that a judgment regarding that object is liable to be false.
2 See Thomas Aquinas, Summa theologica, Bk. I, quest. 17, sect. 3.
The situation is akin to that depicted in the splendid poem about “The Blind Men and the Elephant”, which tells the story of certain blind sages, those

Six men of Indostan,
To learning much inclined,
Who went to see the elephant,
(Though all of them were blind).

One sage touched the elephant’s “broad and sturdy side” and declared the beast to be “very like a wall”. The second, who had felt its tusk, announced the elephant to resemble a spear. The third, who took the elephant’s squirming trunk in his hands, compared it to a snake; while the fourth, who put his arm around the elephant’s knee, was sure that the animal resembled a tree. A flapping ear convinced another that the elephant had the form of a fan; while the sixth blind man thought that it had the form of a rope, since he had taken hold of the tail.

And so these men of Indostan,
Disputed loud and long;
Each in his own opinion
Exceeding stiff and strong:
Though each was partly in the right,
And all were in the wrong.

An inadequate or incomplete description of something is not thereby false—the statements we make about it may be perfectly true as far as they go. But an inadequate or incomplete conception of a thing is eo ipso one that we have no choice but to presume to be incorrect as well,3 because we cannot justifiably take the stance that this incompleteness relates only to inconsequential matters and touches nothing important, thereby distorting our conception of things so that errors of commission result.

It is important to be clear about just what point is at issue here. It is certainly not being denied that people do indeed know many truths about things—for example, that Caesar did correctly know many things about his sword. Rather, what is being maintained is not only that there were many things he did not know about it (for example, that it contained tungsten), but also that his overall conception of it was in many ways inadequate and in some ways incorrect.

Of course, cognitive finitude is not all there is to it. Clearly we humans are beings of moral and practical finitude as well.
3 Compare F. H. Bradley’s thesis: “Error is truth, it is partial truth, that is false only because partial and left incomplete.” Appearance and Reality (Oxford: Clarendon Press, 1893), p. 169.
For one thing, our actions are canalized by our knowledge—or at least our belief—regarding their prospects and consequences. Nor are our intentions all that right-minded, seeing that we are creatures schooled by evolution to look to the safety and security of ourselves and our own. The Biblical injunction “Be ye perfect” is beyond our reach, and a more realistic replacement might go something like: “Do the best you can within the limits of possibility set by the circumstances of the existing situation.” However, this practical dimension of the matter of man as agent is not at the focus of the present deliberations, which are directed at our deficiencies not as agents but as knowers.

This vulnerability of our putative knowledge of the world in the face of potential error is rather exhibited than refuted by a consideration of scientific knowledge. For this is by no means as secure and absolute as we like to think. There is every reason to believe that where scientific knowledge is concerned further knowledge does not just supplement but generally corrects our knowledge-in-hand, so that the incompleteness of our information implies its presumptive incorrectness as well. We must come to terms with the fact that, at any rate at the scientific level of generality and precision, each of our accepted beliefs may eventuate as false and many of our accepted beliefs will eventuate as false. The road to scientific progress is paved with acknowledged error. The history of science is the history of changes of mind about the truth of things. The science of the present is an agglomeration of corrections of the science of the past. Throughout the cognitive enterprise—and above all throughout the sciences—much of what we vaunt as “our knowledge” is no more than our best estimate of the truth of things. And we recognize in our heart of hearts that this putative truth in fact incorporates a great deal of error.

But if we acknowledge the presence of error within the body of our putative knowledge, then why don’t we simply correct it? The salient lesson here is conveyed by what has become known under the rubric of The Preface Paradox, whose gist is as follows: An author’s preface reads in part as follows: “I realize that, because of the complex nature of the issues involved, the text of the book is bound to contain some errors. For these I now apologize in advance.” There is clearly something paradoxical going on with this otherwise far from outlandish disclaimer, because the statements of the main text are flatly asserted and thereby claimed as truths while the preface statement affirms that some of them are false. Despite an acknowledgement of a collective error there is a claim to distributive correctness.
Our author obviously cannot have it both ways.4 In reading the preface, the impatient reader may want to exclaim: “You silly author, if there are errors why don’t you just correct them?” But there’s the rub. The author would gladly correct these errors if only he could tell where and what they are. But this is just exactly what he doesn’t know. They may be present in full view but they are not identifiable as such. As errors they are totally invisible. And exactly this situation of the Preface Paradox is paradigmatic for the situation of science as regards its errors.

The very concept of a thing that underlies our discourse about this world’s realities is thus based on a certain sort of tentativity and fallibilism—the implicit recognition that our own personal or even communal conception of things may well be wrong, and is in any case inadequate. At the bottom of our belief about things at the level of generality and precision at issue in science there always is—or should be—a certain wary fallibilism that recognizes the possibility of error.

2. COMMUNICATIVE PARALLAX

The fact that real things have hidden depths—that they are cognitively opaque—has important ramifications that reach to the very heart of the theory of communication. Any particular thing—the moon, for example—is such that two related but critically different versions of it can be contemplated: (1) the moon, the actual moon as it “really” is, and (2) the moon as somebody (you or I or the Babylonians) conceives of it. The crucial fact to note in this connection is that it is virtually always the former item—the thing itself—that we intend to communicate or think (= self-communicate) about, the thing as it is, and not the thing as somebody conceives of it. Yet we cannot but recognize the justice of Kant’s teaching that the “I think” (I maintain, assert, etc.) is an ever-present implicit accompaniment of every claim or contention that we make.
4 The paradox was formulated in D. C. Makinson, “The Paradox of the Preface,” Analysis, vol. 25 (1965), pp. 205-07.
This factor of attributability dogs our every assertion and opens up the unavoidable prospect of “getting it wrong”.

Ambitious intentions or pretensions to the contrary notwithstanding, all that one can ever actually manage to bring off in one’s purportedly fact-assertive discourse is to deliver information about item (2)—to convey what one thinks or conceives to be so. I can readily distinguish the features of (what I take to be) “the real moon” from those of “the moon as you conceive of it”, but I cannot distinguish them from those of “the moon as I conceive of it”. And when I maintain “The moon is roughly spherical”, all that I have successfully managed to deliver to you by way of actual information is: “Rescher maintains that the moon is roughly spherical.” And there is nothing that can be done to alter this circumstance. If you bind me by the injunction, “Tell me something about the Eiffel Tower, but please don’t merely put before me your beliefs or convictions regarding it; just give me facts about the thing itself, rather than presenting any parts of your conception of it!”, you condemn me to the silence of the Lockean je ne sais quoi.

3. THE COMMUNICATIVE IRRELEVANCE OF INADEQUATE CONCEPTIONS

The acknowledgement of potential error does not preclude effective communication, however. For our intention to take real objects to be at issue, objects as they are in themselves, notwithstanding our potentially idiosyncratic and erroneous conceptions of them, is communicatively fundamental because it is overriding—that is, it overrides all of our other intentions when we enter upon the communicative venture. Without this conventionalized intention we should not be able to convey information—or misinformation—to one another about a shared “objective” world. We could never establish communicative contact about a common objective object of discussion if our discourse were geared to the things as conceived of in terms of our own specific information about them. Any pretensions to the predominance, let alone the correctness, of our own conceptions regarding the furniture of this realm must be put aside in the context of communication. The fundamental intention to deal with the objective order of this “real world” is crucial. If our assertoric commitments did not transcend the information we ourselves have on hand, we would never be able to “get in touch” with others about a shared objective world.
No claim is made for the primacy of our conceptions, or for the correctness of our conceptions, or even for the mere agreement of our conceptions with those of others. The fundamental intention to discuss “the thing itself” predominates and overrides any mere dealing with the thing as we ourselves conceive of it.

This ever-operative contrast between “the thing itself” and “the thing as we ourselves take it to be” means that we are never in a position to claim definitive finality for our conception of a thing. We are never entitled to claim to have exhausted it au fond in cognitive regards—that we have managed to bring it wholly within our epistemic grasp. For to make this claim would, in effect, be to identify “the thing itself” in terms of “our own conception of it”, an identification which would effectively remove the former item (the thing itself) from the stage of consideration as an independent entity in its own right by endowing our conception with decisively determinative force. And this would lead straightaway to the unpleasant result of a cognitive solipsism that would preclude reference to intersubjectively identifiable particulars, and would thus block the possibility of interpersonal communication.

Seen in this light, the key point may be put as follows: It is indeed a presupposition of effective communicative discourse about a thing that we purport (claim and intend) to make true statements about it. But it is not required for such discourse that we purport to have a true or even adequate conception of the thing at issue. On the contrary, we must deliberately abstain from any claim that our own conception is definitive if we are to engage successfully in discourse. We deliberately put the whole matter of conception aside—abstracting from the question of the agreement of my conception with yours, and all the more from the issue of which of us has the right conception.5
5 It is thus perfectly possible for two people to communicate effectively about something that is wholly nonexistent and about which they have substantially discordant conceptions (for example, X’s putative wife, where X is, in fact, unmarried, though one party is under the misimpression that X is married to A, and the other under the misimpression that X is married to B). The common focus is the basis on which alone the exchange of information (or misinformation) and the discovery of error becomes possible. And this inheres, not in the actual arrangements of the world, but in our shared (conventionalized) intention to talk about the same thing: about X’s wife or, rather, X’s putative wife in the case at hand.
In communication regarding things we must be able to exchange information about them with our contemporaries and to transmit information about them to our successors. And we must be in a position to do this in the face of the presumption that their conceptions of things are not only radically different from ours, but conceivably also rightly different. What is at issue here is not the commonplace that we do not know everything about anything. Rather, the key consideration is the more interesting thesis that it is a crucial precondition of the possibility of successful communication about things that we must avoid laying any claim either to the completeness or even to the ultimate correctness of our own conceptions of any of the things at issue.

If we were to set up our own conception as somehow definitive and decisive, we would at once erect a grave impediment to the prospect of successful communication with one another. Communication could then only proceed retrospectively with the wisdom of hindsight. It would be realized only in the implausible case that extensive exchange indicates that there has been an identity of conceptions all along. We would then learn only by experience—at the end of a long process of wholly tentative and provisional exchange. And we would always stand on very shaky ground. For no matter how far we push our inquiry into the issue of an identity of conceptions, the prospect of a divergence lying just around the corner—waiting to be discovered if only we pursued the matter just a bit further—can never be precluded. One could never advance the issue of the identity of focus past the status of a more or less well-grounded assumption. Any so-called “communication” would no longer be an exchange of information but a tissue of frail conjectures. The communicative enterprise would become a vast inductive project—a complex exercise in theory-building, leading tentatively and provisionally toward something which, in fact, the imputational groundwork of our language enables us to presuppose from the very outset.6

The fact that we need not agree on our conceptions of things means, a fortiori, that we need not be correct in our conceptions of things to communicate successfully about them. This points, in part, to the trivial fact that I need not agree with what you are saying to understand you. But it points also, more importantly, to the consideration that my having a conception of a thing massively different from yours will not prevent me from taking you to be talking about the same thing that I have in mind.
6 The justification of such imputations is treated more fully in Chapter IX of the author’s Induction (Oxford, 1980). Cf. also pp. 15-18 above.
Objectivity and referential commonality of focus are matters of initial presumption or presupposition. The issue here is not with what is understood, but with what is to be understood (by anybody) in terms of certain generalized communicative intentions. (The issue here is not one of meaning but only of meaningfulness.)

Our concept of a real thing is accordingly such that a thing is a fixed point, a stable center around which communication revolves, the invariant focus of potentially diverse conceptions. What is to be determinative, decisive, definitive (etc.) of the things at issue in my discourse is not my conception, or yours, or indeed anyone’s conception at all. The conventionalized intention discussed above means that a coordination of conceptions is not decisive for the possibility of communication. Your statements about a thing will convey something to me even if my conception of it is altogether different from yours. To communicate we need not take ourselves to share views of the world, but only to take the stance that we share the world being discussed.

It is crucial that the mechanisms of human communication should lie within the domain of human power. Now with respect to the meanings of words this condition is satisfied, because this is something that we ourselves fix by custom or by fiat. But the correctness of conceptions is not simply a matter of human discretion—it is something that lies outside the sphere of our effective control. For a “correct conception” is akin to Spinoza’s true idea of which he stipulates that it must “agree with its object”7—in circumstances where this issue of agreement may well elude us. (Man proposes but does not dispose with respect to this matter of idea/actuality coordination.) We do, no doubt, purport our conceptions to be correct, but whether this is indeed so is something we cannot tell with assurance until “all the returns are in”—that is, never. This fact renders it critically important that (and understandable why) conceptions are communicatively irrelevant. Our discourse reflects our conceptions and perhaps conveys them, but it is not substantive about them. And it is this commitment to realism that also opens the way to error. The subjective conception people have of things may be the vehicle of thought, but it is never the determinant of reference. By their very nature, conceptions are too personal—and thus potentially too idiosyncratic—for our communicative needs. For communication, interpersonal and public instrumentalities are indispensably requisite.
7 Benedictus de Spinoza, Ethics, Bk. 1, axiom 6.
And language affords this desideratum. It provides the apparatus by which the identity of the referents of our discourse becomes fixed, however imperfectly we ourselves perceive their nature. (The specifications of things as enshrined in language are Kripkean “rigid designators” in an epistemic manner: our indicators for real-things-in-the-world are designed in both senses, constructed and intended to perform—insofar as possible—an invariant identificatory job across the diversified spectrum of epistemic worlds.)

How do we really know that Anaximander was talking about our sun? He isn’t here to tell us. He didn’t leave elaborate discussions about his aims and purposes. How can we be so confident of what he meant to talk about? The answer is straightforward. That he is to be taken to talk about our sun is something that turns, in the final analysis, on two very general issues in which Anaximander himself plays little if any role at all: (1) our subscription to certain generalized principles of interpretation with respect to the Greek language, and (2) the conventionalized subscription by us and ascription to other language-users in general of certain fundamental communicative policies and intentions. In the face of appropriate functional equivalences we allow neither a difference of language nor a difference of “thought-worlds” to block a commonality of reference.

The commitment to objectivity is basic to our discourse with one another about a shared world of “real things” to which none of us is in a position to claim privileged access. This commitment establishes a need to “distance” ourselves from things—i.e., to recognize the prospect of a discrepancy between our (potentially idiosyncratic) conceptions of things and the true character of these things as they exist objectively in “the real world”. The ever-present contrast between “the thing as we view it” and “the thing as it is” is the mechanism by which this crucially important distancing is accomplished.

The overarching INTENTION to communicate about a common object—abandoning any and all claims to regard our own conceptions of it as definitive (decisive)—is the indispensable foundation of all communication. And this intention is not something personal and idiosyncratic—a biographical aspect of certain particular minds—it is a shared feature of “social mind”, built into the use of language as a publicly available communicative resource. The wider social perspective is crucial. In subscribing to the conventionalized intention at issue, we sink “our own point of view” in the interests of entering into the wider community of fellow communicators. Only by admitting the potential distortion of one’s own conceptions of things through “communicative parallax” can one manage to reach across the gulf of divergent conceptions so as to get into communicative touch with one another.
In this context, the pretension-humbling stance of a cognitive Copernicanism is not only a matter of virtue, but one of necessity as well. It is the price we pay for keeping the channels of communication open.

The information that we may have about a thing—be it real or presumptive information—is always just that, viz. information that WE lay claim to. We cannot but recognize that it is person-relative and in general person-differentiated. However, our attempts at communication and inquiry are thus undergirded by an information-transcending stance—the stance that we communally inhabit a shared world of objectively existing things—a world of “real things” amongst which we live and into which we inquire but about which we do and must presume ourselves to have only imperfect information at any and every particular stage of the cognitive venture. This is not something we learn. The “facts of experience” can never reveal it to us. It is something we postulate or presuppose. Its epistemic status is not that of an empirical discovery, but that of a presupposition that is a product of a transcendental argument for the very possibility of communication or inquiry as we standardly conceive of them.

True enough, cognitive change carries conceptual change in its wake. But nevertheless—and this point is crucial—we have an ongoing commitment to a manifold of objective things that are themselves impervious to conceptual and cognitive change. This commitment is built into the very ground-rules that govern our use of language and embody our determination to maintain the picture of a relatively stable world amidst the ever-changing panorama of cognitive world-pictures. The continuing succession of the different states of science is linked to a pre- or sub-scientific view of an ongoing “real world” in which we live and work, a world portrayed rather more stably in the lingua franca of everyday-life communication and populated by shared things whose stability amidst cognitive change is something rather postulated than learned. This postulation reflects the realistic stance that the things we encounter in experience are the subject and not the product of our inquiries. And it serves a supremely important function in limiting the impact of error on the practicability of interpersonal communication. The imperfect information of finite knowers is fundamentally no impediment to their effective communication regarding a shared world.8
fundamentally no impediment to their effective communication regarding a shared world.8
8 On the issues of this chapter see also the author’s Error (Pittsburgh: University of Pittsburgh Press, 2006).
Chapter 3

SCEPTICISM AND FINITUDE
___________________________________________________

SYNOPSIS

(1) The vulnerability of our putative knowledge greases the slippery slope from fallibility to scepticism. (2) Scepticism mistakenly dreads errors of commission and deems errors of omission irrelevant. (3) Instead, rationality calls for a sensible balance of risk with respect to these negativities. (4) Scepticism is simply an irrational overreaction against the inevitable risk of error.
___________________________________________________

1. SCEPTICISM
Nature may or may not abhor a vacuum, but the human mind certainly does. We need to resolve our questions and are so constituted that having a wrong answer can be preferable for us to having none at all, the prospect of misinformation often being more acceptable than ignorance. And it is here—in relation to our irrepressible quest for information—that error makes its way upon the scene as the great spoiler of the cognitive enterprise.

How do errors come about; what are their sources or causes? There is no prospect of a complete inventory here: the avenues of error are too numerous to admit of anything like a comprehensive listing. The most we can do here is to give a handful of prominent examples with regard to cognitive error: oversight, misjudgment, confusion and conflation, miscalculation, under- and over-estimation, and conclusion-leaping. And correspondingly, error reduction can take many forms: concentration of effort, double checking, proofreading, second opinions, etc. And then there is also the issue of damage control—of measures we can take to minimize the consequences of error if and when they occur despite our best efforts at minimizing them.

The prominence of error in our affairs is an open invitation to scepticism, the philosophical doctrine that maintains the infeasibility of attaining
knowledge. This can take many forms, according as it holds that in claiming to know some particular fact p

• we claim more than is really warranted by the information actually at our disposal (ampliative scepticism), and accordingly—

• we may well be wrong about it (fallibilistic scepticism), and therefore should be prepared for the eventuality that—

• we are (always and invariably) going to be wrong about it so that authentic knowledge is simply unattainable (radical scepticism).

The fallibility of much of our factual knowledge of the world is rather exhibited than refuted by a consideration of scientific knowledge. For the status of our knowledge as merely purported knowledge is nowhere clearer than with science. Our scientific “knowledge” is by no means as secure and absolute as is generally pretended. If there is one thing we can learn from the history of science, it is that the science of one day is looked upon on the next as naive, deficient, and basically wrong from the vantage point of the wisdom of hindsight. The clearest induction from the history of science is that science is always mistaken—that at every stage of its development its practitioners, looking backwards with the vision of hindsight, view the work of their predecessors as seriously misinformed and mistaken in very fundamental respects.

In due realism it is necessary to adopt an epistemological Copernicanism here—a view that rejects the egocentric claim that we ourselves occupy a pivotal position in the epistemic dispensation. After all, there is every reason to think that where scientific knowledge is concerned further knowledge does not just supplement but generally corrects our knowledge-in-hand, so that the incompleteness of our information implies its presumptive incorrectness as well. We must recognize that there is nothing inherently sacrosanct about our own present cognitive posture vis-à-vis that of other, later historical junctures. A kind of intellectual humility is called for—a self-abnegatory diffidence that abstains from the hubris of pretensions to cognitive finality or centrality. The original Copernican revolution made the point that there is nothing ontologically privileged about our own position in space. The doctrine now at issue effectively holds that there is nothing cognitively privileged about our own position in time. It urges that there is nothing epistemically privileged about the present—ANY present, our own prominently included.
Such a perspective indicates not only the incompleteness of “our knowledge” but its presumptive incorrectness as well. All this brings much grist to the sceptic’s mill. What is the sensible way to deal with it?

2. SCEPTICISM AND ERROR AVOIDANCE

To be sure, agnosticism is a sure-fire safeguard against errors of commission in cognitive matters. If you accept nothing then you accept no falsehoods. But error avoidance as such does not bring one much closer to knowing how pancakes are actually made. The aims of inquiry are not necessarily enhanced by the elimination of cognitive errors of commission. For if in eliminating such an error we simply leave behind a blank and for a wrong answer substitute no answer at all we have simply managed to exchange an error of commission for one of omission. But the fact remains that errors of commission are not the only sort of misfortune there is. Ignorance, lack of information, cognitive disconnection from the world’s course of things—in short, errors of omission—are also negativities of substantial proportions. This too is something we must work into our reckoning.

In claiming that his position wins out because it makes the fewest mistakes, the sceptic uses a fallacious system of scoring, for while he indeed makes the fewest errors of one kind, he does this at the cost of proliferating those of another. Once we look on this matter of error realistically, the sceptic’s vaunted advantage vanishes. The sceptic is simply an overcautious risk avoider whose aversion to risk takes the form of a stubborn insistence on minimizing errors of the second kind alone, heedless of the errors of the first kind into which he falls at every opportunity.

Ultimately, we face a question of value trade-offs. Are we prepared to run a greater risk of mistakes to secure the potential benefit of an enlarged understanding? In the end, the matter is one of priorities—of safety as against information, of ontological economy as against cognitive advantage, of an epistemological risk aversion as against the impetus to understanding. The ultimate issue is one of values and priorities, weighing the negativity of ignorance and incomprehension against the risk of mistakes and misinformation.

Still, measures to reduce error are seldom cost-free. Double checking takes time and effort, warning signals at railway crossings cost money. Such measures only diminish error but do not eliminate it. Thanks to their inherence
in the chance and chaos that rule the world, errors are always out there waiting to be made. (“Die Fehler sind da um gemacht zu werden” [mistakes are there to be made], my father’s drill sergeant told him in 1914.) Eliminating error, like creating a vacuum, becomes exponentially more difficult as we approach nearer to an unattainable perfection. The reduction of error is a venture in diminishing returns whose pursuit is bound to become impracticable at some point. We cannot eliminate error but can—at best—display care and caution in coming to terms with it.

The crucial fact is that inquiry, like virtually all other human endeavors, is not a cost-free enterprise. The process of getting plausible answers to our questions also involves costs and risks. Whether these costs and risks are worth incurring depends on our valuation of the potential benefit to be gained. And unlike the committed sceptic, most of us deem the value of information about the world we live in to be a benefit of so great a value as to make it well worthwhile to incur the substantial costs that can be involved.

3. RATIONALITY AND THE RISK OF ERROR

Rationality is closely connected to error avoidance. However, a rational cognitive agent need not be someone who commits no errors—this status being unachievable in principle for the finite beings that we are. Rather it is someone who makes a concerted endeavor to employ those methods, procedures, and processes that yield the minimum of error—those of omission included. Rationality is, after all, a matter of achieving a favorable balance of truth over ignorance and falsehood in matters of belief, and of success over failure and frustration in matters of action.

The reality of it is that Homo sapiens has evolved within nature to fill the ecological niche of an intelligent being. The demand for understanding, for a cognitive accommodation to one’s environment, for “knowing one’s way about” is one of the most fundamental requirements of the human condition. Humans are Homo quaerens. We have questions and want (nay, need) answers. And the need for information, for cognitive orientation in our environment, is as pressing a human need as that for food itself. We are rational animals and must feed our minds even as we must feed our bodies. In pursuing information, as in pursuing food, we have to settle for the best we can get at the time. We have questions and require the best answers we can get here and now, regardless of their possible imperfections. We cannot live a satisfactory life in an environment we do not understand. For us,
cognitive orientation is itself a practical need: cognitive disorientation is actually stressful and distressing. The need for knowledge is part and parcel of our nature. A deep-rooted demand for information and understanding presses in upon us, and we have little choice but to satisfy it. Once the ball is set rolling it keeps on under its own momentum—far beyond the limits of strictly practical necessity. The great Norwegian polar explorer Fridtjof Nansen put it well. What drives men to the polar regions, he said, is

The power of the unknown over the human spirit. As ideas have cleared with the ages, so has this power extended its might, and driven Man willy-nilly onwards along the path of progress. It drives us into Nature’s hidden powers and secrets, down to the immeasurably little world of the microscopic, and out into the unprobed expanses of the Universe … it gives us no peace until we know this planet on which we live, from the greatest depth of the ocean to the highest layers of the atmosphere. This Power runs like a strand through the whole history of polar exploration. In spite of all declarations of possible profit in one way or another, it was that which, in our hearts, has always driven us back there again, despite all setbacks and suffering.
The discomfort of unknowing is a natural component of human sensibility. Being ignorant of what goes on about us is almost physically painful for us—no doubt because it is so dangerous from an evolutionary point of view. Reason’s commitment to the cognitive enterprise of inquiry is absolute and establishes an insatiable demand for extending and deepening the range of our information. As Aristotle observed, “Man by nature desires to know.”

4. THE POVERTY OF SCEPTICISM

Scepticism not only runs us into practical difficulties by impeding the cognitive direction of action, but it has great theoretical disadvantages as well. Its problem is that it treats the avoidance of mistakes as a paramount good, one worth purchasing even at a considerable cost in ignorance and lack of understanding. For the radical sceptic’s seemingly high-minded insistence on definitive truth, in contradistinction to merely having reasonable warrant for acceptance—duly followed by the mock-tragic recognition that this is of course unachievable—is totally counterproductive. It blocks from the very outset any prospect of staking reasonable claims to information about the ways of the world. To be sure, the averting of errors of
commission is a very good thing when it comes free of charge, or at any rate cheap. But it may well be bought at too high a cost when it requires us to accept massive sacrifices in forgoing the intellectual satisfaction of explanation and understanding. It would of course be nice if we could separate errors of commission from errors of omission and deploy a method free from both. But the realities do not permit this. Any method of inquiry that is operable in real life is caught up in the fundamental trade-off between errors of omission and commission. From such a standpoint, it becomes clear that scepticism purchases the avoidance of mistakes at an unacceptable price.

After all, no method of inquiry, no cognitive process or procedure that we can operate in this imperfect world, can be altogether failure free and totally secure against error of every description. Any workable screening process will let some goats in among the sheep. With our cognitive mechanisms, as with machines of any sort, perfection is unattainable; the prospect of malfunction can never be eliminated, and certainly not at any acceptable price. Of course, we could always add more elaborate safeguarding devices. (We could make automobiles so laden with safety devices that they would become as large, expensive, and cumbersome as busses.) But that defeats the balance of our purposes. A further series of checks and balances prolonging our inquiries by a week (or a decade) might avert certain mistakes. But for each mistake avoided, we would lose much information. Safety engineering in inquiry is like safety engineering in life. There must be proper balance between costs and benefits. If accident avoidance were all that mattered, we could take our mechanical technology back to the stone age, and our cognitive technology as well. The sceptic’s insistence on safety at any price is simply unrealistic—if only on the essentially economic basis of a sensible balance of costs and benefits. Risk of error is worth running because it is unavoidable in the context of the cognitive project of rational inquiry. Here as elsewhere, the situation is simply one of nothing ventured, nothing gained.

Since Greek antiquity, various philosophers have answered our present question, Why accept anything at all?, by taking the line that man is a rational animal. Qua animal, he must act, since his very survival depends upon action. But qua rational being, he cannot act availingly, save insofar as his actions are guided by his beliefs, by what he accepts. This argument has been revived in modern times by a succession of pragmatically minded thinkers, from David Hume to William James.
The sceptic seemingly moves within the orbit of rationality, but only seemingly so. For, in fact, scepticism runs afoul of the only promising epistemological instrumentalities that we have. Philosophical sceptics generally set up some abstract standard of absolutistic certainty and then try to show that no knowledge claims in a certain area (sense, memory, scientific theory, and the like) can possibly meet the conditions of this standard. From this circumstance, the impossibility of such a category of “knowledge” is accordingly inferred. But this inference is totally misguided. For, what follows is rather the inappropriateness or incorrectness of the standard at issue. If the vaunted standard is such that knowledge claims cannot possibly meet it, the moral is not “too bad for knowledge claims”, but “too bad for the standard”. Any position that precludes in principle the possibility of valid knowledge claims thereby effectively manifests its own unacceptability.

The traditional pragmatic argument against sceptical agnosticism goes roughly as follows: On the plane of abstract, theoretical reasoning the sceptical position is, to be sure, secure and irrefutable. But scepticism founders on the structure of the human condition—that man finds himself emplaced in medias res within a world where his very survival demands action. And the action of a rational being requires the guidance of belief. Not the inferences of theory and cognition but the demands of practice and action make manifest the untenability of the sceptic’s position. Conceding that scepticism cannot be defeated on its own ground, that of pure theory, it is held to be invalidated on practical grounds by an incapacity to support the requisites of human action. Essentially this argument is advanced by such diverse thinkers as the Academic Sceptics of classical antiquity, David Hume and William James.1

Unfortunately, however, this traditional approach leaves it open for the sceptic to take to the high ground of a partisan of rigorous rationality. For the sceptic may well take the following line: This charge of stultifying practice is really beneath my notice. Theoretical reason and abstract rationality are what concerns the true philosopher. The issue of what is merely practical does not concern me. As far as “mere practice” goes, I am perfectly prepared to conform my actions
1 On these issues see the author’s Scepticism (Oxford: Basil Blackwell, 1980).
to the pattern that men in general see fit to follow. But one should recognize that the demands of theoretical rigor point in another—and altogether sceptical—direction.

The present pragmatic line of argument does not afford the sceptic this comfortable option. Its fulcrum is not the issue of practice as such, but rather that of pragmatic rationality. For it is our specifically cognitive practice of rational inquiry and argumentation that is central here, and in affecting to disdain this the sceptic must now turn his back not simply on the practice of ordinary life, but rationality itself. His “victory” is futile because he conveniently ignores the fact that the whole enterprise of reason-giving is aimed at rationale construction and is thus pointless save in the presence of a route to adequacy in this regard—the standard machinery for assessing probative propriety. The sceptic in effect stands in the unhappy position of being unwilling or unable to abide by the evidential ground rules that govern the management of rational deliberation in human affairs. In the final analysis, the sceptic thus runs afoul of the demands of that very rationality in whose name he so high-mindedly claims to speak.

Rationality, after all, is not a matter of logic alone—of commitment to the logical principles of consistency (i.e., not to accept what contradicts accepted premisses) and completeness (i.e., to accept what is entailed by accepted premisses), which are, after all, purely hypothetical in nature (“If you accept …, then”). For cognitive validation rests not upon discursive reasoning alone but upon inductive immediacy as well. It is not just a hypothetical issue of making proper inferences from given premisses; it involves also the categorical issue of giving their proper evidential weight to the premisses themselves. Thus rationality indispensably requires a categorical and material constraint inherent in the conception of evidence—namely, to abide by the established evidential ground rules of various domains of discussion in terms of the locus of presumption and the allocation of benefit of doubt.

The sceptic is not embarked on a defense of reason, but on a self-imposed exile from the enterprise of cogent discussion and the community of rational inquirers. And at this juncture he is no longer left in possession of the high ground. In refusing to give to the standard evidential considerations the presumptive and prima facie weight that is their established value on the market of rational interchange, the Sceptic, rather than being the defender of rigid reason, is in fact profoundly irrational. The sceptic seemingly moves within the orbit of rationality. But by his refusal to acknowledge
the ordinary probative rules of plausibility, presumption, evidence, etc., he effectively opts out of the rational enterprise of our standard practice in the interests of an inappropriately hyperbolic standard of what can count as knowledge. Scepticism defeats from the very start any prospect of realizing our cognitive purposes and aspirations. It runs counter to the teleological enterprise to which we humans stand committed in virtue of being the sort of intelligent creatures we are. It is ultimately this collision between scepticism and our need—alike practical and theoretical—for the products of rational inquiry that makes the rejection of scepticism a rational imperative. The recognition of cognitive finitude need not—should not—engender scepticism. For in the end, scepticism is simply an irrational overreaction to the unavoidable risk of error in cognition—a risk that is simply inevitable when errors of omission and commission are both alike cast into the balance.2
2 On the issues of this chapter see also the author’s Scepticism (Oxford: Basil Blackwell, 1980).
Chapter 4

LIMITS OF COGNITION

A LEIBNIZIAN PERSPECTIVE ON THE QUANTITATIVE DISCREPANCY BETWEEN LINGUISTIC TRUTH AND OBJECTIVE FACT
______________________________________________________

SYNOPSIS

(1) Propositional knowledge is a matter of a textualization that hinges on linguistic realizability. (2) As Leibniz already saw, this imposes limits on the extent of knowledge. (3)-(6) For while statements (and thus truths and realizable knowledge) are enumerable, actual objective facts are not. For there is good reason to think that facts are inexhaustible and nondenumerable. (7) Reality bursts the bounds of textualization: there are more facts than truths. (8) There are various facts that we finite beings cannot manage to know. Truths and facts are locked in a game of Musical Chairs. (9) And so, there is no warrant for the view that reality as such answers faithfully to what we finite intelligences can know of it.
______________________________________________________

1. HOW MUCH CAN A PERSON KNOW? LEIBNIZ ON LANGUAGE COMBINATORICS

Over and above the issue of what people do know there is also that of what is knowable. How much can someone possibly know? What could reasonably be viewed as an upper limit of an individual’s knowledge—supposing that
factually informative knowledge rather than performative how-to knowledge or subliminally tacit knowledge is to be at issue? The extraction of knowledge from mere information becomes exponentially more demanding in the course of cognitive progress. However, while finite resources will doubtless impose limits in practice, nevertheless, the process is one which in principle goes endlessly on and on. Does this mean that there is really no theoretical limit to the enlargement of knowledge?

In pursuing this question, it is convenient to take a textual approach. Accordingly, let us suppose someone with perfect recall who devotes a long lifespan to the acquisition of information. For 70 years this individual spends 365 days per annum reading for 12 hours a day at the rate of 60 pages an hour (with 400 words per page). That yields a lifetime reading quota of some 7.4 x 10^9 words. Optimistically supposing that, on average, a truth regarding some matter of fact or other takes only some seven words to state, this means a lifetime access to some 10^9 truths, around a billion of them: 1,000,000,000. No doubt most of us are a great deal less well informed than this. But it seems pretty well acceptable as an upper limit to the information that a human individual could acquire, a limit that one could probably not reach and certainly not exceed.

After all, with an average of 400 pages per book, the previously indicated lifetime reading quota would come to some forty-six thousand books. The world’s largest libraries, the Library of Congress for example, nowadays have somewhere around 20 million books (book-length assemblages of monographs and pamphlets included). And it would take a very Hercules of reading to make his way through even one-quarter of one percent of so vast a collection (= 50,000), which is roughly what our aforementioned reading prodigy manages. And this means that while a given individual can read any book (so that there are no inherently unreadable books), the individual cannot possibly read every book (so that for any one of us there are bound to be very many unread books indeed). If mastery of Library of Congress-encompassed material is to be the measure, then few of us would be able to hold our heads up very high.1
1 To be sure, there lurks in the background here the question of whether having mere information is to count as having knowledge. With regard to this quantitative issue it has been argued here that authentic knowledge does not increase proportionally with the amount of information as such, but only with its logarithm. This would suggest that the actual knowledge within the Library of Congress’s many volumes could be encompassed telegraphically in some far more modest collection, so that our Herculean reader could access about half of the actual knowledge afforded by the LC’s vast collection.
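The arithmetic behind these estimates is easily re-run. The following sketch simply recomputes the figures assumed in the text (70 years of 365 twelve-hour reading days, 60 pages per hour at 400 words per page, seven words per truth, 400-page books, a 20-million-volume library); the variable names are illustrative, not the author's.

```python
# Re-running the lifetime reading-quota arithmetic assumed in the text.
years, days, hours = 70, 365, 12
pages_per_hour, words_per_page = 60, 400

lifetime_words = years * days * hours * pages_per_hour * words_per_page
print(f"lifetime words read: {lifetime_words:.2e}")                 # ~7.4e+09

words_per_truth = 7
print(f"truths accessed: {lifetime_words / words_per_truth:.1e}")   # ~1.1e+09

pages_per_book = 400
lifetime_books = lifetime_words / (words_per_page * pages_per_book)
print(f"books read: {lifetime_books:,.0f}")                         # ~45,990

library = 20_000_000
print(f"share of a 20-million-volume library: {lifetime_books / library:.2%}")  # ~0.23%
```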
All this, of course, still only addresses the question of how much knowledge a given person—one particular individual—can manage to acquire. There yet remains the question of how much is in principle knowable—that is, can be known. And here it is instructive to begin with the perspective of the great seventeenth century polymath G. W. Leibniz (1646-1716).

Leibniz took his inspiration from The Sand Reckoner of Archimedes, who in this study sought to establish the astronomically large number of sand grains that could be contained within the universe defined by the sphere of the fixed stars of Aristotelian cosmology—a number Archimedes effectively estimated at 10^50. Thus even as Archimedes addressed the issue of the scope of the physical universe, so Leibniz sought to address the issue of the scope of the universe of thought.2 For just this is what he proceeded to do in a fascinating 1693 tract On the Horizon of Human Knowledge, De l’horizon de la doctrine humaine.3 Here Leibniz pursued this project along textual lines. He wrote:

All items of human knowledge can be expressed by the letters of the alphabet … so that it follows that one can calculate the number of truths of which humans are capable and thus compute the size of a work that would contain all possible human knowledge, and which would contain all that could ever be known, written, or invented, and more besides. For it would contain not only the truths, but also all the falsehoods that men can assert, and meaningless expressions as well.4
Thus if one could set an upper limit to the volume of printed matter accessible to inquiring humans, then one could map out by combinatorial means
2 On Archimedes’ estimate see T. L. Heath, The Works of Archimedes (Cambridge: Cambridge University Press, 1897).
3 See G. W. Leibniz, De l’horizon de la doctrine humaine, ed. by Michael Fichant (Paris: Vrin, 1991). There is a partial translation of Leibniz’s text in “Leibniz on the Limits of Human Knowledge,” by Philip Beeley, The Leibniz Review, vol. 13 (December 2003), pp. 93-97. (Note that in old French “doctrine” means knowledge.) It is well known that Leibniz invented entire branches of science, among them the differential and integral calculus, the calculus of variations, topology (analysis situs), symbolic logic, and computers. But he deserves to be seen as a pioneer of epistemetrics as well. The relevant issues are analyzed in Nicholas Rescher, “Leibniz’s Quantitative Epistemology,” Studia Leibnitiana, vol. 37 (2005).
4 Op. cit., pp. 37-38.
the whole manifold of accessible verbal material—true, false, or gibberish—in just the manner that Leibniz contemplated. Any alphabet devisable by man will have only a limited number of letters (Leibniz here supposes the Latin alphabet of 24 letters, which takes in w and W). So even if we allow a word to become very long indeed (Leibniz overgenerously supposes 32 letters5) there will be only a limited number of words that can possibly be formed (namely 24 exp 32). And so, if we suppose a maximum to the number of words that a single run-on, just barely intelligible sentence can contain (say 100), then there will be a limit to the number of potential “statements” that can possibly be made, namely 100 exp (24 exp 32).6 This number is huge indeed—far bigger than Archimedes’ sand grains. Nevertheless, it is still finite, limited. Moreover, with an array of basic symbols different from those of the Latin alphabet, the situation is changed in detail but not in structure. (And this remains the case even if one adds the symbols at work in mathematics, where Descartes’ translation of geometrically pictorial propositions into algebraically articulated format stood before Leibniz’s mind, to say nothing of his own project of a universal language and a calculus ratiocinator.7)
5 The longest word I have seen in actual use is the 34-letter absurdity supercalifragilisticexpialidocious from the movie “Mary Poppins”.
6 G. W. Leibniz, De l’horizon (op. cit.), p. 11. This of course long antedates the (possibly apocryphal) story about the Huxley-Wilberforce debate which has Huxley arguing that sensible meaning could result from chance process because a team of monkeys typing at random would eventually produce the works of Shakespeare—or (on some account) all the books in the British Library, including not only Shakespeare’s works but the Bible as well. (The story—also found in Sir Arthur Eddington’s The Nature of the Physical World (London: Macmillan, 1929), pp. 72-73—is doubtless fictitious, since the Huxley-Wilberforce debate of 1860 antedated the emergence of the typewriter.) However, the basic idea goes back at least to Cicero: “If a countless number of the twenty-one letters of the alphabet ... were mixed together it is possible that when cast on the ground they should make up the Annals of Ennius, able to be read in good order” (De natura deorum, II, 27). The story launched an immense discussion that continues actively on the contemporary scene as is readily attested by a Google or Yahoo search for “typing monkeys”. It has also had significant literary repercussions as is exemplified by Jorge Luis Borges’ well-known story of “The Library of Babel” which contains all possible books.
7 Louis Couturat, La logique de Leibniz (Paris: Alcan, 1901) is still the best overall account of this Leibnizian project.
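To get a feel for the magnitudes just cited, the bounds can be recomputed directly. This is only an illustrative sketch using the figures reported in the text (a 24-letter alphabet, words of at most 32 letters, sentences of at most 100 words); the variable names are mine.

```python
# Orders of magnitude for the combinatorial bounds reported in the text.
import math

letters, max_word_len, max_sentence_words = 24, 32, 100

word_bound = letters ** max_word_len          # "24 exp 32": the bound on possible words
print(len(str(word_bound)))                   # 45 -> 24^32 is a 45-digit number (about 1.5e44)

# The stated bound on statements, 100 exp (24 exp 32), cannot be written out;
# even its number of decimal digits is astronomical:
digits_in_statement_bound = word_bound * math.log10(max_sentence_words)
print(f"{digits_in_statement_bound:.2e}")     # about 3.0e+44 digits

# Archimedes' sand-grain estimate of 10^50, by contrast, has a mere 51 digits.
print(len(str(10 ** 50)))                     # 51
```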
The crux of Leibniz’s discussion is that any propositionalizable fact can in principle be spelled out in print. And there is only so much, so finitely much, that can be stated in sentences of intelligible length—and so also that can explicitly be thought of by beings who conduct their thinking in language. Moreover, since this encompasses fiction as well, our knowledge of possibility is also finite, and fiction is for us just as much language-limited as is the domain of truth.

2. THE LEIBNIZIAN PERSPECTIVE

The moment one sets a realistic limit to the length of practicably meaningful sentences one has to realize that the volume of the sayable is finite—vast though it will be. And this means that as long as people transact their thinking in language—broadly understood to encompass the whole diversity of symbolic devices—the thoughts they can have—and thereby the things they possibly can know—will be limited in number.

Moving further along these lines, let it be that the cognitive (in contrast to the affective) thought-life of people consists of the language-framed propositions that they consider. And let us suppose that people can consider textualized propositions at about the same speed at which they can read—optimistically, say, some 60 pages per hour where each page consists of 20 sentences. Assuming a thought-span of 16 waking hours on average, it will then transpire that in the course of a year a person can entertain a number of propositional thoughts equal to:

365 x 16 x 60 x 20 ≅ 7 x 10^6

So subject to the hypotheses at issue, this is how much material one would need in order to replicate in print the stream of consciousness thought-life of a person for an entire year. Once again, this number of seven million, though not small, is nevertheless limited. And these limits will again finitize the combinatorial possibilities. There is only so much thinking that a person can manage. And in the context of a finite species, these limits of language mean that there are only so many thoughts to go around—so many manageable sentences to be formulated. Once again we are in the grip of finitude.
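The figure of roughly seven million is quickly checked; a minimal sketch under the stated assumptions (365 days, 16 waking hours, 60 pages per hour, 20 sentences per page):

```python
# Propositional thoughts entertained per year, on the text's assumptions.
days, waking_hours, pages_per_hour, sentences_per_page = 365, 16, 60, 20

thoughts_per_year = days * waking_hours * pages_per_hour * sentences_per_page
print(f"{thoughts_per_year:,}")   # 7,008,000 -- roughly 7 x 10^6, as stated
```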
Now as Leibniz saw it, matters can be carried much further. For the finitude at issue here has highly significant implications. Consider an analogy. Only a finite number of hairs will fit on a person’s head—say 1,000. So when there are enough individuals in a group (say 1001 of them) then two of them must have exactly the same number of hairs on their heads. And so also with thoughts. Even as hairs have to fit within the available dermatology, so thoughts will have to fit within the available textuality. If there are sufficiently many thinking intelligences in the aeons of cosmic history while yet the number of thoughts—and thus also thought-days and thought-lives—is finite, then there will inevitably be several people in a sufficiently large linguistic community whose thoughts are precisely the same throughout their lives.

It also becomes a real prospect that language imposes limits on our grasp of people and their doings. Thus suppose that the Detailed Biography of a person is a minute-by-minute account of their doings, allocating (say) 10 printed lines to each minute, and so roughly 15,000 lines per day to make up a hefty volume of 300 fifty-line pages. So if a paradigmatic individual lives 100 years we will need 365 x 100 or roughly 36,500 such substantial tomes to provide a comprehensive step-by-step account of his or her life. But, as we have seen, the number of such tomes, though vast, is limited. In consequence, there are only so many Detailed Biographies to go around, so that it transpires that the number of Detailed Biographies that is available is also finite. This, of course, means that: If the duration of the species were long enough—or if the vastness of space provided sufficiently many thinkers—then there would have to be some people with exactly the same Detailed Biography. Given enough agents, eventual repetitions in point of their doings become inevitable.

And now, moving on from biographies (or diaries) to public annals, Leibniz thought to encounter much the same general situation once again. Thus suppose that (as Leibniz has it) the world’s population is one hundred million (that is, 10^8) and that each generation lives (on average) for 50 years. Then in the 6,000 years during which civilized man may be supposed to have existed, there have lived some 1.2 x 10^10 people—or some 10^10 of them if we assume smaller generations in earlier times.8 Recall now the above-mentioned idea of 36,500 hefty tomes needed to characterize in detail the life of an individual. It then follows that we would need some 36.5 x 10^13 of them for a complete history of the species. To be sure, we thus obtain an astronomically vast number of possible overall annals for mankind as a whole.
8 Leibniz, De l’horizon, p. 112.
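Those biography and annals figures can likewise be recomputed from the stated assumptions (10 printed lines per minute, 300 fifty-line pages per volume, a 100-year life, a rounded head-count of 10^10 people); the sketch below is illustrative only.

```python
# Detailed Biographies and world annals, re-run with the figures given in the text.
lines_per_day = 10 * 24 * 60            # 10 lines per minute -> 14,400, "roughly 15,000"
lines_per_volume = 300 * 50             # one hefty volume of 300 fifty-line pages
print(round(lines_per_day / lines_per_volume, 2))   # ~0.96: about one volume per day

volumes_per_life = 365 * 100            # a 100-year life at one volume a day
print(volumes_per_life)                 # 36,500 tomes per Detailed Biography

people = 10 ** 10                       # Leibniz's rounded count of past humans
print(f"{float(volumes_per_life * people):.2e}")    # 3.65e+14 = 36.5 x 10^13 tomes
```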
But though vast, this number will nevertheless be finite. And so, if the history of the race is sufficiently long, then some part of its extensive history will have to repeat itself in full with a parfaite repetition mot pour mot [a perfect word-for-word repetition], since there are only so many possible accounts of a given day (or week or year). For, once again, there are only a finite number of possibilities to go around, and somewhere along the line total repetitions will transpire and life stories will occasionally recur in toto (ut homines novi eadem ad sensum penitus tota vita agerent, quae alii jam egerunt9 [so that new men would, to all intents and purposes, do throughout their whole lives exactly what others have already done]).

As Leibniz thus saw it, the finitude of language and its users carries in its wake the finitude of possible diaries, biographies, histories—you name it, including even possible thought-lives in the sense of propositionalized streams of consciousness as well. Even as Einstein with his general relativity (initially) saw himself as finitizing the size of the physical universe, so Leibniz’s treatise saw the size of mankind’s cognitive universe as a manifold of limited horizons—boundless but finite.

It was accordingly a key aspect of Leibniz’s thought that human understanding cannot keep up with reality. For Leibniz, the propositional thought of finite creatures is linguistic and thereby finite and limited. But he also held that reality—as captured in the thought of God, if you will—is infinitely detailed. Only God’s thought can encompass it, not ours. Reality’s infinite detail thus carries both costs and benefits in its wake. Its cost is the unavoidability of imperfect comprehension by finite intelligences. Its benefit is the prospect of endless variability and averted repetition. And the result is a cognitively insuperable gap between epistemology and metaphysics. Everything that humans can say or think by linguistic means can be comprehended in one vast but finite Universal Library.10

But what do these Leibnizian ruminations mean in the larger scheme of things? Twentieth century philosophers of otherwise the most radically different orientation have agreed on prioritizing the role of language. “The limits of my language set the limits of my world” (“Die Grenzen meiner Sprache bedeuten die Grenzen meiner Welt”) says the Wittgenstein of the Tractatus (at 5.6). “There is nothing outside text” (Il n’y a pas de hors de texte) say the devotees of French deconstructionism.
9 Ibid., p. 54.
10 This of course leads back to the mega-library of J. L. Borges.
But already centuries earlier Leibniz had taken the measure of this sort of textualism. He looked at it closely and saw that it could not be sustained.

3. STATEMENTS ARE ENUMERABLE, AS ARE TRUTHS

The preceding deliberations have unfolded on the basis of the emphatically contingent supposition that there are certain limits to human capabilities—and, in particular, to the length of the words and sentences with which our discourse can effectively operate. But let us now also waive this (otherwise surely realistic) restriction and break through the limits of finitude in the interests of getting a grip on the general principles of the matter.

Even if one construes the idea of an “alphabet” sufficiently broadly to include not only letters but symbols of various sorts, it still holds that everything stateable in a language can be spelled out in print through the combinational concatenation of some sequential register of symbols.11 And with a “language” construed as calling for development in the usual recursive manner, it transpires that the statements of a language can be enumerated in a vast and indeed infinite but nevertheless ultimately countable listing.12 But since the world’s languages, even if not finite in number, are nevertheless at most enumerable, it follows that the set of all statements—including every linguistically formulable proposition—will be enumerably infinite (and thus have the transfinite cardinality that mathematicians designate as aleph-zero). As a matter of principle, then, we obtain:

Thesis 1: The Enumerability of Statements. Statements (linguistically formulated propositions) are enumerable and thus (at most) denumerably infinite.

Our linguistic resources for describing concrete states of affairs are thus subject to quantitative limitation. And insofar as our thought about things proceeds by recursively developed linguistic means it is inherently limited
11 Compare Philip Hugly and Charles Sayward, ‘Can a Language Have Indenumerably Many Expressions?’ History and Philosophy of Logic, vol. 4, 1983.
12 This supposes an upper limit to the length of intelligible statements. And even if this restriction were waived, the number of statements will still be no more than countably infinite.
in its reach within the confines of countability. And so the upshot is that the limits of textuality impose quantitative limitations upon propositionalized thought—albeit not limits of finitude. Being inherently linguistic in character, truths are indissolubly bound to textuality, seeing that any language-framed declaration can be generated recursively from a sequential string of symbols—i.e., that all spoken language can in principle be reduced to writing. Since they correspond to statements, it follows that truths cannot be more than countably infinite. And on this basis we have: Thesis 2:
The Denumerability of Truth. While the manifold of the truth cannot be finitely inventoried, nevertheless, truths are no more than denumerably infinite in number.
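The set-theoretic core of Theses 1 and 2 can be compressed into a line of standard notation; this is merely a schematic restatement of the argument just given, writing Σ for the (at most countable) stock of basic symbols and Σ* for the set of all finite strings over it:

```latex
% Statements (and hence truths) are at most countably infinite
\[
  |\Sigma^{*}| \;=\; \Bigl|\bigcup_{n \ge 1} \Sigma^{n}\Bigr| \;=\; \aleph_{0}
  \qquad\text{(a countable union of countable sets)}
\]
\[
  \text{Truths} \;\subseteq\; \text{Statements} \;\subseteq\; \Sigma^{*}
  \quad\Longrightarrow\quad
  |\text{Truths}| \;\le\; \aleph_{0}.
\]
```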
4. TRUTHS VS. FACTS

It serves the interests of clarity to introduce a distinction at this stage, that between truths and facts. Truths are linguistically stated facts, correct statements, in sum, which, as such, must be formulated in language (broadly understood to include symbol systems of various sorts). A “truth” is something that has to be framed in linguistic/symbolic terms—the representation of a fact through its statement in some language, so that any correct statement represents a truth. A “fact”, on the other hand, is not a linguistic item at all, but an actual aspect of the world’s state of affairs which is thereby a feature of reality.13 Facts correspond to potential truths whose actualization as such waits upon their appropriate linguistic embodiment. Truths are statements and thus language-bound, but facts outrun linguistic limits. Once stated, a fact yields a truth, but with facts at large there need in principle be no linguistic route to get from here to there.

5. THE INEXHAUSTIBILITY OF FACT

Accordingly, facts need not be exhausted by truths. It is a key facet of
13 Our position thus takes no issue with P. F. Strawson’s precept that “facts are what statements (when true) state.” (“Truth,” Proceedings of the Aristotelian Society, Supplementary Vol. 24, 1950, pp. 129-156; see p. 136.) Difficulty would ensue with Strawson’s thesis only if an “only” were added.
our epistemic stance towards the real world that its furnishings possess a complexity and diversity of detail so elaborate that there is always more to be said than we have so far managed. Every part and parcel of reality has features beyond the range of our current cognitive reach—at any juncture whatsoever.

Moreover, any adequate account of inquiry must recognize that the process of information acquisition at issue in science is a process of conceptual innovation. In consequence, the ongoing progress of scientific inquiry always leaves various facts about the things of this world wholly outside the conceptual realm of the inquirers of any particular period. Caesar did not know—and in the then extant state of the cognitive art could not have known—that his sword contained tungsten and carbon. There will always be facts about a thing that we do not know because we cannot even express them in the prevailing conceptual order of things. To grasp such a fact means taking a perspective of consideration that as yet we simply do not have, because the state of knowledge (or purported knowledge) has not reached a point at which such a consideration is feasible. And so, the facts about any actual physical object are in theory inexhaustible. Its susceptibility to further elaborate detail—and to potential changes of mind regarding this further detail—is built into our very conception of a “real thing”.

The range of fact about anything real is thus effectively inexhaustible. There is, as best we can tell, no limit to the world’s ever-increasing complexity that comes to view with our ever-increasing grasp of its detail. The realm of fact and reality is endlessly variegated and complex. And so we also arrive at:

Thesis 3: The Inexhaustibility of Fact. Facts are infinite in number. The domain of fact is inexhaustible: there is no limit to facts about the real.

In this regard, however, real things differ in an interesting and important way from fictive ones. For a key fact about fictional particulars is that they are of finite cognitive depth. In characterizing them we shall ultimately run out of steam as regards their non-generic features. A point will always be reached when one cannot say anything further that is characteristically new about them—presenting non-generic information that is not inferentially implicit in what has already been said.14 New generic information can, of
course, always be forthcoming through the progress of science: when we learn more about coal-in-general then we know more about the coal in Sherlock Holmes’ grate. But the finiteness of their cognitive depth means that the prospect of ampliatively novel non-generic information must by the very nature of the case come to a stop when fictional things are at issue. With real things, on the other hand, there is no reason of principle why the elaboration of non-generically idiosyncratic information need ever end. On the contrary, we have every reason to presume real things to be cognitively inexhaustible. The prospect of discovery is open-ended here. A precommitment to description-transcending features—no matter how far description is pushed—is essential to our conception of a real thing. The detail of the real world is inexhaustible: obtaining fuller information about its constituents is always possible in principle—though not of course in practice, since only a finite number of things have actually been said up to now—or indeed up to any actually realized moment of world history. Something whose character was exhaustible by linguistic characterization would thereby be marked as fictional rather than real.15 And so we have it that facts regarding reality are infinite in number. But just how infinite?

6. FACTS ARE TRANSDENUMERABLE

While statements in general—and therefore true statements in particular—can be enumerated, and truths are consequently denumerable in number, there is good reason to suppose that this will not hold for facts. On the contrary, there is every reason to think that, reality being what it is, there will be an uncountably large manifold of facts. The reality of it is that facts, unlike truths, cannot be enumerated: no listing of fact-presenting truths—not even one of infinite length—can possibly manage to constitute a complete register of facts.
14 To deny inferentially implicit information the title of authentic novelty is not, of course, to say that it cannot surprise us in view of the limitations of our own deductive powers.
15 This also explains why the dispute over mathematical realism (Platonism) has little bearing on the issue of physical realism. Mathematical entities are akin to fictional entities in this—that we can only say about them what we can extract by deductive means from what we have explicitly put into their defining characterization. These abstract entities do not have non-generic properties since each is a “lowest species” unto itself.
Any attempt to register fact-as-a-whole will founder: the list is bound to be incomplete because there are facts about the list-as-a-whole which no single entry can encompass. We thus arrive at yet another salient thesis:

Thesis 4: The Transdenumerability of Facts. The manifold of fact is transdenumerably infinite.

The idea of a complete listing of all the facts is manifestly impracticable. For consider the following statement: “The list F of stated facts fails to have this statement on it.” But now suppose this statement to be on the list. Then it clearly does not state a fact, so that the list is after all not a list of the facts (contrary to hypothesis). And so it must be left off the list. But then in consequence that list will not be complete since the statement is true. Facts, that is to say, can never be listed in toto because there will always be further facts—facts about the entire list itself—that a supposedly complete list could not manage to register.

This conclusion can be rendered more graphic by the following considerations. Suppose that the list

F: f1, f2, f3, …

were to constitute a complete enumeration of all facts. And now consider the statement

(Z) the list F takes the form f1, f2, f3, …

By hypothesis, this statement will present a fact. So if F is indeed a complete listing of all facts, then there will be an integer k such that Z = fk. Accordingly, Z itself will occupy the k-th place on the F listing, so that:

fk = the list F takes the form f1, f2, f3, ... fk, ...
But this would require fk to be an expanded version of itself, which is absurd. With the k-th position of the F listing already occupied by fk we cannot also squeeze that complex fk-involving thesis into it.

The crux here is simply that any supposedly complete listing of facts f1, f2, f3, ... will itself exhibit, as a whole, certain features that none of its individual members can encompass. Once those individual entries are fixed and the series is defined, there will be further facts about that series-as-a-whole that its members themselves cannot articulate.

Moreover, the point at issue can also be made via an analogue of the diagonal argument that is standardly used to show that no list of real numbers can manage to include all of them, thereby establishing the transdenumerability of the reals. Let us begin by imagining a supposedly complete inventory of independent facts, using logic to streamline the purported fact inventory into a condition of greater informative tidiness through the elimination of inferential redundancies, so that every remaining item adds some information to what has gone before. The argument for the transdenumerability of fact can now be developed as follows. Let us suppose (for the sake of reductio ad absurdum argumentation) that the inventory f1, f2, f3, ... represents our (non-redundant but yet purportedly complete) listing of facts. Then by the supposition of factuality we have (∀i)fi. And further by the supposition of completeness we have it that

(∀p)(p → (∃i)[fi → p])

Moreover, by the aforementioned supposition of non-redundancy, each member of the sequence adds something quite new to what has gone before:

(∀i)(∀j)[i < j → ~[(f1 & f2 & . . . & fi) → fj]]

Consider now the following course of reasoning.
(1) (∀i)fi [by “factuality”]

(2) (∀j)fj → (∃i)(fi → (∀j)fj) [from (1) by “completeness” via the substitution of (∀j)fj for p]

(3) (∃i)(fi → (∀j)fj) [from (1), (2)]
But (3) contradicts non-redundancy. This reductio ad absurdum of our hypothesis indicates that facts are necessarily too numerous for complete enumeration.

7. MORE FACTS THAN TRUTHS

In such circumstances, no purportedly comprehensive listing of truths can actually manage to encompass all facts. This trans-denumerability of fact means that the domain of reality-characterizing fact inevitably transcends the limits of our capacity to express it, and a fortiori those of our capacity to canvass it completely. The realm of fact is endlessly complex, detailed, and diversified in its make-up. And the limitedness of our recursively constituted linguistic resources thus means that our characterizations of the real will always fall short.16 We arrive at:

Thesis 5.
There are quantitatively more facts than truths, seeing that the facts are too numerous for enumerability.
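Schematically, and in the same notation as before, the upshot of the two arguments just given is the strict inequality below; the text claims only the inequality, not any particular cardinal for the manifold of fact:

```latex
% More facts than truths: the quantitative discrepancy of Thesis 5
\[
  |\text{Truths}| \;\le\; |\Sigma^{*}| \;=\; \aleph_{0} \;<\; |\text{Facts}|
  \qquad\text{(no listing } f_{1}, f_{2}, f_{3}, \ldots \text{ can register every fact)}
\]
```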
And so, language cannot capture the entirety of fact. It is not only possible but (apparently) likely that we live in a world that is not digital but analogue and whose manifold of states of affairs is simply too rich to be fully comprehended by our linguistically digital means. Truth is to fact what film is to reality—a merely discretized approximation. Cognition, being bound to language, is digital and sequentially linear. Reality, by contrast, is analogue and replete with feed-back loops and non-sequentially systemic interrelations.
16 Even in matters of actual linguistic practice we find an embarrassing shortcoming of words. The difficulty in adapting a compact vocabulary to the complexities of a diversified world is betokened by the pervasive phenomenon of polysemy—the contextualized pluralism of varied senses and differentiated uses of the same words in different semantical and grammatical categories. On this phenomenon see Hubert Cuyckens and Britta Zawada (eds.), Polysemy in Cognitive Linguistics (Amsterdam & Philadelphia: John Benjamins, 2003).
It should thus not be seen as all that surprising that the two cannot be brought into smooth alignment. The comparative limitedness of language-encapsulable truth points to an inevitable limitedness of knowledge.

8. MUSICAL CHAIRS ONCE MORE

It is instructive at this point to consider once more the analogy of Musical Chairs. Of course any individual player can/might be seated. And the same goes for any team or group of them with one exception: namely, the whole lot. But since the manifold of knowable truth is denumerable and the manifold of fact in toto is not, then (as in our Musical Chairs example) the range of the practicable will not, cannot encompass the whole. (And note that while a team of individuals is not an individual, a complex of facts will nevertheless constitute a fact.)

With regard to language too we once again confront a Musical Chairs situation. Conceivably, language-at-large might, in the abstract, manage to encompass non-denumerably many instances—particularly so if we indulge the prospect of idealization and resort to Bolzano’s Sätze an sich (sentences-in-themselves), Frege’s denkerlose Gedanken (thinkerless thoughts), and the like. But given the granular structure of a universe pervaded by atoms and molecules, only a denumerable number of language-using creatures can ever be squeezed into the fabric of the cosmos. And so the realistically practicable possibilities of available languages are at best denumerable. When reality and language play their game of Musical Chairs, some facts are bound to be left in the lurch when the music of language stops.

The discrepancy manifests itself in the difference between any and every. Any candidate can possibly be accommodated in seating/stating. (We have (∀x)◊(∃y)Syx.) But it is not possible to accommodate every candidate. (We do not have ◊(∀x)(∃y)Syx.) The limits of knowledge are thus in the final analysis quantitative. The crux of the problem is a discrepancy of numbers. These limits root in the Musical Chairs Perplex—in the fact that the realm of fact is too vast for the restrictive confines of propositionalized language. And this situation has important cognitive ramifications that are brought to view by the following line of thought:

(1) Everything there is—(and indeed even presumably everything there possibly can be)—has an idiosyncratic property, some feature, no
doubt complex and perhaps composite, that holds for it and it alone. (Metaphysical principle)

(2) The possession of such a unique characteristic property cannot obtain in virtue of the fact that the item at issue is of a certain natural kind or generic type. It can only obtain in virtue of something appertaining to this item individually and specifically.

(3) Accordingly, for anything whatsoever, there is a fact—viz., that that thing has that particular idiosyncratic property—that you can know only if you can individuate and specify that particular thing.

(4) The inherent limitations of language mean that there are more things than it is possible to individuate and specify.

The inevitability of unknown facts emerges at once from these considerations of general principle. The reality of it is that the domain of fact is ampler than that of truth so that language cannot capture the entirety of fact. We live in a world that is not digital but analogue and so the manifold of its states of affairs is simply too rich to be fully comprehended by our linguistically digital means.17 The domain of fact inevitably transcends the limits of our capacity to express it, and a fortiori those of our capacity to canvass it in overt detail. Truth is to fact what moving pictures are to reality—a merely discretized approximation.

To be sure, the numerical discrepancy at issue with the Musical Chairs Perplex does no more than establish the existence of unknown facts. It does not go so far as to establish the existence of facts that are unknowable, facts which cannot, as a matter of principle, possibly be known. To see what can be done in this line we shall have to look at matters in a different light.
17
Wittgenstein writes “logic is not a body of doctrine, but a mirror-image of the world” (Tractatus, 6.13). This surely gets it wrong: logic is one instrumentality (among others) for organizing our thought about the world, and this thought is (at best and at most) a venture in describing or conceiving the world and its modus operandi in a way that—life being what it is—will inevitably be imperfect and incomplete. And so any talk of mirroring is a totally unrealistic exaggeration here.
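The line of thought of points (1)–(4) above can be compressed into a small formal sketch (a rough restatement in set-theoretic shorthand, offered for perspicuity rather than proof). Suppose, per (1) and (2), that each item x has some identifying property φx that holds of x alone. Then the mapping
x → the fact that φx holds of x
is one-to-one from items to facts. Since the items of the world are non-denumerable, while the statable truths of any recursively generated language are at most denumerable, we have
|truths| ≤ ℵ0 < |facts|,
so that some such facts must go unstated, and a fortiori unknown: this is just the any/every contrast of the preceding section in arithmetical dress. All this establishes only that some facts go unknown; it does not of itself identify any particular fact as unknowable.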
There clearly is, however, one fact that is unstatable in language and thereby unknowable by creatures whose knowledge is confined to the linguistically formulatable. This is the grand mega-fact consisting of the amalgamation of all facts whatever. For language-dependent knowers can at most and at best have cognitive access to a denumerable number of facts, whereas factuality itself in principle encompasses a non-denumerable quantity. And an important point is at issue here. With Musical Chairs we know that there will be someone unseated, but cannot (given the ordinary contingencies) manage to say who this will be. And with facts, which from a cognitive point of view reduplicate the Musical Chairs situation, we also cannot manage to say which facts will be unknown. For here too there is a lot of room for contingency. But there is one very big difference. With Musical Chairs the totality of individuals, while of course not all seatable, does not combine to form a single unseatable mega-individual. But the totality of facts—which cannot possibly be known—does indeed combine to form one grand unknowable mega-fact.18
18
On the issues of this chapter see also the author’s Epistemetrics (Cambridge: Cambridge University Press, 2006).
Appendix
FURTHER IMPLICATIONS
It is worthwhile to note that the numerical discrepancy between truths and facts that textuality imposes recurs time and again in other contexts, and in particular as between
• names and entities
• statements and possibilities
• descriptions and objects
• novels and plots
• instructions and actions
• explanations and phenomena
The same disproportion between the verbal and the ontological realm occurs throughout. While in each case the former is a verbalized placeholder for the latter, there just are not enough of the former to go around. In particular, consider names. Of course everything is capable of being named. Nothing is name-resistant. We could (as someone has quipped) simply name everything Charlie. The real question is whether everything could have a unique name characteristic of itself alone: an identifying name. Now everything that has actually been identified could be named via the specification: “the item identified in such and such a way.” Or at least this would work if the identification process answered to some verbalized formula or other. But even supposing this to be the case, the question remains: Are there enough verbal/textual identifiers to go around? Can everything that has an identity be individuated by verbalized formulas? And the answer is categorically negative. Select any language you please—take your pick. As long as it—like any other human language—is produced recursively it will only have countably many expressions (words, sentences, texts). But we know full well that the number of objects is transdenumerable: uncountably infinite. (Think of the real numbers for example.) So there just are not enough names to encompass everything. In
musical chairs not everybody gets to be seated. In reality not everything gets to be named. Of course, things will stand differently if we radically revise the concept of language. Thus if we are prepared to countenance a thing-language (rather than a word language) we could adopt the rule that everything names itself. And then of course everything is at once namable and named. But this sort of thing is clearly cheating. And so while nothing is textually name-resistant and everything is namable in the sense of being able, in principle, to bear a verbal name, the possibility of realizing this prospect across the board—with everything whatsoever actually bearing a name—is precluded by the general principles of the situation.19
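The underlying arithmetic can be made explicit (a compact sketch, with Σ standing for the finite stock of basic symbols of whatever language is selected). The expressions of a recursively produced language are finite strings over Σ, and a countable union of finite sets is itself countable:
|Σ*| = |Σ⁰ ∪ Σ¹ ∪ Σ² ∪ …| = ℵ0.
By Cantor’s diagonal argument, however, the real numbers alone already number 2^ℵ0 > ℵ0. So no assignment of expressions to objects can be one-to-one across even the reals, let alone across the domain of things in general: whichever scheme of identifying names is adopted, uncountably many items must either share a name or lack one.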
19
This Appendix has benefited from exchanges with C. Anthony Anderson.
Chapter 5
COGNITIVE PROGRESS AND ITS COMPLICATIONS
___________________________________________________
SYNOPSIS
(1) Natural science affords a clear illustration of cognitive finitude. Future science is by its very nature unpredictable: it is difficult, indeed impossible, to predict the future of science, for we cannot forecast in detail even what the questions of future science will be—let alone the answers.
(2) We thus cannot discern the substance of future science with sufficient clarity to say just what it can and cannot accomplish. Assigning definite boundaries to science—putting entire ranges of phenomena outside its explanatory grasp—is a risky and unprofitable business.
___________________________________________________
1. IN NATURAL SCIENCE, THE PRESENT CANNOT SPEAK FOR THE FUTURE
Cognitively oriented self-prediction is a profoundly problematic issue. And this is nowhere more decidedly the case than with respect to natural science: the prospect of making scientifically responsible predictions about the future of science itself is deeply problematic. The splendid dictum that “the past is a different country—they do things differently there” has much to be said for it. For we cannot fully comprehend the past in terms of the conceptions and presumptions of the present. And this is all the more drastically true of the human future—in its cognitive aspect in particular. After all, information about the thought world of the past is at any rate available—however difficult extracting it from the available data may prove to be. But it lies in the nature of things that we cannot secure any effective access to the thought-world of the future. Its details, science specifically included, are hidden from our view. All that we know is that it will be different.
The impact of chance on innovation is one major source of scientific cognition’s unpredictability. Entire fields of empirical inquiry have been launched by serendipity. After the American Telephone and Telegraph Company started its overseas shortwave-radio telephone service in 1927, Karl Jansky, a Bell Laboratories scientist, began to monitor short-wave static. He observed one kind of static whose source he could not identify but whose intensity peaked 4 minutes earlier each day, in step with the sidereal cycle. Jansky’s surprisingly regular “noise” was composed of radio signals from astronomical objects. Thus a quality-control project for the telephone industry launched the innovative field of radio astronomy, which opened the way for investigating astronomical features and processes undetectable at optical wavelengths. (Serendipity pervades radio astronomy: Penzias and Wilson, also scientists from Bell Laboratories, were using their radio telescope to track communications satellites when they picked up—and initially failed to recognize—the 3°K cosmic background radiation. This finding lent important empirical support to the Big Bang theory, and so fostered innovation in theoretical cosmology.1) In particular, those who initially conceive and produce a device (a typewriter, say, or a printing press, or even a guillotine) cannot possibly foresee the uses to which it will eventually be put. Negative prediction may be possible—there will certainly be things one cannot do with it (you cannot use a screwdriver as a fountain pen—although you could “write” with it on clay in the manner of the Babylonian tablets). But it is often difficult to map out in advance the range of things that can be done with a mechanism—and next to impossible to specify in advance the range of things that will be done with it. Human ingenuity is virtually limitless, and old devices are constantly put to new and previously unimaginable uses. One must, after all, never forget the prospect of major innovation issuing from obscurity. The picture of Einstein toiling in the patent office in Bern should never be put altogether out of mind. Commenting shortly after the publication of Frederick Soddy’s speculations about atomic bombs in his 1930 book Science and Life,2 Robert A. Millikan, a Nobel laureate in physics, wrote that “the new evidence born of further scientific study is to the effect that it is highly improbable that there is any appreciable amount of
1
See Jeremy Bernstein, Three Degrees Above Zero (New York: Scribner, 1984).
2
Frederick Soddy, Science and Life (New York: E. P. Dutton, 1930).
available subatomic energy to tap.”3 In science forecasting, the record of even the most qualified practitioners is poor. The best that we can do in matters of science and technology forecasting is to look towards those developments that are “in the pipeline”: reasonable extrapolation of the character, orientation, and direction of the current state of the art is the most powerful forecasting tool on the positive side of the issue. But this conservative approach has its problems. Since we cannot predict the answers to the presently open questions of natural science, we also cannot predict its future questions. For these questions will hinge upon those as yet unavailable answers, since the questions of the future are engendered by the answers to those we have on hand. Accordingly, we cannot predict science’s solutions to its problems because we cannot even predict in detail just what these problems will be. In scientific inquiry as in other sectors of human affairs, major upheavals can come about in a manner that is sudden, unanticipated, and often unwelcome. Major breakthroughs often result from research projects that have very different ends in view. Louis Pasteur’s discovery of the protective efficacy of inoculation with weakened disease strains affords a striking example. While studying chicken cholera, Pasteur accidentally inoculated a group of chickens with a weak culture. The chickens became ill, but, instead of dying, recovered. Pasteur later reinoculated these chickens with fresh culture—one strong enough to kill an ordinary chicken. To Pasteur’s surprise, the chickens remained healthy. Pasteur then shifted his attention to this interesting phenomenon, and a productive new line of investigation opened up. In empirical inquiry, we generally cannot tell in advance what further questions will be engendered by our endeavors to answer those on hand, those answers themselves being as yet unavailable. Accordingly, the issues of future science simply lie beyond our present horizons. The past may be a different country, but the future is a terra incognita. Its science, its technology, its fads and fashions, etc. lie beyond our ken. We cannot begin to say what ideas will be at work here, though we know on general principles they will differ from our own. And where our ideas cannot penetrate we are ipso facto impotent to make any detailed predictions. Throughout the domain of inventive production in science, technology, and the arts we find processes of creative innovation whose features defy all prospects of predictability. The key fact in this connection is that of the
3
Robert A. Millikan as quoted in Daedalus, vol. 107 (1978), p. 24.
fundamental law of epistemology that a less powerful intellect cannot comprehend the ways of a more powerful one. And this means that the cognitive resources of an inferior (lower) state of the art cannot afford the means for foreseeing the operations of a superior (higher) one. Those who know only tic-tac-toe cannot foresee how chess players will resolve their problems.
2. THE IMPORT OF INNOVATION
We know—or at any rate can safely predict—that future science will make major discoveries (both theoretical and observational/phenomenological) in the next century, but we cannot say what they will be or how they will be made (since otherwise we could proceed to make them here and now). We could not possibly predict now the substantive content of our future discoveries—those that result from our future cognitive choices. For to do so would be to transform them into present discoveries which, by hypothesis, they just are not.4 In the context of questions about matters of scientific importance, then, we must be prepared for surprises. An ironic but critically important feature of scientific inquiry is that the
4
As one commentator has wisely written: “But prediction in the field of pure science is another matter. The scientist sets forth over an unchartered sea and the scribe, left behind on the dock, is asked what he may find at the other side of the waters. If the scribe knew, the scientist would not have to make his voyage” (Anonymous, “The Future as Suggested by Developments of the Past Seventy-Five Years,” Scientific American, vol. 123 (1920), p. 321). The role of unforeseeable innovations in science forms a key part of Popper’s case for the unpredictability of man’s social affairs—given that new science engenders new technologies which in turn make for new modes of social organization. (See K. R. Popper, The Poverty of Historicism (London: Routledge & Kegan Paul, 1957), pp. vi and passim.) The unpredictability of revolutionary changes in science also figures centrally in W. B. Gallie’s “The Limits of Prediction” in S. Körner (ed.), Observation and Interpretation (New York: Academic Press, 1957). Gallie’s argumentation is weakened, however, by a failure to distinguish between the generic fact of future discovery in a certain domain and its specific nature. See also Peter Urbach, “Is Any of Popper’s Arguments Against Historicism Valid?” British Journal for the Philosophy of Science, vol. 29 (1978), pp. 117-30 (see pp. 128-29), whose deliberations seem (to this writer) to skirt the key issues. A judicious and sympathetic treatment is given in Alex Rosenberg, “Scientific Innovation and the Limits of Social Scientific Prediction,” Synthese, vol. 97 (1993), pp. 161-81. On the present issue Rosenberg cites the instructive anecdote of the musician who answered the question “Where is jazz heading?” with the response: “If I knew that, I’d be there already” (op. cit., p. 167).
unforeseeable tends to be of special significance just because of its unpredictability. The more important the innovation, the less predictable it is, because its very unpredictability is a key component of importance. Science forecasting is beset by a pervasive normality bias, because the really novel often seems so bizarre. The inherent unpredictability of future scientific developments—the fact that inferences from one state of science to another are generally precarious—means that present-day science cannot speak for future science. The prospect of future scientific revolutions can never be precluded. Viewed not in terms of its aims but in terms of its results, science is inescapably plastic: it is not something fixed, frozen, and unchanging but endlessly variable and protean—given to changing not only its opinions but its very form. We cannot say with unblinking confidence what sorts of resources and conceptions the science of the future will or will not use. Given that it is effectively impossible to predict the details of what future science will accomplish, it is no less impossible to predict in detail what future science will not accomplish. We can never confidently put this or that range of issues outside “the limits of science”, because we cannot discern the shape and substance of future science with sufficient clarity to be able to say with any assurance what it can and cannot do. Any attempt to set “limits” to science—any advance specification of what science can and cannot do by way of handling problems and solving questions—is destined to come to grief. An apparent violation of the rule that present science cannot bind future science is afforded by John von Neumann’s attempt to demonstrate that all future theories of subatomic phenomena—and thus all future theories—will have to contain an analogue of Heisenberg’s uncertainty principle if they are to account for the data explained by present theory. Complete predictability at the subatomic level, he argued, was thus exiled from science. But the “demonstration” proposed by von Neumann in 1932 places a substantial
burden on potentially changeable details of presently accepted theory.5 The fact remains that we cannot preclude fundamental innovation in science: present theory cannot delimit the potential of future discovery. In natural science we cannot erect our structures with a solidity that defies demolition and reconstruction. Even if the existing body of “knowledge” does confidently and unqualifiedly support a certain position, this circumstance can never be viewed as absolutely final. Not only can one never claim with confidence that the science of tomorrow will not resolve the issues that the science of today sees as intractable, but one can never be sure that the science of tomorrow will not endorse what the science of today rejects. This is why it is infinitely risky to speak of this or that explanatory resource (action at a distance, stochastic processes, mesmerism, etc.) as inherently unscientific. Even if X lies outside the range of science as we nowadays construe it, it by no means follows that X lies outside science as such. We must recognize the commonplace phenomenon that the science of the day almost always manages to do what the science of an earlier day deemed infeasible to the point of absurdity (“split the atom”, abolish parity, etc.). Present science can issue no guarantees for future science.
3. AGAINST DOMAIN LIMITATIONS
The project of stipulating boundaries for natural science—of placing certain classes of nature’s phenomena outside its explanatory reach—runs into major difficulties. Domain limitations purport to put entire sectors of fact wholly outside the effective range of scientific explanation, maintaining that an entire range of phenomena in nature defies scientific rationalization. This claim is problematic in the extreme. The scientific study of human affairs affords a prime historical example. Various nineteenth century German theorists maintained that a natural science of man—one that affords explanation of the full range of human phenomena, including man’s thoughts and creative activities—is in principle impossible. One of the central arguments for this position ran as follows: It is a presupposition of scientific inquiry that the object of investigation is essentially independent of the process of inquiry, remaining altogether unaffected. But this is clearly not the case when our own thoughts, beliefs, and actions are at
5
See also the criticisms of his argument in David Bohm, Causality and Chance in Modern Physics (London: Routledge, 1957), pp. 95-96.
issue, since they are all affected as we learn more about them. Accordingly, it was argued, there can be no such thing as a science of the standard sort regarding characteristically human phenomena. One cannot understand or explain human thought and action by the usual procedures of natural science but must introduce some special ad hoc mechanisms for coming to explanatory terms with the human sphere. (Thus, Wilhelm Dilthey, for example, maintained the need for recourse to a subjectively internalized Verstehen as a characteristic, science-transcending instrumentality that sets the human “sciences” apart from the standard—that is, natural—sciences.) And, so it was argued, seeing that such a process cannot, strictly speaking, qualify as scientific, there can be no such thing as a “science of man”. But this position is badly misguided. With science we cannot prejudge results nor predelimit ways and means. One cannot in advance rule in or rule out particular explanatory processes or mechanisms (“observation-independence”, etc.). If in dealing with certain phenomena they emerge as observer-indifferent, so be it; but if they prove to be observer-dependent, we have to take that in our stride. If we indeed need explanatory recourse to some sort of observer-correlative resource (“sympathy” or the like), then that’s that. In science we cannot afford to indulge our a priori preconceptions. The upshot of these observations is clear. To set domain limitations to natural science is always risky and generally ill advised. The course of wisdom is to refrain from putting issues outside the explanatory range of science. Since present science cannot speak for future science, it also cannot establish what sorts of resources science as such can or cannot use and what sorts of results it can or cannot achieve. The myopia that characterizes our cognitive finitude regarding the world’s ways means that open horizons will ever lie before us in our efforts to domesticate the world to scientific comprehension.6
6
On the issues of this chapter see also the author’s Limits of Science (2nd ed.; Pittsburgh: University of Pittsburgh Press, 1999).
Chapter 6
AGAINST SCIENTIFIC INSOLUBILIA
___________________________________________________
SYNOPSIS
(1) Although unanswered questions and unresolved issues remain at every stage of scientific development, this itself does not mean that there are questions regarding nature that science cannot answer at all—insolubilia. The immortality of questions in natural science does not imply the existence of immortal questions.
(2) Both parties went wrong in the famous du Bois-Reymond/Haeckel Controversy: Reymond was wrong in his claim to have identified insolubilia; Haeckel was wrong in his claim that natural science was effectively complete.
(3) While science does indeed have limitations and can never solve all of its problems, it does not have specifiable limits. There is no way of identifying scientifically appropriate questions that science cannot resolve in principle—and thus might not resolve in the future. Cognitive finitude does not entail cognitive debility.
___________________________________________________
1. THE IDEA OF INSOLUBILIA
One characteristic sort of limitation upon the question-resolving capacity of our scientific knowledge is characterized by the thesis:
Weak Limitedness—The Permanence of Unsolved Questions: There are always, at every temporal stage,1 at least some questions on the agenda of the day to which no answer is then available: questions for whose resolution the current science is inadequate, though they may well be answered at some later stage.
1
Or perhaps alternatively: “always after a certain time—at every stage subsequent to a certain juncture.”
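Put schematically (a rough symbolic paraphrase, not a formula occurring in the thesis itself, with t ranging over the temporal stages of inquiry and Q over the questions posable at a stage), Weak Limitedness has the form
(∀t)(∃Q)(Q is unanswered at t),
whereas the stronger theses considered below reverse the order of the quantifiers:
(∃Q)(∀t)(Q is unanswered at t).
As always with such quantifier shifts, the first form does not entail the second.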
This thesis has it that there will always be questions that the science of the day raises but does not resolve. It envisions a permanence of cognitive limitation, maintaining that our knowledge is at no stage ever completed because certain then-intractable (that is, posable but as yet unanswerable) questions figure on the agenda of every cognitive state-of-the-art. If we accept Kant’s Principle of Question Propagation, to the effect that every state of knowledge generates further new and as yet unanswered questions, then this permanence of unsolved questions is at once assured, since science will then never reach a juncture at which all of its questions are resolved. The prospect of further progress is ever-present: completeness and finality are unrealizable in the domain of science. In this light, the present limitation thesis seems altogether plausible. It should be noted, however, that weak limitedness of this sort is perfectly compatible with the circumstance that every question that arises at any state-of-the-art stage of inquiry will eventually be answered. The eventual resolution of all our (present-day) scientific problems would not necessarily mean that science is finite or completable because of the prospect—perhaps even the certainty—that other issues will have arisen by the time the earlier ones are settled. A doctrine of perpetual incompleteness in science is thus wholly compatible with the view that every question that can be asked at each and every particular state of natural science is going to be resolved—or dissolved—at some future state. Having ever-unanswered questions does not mean having ever-unanswerable ones. This situation means that one cannot simply transpose the quantifiers of the preceding thesis to obtain the very different thesis of:
Stronger Limitedness—The Existence of Insolubilia: There are questions that remain unanswered in every state of science—then-posable questions that cannot be answered then and there.
The former, weaker thesis envisaged the immortality of questions, but this present strengthened thesis envisions the existence of immortal questions. This stronger thesis posits the existence of insolubilia, maintaining that certain questions go altogether “beyond the limits” of our explanatory powers and admit of no resolution within any cognitive corpus that we are able to bring to realization. This is something very different from the claim of weaker limitation—and far less plausible. One can also move on to the yet stronger principle of:
Hyperlimitation—The Existence of IDENTIFIABLE Insolubilia: Certain questions can be identified as insolubilia: we can here-and-now formulate questions that can never be resolved, and are able to specify concretely some question that is unanswerable in any state of science.
This thesis of hyperlimitation makes a very strong claim—and a distinctly implausible one. After all, even a position holding that there indeed are insolubilia certainly need not regard them as being identifiable in the current state of scientific development. Even if there were actual insolubilia, questions that science will never resolve, this would not mean that any such questions can be specified by us here and now. The very idea that certain now-specifiable questions can be identified as never-to-be-resolved requires claiming that present science can speak for future science, that the science of today can establish what the science of tomorrow cannot do by way of dealing with the issues. And this, as we have seen, is a contention that is altogether untenable in view of the essential unpredictability of future science. Let us explore some of the ramifications of the issues raised by this family of insolubility theses.
2. THE REYMOND-HAECKEL CONTROVERSY
In the 1880s, the German physiologist, philosopher, and historian of science Emil du Bois-Reymond published a widely discussed lecture on The Seven Riddles of the Universe (Die Sieben Welträtsel).2 In it, he maintained that some of the most fundamental problems about the workings of the world were insoluble. Reymond was a rigorous mechanist, and argued that our secure knowledge of the world is confined to the range where purely mechanical principles can be applied. Regarding anything else, we not only do not have but cannot in principle obtain reliable knowledge. Under the banner of the slogan ignoramus et
2
This work was published together with a famous prior (1872) lecture on the limits of scientific knowledge as Über die Grenzen des Naturerkennens: Die Sieben Welträtsel—Zwei Vorträge, 11th ed. (Leipzig: Veit & Co., 1916). The earlier lecture has appeared in English translation as “The Limits of Our Knowledge of Nature,” Popular Science Monthly, vol. 5 (1874), pp. 17-32. For du Bois-Reymond, see Ernst Cassirer, Determinism and Indeterminism in Modern Physics: Historical and Systematic Studies of the Problem of Causality (New Haven: Yale University Press, 1956), Part I.
ignorabimus (“we do not know and shall never know”), du Bois-Reymond maintained a sceptically agnostic position with respect to various foundational issues in physics (the nature of matter and force, and the ultimate source of motion) and psychology (the origin of sensation and of consciousness). These basic issues are simply explanatory insolubilia that altogether transcend man’s scientific capabilities. Certain fundamental biological problems he regarded as unsolved but perhaps in principle soluble (though very difficult): the origin of life, the adaptiveness of organisms, and the development of language and reason. And as regards his seventh riddle—the problem of freedom of the will—he was undecided. The position of du Bois-Reymond was soon sharply contested by the zoologist Ernst Haeckel, in a book Die Welträtsel, published in 1899,3 which attained a great popularity. Far from being intractable or even insoluble—so Haeckel maintained—the riddles of du Bois-Reymond had all virtually been solved. Dismissing the problem of free will as a pseudoproblem—since free will “is a pure dogma [which] rests on mere illusion and in reality does not exist at all”—Haeckel turned with relish to the remaining riddles. Problems of the origin of life, of sensation, and of consciousness, Haeckel regarded as solved—or solvable—by appeal to the theory of evolution. Questions of the nature of matter and force he regarded as solved by modern physics except for one residue: the problem (perhaps less scientific than metaphysical) of the ultimate origin of matter and its laws. This “problem of substance” was the only riddle recognized by Haeckel, but was downgraded by him as not really a problem for science. In discovering the “fundamental law of the conservation of matter and force”, science had done pretty much what it could do with respect to this problem; what remained was metaphysics, with which the scientist has no proper concern. Haeckel summarized his position as follows:
The number of world-riddles has been continually diminishing in the course of the nineteenth century through the aforesaid progress of a true knowledge of nature. Only one comprehensive riddle of the universe now remains—the problem of substance ... [But now] we have the great, comprehensive “law of substance”, the fundamental law of the constancy of matter and force. The fact that substance is
3
Bonn, 1899; trans. by J. McCabe as The Riddle of the Universe—at the Close of the Nineteenth Century (New York and London: Harper & Bros., 1901). On Haeckel, see the article by Rollo Handy in The Encyclopedia of Philosophy, ed. by Paul Edwards, Vol. III (New York: Macmillan, 1967).
everywhere subject to eternal movement and transformation gives it the character also of the universal law of evolution. As this supreme law has been firmly established, and all others are subordinate to it, we arrive at a conviction of the universal unity of nature and the eternal validity of its laws. From the gloomy problem of substance we have evolved the clear law of substance.4
The basic structure of Haeckel’s position is clear: science is rapidly nearing a state in which all big problems admit of solution—substantially including those “insolubilia” of du Bois-Reymond. (What remains unresolved is not so much a scientific as a metaphysical problem.) Haeckel concluded that natural science in its fin de siècle condition had pretty much accomplished its mission—reaching a state in which all scientifically legitimate problems were substantially resolved. The dispute exhibits the interesting phenomenon of a controversy in which both sides went wrong. Du Bois-Reymond was badly wrong in claiming to have identified various substantive insolubilia. The idea that there are any identifiable issues that science cannot ever resolve has little to recommend it. To be sure, various efforts along these lines have been made:
• The attempts of eighteenth century mechanists to bar action at a distance.
• The attempts of early twentieth century vitalists to put life outside the range of scientific explicability.
• The attempts of modern materialists to exclude hypnotism or autosuggestion or parapsychology as spurious on grounds of scientific intractability.
But the historical record does not augur well in this regard. The annals of science are replete with achievements which, before the fact, most theoreticians had insisted could not possibly be accomplished. The course of historical experience runs counter to the idea that there are any identifiable questions about the world (in a meaningful sense of these terms) that do in principle lie beyond the reach of science. It is always risky to say never, and particularly so with respect to the prospects of
4
Haeckel, Die Welträtsel, pp. 365-66.
knowledge. Never is a long time, and “never say never” is a more sensible motto than its paradoxical appearance might indicate. The key point here was well made by Karl Pearson:
Now I venture to think that there is great danger in this cry, “We shall be ignorant.” To cry “We are ignorant” is sage and healthy, but the attempt to demonstrate an endless futurity of ignorance appears a modesty which approaches despair. Conscious of the past great achievements and the present restless activity of science, may we not do better to accept as our watchword that sentence of Galilei: “Who is willing to set limits to the human intellect?”—interpreting it by what evolution has taught us of the continual growth of man’s intellectual powers.5
However, Haeckel was no less seriously wrong in his insistence that natural science was nearing the end of the road—that the time was approaching when it would be able to provide definitive answers to the key questions of the field. The entire history of science shouts support for the conclusion that even where “answers” to our explanatory questions are attained, the prospect of revision—of fundamental changes of mind—is ever-present. For sure, Haeckel was gravely mistaken in his claim that natural science had attained a condition of effective completeness. And yet, both these theorists were, in a way, also right. Du Bois-Reymond correctly saw that the work of science will never be completed; that science can never shut up shop in the final conviction that the job is finished. And Haeckel was surely right in denying the existence of identifiable insolubilia.
3. THE INFEASIBILITY OF IDENTIFYING INSOLUBILIA
Any state of science can and will limit the range of legitimately posable questions to those whose presuppositions concur with its contentions. If quantum theory is right, the position and velocity of certain particles cannot be pinpointed conjointly. This renders the question “What is the exact position and velocity of particle X at time t?” not insoluble but illegitimate. If Newton was right we must not ask what keeps a moving body moving. Question-illegitimacy represents a limit that grows out of science itself—a limit on appropriate questions rather than on available solutions. Insolubilia, however, are something very different: they are appropriate and legitimate questions to which no answer can possibly be
5
The Grammar of Science (London: A. and C. Black, 1892), sect. 7.
given—now or ever. Any claim to identify insolubilia by pinpointing here and now questions that science will never resolve is bound to be problematic—indeed, extremely far-fetched. The other side of the coin of present-day science’s myopia is that nobody can predict with warranted confidence what natural science will not be able to do. Any such claim encounters deep theoretical difficulties. To show that a scientific question is insoluble we would have to show that its resolution lies beyond every (possible or imaginable) state of future science. This is clearly a very tall order. The best we can ever do here and now is to put a question’s resolvability beyond the power of natural science as it looks from where we stand. But we shall never be in a position to put it beyond the reach of possible future states of science as such. If a question belongs to science at all—if it reflects the sort of issue that science might possibly resolve in principle and in theory—then we cannot categorize it as an insolubilium. With respect to science, we have no alternative to adopting the principle that what CAN be done in theory MIGHT be done in the future. Who is to say how future science can resolve its questions—seeing that it need certainly not do so in ways circumscribed by our present conceptions? (Surely, for example, considerations of theory might in principle—and to some extent actually do in practice—enable us to describe “what goes on in a black hole”.) We cannot ever cogently establish that answering a question hinges unavoidably on doing something impossible (for example, sending a signal faster than the speed of light). Charles S. Peirce has put the key point trenchantly:
For my part, I cannot admit the proposition of Kant—that there are certain impassable bounds to human knowledge ... The history of science affords illustrations enough of the folly of saying that this, that, or the other can never be found out. Auguste Comte said that it was clearly impossible for man ever to learn anything of the chemical constitution of the fixed stars, but before his book had reached its readers the discovery which he had announced as impossible had been made. Legendre said of a certain proposition in the theory of numbers that, while it appeared to be true, it was most likely beyond the powers of the human mind to prove it; yet the next writer on the subject gave six independent demonstrations of the theorem.6
To identify an insolubilium, we would have to show that a certain scientifically appropriate question is such that its resolution lies beyond every (possible or imaginable) state of future science. This is clearly a very tall order—particularly so in view of our inevitably deficient grasp on future states of science. How could we possibly establish that a question Q will continue to be raisable and unanswerable in every future state of science, seeing that we cannot now circumscribe the changes that science might undergo in the future? We would have to argue that the answer to Q lies “in principle” beyond the reach of science. And this would gravely compromise the legitimacy of the question as a genuinely scientific one. For if the question is such that its resolution lies in principle beyond the powers of science, it is difficult to see how we could maintain it to be an authentic scientific question. The best we can do here and now is to put Q’s resolvability beyond the power of any future state of science that looks to be a real possibility from where we stand. But we shall never be in a position to put it beyond the reach of possible future states of science as such. If a question belongs to science at all—if it reflects the sort of issue that science might possibly resolve in principle and in theory—then we cannot categorize it as an insolubilium. With respect to science, we have no alternative to adopting the principle that what CAN be done in theory MIGHT be done in the future. We can no more circumscribe how science will accomplish its future tasks than we can circumscribe what those tasks will be. The idea of identifiable insolubilia accordingly shipwrecks on the essentially unpredictable nature of science itself. And the fact that present science cannot speak for future science means that no identifiable issue can confidently be placed outside the limits of science for good and all. For example, it might be maintained that the question “What physical processes will be discovered inside a black hole?” cannot possibly be answered unless we manage to secure hole-internal observations (that is, make them and get information about them out), and that this cannot be done because the hole exerts a “cosmic censorship” that imposes an insuperable barrier on data-extraction. But the vulnerable point here lies in the contention of the form “Q cannot be answered unless X is done” (that
6
Charles Sanders Peirce, Collected Papers, ed. by C. Hartshorne et al., Vol. VI (Cambridge, Mass.: Harvard University Press, 1929), sect. 6.556.
hole-internal processes cannot be characterized unless we extract physical data from the hole). For no one is in a position to foretell here and now what the science of the future can and cannot achieve. Here we can set no a priori restrictions. In this context it is best to keep an open mind. Ironically, natural science—our most powerful predictive tool—is itself unpredictable. There is no question that outside the realm of everyday commonplaces science is the most powerful and reliable instrument of prediction at our disposal. But the inescapable imperfection of this instrument means that the predictive prospect too is imperfectable and that our aspirations in this direction must be kept within realistic bounds.7 A dismissal of natural-science insolubilia might meet with the objection that present-day science seems to put the answers to certain (perfectly meaningful) factual questions wholly beyond our reach. For example, let the “big bang” theory of cosmic origination be taken as established, and suppose further that its fundamental equations take such a form that, owing to the singularities that arise at the starting point t = 0, no inference whatever could be drawn regarding the state of things on “the far side” of this starting point—the time preceding the world-originating big bang. Then a question like “Was the physical universe pretty much like ours before the big bang, or were the laws of physics of the preceding world-cycle different?” (ex hypothesi) cannot be answered. Yet this would be so not because this is an inherently disallowed (scientifically meaningless) question but because there could be no possible way of securing the needed data. Its status would be akin to that of questions about the mountains on the other side of the moon in Galileo’s time. But just herein lies the crux. The question at issue is not necessarily insoluble as such: it is simply that the existing state of science affords no way of getting an answer. But as we have seen, present science cannot speak for future science. And so there can be no basis for claims of inherent unanswerability—no way of justifying the claim that an answer will remain unattainable in every future state of science. It has eventuated, amazingly enough, that the big bang
7
Some of the ideas of the preceding discussion are developed more fully in the author’s The Limits of Science (Berkeley, Los Angeles, London: University of California Press, 1984). The book is also available in translation: German transl., Die Grenzen der Wissenschaft (Stuttgart: Reclam, 1984); Spanish transl., Los límites de la ciencia (Madrid: Editorial Tecnos, 1994); Italian transl., I limiti della scienza (Roma: Armando Editore, 1990).
itself has left traces that we ourselves can detect and use to make inferences about its nature. Who is to say that we may not one day discover, if we are sufficiently ingenious, that earlier cosmic cycles somehow leave a trace on their successors? It would be ill-advised to be so presumptuous as to say, on the basis of a mere few centuries of experience with science, that the problems that seem insoluble to us today are also destined to perplex our successors many thousands of years further down the corridors of time. A key point to emerge from these deliberations is that, as regards questions, science does not have limits. In his vivid Victorian prose, Baden Powell put the matter well more than a century ago:
It is the proper business of inductive science to analyse whatever comes before it. We cannot say that any physical subject proposed is incapable of such analysis, or not a proper subject for it, until it has been tried and found to fail; and even then, the result is not unprofitable, … for the unknown regions on the frontier of science enjoy at least a twilight from its illumination, and are still brightened by the rays of present conjecture, and the hope of future discovery. We can never say that we have arrived at such a boundary as shall place an impassable limit to all future advance, provided the attempts at such advance be always made in a strictly inductive spirit. To the truly inductive natural philosopher, the notion of limit to inquiry is no more real than the mirage which seems to bound the edge of the desert, yet through which the traveler will continue his march to-morrow, as uninterruptedly as to-day over the plain.8
No adequate justification can be found for the view that science has barriers—that there are facts in its domain that science cannot in principle ascertain. (After all, something that is in principle unattainable by science could not justifiably be held to belong to its province at all.) The inherent unpredictability of science has the immediate consequence that no relevant issue can securely be placed outside its reach. To put particular explanatory issues outside the effective capacity of science, we would need to develop an argument of roughly the following format:
(1) Scientifically acceptable explanations will always have such-and-such a character (are Z-conforming).
8
Baden Powell, Essays on the Spirit of the Inductive Philosophy (London: Longman, 1855) pp. 106-107.
(2) No Z-conforming explanation will ever be devised to resolve a certain explanatory question.
________________________________________________
(3) The explanatory question at issue must thus always remain outside the scope of scientific explicability.
But, of course, this sort of argumentation accomplishes no more than to show the incompatibility of not-(3) with (1) and (2). Thus, very different approaches are possible. One could always accept (3) and deny either (1) or (2). Consider, in particular, the prospect of (2)-denial. The only way to show that no future state of science can ever resolve a certain question is to show that no possible state of science can resolve the question—that no possible state of science can bear one way or another toward answering it. At this stage, a dilemma comes into operation with respect to the sort of possibility at issue. For this can either be epistemic possibility (based on what we know) or theoretical possibility (based on abstract general principles). But argumentation with respect to the first sort of possibility founders on the fact that present science cannot speak for future science—that “what we (take ourselves to) know” in science cannot be projected into the future. And argumentation with respect to the second sort of possibility fails because to show this sort of impossibility would be to put the question outside the range of issues appropriately characterizable as “scientific”. (Natural science is not rendered incomplete by its failure to deal with the subtleties of Shakespearean imagery!) To be sure, someone might object as follows: “One can surely develop a plausible inductive argument on behalf of insolubilia along the following lines: We establish (1) that Q cannot be answered unless X is done, and (2) that there is good reason to think that X cannot be done at all—now or in the future.” For example, it might be maintained that the question “What physical processes go on inside a black hole?” cannot possibly be answered unless we manage to secure hole-internal observations (that is, make them and get information about them out), and that this cannot be done because the hole exerts a “cosmic censorship” that imposes an insuperable barrier on data-extraction.
The vulnerable point here is the contention “Q cannot be answered unless X is done” (that hole-internal processes cannot be characterized unless we extract physical data from the hole). Who is to say how future science can resolve its questions—seeing that it need certainly not do so in ways circumscribed by our present conceptions? (Surely, for example,
considerations of theory might in principle—and to some extent actually do in practice—enable us to describe “what goes on in a black hole”.) We cannot ever cogently establish that answering a question hinges unavoidably on doing something impossible (for example, sending a signal faster than the speed of light). We can no more circumscribe how science will realize its future tasks than we can circumscribe what those tasks will be. The course of wisdom here lies in the idea that there are no particular scientific questions—and certainly no presently identifiable ones—that science cannot resolve as a matter of principle. The idea of identifiable insolubilia accordingly shipwrecks on the same rock as the idea of domain limitations—namely, on the essentially unpredictable nature of science in its substantive regards. The cardinal fact is simply that no one is in a position to delineate here and now what the science of the future can and cannot achieve. No identifiable issue can confidently be placed outside the limits of science. If a question does indeed belong to the province of science (if it does not relate to belles lettres or philosophy or musicology, etc.), then we have to accept the prospect of a scientific answer to it. No scientifically appropriate question can plausibly be held to be beyond the powers of science by its very nature: questions cannot at one and the same time qualify as authentically scientific and be such that their answers lie in principle beyond the reach of science. And so it is a salient lesson of these deliberations that there are no identifiable scientific insolubilia. There is no specifiable question about nature and its works regarding which we can say with warranted assurance that science will never answer it. But what does this mean in the larger course of things? What about questions not of natural science but of the ordinary affairs of man—and, in particular, about human knowledge itself?9
9
On the issues of this chapter see also the author’s Limits of Science (2nd ed.; Pittsburgh: University of Pittsburgh Press, 1999).
Chapter 7
THE PROBLEM OF UNKNOWABLE FACTS
_____________________________________________________
SYNOPSIS
(1) It is in principle impossible ever to give an example of an unknowable fact.
(2) While universalizations do enable us to assert some truths about non-surveyable totalities, such totalities nevertheless serve to demarcate us as cognitively finite beings.
(3) For general facts regarding an open-ended group will—whenever they are contingent and not law-inherent—open the door to surd facts that are beyond the cognitive grasp of finite knowers.
(4) As Kant insightfully saw it, the realm of knowledge—of ascertainable fact—while indeed limited, is nevertheless unbounded.
______________________________________________________
1. LIMITS OF KNOWLEDGE
Unanswerable questions—insolubilia—are one thing and unknowable facts another. With an unanswerable question we must at least have an idea of what is at issue, while with unknowns—let alone unknowable facts—the very idea of what is at issue may never have occurred to us. But are there any unknowable facts? Is anything in the domain of the real inherently inaccessible to rational inquiry? The cognitive beings that will concern us here are language-dependent finite intelligences. And by their very nature these are bound to be imperfect knowers. For the factual information at their disposal by way of propositional knowledge that something or other is the case will—unlike practical how-to knowledge—have to be verbally formulated. And language-encompassed textuality is—as we have seen—outdistanced by the facts themselves. But just what is one to make of the numerical disparity between facts and truths, between what is knowable in theory and what we
finite intelligences can actually manage to know? Just what does this disproportion portend? First and foremost it means that our knowledge of fact is incomplete—and inevitably so!—because we finite intelligences lack the means for reality’s comprehensive characterization. Reality in all its blooming buzzing complexity is too rich for faithful representation by the recursive and enumerable resources of our language. We do and must recognize the limitations of our cognition, acknowledging that we cannot justifiably equate facticity with what can explicitly be known by us through the resources of language. And what transpires here for the circumstantial situation of us humans obtains equally for any other sort of finite intelligence as well. Any physically realizable sort of cognizing being can articulate—and thus can know—only a part or aspect of the real. Knowing facts is in one respect akin to counting integers1—it is something that must be done seriatim: one at a time. And this means, among other things, that:
1. The manifold at issue being inexhaustible, we can never come to grips with all of its items as particular individuals. Nevertheless—
2. Further progress is always possible: in principle we can always go beyond whatever point we have so far managed to reach. Nevertheless—
3. Further progress gets ever more cumbersome. In moving onwards we must be ever more prolix and make use of ever more elaborate symbol complexes so that greater demands in time, effort, and resources are unavoidable. Accordingly—
4. In actual practice there will be only so much that we can effectively manage to do. The possibilities that obtain in principle can never be fully realized in practice. However—
5. Such limitations nowise hamper the prospects of establishing various correct generalizations about the manifold in its abstract entirety.
1
We here take “counting” to be a matter of indicating integers by name—e.g., as “thirteen” or “13”—rather than descriptively, as per “the first prime after eleven”.
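Point 3 of the list above admits of a simple arithmetical illustration (an elementary observation offered by way of example, not anything argued in the text): in positional notation to any base b ≥ 2, the numeral that names the integer n (for n ≥ 1) contains
⌊log_b n⌋ + 1 digits.
Each individual act of naming is thus finite and feasible, yet the symbolic cost of the counting process grows beyond every fixed bound as the count proceeds. This is exactly the combination that points 2 through 4 describe: further progress always possible in principle, ever more cumbersome in practice, and never completable in fact.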
And a parallel situation characterizes the cognitive condition of all finite intelligences whose cognitive operations have to proceed by a symbolic process that functions by language. Inductive inquiry, like counting, never achieves completeness. There is always more to be done, and in both cases alike we can always do better by doing more. But it also means that much will always remain undone—that we can never do it all. But are those unknown facts actually unknowable? The answer is neither yes nor no because some distinctions are called for. As already foreshadowed above, it all depends upon exactly how one construes the possibilistic matter of “knowability”. For there will clearly be two rather different ways in which the existence of an inherently unknowable fact can be claimed, namely “Some fact is necessarily unknown,” on the one hand, and “Necessarily, some fact is unknown,” on the other. The difference in the quantifier placement in the preceding two formulas is crucial when one contemplates the idea that all facts are knowable. Now the first of these two contentions logically entails the second. And this second is in the circumstances inevitable, there being more facts than finite humans ever will or can know. However, the first, stronger contention is clearly false. For as long as the nonexistence of an omniscient God is not a necessary circumstance, there can be no fact that is of necessity unknown. But of course even though there are—or may well be—unknowable facts, such a fact can never be identified as such, namely as a fact, since doing so is effectively to claim knowledge of it. It is thus in principle impossible for us ever to give an example of one of these unidentifiable facts even though there must be some. And nothing more strikingly betokens the imperfection and limitedness of our knowledge than the ironic circumstance of the uneliminable incompleteness of our knowledge regarding our knowledge itself, which arises because we do not—and cannot—know the details of our own ignorance (i.e., we cannot possibly know what it is that we do not know).
2. COGNITIVE FINITUDE
First the good news. Generalizations can of course refer to everything. Bishop Butler’s “Everything is what it is and not another thing” holds with unrestricted universality. And once continuous quantities are introduced, the range of inferentially available statements becomes uncountable. “The length of the table exceeds x inches.” Once known, this straightaway opens the door to uncountably many knowable consequences. And fortunately, a case-
79
Nicholas Rescher • Collected Papers V
by-case determination is not generally needed to validate generalizations. We can establish claims about groups larger than we can ever hope to inventory. Recourse to arbitrary instances, the process of indirect proof by reductio ad absurdum, and induction (mathematical and empirical) all afford procedures for achieving general knowledge beyond the reach of an exhaustive case-by-case check. But will this always be so? Or are there also general truths whose determination would require the exhaustive surveying of all specific instances of a totality too large for our range of vision? At this point our cognitive finitude becomes a crucial consideration and the difference between finite and infinite knowers becomes of fundamental importance and requires closer elucidation. For an “infinite knower” need not and should not be construed as an omniscient knowerone from whom nothing knowable is concealed (and so who knows, for example, who will be elected U.S. President in the year 2200). Rather, what is at issue is a knower who can manage to know in individualized detail an infinite number of independent facts. (Such a knower might, for example, be able to answer such a question as: “Will the decimal expansion of π always continue to exhibit 1415 past any given point?” Finite knowers cannot manage this sort of thing.) And so since we must acknowledge the prospect of inductive knowledge of general laws, we will have it that a knower can unproblematically know—for example—that “All dogs eat meat.”2 But what finite knowers cannot manage is to know this sort of thing in detail rather than at the level of generality. They cannot know specifically of each and every u in that potentially infinite range that Fu obtains—that is, while they can know collectively that all individuals have F, they cannot know distributively of every individual that it has Fsomething they could not do without knowing who they individually are. So the issue now before us is that of the question of general truths that can be known only by assessing the situation of an untractable manifold of individual cases.
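The collective/distributive contrast just drawn can be put schematically, writing K for “it is known that” (a notation introduced here purely for illustration; the symbolic rendering is a gloss and not a formulation taken from the text):

• collective knowledge of a generality: K(∀u)Fu, that is, it is known that every u has F;

• distributive knowledge of a generality: (∀u)KFu, that is, of each individual u it is known that it has F.

Inductive procedures can secure the former for finite knowers, but where the range of individuals is infinite or indefinitely large the latter would call for an item-by-item determination that finite knowers cannot carry through.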
2 To be sure, the prospect of inductively secured knowledge of laws is a philosophically controversial issue. But this is not the place to pursue it. (For the author’s position see his Induction (Oxford: Blackwell, 1980).)
3. ON SURDITY AND LIMITS OF KNOWLEDGE

One cannot, of course, provide concrete examples of facts that are unknowable to finite knowers, seeing that a claim to factuality automatically carries a claim to knowledge in its wake. However, while we cannot know specifically which is such a fact, one can certainly substantiate the claim generally that there are such things. Let us consider this situation more closely.

A feature F of an object/item x is surd if Fx cannot be deduced from the body of knowledge consisting of:

• the identifying (discussion-introducing) characterization/description of the item x at issue,

• a specification of the various natural kinds (Ki) to which said item x belongs, together with

• a specification of the various kind-correlative laws—all given by generalizations having the structure: “Everything of kind K has the property F.”

For example, it is not the case that “being a prime” is a surd property of 5. For the non-divisibility of 5 by any lesser integer (save 1) can be deduced from 5’s defining specification together with the general laws of arithmetic that govern integers at large (a natural kind to which 5 belongs). By contrast, “being the number of books on that table” is a surd property of 5, seeing that there is no way of deriving it from the general principles at issue with the characterizing specification of 5 together with the laws that govern its correlative natural kinds.

Accordingly, the specifically surd features of objects/items are those facts about them that are not inferentially accessible from a knowledge of their nature—and thereby not explicable through recourse to general principles. As far as the relevant body of general principles is concerned, the feature is anomalous, contingent, and is by its very nature not law-rationalizable.3 Its possession by an object has to be determined by inspec-
3 The mathematical counterpart to such surdity is randomness. Thus a series on the order of 010011... is random when there is no specifiable law to characterize its composition.
tion: it cannot be established by inference from that object’s specifying features via general principles.

Now suppose we lived in a world in which no strictly universal laws obtained at all. Such a world would be literally anarchic (although it need not thereby be unruly, since its characterizing generalizations could take the qualified form: “Almost always and in the vast preponderance of cases ... obtains”). It would be a realm in which every rule has its exceptions. And therefore in such a world every feature of an object would be surd. Explanatory reliance on universal laws and principles alone would get us nowhere. Every property of every item in such a world would be surd and its attribution would require case-specific inspection. While, fortunately, we do not live in such a domain, nevertheless surdity indeed is a prominent feature of this world of ours. This circumstance is illustrated by the example of warning-label phenomenology. For if we adjudge “medically natural kinds” of individuals to be those who respond uniformly on some test or test-complex, then there is reason to believe that no lawfully universal generalizations will obtain. Medical pharmacology, in sum, affords a paradigm illustration for surdity.

Consider now the situation of a surd feature that is shared in common by all members of a given collection. For example, it must be supposed that as long as the paper exists, every issue of the New York Times will be such that the word THE occurs more than five times on its front page. This is almost certainly a fact. But since it cannot be settled by general principles (laws) and ascertainment requires a case-by-case check, it is clearly a surd. Any such surd universal generalization can only be ascertained through a case-by-case check of the entire membership of the group at issue. It cannot be established on general principles in view of the ex hypothesi unavailability of governing generalizations. And this means that finite knowers can never ascertain the surd/contingent general features of an infinite or indefinitely large collection. For our knowledge of the universal features of infinite groups is limited to the reach of lawful generalizations alone. Determination of surd generality would require an item-by-item check, which is by hypothesis impracticable for us with infinite or indefinite collections. Accordingly, where such large groups are concerned, secure general knowledge is confined to the region of nomic fact. Though there will doubtless be universal facts that are surd in character in our complex world, they remain, for us, in
the realm of supposition and conjecture. For finite knowers, firm knowledge of surd universality is unrealizable.4

Given any collection of items there are two importantly different kinds of general properties: those that all members of the collection DO have in common, and those that all members of the collection MUST have in common. The latter are the necessitated general features of the collection, the former its contingently geared features. Thus that all prime numbers greater than 2 are odd is a necessity-geared feature of this set of primes. Or consider the set of all US presidents. That all of them are native born and that all of them are over 35 years of age is a necessitated general feature of the collection in view of our Constitution’s stipulations. However, that all were born outside Hawaii will (if indeed true) be a contingently geared feature of the collection that is nowise necessitated by the general principles of its constitution. Such examples illustrate the general phenomenon that finite knowers can never decisively establish a surd/contingent general feature of an infinite collection. For whenever a generality holds for a collection on a merely contingent basis, this is something that we finite intelligences can never determine with categorical assurance, because determination of such kind-pervasive surdity would require an item-by-item check, which is by hypothesis impracticable for us. This situation clearly manifests yet another dimension of our cognitive finitude.

4. LARGER LESSONS

Something about which finite intelligence cannot possibly be mistaken is our belief that we are beings who make mistakes. And analogously one regard in which our knowledge cannot be incomplete—something that it is effectively impossible for us to overlook—is the fact of our cognitive finitude itself, the realization that we do not actually “know-it-all”.

Given the inevitable discrepancy—and numerical disproportion—between our propositionally encodable information about the real and the factual complexity of reality itself, we have to be cautious regarding the kind of scientific realism we endorse. Any claim to the effect that reality is in its details exactly as the science of the day claims it to be is (irrespec-
4 For a further discussion of insuperable cognitive limits see the author’s Epistemic Logic (Pittsburgh: University of Pittsburgh Press, 2005).
tive of how the calendar reads) of an extremely questionable tenability. This is something that already became apparent early on in the discussion of the security/definiteness trade-off. And everything that the subsequent discussion has brought to light only serves to substantiate this conclusion in greater detail. Some writers analogize the cognitive exploration of the realm of fact to the geographic exploration of the earth. But this analogy is profoundly misleading. For the earth has a finite and measurable surface, and so even when some part of it is unexplored terra incognita its magnitude and limits can be assessed in advance. Nothing of the kind obtains in the cognitive domain. The ratio and relationship of known truth to knowable fact is subject to no fixed and determinable proportion. Geographic exploration can expect eventual completeness, cognitive exploration cannot. All the same, there can be no doubt that ignorance exacts its price in incomprehension. And here it helps to consider the matter in a somewhat theological light. The world we live in is a manifold that is not of our making but of Reality’s—of God’s if you will. And what is now at issue might be called Isaiah’s Law on the basis of the verse: “For as the heavens are higher than the earth, so are my ways higher than your ways, and my thoughts than your thoughts.”5 A fundamental law of epistemology is at work here, to wit, the principle that a mind of lesser power is for this very reason unable to understand adequately the workings of a mind of greater power. To be sure, the weaker mind can doubtless realize that the stronger can solve problems it itself cannot. But it cannot understand how it is able to do so. An intellect that can only just manage to do well at tic-tac-toe cannot possibly comprehend the ways of one that is expert at chess. Consider in this light the vast disparity of computational power between a mathematical tyro like most of us and a mathematical prodigy like Ramanujan. Not only cannot our tyro manage to answer the number-theoretic question that such a genius resolves in the blink of an eye, but the tyro cannot even begin to understand the processes and procedures that the Indian genius employs. As far as the tyro is concerned, it is all sheer wizardry. No doubt once an answer is given he can check its correctness. But actually finding the answer is something which that lesser intellect cannot manage—the how of the business lies beyond its grasp. And, for much the same sort of reason, a mind of lesser power cannot discover what the ques5
Isaiah 55:9.
tion-resolving limits of a mind of greater power are. It can never say with warranted assurance where the limits of question-resolving power lie. (In some instances it may be able to say what’s in and what’s out, but never map the dividing boundary.) It is not simply that a more powerful mind will know more facts than a less powerful one, but that its conceptual machinery is ampler in encompassing ideas and issues that lie altogether outside the conceptual horizon of its less powerful compeers.

Now insofar as the relation of a lesser towards a higher intelligence is substantially analogous to the relation between an earlier state of science and a later state, some instructive lessons emerge. It is not that Aristotle could not have comprehended quantum theory—he was a very smart fellow and could certainly have learned. But what he could not have done is to formulate quantum theory within his own conceptual framework, his own familiar terms of reference. The very ideas at issue lay outside of the conceptual horizon of Aristotle’s science, and like present-day students he would have had to master them from the ground up. Just this sort of thing is at issue with the relation of a less powerful intelligence to a more powerful one. It has been said insightfully that from the vantage point of a less developed technology another, substantially advanced technology is indistinguishable from magic. And exactly the same holds for a more advanced conceptual (rather than physical) technology.

It is instructive to contemplate in this light the hopeless difficulties encountered nowadays in the popularization of physics—of trying to characterize the implications of quantum theory and relativity theory for cosmology in the subscientific language of everyday life. A classic obiter dictum of Niels Bohr is relevant: “We must be clear that, when it comes to atoms, language can be used only as in poetry.” Alas, we have to recognize that in philosophy, too, we are, in the final analysis, in something of the same position. In the history of culture, Homo sapiens began his quest for knowledge in the realm of poetry. And in the end it seems that we are destined to remain at this starting point in some respects.

However, the situation regarding our cognitive limits is not quite as bleak as it may seem. For even though the thought and knowledge of finite beings is destined to be ever incomplete, it nevertheless has no fixed and determinate limits. Return to our analogy. As with our counting of integers, there is a limit beyond which we never will get. But there is no limit beyond which we never can get.
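The counting analogy can be given a schematic turn. (The rendering below is offered purely for illustration, with Cn abbreviating “the integer n is at some point actually counted”; the symbolism is not drawn from the text itself.)

• (∃n)~Cn is true: there are integers that never will in fact be counted, since any actual course of counting breaks off somewhere.

• But there is no integer that cannot be counted: for every n, a counting that reaches n is possible in principle.

The limitation thus attaches to what will be done, not to any fixed point beyond which nothing can be done, and just this is the contrast between limits and boundaries taken up in the Kantian passage below.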
The line of thought operative in these deliberations was already mooted by Kant: [I]n natural philosophy, human reason admits of limits (“excluding limits”, Schranken), but not of boundaries (“terminating limits”, Grenzen), namely, it admits that something indeed lies without it, at which it can never arrive, but not that it will at any point find completion in its internal progress ... [T]he possibility of new discoveries is infinite: and the same is the case with the discovery of new properties of nature, of new powers and laws by continued experience and its rational combination ...6
This distinction has important implications. For while our cognitive limitedness as finite beings is real enough, there nevertheless are no boundaries—no determinate limits—to the manifold of discoverable fact. And here Kant was right—even on the Leibnizian principles considered earlier in the present discussion. For while the cognitive range of finite beings is indeed limited, it is also boundless because it is not limited in a way that blocks the prospect of cognitive access to ever new and ongoingly more informative facts that afford us an ever ampler and more adequate account of reality.7
6 Prolegomena to any Future Metaphysics, sect. 57. Compare the following passage from Charles Sanders Peirce: “For my part, I cannot admit the proposition of Kant—that there are certain impassable bounds to human knowledge ... The history of science affords illustrations enough of the folly of saying that this, that, or the other can never be found out. Auguste Comte said that it was clearly impossible for man ever to learn anything of the chemical constitution of the fixed stars, but before his book had reached its readers the discovery which he had announced as impossible had been made. Legendre said of a certain proposition in the theory of numbers that, while it appeared to be true, it was most likely beyond the powers of the human mind to prove it; yet the next writer on the subject gave six independent demonstrations of the theorem.” (Collected Papers [Cambridge, MA: Harvard University Press, 1931-58, 2nd ed.], vol. VI, sect. 6.556.)
7 On the issues of this chapter see also the author’s Epistemetrics (Cambridge: Cambridge University Press, 2006).
Chapter 8

EPISTEMIC INSOLUBILIA AND COGNITIVE FINITUDE
___________________________________________________

SYNOPSIS

(1) Full knowledge regarding the particularized detail of things is a salient incapacity of finite knowers. (2) This is exemplified by vagrant predicates that present noninstantiable properties. There is a key level for knowledge here in that we can ascertain that such predicates apply but not where. (3) Such vagrant predicates always involve an epistemic component in which matters of cognition are involved. And there are always limits to one’s capacity to identify what it is that one is ignorant of. (4) One’s ignorance has to be specified on the side of questions, not answers. And here vagrant predicates will always provide for insolubilia questions. (5) Some such questions—and specifically insolubilia relating to the finite growth of knowledge—are unanswerable for reasons of fundamental principle rather than matters of contingent ignorance.
___________________________________________________

1. FINITE AND INFINITE KNOWERS: DISTRIBUTIVE VS. COLLECTIVE KNOWLEDGE
As argued above, specifying particular factual questions about issues that lie beyond the range of scientific resolvability is a risky and problematic business, and the search for nature-geared insolubilia is an impracticable and futile quest. However, insolubility about knowledge itself is a horse of a different color—and a more manageable one at that. To see how this is so, one must go back to basics.

The difference between a finite and an infinite knower is of far-reaching importance and requires careful elucidation. For an “infinite knower” should not be construed as an omniscient knower, one from whom nothing knowable is concealed (and so who knows, for example, who will be
elected U.S. President in the year 2200). Rather, what is now at issue is a knower who can manage to know in individualized detail an infinite number of independent facts. Unlike a finite knower who can know universal truth only at the level of collective generality, such a knower can know its distributive detail. And this has important ramifications. For one thing it puts secure knowledge of generalizations outside the cognitive range of finite knowers. But the difficulties do not end there; they extend to particularities as well.

2. NONINSTANTIABLE PROPERTIES AND VAGRANT PREDICATES

One can refer to an item in two distinctly different ways: either specifically and individually by means of naming or identifying characterization (“George Washington, the Father of our Country”), or obliquely and sortally as an item of a certain type or kind (“an American male born in the eighteenth century”). Now a peculiar and interesting mode of reference occurs when an item is referred to obliquely in such a way that its specific identification is flat-out precluded as a matter of principle. This phenomenon is illustrated by claims to the existence of

• a thing whose identity will never be known.

• an idea that has never occurred to anybody.

• an occurrence that no one has ever mentioned.

• an integer that is never individually specified.

Here those particular items that render “Some x has F” true are referentially inaccessible: to indicate them individually and specifically as instances of the predicate at issue is ipso facto to unravel them as so-characterized items.1

The concept of an applicable but nevertheless noninstantiable predicate comes to view at this point. This is a predicate F whose realization is noninstantiable because while it is true in abstracto that this property is exem-
1 We can, of course, refer to such individuals and even to some extent describe them. But what we cannot do is to identify them in the sense specified above.
plified (that is, (∃u)Fu will be true), nevertheless the very manner of its specification makes it impossible to identify any particular individual u0 such that Fu0 obtains. Accordingly:

F is a vagrant predicate iff (∃u)Fu is true while nevertheless Fu0 is false for each and every specifically identified u0.

Such predicates are “vagrant” in the sense of having no known address or fixed abode: though they indeed have applications these cannot be specifically instanced—they cannot be pinned down and located in a particular spot. Predicates of this sort will be such that:

one can show on the basis of general principles that there must be items to which they apply, while nevertheless one can also establish that no such items can ever be concretely identified.2

The following predicates present properties that are clearly noninstantiable in this way:

• being an ever-unstated (proposition, theory, etc.).

• being a never-mentioned topic (idea, object, etc.).

• being a truth (a fact) no one has ever realized (learned, stated).

• being someone whom everyone has forgotten.

• being a never-identified culprit.

• being an issue no one has thought about since the sixteenth century.

Noninstantiability itself is certainly not something that is noninstantiable: many instances of it have already been given. The existence of vagrant predicates shows that applicability and instantiability do not come to the same thing. By definition, vagrant predicates will be applicable: there indeed are items to which they apply. However, this circumstance will always have to be something that must be claimed
2 A uniquely characterizing description on the order of “the tallest person in the room” will single out a particular individual without specifically identifying him.
on the basis of general principles, doing so by means of concretely identified instances is, by hypothesis, infeasible. Consider an example of this sort of general-principle demonstration. There are infinitely many positive integers. But the earth has a beginning and end in time. Its overall history has room for only a finite number of intelligent earthlings, each of whom can only make specific mention of a finite number of integers. (They can, of course, refer to the set of integers at large, but they can only specifically take note of a finite number of them.) There will accordingly be some ever-unmentioned, ever unconsidered integersindeed an infinite number of them. But clearly no one can give a specific example of this. Or again consider being an unverified truth. Since in the history of the species there can only be a finite number of specifically verified propositions, while actual truths must be infinite in number, we know that there will be some such unverified truths. But to say specifically of a particular proposition that it is an unverified truth is impracticable, seeing that it involves claiming it as a truth and thereby classing it as a proposition whose truth has been determined. We can allude to such items but cannot actually identify them. Such examples show how considerations of general principle can substantiate claims to the existence of vagrant predicates. Vagrant predicates are by nature noninstantiable, but we can nevertheless use them to individuate items that we can never identify. (Recall the discussion of this distinction in #7.) Thus if we speak of “the oldest unknown” (i.e., never-to-be identified) victim of the eruption of Krakatoa, then we can make various true claims about the so-individuated person— for example that he-or-she was alive at the time of Krakatoa’s eruption. We can allude to that individual but by hypothesis cannot manage to identify him. Predicative vagrancy thus reinforces the distinction between mere individuation and actual identification. In sum, then, vagrant predicates reflect a salient cognitive incapacity: we can ascertain that such predicates apply but not where they do so.
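The integer example just given can be set out schematically. (The notation is introduced here only by way of illustration: write Mn for “the integer n is at some point specifically mentioned by some intelligent being”.)

• General principles (there are infinitely many integers, while only finitely many beings each make only finitely many mentions) establish that (∃n)~Mn.

• Yet for no specifically identified integer n0 can ~Mn0 be correctly claimed, since the very claim would mention n0 and thereby falsify it.

The predicate “being a never-mentioned integer” is thus applicable but not instantiable, which is exactly the pattern of predicative vagrancy defined above.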
3. VAGRANT PREDICATES AS EPISTEMIC

With formalistic discussions in matters of logic or mathematics—where predicates cast in the language of cognitive operators have no place—one never encounters vagrant predicates. For in such contexts we affirm what we know but never claim that we know. However, with epistemic matters the situation can be very different. Consider such predicates as

• being a book no one has ever read.

• being a sunset never witnessed by any member of homo sapiens.

Such items may be difficult to instantiate, but certainly not impossible. The former could be instantiated by author and title; the latter by place and date. In neither case will an instantiation unravel that item as described. Being read is not indispensably essential to books, nor being seen to sunsets: being an unread book or being an unwitnessed sunset involves no contradiction in terms. But in those epistemic cases that concern us now, epistemic inaccessibility is built into the specification at issue. Here being instantiated stands in direct logical conflict with the characterization at issue, as with:

• being a person who has passed into total oblivion.

• being a never-formulated question.

• being an idea no one any longer mentions.

To identify such an item (in the way now at issue) is thereby to unravel its specifying characterization.3
3 To be sure one could (truthfully) say something like “The individual who prepared Caesar’s breakfast on the fatal Ides of March is now totally unknown.” But the person at issue here goes altogether unknown, that is, he or she is alluded to but not specified—individuated but not concretely identified. So I cannot appropriately claim to know who the individual at issue is but only at best that a certain individual is at issue.
The knowledge operator K is of the essence here. What is pivotal in all of these cases of vagrant predicates is that they involve a specification which, like identification, comprehension, formulation, mention, etc., is fundamentally epistemic: something that can only be performed by a creature capable of cognitive and communicative performances. This is readily established. Let F be a vagrant predicate. Since we then by hypothesis have it that (∃u)Fu is true, there is clearly nothing impossible about F-possession as such. Ontologically speaking there are, by hypothesis, items to which F applies; what is infeasible is only providing an instance—a specific example or illustration. The impossibility lies not in “being an F” as such but in “being a concretely instantiated F.” The problem is not with the indefinite “something is an F” but with the specific “this is an F.” Difficulty lies not with F-hood as such, but with its specific application: not with the ontology of there being an F but with the epistemology of its apprehension in individual cases.

The salient point is that specification, exemplification, etc., are epistemic processes which, as such, are incompatible with those epistemically voided characterizations provided by vagrant predicates. Total oblivion and utter non-entertainment are automatically at odds with identificatory instantiation. After all, honoring a request to identify the possessor of a noninstantiable property is simply impossible. For any such response would be self-defeating. It is this uniting, common feature of all vagrant predicates that they are so specified that in the very act of identifying a would-be instantiation of them we will automatically violate (that is, falsify) one of the definitive features of the specification at issue. In other words, with such noninstantiable features their noninstantiability is something inherent in the defining specification of the features at issue. The very concept of instantiability/noninstantiability is thus epistemic in its bearing because all of the relevant procedures (exemplifying, illustrating, identifying, naming, and the like) are inherently referential by way of purporting a knowledge of identity. And since all such referential processes are mind-projected—and cannot but be so—they are epistemic in nature. On this basis, the idea of knowledge is unavoidably present throughout the phenomenon of predicative vagrancy, seeing that the factor of ignorance is essential here. Now the key fact here is that while one can know that one is ignorant of something, one cannot know exactly what it is that one is ignorant of, i.e., what the fact at issue is.
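This closing observation can be displayed with the subscripted knowledge operator Ki employed in note 4 below, with i standing for the knower in question. (The rendering is offered merely as an illustrative gloss on the text.)

• Unproblematic: Ki(∃p)(p & ~Kip), that is, i knows that there is some fact that i does not know.

• Untenable: (∃p)Ki(p & ~Kip), that is, there is some particular p of which i knows both that it is a fact and that i does not know it. On the usual assumptions that knowledge distributes over a known conjunction and that what is known is true, Ki(p & ~Kip) would yield Kip together with Ki~Kip, and thereby ~Kip as well: a contradiction.

Ignorance can accordingly be known to exist in general without being knowable in specific detail.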
For vagrant predicates always involve ignorance. And, indeed, one of the most critical but yet problematic areas of inquiry relates to knowledge regarding our own cognitive shortcomings. To be sure there is no problem with the idea that Q is a question we cannot answer. But it is next to impossible to get a more definite fix on our own ignorance, because in order even to know that there is a certain particular fact that we do not know, we would have to know the item at issue to be a fact, and just this is, by hypothesis, something we do not know.4 And so, “being a fact I do not know” is a noninstantiable predicate as far as I am concerned. (You, of course, could proceed to instantiate it.) But “being a fact that nobody knows” is flat-out noninstantiable—so that we here have a typical vagrant predicate. Correspondingly, one must recognize that there is a crucial difference between the indefinite “I know that there is some fact that I do not (or can not) know” and the specific “Such and such is a fact of which I know that I do not know it.” The first is unproblematic but the second not, seeing that to know of something that it is a fact I must know it as such so that what is at issue is effectively a contradiction in terms. And so it lies in the nature of things that my ignorance about facts is something regarding which one can have only generic and not specific knowledge. I can know about my ignorance only abstractly at the level of indefiniteness (sub ratione generalitatis), but I cannot know it in concrete detail. I can meaningfully hold that two and two’s being four is a claim (or a purported fact) that I do not know to be the case, but cannot meaningfully maintain that two and two’s being four is an actual fact that I do not know to be the case. To maintain a fact as fact is to assert knowledge of it: in maintaining p as a fact one claims to know that p. One can know that that one does not know various truths, but is not in a position to identify any of the specific truths one do not know. In sum, I can have general but
4 The thesis “I know that p is a known fact that I don’t know” comes to: Ki[(∃x)Kxp & ~Kip] (here i = oneself). This thesis entails my knowing both (∃x)Kxp and ~Kip. But the former circumstance entails Kip, and this engenders a contradiction. Of course “knowing a certain particular fact” involves not just knowing THAT there is a fact, but also calls for knowing WHAT that fact is.
not specific knowledge about my ignorance, although my knowledge about your ignorance will be unproblematic in this regard.5

4. PROBLEMS RELATING TO QUESTIONS AND ANSWERS

Since vagrant predicates always engender unanswerable questions, it is instructive for present concerns to adopt an erotetic—that is, question-oriented—view of knowledge and ignorance. It can be supposed, without loss of generality, that the answers to questions are always complete propositions. Often, to be sure, it appears on the surface that a specific item is merely at issue with a question, as per such examples as:

Q. “Who is that man?” A. “Tom Jones.”
Q. “When will he come?” A. “At two o’clock.”
Q. “What prime numbers lie between two and eight?” A. “Three, five, and seven.”

But throughout, the answers can be recast in the form of completed propositions, respectively: “That man is Tom Jones”; “He will come at two o’clock”; “Three, five, and seven are the prime numbers between two and eight.” So we shall here take the line that the answers to questions are given as complete propositions. And conversely, known propositions are correlative with answered questions since to know that p, one must be in a position cogently to provide a correct answer to the question: Is p the case?

To be sure, answering a question is not simply a matter of giving a response that happens to be correct. For a proper answer must not just be correct but credible: it must have the backing of a rationale that renders its correctness evident. For example, take the question whether the mayor of San Antonio had eggs for breakfast yesterday. You say yes, I say no—though neither of us has a clue. One of us is bound to be right. But neither one of us has managed to provide an actual answer to the question. One of
5 Accordingly, there is no problem about “t0 is a truth you don’t know,” although I could not then go on to claim modestly that “You know everything that I do.” For the contentions ~Kyt0 and (∀t)(Kit ⊃ Kyt) combine to yield ~Kit0, which conflicts with the claim Kit0 that I stake in claiming t0 as a truth.
us has made a verbal response that happens to be correct, but neither of us has given a cognitively appropriate answer in the sense of the term that is now at issue. For that would require the backing of a cogent rationale of credibility; merely to guess at an answer, for example, or to draw it out of a hat, is not really to provide one.

Initially the schoolmen understood by insolubilia such self-refuting propositions as that of the Liar Paradox (“What I am saying is false”). However, the term eventually came to cover a wider spectrum of examples—all related to puzzle situations where more than a mere lack of information is involved in making it difficult to see on which alternate side the truth of the matter lies.6

We shall not be concerned here with ill-formed questions that suffer from some sort of inherent defect. Thus paradox is out, and self-invalidating questions such as “Why is this question unintelligible?” must be put aside. And so must the inquiry “How many rocks are there in Africa?” seeing that the problem here is that it is rather unclear what it is to count as being a rock. Is a grain of sand a rock? Is a stony mountain outcropping a rock? The question evaporates in a fog of imprecision. A question can also be ill-formed in that it somehow involves an inappropriate presupposition. Consider “What is one-third of the prime number between 7 and 11?” Since there just is no such prime, the question asks for something whose nonexistence can be established as a matter of general principles, a flaw which renders the question meaningless.

A clear example of this phenomenon is provided by paradoxical questions, that is, questions such that every possible answer is false. An instance is afforded by a yes/no question that cannot be answered correctly, as per “When you respond to this question, will the answer be negative?” For consider the possibilities as per Display 1.
6 See Paul Vincent Spade, “Insolubilia,” in Norman Kretzmann et al. (eds.), The Cambridge History of Later Medieval Philosophy (Cambridge: Cambridge University Press, 1982), pp. 246-53.
___________________________________________________

Display 1

When next you answer a question, will the answer be negative?

Answer given        Truth status of the answer
Yes                 False
No                  False
___________________________________________________

On this basis, that query emerges as meaningless through representing a question which, as a matter of general principle, cannot possibly be answered correctly. Another instance of a paradoxical question is: “What is an example of a question you will never state (consider, conceive of, etc.)?” Any answer that you give is bound to be false, though someone else may well be in a position to give a correct answer. But paradoxical questions of this sort are readily generalized. Thus consider: “What is an example of a question no one will ever state (consider, conceive of, etc.)?” No one can answer this question appropriately. Yet, nevertheless, the question is not unanswerable in principle since there will certainly be questions that individuals and indeed knowers-at-large will never state (conceive of, etc.). But it is impossible to give an example of this phenomenon.

All such ill-formed questions will be excluded from our purview. Only meaningful questions that do indeed have correct answers will concern us here. (There are, of course, also questions that cannot be answered incorrectly. An instance is “What is an example of something that someone has given an example of?” Any possible answer to this question will be correct.)

Some questions are unanswerable for essentially practical reasons: we lack any prospect of finding effective means for their resolution, for reasons of contingent fact—the impracticability of time travel, say, or of space travel across the vast reaches involved. But such contingently grounded ignorance is not as bad as it gets. For some questions are in principle irresoluble in that purely theoretical reasons (rather than mere practical limitations) preclude the possibility of securing the information required for their resolution. There is—or may be—no sound reason for
dismissing such questions as meaningless because hypothetical beings can be imagined by whom such a question can be resolved. But given the inevitabilities of our situation as time-bound and finite intelligences—the question may be such that any prospect of resolution is precluded on grounds of general principle. But are there any such questions? Are there some issues regarding which we are condemned to ignorance? In inquiring into this problem area, we are not interested in questions whose unanswerability resides merely in the contingent fact that certain information is not in practice accessible. “Did Julius Caesar hear a dog bark on his thirtieth birthday?” There is no possible way in which we can secure the needed information here-and-now. (Time travel is still impracticable.) But of course such questions are not inherently unanswerable and it is unanswerability as a matter of principle that will concern us here.7 There are two principal sorts of meaningfully unanswerable questions, those that are locally irresolvable, and those that are so globally. Locally unanswerable questions are those which a particular individual or group is unable to answer. An instance of such a question is: “What is an example of a fact of which you are altogether ignorant?” Clearly you cannot possibly manage to answer this, because whatever you adduce as such a fact must be something you know or believe to be such (that is, a fact), so that you cannot possibly be altogether ignorant of it. On the other hand, it is clear that somebody else could readily be in the position to answer the question. Again, consider such questions as: • What is an example of a problem that will never be considered by any human being? • What is an example of an idea that will never occur to any human being? There are sound reasons of general principle (the potential infinitude of problems and ideas; the inherent finitude of human intelligence) to hold that the items at issue in these questions (problems that will never be considered; ideas that will never occur) do actually exist. And it seems alto7
Nor will we be concerned here with the issue of indemonstrable truths and unanswerable questions in mathematics. Our concern is only with factual truths and the issue of truth in such formal descriptions as mathematics or logic will be left aside.
gether plausible to think that other (non-human) hypothetically envisionable intelligences could well answer these questions correctly. But it is equally clear that we humans could never provide the requisite answers.

If such questions can indeed be adduced, then, while one cannot identify an unknown truth, one would be able to identify cases of unspecifiable truth, propositions such that either p0 or not-p0 must be true and yet nevertheless there is no prospect of determining which it is. Here we can localize truth by emplacing it within a limited range (one here consisting of p0 and ~p0) but cannot pinpoint it within this range of alternatives. One member of the assertion/denial pair will unquestionably prove to be true. And so one way or the other a case of truth stands before us. It’s just that we cannot possibly say which member of the pair it is: the specifics of the matter are unknowable. But are there such unspecifiable truths?

5. INSOLUBILIA THAT REFLECT LIMITS TO OUR KNOWLEDGE OF THE FUTURE

Let us return in this light to the problem of our knowledge of the scientific future as already discussed in some detail in Chapter 6. Clearly to identify an insoluble scientific problem, we would have to show that a certain inherently appropriate scientific question is nevertheless such that its resolution lies beyond every (possible or imaginable) state of future science. This is obviously a very tall order, particularly so in view of our inevitably deficient grasp of future science. After all, that aspect of the future which is most evidently unknowable is the future of invention, of discovery, of innovation—and particularly in the case of science itself. As Immanuel Kant insisted, every new discovery opens the way to others, every question that is answered gives rise to yet further questions to be investigated.8 The present state of science can never answer definitively for that of the future, since it cannot even predict what questions lie on the agenda. After all, we cannot foresee what we cannot conceive. Our questions, let alone answers, cannot outreach the limited horizons of our
8 On this theme see the author’s Kant and the Reach of Reason: Studies in Kant's Theory of Rational Systematization (Cambridge: Cambridge University Press, 2000).
concepts. Having never contemplated electronic computing machines as such, the ancient Romans could also venture no predictions about their impact on the social and economic life of the 21st century. Clever though he unquestionably was, Aristotle could not have pondered the issues of quantum electrodynamics. The scientific questions of the future are, at least in part, bound to be conceptually inaccessible to the inquirers of the present. The question of just how the cognitive agenda of some future date will be constituted is clearly irresolvable for us now. Not only can we not anticipate future discoveries now, we cannot even pre-discern the questions that will arise as time moves on and cognitive progress with it.9 Scientific inquiry is a venture in innovation. And in consequence it lies in the nature of things that present science can never speak decisively for future science, and present science cannot predict the specific discoveries of future inquiry. After all, our knowledge of the present cannot encompass that of the future—if we could know about those future discoveries now they would not have to await the future. Accordingly, knowledge about what science will achieve over all (and thus just where it will be going in the long run) is beyond the reach of attainable knowledge at this or any other particular stage of the scientific “state of the art”.

It is clear on this basis that the question “Are there non-decidable scientific questions that scientific inquiry will never resolve, even were it to continue ad indefinitum?” represents an insolubilium that cannot possibly ever be settled in a decisive way. After all, how could we possibly establish that a question Q about some issue of fact will continue to be raisable and unanswerable in every future state of science, seeing that we cannot now circumscribe the changes that science might undergo in the future? And, since this is so, we have it—quite interestingly—that this question itself is self-instantiating: it is a question regarding an aspect of reality (of which of course science itself is a part) that scientific inquiry will never, at any specific state of the art, be in a position to settle decisively.10
9 Of course these questions already exist; what lies in the future is not their existence but their presence on the agenda of active concern.
10 And this issue cannot be settled by supposing a mad scientist who explodes the superbomb that blows the earth to smithereens and extinguishes all organic life as we know it. For the prospect cannot be precluded that intelligent life will evolve elsewhere. And even if we contemplate the prospect of a “big crunch” that is a reverse “big bang” and implodes our universe into an end, the prospect can never be
The long and short of it is that the very unpredictability of future knowledge renders the identification of scientific insolubilia impracticable. (In this regard it is effectively a bit of good fortune that we are ignorant about the lineaments of our ignorance.)11 We are cognitively myopic with respect to future knowledge. It is in principle infeasible for us to tell now not only how future science will answer present questions but even what questions will figure on the question agenda of the future, let alone what answers they will engender. In this regard, as in others, it lies in the inevitable realities of our cognitive condition that the detailed nature of our ignorance is—for us at least—hidden away in an impenetrable fog of obscurity.

It is clear that the question

What’s an example of a truth one cannot establish as such—a fact that we cannot come to know?

is one that leads ad absurdum. The quest for unknowable facts is inherently quixotic and paradoxical because of the inherent conflict between the definitive features at issue—factuality and unknowability. Here we must abandon the demand for knowledge and settle for mere conjecture. But how far can we go in this direction? To elucidate the prospect of identifying unknowable truth, let us consider once more the issue of future scientific knowledge—and specifically the already mooted issue of the historicity of knowledge. And in this light let us consider a thesis on the order of:

(T) As long as scientific inquiry continues in our universe, there will always be a time when some of the then-unresolved (but resolvable) questions on the scientific agenda of the day will be sufficiently difficult to remain unresolved for at least two years.
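Thesis (T) wears its quantificational structure on its sleeve. (The skeleton below is offered merely as an illustrative paraphrase of (T), not as a formulation drawn from the text.)

(∀t)(if scientific inquiry is still under way at time t, then (∃q)(q is a then-unresolved but resolvable question on the scientific agenda at t, and q remains unresolved for at least two years after t))

It is this ∀∃ format, invoked just below, that puts (T) beyond decisive verification or refutation: settling it would require surveying the question agenda of every future juncture of inquiry.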
precluded that at the other end of the big crunch, so to speak, another era of cosmic development awaits.
11 That contingent future developments are by nature cognitively intractable, even for God, was a prospect supported even by some of the scholastics. On this issue see Marilyn McCord Adams, William Ockham, vol. II (Notre Dame, Ind.: University of Notre Dame Press, 1987), chap. 27.
What is at issue here is clearly a matter of fact—one way or the other. But now let Q* be the question: “Is T true or not?” It is clear that actually to answer this question Q* one way or the other we would need to have cognitive access to the question agenda of all future times. And, as emphasized above, in relation to theses of the ∀∃ format just this sort of information about future knowledge is something that we cannot manage to achieve. By their very nature as such, the discoveries of the future are unavailable at present, and in consequence Q* affords an example of an insolubilium: a specific and perfectly meaningful question that we shall always and ever be unable to resolve decisively, irrespective of what the date on the calendar happens to read. But nevertheless, the issue is certainly one that lies open to reasonable conjecture—something that is, of course, very far from achieving knowledge. For as viewed in the light of the present deliberations thesis (T) is altogether plausible—it has all the earmarks of a likely truth.12 And so it seems reasonable to hold that a conjecture of this sort is the best and most that we can ever hope to do, given that actual knowledge in such a matter is simply unattainable.13
12 And of course there are many other plausible theses of this sort, such, for example, as “As long as scientific inquiry continues in the universe, every scientific discovery will eventually be improved upon and refined.”
13 On the issues of this chapter see also the author’s Epistemic Logic (Pittsburgh: University of Pittsburgh Press, 2005).
Chapter 9

CAN COMPUTERS OVERCOME OUR COGNITIVE FINITUDE?
___________________________________________________

SYNOPSIS

(1) Several preliminary explanations are needed to clarify what “problem solving” involves in the present context. (2) Purely theoretical limits do not represent genuine limitations in problem solving: it is no limitation to be unable to do that which cannot possibly be done. (3) But inadequate information does become a crucial factor here. (4) And other practical limitations include real-time processing difficulties as well as matters of detail management and self-insight obstacles. (5) There are, moreover, some crucial limitations of self-monitoring where computer determinations of computer capacity are concerned. (6) Such performative limits to scientific knowledge affect computers as much as ourselves. (7) However much computers may surpass us in capacity, the same sorts of limitations that mark us as finite knowers will afflict them as well.
___________________________________________________

1. COULD COMPUTERS OVERCOME OUR LIMITATIONS?
In view of the difficulties and limitations that beset our human efforts at answering the questions we confront in a complex world, it becomes tempting to contemplate the possibility that computers might enable us to overcome our cognitive disabilities and surmount those epistemic frailties of ours. And so we may wonder: Can computers overcome our limitations? If a problem is to qualify as soluble at all, will computers always be able to solve it for us?

Of course, computers cannot bear human offspring, enter into contractual agreements, or exhibit heroism. But such processes address practical problems relating to the management of the affairs of human life and so do not count in the present cognitive context. Then too we must put
aside evaluative problems of normative bearing or of matters of human affectivity and sensibility: computers cannot offer us meaningful consolation or give advice to the lovelorn. The issue presently at hand regards the capacity of computers to resolve cognitive problems regarding matters of empirical or formal fact. Typically, the sort of problems that will concern us here are those that characterize cognition, in particular problems relating to the description, explanation, and prediction of the things, events, and processes that comprise the realm of physical reality. And to all visible appearances computers are ideal instruments for addressing the matters of cognitive complexity that arise in such contexts. The history of computation in recent times is one of a confident march from triumph to triumph. Time and again, those who have affirmed the limitedness of computers have been forced into ignominious retreat as increasingly powerful machines implementing increasingly ingenious programs have been able to achieve the supposedly unachievable. However, the question now before us is not “Can computers help with problem-solving?”—an issue that demands a resounding affirmative and needs little further discussion. There is no doubt whatever that computers can do a lot here—and very possibly more than we ourselves can. But the question before us is: Is there anything in the domain of cognitive problem solving that computers cannot manage to do? And there is an awesomely wide gap between much and everything.

First some important preliminaries. To begin with, we must, in this present context, recognize that much more is at issue with a “computer” than a mere electronic calculating machine understood in terms of its operational hardware. For one thing, software also counts. And, for another, so does data acquisition. As we here construe computers, they are electronic information-managing devices equipped with data banks and augmented with sensors for autonomous data access. Such “computers” are able not only to process information but also to obtain it. Moreover, the computers at issue here are, so we shall suppose, capable of discovering and learning, and thereby able significantly to extend and elaborate their own initially programmed modus operandi. Computers in this presently operative sense are not mere calculating machines, but general problem solvers along the lines of the fanciful contraptions envisioned by the aficionados of artificial intelligence. These enhanced computers are accordingly question-answering devices of a very ambitious order. On this expanded view of the matter, we must correspondingly enlarge our vision both of what computers can do and of what can reasonably be
asked of them. For it is the potential of computers as an instrumentality for universal problem solving that concerns us here, and not merely their more limited role in the calculations of algorithmic decision theory. The computers at issue will thus be prepared to deal with factually substantive as well as merely formal (logico-mathematical) issues. And this means that the questions we can ask are correspondingly diverse. For here, as elsewhere, added power brings added responsibility. The questions, it is appropriate to ask thus, can relate not just to matters of calculation but to the things and processes of the world. Moreover, some preliminary discussion of the nature of “problem solving” is required because one has to become clear from the outset about what it is to solve a cognitive problem. Obviously enough, this is a matter of answering questions. Now “to answer” a question can be construed in three ways: to offer a possible answer, to offer a correct answer, and finally to offer a credible answer. It is the third of these senses that will be at the center of concern here. And with good reason. For consider a problem solver that proceeds in one of the following ways: it replies “yes” to every yes/no question; or it figures out the range of possible answers and then randomizes to select one; or it proceeds by “pure guesswork.” Even though these so-called “problem solvers” may give the correct response some or much of the time, they are systematically unable to resolve our questions in the presently operative credibility-oriented sense of the term. For the obviously sensible stance calls for holding that a cognitive problem is resolved only when a correct answer is convincingly provided—that is to say, when we have a solution that we can responsibly accept and acknowledge as such. Resolving a problem is not just a matter of having an answer, and not even of having an answer that happens to be correct. The actual resolution of a problem must be credible and convincing—with the answer provided in such a way that its cogency is recognizable. In general problem solving we want not just a response but an answer—a resolution equipped with a contextual rationale to establish its credibility in a way accessible to duly competent recipients. To be warranted in accepting a third-party answer we must ourselves have case-specific reasons to acknowledge it as correct. A response whose appropriateness as such cannot secure rational confidence is no answer at all.1 And in this regard 1
The salient point is that unless I can determine (i.e., myself be able to claim justifiedly) that you are warranted in offering your response, I have no adequate grounds to accept it as answering my question: it has not been made credible to me, irrespective of how justified you may be in regard to it. To be sure, for your claim
we are in the driver’s seat because in seeking acceptable answers for computers we are bound to mean acceptable to us. With these crucial preliminaries out of the way, we are ready to begin.

2. GENERAL-PRINCIPLE LIMITS ARE NOT MEANINGFUL LIMITATIONS

The question before us is: “Are there any significant cognitive problems that computers cannot solve?” Now it must be acknowledged from the outset that certain problems are inherently unsolvable in the logical nature of things. One cannot square the circle. One cannot co-measure the incommensurable. One cannot decide the demonstrably undecidable nor prove the demonstrably unprovable. Such tasks represent absolute limitations whose accomplishment is theoretically impossible—unachievable for reasons of general principle rooted in the nature of the realities at issue.2 And it is clear that inherently unsolvable problems cannot be solved by computers either.3

Other sorts of problems will not be unsolvable as such but will, nevertheless, demonstrably prove to be computationally intractable. For with respect to purely theoretical problems it is clear from Turingesque results in algorithmic decision theory (ADT) that there will indeed be computer insolubilia—mathematical questions to which an algorithmic respondent will give the wrong answer or be unable to give any answers at all, no matter how much time is allowed.4 But this is a mathematical fact which obtains of necessity, so that this whole issue can also be set aside for present purposes. For in the present context of universal problem solving (UPS) the necessitarian facts of Gödel-Church-Turing incompleteness become irrelevant. Here any search for meaningful problem-solving limitations will have to confine its attention to problems that are in principle solvable: demonstrably unsolvable problems are beside the point of present concern because an inability to do what is in principle impossible hardly qualifies as a limitation, seeing that it makes no sense to ask for the demonstrably impossible.

For present purposes, then, it is limits of capability, not limits of feasibility, that matter. In asking about the problem-solving limitedness of computers we are looking to problems that computers cannot resolve but that other problem solvers conceivably can. The limits that will concern us here are accordingly not rooted in conceptual or logico-mathematical infeasibilities of general principle nor in absolute physical impossibilities, but rather in performatory limitations imposed specifically upon computers by the world’s contingent modus operandi. And in this formulation the adverb “specifically” does real work by way of ruling out certain computer limitations as irrelevant. Thus some problems will simply be too large given the inevitable limitations on computers in terms of memory, size, processing time, and output capacity. Suppose for the moment that we inhabit a universe which, while indeed boundless, is nevertheless finite. Then no computer could possibly solve a problem whose output requires printing more letters or numbers than there are atoms in the universe. However, such problems ask computers to achieve a task that is not “substantively meaningful” in the sense that no physical agent at all—computer, organism, or whatever—could possibly achieve it. By contrast, the problems that concern us here are those that are not solution-precluding on the basis of inherent mathematical or physical impossibilities. To reemphasize: our concern is with the performative limitations of computers with regard to problems that are not inherently intractable in the logical or physical nature of things. It is on this basis that we must proceed here.

2

On unsolvable calculating problems, mathematical completeness, and computability see Martin Davis, Computability and Unsolvability (New York: McGraw-Hill, 1958; expanded reprint edition, New York: Dover, 1982). See also N. B. Pour-El and J. I. Richards, Computability in Analysis and Physics (Berlin: Springer Verlag, 1989) or, on a more popular level, Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979).

3

Some problems are not inherently unsolvable but cannot in principle be settled by computers. An instance is “What is an example of a word that no computer will ever use?” Such problems are inherently computer-inappropriate and for this reason a failure to handle them satisfactorily also cannot be seen as a meaningful limitation of computers.

4

On Gödel’s theorem see S. G. Shanker (ed.), Gödel’s Theorems in Focus (London: Croom Helm, 1988), a collection of essays that provide instructive, personal, philosophical, and mathematical perspectives on Gödel’s work.
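By way of illustration, the diagonal reasoning behind such Turingesque insolubilia can be compressed into a short sketch. The following fragment is merely illustrative: the function would_halt is a hypothetical halting-decider assumed purely for the sake of the argument, the point being precisely that no such function can exist.

    # Illustrative sketch of the diagonal argument behind algorithmic insolubilia.
    # Suppose, per impossibile, that a general halting-decider existed:

    def would_halt(program_source: str, program_input: str) -> bool:
        """Hypothetical: True iff the given program halts on the given input."""
        raise NotImplementedError("no such general procedure can exist")

    def diagonal(program_source: str) -> None:
        """Do the opposite of whatever would_halt predicts about self-application."""
        if would_halt(program_source, program_source):
            while True:          # predicted to halt, so loop forever
                pass
        return                   # predicted to loop, so halt at once

    # Applying diagonal to its own source code would force would_halt to be wrong
    # either way; hence no algorithmic respondent can answer every such question
    # correctly, no matter how much time is allowed.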
3. PRACTICAL LIMITS: INADEQUATE INFORMATION

Often the information needed for credible problem-resolution is simply unavailable. Thus no problem-solver can at this point in time provide credible answers to questions like “What did Julius Caesar have for breakfast on that fatal Ides of March?” or “Who will be the first auto accident victim of the next millennium?” The information needed to answer such questions is just not available at this stage. In all problem-solving situations, the performance of computers is decisively limited by the quality of the information at their disposal. “Garbage in, garbage out,” as the saying has it. But matters are in fact worse than this. Garbage can come out even where no garbage goes in. One clear example of the practical limits of computer problem-solving arises in the context of prediction. Consider the two prediction problems set out in Display 1.

___________________________________________________
Display 1

Case 1
Data: X is confronted with the choice of reading a novel by Dickens or one by Trollope. And further: X is fond of Dickens.
Problem: To predict which novel X will read.

Case 2
Data: Z has just exactly $10.00. And further: Z promised to repay his neighbor $7.00 today. Moreover, Z is a thoroughly honest individual.
Problem: To predict what Z will do with his money.
___________________________________________________

On first sight, there seems to be little difficulty in arriving at a prediction in these cases. But now suppose that we acquire some further data to enlarge our background information: pieces of information supplementary to—but
nowise conflicting with or corrective of—the given premisses:

Case 1: X is extremely, indeed inordinately fond of Trollope.
Case 2: Z also promised to repay his other neighbor the $7.00 he borrowed on the same occasion.

Note that in each case our initial information is nowise abrogated but merely enlarged by the additions in question. But nevertheless in each case we are impelled, in the light of that supplementation, to change the response we were initially prepared and rationally well advised to make. Thus when I know nothing further of next year’s Fourth of July parade in Centerville U.S.A., I shall predict that its music will be provided by a marching band; but if I am additionally informed that the Loyal Sons of Old Hibernia have been asked to provide the music, then bagpipes will now come to the fore. It must, accordingly, be recognized that the search for rationally appropriate answers to certain questions can be led astray not just by the incorrectness of information but by its incompleteness as well. The specific body of information that is actually at hand is not just important for problem resolution, it is crucial. And we can never be unalloyedly confident of problem-resolutions based on incomplete information, seeing that further information can always come along to upset the applecart. As available information expands, established problem-resolutions can always become destabilized. One crucial practical limitation of computers in matters of problem solving is thus constituted by the inevitable incompleteness (to say nothing of potential incorrectness) of the information at their disposal. And here the fact that computers can only ever ingest finite—and thus incomplete—bodies of information means that their problem-resolving performance is always at risk.
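How such supplementary, nowise conflicting information overturns an initially warranted resolution is easy to mimic in miniature. The following sketch is purely illustrative (its rule set and the wording of its premisses are contrived for the occasion): it encodes Case 1 of Display 1 as a naive rule-based predictor whose verdict flips once the further datum about Trollope is admitted, even though nothing in the original data is retracted.

    # Toy illustration of Case 1: a prediction warranted by one body of data is
    # overturned when that body is merely enlarged, not corrected.

    def predict_novel(facts: set) -> str:
        """Return the rationally indicated prediction, given the facts at hand."""
        if "X is inordinately fond of Trollope" in facts:
            return "the Trollope novel"
        if "X is fond of Dickens" in facts:
            return "the Dickens novel"
        return "no prediction warranted"

    initial = {"X must choose between a Dickens novel and a Trollope novel",
               "X is fond of Dickens"}
    print(predict_novel(initial))     # -> the Dickens novel

    enlarged = initial | {"X is inordinately fond of Trollope"}
    print(predict_novel(enlarged))    # -> the Trollope novel

    # Nothing in `initial` has been abrogated, yet the rationally appropriate
    # answer has changed: resolutions based on incomplete information remain
    # defeasible.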
Computers are physical devices: they are subject to the laws of physics and limited by the realities of the physical universe. In particular, since a computer can process no more than a fixed number of bits per second per gram, the potential complexity of algorithms means that there is only so much that a given computer can possibly manage to do. Then there is also the temporal aspect. To solve problems about the real world, a computer must of course be equipped with information about it. But securing and processing information is a time-consuming process and the time at issue can never be reduced to an instantaneous zero. Time-constrained problems that are enormously complex—those whose solution calls for securing and processing a vast amount of data—can exceed the reach of any computer. At some point it always becomes impossible to squeeze the needed operations into available time. There are only so many numbers that a computer can crunch in a given day. And so if the problem is a predictive one it could find itself in the awkward position that it should have started yesterday on a problem only presented to it today. Thus even under the (fact-contravening) supposition that the computer can answer all of our questions, it cannot, if we are impatient enough, produce those answers as promptly as we might require them. Even when given, answers may be given too late.

This situation is emblematic of a larger issue. Any computer that we humans can possibly contrive is going to be finite: its sensors will be finite, its memory (however large) will be finite, and its processing time (however fast) will be finite.5 Moreover, computers operate in a context of finite instructions and finite inputs. Any representational model that functions by means of computers is of finite complexity in this sense. It is always a finitely characterizable system: Its descriptive constitution is characterized in finitely many information-specifying steps and its operations are always ultimately presented by finitely many instructions. And this array of finitudes means that a computer’s modelling of the real will never capture the inherent ramifications of the natural universe of which it itself is a constituent (albeit a minute one). Artifice cannot replicate the complexity of the real; reality is richer in its descriptive constitution and more efficient in its transformatory processes than human artifice can ever manage to realize. For nature itself has a complexity that is effectively endless, so that no finitistic model that purports to represent nature can ever replicate the detail of reality’s make-up in a fully comprehensive way, even as no architect’s blueprint-plus-specifications can possibly specify every feature of the structure that is ultimately erected. In particular, the complications of a continuous universe cannot be captured completely via the resources of discretized computer languages. All endeavors to represent reality—computer models emphatically included—involve some element of oversimplification, and in general a great deal of it. The fact of the matter is that reality is too complex for adequate

5
For a comprehensive survey of the physical limitations of computers see Theodore Leiber, “Chaos, Berechnungskomplexität und Physik: Neue Grenzen wissenschaftlicher Erkenntnis,” Philosophia Naturalis, vol. 34 (1997), pp. 23-54.
cognitive manipulation. Cognitive friction always enters into matters of information management—our cognitive processing is never totally efficient, something is always lost in the process; cognitive entropy is always upon the scene. But as far as knowledge is concerned, nature does nothing in vain and so encompasses no altogether irrelevant detail. Yet oversimplification always makes for losses, for deficiencies in cognition. For representational omissions are never totally irrelevant, so that no oversimplified descriptive model can get the full range of predictive and explanatory matters exactly right. Put figuratively, it could be said that the only “computer” that can keep pace with reality’s twists and turns over time is the universe itself. It would be unreasonable to expect any computer model less complex than this totality itself to provide a fully adequate representation of it, in particular because that computer model must of course itself be incorporated within the universe.

4. PERFORMATIVE LIMITS OF PREDICTION—SELF-INSIGHT OBSTACLES

Another important sort of practical limitation to computer problem-solving arises not from the inherent intractability of questions but from their unsuitability for particular respondents. Specifically, one of the issues regarding which a computer can never function perfectly is its own predictive performance. One critical respect in which the self-insight of computers is limited arises in connection with what is known as “the Halting Problem” in algorithmic decision theory. Even if a problem is computer solvable—in the sense that a suitable computer will demonstrably be able to find a solution by keeping at it long enough—it will in general be impossible to foretell how long a process of calculation will actually be needed. There is not—and demonstrably cannot be—a general procedure for foretelling with respect to a particular computer and a particular problem: “Here is how long it will take to find the solution—and if the problem is not solved within this timespan then it is not solvable at all.” No computer can provide general insight into how long it—or any other computer, for that matter—will take to solve problems. The question “How long is long enough?” demonstrably admits of no general solution here. And computers are—of necessity!—bound to fail even in much simpler self-predictive matters. Thus consider confronting a predictor with the problem posed by the question:
P1: When next you answer a question, will the answer be negative?

This is a question which—for reasons of general principle—no predictor can ever answer satisfactorily.6 For consider the available possibilities:
Answer given      Actually correct answer      Agreement?
YES               NO                           NO
NO                YES                          NO
CAN’T SAY         NO                           NO
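The same tabulation can be generated mechanically. The sketch below is merely illustrative: it enumerates the responses available to the predictor and checks each against the answer that would actually be correct once that very response has been given.

    # P1: "When next you answer a question, will the answer be negative?"
    # For each available response, compute the actually correct answer and
    # check for agreement.

    def actually_correct(response: str) -> str:
        # The predictor's next answer is the response itself; it is negative
        # exactly when that response is "NO", making "YES" the correct answer.
        return "YES" if response == "NO" else "NO"

    for response in ("YES", "NO", "CAN'T SAY"):
        correct = actually_correct(response)
        print(f"{response:9s}  correct: {correct:3s}  agreement: {response == correct}")

    # Every line reports agreement False: no available response can agree with
    # the fact that the giving of that very response brings about.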
On this question, there just is no way in which a predictive computer’s response could possibly agree with the actual fact of the matter. Even the seemingly plausible response “I can’t say” automatically constitutes a self-falsifying answer, since in giving this answer the predictor would automatically make “No” into the response called for by the proprieties of the situation. Here, then, we have a question that will inevitably confound any conscientious predictor and drive it into baffled perplexity. But of course the problem poses a perfectly meaningful question to which another predictor could give a putatively correct answer—namely, by saying: “No—that predictor cannot answer this question at all; the question will condemn a predictor (Predictor No. 1) to baffled silence.” But of course the answer “I am responding with baffled silence” is one which that initial predictor cannot cogently offer. And as to that baffled silence itself, this is something which, as such, would clearly constitute a defeat for Predictor No. 1. Still, that question which impelled Predictor No. 1 into perplexity and unavoidable failure presents no problem of principle for Predictor No. 2. And this clearly shows that there is nothing improper about that question as such. For while the question posed in P1 will be irresolvable by a particular computer, it could—in theory—be answered by other computers, and so is not irresolvable by computers-in-general. However, there are other questions that indeed are computer insolubilia

6
As stated this question involves a bit of anthropomorphism in its use of “you.” But this is so only for reasons of stylistic vivacity. That “you” is, of course, only shorthand for “computer number such-and-such”.
for computers-at-large. One of them is:

P2: What is an example of a predictive question that no computer will ever state?

In answering this question the computer would have to stake a claim of the form: “Q is an example of a predictive question that no computer will ever state.” And in the very making of this claim the computer would falsify it. It is thus automatically unable to effect a satisfactory resolution. However, the question is neither meaningless nor irresolvable. A non-computer problem solver could in theory answer it correctly. Its presupposition, “There is a predictive question that no computer will ever state,” is beyond doubt true. What we thus have in P2 is an example of an in-principle solvable—and thus “meaningful”—question which, as a matter of necessity in the logical scheme of things, no problem-solving computer can ever resolve satisfactorily. The long and short of it is that every predictor—computers included—is bound to manifest versatility-incapacities with respect to its own predictive operations.7

However, from the angle of our present considerations, the shortcoming of problems P1 and P2 is that they are computer irresolvable on the basis of theoretical general principles. And it is therefore not appropriate, on the present perspective—as explained above—to count this sort of thing as a computer limitation. Are there any other, less problematic examples?

5. PERFORMATIVE LIMITS: A DEEPER LOOK

At this point we must contemplate some fundamental realities of the situation confronting our problem-solving resources. The first of these is that no computer can ever reliably determine that all its more powerful compeers are unable to resolve a particular substantive problem (that is, one that is inherently tractable and not demonstrably unsolvable on logico-conceptual grounds). And in view of the theoretical possibility of ever more powerful computers this means that:
7
On the inherent limitation of predictions see the author’s Predicting the Future (Albany: State University of New York Press, 1997).
T1: No computer can reliably determine that a given substantive problem is altogether computer irresolvable.

Moreover, something that we are not prepared to accept from any computer is cognitive megalomania. No computer is, so we may safely suppose, ever able to achieve credibility in staking a claim to the effect that no substantive problem whatever is beyond the capacity-reach of computers. And this leads to the thesis:

T2: No computer can reliably determine that all substantive problems whatever are computer resolvable.

But just what is the ultimate rationale for these theses?

6. A COMPUTER INSOLUBILIUM

The time has come to turn from generalities to specifics. At this point we can confront a problem-solving computer with the challenging question:

P3: What is an example of a (substantive) problem that no computer whatsoever can resolve?

There are three possibilities here:

1. The computer offers an answer of the format “P is an example of a problem that no computer whatsoever can resolve.” For reasons already canvassed we would not see this as an acceptable resolution, since by T1 our respondent cannot achieve credibility here.

2. The computer responds: “No can do: I am unable to resolve this problem: it lies outside my capability.” We could—and would—accept this response and take our computer at its word. But the response of course represents no more than computer acquiescence in computer incapability.

3. The computer responds: “I reject the question as improper and illegitimate on the grounds of its being based on an inappropriate presupposition, namely that there indeed are problems that no
computer whatsoever can resolve.” We ourselves would have to reject this position as inappropriate in the face of T2. The response at issue here is one that we would simply be unable to accept at face value from a computer.

It follows from such deliberations that P3 is itself a problem that no computer can resolve satisfactorily. At this point, then, we have realized the principal object of the discussion: We have been able to identify a meaningful concrete problem that is computer irresolvable for reasons that are embedded—via theses T1 and T2—in the world’s empirical realities. For—to reemphasize—our present concern is with issues of general problem solving and not algorithmic decision theory.

To this point, however, our discussion has not, as yet, entered the doctrinal terrain of discussions along the lines of Hubert L. Dreyfus’ What Computers Still Can’t Do.8 For the project that is at issue there is to compare computer information processing with human performance in an endeavor to show that there are things that humans can do that computers cannot accomplish. However, the present discussion has to this point looked solely to problems that computers cannot manage to resolve, with the question of whether humans can or cannot resolve them remaining out of sight. And so there is a big question that yet remains untouched, namely: Is there any sector of this problem-solving domain where the human mind enjoys a competitive advantage over computers? Specifically:

P4: Are there problems that computers cannot solve satisfactorily but people can?

And in fact what we would ideally like to have is not just an abstract answer to P4, but a concrete answer to:

P5: What is an example of a problem that computers cannot solve satisfactorily but people can?
8
Cambridge MA: MIT Press, 1992. This book is an updated revision of his earlier What Computers Can’t Do (New York: Harper Collins, 1972).
What we are now seeking is a computer-defeating question that has the three characteristics of (i) posing a meaningful problem, (ii) being computer-unsolvable, and (iii) admitting of a viable resolution by intelligent non-computers, specifically humans.9 This, then, is what we are looking for. And—lo and behold!—we have already found it. All we need do is to turn around and look back to P3. After all, P3 is—so it was argued—a problem that computers cannot resolve satisfactorily, and this consideration automatically provides us—people that we are—with the example that is being asked for. In presenting P3 within its present context we have in fact resolved it. And moreover P5 is itself also a problem of just this same sort. It too is a computer-irresolvable question that people can manage to resolve.10 In the end, then, the ironic fact remains that the very question we are considering, regarding cognitive problems that computers cannot solve but people can, provides its own answer.11 P3 and P5 appear to be eligible for membership in the category of “academic questions”—questions that are effectively self-resolving—a category which also includes such more prosaic members as: “What is an example of a question formulated in English?” and “What is an example of a question that asks for an example of something?” The presently operative mode of computer unsolvability thus pivots on the factor of self-reference—just as is the case with Gödelian incompleteness.

9
For some discussion of this issue from a very different point of approach see Roger Penrose, The Emperor’s New Mind (New York: Oxford University Press, 1989).
10
We have just claimed P5 as computer irresolvable. And this contention, of course, entails (∃P)(∀C)~C res P or equivalently ~(∀P)(∃C)C res P. Letting this thesis be T3, we may recall that T2 comes to (∀C)~C det ~T3. If T3 is indeed true, then this contention—that is, T2—will of course immediately follow.
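Spelled out, with the tacit assumption made explicit that reliable determination is factive (nothing false can be reliably determined), the inference in the preceding note runs:

\[ T_3:\;\; \exists P\,\forall C\,\neg(C\ \mathrm{res}\ P) \;\;\equiv\;\; \neg\,\forall P\,\exists C\,(C\ \mathrm{res}\ P) \]
\[ T_2:\;\; \forall C\,\neg\bigl(C\ \mathrm{det}\ \neg T_3\bigr) \]
\[ \bigl[(C\ \mathrm{det}\ S)\rightarrow S\bigr]\;\wedge\;T_3 \;\;\Longrightarrow\;\; T_2 \]

For were some C to determine ~T3, factivity would make ~T3 true, contradicting T3; hence, given T3, no such C exists, which is just what T2 asserts.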
11
Someone might suggest: “But one can use the same line of thought to show that there are computer-solvable problems that people cannot possibly solve by simply interchanging the reference to ‘computers’ and ‘people’ throughout its preceding argumentation?” But this will not do. For the fact that it is people that use computers means that one can credit people with computer-provided problem solutions via the idea that people can solve problems with computers. But the reverse cannot be claimed with any semblance of plausibility. The situation is not in fact symmetrical and so the proposed interchange will not work. This issue will be elaborated in the next section.
To be sure, their inability to answer the question “What is a question that no computer can possibly resolve?” is—viewed in a suitably formidable perspective—a token of the power of computers rather than of their limitedness. After all, we see the person who maintains “I can’t think of something I can’t accomplish” not as unimaginative but as a megalomaniac—and one who uses “we” instead of “I” as only slightly less so. But nevertheless, in the present case this pretension to strength marks a point of weakness. The key issue is whether computers might be defeated by questions which other problem solvers, such as humans, could overcome. The preceding deliberations indicate that there indeed are such limitations. For the ramifications of self-reference are such that no computer could satisfactorily answer certain questions regarding the limitation of the general capacity of computers to solve questions. But humans can in fact resolve such questions because, with them, no self-reference is involved.

But could not a computer simply follow in the wake of our own reasoning here and instance P3 and P5 as self-resolving? Not really. For in view of the considerations adduced in relation to T1-T2 above, a computer cannot convincingly monitor the range of computer-tractable problems. And so, the responses of a computer in this sort of issue simply could not secure rational conviction.

7. CONCLUSION

The key lesson of these deliberations is thus clear: computers can reduce but not eliminate our cognitive limitations. However much computers may surpass us in capacity, the fact remains that the same sorts of limitations that mark us as finite knowers will afflict computers as well. Computers can no more annihilate our cognitive limitations than machines can eliminate our physical limitations.

And the matter has yet another aspect. It has to be realized that computers are our instruments and not our replacements. To accept something as true is always a matter of acting on one’s own responsibility. When one endorses something on the basis of another’s say-so—be it another person, or a reference source, or a computer auxiliary—the fact remains that the responsibility for so proceeding lies on one’s own shoulders. In accepting p on the basis of X’s say-so I commit myself not only to p but to X’s veracity and reliability as well. In matters relating to p I may well trust your judgment more than mine. But if I do not trust my
judgment in matters of your reliability, your views on p will not help me in settling the issue. And here it does not matter whether you are a person or a computer. In the final analysis, the use of computers is no cure for cognitive debility, for where there is no self-trust at all computers cannot be of aid.12
12
On the issues of this chapter see also the author’s Limits of Science (2nd ed.; Pittsburgh: University of Pittsburgh Press, 1999).
CONCLUSION
___________________________________________________
SYNOPSIS

(1) We cannot achieve firm knowledge regarding the extent of our ignorance.
(2) Insofar as the development of knowledge is unpredictable, so also will be the action of agents who base what they do upon what they take themselves to know.
(3) Cognitive finitude paves a smooth pathway to philosophical realism.
___________________________________________________

1. BEING REALISTIC ABOUT KNOWLEDGE AND IGNORANCE
While there indeed are cognitive insolubilia—and we can plausibly identify some of them—the fact remains that detailed knowledge about the extent of our ignorance is unavailable to us. For extent turns on the size-ratio of the manifold of what one does know to the manifold of what one does not, and it is impossible in the nature of things for us to get a clear fix on the latter. The situation here is not that of a crossword puzzle—or of geographic exploration—where the size of the terra incognita can somehow be measured in advance. We can form no sensible estimate of the imponderable domain of what can be known but is not. To be sure, we can manage to compare what one person or group knows with what some other person or group knows. But mapping the realm of what is knowable as such is beyond our powers.1 And so we return to one of the salient points of these deliberations—the ironic but in some ways fortunate fact is that one of the things about which we are most decidedly ignorant is the detailed nature of our ignorance itself. We simply cannot make a reliable assessment of the extent and substance of our ignorance.
1
On these issues of the magnitude and growth of knowledge see the author’s Scientific Progress (Oxford: Reidel, 1978).
In particular, the world's descriptive complexity is literally limitless. For it is clear that the number of true descriptive remarks that can be made about a thing—about any concrete element of existence, and, in specific, any particular physical object—is theoretically inexhaustible. Take a stone for example. Consider its physical features: its shape, its surface texture, its chemistry, etc. And then consider its causal background: its genesis and subsequent history. And then consider its functional aspects as reflected in its uses by the stonemason, or the architect, or the landscape decorator, etc. There is, in principle, no end to the different lines of consideration available to yield descriptive truths, so that the totality of potentially available facts about a thing—about any real thing whatever—is bottomless. John Maynard Keynes’s “Principle of Limited Variety” is simply wrong: there is no inherent limit to the number of distinct descriptive kinds or categories to which the things of this world can belong. As best we can possibly tell, natural reality has an infinite descriptive depth. It confronts us with a “Law of Natural Complexity”: There is no limit to the number of natural kinds to which any concrete particular belongs.2 And this means that the extent of potential knowledge—and of potential ignorance!—is limitless as well.

2. RAMIFICATIONS OF COGNITIVE FINITUDE

The reality of cognitive finitude is something that we must—and can—come to terms with, a “fact of life” that represents an important aspect of what makes us into the sorts of beings we indeed are. But one important point deserves to be noted in this connection, namely, that our cognitive imperfection means that the universe itself is unpredictable. For a world in which it transpires, as a matter of basic principle, that the future knowledge and thereby the future thoughts of intelligent beings are (at least in part) not predictable is one whose correlative physical phenomena are unpredictable as well. After all, what intelligent beings do will always in some ways reflect the state of their knowledge, and where this is not predictable, neither will their actions be. And so, as long as intelligent agents continue to exist within the world and act therein under the guidance of their putative knowledge, the world is—and is bound to be—in part unpredictable. The extinction of intelligent agents would be required to make the world pervasively predictable.3

2
For further detail on these issues see the author’s Complexity (New Brunswick, NJ: Transaction Publishers, 1998).
Of course, if the cognitive efforts of finite intelligences stood outside and apart from nature, things might be different in this regard, since physical predictability might then be combined with a “merely epistemic” mental unpredictability. But this, clearly, is a prospect that is implausible in the extreme.

3. INSTRUCTIONS OF REALISM

One of the most fundamental aspects of our concept of a real thing is that our knowledge of it is imperfect—that the reality of something actual—any bit of concrete existence—is such as to transcend what we can know, since there is always more to be said about it. And the inescapable fact of fallibilism and limitedness—of our absolute confidence that our putative knowledge does not do full justice to what reality is actually like—is surely one of the best arguments for a realism. After all, the truth and nothing but the truth is one thing, but the whole truth is something else again. And if a comprehensively adequate grasp of “the way things really are” is beyond our powers, then this very circumstance itself constitutes a strong ground for holding that there is more to reality than we humans do or can know about. The cognitive intractability of things is accordingly something about which, in principle, we cannot delude ourselves, since such delusion would vindicate rather than deny a reality of facts independent of ourselves. It is the very limitation of our knowledge of things—our recognition that reality extends beyond the horizons of what we can possibly know about it—that perhaps best betokens the mind-transcendence of the real. The very inadequacy of our knowledge militates towards philosophical realism because it clearly betokens that there is a reality out there that lies above and beyond the inadequate gropings of mind.
3
To be sure, there doubtless are other sources of unpredictability in nature, apart from the doings of intelligent agents.