Philosophische Analyse Philosophical Analysis Herausgegeben von / Edited by Herbert Hochberg • Rafael Hüntelmann • Christian Kanzian Richard Schantz • Erwin Tegtmeier Band 46 / Volume 46
George Englebretsen
Robust Reality An Essay in Formal Ontology
Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available in the Internet at http://dnb.d-nb.de.
North and South America by Transaction Books Rutgers University Piscataway, NJ 08854-8042 [email protected] United Kingdom, Ireland, Iceland, Turkey, Malta, Portugal by Gazelle Books Services Limited White Cross Mills Hightown LANCASTER, LA1 4XS [email protected]
Livraison pour la France et la Belgique: Librairie Philosophique J.Vrin 6, place de la Sorbonne; F-75005 PARIS Tel. +33 (0)1 43 54 03 47; Fax +33 (0)1 43 54 48 18 www.vrin.fr
2012 ontos verlag
P.O. Box 15 41, D-63133 Heusenstamm
www.ontosverlag.com
ISBN 978-3-86838-133-7
2012
No part of this book may be reproduced, stored in retrieval systems or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use of the purchaser of the work.
Printed on acid-free paper, FSC-certified (Forest Stewardship Council).
This hardcover binding meets the International Library standard.
Printed in Germany by CPI buch bücher.de
Dedication For five of the most robust constituents of the real world: Aaron, Samuel, Nathan, Carlee and Emma
Contents

Preface

Part One Seiendes: Structural Ontology
I Introduction
II Terms and Things
  A Classes, Categories, and Types
  B Trees
    1 Aristotle and Ryle
    2 Tree Rules
      a Translation Rules
      b A Note on Vacuousity
      c Levels of Rectitude
  C Bearing Fruit

Part Two Sein: Metaphysics au Monde
III From Formal Ontology to Mondial Metaphysics
  A Term Logic
    1 Syntax
      a Immaculate Predication?
      b No Predication Without Copulation!
    2 Semantics
  B The World and Existence
    1 Bare Facts
    2 Naked Truths
    3 An Aristotelian Conjecture
    4 Tense, Vacuousity and Truth
IV Reality
  A Nonfiction: Keeping It Real
    1 À Propos of Noneism
    2 On What ‘There’ Is
  B Making Things Up
    1 Violators
    2 Confabulators
    3 Intruders
  C Seeing (as), Believing, and Knowing

Concluding Remarks
Appendices:
  Appendix ∀ Strawson on Truth
  Appendix ∃ Mondial Logic
Bibliography
Index
Preface

“What is the use of a book,” thought Alice, “without any pictures or conversations?”
Lewis Carroll

I know very well what the temptations of the devil are, and one of the greatest is to give a man the idea that he can compose and publish a book.
Cervantes
Talk is cheap. Or, as Shakespeare wrote: “Words pay no debts.” We can say whatever we want. We can babble; we can utter nonsense; we can produce sensible sentences; we can speak the truth. Caution: talk may be cheap but it is (except for babble) not completely free of charge. The price for speaking in sentences (nonsense or otherwise) is adherence to the basic rules of grammar for the particular natural language we are using. The price for speaking sense adds the tariff of sticking to elementary requirements of logic and semantics. Speaking the truth is hardly cheap at all; in fact, it’s downright expensive. Truth demands, in addition to the usual price for speaking in sensible sentences, that those sentences being used to make statements (rather than to ask questions, make recommendations, warn, command, etc.) express propositions that correspond to facts. Speakers of truths have to get things right, say how things in fact are, and that (unlike being grammatical or logical or semantically correct) is often a very difficult thing to do. According to some philosophers it’s quite impossible.

Words and things. A pair of familiar enough topics. Let’s turn to things. Some things are real, others are not. The same goes for properties of things. For example, the property of being red is real. It characterizes some real things (Mars, most firetrucks, this apple). By contrast, neither the property of being a winged horse nor the property of being Sherlock Holmes is a real property. They characterize nothing that is real. Since they do characterize fictitious things, we might say these properties are fictitious as well. Some properties characterize nothing at all, real or otherwise. Nothing has the property of being faster than light or the property of being Obama’s son. What shall we say of such cases? Adhering to the principle that every property (real or unreal) characterizes some thing, let us say that there just is no property of being faster than light or being Obama’s son. So, among properties, some are real, some are unreal, and some are merely purported (they are pseudo-properties); these last are not properties at all. There is more. The status of some properties is a matter of genuine dispute. Consider the property of being a ghost or the property of being divine. For some people such properties are real properties of real things; for others they are unreal properties of unreal things; for still others they are only pseudo-properties. It’s important to notice that real properties characterize not only real things, for they may very well characterize unreal things as well. Red characterizes Mars but also Rudolph’s nose. Unreal properties, however, though they characterize unreal things, do not characterize any real things.

I have yet to come clean about what I mean by ‘thing’. Be patient. Whatever things are, and whether real or unreal, they have properties. The principle here is the sister of the one stated above. The present principle is that every thing (real or unreal) is characterized by some property. Thus, together they guarantee that there are no unhad properties and no unpropertied things (what philosophers call ‘bare particulars’). Berkeley endorsed these principles in his Principles when he wrote that “it seems no less absurd to suppose a substance without accidents than to suppose accidents without a substance.” Some things are real. Some properties are real. But there is still more. Some worlds are real. Propositions are real. Concepts are real. I can see that my debts to you, dear reader, are adding up quickly. Words, sentences, sense, nonsense, truth, things, properties, real, unreal, worlds, propositions, concepts ... the list grows. The remainder of this essay is my attempt to get clear (or, at least, clearer) about all of these, to pay off some of my debts as best I can. Actually, I’ve already tried to pay off part of this debt in a previous book, Bare Facts and Naked Truths, where I formulated a theory that was aimed in part at shedding light on the idea of truth and many of the others mentioned above. Nonetheless, some of those ideas beg for much more clarification. The idea of what is real will be central among those upon which the present essay focuses.
xi I have another debt, of a different order, to pay off. As has so often been the case, I am deeply indebted to Fred Sommers, who only late in his philosophical career has begun to garner the amount of critical attention and respect his work has so long deserved (see Oderberg 2005). Every path through the jungles of logic, language, and ontology that I have traveled has been marked with direction signs, danger warnings, and large numbers of helpful clues left by him. He blazed every one before I even knew they were there. Sommers began his work on a theory of ordinary language and its relation to what there is in the late 1950s. In a series of papers published over the next two decades, he built, developed, and exploited that theory, which he called the “tree theory.” His main thesis was this: Ordinary language is well-structured by the semantic relations that hold among its terms; what there is, the ontology, is well-structured by the inclusion relations among its categories of things; there is a one-to-one correspondence between terms and categories – ordinary language and ontology are structurally isomorphic. Sommers represented this structure graphically as an inverted tree. Throughout the 1960s, while continuing his own work on the theory, Sommers began to work on the development of a system of formal logic that would, unlike the standard predicate calculus, analyze statements into pairs of terms and functors applicable to them – the Term Functor Logic. Such a logic is intimately connected to the Tree Theory. The task of formulating the logic of terms was an enormous challenge, and Sommers pursued it in the face of steady opposition from many in the logical community. Throughout the final third of the twentieth century and into the present one he persisted in the perfection and defense of his logic. Along the way he was able to extract a number of rich ideas that flow from his theses on language, ontology, and logic. Prominent among these was a strong theory of truth by correspondence. Sommers and others have written extensively on virtually all aspects of his logic and his truth theory. In 1982 he published The Logic of Natural Language. This was followed by an anthology of essays on the logic by various philosophers and logicians, The New Syllogistic, which I edited in 1987. In 1996 I published Something to Reckon With, which attempted to provide some of the historical and philosophical background of Sommers’ logic. Then, in 2000, he and I published a full version of the logic of terms and functors in textbook form – An Invitation to Formal Reasoning. Finally in 2006, I published Bare Facts and Naked Truths, an attempt to give a complete account and defense of Sommers’ version of the
correspondence theory of truth. While the tree theory is implicit in much of this work, and various elements of the theory have been used explicitly, the theory has not been given an extensive, unified treatment of its own. Sommers did devote a chapter of The Logic of Natural Language to a summary of it. In 2005 David Oderberg edited a festschrift for Sommers, The Old New Logic, by a number of philosophers, logicians, psychologists, and computer theorists. The main focus of these essays was the term-functor logic. In my contribution to that collection I gave an extensive account of his half-century of work, devoting part of it to a summary of the tree theory. Sommers said often, especially in The Logic of Natural Language, that he intended to write a book on the theory. He never did. I hope to fulfill that intention here.

The origins of this book are complex. Back in 2005 I began to reflect on the tree theory. I had just finished my essay for the Oderberg collection. One of the things I had claimed there was that the tree theory, the term functor logic and the refurbished correspondence theory of truth together constitute a single, grand unified philosophical whole. I had worked hard on the tree theory in the ’60s and ’70s, but had then followed Sommers into term logic and the theory of truth. As it happened, the term logic and the truth theory both raised questions of a generally metaphysical nature. Since I had said that all three parts of Sommers’ work were united into a single omnibus theory, I began to look back to the tree theory and to seek some specific clues about just how the early work on ontology connected with and informed the new metaphysical ideas Sommers had begun harvesting. In 2006 I began working on a book that was to consist of two parts: (1) an account of the tree theory, including a historical background and an appraisal of reactions to the theory, and (2) a summary of Sommers’ newer ideas regarding metaphysical issues, with an attempt to integrate the older and newer ideas. By 2009 I had nearly completed part (1), but then, as so often happens with the best laid plans, things changed. Assuming, no doubt based on my sketchy account of what I was up to, that my new book would be primarily about the tree theory, Sommers wrote to me that he was hard at work on a new book of his own, a book in which he was laying out, once and for all, in detail his new metaphysical theory (“mondialism”). Needless to say, that theory and its relation to the tree theory were to be the subject of my part (2). He asked me to help him with his book and I was both eager and happy to do so. Anything I had to say could wait – not so for Sommers (then well into his ninth decade). Sommers’ book, The Mondial and the Ontological, is forthcoming. As it turned out, much of the
work of tying together the tree theory, the term logic, the truth theory and mondialism still needs to be done. So I returned once again to that task. Because I owe so much to Fred Sommers, I offer the present book as a feeble attempt to fulfill his wish to provide a full account of the tree theory, of the structure of language, its relation to ontology, and the many fruits that can be harvested from it – especially when watered by logic and ripened in the sunlight of truth.

I also owe a word of gratitude to others who have helped me in various ways during the course of completing this work. Foremost among these are Harvey White, Ian Hacking, David Oderberg, Charles Sayward, Stephen Voss, and the late Philip L. Peterson. It’s often said that the biggest questions involve being, truth, goodness and beauty. This book tries to address the first; a few years ago I wrote a book dealing with the second. As for the last two, I leave those to my wife, the artist Libbey Griffith, who takes care of them quite handily – and in many ways.
PART ONE: SEIENDES
STRUCTURAL ONTOLOGY

Ontology is the philosophical discipline that tries to find out what there is; what entities make up reality, what is the stuff the world is made from? Thus, ontology is part of metaphysics, and in fact it seems to be about half of all of metaphysics.
Thomas Hofweber

There can be no philosophical science of ontology.
Crispin Wright
I
Introduction

I wish to add a final irreverent note. There is a critical technique that is much older than Kant’s; it legislates for metaphysics from the theory of assertion, not from the theory of knowledge. Plato’s Parmenides, Aristotle’s Analytics, and parts of his Metaphysics are the representatives of a critique which uses philosophical logic, not epistemology, to correct metaphysics and to set its bounds. This classical style of critique has witnessed a refreshing revival in the twentieth century at the hands of Husserl, Russell, and Ryle. I cannot help thinking that it would be most unfortunate if the program implicit in the best work of the above philosophers were sacrificed for still one more return to Kant. For it may turn out that once we solve the right problems in philosophical logic we shall discover that Kant belongs to his century and not to ours.
Fred Sommers
In his Categories (chapters 2-5) Aristotle famously argued that “of things that are” the most fundamental are particular objects, things that are “numerically one,” individuals. They alone are ontologically independent, not requiring any other things for their existence. They are “primary substance.” Thus, everything else is non-fundamental, dependent (ultimately) on primary substance for existence. So, for Aristotle, the distinction between what is ontologically fundamental and what is not is due to the distinction between ontological independence and ontological dependence. One way in which something can depend on another is mereological – the dependent is a part of the independent. But Aristotle was especially interested in two kinds of dependence: being said of a subject and being in a subject. Primary substances are dependent in neither of these ways. They are the subjects that dependent things are either said of or are in (or both). So-called secondary substances (the genera and species to which primary substances belong) are said of (thus ontologically dependent on) primary substances, particular individuals. Accidents, properties that do not determine “what it is,” are also said of individual subjects. They determine how many, or where, or when, etc., such subjects are. However, they are doubly dependent (ontologically) because not only are they said of subjects, they are in them as well. An instance of a property (his examples of properties are white and knowledge of grammar) is in a primary substance. For example, knowledge of grammar is said of Socrates (‘Socrates knows grammar’) and a particular instance of that property, Socrates’-knowledge-of-grammar, is in Socrates (viz., in his soul). Such instances of properties are ontologically dependent because they are in individual subjects, not because they are said of those subjects. We say that Socrates knows grammar (because his knowledge of grammar is in his soul); we do not say that Socrates knows his knowledge of grammar. Later philosophers called things that are in but not said of primary substances modes or tropes. Aristotle held that properties were not only said of subjects but also in them because instances of them were in those subjects.

We might summarize Aristotle’s account of ontological fundamentals (at least as expressed in Categories) as follows: Individual objects, primary substances (e.g., Socrates), are ontologically fundamental. Their existence is independent of anything else. Everything else is
ontologically dependent on them. Ontologically dependent things are (1) secondary substances (genera and species such as animal and man), (2) properties such as white and knows grammar, and (3) tropes, instances of properties (e.g., the white of this wall, the knowledge of grammar in Socrates’ soul).

Modern ontologists have generally concentrated on formulating various taxonomies of “things that are.” Usually this has taken the form of dividing things into ontological categories according to certain principles involving fundamental distinctions (universal/particular, singular/general, natural/non-natural kinds, objects/properties, subjects/predicates, simple/complex, etc.). The more interesting ones tend to follow Aristotle in taking the fundamental division to be between ontological independence and ontological dependence. Much of the weight of such taxonomic projects is then shifted to theses about the various intramural and intermural relations involving kinds, sorts, types, categories, etc., and their elements. For example, attempts are made to find principled reasons for saying that there can(not) be bare particulars or that there can(not) be uninstantiated properties. Again, the more interesting ones take some guidance from Aristotle in such matters. I’m not averse to doing ontology in this way, and indeed, I have spent much of my philosophical career doing just that. But in this book I want to reveal a second way, following a path blazed by Fred Sommers (his main works are found in the Bibliography) after he had fully explored the territory opened by following the first way. Where the first path leads to an investigation of “what there is,” concentrating on categories, kinds, individuals, properties, universals, and the like, the second begins by taking the actual world as ontologically fundamental.

Existence is a notion to be handled with care. Our inclination is to take existence (and even non-existence) to be a property of the things to which we ascribe it. We say the planet Mars exists (or that the Greek god Mars does not exist), and take ourselves to be describing Mars. However, Hume (in the Treatise, I.ii.6 and I.iii.7) and Kant (in the Critique, A598/B626-A600/B628) taught us an invaluable lesson: ‘exist’ is “not a real predicate.” The ascription of existence (or non-existence) to a thing provides no information concerning that thing. Existence is not a property of individual things (Aristotelian primary substances). Though the Hume-Kant lesson is generally honored in word (talk is cheap) by philosophers, it is not as often honored in deed. After all, ordinary speakers still use such expressions as ‘exist’. And surely such uses are not senseless. If existence
is not a property of individuals, then what is it? If it isn’t senseless to ascribe it at all, then what (if not individuals) could it be a property of? In The Foundations of Arithmetic, §53, Frege offered an answer: existence is a “second-order predicate,” a property of concepts. His ontology admits only concepts and objects (individuals). Since existence cannot be a property of the latter, it must be a property of the former. To say, for example, that ghosts exist is not to say anything about ghosts; rather, it is to say something about the concept of ghost (viz., that that concept is not empty, some thing satisfies it, falls under it). For Frege, the “existential quantifier”, (∃x), should be read as ‘something, x, is such that it falls under the concept...’. Thus ‘Ghosts exist’ (‘There are ghosts’, ‘Something is a ghost’, etc.), formulated as ‘(∃x)Gx’, simply says that the concept of ghost is instantiated, not empty.

Like Frege, Russell also learned the Hume-Kant lesson. However, unlike Frege, Russell (in The Philosophy of Logical Atomism, lecture V) took existence to be “essentially a property of a propositional function.” Thus the statement ‘Ghosts exist’, ‘(∃x)Gx’, does not ascribe any property (viz., existence) to ghosts, nor (with Frege) to the concept of ghost. For Russell (at least in 1918) the statement ascribes the property of being sometimes true to the propositional function ‘Gx’, saying that at least one “determined” instance of it (i.e., one in which the undetermined variable, x, has been replaced by a determinate expression such as a name, perhaps ‘Casper’, or a definite description, perhaps ‘Hamlet’s father’) is true. Quine’s “to be is to be the value of a variable” expresses this view neatly and memorably.

To repeat, the Hume-Kant lesson is invaluable. Existence and non-existence are not properties of individuals. But there are reasons to reject both Frege’s claim that they are properties of concepts and Russell’s claim that they are properties of propositional functions. I will not delay our progress by discussion now of those reasons, except to note that if either claim is accepted, then a non-circular account of what makes any truth true will not be forthcoming. But, if Frege’s and Russell’s claims are to be rejected, then we are back to the question left unanswered by Hume and by Kant: If existence and non-existence are not properties of individuals, and if their ascriptions are not senseless, then what are they properties of? Sommers has offered what he calls a commonsense realist answer to that question. He not only answers the question left by Hume and Kant, he also avoids the undesired consequences of the answers offered by Frege and by Russell. Suppose I tell you that there are no mice in my house.
What am I talking about? Surely not the mice in my house – there are none. Nor would it make any sense to say that I’m talking about all the mice in the world (saying of them that they are to be located outside of my house). Clearly what I’m talking about is my house. Consider the soup I just had for lunch. It contains carrots, onions, celery, tomato juice and salt. It contains no meat, no beans, no peas. Now suppose I describe my soup to you. I might say that it was hot, nutritious and not expensive. I might go on to say that it was salty, oniony, had carrots, but was not meaty and had no peas. When I say that the soup is salty I’m not talking about the salt in the soup. When I say that it has no peas I’m not talking about peas (e.g., all the peas in the world are missing from my soup and to be found elsewhere!). All of my descriptions are of the soup. Notice that when I describe the soup as hot, nutritious and not expensive, I’m ascribing properties that the soup has as a whole. When I describe the soup as salty or pea-less, I’m ascribing properties that the soup has by virtue of what is or is not in it. Soups are totalities; they are constituted by their ingredients – their constituents.

We deal with totalities all the time. Some totalities are variable. My soup will be the same soup if, after a few bites, I add some pepper. The totality of cells that make up my body has both diminished and been augmented many times during my life, but it is still my body. Sets are invariable totalities. Adding or subtracting members, constituents, always results in a different set. Worlds are (variable) totalities. The actual world is a totality of all the things that are in it, all of its constituents. Alice’s Wonderland is a totality. The world of Greek mythology, the world of Sherlock Holmes and the world as it was before 9/11, are all totalities. Parts of totalities can be totalities. All the things in my house constitute a totality. All the things in Canada, all the things in North America, all the things in London during the 19th century, are totalities. Any totality (e.g., my soup, my house, Canada, the world, Wonderland) can be described in terms of properties it has in virtue of its constituents. These are constitutive properties. Like any property, constitutive properties can be positive or negative. Married was a positive property of Socrates; unmarried was a negative property of Spinoza. Salty is a positive constitutive property of my soup; meatless is a negative constitutive property of my soup. Miceless is a negative property of my house. Politician-ish, having politicians, is a positive constitutive property of the world; ghostless is a negative constitutive property of the world (though a positive constitutive property of Hamlet). Totalities can be
positively constitutively characterized by what they have in them; they can be negatively constitutively characterized by what they lack. Generally, if T is a totality, to say that T is X-ish is to say that T has some X as a constituent; to say that T is X-less is to say that T lacks any X.

Every statement (sentence used to express a proposition and implicitly claim truth for it) that we make is made relative to some determinable (though usually not explicitly determined) domain of discourse. Moreover, every such domain is a totality (usually the actual world, some specifiable part of the world, some non-actual world, or some abstract domain like the set of natural numbers). Normally the context of a statement provides ample clues as to what the appropriate domain of discourse is. Normally, when I say that no horses can fly, my domain of discourse is the actual world; when I say that Pegasus was a flying horse it must be understood that my domain of discourse is the world of Greek mythology. In most ordinary discourse situations a speaker’s “default” domain is the actual world or some salient spatial or temporal part of it. You cannot hope to determine the truth or falsity of what I state without understanding the domain relative to which I speak. Now this is where existence and non-existence make their real entrance. As we’ve seen, to predicate existence or non-existence of something is to claim that the relevant domain of discourse has or lacks it. Existence and non-existence are not properties of individuals – they are properties of domains (relevant totalities). That’s why to say that there are no flying horses, that flying horses do not exist, is not to say anything about flying horses. To say, relative to my soup, that there are carrots is to say something about the soup – not the carrots. Existence (non-existence) is not a property of individuals (the Hume-Kant lesson). But it is also not a property of concepts or of propositional functions. Existence is a property of domains, totalities relative to which statements are made.

Like existence, truth is a notion to be handled with care. A commonsense realist account of truth is in terms of correspondence. Statements are sentences used to express propositions for which truth is implicitly claimed. A statement is true if and only if the proposition it is used to express is true. A proposition is true just in case it corresponds to a fact (which makes it true). In his famous 1950 debate on truth with Austin, Strawson taught us a lesson almost as valuable as Hume and Kant’s. His lesson about truth was that, whatever facts might be, they are not to be found in the world. If facts are not things in the world then where are they? Most subsequent analytic philosophers have decided that there could
7 be no other place for them, so the best thing to do with facts is to reject them completely. Facts, qua truth-makers, are not needed. The term ‘fact’ is still retained, however. Now facts are simply taken to be nothing more than true statements. Truth can’t be a matter of statements being made true by facts (and there most certainly is no truth-making fact to which a true statement could correspond – whatever correspondence might be). Instead, truth is a matter of coherence with other truths, or it is a matter of speakers’ intentions or beliefs, or it is a matter of social solidarity, or it is a primitive notion requiring no analysis, or it is a device of disquotation, or … . For the commonsense realist, by contrast, facts (as truth-makers) are the objective correlates of true propositions. If Strawson’s injunction against facts being in the world is correct, then such a realist must have an answer to the question left open: If facts are not things in the world then where are they? The answer is simply that truth-making facts are constitutive properties of the world. The presence of goats in the world is a fact. It is the fact that makes it true that there are goats. The absence of ghosts from the world is a fact. It is a fact that makes it true that there are no ghosts. Note that there are no facts that are falsity-makers (no “false facts”). To say that a proposition is false, or that the statement used to make it is false, is simply to say that no fact makes it true. Like existence and truth, reality is another notion that must be handled with care. Russell once said that there is only one world, the real world. And this is right as far as it goes. The world we inhabit is the real world (no matter what Plato or David Lewis might say). Other worlds are unreal (e.g., the possible world in which there are no politicians, fictitious worlds like Wonderland, or intensional abstract worlds like the world of pure mathematics). Of course, any of these worlds might be treated as an individual. When Leibniz said that the actual world was chosen by God from an infinity of possible worlds, he spoke relative to a domain consisting of the totality of possible worlds, claiming that one of its constituents was the God-chosen actual world. However, generally speaking the actual world is not treated as an individual (among other worlds); it is a domain of discourse relative to which statements are made. The constituents of the real world are themselves real. Indeed, every constituent of the real world is real; none is unreal. Constituents of unreal worlds are, for the most part, unreal. But some constituents of unreal worlds are nonetheless real. The Tower of London is a constituent of the real world and is also a constituent of the world of Sherlock Holmes. Kripke is a constituent of a possible world in
which he plays hockey on weekends. Nonetheless, he is not thereby just a possible individual. He is a constituent of the real world; he is real (he just possibly plays hockey on weekends). The properties of real things are real. A property that belongs to nothing real is not a real property. For example, if the world is not perfect and if nothing in it is perfect, then perfection may well be a property, but it is not a real one. (In this regard, it’s always good to keep in mind the crucial distinction between properties and concepts. Our concept of perfection is in perfect order even if perfection is not a real property, just as we have our concept of Sherlock Holmes even though Holmes is not real.)

In summary, ontological independence is a sufficient but not a necessary condition for being real. Some individuals (e.g., the play Hamlet) are dependent on real individuals (Shakespeare) and are themselves real; others are dependent on real individuals but are themselves unreal (the fictitious prince Hamlet). All the constituents of the real world are real individuals, and their properties are real. Some pairs of worlds share some constituents (e.g., the Tower of London is in the real world and in the world of Sherlock Holmes). All worlds other than the real one are dependent (in various distinct ways) on the real world.

So, there are at least two ways of doing ontology. One way is to concentrate on what there is, to try to say, in a systematic and principled way, something about what the existence or reality of such things might be, to try to say how they are organized relative to one another. How is the ontologically dependent related to the independent, how are individuals related to universals, how do individuals constitute natural or un-natural kinds, how are parts related to wholes, how do things instantiate their properties, how are properties related to universals or kinds, how are kinds or categories related to one another? And most importantly, which, if any, of these are ontologically fundamental? The second way is to take the real world as ontologically fundamental. Concentration on things in the world need not blind the ontologist to the world itself. The second way gives the world its ontological due.

After engaging with ontological questions early on, analytic philosophers came to generally take the view that such questions were either pointless (pseudo-questions) or had already found an answer in the close analysis of language (regimented and supervised by the dictates of the new logic). How did this happen? The final years of the twentieth century saw a number of claims concerning the end of analysis and the beginning of “post-analytic” philosophy. There were perhaps a number of reasons for this. Most of the
truly great builders and developers of analytic philosophy are now gone. Postmodernism has attracted many of the more enthusiastic (if not more talented) younger philosophers. Economic and political pressures within the academy have put philosophers in general, and analysts in particular, in ever more precarious positions. As well, much of the negative attitude many have toward analysis is due to a number of sources within the analytic fold itself. We are now well into the next century, and still there are philosophers who continue to practice philosophy in a decidedly analytic way. I am one of them. Still, given the recent downturn in the prestige enjoyed by analysis, those of us who continue to do it need to ask ourselves how we got here. I don’t have all the answers required, but I do think at least one historical line of development can offer some insight. This book is intended as an example of how one might continue to go about the business of philosophy in a generally analytic way without following the historical path that led at least some to doubt the value of analysis as an appropriate way to approach philosophical problems.

There is little doubt that what made analytic philosophy what it has been for more than a century was the formulation of predicate logic by Frege and the exploitation of that logic for philosophical purposes by subsequent philosophers, most especially Russell, Wittgenstein, Carnap and Quine. So-called Continental philosophy drifted farther and farther from Anglo-American analytic philosophy by generally ignoring the new logic. That such a split would take place was far from obvious back in the late nineteenth century when Frege and Husserl were still on speaking terms. But by the beginning of the twentieth century the two lines (Husserl, phenomenology, existentialism, deconstructionism, postmodernism, etc., on the one hand, and Frege, Russell, Tractatus, logical atomism, logical positivism, Oxford analysis, etc., on the other) had already begun to grow apart. It is now often said that the differences here are a matter of differences in the kinds of issues seen as central to philosophy. Continental philosophers seem most interested in issues in ethics, social and political theory, literary theory, sociology and psychology; analysts are seen to be more interested in epistemology, natural science and mathematics. However, it seems to me that the salient differences between the two groups are methodological. The central role played by formal logic (especially the new mathematical logic) drove analytic philosophers to demand that the standards of rigor found in mathematics and science be applied to philosophy as well. This was a
good thing (and one of the reasons I was first attracted to the analytic way of doing philosophy). It must be remembered, of course, that the new logic was built initially to serve mathematical needs (especially to serve as the foundation of mathematics). By the time Gödel proved that no sufficiently powerful, consistent formal system could be complete, the mathematical credentials of the new logic were becoming suspect. Still, the new logic had a valuable role to play. Russell showed in his theory of descriptions how it could be used as a powerful tool for solving philosophical problems. And in spite of a mid-century Oxford backlash, analytic philosophers have been fairly unanimous in their view that logical analysis, whether rigorously formal or not, is the heart of any method for dealing with philosophical problems.

As it happens, the kind of philosophical problem Russell had aimed to solve with his theory of descriptions was an ontological one. Problems of ontology involve issues like what there is, what kinds or categories of things there are, how things are related, the status of properties, universals, ideas, etc. The problems are as old as philosophy itself, and Plato and, especially, Aristotle addressed them systematically and vigorously. Long before Descartes hijacked the term, “First Philosophy” was a matter of metaphysics, viz., ontology. No one completely devoid of an interest in the problems of ontology can be called a philosopher.

When Boole began to formulate his logical algebra, logic had tended to be viewed by philosophers such as Mill as intimately tied to epistemology. It might have been termed “formal epistemology.” After Boole, logic was seen as a tool for exploring the more abstract and general features of the real world (as logicians such as Russell, Tarski, and Quine saw it), a matter of ontology. Indeed, Russell wrote that “logic is concerned with the real world just as truly as zoology, though with its more abstract and general features” (Russell 1919, p. 169). In Logical Investigations, published at the very beginning of the twentieth century, Husserl coined the term “formal ontology.” Ontology could be carried out by discerning the formal structures lying behind our perceptions and thoughts about the world. Russell, and soon Wittgenstein, were also addressing central ontological issues. The implicit claim being made by the early analysts was that ontology was a matter of discerning certain hidden formal features (structures) of language. And how might one go about revealing the structures that mirror the world? Logic, naturally. The assumption was that formal logic reveals the innate structures of the world. It wasn’t long
11 before Russell was saying that philosophy was little more than formal logic. In the Tractatus Wittgenstein spelled out more vividly than anyone else just how formal logic determines not only the logical structure of (descriptive) language but the “scaffolding of the world” itself. Formal ontology turns out to be formal logic. The early analytic ontologists presumed that a formal ontology must rest on a formal logic. But they also assumed that that formal logic must be the new predicate logic initiated by Frege. In their enthusiastic rush to adopt Frege’s logic as their Organon, their dismissal of the older logic of terms initiated by Aristotle was facile, unfair, and precipitous. There are many sound reasons for considering some strengthened version of the old logic as a viable tool for philosophical analysis (see especially, Sommers and Englebretsen 2000, Englebretsen 1996, and Oderberg 2005). Nonetheless, the assumption that the ontological structure must be mirrored by any formal logic has yet to be established. It is a good bet that some formal, structural features of natural language will reveal something about ontology, about what and how things are. And, of course, when one looks for formal features of language the first and most obvious thing that strikes one is the fact that language has (perhaps hidden) a logical structure. Logical structure is a matter of syntax, how terms and more complex expressions are related to one another independently of what they might mean. What else could count as a structural account of language other than syntax, logic? While syntax deals with how terms and expressions are related to one another, semantics deals with how terms and complexes of expressions are related to things, to extra-linguistic items. If the ontologist is looking for formal features of language that might reveal ontological structure, then if there is a “formal semantics,” it would surely offer better prospects than a formal logic (in the sense of a formal syntax). This may be especially true if it happens that the kind of formal logic meant to be a guide to ontology is one built primarily for mathematical purposes. It is semantics, not syntax, that aims to lay bare the language-world relation. But what is a formal semantics? As it happens, there have been a large number of philosophers engaged in formal semantics for a very long time – and they are still at it. Aristotle outlined a formal ontology based on his formal semantics, Ryle made insightful suggestions about the general semantic constraints on natural language, and Fred Sommers is still in the process of building a rich and powerful formal ontology resting on work he did decades ago on formal semantics.
When analysts adopted modern predicate logic, they rejected the traditional term logic authored by Aristotle. Unfortunately, they also tended to reject all the rest of Aristotle (viz., what had not already been rejected by the early modern philosophers starting with Descartes and Locke). In particular, this meant that they rejected Aristotle’s insights about sense structure and ontology. In the essay that follows I intend to retrieve these Aristotelian insights, showing how they have been coupled with Ryle’s semantic ideas and used by Sommers to build a truly formal ontology. Along the way I will show how these theories have been profitably exploited by philosophers and others, and how much more there is yet to be revealed.
II
TERMS AND THINGS

To every word there corresponds an object. (“Jedem Wort entspricht ein Gegenstand.”)
Wilhelm von Humboldt

A word is dead when it is said.
Emily Dickinson
A Classes, Categories and Types

[Natural kinds] are parted off from one another by an unfathomable chasm, instead of a mere ordinary ditch with a visible bottom.
Mill

So we are in the dark about the nature of philosophical problems and methods if we are in the dark about types and categories.
Ryle

Type theory is no more than formal ontology. It does not tell us what sorts of things there are, only how we may coherently say they are there.
Sommers
Socrates had a large number of characteristics. He was Greek, wise, snub-nosed, and so on. We could characterize Socrates by saying of him
that he was Greek, wise, … . We could just as easily characterize him as French, or a sailor, or red-haired. Of course, he didn’t really have these latter characteristics. They are characteristics of others, but not of him. We speak truly whenever we characterize a thing as having a characteristic that it has; otherwise we speak falsely. A thing can be characterized as having characteristics that it does not have. Since anything can be characterized either truly or falsely, the characteristics it has are vastly exceeded by the ways it can be characterized. In sum, having a given characteristic is not the same as being characterized in a given way. Being wise is a characteristic of Socrates; being said to be French is a characterization of Socrates. Socrates might well have characteristics that no one has characterized him as having. It can be confusing to talk about having a characteristic and being characterized as having a characteristic. So let’s reserve the term ‘property’ for the characteristics that a thing has. We can use ‘characterization’ for ways in which it is said to have a property (whether it in fact has it or not). Truth is a matter of somehow having our words correspond to the ways things are – to things having the properties they have (our characterizations of the thing match the characteristics of that thing). Socrates had the property of wisdom and we say ‘Socrates was wise’. The relations that hold among words, things, and truth are subtle, complex and extremely important. Nevertheless, there are relations between words and things that hold quite independently of truth, of the way things are. Such relations, instead, are a matter of what we say rather than of what is – characterizations rather than properties. Needless to say, clarity about these relations is a prerequisite for getting things straight about truth.

We can make a start at this by saying something about the word ‘word’. Linguists and anthropologists are keenly interested in words, but the logically astute philosopher looks deeper. Propositions are expressed by sentences (or subsentential clauses), which in turn are made up of terms (here we need to see logic underneath grammar, morphology, etc.). Terms may be simple or complex. Moreover, a term might be formed by the use of a single word or a complex of words, depending on the natural language (English, French, Greek, etc.) being used. In English, ‘red’, ‘Greek’, ‘wise’, are terms; as well, ‘taught Plato’ and ‘in the agora’ are terms. Terms are what the medieval scholastic philosophers called categoremata. These were contrasted (from a logical point of view) with syncategoremata, which are words or word complexes but not terms. Categorical terms can be used in characterizations; syncategorematic
words (usually called “logical expressions” or “formatives”) are used to turn terms into more complex terms – including sentences. English examples of formatives are ‘and’, ‘or’, ‘only if’, ‘if … then’, ‘every’, ‘not’, ‘non’, ‘is’, ‘isn’t’. Philosophers often say that categorical terms have a meaning “on their own” while logical expressions do not. Obviously, to say anything at all requires the use of both terms and logical expressions. (For a thorough discussion of the logic of terms see Sommers 1982, Englebretsen 1996, and Sommers and Englebretsen 2000.)

When a term is used to characterize a thing it is said to be predicated of that thing (and is thus called a “predicate term”). In ‘Socrates is wise’ the term ‘wise’ is predicated of Socrates. ‘French’ is predicated of Socrates in ‘Socrates is French’. Any term is a predicable term; it can be used to characterize something; it can be predicated. Terms always come in pairs. One member is positive while the other is negative (being positive or negative is known as the term’s charge). English examples of such pairs are ‘red’/‘non-red’, ‘wise’/‘unwise’, ‘in the Agora’/‘not in the Agora’, ‘massive’/‘massless’, ‘even’/‘uneven’, ‘hopeful’/‘hopeless’. Any particular thing, such as Socrates or the number 2, can have a property expressed by one member of any such term-pair, or by no member of such a pair, but (and this is dictated by logic and commonsense) not by both members of such a pair. Socrates can be married or unmarried, but not both. In fact, he had the former property. By contrast, Socrates is neither even nor uneven, while the number 2 is even but neither married nor unmarried. But, as I said above, our initial interest here is not in terms and properties but in terms and characterizations. It is important now to notice that from this point of view all that matters is whether some member of a given term-pair or no member of that pair can be predicated of a given thing. Any thing that can be said to be P can just as well be said to be nonP. Both members of the term pair ‘P’/‘nonP’ are predicable of all the same things (though it goes without saying that they are never both true of all the same things). If neither member of a term-pair can be used to characterize (truly or falsely) a thing, then both members are impredicable of that thing. Thus, both ‘even’ and ‘uneven’ are impredicable of Socrates and both ‘married’ and ‘unmarried’ are impredicable of 2. From our present point of view, the charge on any term is immaterial (though it certainly matters when it comes to truth). We can think, for now, of terms, whether positive or negative, as absolute (as in the absolute value of a number), ignoring their charges and writing them accordingly. Thus ‘*P*’ would be a term whose charge is ignored; it is either ‘P’ or ‘nonP’.
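To make the charge notation concrete, here is a minimal sketch of my own (not from the text); the Python class and the example terms are invented purely for illustration of charged term-pairs and the absolute term ‘*P*’:

```python
# Illustration only (mine, not the author's): charged term pairs and the
# absolute term *P*, which abstracts away from a term's charge.
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    root: str               # e.g. "married"
    positive: bool = True   # the term's charge

    def negation(self) -> "Term":
        # 'married' and 'unmarried' (non-married) form a charged pair
        return Term(self.root, not self.positive)

    def absolute(self) -> str:
        # the absolute term ignores the charge: *married*
        return f"*{self.root}*"

    def __str__(self) -> str:
        return self.root if self.positive else f"non-{self.root}"

married = Term("married")
print(married, "/", married.negation())                      # married / non-married
print(married.absolute() == married.negation().absolute())   # True: one absolute term per pair
```

The point the sketch makes is only the one just stated in prose: a term and its negation differ in charge but share a single absolute form.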
Given any absolute term and any thing, that term either spans that thing or it does not (see Sommers 1963). A term spans a thing just in case it can be used to characterize that thing whatever the term’s charge happens to be. So ‘*married*’ spans Socrates but not 2; ‘*even*’ spans 2 but not Socrates. Spanning is a relation between terms and things that has nothing to do with the actual properties of things, with the ways things are, with truth. Spanning is a matter of sense (not truth). A term spans a thing if it “makes sense” to predicate the term of that thing. A term fails to span a thing if predicating it of the thing makes no sense (nonsense). ‘The number 2 is married’ and ‘The number 2 is unmarried’ are senseless – 2 is not spanned by ‘*married*’. On the other hand, since 2 is prime, it follows that 2 is *prime* (and, of course, so is 4).

All the things that share a given property can be said to form a class relative to that property. The class of husbands, for example, consists of all the men who have the property of being married. All the things that can be truly said to be red form the class of red things. It is an elementary principle that, given any two classes, one and only one of the following can hold: (i) both classes have all the same members; (ii) one class is completely contained in the other (all the members of one are members of the other); (iii) each is partially contained in the other (they share some – but not all – members in common); (iv) neither contains any part of the other (they share no members in common). For example, the class of men has the same membership as the class of male, adult humans. We say that the two classes are equivalent; whatever is a member of one is a member of the other. The class of husbands is completely contained in the class of men. The first class is said to be properly contained in the second; whatever is a husband is also a man, but not all men are in the class of husbands. The two classes men and parents share some – but not all – of their members. They are said to overlap or intersect; they have at least one member in common, but neither class is included in the other. Finally, the class of men and the class of women share no members in common. The two classes are said to be mutually exclusive. So, given any two classes, either they are equivalent, or one is properly included in the other, or they intersect, or they exclude one another. Notice that when two classes are equivalent we could say that each includes the other (in such cases the inclusion is said to be non-proper). Such are the rudiments of a theory of classes. They are relatively simple and obvious, and they have been understood and well-explored by mathematicians and logicians for a very long time.
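The four possibilities can be checked mechanically. The following short sketch is my own illustration, with small toy sets standing in for the classes just discussed:

```python
# Illustration only (mine): the four possible relations between two classes,
# with small finite sets standing in for the classes in the text.
def relation(a: set, b: set) -> str:
    if a == b:
        return "equivalent"
    if a < b or b < a:                 # proper inclusion one way or the other
        return "one is properly included in the other"
    if a & b:                          # some, but not all, members shared
        return "they overlap (merely intersect)"
    return "mutually exclusive"

men      = {"Socrates", "Obama", "Harvey"}
husbands = {"Socrates", "Obama"}
parents  = {"Obama", "Xanthippe"}
women    = {"Xanthippe", "Carlee"}

print(relation(men, {"Socrates", "Obama", "Harvey"}))   # equivalent
print(relation(husbands, men))                          # one is properly included in the other
print(relation(men, parents))                           # they overlap (merely intersect)
print(relation(men, women))                             # mutually exclusive
```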
Consider now the class of classes, the class that has every class as a member (we ignore here the paradoxes that appear to arise from this notion). There is a subclass of this, that is, a class of classes that is properly included in it. This subclass is of special importance for metaphysics. The classes that form this special subclass are called categories. They differ from classes in general in one very important way: given a pair of them, either one or both includes the other, or they are mutually exclusive, but they never merely intersect (if they have any member in common, then at least one of them is included in the other). And there’s more. Among categories there is a special subclass, one properly included in the class of all categories (which, again, is itself properly included in the class of all classes). These special kinds of categories are called types. Types differ from categories in general in one very important way: given a pair of them, neither ever properly includes the other. Given any pair of types, either they are equivalent or they are mutually exclusive. In summary then, categories are classes that can never merely intersect with one another (but they can be either equivalent or one can properly include the other); types are categories that can never properly include one another.

We arrive at classes by considering the things that share a given property. How do we arrive at categories (and, subsequently, types)? We could say that classes are determined by paying attention to terms and the things they are true of. When it comes to categories, what we need to pay attention to are terms and what they span. Just as a class can be defined in terms of a term and what it is true of, a category can be defined in terms of a term (even just an absolute term) and what it spans. Let us simply say that a category is just the class of all things spanned by a given term. Thus, while lots of men, including Socrates and Obama, are in the class of husbands, many more men (indeed, all men) are in the category of *husband* (i.e., things that can be sensibly said to be husbands – including bachelors). The category of *husband*, the category determined by the term ‘husband’, consists of all the things that are either husbands or nonhusbands (which would include the Pope, my bachelor friend Harvey, etc., but not the number 2 or the apple on my desk). Classes are a matter of truth, what the properties of things are; categories are a matter of sense, how things can be sensibly characterized – whether truly or falsely.
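As a rough illustration of these constraints, and assuming the simple identification of a category with the class of all things a term spans, the defining conditions on categories and types can be written as tests on pairs of classes; the sets below are invented examples of my own, not the author's:

```python
# Illustration only (mine): categories never merely intersect; types, in
# addition, never properly include one another.
def merely_intersect(a: set, b: set) -> bool:
    return bool(a & b) and not (a <= b or b <= a)

def admissible_as_categories(a: set, b: set) -> bool:
    # any shared member forces inclusion one way or the other
    return not merely_intersect(a, b)

def admissible_as_types(a: set, b: set) -> bool:
    # either equivalent or mutually exclusive
    return a == b or not (a & b)

# toy "spans": *married* spans persons (husbands and bachelors alike);
# *even* spans numbers.
married_span = {"Socrates", "Obama", "Harvey", "the Pope"}
even_span    = {2, 4, 7}
men, parents = {"Socrates", "Obama"}, {"Obama", "Xanthippe"}

print(admissible_as_categories(married_span, even_span))  # True (mutually exclusive)
print(admissible_as_types(married_span, even_span))       # True
print(admissible_as_categories(men, parents))             # False: ordinary classes may merely intersect
```

On this toy reading the two spans pass both tests, while the ordinary classes of men and parents fail the category test precisely because they merely intersect.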
B Trees

The structure of language, unlike that of music, thus becomes a mirror of the structure of the world as presented to the intelligence. Grammar, philosophically studied, is akin to the deepest metaphysics.
Santayana

Thus it should not be too surprising to find that the logical structure that is necessary for natural language to be used for a tool of reasoning should correspond in some deep way to the grammatical structure of natural language.
G. Lakoff

Language may be a distorting mirror but it is the only mirror we have.
Dummett
1 Aristotle and Ryle

What exactly is meant by the word “category”, whether in Aristotle or in Kant or Hegel, I must confess that I have never been able to understand. I do not myself believe that the term “category” is in any way useful to philosophy, as representing any clear idea.
Russell
The notions of spanning, categories and types have only been hinted at so far. They are due to the work of Fred Sommers, which began in the late 1950s. His interest at that time was in accounting for the restraints we implicitly recognize in our thought and speech in order to avoid nonsense – the kind of thing Gilbert Ryle called “category mistakes.” An important result of these efforts turned out to be a theory about the relationship between language and ontology, between terms and things. More particularly, language has an overall structure, determined by the relations
that hold among the senses of terms, and (via the spanning relation) it can be determined that the ontology has that same structure, determined by the relations that hold among the categories of things. Language and ontology are isomorphic. In his search for the hidden rules that account for our avoidance of category mistakes (nonsense) and the ways in which terms and things are connected so that the language/ontology structure is generated, Sommers had two important predecessors. Aristotle had provided a theory concerning the predicability relation, which, if properly understood, could be used to show how language has a hierarchical structure. Ryle had provided a test, a rule, which, he believed, we implicitly follow when avoiding category mistakes. Though in fact Sommers began with an investigation of Ryle’s rule, it is best for our own purposes to begin with Aristotle (good advice for any philosopher or logician any time).

An important feature of Aristotle’s syllogistic logic, as laid out in Prior Analytics, is what Peter Geach has called “Aristotle’s thesis of interchangeability” (Geach 1972, 47). The thesis holds that the sentences that enter into a syllogism as premises or conclusion are copulated pairs of terms, and, importantly, any term can appear in either of the two term-positions in a sentence. Indeed, in any valid syllogism it will always be the case that at least one term will change its position (e.g., the middle term in a first figure syllogism). As it happens, Geach thought that the interchangeability thesis was an unmitigated disaster for the development of formal logic, a disaster comparable to Adam’s Fall, and one that awaited Frege to put things right. There are, however, good reasons not to follow Geach here (for my response to Geach’s view see Englebretsen 1981 and 1985). Now some categorical sentences (viz., particular affirmations and universal negations) are (simply) convertible; the rest are not. This means that while the interchangeability thesis guarantees that the two terms of any categorical sentence can exchange positions, only in those that are convertible will truth be preserved. Consider, for example, the true statement made by ‘Every bachelor is a man’. Term interchange yields the sensible, but false, statement ‘Every man is a bachelor’. By contrast, in the case of a convertible sentence, both sense and truth are preserved, as in the pair ‘Some logicians are singers’ and ‘Some singers are logicians’. But now consider the sentence ‘Some men are old’. Its valid conversion is ‘Some old (things) are men’. Again, both sense and truth are preserved in such a conversion, but, even so, why are we much more likely to use the
first rather than the second of these logically equivalent sentences? Aristotle raised this very question in Posterior Analytics, 81b10-41. Aristotle held that the logical, and especially the semantic, relations among terms were identical to, or at least revealed, the relations that hold among things and their attributes (or properties). At 81b23-24, after a brief review of syllogistic inference, he wrote: “If, however one is aiming at truth, one must be guided by the real connexions of subjects and attributes.” Sentences in which predication reflects such “real connexions” are “natural,” while sentences which do not are unnatural (“accidental”). The predication (in effect, the term order) in ‘Some men are old’ is natural. It is merely accidental in ‘Some old (things) are men’. While such term-pairs as ‘men’ and ‘old’ in such sentences are syntactically symmetric (due to interchangeability) and logically symmetric (due to convertibility), they are semantically asymmetric. And this asymmetry is due to an ontological asymmetry, an asymmetry to be found in the “real connexions” of subjects and attributes. For Aristotle, any pair of terms satisfies interchangeability. However, only some pairs are “reciprocally” predicable (e.g., ‘logician’ and ‘singer’); in such cases either ordering in the predication would be natural (82a15-20; cf. 81b25-29, 83a14-21, and Prior Analytics 43a33-36). Other pairs (e.g., ‘men’ and ‘old’) are naturally predicable only in one direction; predication in the other direction would be accidental, unnatural. Finally, he noted that some term pairs are simply impredicable altogether. For example, any predication tying ‘married’ and ‘number’ would be senseless. I could add here that, given Aristotle’s account of primary substances as the ultimate subjects of predication, he would have agreed that in any predication between a singular term (e.g., ‘Socrates’) and a general term (‘wise’), only the predication of the general term to the singular would be natural. In any case, it must always be remembered that, from a purely logical point of view, even accidental predications are in good logical order. If the natural ‘Socrates is wise’ is true, then so is the unnatural ‘Some wise (thing) is Socrates’. Aristotle’s recognition that for most term-pairs, on semantic grounds alone, predication is either asymmetric, natural in only one direction (‘men’/‘old’), or impossible, senseless (‘married’/‘number’), suggests that he took the terms of a language to exhibit a determined structure. Furthermore, his belief that the semantic relations between pairs of terms reflect the “real connexion” between things and attributes suggests that he took the semantic structure of language to match the ontological structure
determined by these “connexions.” The question then naturally arises: What is the nature of this linguistic/ontological structure? Aristotle’s answer is that the structure is a hierarchical tree. The principle that determines this structure is a rule that governs term predicability. Sommers (Sommers 2005, 3) has formulated Aristotle’s rule succinctly as follows:

If B and C are mutually impredicable, and A is predicable of both, then A is naturally predicable of B and C and B and C are not naturally predicable of A.

We could call this Aristotle’s Tree Rule. Here is how it is used to grow a tree. Think of a term that is merely accidentally predicable of a second term as “lower” than that term. “Higher” terms are naturally predicable of more terms than are terms lower relative to them. For example, ‘old’ is higher than ‘men’ since predication here only goes naturally in one direction, and ‘old’ is naturally (perhaps even mutually, “reciprocally”) predicable of terms that ‘men’ is not predicable of at all (e.g., ‘theory’). From this point of view, natural predication only occurs when the subject-term of a sentence is lower than the predicate-term. The lowest terms, then, will be singular. We can display a tree generated by Aristotle’s Tree Rule by simply writing higher terms above lower terms and drawing straight line segments only between predicable pairs. In so doing, we make use of a corollary of Aristotle’s rule – call it Aristotle’s Transitivity Corollary: Any term naturally predicable of a second term is naturally predicable of any term that second term is naturally predicable of. According to this corollary, since ‘old’ is naturally predicable of both ‘men’ and ‘theory’ it is also naturally predicable of ‘logicians’ and ‘Darwinism’. This is because ‘men’ is (mutually) predicable of ‘logicians’ and ‘theory’ is (naturally) predicable of ‘Darwinism’. Using Aristotle’s rule and corollary we can generate the following tree:
interesting
├── old
│   ├── men
│   │   └── logician
│   │       ├── Aristotle
│   │       └── Frege
│   └── theory
│       ├── Relativity
│       └── Darwinism
└── even
    ├── chances
    └── prime number
        ├── 2
        └── 4
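For readers who find a mechanical rendering helpful, the tree just displayed can be encoded quite directly. The following short Python sketch is my own illustration (nothing in Aristotle or Sommers); it records each term’s parent on the tree and treats natural predicability as the relation of sitting strictly higher, so that the Transitivity Corollary is simply the transitivity of the ancestor relation:

PARENT = {
    "old": "interesting", "even": "interesting",
    "men": "old", "theory": "old",
    "chances": "even", "prime number": "even",
    "logician": "men", "Relativity": "theory", "Darwinism": "theory",
    "Aristotle": "logician", "Frege": "logician",
    "2": "prime number", "4": "prime number",
}

def naturally_predicable(pred, subj):
    """True just in case `pred` lies strictly above `subj` on the tree."""
    while subj in PARENT:
        subj = PARENT[subj]
        if subj == pred:
            return True
    return False

assert naturally_predicable("old", "men") and naturally_predicable("old", "theory")
assert naturally_predicable("old", "logician")         # by the Transitivity Corollary
assert not naturally_predicable("men", "old")          # that direction is merely accidental
assert not naturally_predicable("even", "logician")    # different branches: senseless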
Such trees are finite hierarchies just as Aristotle had claimed (83b25-32), with terms like ‘interesting’ at the apex, since such terms are naturally predicable of any other term. Note also that terms that are found on different branches cannot be predicated sensibly of one another at all – such predications are nonsense (e.g., ‘2’ and ‘men’ or ‘even’ and ‘logician’). In summary, Aristotle held that in spite of the logical interchangeability of terms, and even the convertibility of some term-pairs in certain kinds of categorical sentences, we nonetheless tend to find some predications of such term-pairs unnatural (and others even senseless). This suggests that natural predication among terms is hierarchical (a “higher” term, A, naturally predicable of a “lower” term, B, which is predicable of A only accidentally, unnaturally). As well, Aristotle was convinced that the distinction we seem to make intuitively between natural and unnatural predications is merely a reflection of the “real connexions” that hold between subjects and their attributes (i.e., between things and their characteristics, their properties). Thus, a finite hierarchical tree, generated by the principles that govern the predicability relations among terms, can be constructed for a natural language, a tree that at the same time reveals the structure determined by the connections between things and their properties. Language and ontology appear to be isomorphic when viewed by Aristotle’s lights. In “Categories” (Ryle 1959) Ryle explored the idea that both terms and things seem to divide naturally into distinct “categories.” It was Aristotle, of course, who originally tried to spell out this idea. Aristotle began his own Categories by distinguishing two kinds of relations that can hold among “things that are” – being said of a subject and being in a subject. Since something either might or might not be said of a given thing, and it either might or might not be in a given thing, the result was
the fourfold classification known as Aristotle’s “ontological square.” The relation of being said of is fairly clearly meant to be the relation of predication. The relation of being in is not so clear, but Aristotle does tell us that he does not mean in as part of. What he means is conveyed by the examples he gives of each of the four kinds generated by his two kinds of relations. For example, man is said of a subject (Socrates) but is not in any subject; white is both said of and in a subject (this wall); the-white-of-this-wall is in, but not said of, this wall (and it is not in any other thing of which white is said). Likewise, Socrates’-knowledge-of-grammar is in, but not said of, Socrates. Finally, Socrates is neither said of nor in any other subject. Just what is the ontological status of these four kinds of things? Some are easier to discern than others. Things that are neither said of nor in anything else are obviously intended to be individual things (what Aristotle calls “primary substances”), things that are in the “most basic sense” such that “if they did not exist nothing else would” (Categories 2b5-6). So the relation of predication (being said of) and the relation of being in are dependency relations: whatever stands in one or both of these relations to something else is ontologically dependent upon that thing. Thus man depends upon men (e.g., Socrates, Plato, Harvey). Consequently, Aristotle calls things that are said of individual subjects (primary substances) “secondary substances” (much later they tended to be called “kinds” or “essences”). Likewise, what is in but not said of a subject is dependent upon it. Socrates’ knowledge of grammar is in him (in his soul according to Aristotle), belongs to him, and could not be in any other subject; nor could it be without Socrates – it depends on him in order for it to be at all. Things like Socrates’ knowledge of grammar and the white of this wall, often referred to as “property instances” or “tropes” by later philosophers, enjoyed very little attention from Aristotle after Categories. Finally, in addition to individuals (primary substances), kinds (secondary substances), and tropes, there are those things that are both said of and in subjects. For example, knowledge of grammar is said of Socrates (we say, for example, ‘Socrates knows grammar’) and it is in Socrates (Socrates’ particular knowledge of grammar is unique to him – it is in him). These things are often called “accidents.” Accidents, things both said of and in subjects, are, therefore, doubly dependent. They have being only by virtue of some instance of them (a trope) being in a subject, and that instance in turn has being only by virtue of the subject itself – the primary substance.
Notice that individuals and tropes are singular in some sense. Kinds and accidents are general – they can be said of many things (or at least more than just one thing). As well, kinds and accidents can be predicated, while individuals and tropes cannot. We can think of kinds and accidents as types of predicates, and the Greek word for predicates is kategoria (categories). Aristotle’s main interest in Categories is in things that can be said of a subject. Consequently, he offers his tenfold division of the categories: the category of (secondary) substance, and the other categories gleaned from the various sorts of accidents (quality, quantity, relation, etc.). Thus far I’ve been fairly promiscuous in my use of the word ‘thing’. Just what ontological status do things that belong to kinds or accidents have? Aristotle had contrasted things that are (Categories 1a20) with things that are said (1a16). What are said are surely terms and combinations of terms. Secondary substances (kinds) and accidents were clearly meant to be among the things that are. Yet from one point of view they seem to be linguistic items – terms, predicates. After all, predication is a matter of logic or grammar, a linguistic rather than an ontological matter. So man and white are merely the terms ‘man’ and ‘white’. From a different point of view they are genuinely ontic rather than merely linguistic. They are types of universals or properties. Aristotle seems to have wanted it both ways. And this is not so surprising given that, in general, he saw what was at least a close similarity (even an isomorphism, if not identity) between language and ontology. (For a brief survey of some of the ways the ontological status of things that are said of was dealt with among the Scholastics see Englebretsen 1996, ch. 1.) It was the question of whether predicates, categories, were taken by Aristotle to be terms, universals, or both that Ryle raised in his own “Categories.” Ryle coined the term “category mistake” in The Concept of Mind (Ryle 1949). He attempted to elucidate what he meant by the term with a series of well-known examples (16-18). What they all have in common is that the speaker who has committed a category mistake has taken something that belongs to one category as belonging to a different category. He went on then to show that Cartesian dualism amounts to one big category mistake – or a series of smaller mistakes. However, Ryle failed to be adequately clear about just what he took categories (and their members) to be. He seems to have offered at least three different accounts. In saying what he means by “myth” (since he intended to go on to show that the Cartesian theory is a myth – “Descartes’ Myth”) he wrote that a
myth “is a presentation of facts belonging to one category in the idiom appropriate to another” (8). Shortly after that he wrote that he intended “to indicate to what logical types the concepts under investigation ought to be allocated” (8). Still later he seemed to indicate that it is “items in the … vocabulary” (17) or “terms” (22) that constitute types or categories (he used ‘type’ and ‘category’ interchangeably). So, are categories made up of facts, concepts, or words/terms? I think that it’s safe to say that Ryle, when being most careful, would have said something like the following. Properly speaking, category mistakes are statements (sentences used to make truth-claims, say what the facts are) in which at least one term belonging to a given category is used as if it belongs to a different category. Terms, when used in sentences, express concepts. The category to which a given term properly belongs is determined by the concept it is ordinarily used to express. Thus, what we say reveals how we think; our terms reveal our concepts. As he said, category “mistakes were made by people who did not know how to wield the concepts …” (17). In 1949, then, Ryle seems to have believed that categories consisted of concepts (which some philosophers, such as Frege, had taken to be the senses of terms). A decade later, when he wrote “Categories,” he appears to have taken term categorization to be more basic than conceptual categorization. There he sought the foundation of Aristotle’s division of things that are said of (predicates) into the various categories of substance, quality, quantity, relation, etc. How did Aristotle arrive at his categories? “What,” Ryle asked, “did Aristotle think that his list of Categories was a list of?” (Ryle 1959, 65). Ryle began by showing how such a list of categories (types of predicates) could be generated. Given a particular subject, an individual such as Socrates, what are the different types of questions one could ask about it? One could ask of Socrates such questions as: What is he? How is he? Where is he? How tall is he? Whom does he teach? and so forth. “There are as many different types of predicates of Socrates as there are irreducibly different sorts of questions about him” (66). Aristotle’s categories are the “different types of predicates.” While Ryle had little interest in any Aristotelian-like finite list of types of predicates, he did want to discover just how one might go about determining whether or not any given pair of predicates belonged to the same type – to the same category. His test was formulated in terms of sentences and terms, what he called “sentence-factors” (69) –
what he had called concepts in 1949. The linguistic test, using sentences and sentence-factors, merely reflected an implicit semantico-conceptual test. A “sentence frame” is the result of removing a sentence-factor from a sentence (69). Many expressions can become sentence-factors by filling the resulting gap in the sentence frame. In some cases the result will be a grammatically correct sentence; in other cases the result will be ungrammatical. But not every case of sentence frame gap-filling that results in a grammatically correct sentence will be sensible. Some (actually most) will be absurd, senseless. Consider the sentence ‘Socrates is wise’. Removing the term (sentence-factor) ‘Socrates’ yields the sentence frame ‘… is wise’. The gap here can be grammatically filled by such terms as ‘my teacher’, ‘Harvey’, ‘the equator’, ‘2’, or ‘the Tower of London’. But only the first two substitutions would make sense (truth is not involved here); the others would be “absurd” (70 and 76), what he earlier called “category mistakes.” What the test here reveals is that terms such as ‘Socrates’ and ‘2’ are radically different with respect to the kinds of expressions (sentence frames) with which they can have sensible intercourse. Even given this Rylean test, we want to know if the examples of absurdity offered above are due to the nature of the terms used, the concepts they are used to express, or the objects they denote. Surely this kind of absurdity is not a matter of just grammar. Perhaps it “results from the improper coupling not of expressions but of what the expressions signify” (77). Ryle, being unsure, counseled prudence:

So it is, on the whole, prudent to talk logic in the semantic idiom and to formulate our theories and inquiries in such a way as to advertise all the time that we are considering whether such and such an expression may or may not be coupled in such and such ways with other expressions. (76)

So we stick to talking about terms. The question of just what categories are categories of (Ryle’s formulation: “What are types types of?” 76) hasn’t been answered. But he has at least raised the question, and, moreover, he has introduced philosophers to the question that ends “Categories”: “But what are the tests of absurdity?” (81). There is a chicken-and-egg quality about the set of issues we’ve been discussing here. It seems that, whatever categories turn out to be categories of, our knowledge that two things belong to different categories
is a matter of knowing that there is at least a pair of statements – each somehow involving one of these things – such that one statement (or the proposition it is normally used to express) is absurd and the other is not. However, our knowledge that a given statement is absurd (or not) seems to be a matter of whether or not a term (or concept? or thing?) belongs to the appropriate category. As we have seen, Ryle generally took categories to consist of linguistic expressions, terms, sentence-factors, and these, in turn, rest on the categories consisting of what they are used to express (concepts, proposition-factors). And it was in terms of such “things” that he formulated (or nearly did) the rule that inspired Sommers’ own theory of categories. In “Categories” Ryle wrote:

Two proposition-factors are of different categories or types if there are sentence-frames such that when the expressions for those factors are imported as alternative complements to the same gap-signs, the resultant sentences are significant in the one case and absurd in the other. (77-78)

As we have also seen, one can think of a sentence-frame as merely a predicate expression (à la Frege’s notion that predicate expressions, unlike names, are unsaturated, incomplete, gapped). So, in effect, Ryle was claiming that a pair of concepts, proposition-factors, are categorially distinct whenever predication of each by a common predicate is absurd in one case and not in the other (i.e., whenever there is a predicate that applies sensibly to one but not to the other). Ryle then added that “whenever a particular gap-sign is thus tolerant of typically dissimilar complements, that sign has typical ambiguity” (78). In other words, whenever two concepts are typically (i.e., categorially) distinct and yet there is a predicate that is sensibly predicable of each, then that predicate is ambiguous. Implicit in these passages is a rule that governs predicates and what they can sensibly be predicated of. It is a rule about category difference and predicate ambiguity. We can formulate it as

Ryle’s Rule for Enforcing Ambiguity
If a given predicate term applies sensibly to two things, then either they belong to the same category or the term is ambiguous.
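Read purely as a test, the rule is mechanical. The following small Python sketch is my own illustration (with an entirely hypothetical assignment of things to categories); it flags any predicate that spans things drawn from more than one category and so, by Ryle’s Rule, must be treated as ambiguous:

def ryle_flags(predicate_spans, category_of):
    """Return predicates that span things from more than one category."""
    return [p for p, things in predicate_spans.items()
            if len({category_of[t] for t in things}) > 1]

# hypothetical data for illustration only
categories = {"Socrates": "person", "the number 2": "abstract object"}
spans = {"rational": {"Socrates", "the number 2"}}

print(ryle_flags(spans, categories))    # ['rational']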
Let the predicate term be ‘P’ and the two things be designated ‘a’ and ‘b’; finally, let straight line segments indicate the relation of sensible predication. We can construct the following diagram for the kind of situation covered by Ryle’s Rule:

        ‘P’
       /    \
      a      b
The rule requires that any time such a situation holds the following must be the case: either a and b belong to the same category or ‘P’ is ambiguous. When Ryle argued in The Concept of Mind for the category mistakenness, absurdity, of Descartes’ Myth, he made use of just this rule. According to the rule, any theory that allows the situation diagramed above, with ‘P’ univocal and a and b categorially distinct, … is entirely false, and false not in detail but in principle. It is not merely an assemblage of particular mistakes. It is one big mistake and a mistake of a special kind. It is, namely, a category-mistake. (16) The ground of the Cartesian theory is the myth of the “ghost in the machine,” which … maintains that there exist both bodies and minds; that there occur physical processes and mental processes; that there are mechanical causes of corporeal movement and mental causes of corporeal movement. (22) In other words, a theory grounded on the myth of the ghost in the machine is committed to all of the following: minds and bodies are categorially distinct, but ‘exists’ is sensibly predicable of both; physical process and mental process are categorially distinct, but ‘occurs’ is sensibly predicable of both; the mechanical and the mental are categorially distinct, but ‘causes corporeal movement’ is sensibly predicable of both. Each situation is a case of the inverted-v configuration governed by Ryle’s Rule. So, unless ambiguity is forced on the predicates involved, any theory
that countenances any such situation is absurd. While a dualist might be forced to adhere to such an absurd doctrine, Ryle opted for ambiguity.

It is perfectly proper to say, in one logical tone of voice, that there exist minds and to say, in another logical tone of voice, that there exist bodies. But these expressions do not indicate two different species of existence, for ‘existence’ is not a generic word like ‘coloured’ or ‘sexed’. They indicate two different senses of ‘exist’, somewhat as ‘rising’ has different senses in ‘the tide is rising’, ‘hopes are rising’, and ‘the average age of death is rising’. (23)

Ryle’s Rule for Enforcing Ambiguity makes good sense here. It enforces ambiguity just where we expect to find it. After all, the tide and the average age of death are surely categorially distinct (we want to say “they’re different sorts of things altogether”), so we take ‘rising’ to be used in different senses when applied to each. Indeed, we can find many cases where Ryle’s rule seems to be in force. For example, ‘rational’ must be taken to be ambiguous over Socrates and the number 2; ‘true’ seems clearly to be ambiguous over sentences and propositions. Likewise, ‘well supported’ must be ambiguous over my pants and Darwin’s theory. But, is ‘exist’ really ambiguous over minds and bodies? Given Hume and Kant’s lesson, is ‘exist’ even a predicable term? For Ryle’s rule to actually be a rule that governs how we ordinarily talk about things and express our concepts, it must be universally applicable. A decade after Ryle published “Categories,” Sommers showed that it was not a rule characterizing our ordinary language. In doing so, Sommers not only formulated a new “rule for enforcing ambiguity” but offered a rich and satisfying theoretical account of the nature of categories and what constitutes them (something both Aristotle and Ryle were less than clear about), and he laid out, far more rigorously than Aristotle had, the architecture of language and ontology.
2
Tree Rules

They don’t seem to have any rules in particular.
Lewis Carroll

Rule 42
Lewis Carroll

a
Translation Rules

[T]houghts rule the world.
Emerson

What’s the French for fiddle-de-dee?
Lewis Carroll
We’ve made a start, with the help of Aristotle and Ryle, but there is still much to do. In particular, we need to get clearer about our notions of predication, predicability, terms, spanning, categories, and the like – not to mention the more knotty notions of property, existence, truth, object, and reality. Sommers’ main claim in his early work (especially Sommers 1959, 1963, 1965, 1971) was that ordinary language and ontology are structurally isomorphic. This means, first, that language has a structure and that ontology has a structure, and, second, these structures are the same. Moreover, he argued that this structure could be represented as a nonreticulating binary tree (such as the example we saw above in our discussion of Aristotle). Sommers’ first task was to account for the structure of language – the “ordinary language tree.” The next task was to build the ontological tree and establish the isomorphism of the two. Once this isomorphism is established, one could “read” the ontological structure from the language tree. The items displayed at the nodes of a language tree are the senses of the terms of the language. Terms with more than one sense have those senses located at different places on the tree. The line segments connecting tree nodes represent a specific kind of relation between pairs of term senses displayed at the nodes. Pairs of term senses (from now on simply terms) connected by a straight line segment are said to be “U-related;” pairs not so connected are said to be “N-related.” “U” here stands for “use.” Pairs of U-related terms can be used together to form sensible subject-predicate sentences. N-related pairs form only subject-predicate sentences that are not sensible (nonsense – N). Sentences formed by pairs of N-related terms are, as we will see, category mistakes. A complete understanding of a
given term would involve knowing each of its senses and all of the other senses of terms to which these senses are U-related; thus it would be a knowledge of all the ways that term could be used sensibly. “The sense of an expression will be its location with respect to other expressions, its semantic range. It is what it ‘makes sense’ with as contrasted with what it fails to make sense with” (1959, 161). When two terms, say S and P, are U-related or N-related, this can be indicated by the expressions ‘U(SP)’ and ‘N(SP)’. Since these relations are obviously symmetric, the order of the terms in such expressions is irrelevant. Two terms are said to be connected just in case they are terms in the same language. Formally, the connected conditions are:

(i) If U(XY) and U(XZ) then Y and Z are connected.
(ii) If Y and Z are connected and W and Z are connected then so are Y and W.

Two terms are part of the same language if and only if they are connected. The set of terms of a language can be defined as the largest set of terms such that each term in the set is connected to every other term in the set. A model language tree can be constructed by writing all the terms of the language such that a solid line is placed only between U-related pairs. Here is a preliminary example of a partial tree (note the similarity between this structure and the hierarchical tree we saw in our discussion of Aristotle):

interesting
├── red
│   ├── house
│   └── person
│       └── thoughtful
└── number
    └── prime

Two terms are U-related if one can go from one to the other by following a path that is either continuously upward or continuously downward. If the path between a pair of terms has both upward and downward segments, then the two terms must be N-related. For example, given our partial tree, U(red, interesting), but also U(house, interesting); N(red, number), but also N(thoughtful, prime).
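None of this requires formal machinery, but a small sketch may make the U- and N-relations vivid. The following Python fragment is my own illustration; it encodes the partial tree above as a child-to-parent table and counts two terms as U-related just in case one lies on a continuously upward path from the other:

PARENT = {
    "red": "interesting", "number": "interesting",
    "house": "red", "person": "red",
    "thoughtful": "person", "prime": "number",
}

def ancestors(term):
    """The terms lying strictly above `term` on the tree."""
    chain = []
    while term in PARENT:
        term = PARENT[term]
        chain.append(term)
    return chain

def u_related(s, p):
    return s == p or s in ancestors(p) or p in ancestors(s)

def n_related(s, p):
    return not u_related(s, p)

assert u_related("red", "interesting") and u_related("house", "interesting")
assert n_related("red", "number") and n_related("thoughtful", "prime")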
Consider this abstract tree:
A
├── B
│   ├── D
│   │   ├── H
│   │   └── J
│   └── E
└── C
    ├── F
    │   ├── K
    │   └── L
    │       ├── M
    │       └── Q
    │           ├── P
    │           └── R
    └── G
We can see that every term is U-related to A and, for example, Q and D are N-related. If there were no apex term, A, we would simply have two different languages, since no member of the set {BDEHJ} would be connected to any member of the set {CFGKLMPQR}. The ontological importance of the ordinary language tree is that it shares a common structure with the ontological tree. In “Types and Ontology” (Sommers 1963), one of Sommers’ main tasks was to establish this isomorphism. Preliminary to an understanding of this is an understanding of several elementary principles that he developed there. One of the things he was most anxious to argue against was the adoption of what he called the “transitivity” rule for sense relations. The rule is the traditional view that if two terms are such that they both “make sense” with some third term then they must make sense with each other. We could state the rule like this:

Transitivity Rule: For any three terms, P, Q, R, if U(PQ) and U(QR) then U(PR).

We will see that the Transitivity Rule is, in effect, a version of Ryle’s Rule for Enforcing Ambiguity. So what’s wrong with the rule? Sommers’ attack on the Transitivity Rule proceeded (in outline for now) as follows. He first distinguished between four kinds of types, called α-, B-, β-, and A-types. α-types turn out to be what we have called categories and β-types are simply our types. He gave semantic definitions for each in such a way that they are clearly exclusive of one another. He aimed to show that if the Transitivity Rule holds for U-relatedness, then the distinctions between these four kinds of types would be obliterated. He
gave syntactic definitions for B-types and A-types and then argued that if two terms are U-related they belong to the same B-type and if they are of the same B-type they are U-related. In contrast, two terms might be U-related yet perhaps not belong to the same A-type. However, if the transitivity of U-relatedness holds, then two terms are U-related just in case they belong to the same A-type. But A-types and B-types would then be indistinguishable and the earlier fourfold classification of types would disintegrate. So Sommers concluded that the Transitivity Rule must be rejected and then replaced, which is the task of most of the remainder of “Types and Ontology.” The complement of a term is its logical contrary. Every term has exactly one logical contrary (or, as we saw earlier, terms come in logically charged pairs – positive and negative), but a term might have any number of non-logical, simple, contraries. For example, ‘red’ and ‘blue’ are simple contraries; so are ‘red’ and ‘green’, etc. But the logical contrary of ‘red’ is ‘non-red’. The logical contrary of a term is semantically equivalent to a disjunction of all of its non-logical contraries. Thus ‘non-red’ is equivalent to ‘blue or green or white or yellow or …’. Recall now that any term and its logical contrary must span the same things, where a term spans a thing if and only if it can be predicated sensibly, not category mistakenly, of that thing. Since any term and its logical contrary span the same things, we can ignore the charge of the term (i.e., whether it or its logical contrary is involved) and consider what Sommers called the “absolute” term. As we have already seen, absolute P, *P* (as with the absolute value of a numerical expression) can be read as the exclusive disjunction of P with its logical contrary (viz., ‘either P or nonP’). For the purposes of ontology, we need only consider a language tree of absolute terms. Let us “absolutize” our earlier sample abstract tree.
*A*
├── *B*
│   ├── *D*
│   │   ├── *H*
│   │   └── *J*
│   └── *E*
└── *C*
    ├── *F*
    │   ├── *K*
    │   └── *L*
    │       ├── *M*
    │       └── *Q*
    │           ├── *P*
    │           └── *R*
    └── *G*
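Before going further, it may help to see concretely, by way of a small sketch of my own (not Sommers’), why the Transitivity Rule just rejected cannot survive on such a tree: *B* and *C* each “make sense” with the apex term *A*, yet they do not make sense with one another.

PARENT = {"B": "A", "C": "A", "D": "B", "E": "B", "F": "C", "G": "C",
          "H": "D", "J": "D", "K": "F", "L": "F",
          "M": "L", "Q": "L", "P": "Q", "R": "Q"}

def ancestors(t):
    chain = []
    while t in PARENT:
        t = PARENT[t]
        chain.append(t)
    return chain

def u_related(s, p):
    return s == p or s in ancestors(p) or p in ancestors(s)

assert u_related("B", "A") and u_related("A", "C")   # U(BA) and U(AC) ...
assert not u_related("B", "C")                       # ... and yet N(BC)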
Two terms could, of course, happen to have just the same uses (i.e., stand in all the same U- and N-relations as each other). This would mean that they have the same location on the tree; they are, as we will say, categorially synonymous. This means that we will often (in fact, virtually always for ordinary language) have several terms sharing a given node on a language tree. For example, the set of color terms, ‘red’, ‘blue’, ‘yellow’, etc., will be at the same node. By Sommers’ definition, a B-type will consist of all those terms that are connected to each other by a continuous upward or downward path (i.e., those that are mutually U-related). On the tree above we have the following B-types: {ABDH}, {ABDJ}, {ABE}, {ACFK}, {ACG}, {ACFLQP}, and {ACFLQR}. An A-type is constituted by all the terms at a given node, the set of terms that are categorially synonymous. While the A- and B-types are sets of terms, the α- and β-types (from now on just categories and types, respectively) are sets of things, individuals. A set of things constitutes a category with respect to a given term, say P (using term letters to name themselves where there is no danger of confusion), if and only if that term spans all of the things in the set and nothing outside the set. Since P spans whatever nonP spans, and vice versa, and the set spanned by *P* is the union of the sets spanned by P and nonP, we can say that a category is always determined by an absolute term. So, for every set of categorially synonymous absolute terms on the language tree (i.e., all the terms occupying the same node), there will correspond a category on the ontological tree. Since categorially synonymous terms span exactly the same things, we can say, as well, that for each node on the language tree there exists exactly one corresponding α-type, category. In other words, given a tree such as ours, we can read the expression ‘*P*’ either as ‘absolute P’ or as ‘the category with respect to P’. Thus the mapping of ontology onto language, via the spanning relation, begins. Every member of a type (β-type) is spanned by all and only all of the members of a given B-type. The B-types of a tree can be found by tracing a continuously downward path from the top node, the apex, of a language tree to each bottom node. So there will be a type corresponding to each bottom node on the tree. Bottom nodes are simply special kinds of nodes, just as types are simply special kinds of categories. In summary, then, given any tree such as our sample tree, we can read the symbols on the tree (e.g., ‘*A*’, ‘*B*’, etc.) either as absolute terms or as categories. There is a one-one correspondence (by definition) between absolute terms and
categories (and thus, between bottom terms and types). This means that our tree, when read in one way, is a map of sense relations among terms, and, when read in another way, is a map of relations among categories. The task of establishing an isomorphism between the linguistic and ontological structures appears to be achieved. However, an important question remains to be answered. What kind of relation among categories is represented on the ontological tree? When the tree is read as a language tree, the line segments between pairs of nodes represent U-relatedness. What do these line segments represent when the nodes are taken as locations for categories? In order to establish a complete isomorphism between language and ontology, Sommers had to find some principle that would allow one to “translate” U-relatedness into some relation between pairs of categories. This principle is the requisite principle of isomorphism between language and ontology. He wrote:

There are two sorts of categories which are of major importance to the ontologist. These are the categories which are all inclusive, containing all others as sub-categories, and those that are completely exclusive, containing no sub-categories at all. (1963, 354)

If there is such an all-inclusive category, then the terms that determine it (i.e., the terms at the apex node of the language tree) must be U-related to every other term in the language. If there are such all-exclusive categories, then the terms that determine them (i.e., the terms at the bottom nodes of the language tree) must be U-related to a finite set of other terms, all of which are mutually U-related. In other words, the bottom node terms must belong to only one B-type. The structural principle that Sommers offered is the Law of Categorial Inclusion. This law, along with its derivative theorems, satisfies equally well his requirement for a law that establishes an isomorphism between linguistic and ontological structures and his requirement for a replacement for the Transitivity Rule. He initially presented the law unformalized.

The existence of dominating categories and categories of the lowest level is however assured by a fundamental law which governs all categories. This law can be derived from a “syntactical” rule governing the distribution of category correct
statements in a natural language. Equally, the rule can be derived from the law. Indeed the law of categories and the law governing the distribution of category mistakes (N-related pairs) are two expressions of a structural isomorphism which holds between ‘language’ and ‘ontology’. In its categorial form the structural principle may be thus stated: If C1 and C2 are any two categories, then either C1 and C2 have no members in common or C1 is included in C2 or C2 is included in C1. Given this law and the already noted fact that the categories … defined by the predicates [terms] of any natural language are finite in number, it will follow that there must be one category that includes all others and several that include no others. I shall call this the law of categorial inclusion since it states that whenever two categories have some common membership one of the two is included in the other. (1963, 355)

I have quoted at length because several points made in this passage are extremely important for the development of Sommers’ program. His answer to our question above was, in effect, that corresponding to the U-relation on the language tree is the category inclusion relation on the ontological tree. What is more, one version of this law is just the replacement required for the rejected Transitivity Rule. Here also is a convenient place to re-emphasize the correctness of Sommers’ claim that, given this law and the fact that the categories determined by a natural language are finite, it follows that there must be one all-inclusive category and several all-exclusive categories (viz., types). This simply means that since the set of categories is finite, there must be both a lower and an upper limit to the number of possible inclusion combinations between any two members of the set. The main body of Sommers’ theory of structural ontology rests on the Law of Categorial Inclusion and its derivative theorems. Called T-rules, Sommers offered three (T.1, T.2, T.3). T.1, the Law of Categorial Inclusion, can be formulated in three equivalent ways:

T.1
U(PQ) ≡ ((*P* ⊆ *Q*) ∨ (*Q* ⊆ *P*))
U(PQ) ≡ (x)(*P*x ⊃ *Q*x) ∨ (x)(*Q*x ⊃ *P*x)
U(PQ) ≡ (x)((x ∈ *P*) ⊃ (x ∈ *Q*)) ∨ (x)((x ∈ *Q*) ⊃ (x ∈ *P*))

Note that expressions such as ‘*P*’ are used for both terms and the categories determined by them; ‘∨’ is used inclusively; ‘⊆’ is used for simple, rather than proper, inclusion. The first theorem derived from T.1 holds for any three terms, P, Q and R (concatenation represents conjunction):

T.2
U(PQ)U(QR)N(PR) ≡ (*P* ⊆ *Q*)(*R* ⊆ *Q*)(*Q* ⊄ *P*)(*Q* ⊄ *R*)(*P* ⊄ *R*)(*R* ⊄ *P*)

According to Sommers (here I replace his overscoring with ‘-’ for negation):

The importance of T.2 lies in its clear statement of an equivalence between syntactical and categorical relations, between sense and non-sense gotten by conjoining terms, and the inclusions among the sets of things they span. From another side, T.2 states a simple criterion which validates the subject-predicate distinction. To see this clearly, let us substitute the symbol “7” wherever we have the inclusion symbol “⊆” and let us interpret “Q7P” to mean: “of (what is) P it is significant to say that it is Q”. Or – what is the same thing – “that it is Q” is predicable of (what is) P. We now have

U(XY)U(XZ)N(YZ) ≡ (X7Y)(X7Z)-(Y7X)-(Z7X)-(Y7Z)-(Z7Y)
This tells us that the middle term X of two significant pairs (sentences) is always the predicate with respect to the two N-related terms Y and Z of a triad, U(XY)U(XZ)N(YZ). The relational expression “is predicable of” is seen to be isomorphic with the relational expression “contains”. Both the relations of predication and of containment are transitive and nonsymmetrical. In this respect the predicative tie between terms differs from the tie of significance of U-relatedness since the U-relation is, as we have seen, nontransitive and symmetrical. (1963, 356) Thus the Law of Categorial Inclusion, by virtue of T.2, is a replacement for the Transitivity Rule in that it retains transitivity for type
sameness while rejecting transitivity for U-relatedness. Moreover, predication is nonsymmetrical and, according to Sommers, it demonstrates the need for the traditional subject-predicate distinction while serving as a test for finding which term of a pair is the predicate (this should bring to mind Aristotle’s rule concerning “natural” predicates). It is obvious that T.2 is a key theorem in Sommers’ program. The Law of Categorial Inclusion, along with its derivative theorems, is meant to serve not only as an expression of structural isomorphism between language and ontology but also as a replacement for the Transitivity Rule. We have seen that the Law, by virtue of T.2, does replace the Transitivity Rule, insofar as it retains transitivity for type sameness while rejecting transitivity for U-relatedness. But, for the Law to serve as a complete replacement for the Rule, it must be able to be used to enforce only necessary ambiguity. This is part of the role filled by the second derived theorem, T.3.

T.3
-(U(PQ)U(QR)U(PS)N(PR)N(QS))
What T.3 says is that any term that is U-related to two N-related terms is U-related to any term that is U-related to either of those terms. For example, any term, Q, that is U-related to two N-related terms, P and R, is also U-related to any term, S, that is U-related to either P or R. So, if S is U-related to P, then Q and S cannot be N-related. In other words, it can’t ever be the case that U(PQ)U(QR)U(PS)N(PR)N(QS). This theorem is called the “tree rule” since it allows one to locate any term on a language tree. If one examines the tree for the values U(PQ)U(QR)U(PS)N(PR)N(QS), one finds that they cannot all occur simultaneously. Since U(PQ), U(QR), and N(PR), the tree initially looks like this:

        *Q*
       /    \
    *P*      *R*

But now if also U(PS), then either

        *Q*
       /    \
    *P*      *R*
     /
  *S*

or

  *S*       *Q*
     \     /    \
      *P*        *R*
If one can go from one term to another by following a continuously downward or upward path, then the two terms are U-related. Since N(QS), the first alternative tree cannot be correct, leaving the second alternative as the only possible one for the given values. However, Sommers said in his informal introduction of T.1 that if two categories have a common membership one must be included in the other. Looking at the second alternative tree, the Law of Categorial Inclusion tells us (reading the absolute terms now as their corresponding categories) that either (i) *S* and *Q* are included in *P* or (ii) *P* is included in both *S* and *Q*. But (i) cannot hold since if *Q* is included in *P* and *R* is included in *Q* then *R* is included in *P*, which is impossible since N(PR); and if *Q* is included in *P* and *Q* is included in *R* then *Q* constitutes a common membership for *P* and *R* such that one must include the other, which again is impossible since N(PR). Moreover, (ii) cannot hold since if *P* is included in both *Q* and *S* it constitutes a common membership for *Q* and *S*, so either *Q* is included in *S* or *S* is included in *Q*, which is likewise impossible since N(QS). Thus there seems to be good reason for wanting T.3 as a rule and showing that it can be derived from the Law (T.1). T.2 tells one how to translate U(PQ)U(QR)N(PR) into a statement about category inclusion. If, once more, we construct a tree for these three values, we obtain:

        *Q*
       /    \
    *P*      *R*
The important thing about T.2 is that it denies that, in the given case, *Q* is included in either *P* or *R* (i.e., -(*Q* ⊆ *P*) and -(*Q* ⊆ *R*)). For if *Q* were included in *P* and in *R*, then *P* and *R* would share some common membership, making it the case (contrary to the given values) that at least one of them be included in the other (i.e., U(PR)). Any two terms that are N-related to each other but U-related to a third term must both determine categories that are included in the category determined by the third term. If this third category were included in either one or both of the first two, then the first two terms would be U-related – and this is just what T.2 says. Finally, we could add to these three rules the following new rule, one Sommers suggested in “Predicability” (Sommers 1965, 277):

T.4
(P)(∃x)(x ∈ *P*)
What T.4 says, in effect, is that no category is empty. It turns out that T.4 is not only a reasonable rule to want to preserve, but it is required for the derivations of T.2 and T.1 (I will spare the reader the details of why this is so). Sommers’ Tree Theory, formulated in his papers (especially Sommers 1963) published during the 1960s was impressive, offering a powerful theory for exploring the structure of ontological categories and types. This is illustrated by the fact that both Strawson (Strawson 1967) and more recently Jacquette (Jacquette 2002) have anthologized “Types and Ontology.” We have seen that the main feature that the tree theory displays is an isomorphism between language and ontology. At the time of its construction, Sommers thought of his theory as simply the fulfillment of Russell’s program of formal ontology, which held that “(a) clarification of natural language is ontologically revealing and discriminatory of the sorts of things there are; (b) linguistic structures and ontological structures are isomorphic” (Sommers 1963, 350-351). What the tree theory says is that certain pieces of non-grammatical linguistic information (i.e., facts of language independent of any particular natural language) parallel some pieces of ontological information. Thus, the business of doing ontology can begin at least by looking to the structure of ordinary language. This isomorphism between language and ontology is complete. For every (absolute) term in the language there is a corresponding category. For terms that are categorially synonymous, located at the same node on the
language tree, the categories they determine are identical. So we can read the expressions displayed on a tree either as terms or as names of categories. Moreover, T.1, the Law of Categorial Inclusion, guarantees that the expression *P* _________ *Q* (a pair of terms connected by a line segment) found on a tree can be read either as an expression of a U-relation between two terms or as an expression of a category inclusion relation between two categories. And in Sommers’ discussion of T.2 we learned that the relation of being predicable of corresponds to the relation of inclusion. Any tree can be interpreted as either a language tree (displaying the sense relations among its terms) or as an ontological tree (displaying the inclusion relations among its categories – including types). Rules that express only relations between (senses of) terms might be called sense rules. T.3 is an example of a sense rule. Corresponding to each sense rule will be a rule expressing only relations between categories (or members of categories). Such rules might be called categorial (also category) rules. The Law of Categorial Inclusion is a prime example of such a rule. The categorial rule parallel to T.3 might be Sommers’ rule about “category straddlers.” “Strictly speaking there are no category straddlers” (Sommers 1965, 279-280). In other words, it is not the case that there is a thing which belongs to two categories, neither of which includes the other. Hence, given a tree, if we consider it as a language tree, we can read off T.3. If we consider it as an ontological tree, we can read off the rule about category straddlers. They tell us, in effect, that no tree can contain the following configuration (where the letter variables can be read as either absolute terms or as the categories determined by those terms):

   S        Q
    \      /  \
     P        R
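A brief sketch (again my own, with hypothetical individuals standing in for the things filed under the bottom nodes) shows the Law of Categorial Inclusion, and with it the ban on category straddlers, holding on the earlier partial tree: read each term’s category as the set of individuals it spans, and any two such categories come out either disjoint or nested.

from itertools import combinations

PARENT = {"red": "interesting", "number": "interesting",
          "house": "red", "person": "red",
          "thoughtful": "person", "prime": "number"}

# hypothetical individuals filed under the bottom nodes (the types)
TYPE_MEMBERS = {"house": {"this house"},
                "thoughtful": {"Socrates"},
                "prime": {"the number 7"}}

def descendants(term):
    """The node itself together with every node below it on the tree."""
    below = {term}
    for child, parent in PARENT.items():
        if parent == term:
            below |= descendants(child)
    return below

def category(term):
    """The set of individuals spanned by `term`."""
    members = set()
    for node in descendants(term):
        members |= TYPE_MEMBERS.get(node, set())
    return members

terms = set(PARENT) | set(PARENT.values())
for s, p in combinations(terms, 2):
    cs, cp = category(s), category(p)
    assert cs.isdisjoint(cp) or cs <= cp or cp <= cs   # no category straddlers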
In addition to sense rules and category rules there are rules that actually express the isomorphism between language and ontology. The reason we can “translate” back and forth between the two is that any expression on the tree can be taken either as a term or as a category and every pair of expressions connected by a continuously downward or
upward path of line segments can be taken as exhibiting either a U-relation or an inclusion relation. T.1 and the definition of a category (α-type) are such rules. Rules like these give neither sense information, like T.3, nor categorial information, like the straddler rule; rather they define a connection between terms and categories or between term relations and category relations. Such rules might appropriately be called translation rules. A translation rule tells us something about both language and ontology, about both terms (i.e., senses of terms) and categories (of things). Sommers’ “rule for enforcing ambiguity” is the most important translation rule (I have discussed this rule in many places, most recently in Englebretsen 2005).

Sommers’ Rule for Enforcing Ambiguity
If a, b, and c are any three things and P and Q are predicates [terms] such that it makes sense to predicate P of a and b but not of c and it makes sense to predicate Q of b and c but not of a, then P must be equivocal over a and b or Q must be equivocal over b and c. Conversely, if P and Q are univocal predicates, then there can be no three things such that P applies to a and b but not c while Q applies to b and c but not a. (Sommers 1965, 265-266)

Compare Sommers’ Rule for Enforcing Ambiguity above with Ryle’s Rule for Enforcing Ambiguity (If a given predicate term applies sensibly to two things, then either they belong to the same category or the term is ambiguous). The latter requires that any term that applies sensibly to (spans) a pair of things that are categorially distinct, belong to different categories, must be ambiguous. If, however, the term is not ambiguous but univocal, then the two things must belong to the same category. Ryle’s rule does seem to work in many cases. As we’ve seen, the term ‘rational’ can be sensibly applied to both Socrates and the number 2. But, assuming any ontological theory that countenances any kind of categorial distinctions, surely people and numbers are members of different categories. Consequently, we quite reasonably take ‘rational’ to be ambiguous. The term ‘blue’ is likewise taken to be ambiguous over such categorially different things as material objects, on the one hand, and certain moods and emotional states, on the other. Ryle argued that terms like ‘causes corporeal movement’ and ‘exists’ would have to be ambiguous for Cartesians who believe that minds and bodies constitute different
categories (kinds of substance for Descartes) while allowing such terms to apply to both. Ryle’s argument rested implicitly on his rule. Nonetheless, there are many counter-examples to his rule. Terms such as ‘interesting’ and ‘mentioned by Leibniz’ are sensibly applicable (whether truly or falsely) to both people and numbers, yet most ordinary speakers would balk at construing such terms as ambiguous. Sommers’ Rule for Enforcing Ambiguity is more complex but more subtle than Ryle’s. It is a much improved version of the earlier rule in that it takes into account the kinds of counter-examples that plague Ryle’s version. Look again at Sommers’ rule. We could diagram it like this (where the line segments represent the spanning relation between a term and a thing):

      P          Q
     / \        / \
    a    \     /    c
           b
What Sommers’ Rule for Enforcing Ambiguity does is prohibit such an arrangement. Suppose our two terms are univocal; then b would amount to a common member of the two categories determined by the two terms. Now we know that if two categories share any member in common then at least one must include the other, by the Law of Categorial Inclusion, T.1. However, this would be impossible if the two terms are N-related, which they are. This is because they determine two categories that are such that neither is included in the other, for a isn’t in the category determined by Q and c isn’t in the category determined by P. In short, if two categories do share a common member, then either one shares all the members of the other or at least one of the terms determines more than one category (i.e., is ambiguous). Another way of putting this: there can be no category straddlers, such as b above. Calling either Ryle’s or Sommers’ rule a “rule for enforcing ambiguity” is in some ways misleading. Enforcing ambiguity on terms is only one way to eliminate the dreaded M-configuration (/\/\) diagramed above. There are other ways (as we will soon see). Sommers made it clear that he intended the rule as a tool for determining the categorial coherency, the ontological legitimacy, of any philosophical theory. “Note that the rule for ambiguity does not tell us which of two predicates is ambiguous … we only apply such a rule to check the coherency of philosophical positions”
(Sommers 1965, 266). Similarly, Ryle applied his rule, implicitly, to cast doubt on the categorial coherence of Cartesian dualism. As a kind of test, consider the dualist theory of mind. It appears to countenance the following arrangements of terms and things:

        thinks              weighs 70kg
        /     \              /        \
 René’s mind     René            René’s body

Any theory that permits this kind of M-configuration is categorially incoherent. Such coherency can be regained in several ways (for an attempt to account for all of them see Englebretsen 1975). One or both terms could be taken as ambiguous (i.e., ambiguity could be forced on one or both of them). For example, Strawson (Strawson 1959, 103) briefly considers, but rejects, taking terms such as ‘weighs 70kg’ to be ambiguous over persons and material objects. One could follow Ryle and deny minds; or be an idealist who denies material objects. Or, one could be a dualist and deny that persons are anything more than a composite of minds and bodies – René is not just one thing but two, a composite rather than an individual. In any of these alternatives the M-configuration is avoided by either introducing new elements (e.g., additional term senses) or by eliminating one or more of the individuals (for the materialist: René’s mind, for the idealist: René’s body, for the dualist: René). There are many other examples of theory incoherence being avoided by applying Sommers’ rule. Consider this M arrangement of terms and things:

            heard                          seen
           /     \                        /     \
 the ringing of the bell      the bell      the color of the bell

While Berkeley might have achieved categorial coherence by eliminating material objects such as bells, claiming that they are not single individuals but merely composites of immediate sense impressions such as sounds and colors, others would simply take perception terms to be ambiguous over both material objects and objects of perception. Either way, categorial coherence is regained. As a final example, consider a theory that permits the following M-configuration:
         hot                 democratic
        /    \               /         \
 the planet Venus      Mexico      the United Nations

Perhaps nations are composites of societies and land masses. Soon we will say more about the notion of individual. For now we might keep in mind Aristotle’s idea that a primary substance is an individual thing that is ontologically independent, unique, knowable, etc. At this stage, however, it should already be clear that any category that is not a type will properly include one or more other categories, which, if these are not types, will in turn properly include categories (categories that are not types). Types, in contrast, are categories that have no subcategories. So, types are not constituted by any sub-categories. A type is constituted by the things (individuals) that are spanned by all the terms that determine that type. This means that descending from each bottom node on an ontological tree we could add lines of spanning to each individual constituting that type. The Law of Categorial Inclusion and the Rule for Enforcing Ambiguity will guarantee that no individual will belong to more than one type. During the 1960s and ’70s, Sommers’ Rule for Enforcing Ambiguity was the subject of a large number of critical discussions (many are listed in the Bibliography). Not all of those who sought to reject the rule had a full understanding of it. Some critics failed to see what kind of rule it actually is (for example, Haack 1968 and Reinhardt 1965-66). Others simply failed to recognize the force of the rule. Those in the first group took the rule to be a sense rule rather than a translation rule, mistakenly believing that it is meant as a test for discovering term ambiguity. Needless to say, many terms exhibit ambiguity that is not the result of having that ambiguity forced on them by anything like Sommers’ rule. Pointing to examples of such terms in no way touches Sommers’ rule (or even Ryle’s). The fact is, as Sommers tried to make clear, the rule is a test of the categorial coherence of any theory. Any theory that breaks the rule, i.e., permits the M-configuration involving three individuals and two terms, is categorially flawed and can be rendered coherent by doing any of a number of things – most prominently, enforcing ambiguity on at least one of the terms. In any case, the rule is in no way a rule for discovering which terms of a language are ambiguous. In the second group of critics are those (e.g., Chandler 1968 and van Straaten 1968) who, while recognizing that the rule was
never intended as a rule for discovering term ambiguity, nonetheless failed to appreciate the real force of the rule as a tool for testing the categorial coherence of a theory. This was the result of their failure to count the last sentence of Sommers’ statement of the rule as a necessary part of it. “Conversely, if P and Q are univocal predicates, then there can be no three things a, b, and c such that P applies to a and b but not c while Q applies to b and c but not a” (Sommers 1965, 166). In other words, confronted with the task of rendering categorially coherent a theory that allows the M-configuration, our choice is not simply to make one or both terms ambiguous. As we saw above, we could eliminate one or more of the things, or we could deny of one (or more) of them that it is actually just one thing (an individual), taking it instead to be two things (a composite), or we could choose a combination of such strategies. In summary, for those who would use Sommers’ Rule for Enforcing Ambiguity, the following points must be kept in mind: (a) the rule is not a sense rule for testing or discovering term ambiguity, (b) it is a translation rule for testing theory coherence, and (c) the enforcement of coherence on a theory that is categorially incoherent need not be the result just of enforcing ambiguity on some term.
The rule is the heart of Sommers’ formal ontology. We might call all of the rules that are the concern of the ontologist ontological rules, whether they be sense rules, categorial rules, or translation rules. A formal ontologist can get at the business of doing ontology by looking either to the sense rules or to the category rules, for they are “isomorphic” in the sense described thus far. But as we have seen, if one of the ontologist’s tasks is to ensure theory coherence, then this is best done by using the translation rules (such as the Rule for Enforcing Ambiguity). Actually, the business of the ontologist is, in a way, threefold. One concern is with language and is directed at avoiding nonsense, category mistakes. Another concern is with the arrangements of categories and is directed at avoiding category straddlers. These two tasks collapse into a general concern with theories, directed at avoiding categorial incoherence. Thus, in judging a theory to be categorially incoherent by use of the Rule for Enforcing Ambiguity, the ontologist’s verdict could be taken in either of two ways. Either the guilty theory can be said, in one mode of speaking, to allow some category mistake to be true, or, in another mode, to allow some category straddler into the ontology. A category straddler might be thought of as a category mistake in the material (ontological) mode.
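Since the rule functions as a mechanical test of a theory’s categorial coherence, it can be put in algorithmic form. The following sketch is only an illustration of mine (the representation of a theory as a map from terms to the things they span, and the helper names, are my own, not Sommers’); it searches such a map for the forbidden M-configuration, using the ‘hot’/‘democratic’ example above.

```python
from itertools import combinations

def m_configurations(spans):
    """Search a theory for the forbidden M-configuration.

    `spans` maps each term to the set of things it spans. An
    M-configuration is a pair of terms P, Q and things a, b, c such
    that P spans a and b but not c, while Q spans b and c but not a.
    """
    hits = []
    for P, Q in combinations(spans, 2):
        only_p = spans[P] - spans[Q]   # spanned by P but not Q
        only_q = spans[Q] - spans[P]   # spanned by Q but not P
        shared = spans[P] & spans[Q]   # spanned by both
        if only_p and shared and only_q:
            hits.append((P, next(iter(only_p)), next(iter(shared)),
                         next(iter(only_q)), Q))
    return hits

# The 'hot'/'democratic' example from the text: Mexico straddles the
# categories determined by the two terms, so the theory is incoherent.
theory = {
    "hot": {"the planet Venus", "Mexico"},
    "democratic": {"Mexico", "the United Nations"},
}
for P, a, b, c, Q in m_configurations(theory):
    print(f"M-configuration: {P!r}/{Q!r} over {a}, {b}, {c}")
```

A theory passes the test only when the function returns an empty list; each of the repairs mentioned above (enforcing ambiguity, eliminating a thing, or splitting it into a composite) amounts to editing the map until no such configuration remains.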
b A Note on Vacuousity

How could I seek the empty world again?
    Emily Brontë

I shall softly and suddenly vanish away –
And the notion I cannot endure!
    Lewis Carroll
Sommers gave an analysis of the notion of a vacuous statement (Sommers 1965) that is, I believe, both correct and, for the most part, useful (for more see, e.g., Englebretsen 1972). However, there is at least one kind of vacuous statement that Sommers failed to sufficiently account for, and which, if ignored, could raise problems for his ontological program. He wrote this about vacuousity:

We predicate a term of a thing by affirming or denying it of that thing, and a term is said to be predicable of a thing if and only if either the affirmation or denial is true. For example, the term clean is impredicable of the equator since neither the affirmation ‘is clean’ nor the denial ‘is not clean’ is true of the equator. A statement predicating the affirmation or the denial of the term clean of the equator is a category mistake. And in general, if Pa is a statement in which ‘is P’ is said of a and PNa is a statement in which ‘is not P’ is said of a, then the term P is said to be predicable of a only if either Pa or PNa is true. When Pa and PNa are both false, we may call them vacuous statements. (272-273)

He then went on to mention various species of vacuous statement, each one differing from the others on the basis of the cause of its vacuousity. Category mistakes are vacuous because the subject of predication is not the “sort” or “type” of thing that does or does not satisfy that predicate (i.e., it is not spanned by that predicate term, is not in the category determined by that term). On the other hand, the “paradoxes of predication” (e.g., semantic paradoxes such as The Liar, or Grelling’s) are not vacuous for this reason. These paradoxes can be proven (formally) to be vacuous. Briefly, a paradox of predication is vacuous because it logically implies both the negation of its affirmation as well as the negation of its denial. A
third species of vacuous statements consists of conditionals with false antecedents or vacuous consequents (see Sommers 1964 and 1965). Finally, there are those statements that are vacuous because the subject-term fails to refer. The statements that the present king of France is bald and that the present king of France is not bald are both false, not because the present king of France is not the type of thing that is or fails to be bald, but because ‘the present king of France’ is an expression that fails to refer.
This fourfold classification is insufficient. There are statements that are vacuous not because they are category mistakes, not because they are paradoxical, not because they are conditionals of a certain sort, not because their subject expressions fail to refer. In a footnote to the passage quoted above, Sommers mentioned this other kind of statement, but he failed to admit that such statements are vacuous. He noted:

… if my house is on a hill we do not – unless special explanations are given – either affirm or deny that the house is taller than the hill; nevertheless, taller than the hill is predicable of my house. Similarly, since Socrates never heard of Moses we should not say that he was or was not in awe of Moses. But this does not mean that being in awe of Moses (unlike being in awe of Parmenides) is impredicable of Socrates. The point is that special explanations could be given to free the context for predication in these cases and such explanation would not involve tampering with the sort or type of thing the house or Socrates is. (272)

Sommers was correct about the house example. While we do not ordinarily either affirm or deny taller than the hill of the house, “special explanations could be given to free the context for predication.” In other words, an ordinary context could be found in which we would be willing to make the required predication. But this is not the case for the Socrates example. Let me suggest the following (with which I believe Sommers would agree). Given any statement that is such that we do not ordinarily predicate the given term of the given subject, that statement is vacuous if the only explanation possible for freeing the context for the required predication involves either (1) tampering with the sort or type of thing the subject is, (2) tampering with the logical formulation of the statement, (3) tampering with the truth of its truth-functional constituents, or (4) tampering with the truth of any statement that it presupposes (in the sense in which statements, rather than speakers, presuppose).
The statement about the house on the hill is not vacuous because the freeing of the context for the required predication does not involve any of the alternatives (1) through (4). The situation is different for the statement that Socrates was in awe of Moses. We are unwilling to predicate ‘was in awe of Moses’ of Socrates. Moreover, the only possible way of freeing the context so that we would make such a predication would involve tampering with the truth of a statement presupposed by the given statement. The statement that Socrates was in awe of Moses presupposes the statement that Socrates had heard of, was aware of, Moses, which is false but which we would have to construe as true in order to be willing to either affirm or deny ‘was in awe of Moses’ of Socrates. Clearly, statements like this are vacuous, though they are not category mistakes, semantically paradoxical, or conditionals of a certain kind. Nor are they vacuous because their subject-terms fail to refer. Socrates (unlike the number 2) is indeed the sort of thing that could be in awe of Moses (just as the Liar statement is the sort of thing that could be false or just as the present king of France is the sort of thing that could be bald).
What I am proposing is a modification of Sommers’ fourfold classification of vacuous statements by making the fourth kind of vacuousity more general. His fourth kind consists of just those statements that presuppose some existence statement that happens to be false. My fourth kind consists of just those statements that presuppose any statement (e.g., that Socrates was aware of Moses, that there is a present king of France, etc.) that is false. Later we will see that, quite generally, a statement presupposes another statement if and only if both it and its logical contrary entail that other statement. A statement is vacuous if and only if any statement it presupposes is false. (For more see Englebretsen 1972, 1973, 1975, 1983, 1984a.)
What does the recognition of vacuousity for such statements have to do with Sommers’ ontological program? Consider again his definition of a category (α-type): “(A predicate will be said to span a thing if it is predicated of it either truly or falsely but not absurdly.) Thus an α-type may be defined as the set of all and only those things that are spanned by some (monadic) predicate” (Sommers 1963, 329). Now consider the predicate expression ‘was in awe of Moses’. The category that is the set of all and only those things spanned by this term is just the set of all and only persons, since only of persons can ‘was in awe of Moses’ be sensibly, non-absurdly, affirmed or denied. But is ‘was in awe of Moses’ affirmed or denied of Socrates non-absurdly? From what Sommers said concerning
the vacuousity of statements, it would seem that Socrates cannot belong to the category consisting of persons. Surely any ontological theory that would allow this is quite mistaken. Following my proposal, it would be better to define spanning in this way: A term will be said to span a thing if and only if it (or its logical contrary) can be affirmed of that thing truly or falsely but not category mistakenly. In effect, this says: A term P spans a thing x just in case ‘x is P’ and ‘x is nonP’ are not category mistakes. Notice that saying that ‘x is P’ is not a category mistake does not entail that P is predicable of x. ‘Was in awe of Moses’ spans Socrates, since the statement that Socrates was in awe of Moses, while vacuous (because it presupposes what is false – that Socrates was aware of Moses), is not a category mistake. The result is that spanning and predicability are distinct relations. If P is predicable of x, then P spans x, but the converse does not hold, since there are vacuous statements that are not category mistakes. Predicability might be viewed as a special kind of spanning relation. P spans x just in case ‘x is P’ is not a category mistake. P is predicable of x just in case ‘x is P’ is not a category mistake and not vacuous in any other way.
In light of this distinction some modifications are in order for Sommers’ theory thus far. First of all, in the statement of the Rule for Enforcing Ambiguity, the phrase “it makes sense to predicate … of” must be read as “spans.” Secondly, Sommers’ second formulation of T.1 (the Law of Categorial Inclusion) must be rejected. Consider the following: Sommers recognized at least three kinds of “predicability relations”: (1) the way in which two terms are such that at least one is predicable of the other (i.e., the U-relation), (2) the way in which a term is said to be predicable of a thing (Sommers 1965, 272), and (3) the way in which a term is said to be predicable of another term (275). To this list we have added (4) spanning (as different from (2)). My contention is that (1) and (3) are definable in terms of (4), but (2) is not so definable. Thus we have:
a)  U(PQ) =df P is predicable of Q or Q is predicable of P
b)  P is predicable of Q =df (x)(Q spans x ⊃ P spans x)
c)  P spans x =df ‘x is P’ (and ‘x is nonP’) is category correct
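Definitions a)–c) are purely extensional and so can be read almost directly as a small program. The sketch below is a toy rendering of my own (the Python encoding, the sample spanning relation, and the function names are mine, not part of Sommers’ apparatus): spanning between terms and things is taken as given data, and term-to-term predicability and the U-relation are derived from it exactly as in b) and a).

```python
def spans(term, thing, spanning):
    """c) P spans x iff 'x is P' (and 'x is nonP') is category correct.
    In this toy, the spanning relation is simply supplied as data."""
    return thing in spanning.get(term, set())

def predicable_of(P, Q, spanning, domain):
    """b) P is predicable of Q iff everything Q spans is also spanned by P."""
    return all(spans(P, x, spanning) for x in domain if spans(Q, x, spanning))

def U(P, Q, spanning, domain):
    """a) U(PQ) iff P is predicable of Q or Q is predicable of P."""
    return (predicable_of(P, Q, spanning, domain)
            or predicable_of(Q, P, spanning, domain))

# A toy vocabulary: colour terms span houses, arithmetical terms span numbers.
spanning = {
    "red":   {"this house"},
    "blue":  {"this house"},
    "odd":   {"the number 7"},
    "prime": {"the number 7"},
}
domain = {"this house", "the number 7"}
print(U("red", "blue", spanning, domain))   # True: mutually predicable
print(U("odd", "prime", spanning, domain))  # True
print(U("red", "prime", spanning, domain))  # False: the terms are N-related
```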
The second formulation of Sommers’ T.1 is: U(PQ) ≡ (x)(*P*x ⊃ *Q*x) ∨ (x)(*Q*x ⊃ *P*x). But if the spanning/predicability distinction is made, then this formulation cannot be used. From this formulation of T.1 it follows that two terms, P and Q, are N-related just in case there is
something of which P but not Q is predicable and something of which Q but not P is predicable. But, when P is, for example, ‘was in awe of Aristotle’ and Q is ‘was in awe of Moses’, x is Plato, and y is Moses’ brother Aaron, the two terms turn out to be N-related, which is certainly counter-intuitive. It is the notion of spanning, rather than predicability, that is essential to Sommers’ ontological program. One symptom of his failure to see this distinction was his inability to clearly account for those statements that are such that, even though their subjects are spanned by their predicate-terms, they are still vacuous. It’s time now to turn back to the tree theory and to the rules that constitute it. In the process of doing so we will learn more about just what is meant by thing and by absurdity (category mistakenness).
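Since the spanning/predicability distinction does the real work in this section, a final toy illustration may help fix it. In the sketch below everything is stipulated by hand to match the Socrates/Moses example (the helper names and the hard-coded verdicts about category mistakes and presupposition failure are mine); the point is only that a term can span a thing without being predicable of it.

```python
def spans(P, x, category_mistake):
    """P spans x just in case 'x is P' (and 'x is nonP') is not a category mistake."""
    return not category_mistake(P, x)

def predicable(P, x, category_mistake, otherwise_vacuous):
    """P is predicable of x just in case 'x is P' is neither a category
    mistake nor vacuous in any other way (e.g., by presupposition failure)."""
    return spans(P, x, category_mistake) and not otherwise_vacuous(P, x)

# Stipulated verdicts for the example in the text.
category_mistake = lambda P, x: (P, x) == ("was in awe of Moses", "the number 2")
otherwise_vacuous = lambda P, x: (P, x) == ("was in awe of Moses", "Socrates")
# (Socrates never heard of Moses: the presupposition fails.)

P = "was in awe of Moses"
print(spans(P, "Socrates", category_mistake),
      predicable(P, "Socrates", category_mistake, otherwise_vacuous))      # True False
print(spans(P, "the number 2", category_mistake),
      predicable(P, "the number 2", category_mistake, otherwise_vacuous))  # False False
```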
c Levels of Rectitude

One need not agree with the crowd in everything, but one must not desert or subvert the common and received use of words.
    Aristotle
A thing is categorially possible just in case it is an individual. An individual is a thing that is such that all the terms that span it are mutually U-related. We can also say what it means to say that some kind or class of things is categorially possible. D-things (things that are D, things of which D is true) are categorially possible if and only if D spans only individuals. Moreover, DC-things (things that are D and C) are categorially possible if and only if U(DC). Consequently, for example, odd even things are categorially possible – but not red even things. Generally, D-things are categorially possible if and only if D is a term on the language tree, since only terms that span only individuals occur on the tree. There is an obvious objection to all of this. When D is equivalent to an N-related pair (e.g., ‘red number’) then DC-things are categorially possible since C would span at least whatever D spans (viz., nothing), making U(DC). We seem forced to say that for any X, red numbers that are X are categorially possible, which seems absurd. Are terms that are equivalent to an N-related pair of terms U-related to every term?
A term is any expression that can serve as the subject or predicate of a grammatically correct sentence. A predicable term is a term that belongs to the set of mutually connected terms of a language. Such a term will thus be U-related to at least one other term. An impredicable term is a term that is not a predicable term. Things spanned by such impredicable terms are non-individuals (i.e., not in the ontology, in the same way that impredicable terms are not in the language). Only U-related terms can be used to form new compound predicable terms. As we’ve seen, recalling T.4, every term (predicable or not) spans something. So, we can reply to the objection above that it just is not the case that terms like ‘red number’ span nothing – they span a thing, but not an individual. Every predicable term spans some individual. No impredicable term spans any individual. The question of whether or not impredicable terms are U-related to every term arises only when we mistakenly view such terms as determining empty categories, which are thus included in all other categories. T.4 rules this out.
We can now speak more clearly about the notions of thing and individual. We can say that x is a thing if and only if some term spans x. So red numbers are things since they are spanned by ‘red number’, at least. We can say that x is an individual if and only if any pair of terms spanning x are U-related. Since only pairs of predicable terms can ever be U-related, and since only U-related terms can form new compound predicable terms, this means that x is an individual if and only if whatever spans x is a predicable term. Since individuals are things, if x is an individual, then whatever spans x is a predicable term and some term does span x. Whatever is an individual is spanned by some predicable term. Red numbers and red houses are things, since each is spanned by some term or other. But red houses are individuals while red numbers are not. All the terms spanning red houses are mutually U-related (thus no impredicable terms span red houses). In contrast, red numbers are spanned by at least two terms that are not U-related (‘red’ and ‘number’). Since any term formed by two predicable but N-related terms is itself an impredicable term, red numbers are spanned by at least one impredicable term – they are not individuals.
It should be obvious that it is individuals that are of concern to the ontologist. To see more clearly how this fits with the Tree Theory of ontology we need to take a closer look at Sommers’ ideas about the ways language rules are related to one another in what he called “levels of rectitude” (parts of what follows are adapted from Englebretsen 1976).
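The definitions of thing and individual just given are again extensional enough to be mirrored in a few lines of code. This is a toy sketch of my own (the U-relation between terms is simply stipulated here rather than derived from a tree): x counts as a thing if some term spans it, and as an individual only if every pair of terms spanning it is U-related.

```python
from itertools import combinations

def is_thing(x, spanning):
    """x is a thing iff some term spans x."""
    return any(x in things for things in spanning.values())

def is_individual(x, spanning, U):
    """x is an individual iff any pair of terms spanning x is U-related."""
    spanned_by = [t for t, things in spanning.items() if x in things]
    return is_thing(x, spanning) and all(
        frozenset((p, q)) in U for p, q in combinations(spanned_by, 2))

# 'red' and 'house' are U-related; 'red' and 'number' are not.
U = {frozenset(("red", "house"))}
spanning = {
    "red":    {"a red house", "a red number"},
    "house":  {"a red house"},
    "number": {"a red number"},
}
print(is_individual("a red house", spanning, U))   # True
print(is_individual("a red number", spanning, U))  # False: a non-individual
print(is_thing("a red number", spanning))          # True: still a thing
```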
In “The Ordinary Language Tree” (Sommers 1959), Sommers wrote:

The reason one would rule out a sentence like ‘K is tall and not tall’ is not because a category mistake was committed. It is because of other rules than those of sense. In fact, if (T, not T) were a category mistake it would make no sense to call it an inconsistent or self-contradictory sentence. A sentence which is a category mistake cannot get to be contradictory. (181)

This suggests that a sentence that is categorially incorrect is somehow ruled out at a lower, earlier level than one that is logically incorrect. Questions about the logical consistency of a sentence don’t even get raised for a sentence that is already taken to be a category mistake. Sommers then spelled out this suggestion more fully in “Types and Ontology” (Sommers 1963).

A linguistic sequence may be correct or incorrect in different ways. I shall consider three such ways by way of illustrating the general character of clarification. A sequence may be grammatical or ungrammatical, it may be category correct or category mistaken, it may be consistent or inconsistent. We may call these ways of being correct or incorrect “levels of rectitude.” The reason for calling them levels is that a sequence which is incorrect in one way must be correct in other ways and the ways it must be correct are therefore “lower” than, because presupposed by, the way it is incorrect. Also, an incorrect sequence is neither correct nor incorrect with respect to other ways, and these ways are “higher” since they presuppose the rectitude of the sequence. For example, an ungrammatical sentence is not a sentence at all; it cannot therefore make a category mistake. Thus the incorrectness we call a category mistake presupposes the grammaticalness of the sentence. Again, a category mistake is neither consistent nor inconsistent. If I say “his anger was triangular and not triangular” I have not contradicted myself; I have said nothing and retracted nothing. An inconsistent sentence is neither true nor false empirically. Thus, inconsistency as a way of being incorrect presupposes both the grammaticalness and the category correctness of the sequence. Again, empirical falsity presupposes that the sequence is grammatical, category correct, and consistent. In short, any sequence which is incorrect at one level of rectitude must
be correct at all lower levels and is neither correct nor incorrect at any higher level. (348)

We can construct the following “divided line” to illustrate Sommers’ levels of rectitude:

level 3: empirically false
__________________________________________
level 2: inconsistent
__________________________________________
level 1: category mistaken
__________________________________________
level 0: ungrammatical

Formally, if a sequence, S, is incorrect at level n, then S is correct at every level m < n. Rules that govern sequences at level 0, rules for distinguishing sequences that are sentences from those that are not, are grammatical rules. Rules that govern sentences at level 1, rules for distinguishing category mistakes from those that are not, are sense rules (the rules governing the language tree). Rules that govern sentences at level 2, rules for distinguishing inconsistent sentences from those that are consistent, are logical rules. In a way, grammatical, sense, and logical rules are all linguistic rules. If there are rules governing sentences at level 3, rules for distinguishing between empirical truth and falsity, they are not linguistic. The few obvious candidates are the laws of the physical sciences when construed as useful hypotheses rather than as definitions or tautologies (e.g., ‘Nothing is faster than light’, which rules out such sentences as ‘He drove faster than the speed of light’, or ‘Mules are sterile’, which rules out ‘This is the offspring of a mule and a zebra’).
The Tree Theory establishes an isomorphism between our language and ontology. For example, corresponding to the set of categorially synonymous terms located at a given node on the language tree there is a category of individuals on the ontological tree; corresponding to the top node of the language tree, the set of terms spanning everything, there is an all-inclusive category in the ontology; corresponding to each bottom node in the language tree there is a type in the ontology. As well, corresponding
to the U-relations represented on the language tree there are category inclusion relations on the ontological tree. Also, category mistakes correspond to category straddlers (individuals purportedly belonging to pairs of exclusive categories). Now, taking into account the isomorphism between language and ontology, we can extend the idea of levels of rectitude. It can be shown that, by virtue of this isomorphism, just as there are levels of rectitude in the language there are corresponding levels of rectitude in the ontology. In doing so we see that impossible is the ontological correlate of the linguistic incorrect. Corresponding to ungrammatical sequences are things not in the ontology, nothing (i.e., a non-sentence is the linguistic correlate of an ontological non-thing). Just as only sequences correct at level 0, sentences, are candidates for higher levels of linguistic rectitude, only things are candidates for higher levels of ontological rectitude. The important level for the ontologist is that corresponding to level 1. A language tree, by displaying the sense relations among terms, rules out predications between N-related pairs (viz., category mistakes). Analogously, the ontological tree, by displaying the membership and inclusion relations among things and categories of things, rules out category straddlers, non-individuals. Just as ungrammatical sequences are incorrect at level 0 and thus non-things are impossible at that level, category mistakes are incorrect at level 1 and non-individuals are impossible at that same level. Read as a linguistic rule, the rule at level 1 says that the predication between two N-related terms is a category mistake. Read as an ontological rule, it says that a thing cannot belong to two categories neither of which includes the other. Just as such predications are categorially incorrect, such things (non-individuals) are categorially impossible. Just as ‘Some numbers are blue’ is a category mistake, blue numbers are categorially impossible. Logical rules, read as linguistic rules, prevent contradictions. When read as ontological rules they prohibit logically impossible individuals. The logical rule that prohibits ‘This number is both prime and non-prime’ as incorrect at level 2 (i.e., logically incorrect, inconsistent, logically false, contradictory) likewise prohibits, in the ontology, non-prime primes. Finally, when a sentence is ruled as empirically false (say by a law of empirical science) some individual is ruled empirically impossible in the ontology (e.g., faster than light Toyotas and pregnant mules). There is more to say here. Just as the rule of rectitude (a sequence incorrect at level n is correct at every level m < n) holds linguistically, it holds ontologically as
well. A thing impossible at some level is possible at all lower levels and is neither possible nor impossible at any higher level. A logically impossible individual (e.g., a non-prime prime or a 3 meter man less than 2 meters in height) must be categorially possible and not a non-thing. The question of its empirical possibility does not even arise. It might be objected at this point that non-prime primes cannot be possible in any sense. But clearly they are categorially possible. For a thing to be categorially possible it is sufficient that it be an individual, not a category straddler. Thus, there can be no two terms N-related but both spanning it. Whatever spans non-prime primes (numbers) spans primes (numbers). Contradictions, but not category mistakes, can be produced by predicatively tying contrary terms. So, things like non-prime primes, round squares, and men who are simultaneously taller and shorter than 2 meters, are logically impossible but categorially possible. On the other hand, categorially impossible things like blue numbers, valid philosophers, and sad sidewalks are neither logically possible nor logically impossible. We might, then, revise our divided line to show the correspondence between the two sets of levels of rectitude.

level 3: empirically false          empirically impossible
____________________________________________________
level 2: inconsistent               logically impossible
____________________________________________________
level 1: category mistaken          categorially impossible
____________________________________________________
level 0: ungrammatical              non-thing, nothing

We now have a clear way of distinguishing between things like blue numbers, square circles, and faster than light Toyotas. The differences lie in the ways we can legitimately talk about such things. Talk about blue numbers must involve category mistakenness. Talk about square circles must involve contradiction, but need not involve category mistakenness. Talk about faster than light Toyotas must involve (now) scientific implausibility, empirical impossibility, but need not involve either contradiction or category mistakenness. Sommers’ claim was that it is the sense relations among terms that are of interest to the ontologist. Consequently, it is individuals, categorially possible things, and the categories and types to which they belong, that the
ontologist talks about. We can even say that while the scientist’s level of inquiry is at level 3, the logician’s at level 2, and the grammarian’s at level 0, the ontologist’s level of inquiry is at level 1. It is this level of inquiry, between logic and grammar, that has so often been ignored by philosophers.
But why has this level been so generally ignored? There are at least two reasons. We have seen that the ontologist’s task is in a sense threefold. Concerned with language, it is directed at the avoidance of category mistakes; concerned with ontology, it is directed at the avoidance of category straddlers. These two concerns collapse into a third, over-all concern directed at the avoidance of incoherent theories. Traditionally, the concern with category mistakes has been delegated to grammarians and interested philosophers. Grammarians too often found their tools inadequate for the job of describing and explicating the nature of such mistakes, and philosophers too often were satisfied with discovering ad hoc ways of restricting and avoiding them. So the avoidance of category straddlers, and in general, the avoidance of theory incoherence, was taken to fall within the domain of logic. The logician’s inevitable judgment on category straddlers and theories that allow them was ‘logically impossible’ and ‘inconsistent’. Yet the level of inquiry between grammar and logic needs to be recognized. It is at this level that a theory as a whole is judged to be ontologically coherent or not.
As we saw earlier, corresponding to each sense rule and its parallel categorial rule there is a translation rule (e.g., T.3, a sense rule, the Law of Categorial Inclusion, a categorial rule, and the Rule for Enforcing Ambiguity, a translation rule). Carrying out further our augmentation of the notion of levels of rectitude, we can exhibit corresponding levels of rectitude for translation rules. Thus, for example, the Rule for Enforcing Ambiguity is a translation rule at level 1. The importance of such translation rules is that it is a translation rule that is most often used to rule on a theory’s ontological coherence. Likewise, a kind of “logical rule for enforcing ambiguity” can be used at level 2 to judge a theory’s logical consistency. Thus one way of rendering a theory that permits an assertion such as ‘It is raining here now but it isn’t raining here now’ logically consistent is to enforce ambiguity on some term, say ‘raining’, so that the assertion is interpreted as ‘It is raining (some moisture is falling) here now but it isn’t raining (large drops of moisture are steadily falling) here now’. In fact, one can argue, as Sommers did (Sommers 1963, 349-50), that language rules in general can be used quite naturally to enforce term ambiguity.
Any language rule, when it is violated, can be used in one of two ways. (1) We can throw out the offending sequence, or (2) we can use the rule to introduce ambiguity in such a way that the rule is no longer violated by the sequence. In artificial languages the rules are used only in the first way; in natural languages we often use the rule to “clarify.” It is sometimes said by those favoring constructed languages that the natural language is shot through with ambiguity because it is anarchical, not governed by rules. The opposite is true; the ambiguity is a product of the rules. It is due to their satisfaction. (350)

We can now depict, as we did for linguistic and ontological rules, levels of rectitude for theories. Again, an analogous rule of rectitude will hold. Any theory failing at level n must be successful at every level m < n and neither successful nor unsuccessful at any level m > n. Our final version of the divided line, then, will look like this:

                 Linguistic Sequences        Ontological Entities       Theories
_______________________________________________________________
level 3:   empirically implausible     empirically impossible     implausible
_______________________________________________________________
level 2:   inconsistent                logically impossible       inconsistent
_______________________________________________________________
level 1:   category mistaken           categorially impossible    incoherent
_______________________________________________________________
level 0:   ungrammatical               non-thing, nothing         non-theory

An incoherent theory succeeds in being a theory but is neither consistent nor inconsistent, neither plausible nor implausible. Theories that are inconsistent, or consistent but implausible, or those that are plausible must be coherent. In other words, any theory that aims at plausibility, or even just consistency, must be imbedded in some coherent theory.
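The rule of rectitude governing all three columns is itself mechanical enough to be stated as a small procedure. The sketch below is only an illustration of mine (the function and its arguments are invented for the purpose): given the lowest level at which a sequence, entity, or theory fails, it reports the status at every level, correct below the failure, incorrect at it, and neither correct nor incorrect above it.

```python
def rectitude_profile(failure_level, top_level=3):
    """Status at each level 0..top_level, given the lowest level at which
    the sequence, entity, or theory fails."""
    status = {}
    for n in range(top_level + 1):
        if n < failure_level:
            status[n] = "correct"
        elif n == failure_level:
            status[n] = "incorrect"
        else:
            status[n] = "neither correct nor incorrect"
    return status

# 'Some numbers are blue' first fails at level 1: grammatical at level 0,
# category mistaken at level 1, and neither consistent nor inconsistent,
# neither plausible nor implausible, at levels 2 and 3.
print(rectitude_profile(1))
```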
One measure of the importance of Sommers’ theory is that much of philosophy and science is the devising of theories about one sort of thing or another, theories meant to be consistent, and usually, plausible. If a theorist sets out to check a theory at any level above 0 he or she must be aware of the ontology in which it is imbedded. The essential first step here is to make the theory coherent by guaranteeing that it allow for only individuals (things like square circles and pregnant mules will be ruled out at higher levels). This is accomplished by obeying the Rule for Enforcing Ambiguity. The importance of such an ontology for philosophy cannot be exaggerated. If every theory is imbedded in an ontology, and if philosophy and science involve, inter alia, the formation and examination of various kinds of theories, then a coherent ontology must be the essential first step.
C Bearing Fruit

They have been at a great feast of languages, and stolen the scraps.
    Shakespeare
Sommers claimed that he wanted to reinstate at least a part of the “old Russell program for an ontology” (Sommers 1963, 327). He held that the question of formal ontology is ‘What sorts of things are there?’, a question whose answer is determined by the structure of language. Where he departed from Russell was in his conviction that natural language has a formal structure and that it is that structure that properly serves the program of formal ontology. Russell had tried to impose the structure of a “logically corrected” language onto natural language. As we’ve seen, Sommers’ own program of formal ontology, the “science of categories” (Sommers 1963, 351), sticks to the semantic relations among terms of natural language and establishes a thorough isomorphism between the structure determined by those relations and the ontological structure determined by inclusion relations among categories of things. (For a critique of Sommers’ aim see Nelson 1964.)
We have also seen that this program of formal ontology was presaged by Aristotle’s idea that there are logical and semantic relations among terms that correspond to relations among things and their properties. In Aristotle’s case, these semantic relations concerned natural, as opposed to accidental, non-natural, predicability relations between pairs of terms. Both Aristotle and Sommers held that natural language has a formal, rule-governed structure, one that could be described as a hierarchical tree. For Aristotle, this structure can be said to be generated by Aristotle’s Tree Rule (If B and C are mutually impredicable terms, and A is predicable of both, then A is naturally predicable of B and of C and B and C are not naturally predicable of A), along with Aristotle’s Transitivity Corollary (Any term naturally predicable of a second term is naturally predicable of any term that second term is naturally predicable of). Nevertheless, Sommers’ program differed from Aristotle’s in many ways. Sommers replaced Aristotle’s use of the natural/accidental predicability relation with the U-/N-relation between terms and the spanning relation between terms and things. As well, he replaced Aristotle’s tree-generating rule and corollary with his own tree rules.
According to the type theory due to Russell, two things that are of the same type as a third thing are themselves of the same type. This means that type sameness is transitive. According to Ryle, two terms that can be sensibly applied to a given thing are such that a sensible, category correct, sentence can be constructed using those terms as subject and predicate. This means that term-type sameness is transitive. What Sommers showed was that these two notions of type are insufficient. There are four kinds of types: two types of terms (A-types and B-types) and two types of things (α-types, i.e., categories, and β-types, i.e., types proper). Russell recognized only the third of these, while Ryle recognized only the first. Those two are indeed transitive regarding membership (the other two are not). Sommers spelled out in detail these differences with Russell and Ryle (Sommers 1963, 328-331).
Sommers’ Tree Theory preserves the many valuable insights that have been made available by its predecessors. But it goes far beyond anything anyone else has done in advancing a clear, well-articulated version of formal ontology based on the connection between the terms of natural language and the things that constitute the categories of ontology. Still, it’s just a theory. Two questions need to be asked about any theory (whatever it is about). (1) Is there any non-theoretical, perhaps empirical (not just historical), justification for any of it? (2) Does it offer theoretical
potential and have valuable theoretical consequences? If linguistic rules are actually rules that we apply to discriminate between the grammatical and the ungrammatical, between the category mistaken and the category correct, between the logically admissible and the logically inadmissible, how did we come by them? Did we learn them from others? Or, were we somehow gifted with them at birth? And in either case, is it really the case that we (normal users of natural language) do indeed employ such rules?
Let’s begin with grammar. Grammar is what you know when you know that a given sequence of sounds or marks is a meaningful expression (term or sentence) of a language you understand. You and I both know that 1, but not 2 or 3, is a grammatically correct sentence of English because we both have knowledge of the rudiments of English grammar.
1   Today the sun is obscured by clouds.
2   Clouds by the is today sun obscured.
3   Je me souviens.
If I happen to understand French, then my knowledge of French grammar accounts for my ability to judge that 3 is a grammatically correct sentence of French. To have knowledge of grammar is to have a kind of linguistic ability. Where did we get such knowledge, such ability? Our knowledge about our native language(s) is different in important ways from our knowledge about languages we’ve learned after early childhood. First of all, notice that our knowledge about what is or is not a grammatical sequence is not the result of our having memorized all the correct strings (as someone might memorize all the national capitals of Europe). We can tell whether or not a sequence is grammatical in our native language even if we’ve never seen or heard it before. What we know when we know a grammar are not lists of grammatically correct sequences but rules for generating such sequences. Moreover, even though some exposure to the language is essential to trigger grammatical knowledge, such knowledge is, for the most part, quite independent of such exposure. The consensus among most experts is that grammatical knowledge is an innate, species-specific ability common to all normal humans. It is an ability to form hypotheses about the acceptability of linguistic sequences and to modify those hypotheses in light of exposure to the linguistic behavior of other language users. A young child’s innate drive to find regularity, rule satisfaction, in her experience leads to such tentative hypotheses, guesses,
as that English speakers form the past tense of verbs by adding ‘ed’ to the “normal” verb (i.e., the present tense version). The idea is that the capacity to form hypothetical, tentative grammatical rules in reaction to (so far) limited language exposure explains how we learn our native language so quickly and easily, without special instruction, and why we seem, for the most part, quite unaware (until given special additional training) of our own knowledge of the grammatical rules governing our (spoken or written) expressions. This all contrasts sharply with our learning of foreign languages, juggling, chemistry, or the capitals of Europe. It must be emphasized that our innate grammatical knowledge is not knowledge of each specific rule governing expressions in our native language. It is an innate capacity, an innate ability and inclination, to form rules. As it happens, there is good evidence that there are some rules of grammar that are universal, not language-specific, common to all known languages. Such rules constitute a “universal grammar.” It is that grammar that is said to be known by all speakers. Indeed, the role these rules play in our linguistic knowledge is far greater than that played by the specific grammar rules of any particular native language.
Our linguistic knowledge is not confined to our tacit, innate knowledge of rules of grammar. We not only know from a very early age which sequences are or are not grammatical (even if only by virtue of some tentative hypothesis), we also have knowledge about the sound system of our (perhaps any) human language. There seem to be universal rules governing what sounds and combinations of sounds can or cannot be constituents of spoken languages. This knowledge, too, seems to be tacit and universal. Moreover, we also have such knowledge about the senses and combinations of senses of linguistic expressions. For example, we have early, tacit, and universally shared (among English speakers) knowledge that ‘dog’ and ‘cat’ have different senses, that the difference in sense between ‘red’ and ‘blue’ is not of the same order as that between ‘red’ and ‘flying’, that the sense of ‘male’ is part of the sense of ‘boy’, that ideas can’t sensibly be said to be green, that many expressions can have a plurality of senses, and that many expressions can share a sense in common. Our tacit linguistic knowledge is an ability and inclination to form rules – rules of grammar (syntax), rules of sound (phonetics), and rules of sense (semantics). As well, it is a knowledge of logic, a capacity to form rules governing judgments of consistency and entailment. Logical knowledge is linguistic knowledge, parasitic on grammatical knowledge. Our unconscious ability to formulate and use rules of logic must await our
ability to use other linguistic rules, especially rules of grammar and of sense. But it need not wait long. There is a widespread myth that logic is the proper province of mature, specially trained adults. Nevertheless, basic logical ability, the ability to reckon logically at all, seems innate. There’s no denying that our logical knowledge (our ability to reckon logically, not our understanding of abstract systems of logic) can be greatly enhanced by special training. But this is true of any native ability, including our grammatical, musical, mathematical, and motor-sensory abilities. There are in fact good reasons to believe that our basic ability to engage in simple logical reckoning is indeed innate. (For more on these see Sommers 2002, 2008, 2008a.) Lance Rips, one of many psychologists who have investigated reasoning as a cognitive ability, has written this:

The innateness of basic logical abilities is not a bitter pill, however. The abilities are exactly the sort of things one might expect to be innate, if anything is, given the central role of reasoning in cognition and learning … the presence of innate logical abilities doesn’t imply that people never make mistakes in deduction, or that they are incapable of improving their inference skills, any more than the presence of innate grammatical abilities implies that people never make grammatical errors, or that they can’t improve their grammar through learning. (Rips, 1994, 375)

A growing number of psychologists and linguists have come to adopt this notion that logical abilities are innate (“logical nativism”). Stephen Crain and Drew Khlentzos, for example, have claimed the following:

At present, we see no plausible alternative to logical nativism. Empirical evidence from child language (including 2-year-old children) and cross-linguistic research (from typologically different languages) supports logical nativism, and several a priori arguments provide additional grounding. (Crain and Khlentzos 2008, 53)

More recently they have concluded that “children do not learn logic – it comes naturally to them” (Crain and Khlentzos 2010, 30). In the development of his own renewed term logic, Sommers has argued much the same point, citing the early appearance of reasoning skills
in children and the rapidity with which most of us make simple inferences. All of this strongly suggests that logic, like grammar, ought to lend itself to an examination of “ratiocination in an empirical spirit” (Sommers 2008, 122; see also Sommers 2005a, 117-118).

In the course of becoming rational animals, human beings have had eons to develop language and to hit upon a method of deductive reasoning with the sentences of their natural languages. (Sommers 2008a, 5)

Modern thinkers regard logic as a purely formal discipline like number theory, and not to be confused with any empirical discipline such as cognitive psychology, which may seek to characterize how people actually reason. Opposed to this is the traditional view that even a formal logic can be cognitively veridical – descriptive of procedures people actually follow in arriving at their deductive judgments (logic as Laws of Thought). (Sommers 2008, 115)

The phenomenon of discursive ratiocination thus challenges us to give empirically correct accounts of our intuitive, i.e., unconscious, mode of reckoning. (Sommers 2008, 116)

If it makes sense to engage in empirical investigations of our grammatical and logical skills, does it likewise make sense to attempt an empirical investigation of our abilities to keep track of sense relations among terms, avoid making category mistakes, etc.? Linguists, following the pioneering lead of Noam Chomsky, established the view of linguistic knowledge broadly sketched above. The theories of formal ontology from Aristotle to Sommers highlight and exploit one element of language – sense rules, level 1 rules of rectitude. The first of our two questions was, in effect: Are there any non-theoretical, perhaps empirical, justifications for accepting anything like the Tree Theory? More particularly, we might ask if there is any evidence that we actually speak and think in the ways that theory posits. The cognitive psychologist Frank Keil set himself the task of answering that very question. (See especially Keil 1979, 1981, 1981a, 1983, 1989, 2005, and Keil and Kelly 1986; also see Rakison 2003.) In Semantic and Conceptual Development: An Ontological Perspective (Keil 1979), Keil set out the results of his investigations aimed
at revealing what cognitive constraints apply to our “ontological knowledge,” our “conception of the basic categories of existence, of what sorts of things there are” (1). Keil began with the fact that not all term pairs can be tied in a meaningful predication (some sequences are senseless, category mistaken). He also adopted Sommers’ notion that predicability relations in natural language (spanning, U-relatedness, N-relatedness) reflect our ontological knowledge. In particular, following Sommers, he argued that our language and our ontology are structurally isomorphic and that they are both constrained jointly by a prohibition against the M-configuration. “[T]he M constraint becomes a necessary condition for natural human concepts” (22). Thus far, the theory built by Sommers and adopted by Keil, the tree theory, is just that, a theory. A theory is acceptable (given its consistency, etc.) only insofar as it can provide an adequate and acceptable explanation of certain phenomena, just in case it satisfies an adequate empirical evaluation. For Keil, this evaluation took the form of several carefully designed experiments (including follow-up studies) meant to determine whether persons’ intuitions are consonant with the M constraint (ch. 4). In the follow-up studies, young children, including both English speaking groups and Spanish speaking groups, were surveyed (ch. 7).
Keil drew a number of conclusions from his empirical investigations. Keeping in mind that the subjects of the experiments were unaware of the theory being tested, the most striking of these results was that the subjects’ “patterns of intuitions were in conformity with the theory not only because they converged on the same structure, but also because that structure honored the M constraint” (45). This result held true for children as well as adults, though the children’s trees tended to be smaller (reflecting smaller vocabularies). Keil concluded that (a) “the M constraint is honored at all ages” and (b) “the trees illustrate a specific developmental pattern, namely increasing differentiation and hierarchical organization” (80); children seem to use only a subset of the categories employed by adults (164). As Keil subsequently remarked, the studies show that “all humans tend to share the same ontological tree” (161).

The M constraint appears to be an extremely strong principle that exerts its influence over trees at any state of development, even in pre-schoolers. The general developmental patterns suggested by the initial study appeared repeatedly in the two follow-up studies with elementary-school children. These patterns therefore appear to be quite robust.
In addition to demonstrating the replicability of the previous findings, the three additional studies made clearer the fact that the trees represent more than mere linguistic phenomena. Use of different syntactic formats and a different language had little effect on the patterns of development. Similarly, neither general tree configurations nor particular nodes appeared to be responsible for either the M constraint or the patterns of development. Instead – and this notion was clearly bolstered by the answers to the more extensive probe questions – the trees appear to represent growth of an underlying conceptual knowledge of ontological categories, a knowledge that is intricately linked to language via predicability but which seems to be the original source of predicability phenomena. (117-118)

Later (Keil 1986), replying to criticism of his conclusions concerning the universal application of the M constraint, Keil emphasized much more strongly his conviction “that the constraint arises out of conceptual structure at the level of ontological knowledge … [linguistic structure is a] clue to the structure of underlying ontological knowledge” (174). Even more recently, rehearsing the lessons learned from his experiments of thirty years earlier, Keil has pointed to the fact that work in metaphysics can have a “fruitful albeit unintended influence on psychological research” (Keil 2005, 68). Sommers’ work in formal ontology did not make any psychological predictions, but it did inspire research in cognitive psychology, carrying “with it an implicit experimental research program” (Keil 2005, 68). As Sommers wrote in response:

Keil notes that this kind of philosophical influence on empirical research is often unintended. It was certainly so in my case. To me back in the 1950s and ’60s, the category structure belonged broadly to logic as an a priori science. I did not think of it in terms of actual cognitive processes or developmental psychology. Keil’s empirical research and findings actually surprised me. But they seemed to me to be unimpeachable, and I have since come off my a priori high horse even on so pure a topic as deductive logic. (Sommers 2005a, 217)
Keil’s studies attempting to establish empirical support for Sommers’ theory became the inspiration for subsequent work by both cognitive psychologists and philosophers. For example, the psychologist Daniel Osherson made use of Keil’s studies in order to establish the necessary conditions for the “naturalness” of conceptual structures (Osherson 1978). He conjectured that there must be innate constraints on concepts. “There must be, then, some predisposition to emerge from childhood with one set of concepts rather than another” (263). According to Osherson, Sommers’ tree theory (i) provides at least one necessary condition for the naturalness of a conceptual scheme (viz., the M constraint), (ii) seems free of “unqualified counterexamples” (269, n2), (iii) is empirically falsifiable (269), and (iv) is strongly confirmed by Keil’s empirical studies aimed at verifying “the psychological reality of Sommers style predicability trees” (271). Unfortunately, Osherson’s understanding of Sommers’ theory was compromised by the fact that he had followed Keil’s practice of using the term ‘term’ to denote both an individual and an expression denoting that individual. Like Keil, he reserved ‘predicate’ for linguistic expressions (which Sommers had usually called ‘terms’). One consequence of this is that category straddlers were mistakenly construed by Keil and Osherson as ambiguous expressions (e.g., Osherson’s M- and W-configurations, 268). We will see that Osherson was not alone in being misled in this way.
An ancient but perennial question raised by philosophers has concerned just how kinds (species, genera, natural kinds, categories, types, sorts, etc.) of things are to be understood. We’ve seen that Aristotle and Sommers both held that such understanding must be in terms of how such things are structured. The philosopher Richmond Thomason agreed (Thomason 1969, 97). Calling what Sommers refers to as categories “natural kinds,” Thomason argued that an essential property of tree taxonomies of natural kinds is: “No natural kinds a and b of a taxonomic system overlap unless a